Inference Profiling

Introduction

Profiling offers deep insights into function and kernel performance, including CPU time and memory usage. Profilers are essential for identifying and optimizing bottlenecks, detecting configuration issues, and reducing Mean Time to Resolution (MTTR). Graphsignal captures profiles at the trace level to enable precise performance analysis. While each profile is recorded during a single trace, it reflects aggregated, process-level activity.

The following profiles are currently supported:

  • profile.cpython - Python wall-time profile recorded using cProfile.
  • profile.pytorch - A set of PyTorch profiles recorded using the PyTorch Profiler.

Configuring profiling

See the Quick Start for instructions on installing and configuring the Graphsignal tracer.
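As a rough sketch of that setup, assuming a standard pip install and that configure() accepts api_key and deployment arguments as described in the Python API documentation (treat these parameter names and values as placeholders):

    pip install graphsignal

    import graphsignal

    # Configure the tracer once at process startup.
    # The API key and deployment name below are placeholders.
    graphsignal.configure(
        api_key='my-api-key',
        deployment='my-model-prod')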

Profiling is enabled automatically for manually traced spans and, where supported, for auto-traced libraries. To control which profiles are recorded, pass include_profiles=[...] with the list of profile names to the configure() or trace() call, as shown in the sketch below. To disable profiling, pass an empty list.
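For example, the following sketch restricts profiling to the Python wall-time profile globally and records PyTorch profiles only for one span. The span name and run_inference() are placeholders, and trace() is assumed to be usable as a context manager as in the Python API documentation:

    import graphsignal

    # Record only the cProfile-based Python profile for all spans;
    # pass include_profiles=[] instead to disable profiling entirely.
    graphsignal.configure(
        api_key='my-api-key',
        deployment='my-model-prod',
        include_profiles=['profile.cpython'])

    # The same option can also be passed per span on trace().
    with graphsignal.trace('generate', include_profiles=['profile.pytorch']):
        run_inference()  # placeholder for the traced operation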

See the Python API documentation for more information.