See the Quick Start Guide for instructions on installing and configuring the profiler.

To profile PyTorch, add the following code around a code step, e.g. a training batch or a prediction. Only some steps will be profiled; the profiler decides which steps to profile to produce optimal statistics with low overhead. See the profiling API reference for full documentation.

Profile using a with context manager:

from graphsignal.profilers.pytorch import profile_step

with profile_step():
    # training batch, prediction, etc.

Profile by calling stop explicitly:

from graphsignal.profilers.pytorch import profile_step

step = profile_step()
# training batch, prediction, etc.
step.stop()


The PyTorch MNIST example illustrates where and how to add the profile_step call.
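To illustrate the pattern, here is a minimal runnable sketch of a training loop with each step wrapped in a profile_step-style context manager. The profile_step below is a no-op stand-in so the snippet runs without Graphsignal or PyTorch installed; in real code, import profile_step from graphsignal.profilers.pytorch and replace train_one_batch with your actual forward/backward pass.

```python
from contextlib import contextmanager

@contextmanager
def profile_step():
    # No-op stand-in for graphsignal.profilers.pytorch.profile_step;
    # the real profiler decides internally which steps to profile.
    yield

def train_one_batch(batch):
    # Placeholder for a real PyTorch training step (forward, loss, backward).
    return sum(batch)

losses = []
for batch in ([1, 2], [3, 4], [5, 6]):
    # Wrap each step; the real profiler samples only some of these calls.
    with profile_step():
        losses.append(train_one_batch(batch))

print(losses)  # [3, 7, 11]
```

The key point is that the context manager wraps one logical step at a time, so the profiler can attribute measurements to individual batches or predictions.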

Distributed workloads

Graphsignal provides built-in support for distributed training and inference. It is only necessary to provide the same run ID to all workers. Refer to the Distributed Workloads section for more information.
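As a sketch of the idea, the launcher can generate a single run ID and share it with every worker process, for example through an environment variable. The GRAPHSIGNAL_RUN_ID variable name and the run_id parameter shown in the comment are assumptions for illustration only; see the Distributed Workloads section for the actual API.

```python
import os
import uuid

# Assumption for illustration: the launcher generates one run ID and
# exports it so that every worker process sees the same value.
run_id = os.environ.get('GRAPHSIGNAL_RUN_ID') or str(uuid.uuid4())

# Each worker would then pass the shared run ID to the profiler
# (hypothetical call; check the Distributed Workloads section for the
# real parameter name):
# graphsignal.configure(api_key='...', run_id=run_id)
```

Generating the ID once and distributing it, rather than generating it per worker, is what lets the profiler group all workers under one run.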