PyTorch Profiling and Monitoring
See the Quick Start guide for instructions on installing and configuring Graphsignal.
Graphsignal automatically instruments and profiles PyTorch.
What’s captured
- Profiling: common PyTorch operator and module hot paths (for example, torch.nn.Linear.forward, attention layers, distributed collectives, CUDA sync points).
- Metrics: CUDA memory metrics from torch.cuda.memory_stats() (allocated/reserved bytes, peaks, OOMs, utilization, fragmentation) when CUDA is available.
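As a rough illustration (not Graphsignal's internal implementation), counters like the ones listed above can be read directly from torch.cuda.memory_stats(). The sketch below uses real allocator key names and degrades gracefully when torch or CUDA is unavailable:

```python
import importlib.util

# Minimal sketch: reading the CUDA allocator counters behind metrics such as
# allocated/reserved bytes, peaks, and OOM count. Key names are from
# torch.cuda.memory_stats(); this is illustrative only.
if importlib.util.find_spec("torch") is None:
    summary = "torch not installed"
else:
    import torch

    if not torch.cuda.is_available():
        summary = "CUDA not available"
    else:
        stats = torch.cuda.memory_stats()
        summary = (
            f"allocated={stats.get('allocated_bytes.all.current', 0)} "
            f"reserved={stats.get('reserved_bytes.all.current', 0)} "
            f"peak_allocated={stats.get('allocated_bytes.all.peak', 0)} "
            f"ooms={stats.get('num_ooms', 0)}"
        )

print(summary)
```

With Graphsignal configured, these metrics are collected automatically; the snippet is only meant to show where they originate in PyTorch.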
Integrating Graphsignal into your PyTorch application
Call graphsignal.configure(...) in your app and import/use PyTorch normally:
import graphsignal
graphsignal.configure(api_key="my-api-key")
# or set the GRAPHSIGNAL_API_KEY environment variable instead
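Putting the pieces together, here is a minimal sketch of the pattern, guarded so it runs even where graphsignal or torch is not installed ("my-api-key" is a placeholder, and the environment-variable fallback is per the comment above):

```python
import importlib.util
import os

# Placeholder key for illustration; use your real API key in practice.
os.environ.setdefault("GRAPHSIGNAL_API_KEY", "my-api-key")

configured = False
if importlib.util.find_spec("graphsignal") and importlib.util.find_spec("torch"):
    import graphsignal
    import torch

    try:
        # Picks up GRAPHSIGNAL_API_KEY from the environment.
        graphsignal.configure()
        configured = True
    except Exception:
        pass  # sketch only: ignore configuration failures

    # Use PyTorch normally; instrumentation is automatic once configured.
    layer = torch.nn.Linear(4, 2)
    out = layer(torch.randn(1, 4))

print("configured:", configured)
```

No explicit tracing calls are required for the automatic PyTorch instrumentation described above; configure once at startup and use your model as usual.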