PyTorch Inference Tracing And Monitoring
See the Quick Start guide for instructions on installing and configuring Graphsignal.
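For orientation, a minimal setup typically looks like the sketch below; the parameter values are placeholders and the exact parameter names (api_key, deployment) should be checked against the Quick Start guide for your Graphsignal version.

import graphsignal

# Configure once at application startup; values here are placeholders.
graphsignal.configure(api_key='my-api-key', deployment='my-model-prod')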
Add the following code around inference. See the API reference for full documentation.
import graphsignal

with graphsignal.start_trace('predict'):
    # function call or code segment
Examples
The PyTorch MNIST example illustrates where and how to add the graphsignal.start_trace() call; a condensed sketch of the same placement follows.
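The sketch below traces a single PyTorch forward pass; the model definition and input shape are placeholders rather than the actual MNIST example code.

import torch
import graphsignal

# Placeholder model standing in for the MNIST network; see the
# PyTorch MNIST example for the real definition.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 10)
)
model.eval()

inputs = torch.randn(1, 1, 28, 28)

# Trace the inference step so Graphsignal records it.
with graphsignal.start_trace('predict'):
    with torch.no_grad():
        outputs = model(inputs)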
Model serving
Graphsignal provides built-in support for server applications. See the Model Serving guide for more information; a rough sketch of tracing inside a serving endpoint is shown below.
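The snippet below traces predictions inside a FastAPI endpoint. The choice of FastAPI, the endpoint path, and the placeholder model are assumptions for illustration only; the Model Serving guide describes the integrations Graphsignal actually supports.

from fastapi import FastAPI
import torch
import graphsignal

app = FastAPI()

# Placeholder model; in a real service this would be your trained model.
model = torch.nn.Linear(4, 2)
model.eval()

@app.post('/predict')
def predict(features: list[float]):
    # Trace each request's inference so latency and errors are recorded.
    with graphsignal.start_trace('predict'):
        with torch.no_grad():
            outputs = model(torch.tensor(features).unsqueeze(0))
    return {'prediction': outputs.squeeze(0).tolist()}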