TensorFlow Inference Monitoring
See the Quick Start guide for instructions on installing and configuring Graphsignal.
Add the following code around the inference call. See the API reference for full documentation.
with graphsignal.start_trace(endpoint='predict'):
    # function call or code segment
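As a minimal sketch of the pattern, the example below uses a stand-in context manager in place of `graphsignal.start_trace` (the real call reports latency and errors to the Graphsignal dashboard rather than printing) and a plain Python function in place of a TensorFlow model call; both are illustrative assumptions, not part of the Graphsignal API.

```python
import time
from contextlib import contextmanager

# Stand-in for graphsignal.start_trace: times the wrapped block.
# The real Graphsignal tracer records and uploads this data instead.
@contextmanager
def start_trace(endpoint):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        print(f"{endpoint}: {elapsed_ms:.1f} ms")

def predict(inputs):
    # placeholder for model(inputs) in a real TensorFlow application
    return [x * 2 for x in inputs]

# Wrap the inference call so every invocation is traced.
with start_trace(endpoint='predict'):
    result = predict([1, 2, 3])
```

Because `start_trace` is a context manager, the trace is closed (and timing recorded) even if the wrapped inference code raises an exception.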
Model serving
Graphsignal provides built-in support for server applications. See the Model Serving guide for more information.