TensorFlow Inference Monitoring and Profiling

See the Quick Start guide for instructions on installing and configuring Graphsignal.

Add the following code around the inference call or code segment you want to monitor. See the API reference for full documentation.

import graphsignal

with graphsignal.start_trace(endpoint='predict', profiler='tensorflow'):
    # function call or code segment
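For context, here is a fuller sketch of how the snippet above might be used end to end. It assumes `graphsignal.configure()` has been called as described in the Quick Start; the API key, deployment name, and toy Keras model are placeholders, not part of the Graphsignal documentation.

```python
import graphsignal
import numpy as np
import tensorflow as tf

# Configure once at application startup; the api_key and
# deployment values below are placeholders.
graphsignal.configure(api_key='my-api-key', deployment='my-model-prod')

# A toy model standing in for a real trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
inputs = np.random.rand(8, 4).astype('float32')

# Each traced inference is recorded under the 'predict' endpoint,
# with the TensorFlow profiler attached.
with graphsignal.start_trace(endpoint='predict', profiler='tensorflow'):
    predictions = model(inputs)
```

Tracing only the inference call itself, rather than surrounding pre- and post-processing, keeps the recorded latency and the TensorFlow profile focused on model execution.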

Model serving

Graphsignal provides built-in support for model serving applications. See the Model Serving guide for more information.
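As an illustration only, a per-request trace could also be added manually inside a serving handler. This sketch uses Flask; the route, handler, and `model_predict` function are assumptions, and if the built-in serving support covered in the Model Serving guide applies to your framework, manual tracing like this may be unnecessary.

```python
import graphsignal
from flask import Flask, jsonify, request

# Placeholder configuration values.
graphsignal.configure(api_key='my-api-key', deployment='serving-prod')

app = Flask(__name__)

def model_predict(inputs):
    # Stand-in for real TensorFlow inference.
    return {'predictions': []}

@app.route('/predict', methods=['POST'])
def predict():
    inputs = request.get_json()
    # Trace each request's inference under the 'predict' endpoint.
    with graphsignal.start_trace(endpoint='predict', profiler='tensorflow'):
        outputs = model_predict(inputs)
    return jsonify(outputs)
```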