Quick Start

Install

Install the Graphsignal agent by running:

pip install graphsignal

Or clone the GitHub repository and install from source:

git clone https://github.com/graphsignal/graphsignal.git
cd graphsignal
python setup.py install

Configure

Configure the Graphsignal agent by specifying your API key directly or via the GRAPHSIGNAL_API_KEY environment variable.

import graphsignal

graphsignal.configure(api_key='my-api-key', deployment='my-model-prod-v1')

To get an API key, sign up for a free account at graphsignal.com. The key can then be found on your account's Settings / API Keys page.

To track deployments, versions, and environments separately, specify the deployment parameter.
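If the key is provided via the environment instead, api_key can be omitted from configure(). A minimal sketch, assuming configure() picks up GRAPHSIGNAL_API_KEY automatically as described above:

# Assumes the key was exported in the environment beforehand, e.g.:
#   export GRAPHSIGNAL_API_KEY=my-api-key
import graphsignal

graphsignal.configure(deployment='my-model-prod-v1')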

Integrate

Use the following examples to integrate the Graphsignal agent into your machine learning application. See the integration documentation and API reference for full details.

The Graphsignal agent is optimized for production. All executions are measured, but only a subset is recorded to keep overhead low.

Monitoring and tracing

To measure and monitor executions, e.g. a single or batch inference or a training step, wrap the code with the start_trace method or use the trace_function decorator.

with graphsignal.start_trace(endpoint='predict'):
    pred = model(x)

@graphsignal.trace_function
def predict(x):
    return model(x)

Other integrations and callbacks are available as well. See integration documentation for more information.

Exception tracking

When using the trace_function decorator, the start_trace method with the with context manager, or callbacks, exceptions are recorded automatically. For other cases, use EndpointTrace.set_exception.
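For example, an exception that is handled inside the trace can still be recorded explicitly. A minimal sketch, assuming set_exception accepts the exception instance; fallback() is just a hypothetical recovery path:

with graphsignal.start_trace(endpoint='predict') as trace:
    try:
        pred = model(x)
    except Exception as exc:
        # The exception is handled here and will not propagate out of the
        # with block, so record it on the trace explicitly.
        trace.set_exception(exc)
        pred = fallback(x)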

Data monitoring

To track data metrics and record data profiles, use the EndpointTrace.set_data method.

with graphsignal.start_trace(endpoint='predict') as trace:
    trace.set_data('input', input_data)

The following data types are currently supported: list, dict, set, tuple, str, bytes, numpy.ndarray, tensorflow.Tensor, torch.Tensor.

No raw data is recorded by the agent; only statistics such as size, shape, or number of missing values are collected.
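For example, both the model input and output can be tracked within the same trace (the 'output' key below is just an illustrative name):

with graphsignal.start_trace(endpoint='predict') as trace:
    trace.set_data('input', input_data)
    preds = model(input_data)
    trace.set_data('output', preds)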

Observe

After everything is set up, log in to Graphsignal to monitor and analyze execution performance and watch for issues.

Examples

Model serving

import graphsignal

graphsignal.configure(api_key='my-api-key', deployment='my-model-prod')

...

def predict(x):
    with graphsignal.start_trace(endpoint='predict'):
        return model(x)

Batch job

import graphsignal

graphsignal.configure(api_key='my-api-key', deployment='my-model')

...

for x in data:
    with graphsignal.start_trace(endpoint='predict', tags=dict(job_id='job1')):
        preds = model(x)

More integration examples are available in the examples repo.

Overhead

The Graphsignal agent is very lightweight. While all executions are measured, only certain executions are recorded, automatically limiting the overhead.

Troubleshooting

To enable debug logging, add debug_mode=True to configure(). If the debug log doesn't provide any hints on how to fix the problem, please report it to our support team via your account.
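For example, using the same configure() call as in the earlier examples with debug logging enabled:

import graphsignal

graphsignal.configure(
    api_key='my-api-key',
    deployment='my-model-prod-v1',
    debug_mode=True)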

In case of connection issues, please make sure outgoing connections to https://agent-api.graphsignal.com are allowed.