Quick Start Guide
Installation
Install the profiler by running:
pip install graphsignal
Or clone and install the GitHub repository:
git clone https://github.com/graphsignal/graphsignal.git
cd graphsignal
python setup.py install
Import the module in your application:
import graphsignal
For GPU profiling, make sure the NVIDIA® CUDA® Profiling Tools Interface (CUPTI) is installed by running:
/sbin/ldconfig -p | grep libcupti
Configuration
Configure the profiler by specifying your API key and workload name directly or via environment variables.
graphsignal.configure(api_key='my_api_key', workload_name='job1')
To get an API key, sign up for a free account at graphsignal.com. The key can then be found in your account's Settings / API Keys page.
workload_name identifies the job, application, or service being profiled.
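For example, to avoid hard-coding credentials, the same values can be read from the environment and passed to configure() explicitly. This is a minimal sketch; the variable names GRAPHSIGNAL_API_KEY and GRAPHSIGNAL_WORKLOAD_NAME are illustrative assumptions, not officially defined names.
import os
import graphsignal
# Read credentials from the environment instead of hard-coding them
# (the variable names are illustrative; any names work because the values
# are passed to configure() explicitly).
graphsignal.configure(
    api_key=os.environ['GRAPHSIGNAL_API_KEY'],
    workload_name=os.environ.get('GRAPHSIGNAL_WORKLOAD_NAME', 'job1'))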
Profiling
Use the following examples to integrate Graphsignal into your machine learning script. See the integration documentation and the profiling API reference for details.
To ensure optimal statistics and low overhead, the profiler automatically profiles only certain training steps and/or predictions.
TensorFlow
from graphsignal.profilers.tensorflow import profile_step
with profile_step():
    # training batch, prediction, etc.
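For instance, in a custom TensorFlow training loop each step can be wrapped in profile_step(). A minimal sketch, assuming model, optimizer, loss_fn and dataset are defined elsewhere in the script:
import tensorflow as tf
from graphsignal.profilers.tensorflow import profile_step
# Minimal sketch: model, optimizer, loss_fn and dataset are assumed
# to be defined elsewhere.
for x, y in dataset:
    with profile_step():
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))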
Keras
from graphsignal.profilers.keras import GraphsignalCallback
model.fit(..., callbacks=[GraphsignalCallback()])
# or model.predict(..., callbacks=[GraphsignalCallback()])
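The callback also works for inference. A minimal sketch, assuming model is a compiled Keras model and X_batch is a batch of input features defined elsewhere:
from graphsignal.profilers.keras import GraphsignalCallback
# Minimal sketch: `model` is a compiled Keras model and `X_batch` a batch
# of input features, both defined elsewhere.
predictions = model.predict(X_batch, callbacks=[GraphsignalCallback()])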
PyTorch
from graphsignal.profilers.pytorch import profile_step
with profile_step():
    # training batch, prediction, etc.
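For instance, in a typical PyTorch training loop each batch can be wrapped in profile_step(). A minimal sketch, assuming model, optimizer, criterion and data_loader are defined elsewhere:
from graphsignal.profilers.pytorch import profile_step
# Minimal sketch: model, optimizer, criterion and data_loader are assumed
# to be defined elsewhere.
for inputs, targets in data_loader:
    with profile_step():
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()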
PyTorch Lightning
from graphsignal.profilers.pytorch_lightning import GraphsignalCallback
trainer = Trainer(..., callbacks=[GraphsignalCallback()])
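A minimal sketch of how the callback plugs into a Lightning Trainer, assuming model is a LightningModule and train_loader a DataLoader defined elsewhere:
from pytorch_lightning import Trainer
from graphsignal.profilers.pytorch_lightning import GraphsignalCallback
# Minimal sketch: `model` is a LightningModule and `train_loader` a DataLoader,
# both defined elsewhere.
trainer = Trainer(max_epochs=1, callbacks=[GraphsignalCallback()])
trainer.fit(model, train_loader)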
Hugging Face
from graphsignal.profilers.huggingface import GraphsignalPTCallback
# or GraphsignalTFCallback for TensorFlow
trainer = Trainer(..., callbacks=[GraphsignalPTCallback()])
# or trainer.add_callback(GraphsignalPTCallback())
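A minimal sketch with the Hugging Face Trainer, assuming model and train_dataset are defined elsewhere:
from transformers import Trainer, TrainingArguments
from graphsignal.profilers.huggingface import GraphsignalPTCallback
# Minimal sketch: `model` and `train_dataset` are assumed to be defined elsewhere.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='output'),
    train_dataset=train_dataset,
    callbacks=[GraphsignalPTCallback()])
trainer.train()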
Other frameworks
ML operation and kernel statistics are not supported by the generic profiler.
from graphsignal.profilers.generic import profile_step
with profile_step():
    # training batch, prediction, etc.
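For example, a framework-agnostic prediction call can be wrapped directly. A minimal sketch, assuming model is any estimator (e.g. scikit-learn) and X_batch a batch of input features defined elsewhere:
from graphsignal.profilers.generic import profile_step
# Minimal sketch: `model` and `X_batch` are assumed to be defined elsewhere,
# e.g. a scikit-learn estimator and a batch of input features.
with profile_step():
    predictions = model.predict(X_batch)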
Distributed workloads
Graphsignal has built-in support for distributed training and inference, e.g. multi-node and multi-GPU training. See the Distributed Workloads section for more information.
Dashboards
After profiling is set up, open Graphsignal to analyze recorded profiles.
Example
# 1. Import Graphsignal modules
import graphsignal
from graphsignal.profilers.keras import GraphsignalCallback
# 2. Configure
graphsignal.configure(api_key='my_key', workload_name='training_example')
....
# 3. Add profiler callback or use profiler API
model.fit(..., callbacks=[GraphsignalCallback()])
More integration examples are available in the examples repo.
Overhead
Although profiling may add some overhead to applications, Graphsignal Profiler only profiles certain steps, e.g. training batches or predictions, automatically limiting the overhead.
Security and Privacy
See Security and Privacy section.
Troubleshooting
To enable debug logging, add debug_mode=True to configure(). If the debug log doesn't give you any hints on how to fix a problem, please report it to our support team via your account.
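For example:
graphsignal.configure(api_key='my_api_key', workload_name='job1', debug_mode=True)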
In case of connection issues, please make sure outgoing connections to https://profile-api.graphsignal.com are allowed.