OpenAI API Tracing And Monitoring
See the Quick Start guide for how to install and configure Graphsignal.
Graphsignal automatically instruments and monitors OpenAI libraries. Calls to the following model endpoints are automatically traced and monitored:
- Chat
- Embedding
- Image
- Audio
- Moderation
Graphsignal automatically records prompts, completions, latency, token counts, and costs.
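As a minimal sketch of how this fits together, the snippet below configures the tracer and then makes an ordinary OpenAI chat call that is picked up automatically; the API key, deployment name, model, and prompt are placeholder values.

```python
import graphsignal
from openai import OpenAI

# Configure the tracer once at application startup.
# The api_key and deployment values below are placeholders.
graphsignal.configure(api_key="my-graphsignal-api-key", deployment="chat-app")

client = OpenAI()

# This call is traced automatically: prompts, completions, latency,
# token counts, and costs are recorded for the request.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```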
Streaming
When streaming, enable token statistics by adding `stream_options={"include_usage": True}` to OpenAI SDK calls. Alternatively, if that option is not available, you can enable token counting for streaming by installing tiktoken with `pip install tiktoken`; the tracer will then use it to count tokens.
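For example, a streaming chat request with usage reporting enabled might look like the following sketch; the model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Request usage statistics for the streamed response so that
# token counts are available for the trace.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    # The final chunk carries the usage object and may have an
    # empty choices list when include_usage is enabled.
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
```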
Examples
The OpenAI app example illustrates how to add and configure the Graphsignal tracer.