OpenAI API Tracing And Monitoring
See the Quick Start guide on how to install and configure Graphsignal.
Graphsignal automatically instruments and monitors the OpenAI Python library. Calls to supported model endpoints are traced and monitored automatically.
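A minimal sketch of what this setup can look like; the API key, deployment name, model, and prompt here are illustrative placeholders, and the example assumes the pre-1.0 OpenAI Python client:

```python
import graphsignal
import openai

# Configure the tracer once at startup; the key and deployment name
# are placeholders for your own values from the Graphsignal dashboard.
graphsignal.configure(api_key="my-graphsignal-api-key", deployment="my-openai-app")

# Calls made through the OpenAI library are now traced automatically;
# no per-call instrumentation is needed.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

No tracing-specific code is required around the request itself; the tracer hooks into the library when Graphsignal is configured.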
Besides reporting latency, throughput, and utilization, Graphsignal also records OpenAI-specific information, such as data statistics for prompts, completions, embeddings, and images. These statistics include costs, token counts, sizes, finish reasons, and more.
When streaming is used for completion or chat requests, the Graphsignal tracer does not count prompt tokens by default, so such requests are undercounted in cost metrics. To enable token counting for streaming, run pip install tiktoken, and the tracer will use it for counting. We also recommend adding import tiktoken somewhere in your code; otherwise, the first API request will have increased latency due to importing the tiktoken package for the first time.
The OpenAI app example illustrates how to add and configure the Graphsignal tracer.