OpenAI API Tracing and Monitoring

See the Quick Start guide on how to install and configure Graphsignal.
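As a minimal sketch (assuming the graphsignal.configure() call described in the Quick Start guide, with placeholder API key and deployment values), setup after pip install graphsignal typically looks like this:

    import graphsignal

    # Configure the tracer once, before any OpenAI calls are made.
    # 'my-api-key' and 'my-openai-app' are placeholders; use your own values.
    graphsignal.configure(api_key='my-api-key', deployment='my-openai-app')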

Graphsignal automatically instruments and monitors the OpenAI library. Calls to the following model endpoints are traced and monitored automatically (see the sketch after this list):

  • Chat
  • Embedding
  • Image
  • Audio
  • Moderation
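For example, once the tracer is configured, a chat request is traced without any extra code. The snippet below is a minimal sketch assuming the official openai Python client (v1 interface) and a placeholder model name:

    from openai import OpenAI

    client = OpenAI()

    # This call is traced automatically by Graphsignal: the prompt,
    # completion, latency, token counts, and cost are recorded.
    response = client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[{'role': 'user', 'content': 'Write a haiku about tracing.'}]
    )
    print(response.choices[0].message.content)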

Graphsignal automatically records prompts, completions, latency, token counts, and costs. These insights are monitored and available for detailed analysis.

Learn how to set session and user tags for OpenAI in the Session Tracking and User Tracking guides.
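As an illustration only (the exact calls are covered in those guides), tags are set before issuing OpenAI requests; the sketch below assumes a set_context_tag() helper and placeholder tag values:

    import graphsignal

    # Placeholder identifiers; typically set per session or per request.
    graphsignal.set_context_tag('session_id', 'session-123')
    graphsignal.set_context_tag('user_id', 'user-456')

    # OpenAI calls made afterwards are associated with these tags.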

Streaming

When streaming is used for completion or chat requests, the Graphsignal tracer does not count prompt tokens by default, so such requests are undercounted in cost metrics. To enable token counting for streaming, install tiktoken (pip install tiktoken in Python), and the tracer will use it for counting.
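For reference, a streaming request looks like the sketch below (openai v1 client and model name are placeholder assumptions). With tiktoken installed, the tracer can estimate the token counts that a streamed response does not report:

    from openai import OpenAI

    client = OpenAI()

    # With tiktoken installed, token counts for this streamed request
    # can be estimated and included in cost metrics.
    stream = client.chat.completions.create(
        model='gpt-4o-mini',
        messages=[{'role': 'user', 'content': 'Stream a short answer.'}],
        stream=True
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end='')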

Examples

The OpenAI app example illustrates how to add and configure the Graphsignal tracer.