Natively supported frameworks and libraries
The AI-native observability platform
Trace requests and runs with full AI context.
See latency breakdowns by operation.
Analyze model API costs by deployment, model, or user.
Get notified about errors and anomalies.
Monitor API, compute, and GPU utilization.
Collaborate during incidents for faster resolution.
Read more about AI observability
Tracing OpenAI Functions with Graphsignal
Learn how to trace, monitor, and debug OpenAI function calling in production and development.
Tracing and Monitoring LlamaIndex Applications
Learn how to trace, monitor, and debug LlamaIndex applications in production and development.