Learn how to trace, monitor and debug Hugging Face Transformers Agents in production and development.
Hugging Face Transformers Agents are a natural evolution of, and an abstraction over, models hosted on Hugging Face. They offer a simple and elegant implementation of an AI agent.
When applications relying on Transformers Agents are deployed to production and serve user requests, full visibility into agent runs is necessary to ensure applications behave as expected and users have the best experience. The essential signals include latencies, errors and exceptions, costs and token counts, and data metrics such as sizes.
Graphsignal automatically instruments and starts tracing and monitoring Transformers Agents. It's only necessary to set it up by providing a Graphsignal API key and a deployment name.
import graphsignal
# Provide an API key directly or via GRAPHSIGNAL_API_KEY environment variable
graphsignal.configure(api_key='my-api-key', deployment='my-agent-app-prod')
You can get an API key here.
To additionally trace other functions, e.g. request handlers, you can also use a decorator or a context manager. See the Quick Start for complete setup instructions.
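To illustrate the two patterns, here is a minimal, self-contained sketch of how a tracing decorator and context manager work conceptually. It uses only the standard library; the names `traced` and `trace_span` are hypothetical and Graphsignal's actual helpers differ, so refer to its documentation for the real API.

```python
import contextlib
import functools
import time

# Conceptual sketch only: mimics what a tracing decorator and context
# manager do. The names `traced` and `trace_span` are hypothetical.

SPANS = []  # collected (name, duration_in_seconds) pairs


@contextlib.contextmanager
def trace_span(name):
    """Context manager that times the wrapped block and records a span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, time.perf_counter() - start))


def traced(func):
    """Decorator that traces every call to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with trace_span(func.__qualname__):
            return func(*args, **kwargs)
    return wrapper


@traced
def handle_request(prompt):
    # Stand-in for a request handler that would invoke the agent.
    return prompt.upper()


# Decorator form: each call to handle_request is traced automatically.
result = handle_request("draw a cat")

# Context-manager form: trace any arbitrary block of code.
with trace_span("agent-run"):
    pass
```

In a real application, the recorded spans would be uploaded to the tracing backend instead of being kept in a local list.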
Here is a simple example demonstrating how to add the Graphsignal tracer. See the complete example here.
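As a sketch of what such a setup might look like, the snippet below combines the tracer configuration with a Transformers agent. It is a setup fragment, not runnable as-is: it requires a real Graphsignal API key, a Hugging Face Inference API endpoint, and network access, and the deployment name and prompt are illustrative.

```python
import graphsignal
from transformers import HfAgent

# Configure the tracer once at application startup
# (API key and deployment name are placeholders).
graphsignal.configure(api_key='my-api-key', deployment='my-agent-app-prod')

# A StarCoder-backed agent from the Transformers Agents API.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# Agent runs are traced automatically; no extra instrumentation is needed.
agent.run("Draw me a picture of rivers and lakes.")
```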
After the application is started, traces and metrics are automatically recorded and available in the dashboards.
If an OpenAI LLM is used as an agent, Graphsignal automatically instruments and traces the OpenAI API, providing additional insights such as costs, token counts, sizes, and finish reasons.
Learn more about OpenAI cost tracking here.
Performance, data metrics, and resource utilization are available to monitor applications over time and to correlate any changes or issues.
Alerts can be set up to get notified on exceptions or outliers.
Give it a try and let us know what you think. Follow us at @GraphsignalAI for updates.