Monitoring and Tracing LangChain Applications
By Dmitri Melikyan | 2 min read

Learn how to monitor and troubleshoot LangChain applications in production.

LangChain In Production

The LangChain library is extremely useful for building AI applications that are based on or make use of LLMs. Chaining models, tools, and actions is a natural way to work with LLMs.

When LangChain apps are deployed, especially user-facing ones, it becomes important to ensure that the application behaves as expected with regard to agent reasoning, prompts, completions, latency, reliability, and costs. Because multiple API calls and actions can be chained in a single run to complete a task, chain-level visibility into runs is essential.

Tracing LangChain Runs With Graphsignal

Graphsignal automatically instruments, traces, and monitors chains. The only setup required is providing a Graphsignal API key and a deployment name.

import graphsignal

# Provide an API key directly or via GRAPHSIGNAL_API_KEY environment variable
graphsignal.configure(api_key='my-api-key', deployment='my-langchain-app-prod')

To additionally trace other functions, e.g., request handlers, you can use a decorator or a context manager. See the Quick Start for complete setup instructions.

To demonstrate, I ran this example app, which simulates periodic chain runs. While it runs, traces and metrics are continuously recorded and available in the dashboard for analysis. Traces are recorded automatically at regular intervals, on exceptions, and on anomalies.

Graphsignal traces dashboard

We can drill into a trace to investigate slow latency or errors, inspect the recorded data, and determine whether it may have been the root cause of an issue.

Graphsignal trace dashboard

Prompts and completions are recorded along with data statistics, which is instrumental for troubleshooting errors and data issues.

Graphsignal data view

Tracking OpenAI API Costs

If OpenAI LLMs are used, Graphsignal automatically instruments and traces the OpenAI API as well, providing additional insights such as costs, token counts, sizes, and finish reasons.

Graphsignal OpenAI dashboard

Learn more about OpenAI cost tracking here.

Performance, Data, and System Metrics

Performance metrics, data metrics, and resource utilization are available to monitor applications over time and correlate any changes or issues.

Alerts can be set up to get notified on exceptions or outliers.

Give it a try and let us know what you think. Follow us at @GraphsignalAI for updates.