LangChain Tracing And Monitoring
See the Quick Start guide on how to install and configure Graphsignal.
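As a quick reference, minimal setup usually amounts to installing the package and calling configure once at startup. This is a sketch; the API key and deployment name below are placeholders, and the Quick Start guide is the authoritative source:

```python
# Install first: pip install graphsignal
import graphsignal

# Configure once at application startup.
# 'my-api-key' and 'my-langchain-app' are placeholder values.
graphsignal.configure(api_key='my-api-key', deployment='my-langchain-app')
```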
Graphsignal automatically instruments, traces and monitors LangChain. Trace samples, as well as outliers and errors, provide execution details for each chain, LLM and tool. These insights include prompts, completions, parameters, latency, and exceptions. Latency, throughput, utilization, and cost metrics are available for each chain as well.
When OpenAI APIs are used, Graphsignal automatically instruments and traces the OpenAI API as well, providing additional insights such as token counts, payload sizes, finish reasons, and costs.
Tracing other functions
To additionally trace any function or code, you can use a decorator or a context manager:
@graphsignal.trace_function
def handle_request():
    chain.run("some initial text")
with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
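Putting the pieces together, a complete sketch might look like the following. Chain construction is omitted, the key and deployment values are placeholders, and both graphsignal and langchain are assumed to be installed:

```python
import graphsignal

# Placeholder values; use your real API key and deployment name.
graphsignal.configure(api_key='my-api-key', deployment='my-langchain-app')

@graphsignal.trace_function
def handle_request():
    # 'chain' is assumed to be a previously constructed LangChain chain.
    chain.run("some initial text")

def handle_request_with_context():
    # The trace name 'my-chain' is arbitrary; pick a stable, descriptive name
    # so metrics aggregate consistently across runs.
    with graphsignal.start_trace('my-chain'):
        chain.run("some initial text")
```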
Troubleshooting
If LangChain traces are not reported and no errors are logged, you may need to add Graphsignal callback handlers explicitly. Here is how to do it:
from graphsignal.callbacks.langchain import GraphsignalCallbackHandler
chain.run("some initial text", callbacks=[GraphsignalCallbackHandler()])
Or, for async chains and LLMs:
from graphsignal.callbacks.langchain import GraphsignalAsyncCallbackHandler
await chain.arun("some initial text", callbacks=[GraphsignalAsyncCallbackHandler()])
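Since `arun` returns a coroutine, it must be awaited inside an event loop. A minimal sketch, assuming a previously constructed async-capable chain:

```python
import asyncio
from graphsignal.callbacks.langchain import GraphsignalAsyncCallbackHandler

async def handle_request():
    # 'chain' is assumed to be a previously constructed async-capable LangChain chain.
    await chain.arun("some initial text",
                     callbacks=[GraphsignalAsyncCallbackHandler()])

# In a script, drive the handler with asyncio; frameworks like FastAPI
# run the event loop for you, so there you would just await the handler.
asyncio.run(handle_request())
```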
See LangChain Callbacks documentation for more information.
Examples
- The LangChain app example illustrates how to add and configure Graphsignal.
- The FastAPI LangChain app example shows how to trace async LangChain applications served via FastAPI endpoints.