LangChain Tracing And Monitoring

See the Quick Start guide for how to install and configure Graphsignal.

Graphsignal automatically instruments, traces and monitors LangChain. Trace samples, as well as outliers and errors, provide execution details for each chain, LLM and tool. These insights include prompts, completions, parameters, latency, and exceptions. Latency, throughput, utilization, and cost metrics are available for each chain as well.

When OpenAI APIs are used, Graphsignal automatically instruments and traces them as well, providing additional insights such as token counts, sizes, finish reasons, and costs.
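As a minimal sketch, enabling this might look like the following; the deployment name and the chain are placeholders, and the Graphsignal API key is assumed to be provided via the environment:

```python
# Hypothetical setup sketch: configure Graphsignal once at startup,
# then use LangChain as usual. The GRAPHSIGNAL_API_KEY environment
# variable is assumed to be set; 'my-langchain-app' is a placeholder.
import graphsignal
from langchain.llms import OpenAI

graphsignal.configure(deployment='my-langchain-app')

llm = OpenAI(temperature=0)
# This call is traced automatically, including OpenAI token counts and cost.
llm("Tell me a joke")
```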

Tracing other functions

To additionally trace any function or code, you can use a decorator or a context manager:

@graphsignal.trace_function
def handle_request():
    chain.run("some initial text")

with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")


If LangChain traces are not reported and no errors are logged, you may need to add a Graphsignal callback handler explicitly. Here is how to do it:

from graphsignal.callbacks.langchain import GraphsignalCallbackHandler

chain.run("some initial text", callbacks=[GraphsignalCallbackHandler()])

Or, for async chains and LLMs:

from graphsignal.callbacks.langchain import GraphsignalAsyncCallbackHandler

await chain.arun("some initial text", callbacks=[GraphsignalAsyncCallbackHandler()])

See LangChain Callbacks documentation for more information.