LangChain Tracing and Monitoring
See the Quick Start guide for how to install and configure Graphsignal.
Graphsignal automatically instruments, traces, and monitors LangChain. Traces are created for LLM calls and tools, and include prompts, completions, parameters, latency, and exceptions.
When the OpenAI APIs are used, Graphsignal automatically instruments and traces OpenAI as well, providing additional insights such as model details, token counts, and costs.
Learn how to set session and user tags for LangChain in the Session Tracking and User Tracking guides.
Manual tracing
To trace any additional function or code block, use a decorator or a context manager:
```python
@graphsignal.trace_function
def handle_request():
    chain.run("some initial text")
```

```python
with graphsignal.trace('my-chain'):
    chain.run("some initial text")
```
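Conceptually, both forms wrap the traced code in a timed span that records a name, latency, and any exception. The following stdlib-only sketch illustrates that mechanism; it is not the Graphsignal implementation, and the `spans` list stands in for whatever a real tracer would upload:

```python
import functools
import time
from contextlib import contextmanager

# Stand-in for a tracer's span exporter (hypothetical, for illustration only).
spans = []

def trace_function(func):
    """Decorator form: times the wrapped function and records the outcome."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        error = None
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            error = repr(exc)
            raise
        finally:
            spans.append({
                'name': func.__name__,
                'latency_s': time.perf_counter() - start,
                'error': error,
            })
    return wrapper

@contextmanager
def trace(name):
    """Context-manager form: times the enclosed block and records the outcome."""
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:
        error = repr(exc)
        raise
    finally:
        spans.append({
            'name': name,
            'latency_s': time.perf_counter() - start,
            'error': error,
        })

@trace_function
def handle_request():
    return "ok"

handle_request()
with trace('my-chain'):
    pass
# spans now holds two records: 'handle_request' and 'my-chain'
```

Exceptions propagate to the caller unchanged; the span is recorded in the `finally` block either way, which is why a tracer can attach errors and latency to the same record.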
Manual integration
You can add Graphsignal callback handlers explicitly:
```python
from graphsignal.callbacks.langchain import GraphsignalCallbackHandler

handler = GraphsignalCallbackHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
```
Or, for async chains and LLMs:
```python
from graphsignal.callbacks.langchain import GraphsignalAsyncCallbackHandler

handler = GraphsignalAsyncCallbackHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
```
See the LangChain Callbacks documentation for more information.
Examples
- The LangChain app example illustrates how to add and configure Graphsignal.