See the Quick Start guide on how to install and configure Graphsignal.
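For reference, initialization typically looks like the following sketch; the API key and deployment name are placeholder values, and the exact options are covered in the Quick Start guide:

```python
import graphsignal

# Configure the Graphsignal agent once at application startup.
# 'my-api-key' and 'my-langchain-app' are placeholders; use your own values.
graphsignal.configure(api_key='my-api-key', deployment='my-langchain-app')
```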
Graphsignal automatically instruments, traces, and monitors LangChain. Trace samples, as well as outliers and exceptions, provide execution details for each chain, LLM, and tool. Latency, throughput, and utilization metrics are also available for each chain.
Sometimes it is useful to measure the whole run to see the total latency and a breakdown by chain. Use a decorator or context manager, for example:

```python
with graphsignal.start_trace('mychain'):
    chain.run("some text")
```
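The same run-level measurement can also be expressed with a decorator instead of a context manager. This is a minimal sketch assuming the `graphsignal.trace_function` decorator and a `chain` object defined elsewhere:

```python
import graphsignal

@graphsignal.trace_function
def run_chain(text):
    # The whole function call is recorded as a single trace,
    # with per-chain spans captured by the automatic instrumentation.
    return chain.run(text)
```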
Additionally, if an OpenAI LLM is used, Graphsignal automatically instruments and traces the OpenAI API, providing additional insights such as token counts, sizes, finish reasons, and more.
The LangChain app example illustrates how to add and configure Graphsignal.