LlamaIndex Tracing And Monitoring

See the Quick Start guide for instructions on how to install and configure Graphsignal.
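For context, a minimal setup sketch is shown below; the `api_key` and `deployment` values are placeholders you would replace with your own, and the exact parameters are covered in the Quick Start guide:

```python
import graphsignal

# Configure once at application startup.
# 'my-api-key' and 'my-llama-app' are illustrative placeholders.
graphsignal.configure(api_key='my-api-key', deployment='my-llama-app')
```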

Graphsignal automatically instruments, traces and monitors LlamaIndex. Traces and metrics provide execution details for each query, retrieval, and index operation. These insights include prompts, completions, embedding statistics, retrieved nodes, parameters, latency, and exceptions.

When OpenAI APIs are used, Graphsignal automatically instruments and traces the OpenAI API as well, providing additional insights such as token counts, sizes, finish reasons, and costs.

Tracing other functions

To additionally trace any function or code block, you can use a decorator or a context manager:

with graphsignal.start_trace('load-external-data'):
    data = load_data()  # load_data() is a placeholder for your own code
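The decorator form wraps a whole function in a trace; a minimal sketch, assuming the `trace_function` decorator from the Graphsignal Python API and a hypothetical `load_external_data` function of your own:

```python
import graphsignal

# Every call to this function is recorded as a trace.
# load_external_data is an illustrative placeholder.
@graphsignal.trace_function
def load_external_data():
    ...
```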

If LlamaIndex traces are not reported and no errors are logged, you may need to add the Graphsignal callback handler explicitly. Here is how to do it:

from llama_index import GPTVectorStoreIndex, ServiceContext
from llama_index.callbacks import CallbackManager
from graphsignal.callbacks.llama_index import GraphsignalCallbackHandler

callback_manager = CallbackManager([GraphsignalCallbackHandler()])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
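With the handler in place, queries against the index are traced as well; a minimal sketch continuing from the snippet above, assuming `documents` has already been loaded and with a purely illustrative question string:

```python
# Build the index with the Graphsignal-enabled service context, then query it.
# Both the indexing and the query are reported as traces.
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("Summarize the loaded documents.")
print(response)
```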

See LlamaIndex Callbacks documentation for more information.