Python API Reference

graphsignal.configure

configure(
    api_key: Optional[str] = None, 
    deployment: Optional[str] = None, 
    debug_mode: Optional[bool] = False) -> None

Configures and initializes the agent.

All arguments can also be passed via environment variables: GRAPHSIGNAL_{ARG_NAME}. Arguments passed directly to the function take precedence.

Arguments:

  • api_key: The access key for communication with Graphsignal cloud.
  • deployment: A unique name to group and identify endpoints for a particular deployment, version or environment.
  • debug_mode: Enable/disable debug output.
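
For example, a minimal setup (the key and deployment name below are placeholders; both could also be supplied via the GRAPHSIGNAL_API_KEY and GRAPHSIGNAL_DEPLOYMENT environment variables):

import graphsignal

graphsignal.configure(api_key='my-api-key', deployment='my-model-prod')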

graphsignal.start_trace

start_trace(
    endpoint: str,
    tags: Optional[dict[str, str]] = None,
    ensure_sample: Optional[bool] = False,
    profiler: Optional[Union[bool, str, OperationProfiler]] = True) -> EndpointTrace

Measure and trace execution.

The returned EndpointTrace object can be used as a with context manager around the function call or code segment. Otherwise, the stop() method should be called on the returned EndpointTrace object.

Arguments:

  • endpoint: Unique identifier for a function or code segment.
  • tags: Any additional information to identify the trace.
  • ensure_sample: Ensure that the current trace is recorded. The number of recorded trace samples is limited.
  • profiler: Use a profiler to automatically profile some executions. The following values are currently supported: True (or python), tensorflow, pytorch, jax, onnxruntime. Pass False to disable profiling.

Returns:

  • EndpointTrace - trace object representing the currently active execution, e.g. an inference.
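
For example, a minimal sketch (the endpoint name, model and input_data are placeholders):

# as a context manager; stop() and exception reporting are handled automatically
with graphsignal.start_trace(endpoint='predict') as trace:
    output = model(input_data)

# or explicitly, when a with block is not convenient
trace = graphsignal.start_trace(endpoint='predict')
try:
    output = model(input_data)
finally:
    trace.stop()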

graphsignal.upload

upload(block: Optional[bool] = False) -> None

Performance data is uploaded periodically. Call this method to initiate an upload explicitly.

Arguments:

  • block: If set to True, any outstanding data will be uploaded in the current thread instead of a new thread.
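
For example, a short-lived batch job may want to block until any outstanding data is sent before the process exits (a minimal sketch):

# at the end of a short-lived script, before the process exits
graphsignal.upload(block=True)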

graphsignal.shutdown

shutdown() -> None

Clean up and shut down the agent.

Normally, this method is called automatically when the Python script exits. Use it if you want to explicitly clean up and shut down the agent.
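
For example, a long-running worker that manages its own lifecycle could shut the agent down explicitly (a minimal sketch; run_worker is a placeholder for application logic):

try:
    run_worker()
finally:
    graphsignal.shutdown()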

graphsignal.EndpointTrace

The EndpointTrace object represents the currently active execution. It is returned by the start_trace method and should not be created directly.

graphsignal.EndpointTrace.set_tag

set_tag(key: str, value: Any) -> None

Add any additional information to identify the trace.

Arguments:

  • key: Tag key.
  • value: Tag value. Will be converted to string using str().

Raises:

  • ValueError - When arguments are missing or invalid.
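
For example (the tag key and value are illustrative):

with graphsignal.start_trace(endpoint='predict') as trace:
    trace.set_tag('model_version', '1.2.0')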

graphsignal.EndpointTrace.set_exception

set_exception(exc: Optional[Exception] = None, exc_info: Optional[bool] = None) -> None

When the with context manager is used with the start_trace method, exceptions are reported automatically. Use this method in other cases.

Arguments:

  • exc: Exception object.
  • exc_info: When True, sys.exc_info() will be called internally to obtain the current exception.

Raises:

  • ValueError - When arguments are missing or invalid.
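
For example, when tracing is stopped explicitly rather than via a with block (model and input_data are placeholders):

trace = graphsignal.start_trace(endpoint='predict')
try:
    output = model(input_data)
except Exception:
    trace.set_exception(exc_info=True)
    raise
finally:
    trace.stop()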

graphsignal.EndpointTrace.set_data

set_data(name: str, obj: Any) -> None

Measure any data related to the current trace in order to track data metrics and record data profiles, which will be available within traces.

with graphsignal.start_trace(endpoint='predict') as trace:
    trace.set_data('input', input_data)

No raw data is recorded by the agent, only statistics such as size, shape or number of missing values.

Arguments:

  • name: Data name, e.g. 'model-input'
  • obj: Data object. The following types are currently supported: list, dict, set, tuple, str, bytes, numpy.ndarray, tensorflow.Tensor, torch.Tensor.

Raises:

  • ValueError - When arguments are missing or invalid.

graphsignal.EndpointTrace.stop

stop() -> None

Stops measuring and tracing the current execution, if tracing is active. This method is called automatically when the with context manager is used around the code.

graphsignal.callbacks.keras.GraphsignalCallback

GraphsignalCallback(
    endpoint: str,
    tags: Optional[dict[str, str]] = None)

Keras callback interface for automatic inference measurement and profiling.

Usage: model.fit(..., callbacks=[GraphsignalCallback(endpoint='predict')]) or model.predict(..., callbacks=[GraphsignalCallback(endpoint='predict')]).

See Model class for more information on adding callbacks.

Arguments:

  • endpoint: Unique identifier for a function or code segment.
  • tags: Any additional information to identify the trace.
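
For example, a minimal sketch (model and x_test are placeholders for an existing Keras model and input data):

from graphsignal.callbacks.keras import GraphsignalCallback

model.predict(x_test, callbacks=[GraphsignalCallback(endpoint='predict')])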

graphsignal.callbacks.pytorch_lightning.GraphsignalCallback

GraphsignalCallback(
    endpoint: str,
    tags: Optional[dict[str, str]] = None)

PyTorch Lightning callback for automatic inference measurement and profiling.

Usage: Trainer(..., callbacks=[GraphsignalCallback(endpoint='predict')]).

See Trainer class for more information on adding callbacks.

Arguments:

  • endpoint: Unique identifier for a function or code segment.
  • tags: Any additional information to identify the trace.
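
For example, a minimal sketch (model and predict_dataloader are placeholders for an existing LightningModule and dataloader):

from pytorch_lightning import Trainer
from graphsignal.callbacks.pytorch_lightning import GraphsignalCallback

trainer = Trainer(callbacks=[GraphsignalCallback(endpoint='predict')])
trainer.predict(model, dataloaders=predict_dataloader)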

graphsignal.profilers.onnxruntime.ONNXRuntimeProfiler.initialize_options

initialize_options(onnx_session_options: onnxruntime.SessionOptions)

Initializes profiling in the ONNX Runtime session options. The ONNX session should later be created using this session options object.

Arguments:

  • onnx_session_options: ONNX Runtime session options.
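
For example, a minimal sketch (assuming the profiler can be constructed without arguments; 'model.onnx' is a placeholder path):

import onnxruntime
from graphsignal.profilers.onnxruntime import ONNXRuntimeProfiler

profiler = ONNXRuntimeProfiler()
sess_options = onnxruntime.SessionOptions()
profiler.initialize_options(sess_options)
onnx_session = onnxruntime.InferenceSession('model.onnx', sess_options)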

graphsignal.profilers.onnxruntime.ONNXRuntimeProfiler.set_onnx_session

set_onnx_session(onnx_session: onnxruntime.InferenceSession)

Sets the ONNX session on the profiler. The session must be created using the session options that were also passed to the ONNXRuntimeProfiler.initialize_options method.

Arguments:

  • onnx_session: ONNX Runtime session.
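
Continuing the sketch from initialize_options above, the session is registered with the profiler; passing the profiler instance to start_trace is an assumption based on the OperationProfiler type accepted by its profiler argument ('input' and input_data are placeholders):

profiler.set_onnx_session(onnx_session)

with graphsignal.start_trace(endpoint='predict', profiler=profiler) as trace:
    outputs = onnx_session.run(None, {'input': input_data})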