High-resolution, inference-native profiling, tracing, and metrics across models, engines, and GPUs.
Continuous, high-resolution profiling timelines exposing operation durations and resource utilization across inference workloads.
LLM generation tracing with per-step timing, token throughput, and latency breakdowns for major inference frameworks (see the tracing sketch below).
System-level metrics for inference engines and hardware (CPU, GPU, and other accelerators); see the GPU sampling sketch below.
Error monitoring for device-level failures, runtime exceptions, and inference errors.
Inference telemetry that AI agents can consume to identify bottlenecks and drive targeted improvements across the inference stack.
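
The sketch below is a minimal illustration of what per-step generation tracing captures: time to first token, per-step decode latency, and end-to-end token throughput around any streaming generator. The `GenerationTrace` class, the `traced_generate` wrapper, and the fake token stream are hypothetical names used for illustration, not the API of any particular inference framework.

```python
import time
from dataclasses import dataclass, field


@dataclass
class GenerationTrace:
    """Per-step timings for one LLM generation request (hypothetical helper)."""
    step_latencies_s: list = field(default_factory=list)

    def begin(self) -> None:
        self._start = time.perf_counter()
        self._last = self._start

    def record_step(self) -> None:
        now = time.perf_counter()
        self.step_latencies_s.append(now - self._last)
        self._last = now

    @property
    def ttft_s(self) -> float:
        # Time to first token: latency of the first recorded step.
        return self.step_latencies_s[0]

    @property
    def tokens_per_s(self) -> float:
        elapsed = time.perf_counter() - self._start
        return len(self.step_latencies_s) / elapsed if elapsed > 0 else 0.0


def traced_generate(token_stream):
    """Wrap any streaming generator that yields one token per step."""
    trace = GenerationTrace()
    trace.begin()
    for token in token_stream:
        trace.record_step()
        yield token
    if trace.step_latencies_s:
        print(f"steps={len(trace.step_latencies_s)} "
              f"ttft={trace.ttft_s * 1000:.1f} ms "
              f"throughput={trace.tokens_per_s:.1f} tok/s")


def fake_stream(n: int = 32):
    # Stand-in for a real engine's streaming API; sleeps simulate decode latency.
    for i in range(n):
        time.sleep(0.01)
        yield f"tok{i}"


if __name__ == "__main__":
    for _ in traced_generate(fake_stream()):
        pass
```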
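
Similarly, the following sketch samples the kind of system-level GPU signal described above using NVIDIA's NVML bindings for Python (the `pynvml` package). It is an independent illustration of utilization and memory sampling, not the collector described in this section; the interval and sample count are arbitrary, and it assumes at least one NVIDIA GPU is present.

```python
import time

import pynvml


def sample_gpu_metrics(interval_s: float = 1.0, samples: int = 5) -> None:
    """Print per-GPU utilization and memory usage at a fixed interval."""
    pynvml.nvmlInit()
    try:
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        for _ in range(samples):
            for i, handle in enumerate(handles):
                # Percent of time the SMs and the memory controller were busy
                # over the driver's last sampling window.
                util = pynvml.nvmlDeviceGetUtilizationRates(handle)
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # bytes
                print(f"gpu{i}: sm={util.gpu}% mem_bw={util.memory}% "
                      f"vram={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    sample_gpu_metrics()
```

A real collector would export these samples to a metrics backend rather than printing them, but the fields read here (SM utilization, memory-bandwidth utilization, VRAM used/total) are the same signals the system-level metrics feature above refers to.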