Superpowers for
AI Engineers.

Stop guessing why your agents fail. Get high-fidelity observability into every token, tool call, and decision.

TRACE VIEWER

Visualize Every Step.

See the complete execution tree of your AI applications. Track nested runs, measure latency at each step, and understand token usage.

  • Hierarchical tree visualization
  • Input/Output inspection per run
  • Detailed latency breakdown
  • Token usage and cost tracking
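To make the tree concrete, here is a minimal sketch of the kind of nested run structure a trace viewer renders, with token usage rolled up across children. The shape (`makeRun`, `totalTokens`) is illustrative only, not the OrkaJS data model:

```javascript
// Illustrative run-tree shape: each run has a kind, a name,
// its own latency and token count, and nested child runs.
function makeRun(kind, name, latencyMs, tokens = 0, children = []) {
  return { kind, name, latencyMs, tokens, children };
}

// Recursively sum token usage across the whole tree,
// the way a cost-tracking panel would aggregate it.
function totalTokens(run) {
  return run.tokens + run.children.reduce((sum, c) => sum + totalTokens(c), 0);
}

const trace = makeRun('AGENT', 'research_agent', 2340, 0, [
  makeRun('LLM', 'gpt-4o-mini', 1820, 512),
  makeRun('TOOL', 'google_search', 420),
  makeRun('LLM', 'gpt-4o-mini', 910, 301),
]);

console.log(totalTokens(trace)); // 813
```

The same recursion works for latency breakdowns: replace `tokens` with `latencyMs` to see how much of the agent's 2.34s each child step accounts for.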
AGENT  research_agent   2.34s
  LLM    gpt-4o-mini      1.82s
  TOOL   google_search    0.42s
  LLM    gpt-4o-mini      0.91s

Latency: 342ms

Success rate: 99.7%

LIVE MONITORING

Monitor Live.

Track key metrics as your application runs. Get instant visibility into token usage, costs, and error rates with real-time updates.

  • Live p95 latency percentiles
  • Token consumption by model
  • SSE-powered live updates
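A p95 latency percentile is simply the value below which 95% of sampled request latencies fall. As a sketch of the metric (not the OrkaJS internals), computed over a window of recent samples:

```javascript
// Nearest-rank percentile over a window of latency samples (ms).
// A live dashboard would recompute this as new samples stream in.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

const latencies = [120, 340, 95, 400, 210, 180, 990, 260, 150, 310];
console.log(percentile(latencies, 95)); // 990
```

Note how a single slow outlier (990ms) dominates p95 while barely moving the average, which is why percentiles are the headline metric here.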
TIME TRAVEL

Replay & Refine.

Replay any trace with modified inputs to understand behavior changes. Perfect for debugging edge cases and optimizing prompts.

Input Version A

"Write a summary of..."

Input Version B (Replayed)

"Write a concise technical summary of..."

REPLAY STATUS: Completed in 1.2s
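Conceptually, a replay takes a recorded trace, overrides its input, and links the new run back to the original so the two can be diffed. The field names below (`traceId`, `replayOf`) are hypothetical, meant only to show the shape of that operation:

```javascript
// Hypothetical replay request: clone a recorded trace with a
// modified input and tag it as a replay of the original run.
function buildReplay(original, newInput) {
  return {
    ...original,
    input: newInput,
    replayOf: original.traceId,
  };
}

const recorded = { traceId: 'tr_123', input: 'Write a summary of...' };
const replay = buildReplay(recorded, 'Write a concise technical summary of...');

console.log(replay.replayOf); // 'tr_123'
```

Keeping the back-reference is the important part: it lets the viewer show Version A and Version B side by side, as in the panel above.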
// Production App
const { tracer } = await collector({
  source: 'remote',
  mode: 'agent',
  remote: { endpoint: 'https://...' }
});
Traces streaming to collector...
REMOTE TRACING (NEW)

Debug Production.

Send traces from production apps to a central collector, then view them in real-time from your local machine. Perfect for debugging live issues.

  • Agent mode: send traces to remote
  • Viewer mode: watch live production
  • Configurable sampling rates
  • Filter by environment & time
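Sampling keeps the collector's load bounded: each new trace is sent with some probability rather than always. As a sketch of head-based sampling at a configurable rate (assumed behavior; the option name is not taken from the OrkaJS config shown above):

```javascript
// Head-based sampler: decide once, at trace start, whether to
// export this trace. `random` is injectable for deterministic tests.
function makeSampler(sampleRate, random = Math.random) {
  return () => random() < sampleRate;
}

// At a 25% rate, roughly one in four traces is exported.
const sampler = makeSampler(0.25);
const decision = sampler(); // true or false, per trace
console.log(typeof decision); // 'boolean'
```

Deciding at the head of the trace (rather than per step) keeps every exported trace complete, so the tree viewer never shows partial runs.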

Ready to ship better AI?

Join hundreds of engineers who debug LLM apps with OrkaJS DevTools.