Accessing Traces
Navigate to your task in the Moxn web app and open the Traces tab. You'll see a list of all traces for that task, showing:

- Timestamp
- Prompt name
- Duration
- Token count
- Status

Trace List View
The trace list provides a high-level overview:

| Column | Description |
|---|---|
| Time | When the trace started |
| Prompt | Which prompt was used |
| Duration | Total time for the trace |
| Tokens | Input + output tokens |
| Cost | Estimated cost (based on model pricing) |
| Status | Success, error, or warning |
Filtering
Filter traces by:

- Date range: Today, last 7 days, custom range
- Prompt: Filter to specific prompts
- Status: Success, error
- Branch/Commit: See traces from specific versions
Sorting
Sort by:

- Newest/oldest
- Duration (slowest first)
- Token count (highest first)
- Cost (highest first)
Trace Detail View
Click a trace to see the full details:
Trace Header
The header shows:

- Trace name
- Summary stats (completions, tool calls, token counts)
- Show details button
Span Hierarchy
See the tree structure of spans:

- customer_support_request (root) - 2.5s
  - classify_query - 0.8s
  - search_documents - 0.3s
  - generate_response - 1.4s
    - LLM Call
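A hierarchy like this mirrors how spans are nested in your code. As a rough sketch of that pattern (using OpenTelemetry-style context managers purely for illustration; Moxn's own SDK calls may differ, and the helper functions are placeholders):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("customer_support")

def classify(query): return "billing"            # placeholder helper
def search(query, category): return ["doc-1"]    # placeholder helper
def generate(query, docs): return "answer"       # placeholder helper

def handle_request(query: str) -> str:
    # Root span: becomes customer_support_request at the top of the tree
    with tracer.start_as_current_span("customer_support_request"):
        with tracer.start_as_current_span("classify_query"):
            category = classify(query)
        with tracer.start_as_current_span("search_documents"):
            docs = search(query, category)
        with tracer.start_as_current_span("generate_response"):
            # The LLM call appears as a child event of this span
            return generate(query, docs)
```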
Span Details
For each span, you can see:

Timing
- Start time
- Duration
- Percentage of total trace time

Attributes
- Custom metadata you added
- System attributes (prompt_id, task_id, etc.)

Events
- LLM calls made within the span
- Token counts per event
- Response types
Span Detail Modal
Click any span to open a modal with detailed information.
Modal Tabs
| Tab | Content |
|---|---|
| Conversation Flow | Visual message sequence with role indicators |
| Variables | Input variable values used in this call |
| Metrics | Latency, token usage, estimated costs |
| Raw Message Content | Full message content |
| Raw Data | Complete span data as JSON |
Navigation
- Previous/Next: Navigate between spans
- Keyboard shortcuts: Arrow keys, J/K, Esc to close
- Create Observation: Save span for test cases
LLM Event Details
Click an LLM event to see the complete interaction.

Input Tab
Session Data shows your original input:
- Role (system/user/assistant)
- Content (with variables substituted)
- Any images or files included
Output Tab
Response Content: What the LLM returned, rendered as text for text responses.

Metrics Tab
- Model: Which model was used
- Provider: Anthropic, OpenAI, etc.
- Input tokens: Prompt token count
- Output tokens: Completion token count
- Total tokens: Combined count
- Estimated cost: Based on model pricing
- Latency: Time to first token, total time
- Stop reason: Why the model stopped
Version Tab
- Prompt ID: UUID of the prompt
- Prompt name: Human-readable name
- Branch: If fetched by branch
- Commit: If fetched by commit
- Uncommitted: Whether working state was used
Use Cases
Debugging Issues
When something goes wrong:

- Filter to the timeframe when the issue occurred
- Find traces with errors
- Click to see the full input/output
- Check whether:
  - The input data was correct
  - The prompt content was as expected
  - The response was malformed
Performance Analysis
To identify slow traces:

- Sort by duration (slowest first)
- Look for patterns:
  - Long input (too many tokens?)
  - A slow model (would a faster one do?)
  - Sequential calls (could they run in parallel?)
- Check the span hierarchy for bottlenecks (see the sketch below)
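You can also script the bottleneck check against an exported trace (see Exporting Data below). The schema here — a `spans` array with `name` and `duration_ms` fields — is an assumption for illustration; adjust the field names to match your actual export:

```python
import json

# Assumed export shape: {"spans": [{"name": ..., "duration_ms": ...}, ...]}
with open("trace.json") as f:
    trace_data = json.load(f)

spans = trace_data["spans"]
total_ms = max(s["duration_ms"] for s in spans)  # root span covers the whole trace

# Print each span's share of total trace time, slowest first
for s in sorted(spans, key=lambda s: s["duration_ms"], reverse=True):
    share = 100 * s["duration_ms"] / total_ms
    print(f"{s['name']:<30} {s['duration_ms']:>8.0f} ms  {share:5.1f}%")
```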
Cost Optimization
To reduce costs:

- Sort by cost (highest first)
- Identify expensive prompts (see the sketch below)
- Look for:
  - Unnecessarily long context
  - Verbose system prompts
  - Large documents that could be summarized
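A quick way to find expensive prompts outside the UI is to aggregate an export. This sketch assumes a CSV export with `prompt` and `cost` columns (illustrative names; check your export's actual headers):

```python
import csv
from collections import defaultdict

# Sum estimated cost per prompt from an exported CSV
totals: dict[str, float] = defaultdict(float)
with open("traces.csv") as f:
    for row in csv.DictReader(f):
        totals[row["prompt"]] += float(row["cost"])

# Most expensive prompts first
for prompt, cost in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{prompt:<30} ${cost:,.4f}")
```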
A/B Testing
To compare versions:

- Run both versions and log each with distinct metadata
- Filter by your A/B test attribute
- Compare (see the sketch below):
  - Success rates
  - Average latency
  - Output quality (manual review)
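Once both variants are logged with, say, an `experiment` attribute, the comparison can be scripted from a JSON export. The field names here (`metadata.experiment`, `status`, `duration_ms`) are assumptions for illustration:

```python
import json
from statistics import mean

with open("traces.json") as f:
    traces = json.load(f)

# Compare success rate and mean latency per experiment group
for group in ("A", "B"):
    subset = [t for t in traces if t["metadata"].get("experiment") == group]
    if not subset:
        continue
    ok = sum(t["status"] == "success" for t in subset) / len(subset)
    latency = mean(t["duration_ms"] for t in subset)
    print(f"variant {group}: n={len(subset)} success={ok:.1%} mean_latency={latency:.0f} ms")
```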
Exporting Data
Export trace data for external analysis:

- CSV: Basic metrics
- JSON: Full trace data

Exports are useful for:

- Building dashboards
- Long-term storage
- Compliance records
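As a sketch of the dashboard path, here is a JSON export flattened into a pandas DataFrame. The field names mirror the trace list columns above, but they are assumptions about the export format; verify against a real export:

```python
import json

import pandas as pd

with open("traces.json") as f:
    traces = json.load(f)

# One row per trace; field names assumed to match the trace list columns
df = pd.DataFrame([
    {
        "time": t["start_time"],
        "prompt": t["prompt"],
        "duration_ms": t["duration_ms"],
        "tokens": t["tokens"],
        "cost": t["cost"],
        "status": t["status"],
    }
    for t in traces
])
df["time"] = pd.to_datetime(df["time"])
print(df.groupby("prompt")[["duration_ms", "cost"]].mean())
```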
Best Practices
Add meaningful metadata
Use custom attributes to make traces searchable:
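For example (OpenTelemetry-style calls for illustration; the attribute names are yours to choose, and Moxn's SDK may expose an equivalent API):

```python
from opentelemetry import trace

tracer = trace.get_tracer("customer_support")

with tracer.start_as_current_span("generate_response") as span:
    # These become filterable attributes in the Traces tab
    span.set_attribute("customer_tier", "enterprise")
    span.set_attribute("experiment", "B")
    span.set_attribute("region", "eu-west-1")
```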
Name spans descriptively
Use names that describe what the span does:
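For example (same illustrative OpenTelemetry-style API as above):

```python
from opentelemetry import trace

tracer = trace.get_tracer("customer_support")

# Vague: tells you nothing when scanning the span tree
with tracer.start_as_current_span("step_1"):
    pass

# Descriptive: the hierarchy reads as a narrative of the request
with tracer.start_as_current_span("classify_query"):
    pass
```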
Review regularly
Check traces periodically, not just when issues arise.
Set up alerts
Use exports to build alerts for:
- Error rate spikes
- Latency increases
- Cost anomalies
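A minimal sketch of the error-rate case, run on a schedule against a JSON export (the field names, and ISO 8601 timestamps with offsets, are assumptions):

```python
import json
from datetime import datetime, timedelta, timezone

ERROR_RATE_THRESHOLD = 0.05  # tune to your observed baseline

with open("traces.json") as f:
    traces = json.load(f)

# Keep traces from the last hour; start_time assumed ISO 8601 with offset
cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
recent = [t for t in traces
          if datetime.fromisoformat(t["start_time"]) >= cutoff]

if recent:
    error_rate = sum(t["status"] == "error" for t in recent) / len(recent)
    if error_rate > ERROR_RATE_THRESHOLD:
        print(f"ALERT: error rate {error_rate:.1%} over the last hour")
```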