
Accessing the Studio
You can access the Studio from multiple places:

- From the Dashboard: Click Studio in the sidebar
- From a Task: Click the Studio button in the task header
- From a Prompt: Click Open in Studio on any prompt
Studio Interface
Header Controls

| Control | Description |
|---|---|
| Prompt Selector | Choose which prompt to test |
| Go to | Navigate to the prompt’s detail page |
| Branch Selector | Choose which branch to use |
| Commit Selector | Test specific versions |
| Model Settings | Configure provider, model, temperature |
| Execute | Run the prompt |
Prompt Template Panel
The main editing area shows all messages in your prompt (an illustrative template follows this list):

- System messages (blue background)
- User messages (gray background)
- Assistant messages (white background)
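
As a rough sketch, a template containing all three message types might look like the one below. The double-brace {{customer_name}} placeholder syntax is an assumption for illustration and may differ from your prompt's actual variable syntax:

```json
[
  { "role": "system", "content": "You are a concise, friendly support agent." },
  { "role": "user", "content": "Hi, I'm {{customer_name}} and I need help with {{issue}}." },
  { "role": "assistant", "content": "Of course. Can you tell me a bit more about the problem?" }
]
```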
Variables Panel
On the right side, you’ll see the input variables panel:

- Each variable from your prompt’s input schema appears here
- Fill in values to test different scenarios
- Supports text, objects, arrays, and media types
Testing Prompts
Basic Testing
1. Select a prompt: Use the prompt selector dropdown to choose a prompt.
2. Fill in variables: Enter values for each input variable.
3. Configure model settings: Choose provider, model, temperature, and max tokens (a sketch of example settings follows these steps).
4. Click Execute: Run the prompt and see the response.
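
For instance, one run's configuration might look roughly like the sketch below. The provider, model name, and values are placeholders, not recommendations:

```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "temperature": 0.2,
  "max_tokens": 512
}
```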
Variable Types
| Type | How to Enter |
|---|---|
| String | Plain text in the input field |
| Number | Numeric value |
| Boolean | Toggle switch |
| Object | JSON in code editor |
| Array | JSON array in code editor |
| Image | Upload or paste URL |
| File | Upload file |
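
As an illustration, here is one possible set of variable values covering several of these types. The names and values are invented, and they are shown as a single JSON object for compactness; in the Studio each variable has its own field or code editor:

```json
{
  "customer_name": "Ada Lovelace",
  "max_suggestions": 3,
  "include_links": true,
  "account": { "plan": "pro", "region": "eu-west-1" },
  "recent_orders": ["ORD-1042", "ORD-1043"]
}
```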
Viewing Results
After execution, you’ll see:

- Response content: The LLM’s output
- Token usage: Input and output token counts
- Latency: Time to generate response
- Cost estimate: Based on model pricing
Import from Logs
Replay previous executions by importing variable values from traces:

1. Click Import from Logs: Open the import dialog.
2. Select a trace: Browse recent traces or search by metadata.
3. Import: Variable values are populated from that execution.
4. Modify and re-run: Tweak values and execute again to compare.
This is useful for:

- Debugging production issues
- Testing edge cases from real data
- Building test cases from actual usage
Test Cases
Create and manage test cases for systematic testing.

Creating Test Cases

1. Set up variables: Fill in variable values for a test scenario.
2. Click Save as Test Case: Name your test case (e.g., “Password reset request”).
3. Add expected behavior: Optionally add notes about the expected output.
Running Test Cases
- Click Test Cases to see saved cases
- Select one to load its variable values
- Execute and compare results
- Create observations for regression testing
Multi-Turn Conversations
Test conversational prompts with multiple turns:

1. Execute the initial prompt: Run it with the user’s first message.
2. Click Add Turn: Add the assistant response and a new user message (see the sketch at the end of this section).
3. Execute again: Continue the conversation.
This lets you check:

- Follow-up question handling
- Context retention
- Conversation flow
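
Conceptually, each added turn appends the previous assistant response and a new user message to the history. A minimal sketch of how the conversation might look after one added turn (the content is invented for illustration):

```json
[
  { "role": "user", "content": "How do I reset my password?" },
  { "role": "assistant", "content": "Go to Settings > Security and click Reset password." },
  { "role": "user", "content": "I don't see a Security tab. What should I do?" }
]
```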
Tool Calling
Test prompts with tools (function calling):

- Configure tools: Attach tool schemas to your prompt (an illustrative schema follows this list)
- Execute: The model may return tool calls
- Provide results: Enter mock tool results
- Continue: See how the model uses the results
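
As an illustration, the sketch below shows a tool schema you might attach and a mock result you might enter when the model calls it. The OpenAI-style parameter schema and the get_order_status tool are assumptions for this example; use whatever format your provider and prompt actually expect:

```json
{
  "tool": {
    "name": "get_order_status",
    "description": "Look up the status of an order by its ID",
    "parameters": {
      "type": "object",
      "properties": { "order_id": { "type": "string" } },
      "required": ["order_id"]
    }
  },
  "mock_result": { "order_id": "ORD-1042", "status": "shipped", "eta": "2025-06-12" }
}
```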
Best Practices
- Test edge cases: Try empty inputs, very long inputs, and unusual characters.
- Compare models: Test the same prompt with different models to compare quality and cost.
- Save test cases: Build a library of test cases for regression testing.
- Test before committing: Always test prompt changes in Studio before committing.
- Use real data: Import from logs to test with production-like inputs.