The Prompt Studio is an interactive environment for testing and iterating on your prompts before using them in production.
[Screenshot: Prompt Studio interface]

Accessing the Studio

You can access the Studio from several places:
  1. From the Dashboard: Click Studio in the sidebar
  2. From a Task: Click the Studio button in the task header
  3. From a Prompt: Click Open in Studio on any prompt

Studio Interface

Header Controls

[Screenshot: Studio prompt selector]
The header contains:
| Control | Description |
| --- | --- |
| Prompt Selector | Choose which prompt to test |
| Go to | Navigate to the prompt’s detail page |
| Branch Selector | Choose which branch to use |
| Commit Selector | Test specific versions |
| Model Settings | Configure provider, model, temperature |
| Execute | Run the prompt |

Prompt Template Panel

The main editing area shows all messages in your prompt:
  • System messages (blue background)
  • User messages (gray background)
  • Assistant messages (white background)
Edit messages directly in the Studio—changes auto-save to your working state.
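
As a mental model, the messages in this panel correspond to a role-tagged list; here is a minimal sketch, assuming an OpenAI-style message format ({{customer_message}} is a hypothetical template variable, not one defined by the product):

```python
# Minimal sketch of a prompt's messages, assuming a role-tagged format.
# The Studio's actual internal representation may differ.
messages = [
    {"role": "system", "content": "You are a helpful support assistant."},
    {"role": "user", "content": "{{customer_message}}"},  # hypothetical variable, filled in from the Variables Panel
]
```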

Variables Panel

On the right side, you’ll see input variables:
  • Each variable from your prompt’s input schema appears here
  • Fill in values to test different scenarios
  • Supports text, objects, arrays, and media types
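
For example, if a prompt’s input schema defines a string, an object, and an array, the values you enter might look like the sketch below (the variable names are hypothetical):

```python
# Hypothetical variable values for one test scenario; names are illustrative.
variables = {
    "customer_message": "Where is my order?",        # string: typed into a text field
    "order": {"id": "A-1042", "status": "shipped"},  # object: entered as JSON in the code editor
    "tags": ["shipping", "priority"],                # array: entered as a JSON array
}
```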

Testing Prompts

Basic Testing

  1. Select a prompt: Use the prompt selector dropdown to choose a prompt.
  2. Fill in variables: Enter values for each input variable.
  3. Configure model settings: Choose provider, model, temperature, and max tokens.
  4. Click Execute: Run the prompt and see the response.
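
As a rough sketch, the model settings from step 3 map to parameters like these (the provider and model names are placeholders, not a list of what the Studio supports):

```python
# Hypothetical model settings; actual option names and values may differ.
settings = {
    "provider": "openai",  # example provider
    "model": "gpt-4o",     # example model name
    "temperature": 0.2,    # lower values give more deterministic output
    "max_tokens": 512,     # upper bound on response length
}
```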

Variable Types

| Type | How to Enter |
| --- | --- |
| String | Plain text in the input field |
| Number | Numeric value |
| Boolean | Toggle switch |
| Object | JSON in code editor |
| Array | JSON array in code editor |
| Image | Upload or paste URL |
| File | Upload file |

Viewing Results

After execution, you’ll see:
  • Response content: The LLM’s output
  • Token usage: Input and output token counts
  • Latency: Time to generate response
  • Cost estimate: Based on model pricing
Results are automatically logged for observability (visible in the Traces tab).
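
The cost estimate follows from the token counts and the model’s per-token pricing; a simplified sketch of the arithmetic, using invented rates:

```python
# Illustrative cost arithmetic; these per-token rates are made up.
input_tokens, output_tokens = 1_200, 350
price_per_1k_input, price_per_1k_output = 0.005, 0.020  # USD per 1K tokens (example rates)

cost = (input_tokens / 1000) * price_per_1k_input + (output_tokens / 1000) * price_per_1k_output
print(f"Estimated cost: ${cost:.4f}")  # about $0.013 with these example rates
```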

Import from Logs

Replay previous executions by importing variable values from traces:
  1. Click Import from Logs: Open the import dialog.
  2. Select a trace: Browse recent traces or search by metadata.
  3. Import: Variable values are populated from that execution.
  4. Modify and re-run: Tweak values and execute again to compare.
This is useful for:
  • Debugging production issues
  • Testing edge cases from real data
  • Building test cases from actual usage

Test Cases

Create and manage test cases for systematic testing:

Creating Test Cases

  1. Set up variables: Fill in variable values for a test scenario.
  2. Click Save as Test Case: Name your test case (e.g., “Password reset request”).
  3. Add expected behavior: Optionally add notes about expected output.
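
Conceptually, a saved test case bundles a name, the variable values, and any expectation notes; a hypothetical shape (the field names are illustrative, not the product’s storage format):

```python
# Hypothetical shape of a saved test case; field names are illustrative.
test_case = {
    "name": "Password reset request",
    "variables": {"customer_message": "I forgot my password."},
    "expected_behavior": "Should point to the reset flow, never ask for the password.",
}
```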

Running Test Cases

  1. Click Test Cases to see saved cases
  2. Select one to load its variable values
  3. Execute and compare results
  4. Create observations for regression testing

Multi-Turn Conversations

Test conversational prompts with multiple turns:
  1. Execute initial prompt: Run with the user’s first message.
  2. Click Add Turn: Add the assistant response and a new user message.
  3. Execute again: Continue the conversation.
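
Conceptually, each added turn extends the message history before the next execution; a minimal sketch, assuming a role-tagged message format:

```python
# Hypothetical message history; the Studio's actual format may differ.
history = [
    {"role": "user", "content": "What plans do you offer?"},
]

# After the first execution, "Add Turn" appends the assistant's reply
# plus your next user message, and you execute again.
history += [
    {"role": "assistant", "content": "We offer Basic and Pro plans."},
    {"role": "user", "content": "What does Pro add?"},
]
```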
Useful for testing:
  • Follow-up question handling
  • Context retention
  • Conversation flow

Tool Calling

Test prompts with tools (function calling):
  1. Configure tools: Attach schemas to your prompt
  2. Execute: The model may return tool calls
  3. Provide results: Enter mock tool results
  4. Continue: See how the model uses the results
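
For example, the tool schema attached in step 1 is typically a JSON-Schema-style definition, and the mock result in step 3 is whatever value the real tool would have returned; a sketch under those assumptions (the tool name and fields are hypothetical):

```python
# Hypothetical tool definition in a JSON-Schema style; the exact format
# the Studio expects is an assumption here.
get_weather = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# If the model returns a call like get_weather(city="Paris"),
# you might enter a mock result such as:
mock_result = {"temperature_c": 18, "conditions": "partly cloudy"}
```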

Best Practices

  • Try empty inputs, very long inputs, and unusual characters.
  • Test the same prompt with different models to compare quality and cost.
  • Build a library of test cases for regression testing.
  • Always test prompt changes in Studio before committing.
  • Import from logs to test with production-like inputs.

Next Steps