This guide covers the complete development workflow with Moxn—from creating prompts in the web app to deploying type-safe LLM applications with full observability.
Each prompt contains messages with roles (system, user, assistant). Insert variables using the /variable slash command in the editor, which opens a property editor.
*Screenshot: Message editor with variables displayed as typed blocks*
Variables are typed properties—not template strings. When you insert a variable:
1. Type `/variable` in the message editor
2. The Property Editor opens, where you configure the variable
3. Set the name, type (string, array, object, image-url, etc.), and an optional schema reference
4. The variable appears as a styled block in your message
*Screenshot: Property editor for configuring variable types*
Variables automatically sync to the prompt’s Input Schema—the typed interface your code will use:
*Screenshot: Prompt detail showing the Input Schema derived from variables*
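As an illustration, a prompt whose variables include `company_name`, `customer_name`, `query`, and a `search_results` array might expose an Input Schema shaped roughly like the following JSON Schema. This is a hypothetical sketch of the shape; the exact representation Moxn uses may differ:

```json
{
  "type": "object",
  "properties": {
    "company_name": { "type": "string" },
    "customer_name": { "type": "string" },
    "query": { "type": "string" },
    "search_results": {
      "type": "array",
      "items": { "$ref": "#/definitions/SearchResult" }
    }
  },
  "required": ["company_name", "customer_name", "query", "search_results"]
}
```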
Generate Pydantic models from your prompt schemas:
```python
import asyncio

from moxn import MoxnClient


async def generate():
    async with MoxnClient() as client:
        await client.generate_task_models(
            task_id="your-task-id",
            branch_name="main",
            output_dir="./generated_models",
        )


asyncio.run(generate())
```
This creates a Python file with models for each prompt:
```python
# generated_models/customer_support_bot_models.py
class ProductHelpInput(RenderableModel):
    """Input schema for Product Help prompt."""

    company_name: str
    customer_name: str
    query: str
    search_results: list[SearchResult]

    def render(self, **kwargs) -> dict[str, str]:
        return {
            "company_name": self.company_name,
            "customer_name": self.customer_name,
            "query": self.query,
            "search_results": json.dumps(
                [r.model_dump() for r in self.search_results]
            ),
        }
```
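The key contract is `render()`: every field is converted to a string so it can be substituted into the prompt's message blocks. The sketch below illustrates that contract standalone, using simplified stand-ins for `RenderableModel` and `SearchResult` (the real classes come from Moxn and your generated module, and the field values here are made up):

```python
import json
from dataclasses import dataclass


@dataclass
class SearchResult:
    """Simplified stand-in for the generated SearchResult type."""
    title: str
    url: str

    def model_dump(self) -> dict:
        return {"title": self.title, "url": self.url}


@dataclass
class ProductHelpInput:
    """Simplified stand-in mirroring the generated input model."""
    company_name: str
    customer_name: str
    query: str
    search_results: list

    def render(self, **kwargs) -> dict[str, str]:
        # Scalars pass through; structured fields are JSON-encoded
        # so the rendered value is always a string.
        return {
            "company_name": self.company_name,
            "customer_name": self.customer_name,
            "query": self.query,
            "search_results": json.dumps(
                [r.model_dump() for r in self.search_results]
            ),
        }


inp = ProductHelpInput(
    company_name="Acme",
    customer_name="Jo",
    query="How do I reset my password?",
    search_results=[SearchResult("Password reset", "https://example.com/reset")],
)
rendered = inp.render()
```

Because `render()` always returns `dict[str, str]`, downstream templating never needs to special-case field types.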
Codegen is optional. You can always define models manually—especially useful when iterating on prompts in a notebook or during early development. Codegen shines when you want to ensure your models stay in sync with your prompt schemas, and integrates well into CI/CD pipelines (similar to database migrations or OpenAPI client generation).
Using prompts without codegen
You don’t need generated models to use Moxn. Define your input model manually:
```python
from moxn.types.base import RenderableModel


class MyInput(RenderableModel):
    query: str
    context: str

    def render(self, **kwargs) -> dict[str, str]:
        return {"query": self.query, "context": self.context}


# Use it directly
session = await client.create_prompt_session(
    prompt_id="...",
    session_data=MyInput(query="Hello", context="Some context"),
)
This is often the fastest way to iterate when you’re actively editing prompts and experimenting with different variable structures.
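When iterating like this, it can help to sanity-check `render()` output locally before creating a session. The sketch below reuses the `MyInput` shape from above; `RenderableModel` is replaced with a minimal stub so the snippet runs without the SDK installed (the real base class comes from `moxn.types.base`):

```python
class RenderableModel:
    """Minimal stand-in for moxn.types.base.RenderableModel."""

    def __init__(self, **data):
        for key, value in data.items():
            setattr(self, key, value)


class MyInput(RenderableModel):
    query: str
    context: str

    def render(self, **kwargs) -> dict[str, str]:
        return {"query": self.query, "context": self.context}


# Inspect the rendered variables before wiring up a prompt session.
inp = MyInput(query="Hello", context="Some context")
rendered = inp.render()
```

If `rendered` contains the keys and string values you expect, the same model will slot into `create_prompt_session` unchanged.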
The day-to-day iteration loop looks like this:

1. Observe: Review traces to see how prompts perform in production
2. Branch: Create a branch for experimentation
3. Edit: Modify prompts in the web app
4. Test: Use branch access to test changes
5. Commit: When satisfied, commit the branch and update production
```python
# Test a branch
session = await client.create_prompt_session(
    prompt_id="...",
    branch_name="experiment-new-tone",  # Your feature branch
)

# Deploy to production
session = await client.create_prompt_session(
    prompt_id="...",
    commit_id="new-commit-id",  # After committing the branch
)
```