Why Moxn?
Prompts are code, but they don’t have the tooling. When building AI applications, you face a familiar set of challenges:
- No dedicated tooling: Prompts are scattered across strings, YAML files, and spreadsheets, with no proper editor for structured content
- Version control friction: Prompts either live in git (coupled to deploys, painful for domain experts) or in config systems (schemaless, painful for engineers)
- No type safety: Input variables are stringly-typed, leading to runtime errors
- No observability: You can’t see what prompts actually ran in production
- No reuse: Want a shared system message across multiple agent surfaces? Good luck
- No collaboration: No shared workflows for reviewing prompts or debugging production traces
Core Architecture
Moxn separates content management from runtime execution:

Prompt Template
Messages, variables, and model config—stored in Moxn, versioned like code.
Prompt Session
Template + your runtime data, created in your application.
Invocation
A plain Python dict you pass directly to the provider SDK.
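To make the flow concrete, here is a minimal sketch of the three layers using plain Python dictionaries and the OpenAI SDK. The template content, variable names, and rendering step are illustrative assumptions rather than actual Moxn output; the point is how a managed template plus runtime data ends up as keyword arguments for a provider client.

```python
# Conceptual illustration only — not the Moxn SDK API. Shapes and names
# here are assumptions chosen to show template -> session -> invocation.

from openai import OpenAI

# 1. Prompt Template: messages, variables, and model config managed in Moxn
template = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a support triage assistant."},
        {"role": "user", "content": "Classify this ticket: {ticket_text}"},
    ],
}

# 2. Prompt Session: the template bound to data from your application
runtime_data = {"ticket_text": "My invoice is wrong."}
session_messages = [
    {**m, "content": m["content"].format(**runtime_data)}
    for m in template["messages"]
]

# 3. Invocation: a plain dict you pass directly to the provider SDK
payload = {"model": template["model"], "messages": session_messages}
response = OpenAI().chat.completions.create(**payload)
```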
Design Philosophy
Moxn Builds Payloads, You Own the Integration
The SDK produces standard Python dictionaries that you unpack directly into provider SDKs (see the sketch after this list). This means you can:
- Modify the payload before sending (add headers, override settings)
- Use new provider features without waiting for SDK updates
- Compose with other tools in your stack
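A rough sketch of what that ownership looks like in practice. The payload below is a hand-written stand-in for an invocation dict in OpenAI chat-completions shape; only the OpenAI client calls are real API.

```python
# Because the invocation is just a dict, you can tweak it before handing
# it to any provider SDK. The payload contents here are assumptions.

from openai import OpenAI

payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Summarize our refund policy as a JSON object."}
    ],
}

# Override settings or opt into newer provider features directly
payload["temperature"] = 0.2
payload["response_format"] = {"type": "json_object"}

client = OpenAI()
response = client.chat.completions.create(
    **payload,
    extra_headers={"X-Request-Source": "checkout-agent"},  # add headers per call
)
```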
Key Features
Rich Prompt Editor
A block-based editor with Mermaid diagrams, code blocks, XML documents, and
multimodal content. Author prompts and review traces in the same interface.
Git-Like Versioning
Branch, commit, and rollback prompts. Pin production to specific commits.
Review diffs before deploying.
Type-Safe Interfaces
Auto-generated Pydantic models ensure type safety from editor to runtime.
Get autocomplete and validation.
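For example, an auto-generated input model might look roughly like the hand-written Pydantic sketch below; the class and field names are assumptions for illustration.

```python
# Illustrative sketch of a typed prompt-input model, not generated Moxn code.

from pydantic import BaseModel, ValidationError

class TriagePromptInputs(BaseModel):
    ticket_text: str
    customer_tier: str = "standard"

# Valid inputs: autocomplete in your editor, validated at construction time
inputs = TriagePromptInputs(ticket_text="My invoice is wrong.")

# Invalid inputs fail before any LLM call is made
try:
    TriagePromptInputs()  # missing required field
except ValidationError as exc:
    print(exc)
```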
Full Observability
View traces and spans in the same rich editor. W3C Trace Context compatible,
with complete LLM event logging.
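As an illustration of what W3C Trace Context compatibility buys you, the sketch below uses the OpenTelemetry SDK to produce a standard traceparent around an LLM call. How Moxn consumes that context is assumed here; the identifiers themselves follow the W3C format regardless.

```python
# Requires opentelemetry-api and opentelemetry-sdk. The span name and header
# usage are illustrative; only the OpenTelemetry calls are real API.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("checkout-agent")

with tracer.start_as_current_span("llm.triage"):
    headers = {}
    TraceContextTextMapPropagator().inject(headers)  # adds the W3C `traceparent` header
    # headers["traceparent"] now carries trace_id/span_id in standard form,
    # e.g. "00-<trace_id>-<span_id>-01", so the LLM call can be correlated
    # with the prompts and spans you review in Moxn.
    print(headers["traceparent"])
```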

The Moxn editor with structured content blocks