1. Author prompts in a rich editor. Build prompts with typed variables, structured content, and Git-like versioning.
2. Generate type-safe models. Run codegen to get Pydantic models with autocomplete and validation for your LLM invocations (a sketch of what a generated model can look like follows this list).
3. Use prompts in your code. Fetch prompts, inject your context, and get provider-specific dialects prepared for you, with no more shoehorning.
4. Observe in production. Log traces and review them in the same editor you authored in (a minimal trace-logging sketch also follows the list).
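To make steps 2 and 3 concrete, here is a minimal sketch of what a generated prompt model and its call site could look like. The class name, variable fields, prompt template, and message shape below are hypothetical illustrations, not the actual codegen output or SDK surface; the only real dependency is Pydantic.

```python
# Hypothetical sketch of a codegen-generated prompt model and its call site.
# Names below (SupportReplyVariables, render_messages) are illustrative only.
from pydantic import BaseModel


class SupportReplyVariables(BaseModel):
    """Typed variables for a hypothetical 'support-reply' prompt."""
    customer_name: str
    ticket_summary: str
    tone: str = "friendly"


def render_messages(template: str, variables: SupportReplyVariables) -> list[dict]:
    """Fill the prompt template with validated variables and return
    provider-ready chat messages (OpenAI-style dicts as one example dialect)."""
    return [
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": template.format(**variables.model_dump())},
    ]


if __name__ == "__main__":
    # Validation fails loudly if a required variable is missing or mistyped.
    variables = SupportReplyVariables(
        customer_name="Ada",
        ticket_summary="Cannot reset password",
    )
    template = "Write a {tone} reply to {customer_name} about: {ticket_summary}"
    print(render_messages(template, variables))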
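```

For step 4, a minimal sketch of recording a trace, assuming a trace is just a structured record of the prompt name, its inputs, and the model output. The field names and the local JSONL file are stand-ins for whatever the real observability backend expects.

```python
# Hypothetical trace-logging sketch; field names are assumptions, not the
# product's schema. A real setup would send records to the backend instead
# of appending them to a local file.
import json
import time
import uuid


def log_trace(prompt_name: str, inputs: dict, output: str,
              path: str = "traces.jsonl") -> None:
    """Append one trace record as a JSON Lines entry."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt_name,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_trace("support-reply", {"customer_name": "Ada"}, "Hi Ada, ...")
```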
Quick Example
For AI tools: This documentation is available as llms.txt and llms-full.txt for LLM consumption.