This page documents the key types you’ll encounter when using the Moxn SDK.

Provider

Enum representing supported LLM providers.
from moxn.types.content import Provider

class Provider(Enum):
    ANTHROPIC = "anthropic"
    OPENAI_CHAT = "openai_chat"
    OPENAI_RESPONSES = "openai_responses"
    GOOGLE_GEMINI = "google_gemini"
    GOOGLE_VERTEX = "google_vertex"
Usage:
await client.log_telemetry_event_from_response(
    session, response, Provider.ANTHROPIC
)

RenderableModel

Protocol for session data models generated by codegen.
from moxn.types.base import RenderableModel

@runtime_checkable
class RenderableModel(Protocol):
    moxn_schema_metadata: ClassVar[MoxnSchemaMetadata]

    def model_dump(self, ...) -> Any: ...
    def render(self, **kwargs: Any) -> Any: ...
Generated models implement this protocol:
class QueryInput(BaseModel):
    """Generated by Moxn codegen."""

    moxn_schema_metadata: ClassVar[MoxnSchemaMetadata] = MoxnSchemaMetadata(
        schema_id=UUID("..."),
        prompt_id=UUID("..."),
        task_id=UUID("...")
    )

    query: str
    context: str | None = None

    def render(self, **kwargs) -> dict[str, str]:
        return {
            "query": self.query,
            "context": self.context or ""
        }
The render() method transforms typed data into a flat dict[str, str] for variable substitution.

PromptTemplate

A prompt template fetched from the Moxn API.
class PromptTemplate:
    id: UUID                           # Stable anchor ID
    name: str                          # Prompt name
    description: str | None            # Optional description
    task_id: UUID                      # Parent task ID
    messages: list[Message]            # Ordered messages
    input_schema: Schema | None        # Auto-generated input schema
    completion_config: CompletionConfig | None  # Model settings
    tools: list[SdkTool] | None        # Tools and structured outputs
    branch_id: UUID | None             # Branch (if branch access)
    commit_id: UUID | None             # Commit (if commit access)
Key Properties:
| Property | Type | Description |
| --- | --- | --- |
| id | UUID | Stable anchor ID that never changes |
| name | str | Human-readable prompt name |
| messages | list[Message] | The message sequence |
| input_schema | Schema \| None | Auto-generated from variables |
| completion_config | CompletionConfig \| None | Model and parameter settings |
| function_tools | list[SdkTool] | Tools configured for function calling |
| structured_output_schema | SdkTool \| None | Schema for structured output |

Task

A task containing prompts and schemas.
class Task:
    id: UUID                          # Stable anchor ID
    name: str                         # Task name
    description: str | None           # Optional description
    prompts: list[PromptTemplate]     # All prompts in task
    definitions: dict[str, Any]       # Schema definitions
    branches: list[Branch]            # All branches
    last_commit: Commit | None        # Latest commit info
    branch_id: UUID | None            # Current branch
    commit_id: UUID | None            # Current commit

Message

A message within a prompt.
class Message:
    id: UUID                          # Stable anchor ID
    name: str                         # Message name
    description: str | None           # Optional description
    role: MessageRole                 # system, user, assistant
    author: Author                    # HUMAN or MACHINE
    blocks: list[list[ContentBlock]]  # 2D array of content blocks
    task_id: UUID                     # Parent task ID
Roles:
class MessageRole(str, Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"

ParsedResponse

Normalized LLM response from any provider.
class ParsedResponse:
    provider: Provider                # Which provider
    candidates: list[ParsedResponseCandidate]  # Response candidates
    stop_reason: StopReason           # Why generation stopped
    usage: TokenUsage                 # Token counts
    model: str | None                 # Model used
    raw_response: dict                # Original response

ParsedResponseCandidate

A single response candidate.
class ParsedResponseCandidate:
    content_blocks: list[TextContent | ToolCall | ThinkingContent | ReasoningContent]
    metadata: ResponseMetadata
Content blocks preserve order, which is important for interleaved thinking/text/tool sequences.

Content Block Types

class TextContent(BaseModel):
    text: str

class ToolCall(BaseModel):
    id: str
    name: str
    arguments: dict[str, Any] | str | None

class ThinkingContent(BaseModel):
    thinking: str  # For Claude extended thinking

class ReasoningContent(BaseModel):
    summary: str   # For OpenAI o1/o3 reasoning

StopReason

class StopReason(str, Enum):
    END_TURN = "end_turn"
    MAX_TOKENS = "max_tokens"
    TOOL_CALL = "tool_call"
    CONTENT_FILTER = "content_filter"
    ERROR = "error"
    OTHER = "other"
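A typical consumer branches on the stop reason to decide whether to keep an agent loop running. The policy below is hypothetical, not part of the SDK:

```python
from enum import Enum

class StopReason(str, Enum):  # copied from the SDK definition above
    END_TURN = "end_turn"
    MAX_TOKENS = "max_tokens"
    TOOL_CALL = "tool_call"
    CONTENT_FILTER = "content_filter"
    ERROR = "error"
    OTHER = "other"

def should_continue(reason: StopReason) -> bool:
    # Hypothetical policy: continue on tool calls (run the tool and
    # re-prompt) or truncation (request a continuation).
    return reason in (StopReason.TOOL_CALL, StopReason.MAX_TOKENS)

print(should_continue(StopReason.TOOL_CALL))  # True
print(should_continue(StopReason.END_TURN))   # False
```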

TokenUsage

class TokenUsage(BaseModel):
    input_tokens: int | None = None
    completion_tokens: int | None = None
    thinking_tokens: int | None = None  # For extended thinking models

LLMEvent

Event logged to telemetry for LLM interactions.
class LLMEvent:
    promptId: UUID
    promptName: str
    taskId: UUID
    branchId: UUID | None
    commitId: UUID | None
    messages: list[Message]           # Messages sent
    provider: Provider
    rawResponse: dict                 # Original response
    parsedResponse: ParsedResponse    # Normalized response
    sessionData: RenderableModel | None  # Input data (typed)
    renderedInput: dict[str, str] | None  # Rendered variables (flat)
    attributes: dict[str, Any] | None    # Custom attributes
    isUncommitted: bool               # Whether from uncommitted state
    responseType: ResponseType        # Classification
    validationErrors: list[str] | None   # Schema validation errors

ResponseType

Classification of the response for UI rendering:
class ResponseType(str, Enum):
    TEXT = "text"
    TOOL_CALLS = "tool_calls"
    TEXT_WITH_TOOLS = "text_with_tools"
    STRUCTURED = "structured"
    STRUCTURED_WITH_TOOLS = "structured_with_tools"
    THINKING = "thinking"
    TEXT_WITH_THINKING = "text_with_thinking"
    THINKING_WITH_TOOLS = "thinking_with_tools"
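The variants combine three signals: whether the response contains text, tool calls, and/or thinking blocks. The function below is an illustrative derivation of that mapping, not the SDK's actual classification logic:

```python
# Illustrative only: deriving a classification string from which
# kinds of content blocks a candidate contains. The SDK's actual
# rules (e.g. for STRUCTURED variants) may differ.
def classify(has_text: bool, has_tools: bool, has_thinking: bool) -> str:
    if has_thinking and has_tools:
        return "thinking_with_tools"
    if has_thinking and has_text:
        return "text_with_thinking"
    if has_thinking:
        return "thinking"
    if has_text and has_tools:
        return "text_with_tools"
    if has_tools:
        return "tool_calls"
    return "text"

print(classify(has_text=True, has_tools=True, has_thinking=False))  # text_with_tools
```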

Span Types

Span

An active span for tracing.
class Span:
    context: SpanContext              # Context for propagation

    def set_attribute(self, key: str, value: Any) -> None:
        """Add searchable metadata to the span."""

SpanContext

Context that can be passed to child spans.
class SpanContext:
    trace_id: str
    span_id: str

MoxnTraceCarrier

Carrier for propagating trace context across services.
class MoxnTraceCarrier(BaseModel):
    traceparent: str         # W3C trace context
    tracestate: str | None
    moxn_metadata: dict      # Moxn-specific context
Usage for distributed tracing:
# Extract from current span
carrier = client.extract_context()

# Send to another service
await queue.put({"carrier": carrier.model_dump(mode="json"), "data": ...})

# In the receiving service
carrier = MoxnTraceCarrier.model_validate(message["carrier"])
async with client.span_from_carrier(carrier) as span:
    await process(message["data"])

Version Types

VersionRef

Reference to a specific version (branch or commit).
class VersionRef(BaseModel):
    branch_name: str | None = None
    commit_id: str | None = None
Exactly one of branch_name or commit_id must be provided.

BranchHeadResponse

Response from get_branch_head().
class BranchHeadResponse(BaseModel):
    branch_id: str
    branch_name: str
    task_id: UUID
    head_commit_id: str | None
    effective_commit_id: str          # The commit to use
    has_uncommitted_changes: bool
    last_committed_at: datetime | None
    is_default: bool

Branch

A branch reference.
class Branch(BaseModel):
    id: UUID
    name: str
    head_commit_id: UUID | None

Commit

A commit snapshot.
class Commit(BaseModel):
    id: str               # Commit SHA
    message: str
    created_at: datetime | None

Schema Types

MoxnSchemaMetadata

Metadata embedded in JSON schemas.
class MoxnSchemaMetadata(BaseModel):
    schema_id: UUID
    schema_version_id: UUID | None
    prompt_id: UUID | None    # None for task-level schemas
    prompt_version_id: UUID | None
    task_id: UUID
    branch_id: UUID | None
    commit_id: str | None
This metadata is included in the x-moxn-metadata field of exported JSON schemas and in generated Pydantic models.
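For example, the metadata can be read back out of an exported schema dict. The field names follow the model above; the surrounding schema shape and UUID values here are illustrative:

```python
# Hypothetical exported JSON schema carrying Moxn metadata under
# the x-moxn-metadata key, as described above.
exported_schema = {
    "title": "QueryInput",
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "x-moxn-metadata": {
        "schema_id": "00000000-0000-0000-0000-000000000001",
        "task_id": "00000000-0000-0000-0000-000000000002",
        "prompt_id": None,  # None for task-level schemas
    },
}

metadata = exported_schema.get("x-moxn-metadata", {})
print(metadata["schema_id"])
```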