What We’ll Build
A three-prompt pipeline that:
- Classifies documents into categories (contract, invoice, report, etc.)
- Extracts structured entities from the document
- Generates a summary report
Step 1: Create a Task
Navigate to the dashboard and click Create Task. Name it `document-analysis-pipeline` and give it a description.
A Task is a container for related prompts—like a Git repository for your AI features. All prompts within a task share schemas and are versioned together.
Step 2: Create the Classifier Prompt
Add a System Message
Create a new prompt called `document-classifier`. Add a system message with the classification instructions:
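The original wording isn't reproduced in this excerpt; a minimal sketch of what the system message could say:

```
You are a document classification assistant. Read the document supplied in
the user message and decide which single category best describes it:
contract, invoice, report, memo, or email.
```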
Add a User Message with Variables
Create a user message. Instead of hardcoding content, we'll use a variable to inject the document at runtime. Type `/` in the editor to open the slash command menu:

The slash command menu showing formatting and variable options

The property editor for configuring variable types
- Property Name: `document`
- Description: "The document content to classify"
- Type: String

Available property types including String, Number, Object, Array, and references

Variable block inserted in the message editor
Step 3: Define the Output Schema
Navigate to the Schemas tab. You'll see:
- Input Schemas: Auto-generated from prompt variables
- User-Defined Schemas: Custom schemas for structured outputs

The Schemas tab showing Input Schemas and User-Defined Schemas
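For example, the `document` variable defined in the classifier becomes an input schema. After code generation (Step 4) it might surface as a model along these lines; the class name `DocumentClassifierInput` and the `Field` metadata are assumptions:

```python
from pydantic import BaseModel, Field


# Hypothetical shape of the auto-generated input model for the
# document-classifier prompt; the real class name and module depend
# on the platform's code generation.
class DocumentClassifierInput(BaseModel):
    document: str = Field(description="The document content to classify")
```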
Create an Enum Schema
Click Create Schema and name it `ClassificationResult`. Add a property `document_type` with allowed values to create an enum:

Configuring enum values in the Allowed Values field
`contract`, `invoice`, `report`, `memo`, `email`
The validation message confirms: “Must be one of: contract, invoice, report, memo, email”
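Once models are generated in the next step, this enum schema could map to Python roughly as follows; the class names `DocumentType` and `ClassificationResult` come from the summary table at the end, but the exact generated layout is an assumption:

```python
from enum import Enum

from pydantic import BaseModel


# Sketch of what the generated enum and result model might look like.
class DocumentType(str, Enum):
    CONTRACT = "contract"
    INVOICE = "invoice"
    REPORT = "report"
    MEMO = "memo"
    EMAIL = "email"


class ClassificationResult(BaseModel):
    document_type: DocumentType
```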
Step 4: Generate Pydantic Models
Run code generation to create typed Python models; the output should look much like the `DocumentType` and `ClassificationResult` sketches above.
Step 5: Use in Your Application
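The platform's client SDK isn't shown in this excerpt, so the sketch below assumes a hypothetical `run_prompt` helper and generated-module path; swap in the imports and calls your setup actually provides:

```python
# Assumptions: the generated models live in a module like the one below,
# and run_prompt is a stand-in for whatever client call executes a prompt.
from generated.document_analysis_pipeline import ClassificationResult
from my_llm_client import run_prompt  # hypothetical helper


def classify(document_text: str) -> ClassificationResult:
    raw = run_prompt(
        task="document-analysis-pipeline",
        prompt="document-classifier",
        variables={"document": document_text},
    )
    # Validate the model output against the generated schema.
    return ClassificationResult.model_validate(raw)


result = classify("This agreement is made between Acme Corp and ...")
print(result.document_type)  # e.g. DocumentType.CONTRACT
```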
Extending the Pipeline
Entity Extractor with Object Schema
Create a second prompt, `entity-extractor`, with an object schema (`ExtractedEntities`) for the extracted entities:
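The exact field list isn't reproduced in this excerpt; a plausible sketch of the `ExtractedEntities` object schema, with the nested fields as illustrative assumptions:

```python
from pydantic import BaseModel


# Illustrative nested fields only; the tutorial's actual schema may differ.
class Party(BaseModel):
    name: str
    role: str


class ExtractedEntities(BaseModel):
    parties: list[Party]
    dates: list[str]
    amounts: list[float]
```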
Report Generator with Schema Reference
Create a third prompt, `report-generator`, that references outputs from earlier prompts:
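Schema references let one schema embed others. How `ReportGeneratorInput` might compose the earlier schemas is sketched below; the exact composition is an assumption, and `ClassificationResult` and `ExtractedEntities` are the sketches from the previous steps:

```python
from pydantic import BaseModel


class ReportGeneratorInput(BaseModel):
    classification: ClassificationResult  # output of document-classifier
    entities: ExtractedEntities           # output of entity-extractor
    document: str                         # original document text
```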
Complete Pipeline
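Putting the three prompts together, using the same hypothetical `run_prompt` helper and the generated models from the sketches above:

```python
def analyze_document(document_text: str) -> str:
    """Run the pipeline end to end: classify, extract entities, generate a report."""
    classification = ClassificationResult.model_validate(
        run_prompt(
            task="document-analysis-pipeline",
            prompt="document-classifier",
            variables={"document": document_text},
        )
    )
    entities = ExtractedEntities.model_validate(
        run_prompt(
            task="document-analysis-pipeline",
            prompt="entity-extractor",
            variables={"document": document_text},
        )
    )
    # The report generator consumes the structured outputs of the first two steps.
    return run_prompt(
        task="document-analysis-pipeline",
        prompt="report-generator",
        variables={
            "classification": classification.model_dump(),
            "entities": entities.model_dump(),
            "document": document_text,
        },
    )
```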
Key Concepts Demonstrated
| Feature | Where Used |
|---|---|
| Variables | document variable in classifier input |
| Enums | DocumentType with allowed values |
| Objects | ExtractedEntities with nested fields |
| Schema References | ReportGeneratorInput using other schemas |
| Code Generation | Type-safe Pydantic models |
| Telemetry | Spans and logging for observability |
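The telemetry row isn't illustrated elsewhere in this excerpt; below is a generic OpenTelemetry-style sketch of wrapping one pipeline step in a span, reusing the `classify` helper from Step 5 (the platform may ship its own tracing helpers instead):

```python
import logging

from opentelemetry import trace

logger = logging.getLogger("document_analysis_pipeline")
tracer = trace.get_tracer(__name__)


def classify_with_telemetry(document_text: str) -> ClassificationResult:
    # Wrap the classifier call in a span so the step shows up in traces.
    with tracer.start_as_current_span("document-classifier") as span:
        result = classify(document_text)
        span.set_attribute("document_type", result.document_type.value)
        logger.info("Classified document as %s", result.document_type.value)
        return result
```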