context engineering pipeline — from pattern to response
The core pipeline: a Pattern builds a structured Context,
which assembles into a prompt and executes via any LLM provider.
The same Context can be exported to LangChain, CrewAI, LlamaIndex, or any framework.
Pattern: 88 cognitive templates, each defining a research-backed methodology (RootCauseAnalyzer, DataAnalyzer, CodeReviewer, and so on).
build_context(): assembles guidance, directive, constraints, and data into a Context object; accepts output_format for presentation hints.
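As a rough illustration of the presentation hint, a call might look like the sketch below; DataAnalyzer and the output_format keyword come from this section, while the hint value is an assumed example and the elided arguments are left as-is.

ctx = DataAnalyzer().build_context(
    data_description=...,              # user-supplied data summary (kept elided)
    goal=...,                          # what the analysis should answer (kept elided)
    output_format="bulleted summary",  # presentation hint; value is illustrative only
)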
Context: the structured prompt carrier. It holds guidance (role/rules), directive (task), constraints (guardrails), and data (user inputs), assembled in a research-backed order.
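As a mental model only (not the library's actual class definition), the four fields can be pictured like this:

from dataclasses import dataclass, field

@dataclass
class Context:
    # Illustrative sketch: field names mirror the description above,
    # not the real implementation.
    guidance: str                                          # role / rules
    directive: str                                         # the task
    constraints: list[str] = field(default_factory=list)   # guardrails
    data: dict = field(default_factory=dict)               # user inputs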
Prompt assembly: renders the Context into the final prompt string following primacy/recency research: role first, task last, reasoning material in the middle.
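A minimal sketch of that ordering, assuming the simplified Context fields shown above; the library's actual renderer is certainly richer.

def assemble_prompt(ctx) -> str:
    # Primacy/recency ordering: role first, supporting material in the middle,
    # and the task last so it is freshest when generation begins.
    parts = [
        ctx.guidance,                # role / rules (primacy)
        "\n".join(ctx.constraints),  # guardrails
        str(ctx.data),               # user inputs
        ctx.directive,               # task (recency)
    ]
    return "\n\n".join(p for p in parts if p)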
Exporters: to_langchain(), to_llamaindex(), to_crewai(), to_openai(), to_markdown(), and to_json() ship the same Context anywhere.
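The exporters are called on the same Context; only the method names below come from this section, and what each returns depends on the target framework.

lc_input = ctx.to_langchain()   # hand off to a LangChain pipeline
oa_input = ctx.to_openai()      # feed the OpenAI client
print(ctx.to_markdown())        # human-readable rendering
print(ctx.to_json())            # serialized for storage or transport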
Response: the LLM's reply, exposing .response (text), .usage (tokens), and .metadata (model, latency). The .response text is always a plain string; its format depends on the presentation hint in the prompt.
analyzer = DataAnalyzer()
ctx = analyzer.build_context(data_description=..., goal=..., intent="executive")
resp = ctx.execute(provider="openai")
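Reading the result uses the Response fields described earlier; the printed values are run-dependent.

print(resp.response)   # the reply text
print(resp.usage)      # token accounting
print(resp.metadata)   # e.g. model name and latency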