mycontext Architecture

The context engineering pipeline: from pattern to response

The core pipeline: a Pattern builds a structured Context, which assembles into a prompt and executes via any LLM provider. The same Context can be exported to LangChain, CrewAI, LlamaIndex, or any framework.

```mermaid
flowchart TD
    subgraph input_layer [Input Layer]
        P["Pattern<br/>88 cognitive templates"]
        BC["build_context()<br/>+ output_format"]
    end
    subgraph context_layer [Context Object]
        CTX["Context<br/>guidance + directive<br/>+ constraints + data"]
        ASM["assemble()<br/>research-backed<br/>prompt flow"]
    end
    subgraph export_layer [Export Layer]
        LC["to_langchain()"]
        LI["to_llamaindex()"]
        CR["to_crewai()"]
        OAI["to_openai()"]
        MK["to_markdown()"]
        JS["to_json()"]
    end
    subgraph exec_layer [Execution Layer]
        EX["execute()<br/>provider + model"]
        LP["LiteLLMProvider<br/>OpenAI, Anthropic,<br/>Google, etc."]
    end
    subgraph output_layer [Response]
        PR["ProviderResponse<br/>.response (string)<br/>.usage, .metadata"]
    end
    P -->|"problem, depth,<br/>output_format"| BC
    BC --> CTX
    CTX --> ASM
    ASM --> EX
    ASM --> LC
    ASM --> LI
    ASM --> CR
    ASM --> OAI
    ASM --> MK
    ASM --> JS
    EX --> LP
    LP --> PR
```
Pattern (input)

88 cognitive templates. Each defines a research-backed methodology: RootCauseAnalyzer, DataAnalyzer, CodeReviewer, etc.

build_context() (input)

Assembles guidance, directive, constraints, and data into a Context object. Accepts output_format for presentation hints.
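The four-part shape that build_context() produces can be sketched with plain dataclasses. This is an illustration only: the field names follow the parts documented here (guidance, directive, constraints, data), but the real mycontext classes and signatures may differ.

```python
from dataclasses import dataclass, field

# Illustrative sketch, not the real mycontext types: fields mirror the
# documented Context parts.
@dataclass
class Context:
    guidance: str                                    # role and rules
    directive: str                                   # the task itself
    constraints: list = field(default_factory=list)  # guardrails
    data: dict = field(default_factory=dict)         # user inputs
    output_format: str = ""                          # presentation hint

def build_context(problem: str, output_format: str = "") -> Context:
    # A hypothetical pattern's build step: map caller inputs onto the parts.
    return Context(
        guidance="You are a careful analyst.",
        directive=f"Analyze the following problem: {problem}",
        constraints=["cite evidence", "state uncertainty"],
        data={"problem": problem},
        output_format=output_format,
    )

ctx = build_context("Q3 churn spike", output_format="bullet summary")
```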

Context (core)

The structured prompt carrier. Contains guidance (role/rules), directive (task), constraints (guardrails), and data (user inputs). Research-backed assembly order.

assemble() (core)

Renders the Context into a final prompt string following primacy/recency research: role first, task last, reasoning in the middle.
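The ordering described can be sketched as a plain function. This is an assumption about the flow, not the library's actual implementation:

```python
# Sketch of the primacy/recency order described above: guidance (role) first,
# constraints and data as the middle reasoning material, directive (task) last.
def assemble(guidance: str, constraints: list, data: dict, directive: str) -> str:
    parts = [guidance]                                  # primacy: role up front
    parts += [f"Constraint: {c}" for c in constraints]  # middle: guardrails
    parts += [f"{k}: {v}" for k, v in data.items()]     # middle: user inputs
    parts.append(directive)                             # recency: task last
    return "\n\n".join(parts)

prompt = assemble(
    guidance="You are a code reviewer.",
    constraints=["no style nits"],
    data={"diff": "+print('hi')"},
    directive="Review this diff for bugs.",
)
```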

Export Methods (export)

to_langchain(), to_llamaindex(), to_crewai(), to_openai(), to_markdown(), to_json() — ship the same Context anywhere.
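As a rough illustration of what two of these exporters might produce (the actual output of mycontext's methods may differ), the guidance could map to a system message and the assembled remainder to a user message:

```python
import json

# Hypothetical exporter shapes; the real to_openai()/to_json() may differ.
def to_openai(guidance: str, assembled_body: str) -> list:
    # Chat-completion message list: role/rules as system, the rest as user.
    return [
        {"role": "system", "content": guidance},
        {"role": "user", "content": assembled_body},
    ]

def to_json(guidance: str, directive: str, constraints: list, data: dict) -> str:
    # Plain serialization of the four Context parts.
    return json.dumps(
        {"guidance": guidance, "directive": directive,
         "constraints": constraints, "data": data},
        indent=2,
    )

messages = to_openai("You are a reviewer.", "Review this diff.")
```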

ProviderResponse (output)

The LLM's reply: .response (text), .usage (tokens), .metadata (model, latency). Always a string — format depends on the prompt hint.
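The documented response shape can be sketched as a dataclass. Field names come from the description above; the concrete types and example values are assumptions:

```python
from dataclasses import dataclass

# Sketch of the documented fields; the real mycontext type may differ.
@dataclass
class ProviderResponse:
    response: str   # always a string; structure follows the prompt's format hint
    usage: dict     # token accounting, e.g. {"total_tokens": 120}
    metadata: dict  # e.g. {"model": "gpt-4o", "latency_ms": 840}

r = ProviderResponse(
    response="Summary: churn rose 4% in Q3.",
    usage={"total_tokens": 120},
    metadata={"model": "gpt-4o"},
)
```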

Three lines to insight:

```python
analyzer = DataAnalyzer()
ctx = analyzer.build_context(data_description=..., goal=..., intent="executive")
resp = ctx.execute(provider="openai")
```