Web Publishing Infrastructure: 3 Approaches

How HTTP requests reach the cogent's static files and dynamic handlers

All three share the same CogOS capability layer — the difference is the serving infrastructure behind Cloudflare.

A. Single Lambda + Cloudflare Cache (Recommended)

Request Flow
Browser → Cloudflare (Access + CDN cache)
  ↓ cache miss
  → Lambda Function URL (single entry point)
      ├── /api/* → channel bridge → handler process
      └── /*     → read from file store (Aurora) → serve
  ↑ Cache-Control headers → Cloudflare caches static

Pros

  • Simplest infra — one Lambda, no S3 sync
  • Scales to zero, no long-lived service
  • File store is single source of truth
  • Cloudflare cache handles the read load

Cons

  • Every cache miss costs a Lambda invocation plus an Aurora read
  • Cold start latency (~200-500ms)
  • Needs a Cloudflare cache purge on file updates
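The dispatch logic for the single entry point is small. A minimal sketch, in Python for clarity; `route`, the `file_store` mapping, and the `bridge` callable are all illustrative stand-ins, not existing APIs:

```python
# Sketch of approach A's single-Lambda dispatch: /api/* goes to the
# channel bridge, everything else is read from the file store and
# returned with Cache-Control so Cloudflare can cache it at the edge.
from dataclasses import dataclass

@dataclass
class Response:
    body: str
    headers: dict

def route(path: str, file_store: dict, bridge) -> Response:
    if path.startswith("/api/"):
        # Dynamic: forward to the handler process, never edge-cached.
        return Response(bridge(path), {"Cache-Control": "no-store"})
    # Static: read from the file store (Aurora); Cloudflare caches the
    # response, so only cache misses reach this code path at all.
    body = file_store.get(path, "404: not found")
    return Response(body, {"Cache-Control": "public, max-age=300"})
```

The `max-age` value is a placeholder; the real trade-off is between Aurora read load on misses and how stale a published file may be before a purge.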

B. S3 for Static + Lambda for Dynamic

Request Flow
Browser → Cloudflare (Access + routing)
  ├── /api/* → Lambda → channel bridge → handler process
  └── /*     → S3 bucket (static assets)
                 ↑ synced from file store on write

Pros

  • S3 is fast + cheap for static
  • No Aurora hit for static files
  • Well-understood pattern

Cons

  • File store → S3 sync adds complexity
  • Two sources of truth (drift risk)
  • More infra to provision per cogent
  • Cloudflare needs split routing rules
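The drift risk above is concrete: the file store and S3 can disagree after a failed sync. One way to keep them reconciled is to diff by content hash on each write. `sync_plan` is a hypothetical helper, sketched under the assumption that both sides expose a path → hash mapping (e.g. S3 ETags):

```python
# Sketch of approach B's write-time sync: the file store is the single
# source of truth, and S3 is reconciled against it by content hash.
def sync_plan(file_store: dict, s3: dict) -> tuple[list, list]:
    """Return (paths to upload, paths to delete) so S3 matches the
    file store. Both arguments map path -> content hash."""
    uploads = [p for p, h in file_store.items() if s3.get(p) != h]
    deletes = [p for p in s3 if p not in file_store]  # drift cleanup
    return uploads, deletes
```

Running this on every write (or periodically as a repair job) bounds how long the two sources of truth can drift, at the cost of the extra listing/hashing machinery this approach forces on each cogent.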

C. Cloudflare Worker at the Edge

Request Flow
Browser → Cloudflare Worker (routing + cache logic)
  ├── static? → check KV/R2 cache → serve
  │     └── miss → Lambda → file store → cache + serve
  └── /api/* → Lambda → channel bridge

Pros

  • Edge-level caching + routing (fastest)
  • Full control over cache behavior
  • Could eliminate Lambda for static entirely

Cons

  • New platform dependency (CF Workers)
  • More moving parts to manage
  • Worker per cogent, or complex dispatch
  • Overengineered for current scale
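The Worker's cache-then-origin logic, expressed in Python for clarity (a real Worker would be written in JS/TS against the Workers runtime). The `kv` dict stands in for KV/R2 and `origin` for the Lambda fetch; both are illustrative:

```python
# Sketch of approach C's edge logic: dynamic requests always forward
# to Lambda; static requests are served from the edge cache, with a
# miss fetched once from the origin and cached for subsequent hits.
def serve(path: str, kv: dict, origin) -> str:
    if path.startswith("/api/"):
        return origin(path)      # dynamic: forward to channel bridge
    if path in kv:
        return kv[path]          # static cache hit: served at the edge
    body = origin(path)          # miss: one trip to Lambda / file store
    kv[path] = body              # populate edge cache for next time
    return body
```

Note this is exactly where the "full control over cache behavior" pro comes from: invalidation, TTLs, and stale-while-revalidate would all live in this function rather than in Cloudflare's CDN defaults.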