Agent runtime for Cloudflare — memory, security, and MCP built in.
Building production AI agents requires five layers: compute and state, model calls, memory, security, and tool integration. Cloudflare solves layer 1. The AI SDK solves layer 2. Nobody solves layers 3, 4, and 5. Meridian does.
```typescript
import { MeridianAgent, tool } from "@aiconnai/core";
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";
import { z } from "zod";

export class MyAgent extends MeridianAgent {
  get config() {
    return {
      name: "my-agent",
      security: {
        policy: {
          defaultAction: "deny",
          rules: [{ domain: "api.example.com", action: "allow" }],
          blocklist: { privateNetworks: true, linkLocal: true, localhost: true },
        },
        auditLog: true,
      },
    };
  }

  protected defineTools() {
    return {
      search: tool({
        description: "Search the knowledge base",
        input: z.object({ query: z.string() }),
        execute: async ({ query }, ctx) => {
          // This fetch is egress-policy enforced and audit-logged
          const res = await ctx.fetch(
            `https://api.example.com/search?q=${encodeURIComponent(query)}`,
          );
          return res.json();
        },
      }),
    };
  }

  async onMessage(message, history, memoryContext) {
    return streamText({
      model: anthropic("claude-sonnet-4-20250514"),
      messages: history.map((m) => ({ role: m.role, content: m.content })),
      tools: this.tools,
      system: `You are a helpful assistant. ${memoryContext}`,
    });
  }
}
```
| Layer | Capability | Package |
|---|---|---|
| Agent | Durable Object base class with message persistence, WebSocket streaming, REST API | @aiconnai/core |
| Tools | Type-safe Zod-validated tools, auto-converted to AI SDK format | @aiconnai/core |
| Security | Default-deny egress policy, domain allowlists, metadata SSRF protection, rate limiting | @aiconnai/core |
| Audit | Every tool call logged to DO SQLite — input/output hashes, egress destinations, duration | @aiconnai/core |
| Scheduling | Cron, one-shot, delayed, and interval tasks via DO alarms + Cloudflare Workflows | @aiconnai/core |
| Observability | OpenTelemetry-compatible tracing for agent lifecycle, tool calls, model calls | @aiconnai/core |
| Memory | Engram hybrid search retrieval (direct/expanded/hybrid), automatic fact extraction, knowledge graph | @aiconnai/memory |
| MCP | Import tools from external MCP servers with schema security scanning; expose agent tools as MCP server | @aiconnai/mcp |
| UI | Svelte 5 reactive MeridianChat class with streaming, tool approval, auto-reconnection | @aiconnai/svelte |
| Server | One-line SvelteKit route wiring to agent Durable Objects | @aiconnai/svelte |
| CLI | meridian init (scaffolding), meridian scan (AgentShield), meridian memory (Engram), meridian mcp (discovery/scanning/health) | @aiconnai/cli |
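The Audit row above says every tool call is logged with input/output hashes rather than raw payloads. A minimal sketch of what that could look like — the `auditEntry` shape and the SHA-256-over-JSON scheme are assumptions for illustration, not the actual @aiconnai/core log format:

```typescript
import { createHash } from "node:crypto";

// Assumed scheme: hash the JSON serialization of a value with SHA-256.
function auditHash(value: unknown): string {
  return createHash("sha256").update(JSON.stringify(value)).digest("hex");
}

// Hypothetical audit entry: hashes instead of raw payloads, so secrets in
// tool inputs/outputs never land in the log at rest.
function auditEntry(toolName: string, input: unknown, output: unknown, durationMs: number) {
  return {
    toolName,
    inputHash: auditHash(input),
    outputHash: auditHash(output),
    durationMs,
    at: Date.now(),
  };
}
```

Storing hashes still lets you prove what was sent (recompute and compare) without retaining the data itself.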
```sh
# Scaffold a new project
npm create meridian my-agent
cd my-agent
npm install
npx wrangler dev

# Or clone this repo and run examples
bun install
bun run build
bun run test   # 163 tests across 11 suites
```
| Package | Version | Status | Tests |
|---|---|---|---|
| @aiconnai/core | 0.2.0 | Built | 73 passing (egress policy, cron, tracer, auth) |
| @aiconnai/memory | 0.2.0 | Built | 24 passing (retrieval pipeline, storage pipeline) |
| @aiconnai/mcp | 0.4.0 | Built | 28 passing (schema scanner, schema mapper) |
| @aiconnai/svelte | 0.1.0 | Built | 13 passing (server helpers) |
| @aiconnai/cli | 0.1.0 | Built | 25 passing (arg parsing, init scaffolding) |
| create-meridian | 0.1.0 | Built | — (npm create meridian my-agent) |
| Example | Version | What it demonstrates |
|---|---|---|
| chat-basic | v0.1 | Tools (weather, calc, time) + egress policy + audit logging + streaming |
| chat-memory | v0.2 | MBRAS Concierge + Engram memory + LLM fact extraction + knowledge graph |
| chat-scheduled | v0.5 | Cron scheduling + Cloudflare Workflows + morning brief + OpenTelemetry tracing |
```
SvelteKit app (adapter-cloudflare)
├── UI: @aiconnai/svelte (MeridianChat, reactive messages, tool approval)
├── Routes: +server.ts → agentHandler({ binding: "MY_AGENT" })
│
└── Agent: MeridianAgent (Durable Object)
    ├── Tools → Zod validation → guarded fetch → audit log
    │   ├── Local tools (defined in agent class)
    │   └── Remote tools ← @aiconnai/mcp (toolFromMCP, schema scanning)
    ├── Memory → @aiconnai/memory
    │   ├── Pre-generation: Engram hybrid search → context injection
    │   └── Post-generation: fact extraction → Engram storage
    ├── Scheduling → DO alarms → Cloudflare Workflows
    │   ├── Cron (recurring): "0 8 * * *" → MorningBriefWorkflow
    │   ├── One-shot: scheduler.at(timestamp, workflow)
    │   └── Interval: scheduler.every("5m", workflow)
    ├── Observability → MeridianTracer → OpenTelemetry spans
    ├── Messages → DO SQLite persistence
    ├── Models → AI SDK (any provider)
    ├── Security → egress policy engine (pure TS, V8-safe)
    └── MCP server → @aiconnai/mcp (expose tools to other agents)

CLI: @aiconnai/cli
├── meridian init → scaffold project
├── meridian scan → AgentShield binary (build-time only)
├── meridian memory → Engram search/stats
└── meridian mcp → discover/scan/health external MCP servers
```
Add memory to any agent with the withMemory() mixin:
```typescript
import { MeridianAgent } from "@aiconnai/core";
import { withMemory } from "@aiconnai/memory";

class MyAgent extends withMemory(MeridianAgent) {
  get memoryConfig() {
    return {
      engram: { endpoint: "https://engram.my-app.com" },
      retrieval: { mode: "hybrid", workspace: "my-app", topK: 5, minSalience: 0.7 },
      storage: { enabled: true, workspace: "my-app", tier: "permanent" },
    };
  }
}
```
Three retrieval modes are supported: direct, expanded, and hybrid.
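To make the hybrid mode concrete, here is a hypothetical sketch of how a hybrid retriever could merge direct-match results with query-expanded results. The mode names come from the docs above; the merge logic (dedupe by id, salience filter, top-K) is illustrative, not Engram's actual algorithm:

```typescript
type Memory = { id: string; text: string; salience: number };

// Illustrative hybrid merge: direct hits listed first so they win ties on
// duplicate ids, low-salience memories dropped, result sorted and truncated.
function mergeHybrid(
  direct: Memory[],
  expanded: Memory[],
  topK: number,
  minSalience: number,
): Memory[] {
  const seen = new Map<string, Memory>();
  for (const m of [...direct, ...expanded]) {
    if (m.salience < minSalience) continue; // respects minSalience from config
    if (!seen.has(m.id)) seen.set(m.id, m); // dedupe across the two result sets
  }
  return [...seen.values()]
    .sort((a, b) => b.salience - a.salience) // highest salience first
    .slice(0, topK);                         // respects topK from config
}
```

The `topK` and `minSalience` parameters line up with the `retrieval` block in `memoryConfig` above.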
Import tools from any MCP server with automatic security scanning:
```typescript
import { toolFromMCP } from "@aiconnai/mcp";

const tools = await toolFromMCP("https://mcp.example.com", {
  allowTools: ["search", "get_file"],
  egressPolicy: { allowDomains: ["api.example.com"] },
});
// Schema scanner runs BEFORE registration — critical findings block the import
```
Expose agent tools as an MCP server:
```typescript
import { handleMCPRequest } from "@aiconnai/mcp";

// In your agent's fetch() handler
if (path === "/mcp") return handleMCPRequest(request, this.toolRegistry);
```
```typescript
import { AgentScheduler } from "@aiconnai/core";

const scheduler = new AgentScheduler(ctx);

// Recurring: every day at 8 AM UTC
await scheduler.cron("morning-brief", "0 8 * * *", "MorningBriefWorkflow");

// One-shot: run once in 30 minutes
await scheduler.at("reminder", Date.now() + 30 * 60_000, "ReminderWorkflow");

// Fixed interval: every 5 minutes
await scheduler.every("health-check", 5 * 60_000, "HealthCheckWorkflow");
```
Tasks are stored in DO SQLite and dispatched via DO alarms to Cloudflare Workflows.
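A Durable Object can hold only one pending alarm, so a scheduler dispatching many tasks must always arm the alarm for the earliest due time. A hypothetical sketch of that selection step — the `Task` shape and `nextAlarm` helper are illustrative, not the AgentScheduler internals:

```typescript
// Assumed task shapes: one-shot tasks carry an absolute timestamp,
// interval tasks carry a period and the time of their last run.
type Task =
  | { kind: "at"; runAt: number }
  | { kind: "every"; intervalMs: number; lastRun: number };

// Pick the single alarm time a DO should set: the earliest due task.
function nextAlarm(tasks: Task[]): number | null {
  let next: number | null = null;
  for (const t of tasks) {
    const due = t.kind === "at" ? t.runAt : t.lastRun + t.intervalMs;
    if (next === null || due < next) next = due;
  }
  return next; // null when no tasks are pending — no alarm needed
}
```

When the alarm fires, due tasks are handed off (to a Workflow, in Meridian's design) and the alarm is re-armed from the remaining set.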
```typescript
get config() {
  return {
    security: {
      policy: {
        defaultAction: "deny",
        rules: [
          { domain: "api.openai.com", action: "allow" },
          { domain: "*.internal.corp", action: "deny" },
        ],
        blocklist: {
          privateNetworks: true, // 10.x, 172.16.x, 192.168.x
          linkLocal: true,       // 169.254.x (metadata SSRF)
          localhost: true,       // 127.0.0.1
        },
        rateLimit: {
          requestsPerMinute: 60,
          requestsPerHour: 500,
        },
      },
      auditLog: true,
      auditLogMaxEntries: 10000,
    },
  };
}
```
Blocklist checks run BEFORE rule matching and cannot be overridden. Every tool call is audit-logged to DO SQLite.
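The evaluation order matters: an allow rule can never punch a hole through the blocklist. A minimal sketch of that decision order, assuming the config shape above — this is illustrative, not the actual @aiconnai/core policy engine:

```typescript
type Rule = { domain: string; action: "allow" | "deny" };
type Policy = {
  defaultAction: "allow" | "deny";
  rules: Rule[];
  blocklist: { privateNetworks: boolean; linkLocal: boolean; localhost: boolean };
};

// Assumed blocklist semantics, matching the comments in the config above.
function isBlocked(host: string, b: Policy["blocklist"]): boolean {
  if (b.localhost && (host === "localhost" || host.startsWith("127."))) return true;
  if (b.linkLocal && host.startsWith("169.254.")) return true; // metadata SSRF
  if (
    b.privateNetworks &&
    (host.startsWith("10.") ||
      host.startsWith("192.168.") ||
      /^172\.(1[6-9]|2\d|3[01])\./.test(host))
  )
    return true;
  return false;
}

// "*.internal.corp" matches any subdomain of internal.corp.
function matches(ruleDomain: string, host: string): boolean {
  if (ruleDomain.startsWith("*.")) return host.endsWith(ruleDomain.slice(1));
  return ruleDomain === host;
}

function checkEgress(host: string, policy: Policy): "allow" | "deny" {
  if (isBlocked(host, policy.blocklist)) return "deny";      // 1. blocklist first, no override
  for (const r of policy.rules)
    if (matches(r.domain, host)) return r.action;             // 2. first matching rule wins
  return policy.defaultAction;                                // 3. fall back to default
}
```

With defaultAction set to "deny", anything you forget to list is blocked rather than silently allowed.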
```sh
npx meridian scan                   # Scan project tools
npx meridian scan --format sarif    # SARIF for GitHub Code Scanning
npx meridian mcp scan <server-url>  # Scan external MCP server schemas
```
When importing tools via toolFromMCP(), schemas are scanned before registration. Critical findings block registration. High findings auto-tag tools as requiresApproval: true.
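The severity-to-action mapping can be sketched as a small pure function. The `ScanResult` shape and `applyScanPolicy` name are assumptions for illustration; only the block/approve behavior itself comes from the docs:

```typescript
type Severity = "low" | "medium" | "high" | "critical";
type ScanResult = {
  toolName: string;
  findings: { severity: Severity; message: string }[];
};

// Critical findings reject the tool outright; high findings let it register
// but force human approval before each call.
function applyScanPolicy(result: ScanResult): { register: boolean; requiresApproval: boolean } {
  const has = (s: Severity) => result.findings.some((f) => f.severity === s);
  if (has("critical")) return { register: false, requiresApproval: false };
  return { register: true, requiresApproval: has("high") };
}
```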
```svelte
<script>
  import { MeridianChat } from "@aiconnai/svelte";

  const chat = new MeridianChat({
    agent: "support-agent",
    onToolApproval: (tool) => {
      // Human-in-the-loop for flagged tools
      if (confirm(`Allow ${tool.toolName}?`)) tool.approve();
      else tool.deny("User rejected");
    },
  });

  $effect(() => chat.connect());
</script>

{#each chat.messages as message}
  <div class={message.role}>{message.content}</div>
{/each}

<input bind:value={chat.input} onkeydown={chat.handleKeydown} />
```
Features: WebSocket with auto-reconnection, REST fallback, streaming chunks, tool approval flow, reactive state via $state runes.
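Auto-reconnection typically uses capped exponential backoff so a flapping connection doesn't hammer the server. A sketch of that pattern — MeridianChat's actual reconnect parameters are not documented here, so the base delay and cap below are assumptions:

```typescript
// Capped exponential backoff: attempt 0 → 500 ms, 1 → 1 s, 2 → 2 s, ...
// never exceeding capMs. The 500 ms base and 30 s cap are assumed values.
function reconnectDelay(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

On a successful reconnect the attempt counter resets to zero, so a brief blip recovers in half a second while a prolonged outage settles into a 30-second retry loop.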
Meridian runs in Cloudflare Workers V8 isolates, not Node.js:

- No child_process
- No fs
- No native modules

These are features, not bugs. They force a clean architecture where the agent runtime is lightweight and the heavy infrastructure (memory search, security scanning) lives in purpose-built external services.
Remaining work:

- Fix the chat-memory example (missing package.json + wrangler.jsonc)
- svelte-package for .svelte.ts rune compilation
- Deploy chat-basic to Cloudflare Workers
- Deploy chat-memory with Engram on Fly.io
- Deploy chat-scheduled with Workflows binding
- Publish under the @aiconnai/* scope
- Component tests (@testing-library/svelte)
- npm create meridian scaffolding package

The Rust trio handles infrastructure and heavy compute. Meridian handles user-facing orchestration. Primer helps developers set up projects. All connected via MCP and HTTP.
| Product | Role | Language | Where it runs |
|---|---|---|---|
| AgentShield | Security scanning + enforcement | Rust | CLI + config files |
| Engram | Memory infrastructure | Rust | Standalone server (Fly.io, VPS) |
| Cortex | Heavy agent runtime | Rust | Standalone server (Fly.io, K8s) |
| Meridian | User-facing orchestration | TypeScript | Cloudflare Workers + DOs |
| Primer | Developer tooling | JS/Python | CLI |
These are different products for different deployment models — not competitors.
| | Cortex | Meridian |
|---|---|---|
| Runtime | Tokio on a server | V8 isolate on Cloudflare |
| State | In-memory + SQLite/Sled | Durable Object SQLite |
| Scaling | Vertical (bigger server) | Horizontal (millions of DOs) |
| Cost model | Always-on server | Pay-per-request, hibernate when idle |
| Agent model | ReACT loop (long-running) | Request/response + alarms + Workflows |
| Frontend | Headless | SvelteKit native |
Use Cortex when you need: long-running autonomous agents, maximum throughput (10k concurrent agents per machine), self-hosted infrastructure, background processing pipelines, edge without Cloudflare.
Use Meridian when you need: user-facing agents behind a SvelteKit UI, per-user stateful agents that scale to millions and cost nothing when idle, built-in memory and security, scheduled tasks, MCP bridge for polyglot tool ecosystems.
The two runtimes compose in a Brain/Muscle architecture. A Meridian agent (Brain) dispatches heavy multi-agent crew work to a Cortex instance (Muscle) via MCP over HTTP+SSE:
```typescript
// Meridian agent consuming Cortex as a tool backend
const crewTools = await toolFromMCP("https://cortex.fly.dev", {
  allowTools: ["run_crew", "research_pipeline"],
  egressPolicy: { allowDomains: ["cortex.fly.dev"] },
});
// AgentShield scans the tool schemas before registration
```
The Meridian DO manages state, memory, security, and the frontend. The Rust process does the compute-intensive reasoning loops. Neither replaces the other.
MIT