The TypeScript Framework for MCP Servers.
Type-safe tools, structured AI perception, and built-in security. Deploy once — every AI assistant connects instantly.
Works with every MCP-compatible AI client:
Model Context Protocol
An open standard for how AI agents talk to external tools. It handles transport, message format, and discovery. Think of MCP as the wire — it doesn't tell you how to build what's on the other end.
Model — View — Agent
A new architectural pattern for structuring what agents actually see. The Model owns your data. The View shapes it with domain rules and affordances. The Agent declares typed, safe actions. That's the whole contract.
With the raw MCP SDK, you're wiring transports, writing JSON schemas by hand, and handling errors from scratch. Vurb.ts takes care of the protocol so you can focus on your business logic.
You follow the same pattern every time — Model defines data, View shapes what agents see, Agent declares actions. Vurb.ts handles the MCP plumbing.
Typed schemas, domain rules, explicit affordances — every Vurb.ts server follows the same contract. Pick up any project and know where everything lives.
MVA gives coding agents a clear, repeatable target. Consistent conventions mean they write better code — and you spend less time fixing it.
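The Model–View–Agent split can be pictured in plain TypeScript. This is a standalone sketch of the pattern, not the Vurb.ts API — the function names here are illustrative:

```typescript
// Model: owns the raw data, including fields agents must never see.
interface User {
  id: string;
  name: string;
  password_hash: string; // internal — must not reach the agent
}

// View (Presenter): shapes what the agent perceives.
// Undeclared fields are simply never copied into the output.
function presentUser(user: User): { id: string; name: string } {
  const { id, name } = user;
  return { id, name };
}

// Agent: a typed, explicit action over the model.
function renameUser(user: User, newName: string): User {
  return { ...user, name: newName };
}

const raw: User = { id: 'u1', name: 'Ada', password_hash: 'x9f' };
const perceived = presentUser(raw); // { id: 'u1', name: 'Ada' }
```

The contract stays the same in every project: data lives in the Model, the Presenter decides what the agent perceives, and actions are declared explicitly.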
The MCP SDK handles transport and messages. Auth, validation, access control? That's on you.
Vurb.ts ships with Zod schema validation on every input and a middleware pipeline for auth, rate limiting, and tenant isolation — ready to plug in, not build from scratch.
Security is part of the framework, not something you bolt on later.
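The idea behind that pipeline can be sketched without the framework. The `Middleware` type and `compose` helper below are illustrative, not Vurb.ts APIs:

```typescript
type Ctx = { user?: { role: string }; calls: number };
type Handler = (ctx: Ctx, input: unknown) => unknown;
type Middleware = (next: Handler) => Handler;

// Auth: reject calls with no authenticated user.
const requireAuth: Middleware = (next) => (ctx, input) => {
  if (!ctx.user) throw new Error('unauthenticated');
  return next(ctx, input);
};

// Rate limit: cap calls per context (toy in-memory counter).
const rateLimit = (max: number): Middleware => (next) => (ctx, input) => {
  if (++ctx.calls > max) throw new Error('rate limit exceeded');
  return next(ctx, input);
};

// Compose middleware around a tool handler, outermost first.
const compose = (mws: Middleware[], handler: Handler): Handler =>
  mws.reduceRight((acc, mw) => mw(acc), handler);

const tool: Handler = (_ctx, input) => ({ echoed: input });
const guarded = compose([requireAuth, rateLimit(2)], tool);
```

Tenant isolation fits the same shape: a middleware that scopes every query in the context before the handler runs.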
Hook into tool lifecycle events and emit structured logs — input, output, duration, error context — to whatever backend you're already using.
Track latency, throughput, and error rates per tool through the middleware pipeline. Works with Prometheus, Datadog, or whatever you're running.
See exactly which data was accessed, which rules fired, and which actions were suggested. Built for teams that need audit trails.
Every tool gets its own error boundary. When something breaks, you know exactly where and why — no more guessing.
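One way to picture those hooks and boundaries, as a standalone sketch — `withObservability` is illustrative, not the Vurb.ts API:

```typescript
type LogEntry = { tool: string; durationMs: number; ok: boolean; error?: string };

// Wrap a tool so every call emits a structured log entry and
// failures stay inside the tool's own error boundary.
function withObservability<I, O>(
  tool: string,
  fn: (input: I) => O,
  sink: (entry: LogEntry) => void,
): (input: I) => O | { error: string } {
  return (input) => {
    const start = Date.now();
    try {
      const out = fn(input);
      sink({ tool, durationMs: Date.now() - start, ok: true });
      return out;
    } catch (e) {
      const error = e instanceof Error ? e.message : String(e);
      sink({ tool, durationMs: Date.now() - start, ok: false, error });
      return { error }; // contained and attributed to this tool
    }
  };
}
```

The `sink` is whatever backend you already run — console, Prometheus exporter, Datadog agent — because the entry is just structured data.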
Every capability below ships in Vurb.ts today and maps to real code in the repository. Not a roadmap. Not a promise.
The problem: Raw MCP servers leak password_hash fields straight to the LLM whenever a developer writes SELECT *. And returning 100,000 records blows past the context window — or racks up a runaway API bill.
The mechanism: The Zod .schema() on every Presenter strips undeclared fields before the response ever leaves the server — not filtered downstream, simply never sent. Meanwhile, .agentLimit() truncates oversized arrays and teaches agents to use filters instead.
const UserPresenter = createPresenter('User')
  .schema(z.object({
    id: z.string(),
    name: z.string(),
    email: z.string(),
    // password_hash, tenant_id, internal_flags
    // → physically absent. Not filtered. GONE.
  }))
  .agentLimit(100); // truncate runaway arrays (limit value illustrative)

The problem: Teaching the AI about invoices, tasks, sprints, and users means a 10,000-token system prompt — sent on every call. The LLM loses coherence mid-text, misapplies rules across domains, and the company pays for irrelevant tokens on every request.
The mechanism: Like webpack tree-shaking removes unused code, .rules() removes unused rules from the context window. Domain rules travel with the data — the invoice rule only exists when the agent processes an invoice. Token overhead drops from ~2,000/call to ~200/call.
// Invoice rules — sent ONLY when invoice data is returned
const InvoicePresenter = createPresenter('Invoice')
  .schema(invoiceSchema)
  .rules((invoice, ctx) => [
    'CRITICAL: amount_cents is in CENTS. Divide by 100.',
    ctx?.user?.role !== 'admin'
      ? 'RESTRICTED: Mask exact totals for non-admin users.'
      : null,
  ]);

The problem: The developer begs in the prompt: "Please generate valid ECharts JSON." The AI gets the syntax wrong 20% of the time. Charts become a probabilistic coin flip instead of deterministic output.
The mechanism: Complex chart configs, Mermaid diagrams, and Markdown tables are compiled server-side in Node.js via .ui(). The AI receives a [SYSTEM] pass-through directive and forwards the block unchanged. Visual hallucination drops to zero.
const InvoicePresenter = createPresenter('Invoice')
  .schema(invoiceSchema)
  .ui((invoice) => [
    ui.echarts({
      series: [{ type: 'gauge', data: [{ value: invoice.amount_cents / 100 }] }],
    }),
    ui.table(
      ['Field', 'Value'],
      [['Status', invoice.status], ['Amount', `$${invoice.amount_cents / 100}`]],
    ),
  ]);
// The LLM passes the chart config through. It never generates it.

MVA replaces the human-centric View with the Presenter — an agent-centric perception layer that tells the AI exactly how to interpret, display, and act on domain data. The handler returns raw data (Model). The Presenter shapes perception (View). The middleware governs access (Agent). This isn't an iteration on MVC. It's a replacement.
Zod schema validates and filters data. Unknown fields rejected with actionable errors. The LLM cannot inject parameters your schema does not declare.
JIT rules, server-rendered UI, cognitive guardrails, action affordances — all deterministic, all framework-enforced.
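The unknown-field rejection can be sketched without Zod. The toy `strictParse` below only illustrates the behavior — the framework itself uses Zod schemas:

```typescript
// Reject any input key the schema does not declare, with an
// actionable message naming the offending field.
function strictParse<T extends Record<string, unknown>>(
  declared: (keyof T & string)[],
  input: Record<string, unknown>,
): T {
  for (const key of Object.keys(input)) {
    if (!declared.includes(key as keyof T & string)) {
      throw new Error(`Unknown field "${key}". Declared fields: ${declared.join(', ')}`);
    }
  }
  return input as T;
}

type RenameInput = { id: string; name: string };
strictParse<RenameInput>(['id', 'name'], { id: 'u1', name: 'Ada' }); // ok
// strictParse<RenameInput>(['id', 'name'], { id: 'u1', role: 'admin' })
// throws: Unknown field "role"
```

An LLM that hallucinates an extra parameter gets an error naming the field, instead of silently smuggling data into the handler.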
The same ToolRegistry runs across Stdio, HTTP/SSE, and serverless runtimes without code changes. Auto-generate fully typed MCP tools from your existing infrastructure.
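The transport-agnostic idea reduces to a plain dispatch table. This toy `ToyRegistry` is a sketch of the concept, not the real ToolRegistry class:

```typescript
// One registry of named tools; each transport is just a different
// way of delivering { tool, input } to the same dispatch call.
type Tool = (input: unknown) => unknown;

class ToyRegistry {
  private tools = new Map<string, Tool>();
  register(name: string, tool: Tool) {
    this.tools.set(name, tool);
    return this;
  }
  dispatch(name: string, input: unknown): unknown {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool(input);
  }
}

const registry = new ToyRegistry().register('ping', () => 'pong');

// "Stdio" transport: a line of JSON from stdin.
const fromStdio = (line: string) => {
  const { tool, input } = JSON.parse(line);
  return registry.dispatch(tool, input);
};

// "HTTP" transport: an already-parsed request body.
const fromHttp = (body: { tool: string; input: unknown }) =>
  registry.dispatch(body.tool, body.input);
```

Because the registry never sees the wire, the same tool definitions serve Stdio, HTTP/SSE, or a serverless handler unchanged.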
Vinkius Cloud — native deployment with vurb deploy. Zero config, edge-ready, built-in auth and observability.
Vercel Edge Functions — fast cold starts in a Next.js route.
Cloudflare Workers — D1, KV, R2 bindings from 300+ edge locations.
AWS Lambda — Step Functions connector.
Prisma Generator — CRUD tools with field-level security from your schema.
OpenAPI Generator — typed tools from any REST API.
n8n Connector — n8n workflows as MCP tools.
Vurb.ts gives you typed schemas, structured AI perception, built-in security, and observability — all out of the box. Skip the boilerplate and ship your first MCP server in minutes.
BUILD YOUR FIRST MCP SERVER →