
The TypeScript Framework for MCP Servers.

Type-safe tools, structured AI perception, and built-in security. Deploy once — every AI assistant connects instantly.

Works with every MCP-compatible AI client:

Claude (Desktop & Code)
Cursor (IDE)
Codex (OpenAI)
VS Code (+ Copilot)
Windsurf (IDE)
Cline (Terminal)
UNDERSTAND THE DIFFERENCE

MCP is the protocol. MVA is the architecture.
Together, they are Vurb.ts.

PROTOCOL

MCP

Model Context Protocol

An open standard for how AI agents talk to external tools. It handles transport, message format, and discovery. Think of MCP as the wire — it doesn't tell you how to build what's on the other end.

+
ARCHITECTURE

MVA

Model — View — Agent

A new architectural pattern for structuring what agents actually see. The Model owns your data. The View shapes it with domain rules and affordances. The Agent declares typed, safe actions. That's the whole contract.

= Vurb.ts

A TypeScript framework for building production-grade MCP servers.
WHY MVA

Agents write your MCP servers.
MVA makes sure they get it right.

With the raw MCP SDK, you're wiring transports, writing JSON schemas by hand, and handling errors from scratch. Vurb.ts takes care of the protocol so you can focus on your business logic.

LEARNING CURVE

Skip the protocol deep-dive

You follow the same pattern every time — Model defines data, View shapes what agents see, Agent declares actions. Vurb.ts handles the MCP plumbing.

CONSISTENCY

Same structure, every server

Typed schemas, domain rules, explicit affordances — every Vurb.ts server follows the same contract. Pick up any project and know where everything lives.

AGENT-FIRST

Built for Cursor, Claude & Copilot

MVA gives coding agents a clear, repeatable target. Consistent conventions mean they write better code — and you spend less time fixing it.

SECURITY
SECURE BY DEFAULT

The MCP spec leaves security to you.
Vurb.ts ships it built-in.

The MCP SDK handles transport and messages. Auth, validation, access control? That's on you.

Vurb.ts ships with Zod schema validation on every input and a middleware pipeline for auth, rate limiting, and tenant isolation — ready to plug in, not build from scratch.

Security is part of the framework, not something you bolt on later.
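The auth, rate-limiting, and tenant-isolation pipeline described above can be sketched as plain composable functions. This is an illustrative model of the pattern only, not the actual Vurb.ts middleware API; the context shape and error codes are assumptions:

```typescript
// Minimal middleware-pipeline sketch (illustrative, not the Vurb.ts API):
// each middleware receives the request context and a `next` continuation,
// mirroring how auth, rate limiting, and tenant isolation plug in.

type Ctx = { token?: string; tenantId?: string; user?: { id: string } };
type Handler = (ctx: Ctx) => unknown;
type Middleware = (ctx: Ctx, next: Handler) => unknown;

const compose = (middlewares: Middleware[], handler: Handler): Handler =>
  middlewares.reduceRight<Handler>(
    (next, mw) => (ctx) => mw(ctx, next),
    handler,
  );

// Auth: reject requests without a token, derive the user into context.
const auth: Middleware = (ctx, next) => {
  if (!ctx.token) throw new Error("UNAUTHORIZED: missing token");
  return next({ ...ctx, user: { id: `user-for-${ctx.token}` } });
};

// Rate limiting: naive fixed counter per tenant.
const calls = new Map<string, number>();
const rateLimit = (max: number): Middleware => (ctx, next) => {
  const key = ctx.tenantId ?? "anon";
  const n = (calls.get(key) ?? 0) + 1;
  calls.set(key, n);
  if (n > max) throw new Error("RATE_LIMITED: try again later");
  return next(ctx);
};

const tool = compose([auth, rateLimit(2)], (ctx) => `hello ${ctx.user!.id}`);
```

With a limit of 2, the first two calls for a tenant succeed and the third is rejected before the handler ever runs; an unauthenticated call never reaches the rate limiter.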

OBSERVABILITY

Know exactly what every tool does, every time.

01

Structured Logs

Hook into tool lifecycle events and emit structured logs — input, output, duration, error context — to whatever backend you're already using.

02

Metrics

Track latency, throughput, and error rates per tool through the middleware pipeline. Works with Prometheus, Datadog, or whatever you're running.

03

Audit Traces

See exactly which data was accessed, which rules fired, and which actions were suggested. Built for teams that need audit trails.

04

Error Recovery

Every tool gets its own error boundary. When something breaks, you know exactly where and why — no more guessing.

WHAT CHANGES

Without MVA vs With MVA

Every line is a capability that ships in Vurb.ts today. Not a roadmap.

Without MVA
With MVA (Vurb.ts)
Tool surface
50 individual tools. LLM sees every one. Token explosion.
Action consolidation — 5,000+ operations behind ONE tool via discriminator. 10× fewer tokens.
Response shape
JSON.stringify() — the AI parses and guesses.
Structured perception — validated data + domain rules + UI blocks + action affordances in every response.
Domain context
None. 45000 — dollars? cents? yen?
System rules travel with the data. The AI knows it's cents. Always.
Next actions
AI hallucinates tool names and parameters.
Agentic HATEOAS — .suggest() with explicit affordances grounded in data state.
Large datasets
10,000 rows dumped into context. OOM crash.
Cognitive guardrails — .agentLimit() truncation + filter guidance. Context DDoS eliminated.
Security
Internal fields leak to LLM. password_hash exposed.
Egress Firewall — Zod .strict() rejects undeclared fields at RAM level. Automatic.
Visualizations
Not possible. Text-only responses.
Server-Rendered UI — ECharts, Mermaid diagrams, tables — compiled server-side. Zero hallucination.
Routing
switch/case with 50 branches.
Hierarchical groups — platform.users.list — infinite nesting with file-based autoDiscover().
Error recovery
throw Error — AI gives up or hallucinates a fix.
Self-healing — toolError() with recovery hints + suggested retry args.
Token cost
Full JSON payloads every call. Bills compound.
TOON encoding — toonSuccess() reduces response tokens by ~40%.
Type safety
Manual casting. No client types. Runtime crashes.
tRPC-style client — createVurbClient() with full end-to-end inference.
SEE THE FULL COMPARISON WITH CODE EXAMPLES →
PROBLEM SPACE

Three problems.
Framework-level solutions.

Every claim below maps to real code in the repository. Not a roadmap. Not a promise.

01

Egress Firewall & Anti-DDoS

The problem: Raw MCP servers leak password_hash values directly to the LLM when developers write SELECT *. Returning 100,000 records routinely triggers LLM OOM crashes or bankrupts teams with runaway API bills.

The mechanism: The Zod schema declared via .schema() on every Presenter physically strips undeclared fields at RAM level — not filtered, gone. Simultaneously, .agentLimit() truncates massive arrays and teaches agents to use filters instead.

typescript
const UserPresenter = createPresenter('User')
    .schema(z.object({
        id: z.string(),
        name: z.string(),
        email: z.string(),
        // password_hash, tenant_id, internal_flags
        // → physically absent. Not filtered. GONE.
    }));

EXPLORE THE PRESENTER →

02

Context Tree-Shaking

The problem: Teaching the AI about invoices, tasks, sprints, and users means a 10,000-token system prompt — sent on every call. The LLM loses coherence mid-text, misapplies rules across domains, and the company pays for irrelevant tokens on every request.

The mechanism: Like webpack tree-shaking removes unused code, .rules() removes unused rules from the context window. Domain rules travel with the data — the invoice rule only exists when the agent processes an invoice. Token overhead drops from ~2,000/call to ~200/call.

typescript
// Invoice rules — sent ONLY when invoice data is returned
const InvoicePresenter = createPresenter('Invoice')
    .schema(invoiceSchema)
    .rules((invoice, ctx) => [
        'CRITICAL: amount_cents is in CENTS. Divide by 100.',
        ctx?.user?.role !== 'admin'
            ? 'RESTRICTED: Mask exact totals for non-admin users.'
            : null,
    ]);

SEE HOW IT WORKS →

03

SSR for Agents

The problem: The developer begs in the prompt: "Please generate valid ECharts JSON." The AI gets the syntax wrong 20% of the time. Charts become a probabilistic coinflip instead of deterministic output.

The mechanism: Complex chart configs, Mermaid diagrams, and Markdown tables are compiled server-side in Node.js via .ui(). The AI receives a [SYSTEM] pass-through directive and forwards the block unchanged. Visual hallucination drops to zero.

typescript
const InvoicePresenter = createPresenter('Invoice')
    .schema(invoiceSchema)
    .ui((invoice) => [
        ui.echarts({
            series: [{ type: 'gauge', data: [{ value: invoice.amount_cents / 100 }] }],
        }),
        ui.table(
            ['Field', 'Value'],
            [['Status', invoice.status], ['Amount', `$${invoice.amount_cents / 100}`]],
        ),
    ]);
// The LLM passes the chart config through. It never generates it.

SEE HOW IT WORKS →

THE MVA ARCHITECTURE

MVC was designed
for humans.
Agents are not
humans.

MVA replaces the human-centric View with the Presenter — an agent-centric perception layer that tells the AI exactly how to interpret, display, and act on domain data. The handler returns raw data (Model). The Presenter shapes perception (View). The middleware governs access (Agent). This isn't an iteration on MVC. It's a replacement.

// MODEL

Zod schema validates and filters data. Unknown fields rejected with actionable errors. The LLM cannot inject parameters your schema does not declare.

// PRESENTER

JIT rules, server-rendered UI, cognitive guardrails, action affordances — all deterministic, all framework-enforced.

ARCHITECTURE

Everything
you need.

Every capability designed for autonomous AI agents operating over the Model Context Protocol.

01 // MVA

Presenter Engine

Domain-level Presenters validate data, inject rules, render charts, and suggest actions. Use createPresenter() (fluent) or definePresenter() (declarative) — both freeze-after-build.

EXPLORE →
02 // DX

Context Init (initVurb)

tRPC-style f = initVurb<AppContext>(). Define your context type once — every f.query(), f.presenter(), f.registry() inherits it. Zero generics pollution.

EXPLORE →
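The "define the context type once" idea behind initVurb can be modeled in a few lines. The factory below is a standalone sketch of the pattern, not the real initVurb implementation; the AppContext fields are invented for illustration:

```typescript
// Sketch of the initVurb<T>() pattern: the factory captures the context
// type once, and every builder it returns inherits it, so individual
// queries never repeat the generic parameter.

type AppContext = { userId: string; role: "admin" | "member" };

function initContext<TCtx>() {
  return {
    query<TIn, TOut>(handler: (input: TIn, ctx: TCtx) => TOut) {
      return handler;
    },
  };
}

const f = initContext<AppContext>();

// `ctx` is contextually typed as AppContext with no extra annotations.
const whoAmI = f.query((_input: {}, ctx) => `${ctx.userId} (${ctx.role})`);
```

Because TCtx is fixed when the factory is created, TypeScript infers the context parameter in every handler for free.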
03 // ROUTING

Action Consolidation

Nest 5,000+ operations into grouped namespaces. File-based routing with autoDiscover() scans directories automatically.

EXPLORE →
04 // SECURITY

Context Derivation

f.middleware() / defineMiddleware() derives and injects typed data into context. Zod .strict() protects handlers from hallucinated parameters.

EXPLORE →
05 // RESILIENCE

Self-Healing Errors

toolError() provides structured recovery hints with suggested actions and pre-populated arguments. Agents self-correct without human intervention.

EXPLORE →
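A self-healing error is just a structured payload instead of a bare throw. The shape below is an assumed sketch of what toolError() might return, not its actual signature; the action name and retry arguments are hypothetical:

```typescript
// Sketch of a structured, self-healing tool error (assumed shape).
// Instead of a bare throw, the agent receives machine-readable recovery
// hints plus pre-populated retry arguments it can replay directly.

interface ToolError {
  ok: false;
  code: string;
  message: string;
  recovery: {
    hint: string;
    suggestedAction: string;
    retryArgs: Record<string, unknown>;
  };
}

function toolError(
  code: string,
  message: string,
  recovery: ToolError["recovery"],
): ToolError {
  return { ok: false, code, message, recovery };
}

// Example: a date filter the agent got wrong, with a corrected retry.
const err = toolError("INVALID_DATE_RANGE", "`from` must be before `to`.", {
  hint: "Swap the bounds and retry with the same filters.",
  suggestedAction: "invoices.list",
  retryArgs: { from: "2024-01-01", to: "2024-06-30" },
});
```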
06 // AFFORDANCE

Agentic HATEOAS

.suggest() / .suggestActions() tells agents what to do next based on data state. Eliminates action hallucination through explicit affordances.

EXPLORE →
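The core idea of data-grounded affordances can be shown in miniature. This is a sketch of the concept behind .suggest(), not the real builder API; the invoice states and action names are assumptions:

```typescript
// Sketch of state-driven affordances: suggestions are computed from the
// record itself, so the agent is only offered actions that are valid for
// the data's current state.

type Invoice = { id: string; status: "draft" | "sent" | "paid" };
type Suggestion = { action: string; args: Record<string, unknown>; reason: string };

function suggestFor(invoice: Invoice): Suggestion[] {
  const s: Suggestion[] = [];
  if (invoice.status === "draft")
    s.push({ action: "invoices.send", args: { id: invoice.id }, reason: "Draft is ready to send." });
  if (invoice.status === "sent")
    s.push({ action: "invoices.markPaid", args: { id: invoice.id }, reason: "Payment may have arrived." });
  // A paid invoice gets no mutation affordances: nothing left to hallucinate.
  return s;
}
```

Because the affordance list is computed, not prompted, a paid invoice simply offers no next action rather than relying on the model to know it shouldn't invent one.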
07 // DEV

HMR Dev Server

createDevServer() watches tool files and hot-reloads on change without restarting the LLM client. Sends notifications/tools/list_changed automatically.

EXPLORE →
08 // GUARDRAILS

Cognitive Limits

.limit() / .agentLimit() truncates large datasets and teaches agents to use filters. Prevents context DDoS and keeps API costs under control.

EXPLORE →
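Truncation plus filter guidance is simple to model. The function below sketches the assumed behavior of .agentLimit(), not its actual implementation; the guidance wording is invented:

```typescript
// Sketch of cognitive-limit truncation: cap the rows that reach the
// context window and replace the overflow with explicit filter guidance
// so the agent narrows its query instead of paging through everything.

function agentLimit<T>(
  rows: T[],
  max: number,
): { rows: T[]; truncated: boolean; guidance?: string } {
  if (rows.length <= max) return { rows, truncated: false };
  return {
    rows: rows.slice(0, max),
    truncated: true,
    guidance:
      `Showing ${max} of ${rows.length} rows. ` +
      `Narrow the result with filters (status, date range) instead of paging.`,
  };
}

const page = agentLimit(Array.from({ length: 10_000 }, (_, i) => ({ id: i })), 50);
```

The agent sees 50 rows and an instruction it can act on, instead of 10,000 rows and an OOM.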
09 // STATE

Temporal Awareness

RFC 7234-inspired cache-control signals prevent LLM Temporal Blindness. Cross-domain causal invalidation after mutations.

EXPLORE →
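A freshness signal in the RFC 7234 spirit can be sketched as a small tagged union. This is an assumed shape for illustration, not the framework's actual cache-control API:

```typescript
// Sketch of RFC 7234-inspired freshness signals for tool responses
// (assumed shape): the response carries how long the agent may trust it,
// so the LLM knows when previously fetched data has gone stale.

type Freshness =
  | { cache: "no-store" }                  // always refetch (live metrics)
  | { cache: "max-age"; seconds: number }; // trust within a bounded window

function isFresh(f: Freshness, ageSeconds: number): boolean {
  return f.cache === "max-age" && ageSeconds <= f.seconds;
}
```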
10 // CLIENT

Type-Safe Client

createVurbClient() provides end-to-end type safety from server to client. Wrong action name? TypeScript catches it at build time.

EXPLORE →
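The tRPC-style guarantee is that the server's router type drives the client. The toy client below sketches that idea with an in-process transport; the router entries and createClient shape are invented, not the real createVurbClient() API:

```typescript
// Sketch of end-to-end inference: a wrong action name or argument shape
// fails to compile, because the client is typed by the router definition.

type Router = {
  "platform.users.list": { input: { limit: number }; output: { id: string }[] };
  "platform.users.get": { input: { id: string }; output: { id: string } };
};

// Toy transport: in the real framework this would speak MCP.
const handlers: { [K in keyof Router]: (i: Router[K]["input"]) => Router[K]["output"] } = {
  "platform.users.list": ({ limit }) =>
    Array.from({ length: limit }, (_, i) => ({ id: `u${i}` })),
  "platform.users.get": ({ id }) => ({ id }),
};

function createClient() {
  return {
    call<K extends keyof Router>(action: K, input: Router[K]["input"]): Router[K]["output"] {
      return handlers[action](input);
    },
  };
}

const client = createClient();
const users = client.call("platform.users.list", { limit: 2 });
// client.call("platform.users.lst", { limit: 2 })  // <- compile-time error
```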
ECOSYSTEM

Deploy
anywhere.
Generate from
anything.

The same ToolRegistry runs across Stdio, HTTP/SSE, and serverless runtimes without code changes. Auto-generate fully typed MCP tools from your existing infrastructure.

// DEPLOY TARGETS

Vinkius Cloud — native deployment with vurb deploy. Zero config, edge-ready, built-in auth and observability.
Vercel Edge Functions — fast cold starts in a Next.js route.
Cloudflare Workers — D1, KV, R2 bindings from 300+ edge locations.
AWS Lambda — Step Functions connector.

// DATA CONNECTORS

Prisma Generator — CRUD tools with field-level security from your schema.
OpenAPI Generator — typed tools from any REST API.
n8n Connector — n8n workflows as MCP tools.

GET STARTED

Build MCP servers
that actually work in production.

Vurb.ts gives you typed schemas, structured AI perception, built-in security, and observability — all out of the box. Skip the boilerplate and ship your first MCP server in minutes.

BUILD YOUR FIRST MCP SERVER →