Client SDK Integrations

TELL YOUR AI AGENT
"Connect my Next.js frontend using Vercel AI SDK to my Vurb backend via stdio transport — the backend handles auth, PII, and tool routing."

BEST OF BOTH WORLDS
Frontend brilliance.
Backend you can trust.
Vercel AI SDK, LangChain, and LlamaIndex excel at chatting with LLMs. They are not enterprise backend servers. Vurb is the perfect complementary backend — middleware, tenant isolation, DLP, guardrails.

Does Vurb work with these frameworks?

Yes. Connect via stdio or standard HTTP transports. Your frontend framework automatically consumes Vurb's Consolidated MVA Actions — typed tool names, validated inputs, truncated payloads.

FRONTEND
Vercel AI SDK / LangChain / LlamaIndex
Rich UI streams, prompt templates, chat histories, RAG pipelines, agent orchestration.
BACKEND
Vurb.ts
Zero-Trust architecture, Zod security stripping, DLP, middleware pipelines, tenant isolation, deterministic tool execution.

Vercel AI SDK

Connect your Vurb server to useChat or streamText. The Vercel AI SDK natively reads tool schemas generated by Vurb Presenters.
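A minimal sketch of that wiring, assuming the AI SDK's experimental MCP client (`experimental_createMCPClient`, available in recent `ai` releases) and a hypothetical `vurb start --stdio` launch command for your server:

```typescript
// Sketch: expose a Vurb MCP server's tools to streamText over stdio.
// `vurb start --stdio` is a hypothetical command — substitute your
// server's actual entry point.
import { experimental_createMCPClient, streamText } from 'ai';
import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';
import { openai } from '@ai-sdk/openai';

const mcpClient = await experimental_createMCPClient({
  transport: new Experimental_StdioMCPTransport({
    command: 'vurb',
    args: ['start', '--stdio'],
  }),
});

// Tool names and input schemas come straight from Vurb Presenters;
// nothing is redefined on the frontend.
const tools = await mcpClient.tools();

const result = streamText({
  model: openai('gpt-4o'),
  tools,
  prompt: 'List the five most recently created users.',
  onFinish: async () => {
    await mcpClient.close(); // release the stdio subprocess
  },
});
```

The same `tools` object plugs into a `useChat`-backed route handler; the frontend never needs to know which tools exist.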

Why not define tools directly in Vercel AI SDK?

Defining tools directly in tool() blocks works for scripts, but fails in production:

Mixed concerns
UI routing mixed with database logic.
Token explosion
Dozens of tools flooding the system prompt.
Context DDoS
No guardrails when a query returns too much data.

Vurb keeps the Vercel AI SDK focused on UI while the MCP server handles the heavy, state-aware, guardrailed backend execution.
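The "Context DDoS" failure is the easiest to see in code. A minimal sketch of a backend truncation guard, assuming a simple character budget (Vurb's actual truncation policy may differ):

```typescript
// Sketch: cap a tool result before it reaches the model's context window.
// The 4 000-character budget is an illustrative assumption, not a Vurb default.
const MAX_PAYLOAD_CHARS = 4_000;

function truncatePayload(payload: unknown, limit = MAX_PAYLOAD_CHARS): string {
  const text = JSON.stringify(payload);
  if (text.length <= limit) return text;
  // Keep the head of the payload and tell the model data was elided,
  // so it can issue a narrower follow-up query instead of guessing.
  return text.slice(0, limit) + `… [truncated ${text.length - limit} chars]`;
}
```

Because this runs server-side, a runaway `SELECT *` can never flood the system prompt, no matter what the agent asked for.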


LangChain

Connect via @modelcontextprotocol/sdk client. Your LangChain agents gain immediate access to your entire backend.
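A hedged sketch of that connection, wrapping each discovered Vurb action as a LangChain tool by hand (the `vurb start --stdio` command is hypothetical; input validation stays server-side in Vurb, so the client-side schema is deliberately permissive):

```typescript
// Sketch: bridge a Vurb MCP server into a LangChain JS agent.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { DynamicStructuredTool } from '@langchain/core/tools';
import { z } from 'zod';

const transport = new StdioClientTransport({
  command: 'vurb',
  args: ['start', '--stdio'], // hypothetical launch command
});
const client = new Client({ name: 'langchain-agent', version: '1.0.0' });
await client.connect(transport);

// Discover Vurb's consolidated actions and expose each as a LangChain tool.
const { tools } = await client.listTools();
const lcTools = tools.map(
  (t) =>
    new DynamicStructuredTool({
      name: t.name,
      description: t.description ?? '',
      // Permissive on the client: Vurb re-validates every input against
      // its own Zod schemas before anything executes.
      schema: z.record(z.any()),
      func: async (input) => {
        const result = await client.callTool({ name: t.name, arguments: input });
        return JSON.stringify(result.content);
      },
    }),
);
```

Pass `lcTools` to your agent executor as usual; the agent plans against Vurb's tool descriptions while Vurb enforces the guardrails.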

Avoiding Tool Hell

A common LangChain problem: giving an agent 50 tools (e.g., list_users, create_user, delete_user) confuses the planner and wastes thousands of tokens.

With Vurb, your LangChain agent sees Consolidated MVA Actions — a single tool with a deterministic discriminator. 50 tools → 1 smart endpoint, dramatically improving agent accuracy and reducing token costs.
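What a consolidated action looks like can be sketched in plain TypeScript: a single `manageUsers` tool (hypothetical name) whose input is a discriminated union, dispatched deterministically on the `action` field. The in-memory `Map` stands in for a real datastore.

```typescript
// Sketch: one consolidated action replacing list_users / create_user / delete_user.
// The `action` field is the deterministic discriminator the agent selects.
type UserAction =
  | { action: 'list'; limit?: number }
  | { action: 'create'; email: string }
  | { action: 'delete'; id: string };

const users = new Map<string, string>(); // id -> email (stand-in for a real store)

function manageUsers(input: UserAction): string {
  switch (input.action) {
    case 'list':
      return JSON.stringify([...users.entries()].slice(0, input.limit ?? 50));
    case 'create': {
      const id = `u_${users.size + 1}`;
      users.set(id, input.email);
      return id;
    }
    case 'delete':
      return String(users.delete(input.id));
  }
}
```

The planner now reasons over one tool name and one enum, not fifty near-identical signatures competing for attention in the system prompt.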


LlamaIndex

LlamaIndex excels at RAG but struggles with deterministic CRUD mutations. By offloading mutations to Vurb, you guarantee every state change passes through strictly typed middleware and Presenter logic — preventing LLM-driven data corruption.
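The principle behind that guarantee can be sketched without any framework: every LLM-proposed mutation passes a strict gate that rejects malformed payloads and strips unknown fields before anything touches storage. Field names here are illustrative, not Vurb's actual schema.

```typescript
// Sketch: a strict gate an LLM-proposed mutation must pass before storage.
interface UpdateEmail {
  userId: string;
  email: string;
}

function validateUpdateEmail(input: unknown): UpdateEmail {
  if (typeof input !== 'object' || input === null) {
    throw new Error('mutation rejected: payload must be an object');
  }
  const { userId, email } = input as Record<string, unknown>;
  if (typeof userId !== 'string' || userId.length === 0) {
    throw new Error('mutation rejected: userId must be a non-empty string');
  }
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error('mutation rejected: email is malformed');
  }
  return { userId, email }; // only the whitelisted fields survive
}
```

LlamaIndex keeps doing what it does best (retrieval), while every write path goes through a gate like this inside Vurb's middleware pipeline.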