Add a Tool UI component to a chat app. By the end you'll have an assistant that renders a rich link preview card instead of raw JSON.
What you'll build: A backend tool that fetches link metadata, and a frontend renderer that turns the JSON response into an interactive LinkPreview card, all wired together through assistant-ui.
assistant-ui is the React runtime that manages chat state, message streaming, and tool rendering. If you already have it set up, skip to Step 2.
The fastest way to start is with the CLI. It creates a working chat app in a Next.js project:
```bash
npx assistant-ui@latest init
```

Or install packages manually into an existing project:
```bash
pnpm add @assistant-ui/react @assistant-ui/react-ai-sdk ai @ai-sdk/openai zod
```

Add your OpenAI API key to `.env.local`:
```bash
OPENAI_API_KEY=sk-...
```

Run `pnpm dev` and confirm you see a working chat interface before continuing.
```bash
npx shadcn@latest add https://tool-ui.com/r/link-preview.json
```

This copies the source files into your project. The code is yours; change it however you want.
Two pieces connect the component to the conversation: a backend tool that returns structured data when the model calls it, and a frontend renderer that turns that data into the LinkPreview component.
Create (or update) your API route to include a tool the LLM can call. This tool accepts a URL, fetches metadata, and returns JSON matching the LinkPreview schema.
```ts
import { streamText, tool, convertToModelMessages, jsonSchema } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: await convertToModelMessages(messages),
    tools: {
      // The LLM decides when to call this tool based on the description.
      previewLink: tool({
        description: "Show a preview card for a URL",
        inputSchema: jsonSchema<{ url: string }>({
          type: "object",
          properties: {
            url: { type: "string", format: "uri" },
          },
          required: ["url"],
          additionalProperties: false,
        }),
        // execute() runs on the server when the LLM calls the tool.
        // It returns JSON that matches the LinkPreview component's schema.
        async execute({ url }) {
          return {
            id: "link-preview-1",
            href: url,
            title: "Example Site",
            description: "A description of the linked content",
            image: "https://example.com/image.jpg",
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

In a real app, fetch actual metadata here (Open Graph tags, screenshots, etc.).
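The stubbed `execute()` above returns placeholder data. As one way to fill it in, here is a sketch of the metadata step that scans fetched HTML for Open Graph tags with regexes; `parseLinkMetadata` and `ogContent` are hypothetical helpers, and a production app should use a real HTML parser instead:

```typescript
// Sketch: extract Open Graph metadata from an HTML string.
// The regex approach is only illustrative; attribute order and quoting
// vary in real pages, so prefer a proper HTML parser in production.
type LinkMetadata = {
  href: string;
  title?: string;
  description?: string;
  image?: string;
};

function ogContent(html: string, property: string): string | undefined {
  const re = new RegExp(
    `<meta[^>]+property=["']og:${property}["'][^>]+content=["']([^"']*)["']`,
    "i",
  );
  return re.exec(html)?.[1];
}

export function parseLinkMetadata(url: string, html: string): LinkMetadata {
  return {
    href: url,
    // Fall back to the <title> element when no og:title tag is present.
    title:
      ogContent(html, "title") ??
      /<title[^>]*>([^<]*)<\/title>/i.exec(html)?.[1],
    description: ogContent(html, "description"),
    image: ogContent(html, "image"),
  };
}
```

Inside `execute()`, you would `fetch(url)`, read the response body as text, and pass it to a helper like this before returning the LinkPreview payload.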
Registering the renderer tells assistant-ui: "when the model calls a tool named previewLink, parse the result and render this component." Without registration, tool results appear as raw JSON.
```tsx
"use client";

import {
  AssistantRuntimeProvider,
  Tools,
  useAui,
  type Toolkit,
} from "@assistant-ui/react";
import {
  useChatRuntime,
  AssistantChatTransport,
} from "@assistant-ui/react-ai-sdk";
import { LinkPreview } from "@/components/tool-ui/link-preview";
import { safeParseSerializableLinkPreview } from "@/components/tool-ui/link-preview/schema";
import { createResultToolRenderer } from "@/components/tool-ui/shared";

// Map backend tool names to frontend renderers.
// The key "previewLink" must match the tool name in your API route.
const toolkit: Toolkit = {
  previewLink: {
    type: "backend",
    render: createResultToolRenderer({
      // Only render when the full payload is valid.
      // While the LLM is still streaming, safeParse returns null → nothing renders.
      safeParse: safeParseSerializableLinkPreview,
      render: (parsed) => <LinkPreview {...parsed} />,
    }),
  },
};

export default function Page() {
  // Connect to your /api/chat route.
  const runtime = useChatRuntime({
    transport: new AssistantChatTransport({ api: "/api/chat" }),
  });

  // Register the toolkit so the runtime knows how to render tool results.
  const aui = useAui({ tools: Tools({ toolkit }) });

  return (
    <AssistantRuntimeProvider runtime={runtime} aui={aui}>
      {/* Your chat thread component */}
    </AssistantRuntimeProvider>
  );
}
```

Run `pnpm dev`, then ask the assistant to "preview https://example.com." Instead of raw JSON, you'll see a styled LinkPreview card in the conversation.
Same pattern for any component:
```bash
npx shadcn@latest add https://tool-ui.com/r/approval-card.json
```

Every component ships with a colocated `schema.ts` exporting a Zod schema and a `safeParseSerializable{ComponentName}` function for validating tool output on both server and client.
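To make the validation contract concrete, here is a dependency-free sketch of the same idea; the shipped `schema.ts` files use Zod, and `safeParseLinkPreviewSketch` with its field list is an illustrative assumption, not the actual exported schema:

```typescript
// Sketch: what a safeParseSerializable* helper does conceptually.
// It accepts unknown input and returns a typed payload, or null when
// the data is incomplete or invalid (e.g. mid-stream).
type SerializableLinkPreview = {
  id: string;
  href: string;
  title: string;
  description?: string;
  image?: string;
};

function safeParseLinkPreviewSketch(
  value: unknown,
): SerializableLinkPreview | null {
  if (typeof value !== "object" || value === null) return null;
  const v = value as Record<string, unknown>;
  // Required fields must all be strings before we accept the payload.
  if (
    typeof v.id !== "string" ||
    typeof v.href !== "string" ||
    typeof v.title !== "string"
  ) {
    return null;
  }
  return v as SerializableLinkPreview;
}
```

Returning `null` instead of throwing is what lets renderers like `createResultToolRenderer` simply show nothing until the streamed payload is complete.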
Browse all available components in the Gallery.
The example above is a display-only component. The tool returns data, and the component renders it. Some components go further: they let the user make a choice that returns to the assistant.
Components like Option List and Approval Card support this pattern through frontend tools, where the model calls a tool, your UI renders the component, and the user's response is sent back via addResult(...).
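Conceptually, that round trip can be sketched in plain TypeScript; `PendingChoice` and `resolveChoice` are hypothetical names for illustration, not assistant-ui's actual frontend-tool API:

```typescript
// Conceptual sketch of the human-in-the-loop round trip: the model calls
// a frontend tool, the UI shows options, and an addResult-style callback
// sends the user's decision back so the assistant can continue.
type PendingChoice<T> = {
  toolCallId: string;
  options: T[];
  addResult: (choice: T) => void; // mirrors addResult(...) mentioned above
};

function resolveChoice<T>(pending: PendingChoice<T>, index: number): T {
  const choice = pending.options[index];
  pending.addResult(choice); // the decision flows back to the assistant
  return choice;
}
```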
For the full implementation pattern, including how to forward frontend tools through your API route and enable auto-continue after user decisions, see the Advanced page.
Tool UI components work in any React app, but without assistant-ui you have to parse tool outputs and render the components yourself; assistant-ui handles that wiring for you, so use it for the best experience.
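For illustration, manual parsing might look like this sketch, assuming AI SDK-style message parts where tool calls appear as `tool-${name}` parts with a `state` field; the exact part shape varies by SDK version, so treat these names as assumptions:

```typescript
// Sketch: pulling finished tool results out of a message's parts array
// without assistant-ui. You would then validate each output and render
// the matching component yourself.
type ToolPart = {
  type: string; // e.g. "tool-previewLink"
  state?: string; // e.g. "output-available" once execution finishes
  output?: unknown;
};

function extractToolOutputs(parts: ToolPart[], toolName: string): unknown[] {
  return parts
    .filter(
      (p) => p.type === `tool-${toolName}` && p.state === "output-available",
    )
    .map((p) => p.output);
}
```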
Tool UI components are installed from registry entries, and each entry includes components/tool-ui/shared automatically.
assistant-ui supports multiple runtimes: AI SDK, LangGraph, LangServe, Mastra, or custom backends. The examples above use AI SDK v6.