Overview

UI components for tool call results.

When an AI assistant calls a tool, the result is often plain text or raw JSON. Without purpose-built components, users see a wall of unformatted data they have to parse themselves. The experience breaks because nothing presents the data well.

Tool UI is a component library built for this. Each component turns a specific kind of tool output into real UI: a card, a table, an option list, a chart. They render inline in the conversation and users can interact without leaving the chat.

Without Tool UI

User: Find me a link to the Tailwind docs
Assistant: Here's the link to the Tailwind CSS documentation:
{
  "href": "https://tailwindcss.com/docs",
  "title": "Tailwind CSS",
  "description": "Rapidly build modern websites without ever leaving your HTML."
}

With Tool UI

User: Find me a link to the Tailwind docs
Assistant: Found it.
[Link preview card]
tailwindcss.com
Tailwind CSS
Rapidly build modern websites without ever leaving your HTML.

Same data, different experience. The first version either dumps raw JSON on the user or, passed through a markdown renderer like MDX, yields a plain text link. The second renders a clickable card that looks and behaves like a native part of the conversation.

What is tool calling?

Tool calling happens when the assistant does something instead of saying something. The user asks "find me flights to Tokyo," and instead of describing options in a paragraph, the assistant calls a search tool and returns structured results.

You define the functions the model can invoke: searching a database, fetching a URL, running a calculation. The model decides when to call them based on the conversation. It sends structured arguments, your server executes the function, and a result comes back.
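As a rough sketch, defining such a tool with the AI SDK might look like this. The searchFlights name and return shape are illustrative, not part of Tool UI:

import { tool } from "ai";
import { z } from "zod";

// A function the model can choose to invoke; your server runs execute()
const searchFlights = tool({
  description: "Search for flights between two cities",
  inputSchema: z.object({
    from: z.string(),
    to: z.string(),
  }),
  async execute({ from, to }) {
    // Query your flight API here; return structured data to the model
    return { results: [{ from, to, airline: "Example Air", price: 420 }] };
  },
});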

Usually that result is plain text or JSON dumped into the chat. Tool UI handles what comes after: rendering those results as real UI.

What if tool results could render UI?

When a tool returns JSON that matches a known schema, your app can render a component instead of showing raw data.

Every Tool UI component has a corresponding schema: a Zod definition that describes the shape of the data it needs. When the tool result matches, the component renders. When it doesn't, parsing fails safely and nothing renders.

// The tool returns structured JSON...
{
  id: "lp-1",
  href: "https://tailwindcss.com/docs",
  title: "Tailwind CSS",
  description: "Rapidly build modern websites..."
}
// ...that matches the LinkPreview schema → renders as a card

Schema-first rendering means the data contract between server and client is typed and validated. No brittle string parsing. No hoping the model formats things correctly.
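As a sketch of what that contract looks like, a component schema and its parse helper might be defined roughly like this. The fields are inferred from the examples on this page; the shipped SerializableLinkPreviewSchema may define more:

import { z } from "zod";

// Shape of the data the LinkPreview component needs
export const SerializableLinkPreviewSchema = z.object({
  id: z.string(),
  href: z.string(),
  title: z.string(),
  description: z.string().optional(),
  image: z.string().optional(),
});

// Returns typed data when the payload matches, null when it doesn't,
// so a renderer can fail safely and render nothing
export function safeParseSerializableLinkPreview(input: unknown) {
  const result = SerializableLinkPreviewSchema.safeParse(input);
  return result.success ? result.data : null;
}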

What if that UI could be interactive?

Some tool UIs just display information: a link preview, a chart, a code block. Others let users make decisions that feed back into the conversation.

User: Help me pick a database for the new project
Assistant: Based on your requirements, here are some options:
[Interactive option list: click an option and confirm, then reset to try again]

The user selects an option, and the choice returns to the assistant as a tool result. The assistant can then continue the conversation with that context. The selected state stays visible as a receipt — a record of what was chosen.
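Wired up, that might look like the following sketch, which mirrors the toolkit registration shown in the minimal example below. It assumes the renderer also receives an addResult callback (step 5 under "How it works"), and the OptionList component and safeParseSerializableOptionList helper names are hypothetical:

const toolkit: Toolkit = {
  chooseDatabase: {
    type: "backend",
    render: ({ result, addResult }) => {
      const parsed = safeParseSerializableOptionList(result);
      if (!parsed) return null;
      return (
        <OptionList
          {...parsed}
          // Send the user's choice back to the assistant as a tool result
          onConfirm={(option) => addResult({ selectedId: option.id })}
        />
      );
    },
  },
};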

The actions model covers user interactions on tool UIs that produce side effects and records.

Where Tool UI fits

Tool UI sits between your design system and your LLM orchestration layer:

Radix/shadcn (design primitives) → Tool UI (conversation-native components) → AI SDK / LangGraph / etc. (LLM orchestration)

  • Radix / shadcn give you the base UI primitives (buttons, dialogs, inputs).
  • Tool UI gives you components designed for chat (inline cards, tables, option lists, approval flows) with schemas that map to tool outputs.
  • AI SDK / LangGraph handle the model communication, streaming, and tool execution.

Tool UI doesn't replace your design system. It extends it. Components use shadcn primitives internally and follow your theme.

How it works

  1. The assistant calls a tool. Based on the conversation, the model invokes a tool you've defined (e.g., previewLink, searchFlights).
  2. The tool returns JSON. Your server-side function executes and returns structured data matching an outputSchema.
  3. The schema matches. A component renders. On the client, a registered renderer parses the JSON against the component's schema. If it matches, the component renders inline in the chat message.
  4. The user interacts. For display components, this is the end. For interactive components (decisions, approvals), the user takes an action.
  5. The result returns. The user's choice is sent back to the assistant as a tool result via addResult, continuing the conversation.

Minimal example

The server defines a tool with a typed output schema. The client registers a renderer that maps that output to a component.

Server: define a tool that returns structured data.

import { streamText, tool, convertToModelMessages } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { SerializableLinkPreviewSchema } from "@/components/tool-ui/link-preview/schema";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: await convertToModelMessages(messages),
    tools: {
      previewLink: tool({
        description: "Show a preview card for a URL",
        inputSchema: z.object({ url: z.url() }),
        // outputSchema tells the AI SDK what shape the result will have
        outputSchema: SerializableLinkPreviewSchema,
        async execute({ url }) {
          // Fetch metadata and return structured data
          return {
            id: "link-preview-1",
            href: url,
            title: "Example Site",
            description: "A description of the linked content",
            image: "https://example.com/image.jpg",
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}

Client: register the component renderer.

"use client";import {  AssistantRuntimeProvider,  Tools,  useAui,  type Toolkit,} from "@assistant-ui/react";import {  useChatRuntime,  AssistantChatTransport,} from "@assistant-ui/react-ai-sdk";import { LinkPreview } from "@/components/tool-ui/link-preview";import { safeParseSerializableLinkPreview } from "@/components/tool-ui/link-preview/schema";// Register a renderer for each tool that should display a componentconst toolkit: Toolkit = {  previewLink: {    type: "backend",    render: ({ result }) => {      const parsed = safeParseSerializableLinkPreview(result);      if (!parsed) return null; // Wait for full payload before rendering      return <LinkPreview {...parsed} />;    },  },};export default function App() {  // Connect to your API route  const runtime = useChatRuntime({    transport: new AssistantChatTransport({ api: "/api/chat" }),  });  // Make tool renderers available to the runtime  const aui = useAui({ tools: Tools({ toolkit }) });  return (    <AssistantRuntimeProvider runtime={runtime} aui={aui}>      {/* Your chat thread component */}    </AssistantRuntimeProvider>  );}

toolkit maps tool names to renderers. Each renderer parses the tool result with safeParse and renders when valid. useAui and Tools connect everything to the assistant-ui runtime.

Next steps

  • Quick Start: Add your first Tool UI component to a chat app
  • Design Guidelines: The collaboration model, component roles, and constraints for building in chat
  • Actions: How interactive tool UIs feed user decisions back to the assistant
  • Receipts: How components show permanent records of past decisions
  • Gallery: Browse all available components