A comprehensive Next.js application demonstrating five different approaches to building AI-powered user interfaces: CopilotKit, Vercel AI SDK (Generative UI), AG-UI Protocol, Declarative A2UI, and MCP Apps.
```bash
# Install dependencies
npm install
cd mcp-server && npm install && npm run build && npm run build:ui && cd ..

# Configure environment
cp .env.example .env.local
# Add your OPENAI_API_KEY to .env.local

# Start dev server (Next.js + MCP server run concurrently)
npm run dev
```

Visit http://localhost:3000 to explore the demos!
**Note:** `npm run dev` uses `concurrently` to start both the Next.js server and the MCP HTTP server (port 3001) at the same time. Demo 5 requires the MCP server to be running.
- Overview
- The Five Demos
- Project Structure
- Setup & Configuration
- Key Technologies
- Documentation
- Testing
- Troubleshooting
## Overview

This project showcases five distinct patterns for building AI-powered UIs, ranging from high-level frameworks to low-level protocols:
- Demo 1: Trip Planner — High-level framework (CopilotKit)
- Demo 2: Recipe Explorer — Medium-level SDK (Vercel AI SDK)
- Demo 3: Story Builder — Low-level protocol (AG-UI/A2UI)
- Demo 4: Interactive Story — Declarative A2UI with CopilotKit
- Demo 5: MCP Apps — Model Context Protocol with interactive UIs via `ui://` resources
All demos use OpenAI GPT-5 mini as the underlying language model.
## The Five Demos

### Demo 1: Trip Planner (CopilotKit)

Route: `/trip-planner`
A trip planning application where the AI can read your itinerary and modify it through natural language.
Key Features:
- `useCopilotReadable` — Share app state with the AI
- `useCopilotAction` — Register functions the AI can call
- `CopilotPopup` — Pre-built chat interface
- State synchronization between UI and AI
Try saying:
- "Plan a 2-week trip through Japan"
- "Add Paris for 4 days with food and museum activities"
- "Set the budget to $5000"
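Both hooks take plain configuration objects. Here is a hedged sketch of what a trip-planner action definition might look like, modeled as a standalone object so the handler can be exercised outside React — the `addDestination` name, parameters, and handler body are illustrative, not the app's actual code:

```typescript
// Hypothetical shape of the config object passed to useCopilotAction.
// In a component it would be registered as: useCopilotAction(addDestinationAction);
type Destination = { city: string; days: number };

const itinerary: Destination[] = [];

const addDestinationAction = {
  name: "addDestination",
  description: "Add a city to the trip itinerary",
  parameters: [
    { name: "city", type: "string", description: "City to visit" },
    { name: "days", type: "number", description: "How long to stay" },
  ],
  // The AI calls this handler with arguments matching `parameters`
  handler: async ({ city, days }: Destination) => {
    itinerary.push({ city, days });
    return `Added ${city} for ${days} days`;
  },
};
```

Because the action is declared as data (name, description, typed parameters), the model can discover it and call it from natural language like the prompts above.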
### Demo 2: Recipe Explorer (Vercel AI SDK)

Route: `/recipe-explorer`
An interactive recipe chat where the AI generates rich, structured UI components instead of plain text.
Key Features:
- `useChat` hook for streaming responses
- Tool calling with `streamText`
- Dynamic React component generation
- Inline UI rendering in chat messages
Try saying:
- "Show me a pasta recipe"
- "I want to make tacos"
- "Suggest a dessert recipe"
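The "inline UI rendering" step boils down to mapping each tool invocation in a chat message onto a component. A minimal sketch of that dispatch, with hypothetical tool names and plain strings standing in for JSX:

```typescript
// Hypothetical tool-call part as it might appear inside a chat message.
type ToolInvocation = { toolName: string; result: Record<string, string> };

// Map tool names to renderers; a real app would return JSX such as
// <RecipeCard {...result} /> instead of a string.
const renderers: Record<string, (r: Record<string, string>) => string> = {
  showRecipe: (r) => `RecipeCard(${r.title})`,
  suggestDessert: (r) => `DessertCard(${r.title})`,
};

function renderToolInvocation(inv: ToolInvocation): string {
  const render = renderers[inv.toolName];
  // Fall back gracefully when the model calls a tool with no renderer
  return render ? render(inv.result) : `Unknown tool: ${inv.toolName}`;
}
```

When a message streams in, each tool invocation is looked up in this table and replaced with a rich component instead of plain text.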
### Demo 3: Story Builder (AG-UI Protocol)

Route: `/story-builder`
An interactive story generator with a real-time debug panel showing the underlying event protocol.
Key Features:
- Raw Server-Sent Events (SSE) handling
- 17 typed AG-UI event types
- A2UI component generation via tool calls
- Live event debugging panel
- Manual event stream processing
Try saying:
- "Start a fantasy adventure"
- "Create a sci-fi mystery story"
- "Tell me a detective story"
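"Manual event stream processing" amounts to splitting the SSE byte stream into `data:` frames and parsing each as a typed event. A minimal sketch of that step — the event names used here are illustrative, not the full list of 17 AG-UI types:

```typescript
// An AG-UI event as parsed from one SSE "data:" line.
type AgUiEvent = { type: string; [key: string]: unknown };

// Split a chunk of SSE text into JSON events. Frames are separated by
// blank lines; each frame carries one "data: {...}" payload.
function parseSseChunk(chunk: string): AgUiEvent[] {
  const events: AgUiEvent[] = [];
  for (const frame of chunk.split("\n\n")) {
    for (const line of frame.split("\n")) {
      if (line.startsWith("data: ")) {
        events.push(JSON.parse(line.slice("data: ".length)));
      }
    }
  }
  return events;
}
```

The debug panel in this demo essentially logs the output of a parser like this before handing each typed event to the UI layer.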
### Demo 4: Interactive Story (Declarative A2UI)

Route: `/copilot-story`
Story builder showcasing the declarative A2UI pattern — separating business logic from UI rendering for better maintainability.
Key Features:
- Declarative A2UI — `handler` for logic, `render` for UI
- Separation of concerns pattern
- `useCopilotAction` with a `render` property
- Status tracking (`executing`, `complete`)
- Inline generative UI components
The Pattern:
```tsx
useCopilotAction({
  handler: async (args) => {
    // Business logic: update state, process data
    return data;
  },
  render: ({ args, result, status }) => {
    // UI rendering: transform data into components
    return <Component {...result} />;
  }
});
```

Try saying:
- "Start a mystery adventure"
- "Tell me a space exploration story"
- "Create a fantasy quest"
### Demo 5: MCP Apps (Model Context Protocol)

Route: `/mcp-apps`
Experience the official MCP Apps pattern — MCP servers that deliver interactive UIs via `ui://` resources, rendered in sandboxed iframes.
What Makes This "True" MCP Apps:
- HTTP transport (`StreamableHTTPServerTransport`) on port 3001 — not stdio
- `@modelcontextprotocol/ext-apps` SDK for the MCP Apps extension
- Tools include `_meta.ui.resourceUri` pointing to `ui://` resources
- `MCPAppsMiddleware` auto-discovers tools and fetches their HTML resources
- CopilotKit renders the HTML in a sandboxed iframe
- UI communicates bidirectionally via the `App` class and `postMessage`
- Live data polling every 2 seconds for fresh system metrics
Key Features:
- Real-time CPU and memory monitoring from your machine
- CopilotKit v2 `BuiltInAgent` + `createCopilotEndpointSingleRoute` (Hono)
- `MCPAppsMiddleware` auto-wiring between CopilotKit and the MCP server
- Vite-built iframe UI (`mcp-server/ui/`)
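The link between a tool and its iframe UI is a single metadata field on the tool descriptor. Roughly, the `show_system_monitor` tool advertises a shape like this — the `description` text is illustrative, but the `_meta.ui.resourceUri` value matches the resource this server exposes:

```json
{
  "name": "show_system_monitor",
  "description": "Display live CPU and memory metrics",
  "_meta": {
    "ui": { "resourceUri": "ui://system-monitor/app" }
  }
}
```

`MCPAppsMiddleware` looks for this field during tool discovery, fetches the HTML behind the `ui://` URI, and hands it to CopilotKit for sandboxed rendering.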
Architecture:
```
CopilotChat
  ↓
CopilotKit v2 (BuiltInAgent + MCPAppsMiddleware)  ←  /api/copilotkit-mcp
  ↓ auto-discovers tools + fetches ui:// resources
MCP Server (HTTP, port 3001)  ←  mcp-server/
  ↓ tool: show_system_monitor
  │   └─ _meta.ui.resourceUri = "ui://system-monitor/app"
  ↓
HTML bundle (mcp-server/ui/)  ←  rendered in sandboxed iframe
  ↓ postMessage ↕
App class (bidirectional communication)
  ↓ live polling
Node.js os module (real CPU/memory data)
```
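The bottom of this stack (`mcp-server/src/metrics.ts`) can be approximated with nothing but the Node.js `os` module. A hedged sketch — the function and field names here are assumptions, not the server's actual implementation:

```typescript
import os from "node:os";

// Snapshot of memory usage plus basic CPU info: the kind of payload the
// system-monitor iframe could poll every 2 seconds.
function collectMetrics() {
  const total = os.totalmem();
  const free = os.freemem();
  return {
    cpuCount: os.cpus().length,
    cpuModel: os.cpus()[0]?.model ?? "unknown",
    memoryTotalBytes: total,
    memoryUsedBytes: total - free,
    memoryUsedPercent: Math.round(((total - free) / total) * 100),
  };
}

console.log(collectMetrics());
```

Because the data comes straight from the host OS, the numbers in the iframe reflect your machine, not canned demo values.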
Try saying:
- "Show me system metrics"
- "What's my CPU usage?"
- "Display the system monitor"
- "How does this MCP server work?"
Learn More:
- Server implementation: `mcp-server/README.md`
- MCP Protocol: https://modelcontextprotocol.io
- MCP SDK: https://github.com/modelcontextprotocol/typescript-sdk
## Project Structure

```
ai-ui-demos/
├── src/
│   ├── app/                      # Next.js pages and API routes
│   │   ├── page.tsx              # Home page with demo links
│   │   ├── trip-planner/         # Demo 1: CopilotKit
│   │   ├── recipe-explorer/      # Demo 2: Vercel AI SDK
│   │   ├── story-builder/        # Demo 3: AG-UI Protocol
│   │   ├── copilot-story/        # Demo 4: Declarative A2UI
│   │   ├── mcp-apps/             # Demo 5: MCP Apps
│   │   └── api/                  # Backend API routes
│   │       ├── copilotkit/       # CopilotKit runtime (Demos 1, 4)
│   │       ├── copilotkit-mcp/   # CopilotKit v2 + MCPAppsMiddleware (Demo 5)
│   │       ├── chat/             # Vercel AI SDK chat endpoint (Demo 2)
│   │       └── agent/            # AG-UI event emitter (Demo 3)
│   ├── components/               # Shared React components
│   │   ├── recipe-cards.tsx      # Recipe UI components
│   │   └── story-components.tsx  # Story A2UI components
│   └── lib/                      # Utility functions
│
├── mcp-server/                   # Standalone MCP server (Demo 5)
│   ├── src/
│   │   ├── index.ts              # HTTP MCP server entry point
│   │   ├── metrics.ts            # CPU/memory metrics collection
│   │   └── app.ts                # App class for iframe communication
│   ├── ui/
│   │   └── app.html              # Iframe UI source (built with Vite)
│   ├── dist/                     # Compiled JS (generated)
│   ├── package.json
│   └── README.md
│
├── docs/                         # Supplemental documentation
│   ├── SETUP.md
│   ├── COPILOT_STORY_DEMO.md
│   ├── DEBUG_FINDINGS.md
│   └── PLAYWRIGHT_DEBUG_SUMMARY.md
│
├── tests/
│   ├── playwright.config.ts
│   └── playwright/               # Playwright test specs
│
├── .env.example                  # Environment variable template
├── package.json
└── README.md
```
## Setup & Configuration

Prerequisites:

- Node.js 18+
- npm
- OpenAI API key (platform.openai.com)
Create a `.env.local` file:

```bash
OPENAI_API_KEY=sk-your_api_key_here
OPENAI_MODEL_NAME=gpt-5-mini
```

The MCP server must be built before running the dev environment. `npm run dev` handles this automatically via `concurrently`, but you can also build manually:
```bash
cd mcp-server
npm install
npm run build      # compile TypeScript
npm run build:ui   # build the Vite iframe bundle
```

Supported models (set via `OPENAI_MODEL_NAME`):

| Model | Value | Notes |
|---|---|---|
| GPT-5 mini | `gpt-5-mini` | Recommended — current default |
| GPT-4o | `gpt-4o` | Fallback if GPT-5 mini unavailable |
| GPT-4o mini | `gpt-4o-mini` | Faster and cheaper for development |
Tool calling (required for Demos 2–5) works best with GPT-5 mini or GPT-4o.
## Key Technologies

### CopilotKit

React framework for building AI copilots. Provides `useCopilotReadable`, `useCopilotAction`, and pre-built UI components. Demo 5 uses the v2 runtime (`CopilotRuntime` + `BuiltInAgent` + `createCopilotEndpointSingleRoute`).

### Vercel AI SDK

Provides `useChat`, `streamText`, and tool calling for AI-powered applications.

### AG-UI Protocol

Agent-User Interaction Protocol. An event-based specification for real-time agent ↔ UI communication using Server-Sent Events (SSE) with 17 typed event types.

### A2UI

Agent-to-User Interface pattern where agents generate declarative UI definitions rendered as native React components. The declarative variant separates business logic (`handler`) from rendering (`render`).

### MCP Apps

Official extension to the Model Context Protocol enabling servers to attach interactive UIs (`ui://` resources) to tools. When the AI calls a tool, the middleware fetches the associated HTML and renders it in a sandboxed iframe.

### MCPAppsMiddleware

Middleware that sits between CopilotKit and an MCP server. It auto-discovers tools, handles `ui://` resource fetching, and wires iframe rendering into the CopilotKit chat flow.
## Documentation

- `docs/SETUP.md` — Detailed setup with troubleshooting
- `docs/COPILOT_STORY_DEMO.md` — CopilotKit vs raw AG-UI comparison
- `docs/DEBUG_FINDINGS.md` — Common issues and solutions
- `mcp-server/README.md` — MCP server deep-dive
## Testing

```bash
# Run all Playwright tests
npx playwright test

# Run a specific test
npx playwright test tests/playwright/submit-test.spec.ts

# Run in headed mode (see the browser)
npx playwright test --headed
```

## Troubleshooting

**API key issues:** Ensure `OPENAI_API_KEY` in `.env.local` starts with `sk-` and is valid.
**Demo 5 / MCP server issues:**

- Ensure `npm run dev` started the MCP server (check for "MCP server running on port 3001" in the terminal)
- Verify `mcp-server/dist/` exists — run `cd mcp-server && npm run build`
- Verify `mcp-server/dist/ui/` exists — run `cd mcp-server && npm run build:ui`
**Chat not responding:**

- Check the browser console for errors
- Verify the appropriate API endpoint is accessible (`/api/copilotkit` or `/api/copilotkit-mcp`)
- Confirm `OPENAI_API_KEY` is set
**A2UI components not rendering:**

- Do not use `render: "ComponentName"` (a string) in `useCopilotAction` — pass a function
- Verify `render` functions return JSX, not strings
**Rate limits:**

- Reduce request frequency or use `gpt-4o-mini` / `gpt-5-mini` for development
- Upgrade your OpenAI account tier for higher limits
This is a demonstration project. Feel free to experiment, try different prompts, compare architectural approaches, and learn about AI UI patterns.
**License:** MIT — see the LICENSE file for details.
Educational Note: Start with Demo 1 (CopilotKit — highest abstraction) and work toward Demo 5 (MCP Apps — full protocol stack) to understand how each layer of abstraction simplifies complexity.