Next.js AI Agent Landing Pages: Vercel AI SDK Guide
Build AI agent-powered landing pages with Next.js and Vercel AI SDK 6. Streaming responses, tool calling, and real-time personalization implementation guide.
Landing pages have followed the same formula for over a decade: headline, subheadline, feature bullets, social proof, and a call to action. The format works because it is scannable and fast. But it has a fundamental limitation — every visitor sees the same content regardless of their specific questions, use case, or stage in the buying process. AI agents change that equation entirely.
An AI agent landing page combines traditional static content with an interactive chat interface powered by a large language model. The static content handles SEO, fast initial rendering, and the first impression. The agent handles everything else: answering product questions, qualifying leads, comparing features to competitors, pulling real-time pricing, and booking meetings. It is the difference between a brochure and a conversation.
This guide covers the complete implementation using Next.js and Vercel AI SDK 6. We walk through the architecture, agent configuration, streaming setup, tool calling for dynamic content, visitor personalization, SEO preservation, and production deployment patterns. Every code example follows production patterns — no toy demos.
Why AI Agents on Landing Pages
The business case for AI agents on landing pages comes down to one metric: the gap between what visitors need to know and what a static page can tell them. Enterprise software landing pages typically convert at 2-5%, which means 95% or more of visitors leave with unanswered questions. An AI agent closes that gap by providing immediate, contextual answers without requiring the visitor to search through documentation or wait for a sales response.
Static landing page limitations:
- Same content for every visitor
- Cannot answer specific product questions
- Lead qualification requires human follow-up
- Pricing and availability are static snapshots

Static landing page strengths:
- Fast load, excellent SEO
- Works without JavaScript

AI agent landing page:
- Personalized responses per visitor
- Answers any product question instantly
- Automated lead qualification and routing
- Real-time pricing and availability via tools
- Static shell preserves SEO and performance
- 24/7 availability across all time zones
The key insight is that AI agents do not replace the static landing page — they augment it. The static content handles the first impression, SEO indexing, and visitors who prefer scanning over chatting. The agent handles the 60-70% of visitors who have specific questions that the static content cannot anticipate. Both paths lead to the same conversion goals: form submission, meeting booking, or purchase.
Architecture Overview
The architecture separates static content from AI interactivity at the component level. The page itself is a React Server Component that renders all SEO-critical content at build time. The AI agent is a Client Component that mounts after hydration, connecting to a Server Action that handles model inference and tool execution.
app/landing/[product]/
├── page.tsx # Server Component (static)
│ ├── Hero section # Static: headline, description
│ ├── Features grid # Static: product features
│ ├── Social proof # Static: testimonials, logos
│ ├── Pricing table # Static: base pricing tiers
│ └── <AgentChat /> # Client: AI agent (dynamic import)
│
├── actions.ts # Server Action for AI inference
│ ├── Agent definition # Model, tools, system prompt
│ └── streamText() # Streaming response handler
│
└── components/
├── agent-chat.tsx # "use client" — chat UI
├── tool-results.tsx # Tool output renderers
└── lead-form.tsx # In-chat lead capture

This separation is critical for two reasons. First, the static Server Component renders at build time and is served from Vercel's CDN, delivering sub-100ms Time to First Byte globally. The AI agent loads asynchronously without blocking the initial render. Second, search engines index the full static content — hero, features, pricing, testimonials — without executing JavaScript. The agent adds interactivity for real visitors without sacrificing SEO.
Server Component
Static HTML at build time. Zero JavaScript for SEO content. CDN-delivered globally. Contains metadata, structured data, and all indexable content.
Server Action
Handles AI inference securely on the server. No API keys exposed to the client. Executes tools, manages conversation state, and streams responses.
Client Component
Interactive chat UI with useChat hook. Renders streaming tokens, tool results, and lead capture forms. Loads after hydration via dynamic import.
Setting Up the AI SDK Agent
AI SDK 6 introduces the Agent abstraction — a reusable configuration that bundles a model, system prompt, tools, and behavioral settings into a single object. For landing pages, the agent definition is the single most important piece of the architecture because it determines how the AI interacts with visitors.
// lib/agents/landing-agent.ts
import { openai } from "@ai-sdk/openai"
import { agent } from "ai"
import { z } from "zod/v4"
import { getProductPricing } from "@/lib/pricing"
import { submitLead } from "@/lib/crm"
import { getRelevantCaseStudies } from "@/lib/content"
export const landingAgent = agent({
model: openai("gpt-4.1-mini"),
system: `You are a helpful product specialist for
[Company]. Your role is to answer visitor questions,
explain features, and help qualified prospects take
the next step.
Guidelines:
- Be concise. Landing page visitors want quick answers.
- Use the pricing tool for any pricing questions.
- Use the case study tool when visitors ask for examples.
- Qualify leads naturally through conversation.
- Never fabricate pricing, features, or timelines.
- If unsure, suggest booking a call with the team.`,
tools: {
getProductPricing: {
description: "Get current pricing for a product tier",
parameters: z.object({
tier: z.enum(["starter", "pro", "enterprise"]),
billingCycle: z.enum(["monthly", "annual"]),
}),
execute: async ({ tier, billingCycle }) => {
return getProductPricing(tier, billingCycle)
},
},
getCaseStudies: {
description: "Get relevant case studies by industry",
parameters: z.object({
industry: z.string().describe("Visitor's industry"),
useCase: z.string().describe("Primary use case"),
}),
execute: async ({ industry, useCase }) => {
return getRelevantCaseStudies(industry, useCase)
},
},
submitLeadInfo: {
description: "Submit qualified lead information",
parameters: z.object({
email: z.string().email(),
company: z.string(),
role: z.string(),
interest: z.string(),
}),
execute: async (params) => {
return submitLead(params)
},
},
},
})

The agent definition lives in a shared library file, not inside a component or Server Action. This lets you reuse the same agent across multiple landing pages, API routes, and testing environments. The system prompt is the most impactful part — spend time crafting it with your product team. Include specific product details, messaging guidelines, and explicit instructions about what the agent should and should not say.
Streaming Responses with Server Actions
AI SDK 6 replaced the API route pattern with Server Actions for streaming AI responses. Instead of creating a /api/chat endpoint and configuring CORS, streaming middleware, and request parsing, you write a Server Action that calls streamText and returns the data stream directly. The useChat hook on the client connects to the Server Action with full type safety.
// app/landing/actions.ts
"use server"
import { streamText, type CoreMessage } from "ai"
import { landingAgent } from "@/lib/agents/landing-agent"
export async function chat(messages: CoreMessage[]) {
const result = streamText({
model: landingAgent.model,
system: landingAgent.system,
tools: landingAgent.tools,
messages,
maxSteps: 5, // Allow multi-step tool calls
})
return result.toDataStream()
}

// components/agent-chat.tsx
"use client"
import { useChat } from "ai/react"
import { chat } from "@/app/landing/actions"
export function AgentChat() {
const {
messages,
input,
handleInputChange,
handleSubmit,
isLoading,
} = useChat({ api: chat })
return (
<div className="rounded-xl border p-4">
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.content}
</div>
))}
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Ask about our product..."
/>
</form>
</div>
)
}

The Server Action approach provides several advantages over API routes for landing pages. There is no URL to configure or mismatch between client and server. TypeScript validates the entire connection at compile time. The Server Action runs in the same serverless function as the page, sharing the same environment variables and configuration. And because Server Actions use POST requests internally, they work correctly behind CDNs and reverse proxies without special streaming configuration.
- Deploy to the same region as your model provider — Vercel's iad1 (US East) is optimal for OpenAI and Anthropic, reducing round-trip latency by 50-100ms
- Use maxSteps wisely — set it to the maximum number of tool calls your agent realistically needs. Lower values reduce the risk of runaway conversations; 3-5 is typical for landing page agents
- Stream tool results — use streamText instead of generateText so users see partial responses while tools execute. The perceived latency drops dramatically when the first token arrives in under 200ms
Tool Calling for Dynamic Content
Tools are what make an AI landing page agent more than a chatbot. Without tools, the agent can only generate text from its training data and system prompt. With tools, it can fetch real-time pricing, check inventory, pull case studies, search documentation, submit leads to your CRM, and schedule meetings — all within the conversation flow.
// lib/tools/landing-tools.ts
import { z } from "zod/v4"
// App-local helpers assumed to exist elsewhere in your codebase
import { searchDocuments } from "@/lib/search"
import { createCalendarEvent } from "@/lib/calendar"
export const landingTools = {
// Real-time pricing from your billing system
checkPricing: {
description: "Get current pricing for a product plan",
parameters: z.object({
plan: z.enum(["starter", "pro", "enterprise"]),
seats: z.number().min(1).max(1000),
annual: z.boolean(),
}),
execute: async ({ plan, seats, annual }) => {
const pricing = await fetch(
`${process.env.BILLING_API}/quote`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ plan, seats, annual }),
}
).then(r => r.json())
return pricing
},
},
// Search product documentation for answers
searchDocs: {
description: "Search product docs for technical answers",
parameters: z.object({
query: z.string().describe("The technical question"),
}),
execute: async ({ query }) => {
// Vector search against your docs
const results = await searchDocuments(query, { limit: 3 })
return results.map(r => ({
title: r.title,
content: r.snippet,
url: r.url,
}))
},
},
// Book a demo meeting
bookDemo: {
description: "Schedule a product demo meeting",
parameters: z.object({
email: z.string().email(),
name: z.string(),
preferredTime: z.string(),
topic: z.string(),
}),
execute: async (params) => {
const booking = await createCalendarEvent(params)
return { confirmed: true, link: booking.meetingUrl }
},
},
}

Each tool definition has three parts: a natural language description that tells the model when to use it, a Zod schema that validates the parameters before execution, and an execute function that performs the actual operation. AI SDK 6 handles the multi-step flow automatically: the model decides which tool to call, the SDK validates and executes it, passes the result back to the model, and the model generates a response that incorporates the tool output.
- Pricing calculator with live rates
- Product documentation search
- Demo/meeting scheduler
- Lead qualification scorer
- Competitor comparison data
- Case study retrieval by industry
- Direct database writes (use CRM APIs instead)
- Payment processing (redirect to checkout)
- File uploads (separate workflow)
- Admin operations (privilege escalation risk)
- External API calls without rate limits
- Tools that expose internal system details
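Most of the "avoid" items above share a root cause: a tool's execute function runs on the server with real credentials every time the model decides to call it. One defensive pattern is to wrap each execute with a timeout and a per-conversation call budget. The sketch below is illustrative; withGuards is not an AI SDK API, and the names and limits are assumptions.

```typescript
// Hypothetical wrapper that hardens a tool's execute function with a
// timeout and a per-conversation call budget. Not part of the AI SDK;
// adapt the limits to your own traffic.
export function withGuards<I, O>(
  execute: (input: I) => Promise<O>,
  opts: { timeoutMs: number; maxCalls: number }
): (input: I) => Promise<O> {
  let calls = 0
  return async (input: I) => {
    if (++calls > opts.maxCalls) {
      throw new Error("Tool call budget exceeded for this conversation")
    }
    let timer: ReturnType<typeof setTimeout> | undefined
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error("Tool timed out")),
        opts.timeoutMs
      )
    })
    try {
      // Whichever settles first wins; the timer is cleared either way.
      return await Promise.race([execute(input), timeout])
    } finally {
      clearTimeout(timer)
    }
  }
}
```

A tool would then use `execute: withGuards(realExecute, { timeoutMs: 5_000, maxCalls: 3 })`, so a runaway multi-step loop fails fast instead of hammering the billing API.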
Real-Time Visitor Personalization
The most powerful application of AI agents on landing pages is personalization that goes beyond A/B testing. Instead of showing variant A or variant B, the agent tailors every response to the individual visitor based on context signals: their referral source, the questions they ask, the industry they mention, and their apparent stage in the buying process.
// app/landing/actions.ts
"use server"
import { streamText, type CoreMessage } from "ai"
import { headers } from "next/headers"
import { landingAgent } from "@/lib/agents/landing-agent"
export async function chat(messages: CoreMessage[]) {
const headerList = await headers()
const referer = headerList.get("referer") ?? ""
const country = headerList.get("x-vercel-ip-country") ?? ""
const city = headerList.get("x-vercel-ip-city") ?? ""
// Build context from visitor signals
const visitorContext = buildVisitorContext({
referer,
country,
city,
utmSource: extractUTM(referer, "utm_source"),
utmCampaign: extractUTM(referer, "utm_campaign"),
conversationHistory: messages,
})
const result = streamText({
model: landingAgent.model,
system: `${landingAgent.system}
Visitor context:
- Location: ${city}, ${country}
- Referral: ${visitorContext.source}
- Campaign: ${visitorContext.campaign}
- Inferred intent: ${visitorContext.intent}
Adapt your tone and examples to match this context.
If they came from a technical blog, lead with
architecture details. If from a pricing comparison,
lead with value and ROI.`,
tools: landingAgent.tools,
messages,
maxSteps: 5,
})
return result.toDataStream()
}

The personalization happens at the system prompt level, not in the UI. The Server Action reads visitor signals from request headers — geographic location from Vercel's edge headers, referral source from the referer header, and UTM parameters from the URL. These signals are injected into the system prompt before each model call, giving the agent contextual awareness without any client-side personalization logic.
Geographic Signals
Vercel provides country, region, and city from edge headers. Use for currency localization, regional pricing, compliance mentions (GDPR for EU visitors), and local case studies.
Referral Context
UTM parameters and referral source reveal the visitor's entry path. Visitors from a pricing comparison should see ROI-focused responses. Technical blog referrals get architecture-focused answers.
Intent Classification
Analyze the visitor's first message to classify intent: research, comparison, pricing, technical evaluation, or ready-to-buy. This determines the agent's strategy for the rest of the conversation.
Industry Detection
When visitors mention their company or industry, a tool fetches relevant case studies and success metrics. The agent automatically shifts its examples and value propositions to match the visitor's context.
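The Server Action above references two helpers, extractUTM and buildVisitorContext, that are app-specific rather than AI SDK APIs. A minimal sketch of both follows, with a deliberately coarse intent heuristic you would replace with real campaign logic; the full version would also fold in geography and conversation history, as the call site suggests.

```typescript
// Hypothetical helpers for the personalization Server Action.
// The intent keywords below are placeholders; tune them to your campaigns.
export function extractUTM(url: string, param: string): string {
  try {
    return new URL(url).searchParams.get(param) ?? ""
  } catch {
    return "" // referer may be empty or malformed
  }
}

export function buildVisitorContext(signals: {
  referer: string
  utmSource: string
  utmCampaign: string
}): { source: string; campaign: string; intent: string } {
  // Prefer the explicit UTM source; fall back to the referring host.
  let source = signals.utmSource
  if (!source) {
    try {
      source = new URL(signals.referer).hostname
    } catch {
      source = "direct"
    }
  }
  // Coarse intent classification from the entry path.
  const path = signals.referer.toLowerCase()
  const intent = /pricing|compare/.test(path)
    ? "comparison"
    : /docs|blog|guide/.test(path)
      ? "technical evaluation"
      : "research"
  return { source, campaign: signals.utmCampaign || "none", intent }
}
```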
Performance and SEO Considerations
Adding AI interactivity to a landing page creates a tension between performance and functionality. The solution is strict separation: everything search engines and performance tools measure (Time to First Byte, Largest Contentful Paint, Cumulative Layout Shift) is handled by the static Server Component. The AI agent loads asynchronously and does not affect any Core Web Vital metric.
// app/landing/page.tsx (Server Component)
import dynamic from "next/dynamic"
import { Suspense } from "react"
// Load agent chat without blocking initial render.
// Note: in the App Router, next/dynamic with `ssr: false` must be
// called from a Client Component, so move this call into a small
// "use client" wrapper if this page is a Server Component.
const AgentChat = dynamic(
() => import("./components/agent-chat"),
{ ssr: false } // No server rendering for the chat
)
export default function LandingPage() {
return (
<main>
{/* Static content — rendered at build time */}
<HeroSection />
<FeaturesGrid />
<SocialProof />
<PricingTable />
{/* AI agent — loads after hydration */}
<section id="chat" className="py-16">
<Suspense fallback={<ChatSkeleton />}>
<AgentChat />
</Suspense>
</section>
{/* More static content below the fold */}
<Testimonials />
<CTASection />
</main>
)
}

- All SEO content in the Server Component — never inside the chat widget
- Full metadata, structured data, and canonical URL on the static page
- Chat component uses ssr: false — invisible to crawlers
- Static pricing table alongside the dynamic agent pricing tool
- TTFB under 100ms (static CDN delivery)
- LCP under 1.5s (hero image + headline)
- CLS of 0 (chat loads below fold with fixed dimensions)
- Agent interactive within 2s of page load
The critical point is that the chat widget must not cause layout shift. Use a fixed-height container or position the chat as a floating widget in the bottom-right corner. If the agent is inline, reserve space with a skeleton loader inside a Suspense boundary. Google's Core Web Vitals penalize unexpected layout shifts, and a chat widget that pushes content down when it loads will hurt your SEO ranking regardless of the content quality.
Production Deployment Patterns
Moving an AI agent landing page from development to production requires attention to rate limiting, error handling, cost management, and observability. The stakes are higher than a typical chat demo because the agent represents your brand to potential customers on a high-traffic page.
// app/landing/actions.ts
"use server"
import { streamText, type CoreMessage } from "ai"
import { Ratelimit } from "@upstash/ratelimit"
import { Redis } from "@upstash/redis"
import { headers } from "next/headers"
import { landingAgent } from "@/lib/agents/landing-agent"
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(20, "1 h"),
analytics: true,
})
export async function chat(messages: CoreMessage[]) {
// Rate limit by IP
const headerList = await headers()
const ip = headerList.get("x-forwarded-for") ?? "unknown"
const { success } = await ratelimit.limit(ip)
if (!success) {
throw new Error("Rate limit exceeded. Please try again later.")
}
// Limit conversation length to control costs
const recentMessages = messages.slice(-10)
const result = streamText({
model: landingAgent.model,
system: landingAgent.system,
tools: landingAgent.tools,
messages: recentMessages,
maxSteps: 5,
maxTokens: 1000, // Cap response length
onError: (error) => {
// Log to your observability platform
console.error("Agent error:", error)
},
})
return result.toDataStream()
}Rate Limiting
Protect against abuse with IP-based rate limiting using Upstash Redis. Set limits based on your expected conversion funnel: 20 messages per hour is generous for landing page visitors. Monitor analytics to adjust thresholds based on actual usage patterns.
Conversation Length Limits
Cap the conversation context window by slicing to the most recent 10 messages. This keeps token costs from growing without bound in long conversations and keeps the model focused on the current topic. For landing pages, most conversions happen within 4-6 messages.
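The slice(-10) cap in the code above counts messages, not tokens, so ten very long messages can still carry a large context. A token-aware variant is a small extension; the sketch below is an assumption-laden heuristic (roughly four characters per token for English prose) that always keeps the most recent message.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string }

// Rough heuristic: ~4 characters per token for English prose.
const estimateTokens = (text: string) => Math.ceil(text.length / 4)

// Walk backwards from the newest message, keeping messages until the
// token budget is spent. The newest message is always kept.
export function trimContext(
  messages: ChatMessage[],
  maxTokens = 2000
): ChatMessage[] {
  const kept: ChatMessage[] = []
  let budget = maxTokens
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content)
    if (budget - cost < 0 && kept.length > 0) break
    budget -= cost
    kept.unshift(messages[i])
  }
  return kept
}
```

In the Server Action, `messages.slice(-10)` would become `trimContext(messages)`.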
Error Handling and Fallbacks
When the model provider has an outage, the landing page must still convert. Use the Vercel AI Gateway for automatic provider failover. On the client side, catch errors from useChat and display a contact form fallback — the visitor can still submit their question as a lead even if the AI is temporarily unavailable.
Observability and Analytics
Track every conversation as a structured event: messages sent, tools called, lead qualified, meeting booked. Use AI SDK 6's Agent DevTools in development for real-time debugging. In production, stream conversation events to your analytics pipeline for conversion attribution and agent performance monitoring.
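As a concrete sketch of "structured events", the shape below is hypothetical (it is not an AI SDK or Vercel Analytics type); the idea is that every conversation emits typed events that can later be rolled up into a conversion funnel for attribution.

```typescript
// Hypothetical event shape for agent conversation analytics.
export type AgentEvent = {
  type: "message" | "tool_call" | "lead_qualified" | "meeting_booked"
  conversationId: string
  timestamp: number
  data?: Record<string, unknown>
}

// Roll a stream of events up into per-stage funnel counts, counting
// each conversation once regardless of how many events it emitted.
export function summarizeFunnel(events: AgentEvent[]) {
  const byConversation = new Map<string, Set<AgentEvent["type"]>>()
  for (const e of events) {
    const types =
      byConversation.get(e.conversationId) ?? new Set<AgentEvent["type"]>()
    types.add(e.type)
    byConversation.set(e.conversationId, types)
  }
  let engaged = 0
  let qualified = 0
  let booked = 0
  for (const types of byConversation.values()) {
    if (types.has("message")) engaged++
    if (types.has("lead_qualified")) qualified++
    if (types.has("meeting_booked")) booked++
  }
  return { engaged, qualified, booked }
}
```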
Building Your First AI Agent Landing Page
AI agent landing pages represent the next evolution of conversion optimization. Instead of guessing what content each visitor needs and A/B testing toward marginal improvements, you give visitors a direct line to an AI that understands your product as deeply as your best sales representative. The technology stack — Next.js Server Components for SEO, AI SDK 6 for agent orchestration, and Vercel for deployment — makes this pattern accessible to any team with a React codebase.
Start with the simplest possible implementation: a landing page with a chat widget that answers product questions from a detailed system prompt. Add a single tool — a pricing calculator or documentation search. Measure the impact on conversion rate over two weeks. Then expand: add lead qualification tools, CRM integration, personalization based on referral source, and meeting scheduling. Each addition compounds the conversion lift because visitors can accomplish more without leaving the page.
The implementation details in this guide follow production patterns used by teams serving thousands of daily visitors. The architecture preserves SEO and Core Web Vitals while adding AI interactivity. The tool calling system keeps the agent grounded in real data rather than hallucinated responses. And the deployment patterns protect against cost overruns and abuse. Digital Applied's web development services help businesses implement these patterns — from initial agent design through production deployment and ongoing optimization.