# AI Filters

The `data-table-filter-command-ai` block adds natural language filtering to your data table. Type a query like "5xx errors in production last 24h" and the AI translates it into structured filters — applied progressively as the response streams in.
Provider-agnostic. Works with any LLM via the Vercel AI SDK.
## Installation

```bash
npx shadcn@latest add https://data-table-filters.com/r/data-table-filter-command-ai.json
```

This installs the AI command palette component and the `@/lib/ai` utilities.
## Prerequisites
- The `data-table-filter-command` block must be installed (auto-installed as a dependency)
- The `data-table-schema` block must be installed — AI context is generated from your table schema
- An LLM provider package (e.g., `@ai-sdk/anthropic`, `@ai-sdk/openai`)
```bash
pnpm add @ai-sdk/anthropic
```

## API Route Setup
Create a POST route that streams AI-inferred filter state. The `createAIFilterHandler` factory generates the system prompt and Zod output schema from your table schema automatically.
### Direct Provider
```ts
// app/api/ai-filters/route.ts
import { anthropic } from "@ai-sdk/anthropic";
import { createAIFilterHandler } from "@/lib/ai";
import { tableSchema } from "../table-schema";

export const POST = createAIFilterHandler({
  model: anthropic("claude-sonnet-4-20250514"),
  schema: tableSchema.definition,
});
```

Set your API key in `.env`:
```bash
ANTHROPIC_API_KEY=sk-ant-...
```

Swap `anthropic(...)` for any Vercel AI SDK provider — `openai("gpt-4o-mini")`, `google("gemini-2.0-flash")`, etc.
### Vercel AI Gateway
The live demo uses the Vercel AI Gateway, which provides a unified endpoint for multiple providers:
```ts
// app/api/ai-filters/route.ts
import { createAnthropic } from "@ai-sdk/anthropic";
import { createAIFilterHandler } from "@/lib/ai";
import { tableSchema } from "../table-schema";

const anthropic = createAnthropic({
  baseURL: "https://ai-gateway.vercel.sh/v1",
  apiKey: process.env.AI_GATEWAY_API_KEY,
});

export const POST = createAIFilterHandler({
  model: anthropic("anthropic/claude-sonnet-4-20250514"),
  schema: tableSchema.definition,
});
```

## Client Integration
Drop `DataTableFilterAICommand` into the `commandSlot` prop of `DataTableInfinite`:
```tsx
import { DataTableFilterAICommand } from "@/components/data-table/data-table-filter-command-ai";

<DataTableInfinite
  // ... other props
  commandSlot={
    <DataTableFilterAICommand
      schema={filterSchema.definition}
      tableSchema={tableSchema.definition}
      api="/api/ai-filters"
      tableId="my-table"
    />
  }
/>;
```

## Props
| Prop | Type | Description |
|---|---|---|
| `schema` | `SchemaDefinition` | BYOS filter schema for structured query parsing |
| `tableSchema` | `TableSchemaDefinition` | Table schema for AI context and output schema generation |
| `api` | `string` | API endpoint path (e.g., `"/api/ai-filters"`) |
| `tableId` | `string` | Unique ID for localStorage history (default: `"default"`) |
## How It Works
The AI command palette shares the same input as the regular command palette. What happens when you press Enter depends on the input:
- Structured input (`host:api`, `regions:ams,gru`, `latency:100-500`) — parsed instantly by the existing command palette. No AI involved.
- Natural language ("slow requests from eu regions") — sent to your API route. The AI streams back a structured JSON object matching your table schema's filter types.
The response is applied progressively as it streams:
- Input and checkbox filters update immediately as values arrive
- Slider and timerange filters wait until both bounds are present before applying (to avoid flashing partial ranges)
After the stream completes, a final validation pass clamps slider values to defined bounds, strips invalid checkbox options, and converts ISO date strings to Date objects.
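That final pass can be sketched as follows — a simplified illustration with a hypothetical column-metadata shape, not the actual helper in `@/lib/ai`:

```typescript
// Simplified column metadata; the real table schema carries more fields.
type ColumnMeta =
  | { kind: "slider"; min: number; max: number }
  | { kind: "checkbox"; options: string[] }
  | { kind: "timerange" };

// Hypothetical post-stream validation: clamp sliders, strip unknown
// checkbox options, convert ISO strings to Date objects.
function finalizeFilterValue(meta: ColumnMeta, value: unknown): unknown {
  switch (meta.kind) {
    case "slider": {
      const [lo, hi] = value as [number, number];
      const clamp = (n: number) => Math.min(Math.max(n, meta.min), meta.max);
      return [clamp(lo), clamp(hi)];
    }
    case "checkbox":
      return (value as string[]).filter((v) => meta.options.includes(v));
    case "timerange":
      return (value as string[]).map((iso) => new Date(iso));
  }
}
```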
The command palette also stores the last 5 searches in localStorage (namespaced by tableId) for quick re-use.
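The history behavior can be sketched like this — an illustration of the described "last 5, namespaced by `tableId`" semantics; the key format and helper name are hypothetical, and the component's internals may differ:

```typescript
// Minimal storage interface so the sketch works with localStorage or a fake.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const HISTORY_LIMIT = 5;

// Push a query onto the per-table history: most-recent first,
// de-duplicated, capped at HISTORY_LIMIT entries.
function pushSearchHistory(storage: KVStore, tableId: string, query: string): string[] {
  const key = `ai-filter-history:${tableId}`; // hypothetical key format
  const prev: string[] = JSON.parse(storage.getItem(key) ?? "[]");
  const next = [query, ...prev.filter((q) => q !== query)].slice(0, HISTORY_LIMIT);
  storage.setItem(key, JSON.stringify(next));
  return next;
}
```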
## Schema Considerations
The quality of AI filter inference depends directly on the metadata in your table schema. Two things matter most:
### Add descriptions to your columns
Descriptions are essential for AI filtering accuracy. Without them, the AI only sees field names and types, which can lead to ambiguous results. A description tells the AI what each column represents:
```ts
const tableSchema = createTableSchema({
  host: col
    .string()
    .label("Host")
    .description("Origin server hostname, e.g. api.example.com")
    .filterable("input"),
  latency: col
    .number()
    .label("Latency")
    .description("Response time in milliseconds")
    .filterable("slider", { min: 0, max: 5000, unit: "ms" }),
});
```

The description, label, allowed values, min/max bounds, and unit are all included in the AI prompt. The more context you provide, the better the inference.
### `commandDisabled` fields are still available to AI
Fields with .commandDisabled() are hidden from the manual command palette suggestions, but they are still included in the AI prompt. This is intentional — some fields are hard to use with key:value syntax but easy to express in natural language.
For example, the date column in the live demo has commandDisabled() because manually typing ISO date ranges is impractical. But saying "last 24 hours" or "this week" works naturally with the AI.
```ts
date: col
  .timestamp()
  .label("Date")
  .commandDisabled() // Hidden from manual palette, available to AI
  .sortable(),
```

## Customization
### Swapping providers
Replace the model in your API route. Any Vercel AI SDK provider works:
```ts
import { openai } from "@ai-sdk/openai";

export const POST = createAIFilterHandler({
  model: openai("gpt-4o-mini"),
  schema: tableSchema.definition,
});
```

### Prompt caching
The handler automatically applies Anthropic prompt caching to the static system prompt via `cacheControl: { type: "ephemeral" }`. This means repeated queries reuse the cached prompt, reducing latency and cost. This works out of the box with Anthropic models — no configuration needed.
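The shape of that call can be sketched with the AI SDK's message-level provider options. This is an illustration, not the handler's actual source; the prompt, schema, and query below are placeholders standing in for what the handler generates.

```typescript
import { streamObject } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

// Placeholders: in the real handler these come from generateAIPrompt,
// generateAIOutputSchema, and the user's natural-language input.
const systemPrompt = "...static filter-inference instructions...";
const outputSchema = z.object({ host: z.string().optional() });
const query = "5xx errors in production last 24h";

// Marking the static system message as cacheable lets Anthropic reuse it
// across requests; only the short user query varies per call.
const result = streamObject({
  model: anthropic("claude-sonnet-4-20250514"),
  schema: outputSchema,
  messages: [
    {
      role: "system",
      content: systemPrompt,
      providerOptions: {
        anthropic: { cacheControl: { type: "ephemeral" } },
      },
    },
    { role: "user", content: query },
  ],
});
```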
### Using the library exports directly
If you don't use the Vercel AI SDK, you can use the library exports to build a custom integration:
```ts
import {
  generateAIPrompt,
  generateAIOutputSchema,
  parseAIResponse,
} from "@/lib/ai";

// Generate the prompt and Zod schema from your table schema
const prompt = generateAIPrompt(tableSchema.definition, { now: new Date() });
const schema = generateAIOutputSchema(tableSchema.definition);

// Call your LLM with the prompt and schema
const response = await yourLLM({ system: prompt, schema });

// Validate and apply the response
const state = parseAIResponse(tableSchema.definition, response);
if (state) {
  for (const [key, value] of Object.entries(state)) {
    table.getColumn(key)?.setFilterValue(value);
  }
}
```