MCP Server
The data-table-mcp block exposes your data table as an MCP endpoint. AI agents can discover the table's schema via `tools/list` and query it with filters, cursor-based pagination, sorting, and faceted stats through a single `query_table` tool.
Stateless. Serverless-compatible. Schema auto-generated from your existing field definitions.
Installation
```sh
npx shadcn@latest add https://data-table-filters.com/r/data-table-mcp.json
```

This installs `@/lib/mcp` and the `@modelcontextprotocol/sdk` dependency.
Prerequisites
- The data-table block must be installed (provides the BYOS schema system)
- A data source — any async function that accepts filters and returns rows + total
Route Setup
Create a route handler co-located with your table's API. Export POST, GET, and DELETE for Streamable HTTP spec compliance.
The recommended pattern derives the MCP schema from your existing filterSchema, excluding UI-only fields (live, uuid) via destructuring. This automatically includes all filter fields, sorting, and cursor-based pagination.
In-Memory Example
```typescript
// app/infinite/api/mcp/route.ts
import { createTableMCPHandler } from "@/lib/mcp";
import {
  filterData,
  getFacetsFromData,
  sortData,
  splitData,
} from "@/app/infinite/api/helpers";
import { filterSchema } from "@/app/infinite/filter-schema";
import { mock, mockLive } from "@/app/infinite/api/mock";
import type { SchemaDefinition } from "@/lib/store/schema";

// Derive MCP schema from filterSchema, excluding UI-only fields
const { live, uuid, ...mcpSchema } =
  filterSchema.definition satisfies SchemaDefinition;

const handler = createTableMCPHandler({
  name: "data-table-filters",
  description:
    "Query HTTP request logs with filters for level, method, host, pathname, latency, status, regions, date range, sort, and cursor-based pagination (cursor, size, direction)",
  schema: mcpSchema,
  getData: async ({ filters }) => {
    const totalData = [...mockLive, ...mock];
    const filtered = filterData(totalData, filters);
    const sorted = sortData(filtered, filters.sort ?? null);
    const paginated = splitData(sorted, {
      cursor: filters.cursor ?? null,
      size: filters.size ?? 40,
      direction: filters.direction ?? "next",
    });
    const facets = getFacetsFromData(filtered);
    const total = filtered.length;
    const rows = paginated.map((row) => ({
      ...row,
      date: row.date.toISOString(),
    }));
    return { rows, total, facets };
  },
});

export { handler as POST, handler as GET, handler as DELETE };
```

Drizzle/Postgres Example
```typescript
// app/drizzle/api/mcp/route.ts
import { createTableMCPHandler } from "@/lib/mcp";
import { db } from "@/db/drizzle";
import { logs } from "@/db/drizzle/schema";
import { createDrizzleHandler } from "@/lib/drizzle";
import { filterSchema } from "@/app/infinite/filter-schema";
import { columnMapping } from "../../column-mapping";
import { tableSchema } from "../../table-schema";
import type { SchemaDefinition } from "@/lib/store/schema";

const { live, uuid, ...mcpSchema } =
  filterSchema.definition satisfies SchemaDefinition;

const drizzleHandler = createDrizzleHandler({
  db,
  table: logs,
  schema: tableSchema.definition,
  columnMapping,
  cursorColumn: "date",
  defaultSize: 40,
});

const handler = createTableMCPHandler({
  name: "data-table-drizzle",
  description:
    "Query HTTP request logs (Drizzle/Postgres) with filters for level, method, host, pathname, latency, status, regions, date range, sort, and cursor-based pagination (cursor, size, direction)",
  schema: mcpSchema,
  getData: async ({ filters }) => {
    const result = await drizzleHandler.execute(
      filters as Record<string, unknown>,
    );
    type LogRow = typeof logs.$inferSelect;
    const rows = result.data.map((row) => {
      const r = row as LogRow;
      return {
        uuid: r.uuid,
        level: r.level,
        method: r.method,
        host: r.host,
        pathname: r.pathname,
        status: r.status,
        latency: r.latency,
        regions: r.regions,
        date: r.date.toISOString(),
        message: r.message ?? undefined,
        "timing.dns": r.timingDns,
        "timing.connection": r.timingConnection,
        "timing.tls": r.timingTls,
        "timing.ttfb": r.timingTtfb,
        "timing.transfer": r.timingTransfer,
      };
    });
    return { rows, total: result.filterRowCount, facets: result.facets };
  },
});

export { handler as POST, handler as GET, handler as DELETE };
```

Config
| Option | Type | Default | Description |
|---|---|---|---|
| `schema` | `SchemaDefinition` | — | BYOS field definitions for filter params |
| `description` | `string` | — | Tool description (shown to agents) |
| `getData` | `(opts) => Promise<...>` | — | Data source function |
| `name` | `string` | `"data-table"` | MCP server name in `serverInfo` |
The `query_table` Tool
The handler registers a single MCP tool with these parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `filters` | `object` (optional) | `{}` | Auto-generated from your schema |
| `format` | `"json" \| "stats"` | `"json"` | Output format |
Pagination & Sorting
Pagination and sorting are part of the `filters` object, derived from your `filterSchema`:
| Filter field | Type | Default | Description |
|---|---|---|---|
| `cursor` | `number` (Unix ms) | now | Cursor timestamp |
| `size` | `number` | `40` | Rows per page |
| `direction` | `"prev" \| "next"` | `"next"` | Pagination direction |
| `sort` | `{ id, desc }` | — | Sort descriptor |
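To make the cursor semantics concrete, here is an illustrative, self-contained sketch of this style of pagination — not the block's actual `splitData` helper. It assumes rows are sorted newest-first, with `next` paging toward older rows and `prev` toward newer ones:

```typescript
type Row = { date: number }; // Unix ms

// Illustrative cursor pagination: rows are assumed sorted newest-first.
// "next" returns rows older than the cursor, "prev" rows newer than it.
function paginate<T extends Row>(
  rows: T[],
  opts: { cursor: number | null; size: number; direction: "prev" | "next" },
): T[] {
  const cursor = opts.cursor ?? Date.now();
  const candidates =
    opts.direction === "next"
      ? rows.filter((r) => r.date < cursor)
      : rows.filter((r) => r.date > cursor);
  return candidates.slice(0, opts.size);
}

const rows = [{ date: 3000 }, { date: 2000 }, { date: 1000 }];
paginate(rows, { cursor: 2500, size: 2, direction: "next" });
// → [{ date: 2000 }, { date: 1000 }]
```

The default cursor of "now" means a first call without a cursor starts at the newest rows; each subsequent page passes the last row's timestamp as the new cursor.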
Output formats
`json` — returns rows with total count:

```json
{ "rows": [...], "total": 1234 }
```

`stats` — returns total count and facet metadata (no rows):

```json
{
  "total": 1234,
  "facets": {
    "level": { "rows": [{ "value": "error", "total": 42 }], "total": 1234 }
  }
}
```

Schema Mapping
Your BYOS field types are automatically converted to typed tool parameters:
| Field type | Tool parameter type |
|---|---|
| `field.string()` | string (optional) |
| `field.number()` | number (optional) |
| `field.boolean()` | boolean (optional) |
| `field.timestamp()` | number (optional, Unix ms) |
| `field.stringLiteral(["a", "b"])` | enum `["a", "b"]` (optional) |
| `field.array(field.number())` | array of number (optional) |
| `field.array(field.stringLiteral(...))` | array of enum (optional) |
Timestamp fields accept Unix milliseconds over the wire and are deserialized to Date objects before reaching your getData function.
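That deserialization step can be sketched as follows. This is illustrative, not the library's internal code, and the handling of `[from, to]` range pairs is an assumption:

```typescript
// Illustrative: convert Unix-ms numbers to Date for timestamp-typed fields,
// leaving every other filter value untouched.
function deserializeTimestamps(
  filters: Record<string, unknown>,
  timestampFields: Set<string>,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...filters };
  for (const key of timestampFields) {
    const value = out[key];
    if (typeof value === "number") out[key] = new Date(value);
    // Assumed: range filters arrive as [from, to] pairs of Unix ms.
    if (Array.isArray(value)) {
      out[key] = value.map((v) => (typeof v === "number" ? new Date(v) : v));
    }
  }
  return out;
}

const parsed = deserializeTimestamps(
  { date: [1700000000000, 1700003600000], level: ["error"] },
  new Set(["date"]),
);
// parsed.date now holds Date objects; parsed.level is untouched
```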
Connecting Clients
Claude Code
Add to .mcp.json in your project root:
```json
{
  "mcpServers": {
    "data-table-infinite": {
      "type": "http",
      "url": "http://localhost:3000/infinite/api/mcp"
    },
    "data-table-drizzle": {
      "type": "http",
      "url": "http://localhost:3000/drizzle/api/mcp"
    }
  }
}
```

Verification (curl)
```sh
# Initialize
curl -X POST http://localhost:3000/infinite/api/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}'

# List tools (see auto-generated schema)
curl -X POST http://localhost:3000/infinite/api/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'

# Query with filters
curl -X POST http://localhost:3000/infinite/api/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"query_table","arguments":{"filters":{"level":["error"],"size":5},"format":"stats"}}}'
```

Use Cases
Conversational data exploration
Connect your MCP endpoint to Claude Desktop or Cursor. Ask questions in natural language — the agent translates them into query_table calls with the right filters, paginates through results, and summarizes findings.
"Show me all 500 errors from the HKG region in the last hour"
Monitoring and alerting
An autonomous agent polls format: "stats" on a schedule, watching facet distributions for anomalies. When the error rate spikes or latency percentiles drift, it triggers alerts via Slack, PagerDuty, or email — no dashboard required.
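The decision logic of such an agent can stay pure and testable. A hypothetical alert rule over a `stats` response — the 5% threshold and the `level`/`error` facet names are arbitrary examples matching the output format above:

```typescript
type FacetRow = { value: string; total: number };
type Facets = Record<string, { rows: FacetRow[]; total: number }>;

// Hypothetical alert rule: flag when errors exceed 5% of all matched rows.
function shouldAlert(stats: { total: number; facets: Facets }): boolean {
  const level = stats.facets["level"];
  if (!level || stats.total === 0) return false;
  const errors = level.rows.find((r) => r.value === "error")?.total ?? 0;
  return errors / stats.total > 0.05;
}

shouldAlert({
  total: 1000,
  facets: { level: { rows: [{ value: "error", total: 120 }], total: 1000 } },
});
// → true (12% error rate)
```

The agent itself only needs to fetch `format: "stats"` on a schedule and hand the parsed response to a function like this.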
Post-deploy validation
After a deploy, a CI/CD agent queries the logs table for error spikes in a short window. If the error count exceeds a threshold compared to the pre-deploy baseline, it flags the release for rollback.
Cross-table orchestration
Multiple MCP endpoints (logs, orders, customers) each expose a query_table tool. An agent queries across all of them to answer cross-cutting questions like "which customers experienced 500 errors today?" — joining data that lives in separate tables or services.
Internal tooling for non-developers
Teams that don't have dashboard access or SQL knowledge can query production data through a chat interface. The MCP schema provides guardrails — agents can only filter on defined fields, preventing arbitrary queries.
Agent-to-agent pipelines
One agent's output feeds another's input. A triage agent identifies error patterns from the logs table, then passes those patterns to a root-cause agent that correlates with deploy metadata from a different MCP endpoint.
How It Works
- `createTableMCPHandler` converts your BYOS schema to a Zod schema at startup
- The MCP SDK auto-converts Zod to JSON Schema for `tools/list` responses
- On each request, a fresh `McpServer` + `WebStandardStreamableHTTPServerTransport` is created (stateless)
- Tool calls are validated by the SDK via Zod, then timestamp values are deserialized (`number` → `Date`)
- Your `getData` function is called with typed filters (including pagination and sort)
- Results are formatted as JSON (`rows` + `total`) or stats (`total` + `facets`)
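Conceptually, the schema-to-tool-parameter conversion behaves like the following sketch, which maps simplified, hypothetical field descriptors directly to JSON Schema fragments. The real implementation goes through Zod and the MCP SDK instead:

```typescript
// Hypothetical, simplified field descriptors standing in for BYOS field types.
type Field =
  | { type: "string" | "number" | "boolean" | "timestamp" }
  | { type: "enum"; values: string[] }
  | { type: "array"; item: Field };

// Illustrative mapping to JSON Schema fragments
// (every top-level tool parameter is optional).
function toJsonSchema(field: Field): Record<string, unknown> {
  switch (field.type) {
    case "string":
      return { type: "string" };
    case "number":
      return { type: "number" };
    case "boolean":
      return { type: "boolean" };
    case "timestamp":
      // Timestamps travel as Unix milliseconds over the wire.
      return { type: "number", description: "Unix ms" };
    case "enum":
      return { type: "string", enum: field.values };
    case "array":
      return { type: "array", items: toJsonSchema(field.item) };
  }
}

toJsonSchema({ type: "array", item: { type: "enum", values: ["error", "warning"] } });
// → { type: "array", items: { type: "string", enum: ["error", "warning"] } }
```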
Dependencies
- `@modelcontextprotocol/sdk` — MCP protocol implementation
- `zod` — Schema validation (already installed with the core block)
