Pikku Fabric use case

Give your API
a brain.

Turn any API into a chat assistant, CLI tools, and an MCP server your customers can use right away. No rebuilding required.

You've invested years building your API. Adding AI shouldn't mean starting over. Most of your API's power is locked behind UIs only power users navigate. Give every customer a way in.

Three interfaces from one API spec.

Connect your OpenAPI spec. Pick your interfaces. Ship.

Chat assistant

Your customers ask questions. Your API answers them.

The agent maps natural language to the right endpoints automatically. No docs for your users to read, no tickets for your team to answer.
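That mapping can be pictured with a small TypeScript sketch. Every name and shape below is an assumption for illustration, not Pikku's actual schema — and in production a model does the matching, not a keyword check:

```typescript
// Illustrative only: one OpenAPI operation surfaced as one selectable tool.
type Tool = { name: string; description: string; path: string }

// Derived from a single OpenAPI operation: summary -> description, path -> path.
const getCloudSpend: Tool = {
  name: 'getCloudSpend',
  description: 'Return current cloud spend for the account',
  path: 'GET /v1/spend',
}

// In production an LLM picks the tool; a naive keyword match stands in here.
function pickTool(question: string, tools: Tool[]): Tool | undefined {
  const q = question.toLowerCase()
  return tools.find((t) =>
    t.description.toLowerCase().split(' ').some((w) => w.length > 4 && q.includes(w))
  )
}

const picked = pickTool('How much are we spending on cloud this month?', [getCloudSpend])
// picked?.name === 'getCloudSpend'
```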

CLI tools

Power users get CLI tools they can script and chain.

Every API capability becomes a command. Pipe it, chain it, build workflows. Works with local AI agents and MCP clients too.

Platform integration

Your product works inside ChatGPT, Claude, Cursor, and more.

It becomes a native tool in the AI platforms your customers already use. No custom integration work on your end.

From OpenAPI spec to working agent.

1. Connect your API

Point it at your spec

$ pikku agent init --spec https://api.acme.dev/openapi.json

✓ Parsed 42 endpoints
✓ Generated function types
✓ Created agent project

Next: pikku deploy
2. Pick your interfaces

Chat, CLI, MCP — any combo

// Your API endpoints become agent tools automatically
const agent = pikkuAIAgent({
  name: 'acme-assistant',
  instructions: 'Help users manage their account',
  tools: [getDeployments, createIncident, getCloudSpend /* … */],
})

// Expose as chat, CLI, and MCP
wireChannel({ channel: 'support', agent })
wireCLI({ program: 'acme', agent })
wireMCP({ agent })
3. Ship it

Deploy to Fabric

$ pikku deploy
✓ Chat assistant wss://chat.acme.dev
✓ CLI published npx @acme/cli
✓ MCP server mcp://acme.dev
✓ 42 tools active from your OpenAPI spec
Live in 4.2s

One AI call per workflow. Zero tokens after that.

The AI figures out your workflow once and turns it into a deterministic execution plan. Every run after that is native code. No model calls, no token costs, no latency variance.
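A minimal TypeScript sketch of that pattern, under assumed names (none of this is Pikku's real API):

```typescript
// The model emits a plan one time; every replay is plain code.
type PlanStep = { method: 'GET' | 'POST'; endpoint: string }

// Imagine this array came from a single LLM call at design time and was persisted.
const cachedPlan: PlanStep[] = [
  { method: 'GET', endpoint: '/deployments' },
  { method: 'POST', endpoint: '/incidents' },
]

// Every later run just walks the stored plan — no model call, no variance.
function executePlan(plan: PlanStep[], call: (s: PlanStep) => string): string[] {
  return plan.map(call)
}

const results = executePlan(cachedPlan, (s) => `${s.method} ${s.endpoint}`)
// results: ['GET /deployments', 'POST /incidents']
```

Because the plan is data, not a fresh model response, the same input produces the same calls every time.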

LLM on every request: ~$14,400/yr (8-step workflow, 50x/day)

Pikku (AI designs once): ~$1.20/yr (same workflow, native execution)
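The arithmetic behind those figures, assuming roughly $0.10 per model call (an assumption, so totals land near the quotes rather than exactly on them):

```typescript
const stepsPerRun = 8    // 8-step workflow
const runsPerDay = 50    // 50x/day
const costPerCall = 0.10 // assumed cost of one LLM call

// LLM on every request: every step of every run hits the model.
const perRequestYearly = stepsPerRun * runsPerDay * 365 * costPerCall
// ≈ $14,600/yr — same order as the ~$14,400/yr above

// Design once: roughly one planning pass, then native execution forever.
const designOnceYearly = stepsPerRun * costPerCall
// ≈ $0.80/yr — same order as the ~$1.20/yr above
```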

Your API's rules still apply.

Permissions carry over

Your API's auth layer stays in the loop. The agent can only do what the user could already do.
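A hypothetical sketch of that passthrough in TypeScript — the names are illustrative, not Pikku's actual API:

```typescript
// The agent forwards the caller's own credentials, so the API's existing
// auth layer decides every request.
type User = { token: string; scopes: string[] }

// Stand-in for your API's unchanged authorization check.
function apiAllows(user: User, requiredScope: string): boolean {
  return user.scopes.includes(requiredScope)
}

// The agent never escalates — it calls with the user's token and nothing more.
function agentCall(user: User, requiredScope: string): number {
  return apiAllows(user, requiredScope) ? 200 : 403
}

const viewer: User = { token: 'u-123', scopes: ['read:deployments'] }
const readStatus = agentCall(viewer, 'read:deployments') // 200
const deleteStatus = agentCall(viewer, 'delete:project') // 403 — same limits as the user
```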

AI can't go rogue

It designs the workflow. It doesn't execute it. Your existing permissions enforce every request.

Full audit trail

Every conversation, workflow, and execution is logged.

Deterministic execution

Approved workflows produce the same result every time. No temperature, no variance.

You built the API.
Let your customers use all of it.

Upload your OpenAPI spec. Get a chat assistant, CLI tools, and MCP server — deployed on Pikku Fabric.