LLM integration¶
Fountain is built to be consumed by AI coding tools. Every instance exposes machine-readable discovery endpoints so any agentic IDE can learn the full API from a single fetch.
Drop-in skill for Claude Code¶
```shell
mkdir -p ~/.claude/skills/fountain
curl -fsSL https://founta.inevitable.fyi/skill > ~/.claude/skills/fountain/SKILL.md
```
After that, telling Claude "spin up a researcher agent on Fountain and have it audit the auth module" Just Works.
Discovery endpoints¶
| Endpoint | Content | Best for |
|---|---|---|
| `/llms.txt` | Concise API summary (~500 tokens) | Context-constrained models |
| `/llms-full.txt` | Full API reference | Deep tool-calling agents |
| `/skill` | Claude Code / Cursor skill file | IDE skills |
Self-hosted instances¶
Self-hosted instances expose the same discovery endpoints under their own domain — substitute your instance's URL for `founta.inevitable.fyi` in the examples above.
MCP server (coming soon)¶
Fountain will ship a first-party MCP server exposing all four primitives as tools:
```json
{
  "mcpServers": {
    "fountain": {
      "command": "npx",
      "args": ["-y", "@fountain/mcp-server"],
      "env": {
        "FOUNTAIN_TOKEN": "ft_your_api_key",
        "FOUNTAIN_ENDPOINT": "https://founta.inevitable.fyi"
      }
    }
  }
}
```
Using the API from an agent¶
- Load the skill from `/skill` at session start
- Authenticate with a Fountain API key stored in the agent's environment
- Use the CLI or REST API to spin up sub-agents
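The steps above can be sketched as a session-start bootstrap. The `FOUNTAIN_ENDPOINT` and `FOUNTAIN_TOKEN` variable names mirror the MCP server config; the values here are placeholders, and the fetch itself is left commented since it depends on your instance:

```shell
# Session-start bootstrap sketch. Variable names mirror the MCP server
# config; the default values below are placeholders, not real credentials.
: "${FOUNTAIN_ENDPOINT:=https://founta.inevitable.fyi}"
: "${FOUNTAIN_TOKEN:=ft_your_api_key}"

SKILL_URL="$FOUNTAIN_ENDPOINT/skill"
echo "loading skill from $SKILL_URL"
# curl -fsSL "$SKILL_URL" > SKILL.md   # 1. load the skill at session start
# Subsequent CLI/REST calls authenticate with $FOUNTAIN_TOKEN from the env.
```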
Example prompt: