Most data tools bolt on a chat box. Kirimana started AI-first — the
same governance state that powers contracts is what Kiri reads,
what Skills extend, and what every external AI agent sees over
Model Context Protocol. One trust ladder, one audit log,
everywhere AI touches your data.
01 Kiri · in-product assistant
Kiri lives inside the product, not beside it.
Every AI surface — UI, CLI, the docs chat above — speaks as Kiri. She knows your stack, your role, your contracts, and your audit history. She drafts contracts, explains lineage, triages failed applies, and never invents numbers she can't substantiate.
Brand voice locked. Persona-aware on every turn.
02 Skills · teach Kiri your domain
Markdown skills turn Kiri into your team's playbook.
A skill is a folder with a SKILL.md and a few prompts. Versioned in git, reviewed in PR, runnable from Kiri or as a CLI step. Reference skills ship with the product — draft-gold-model, debug-apply, azure-deploy, incident-coach, explain-kpi, maturity-coach — and you write your own.
User-authored. Reviewable. No fork required.
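As a sketch, a skill folder for something like the debug-apply reference skill could look as follows — the layout and the SKILL.md front-matter fields shown are illustrative assumptions, not a documented schema:

```markdown
debug-apply/
├── SKILL.md
└── prompts/
    └── triage.md

# SKILL.md (hypothetical fields)
---
name: debug-apply
description: Triage a failed contract apply and propose a fix.
---

When an apply fails, read the audit entry for the run, classify the
failure (schema drift, policy violation, transient), and draft a
remediation plan that references the affected contract.
```

Because it is just markdown in a folder, the whole skill is versioned in git and reviewed in a PR like any other change.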
03 Claude · trust ladder, every call
Every LLM call goes through the AI gateway. Always.
Classification gate first — restricted data never leaves your tenant. Anthropic prompt cache always on, so Kirimana's brand voice and contract context don't get re-tokenised every turn. A local Ollama provider covers air-gapped deployments. Every prompt, response, model, cost, caller — audit-logged.
No direct SDK use. No exceptions.
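The gateway invariant described above — classification gate first, then the provider call, then an audit record for every call — can be sketched in a few lines of Python. Names like `AIGateway` and the stubbed provider response are illustrative, not Kirimana's actual API:

```python
from dataclasses import dataclass, field
from typing import List

RESTRICTED = "restricted"

@dataclass
class AuditEntry:
    # Every call records prompt, response, model, cost, and caller.
    caller: str
    model: str
    prompt: str
    response: str
    cost_usd: float

@dataclass
class AIGateway:
    """Hypothetical single choke point for all LLM calls."""
    audit_log: List[AuditEntry] = field(default_factory=list)

    def call(self, caller: str, model: str, prompt: str,
             classification: str) -> str:
        # Classification gate first: restricted data never leaves the tenant.
        if classification == RESTRICTED:
            raise PermissionError(
                "restricted data cannot be sent to an external model")
        # A real gateway would forward to Anthropic (with prompt cache)
        # or a local Ollama provider here; this is a stub.
        response = f"[stubbed response to {len(prompt)} chars]"
        # Audit-log the call before returning.
        self.audit_log.append(
            AuditEntry(caller, model, prompt, response, cost_usd=0.0))
        return response

gw = AIGateway()
gw.call("kiri-ui", "claude-sonnet",
        "Explain lineage for orders_gold", "internal")
```

The point of the single entry point is that there is no code path that reaches a model without passing the gate and writing the audit entry.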
04 MCP · open to any AI agent
Speak Model Context Protocol → read the governance state.
Claude Code, Cursor, Cline, your in-house copilot, any agent that speaks MCP can read contracts, classifications, lineage, AI-policy decisions, release status — without a bespoke integration. Every MCP read and tool-call is itself audit-logged.
One protocol. Every agent. Same audit trail.
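On the wire, such a read is an ordinary MCP `tools/call` request. The JSON-RPC envelope below follows the Model Context Protocol spec; the tool name and arguments are hypothetical, chosen only to illustrate a governance-state read:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_contract",
    "arguments": { "dataset": "orders_gold" }
  }
}
```

Any MCP-speaking agent emits requests of this shape, which is why no bespoke integration is needed — and why each request can be audit-logged like any other access.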