Built for production AI agents.
Most "agent APIs" are LLM wrappers with a sandbox bolted on. Pokee Enterprise goes the other direction: a dedicated per-tenant runtime, isolated at the operating-system level, designed to be the production substrate for long-running agentic workflows.
OS-level filesystem isolation
Each session runs in its own mount namespace. Files outside the session's scope literally don't exist from the agent's perspective — accessing them returns ENOENT, not a permission error. Filesystem-level enforcement, not application-level filtering.
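To make the distinction concrete, here is a minimal sketch of what "don't exist" means from inside a session. The path is hypothetical; the point is that the kernel reports ENOENT (no such file) rather than EACCES (permission denied), because the file is absent from the session's mount namespace rather than hidden by a policy check.

```python
import errno
import os

def probe(path):
    """Return the errno raised when stat-ing `path`, or None on success."""
    try:
        os.stat(path)
        return None
    except OSError as e:
        return e.errno

# From Session A's namespace, a sibling session's directory is simply
# absent — the kernel answers ENOENT, not a permission error.
# (hypothetical path, illustrative only)
outside = probe("/workspace/session_b/secret.txt")
```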
// Sessions A and B in the same workspace,
// scoped to different subdirectories.
// Session A cannot see Session B's files at all —
// they don't exist in its view of the filesystem.
Persistent memory across sessions
The agent maintains a living memory file at .memory/MEMORY.md that survives session boundaries and pod restarts. Facts, preferences, and project state accumulate over time. No manual context-passing required.
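The agent manages this file itself; the sketch below only illustrates the read-before-respond, append-after-learn pattern against the documented `.memory/MEMORY.md` path. The function names are hypothetical, not part of the API.

```python
from pathlib import Path

def load_memory(workspace):
    """Read accumulated memory at the start of a session, if any."""
    mem = Path(workspace) / ".memory" / "MEMORY.md"
    return mem.read_text() if mem.exists() else ""

def remember(workspace, fact):
    """Append a durable fact so future sessions can pick it up."""
    mem = Path(workspace) / ".memory" / "MEMORY.md"
    mem.parent.mkdir(parents=True, exist_ok=True)
    with mem.open("a") as f:
        f.write(f"- {fact}\n")
```

Because the file lives in the workspace, it survives process restarts for free; persistence is a property of the storage, not of any in-memory state.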
// Session 1: "I prefer concise answers; project is X"
// Session 2 (next day, fresh process):
// Agent reads .memory/MEMORY.md before responding.
// Already knows your preferences.
Tenant-supplied agent context
Drop markdown files in .pokee/ to extend the agent's system prompt with your domain language, voice, and conventions. Effective on the next session — no API change, no SDK update, no redeploy.
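A minimal client sketch for uploading one of these files with the standard library. The endpoint path matches the PUT calls shown below; the host, bearer-token auth, and content type are assumptions about the deployment, not documented specifics.

```python
import http.client

def context_path(session_id, filename):
    """Build the files path for a tenant context file under .pokee/."""
    return f"/v1/sessions/{session_id}/files/.pokee/{filename}"

def put_context_file(host, session_id, filename, markdown, token):
    """Upload a markdown context file; returns the HTTP status code.
    Auth scheme and content type are assumptions for illustration."""
    conn = http.client.HTTPSConnection(host)
    conn.request(
        "PUT",
        context_path(session_id, filename),
        body=markdown.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "text/markdown",
        },
    )
    status = conn.getresponse().status
    conn.close()
    return status
```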
PUT /v1/sessions/{id}/files/.pokee/soul.md
PUT /v1/sessions/{id}/files/.pokee/glossary.md
PUT /v1/sessions/{id}/files/.pokee/style_guide.md
// Next session reads these into the system prompt.
Disconnect-resilient streaming
Once a message starts, the agent runs to completion — even if your client drops the connection mid-stream. Reconnect or poll the session to retrieve the result. No retry logic, no half-finished work, no surprise duplicate runs.
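The reconnect-or-poll pattern can be sketched as a simple loop against `GET /v1/sessions/{id}`. The `busy` field comes from the example below; the host, auth header, and polling interval are illustrative assumptions.

```python
import http.client
import json
import time

def is_done(session):
    """A session is finished when it reports busy: false.
    Treat a missing field conservatively as still busy."""
    return not session.get("busy", True)

def wait_for_session(host, session_id, token, interval=5.0):
    """Poll the session until the agent finishes, then return its state."""
    while True:
        conn = http.client.HTTPSConnection(host)
        conn.request(
            "GET",
            f"/v1/sessions/{session_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        session = json.loads(conn.getresponse().read())
        conn.close()
        if is_done(session):
            return session
        time.sleep(interval)
```

Because the run is idempotent per message, polling after a dropped stream retrieves the same single result rather than triggering a duplicate run.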
// Client disconnects 30 seconds in...
// Agent keeps running.
// 5 minutes later: GET /v1/sessions/{id}
// → busy: false, response is in workspace.
More than the surface
Read the full reference, or talk to us about a deployment.