The governed runtime
for code agents.
Thymos treats a language model as a bounded proposer against a policy-governed, ledgered runtime. Cognition proposes. The runtime decides. The ledger remembers. Ship a coding agent that is reproducible, auditable, and runs on the model you want — cloud or local.
Cognition
Proposes.
The model emits typed Intents. It never executes, never mutates state, never touches the ledger.
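The shape of that boundary can be sketched in a few lines. These type names are illustrative, not Thymos's actual API: an Intent is pure data, and a proposer can only construct values.

```rust
// Illustrative sketch; not Thymos's real type names.
// An Intent is plain data. The model constructs values and nothing
// else: it holds no handle to the filesystem, shell, or ledger.
#[derive(Debug, Clone, PartialEq)]
enum Intent {
    ReadFile { path: String },
    PatchFile { path: String, diff: String },
    RunShell { command: String },
}

// A proposer returns data. Whether it executes is the runtime's call.
fn propose() -> Intent {
    Intent::ReadFile { path: "src/main.rs".into() }
}
```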
Runtime
Decides.
Compiles Intents into Proposals under a signed Capability Writ. Evaluates policy. Stages effects through typed tool contracts.
Ledger
Remembers.
Append-only, content-addressed via BLAKE3, parent-chained. Every run replays byte-for-byte.
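Parent-chaining is what makes the replay guarantee cheap to verify. A minimal sketch, with std's SipHash standing in for BLAKE3 so it compiles without external crates:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest: std's SipHash replaces BLAKE3 here so the sketch
// needs no external crates. The chaining logic is the point.
fn digest(parent: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    parent.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

// Each commit id covers its parent's id, so checking the head checks
// the whole chain: replaying the entries must reproduce every id.
fn chain(entries: &[&str]) -> Vec<u64> {
    let mut parent = 0u64;
    entries
        .iter()
        .map(|e| {
            parent = digest(parent, e);
            parent
        })
        .collect()
}
```

Tamper with any entry and every id from that point forward changes, which is exactly the property git's object model relies on.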
Live · in the repo today
A coding agent you actually trust
Typed file ops. Patched edits with pre/post-condition hashes. A secure shell with capability profiles and execution receipts. Run it against Claude, GPT, or a model loaded in LM Studio on your laptop. Same guarantees in every case.
Why this instead of a wrapper CLI
Governance is the feature.
Everything an agent does has a typed contract, a signed writ, and a ledger entry. Nothing runs because the model said so. Everything runs because policy allowed it.
Signed writs
Ed25519-signed capability writs. Scoped tools, budgets, time windows, delegation bounds. Forged writs fail at the compiler.
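What the compiler enforces can be sketched as a plain admission check. The field names are illustrative, and the Ed25519 signature verification that would run before any of this is elided:

```rust
// Illustrative writ fields; the real schema, and the Ed25519
// signature check that precedes all of this, are elided.
struct Writ {
    tools: Vec<String>, // scoped tool allow-list
    budget: u32,        // remaining invocation budget
    not_after: u64,     // end of the time window, unix seconds
}

// The compiler refuses any proposal the writ does not admit.
fn admits(writ: &Writ, tool: &str, at: u64) -> bool {
    writ.tools.iter().any(|t| t == tool) && writ.budget > 0 && at <= writ.not_after
}
```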
Typed tool contracts
Effect class, risk class, pre/post-conditions against a world projection. Tools declare the state delta they produce.
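One way to picture such a contract, with names that are illustrative rather than Thymos's API: a tool declares its classes and checks its conditions against a projection, never against raw state.

```rust
// Illustrative contract shape; names do not mirror Thymos's API.
#[derive(Debug, PartialEq)]
enum EffectClass { Read, Write, Network }

#[derive(Debug, PartialEq)]
enum RiskClass { Low, Medium, High }

// A world projection: only the facts conditions may inspect.
struct World {
    files: Vec<(String, u64)>, // (path, content hash)
}

trait ToolContract {
    fn effect(&self) -> EffectClass;
    fn risk(&self) -> RiskClass;
    fn pre(&self, world: &World) -> bool;
    fn post(&self, before: &World, after: &World) -> bool;
}

// A patched edit declares its state delta: the file must hash to
// `expect` beforehand and to `produce` afterwards.
struct PatchFile { path: String, expect: u64, produce: u64 }

impl ToolContract for PatchFile {
    fn effect(&self) -> EffectClass { EffectClass::Write }
    fn risk(&self) -> RiskClass { RiskClass::Medium }
    fn pre(&self, w: &World) -> bool {
        w.files.iter().any(|(p, h)| p == &self.path && *h == self.expect)
    }
    fn post(&self, _before: &World, after: &World) -> bool {
        after.files.iter().any(|(p, h)| p == &self.path && *h == self.produce)
    }
}
```

A stale precondition (the file changed under the agent) fails before any effect is staged, not after.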
Secure tool fabric
Shell and HTTP execute behind a subprocess worker boundary with capability profiles, timeouts, and execution receipts.
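The receipt side can be sketched with std alone. This is a minimal illustration: the worker boundary, capability profiles, and timeout enforcement are elided, and a real receipt would also carry an output hash and land in the ledger.

```rust
use std::process::Command;
use std::time::Instant;

// Illustrative receipt; real ones also record the capability profile,
// the timeout, and a content hash of the output, and are ledgered.
#[derive(Debug)]
struct Receipt {
    exit_code: i32,
    millis: u128,
    stdout_len: usize,
}

// Run one command in a child process and record what happened.
fn run_with_receipt(program: &str, args: &[&str]) -> std::io::Result<Receipt> {
    let start = Instant::now();
    let out = Command::new(program).args(args).output()?;
    Ok(Receipt {
        exit_code: out.status.code().unwrap_or(-1),
        millis: start.elapsed().as_millis(),
        stdout_len: out.stdout.len(),
    })
}
```

The point of a receipt is that the ledger records what actually ran and what it returned, not what the model claimed would happen.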
Replayable trajectory
BLAKE3 content-addressed commits, parent-chained. Branch, resume, diff, verify — same ledger primitives as git.
Any model, one loop
Anthropic, OpenAI, or any OpenAI-compatible endpoint — LM Studio, Ollama, vLLM, llama.cpp. Swap providers without re-plumbing.
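What "OpenAI-compatible" buys you fits in one function: a provider is just a base URL plus the standard paths, so swapping providers means swapping the base. The local defaults below are the ones these servers document (LM Studio on port 1234, Ollama on 11434):

```rust
// An OpenAI-compatible provider is addressed by base URL plus the
// standard chat-completions path; only the base changes per provider.
fn chat_completions_url(base: &str) -> String {
    format!("{}/chat/completions", base.trim_end_matches('/'))
}
```

So `chat_completions_url("http://localhost:1234/v1")` targets LM Studio, `"http://localhost:11434/v1"` targets Ollama, and `"https://api.openai.com/v1"` targets OpenAI, with no other plumbing changed.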
Production seams
Async runtime, SSE token streaming, JWT + API-gateway auth, tenant isolation, run persistence, OpenTelemetry tracing.
Bring your own model
First-class local cognition.
The OpenAiCognition adapter accepts any base URL. Point it at a model running on your laptop and the same governance layer applies.
Stand up a governed coding agent today.
One cargo run. Full trajectory ledger. Zero-config mock cognition out of the box, production-grade cognition when you wire a key or a local endpoint.