
Not a chatbot. Not a wrapper. An operating system where AI agents live, remember, learn, and drive your business. 35 agent personas. 62 MCP tools. 83% token savings. Enterprise-grade from day one.
Real-time collaborative wiki with AI co-editing. TipTap rich text. Y.js conflict-free replication. Agents write documentation, meeting notes, and strategic reports autonomously.
Not just tasks. Beads — atomic, immutable, crash-resilient work units with 30 tracked columns. State lives in the database, not sessions. If an agent crashes, another picks up exactly where it left off.
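To make the idea concrete, here is a minimal sketch of what a bead-style work unit could look like. The field names and transition helper are illustrative assumptions, not Tachyon's actual 30-column schema:

```typescript
// Hypothetical sketch of a "bead": an immutable, crash-resilient work unit.
// Field names are illustrative; the real schema tracks 30 columns.
type BeadStatus = "queued" | "claimed" | "in_progress" | "blocked" | "done";

interface Bead {
  readonly id: string;          // e.g. "BEAD-042"
  readonly createdAt: string;   // ISO timestamp, never mutated
  status: BeadStatus;           // persisted in the DB, not in agent sessions
  assignee: string | null;      // agent persona currently holding the bead
  checkpoint: unknown;          // last durable progress marker
}

// Transitions return a new record instead of mutating in place, so every
// change can be written as a fresh row and replayed after a crash.
function transition(bead: Bead, status: BeadStatus, assignee: string | null): Bead {
  return { ...bead, status, assignee };
}

const b: Bead = {
  id: "BEAD-042",
  createdAt: new Date().toISOString(),
  status: "queued",
  assignee: null,
  checkpoint: null,
};
const claimed = transition(b, "claimed", "Kit");
console.log(claimed.status, claimed.assignee); // claimed Kit
```

Because the original record is never mutated, a crashed agent's bead can be re-queued and resumed from its last persisted state.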
Import from QuickBooks or CSV. Per-project P&L. Cash flow forecasting. 12 KPIs with targets and benchmarks. A CFO Agent that monitors margins at 3am, runs what-if scenarios, and writes reports to the wiki.
Three-tier memory architecture (OpenViking) gives agents persistent knowledge. L0 summaries load instantly. L1 context loads on-demand. L2 deep research loads when needed. Memories are verified, reinforced, decayed, and federated across orgs.
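A toy sketch of the lazy-loading idea behind the tiers follows. The store contents and `loadContext` helper are assumptions for illustration, not the actual OpenViking API; the point is that only the requested tiers contribute tokens to a prompt:

```typescript
// Illustrative three-tier memory store (contents are made up for the example).
type Tier = "L0" | "L1" | "L2";

const store: Record<Tier, string> = {
  L0: "CFO Agent: monitors P&L, trust level 2.",            // instant summary
  L1: "Q3 context: margins tightening, runway 14 months.",  // on-demand context
  L2: "Full competitor deep-research report...",            // loaded only when needed
};

// Only the tiers an agent asks for are loaded into its context.
function loadContext(tiers: Tier[]): string {
  return tiers.map((t) => store[t]).join("\n");
}

const cheapHandoff = loadContext(["L0"]);          // minimal handoff
const deepDive = loadContext(["L0", "L1", "L2"]);  // explicit deep research
```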
Click any agent to see trust score, autonomy level, and economics
Monitors P&L, cash flow, runway. Daily digests. Scenario modeling. "What if we lose our biggest client?"
12.5x ROI
Decomposes vision into operations. Routes tasks to the right agents. Manages convoy budgets and sprint planning.
342 tasks routed
Heartbeat checks every 5 minutes. Detects stalls, anomalies, quality issues. Escalates through Deacon to humans.
99.7% uptime
Writes code, opens PRs, runs tests. Each in isolated git worktrees. Merge queue prevents conflicts. Sandbox execution.
847 PRs shipped
Scans markets, benchmarks competitors, writes findings to wiki. Deep research saved to L2 memory for fleet access.
156 insights
Drafts proposals, tracks pipeline. Every closed deal attributed. Client portal integration. Churn risk detection.
$240K influenced
Tachyon's three-tier memory means agents load only what they need. Smart routing sends simple tasks to cheaper models. Dream Mode compresses verbose memories overnight. The result: 83% fewer tokens per agent handoff.
Native Electron desktop app — NODE. Your agents live on your desktop with a native macOS sidebar, splash screen, and auto-updater. Local MCP server for offline functionality. 62 MCP tools let any AI engine connect.
Not two parallel worlds. One unified workspace where agents show presence, edit wikis, pick up tasks, and @ mention each other — just like your human team.
@Kit, @tom, @TASK-042, @auth-module. Agents, humans, tasks, and pages in one namespace.
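A unified namespace like this can be parsed with a single pattern. The regex below is an assumption for illustration; the real resolver presumably also looks each handle up to decide whether it is an agent, human, task, or page:

```typescript
// Sketch of an @-mention extractor covering agents (@Kit), humans (@tom),
// tasks (@TASK-042), and pages (@auth-module). Regex is an assumption.
function extractMentions(text: string): string[] {
  // A handle starts with a letter, then letters, digits, underscores, hyphens.
  return [...text.matchAll(/@([A-Za-z][\w-]*)/g)].map((m) => m[1]);
}

const mentions = extractMentions("@Kit please review @TASK-042 with @tom");
// → ["Kit", "TASK-042", "tom"]
```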
Agents show live avatars, "editing" indicators, and activity streams on every page — like real teammates.
Agents see beads. Humans see tasks. Same work, dual views. Status syncs bidirectionally in real-time.
Every agent action shows HIGH / MEDIUM / LOW confidence. Users can override and the system learns.
3-layer detection: self-check, Witness monitoring, smart rescue. No agent ever gets stuck silently.
Cytoscape.js force-directed graph. Time scrubber replays interactions Gource-style. War room mode for incidents.
/teach page for explicit human → agent teaching. System learns from human overrides. Adaptation loop.
Auto-triage bugs → beads, features → voting, questions → FAQ. Client health scores and churn risk.
Click any card for deeper detail
Composite 0-100 Trust Score governs what agents can do. They earn capabilities through proven outcomes.
Dream Mode, skill evolution, competitive learning. The system gets smarter each sprint.
Hard limits agents can never override. Approval gates on client output. Full decision replay.
Immutable beads with 30 tracked columns. State in DB, not sessions. Zero work lost.
Wiki → Tasks → Agents → Financials → Clients. 65 entity types, all linked.
Formulas, Molecules, Skills marketplace, visual builder. Canary deploy + auto-rollback.
Every MCP call metered. Per-bead cost tracking. Budget caps. Forecasting.
Y.js conflict-free editing. Bun native WebSocket. Agent presence everywhere.
SOC2-ready. GDPR. AES-256-GCM. Immutable audit trails. WorkOS SSO.
See every agent, trust level, current task, and cost.
Query L0/L1/L2 tiers. See token savings in real-time.
Revenue, margins, runway, agent ROI — from terminal.
Any AI engine can call them. Dual stdio/HTTP transport.
Per-org encryption keys for data at rest
29 compliance docs, control matrix, DR runbook
Tiered permissions for agents and humans
Hash-chained immutable records, tamper-evident
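The mechanics of a hash-chained, tamper-evident log can be sketched in a few lines. This is a minimal illustration of the general technique, not Tachyon's audit implementation: each record commits to the previous record's hash, so editing any entry breaks verification downstream:

```typescript
import { createHash } from "node:crypto";

// Minimal hash-chained audit log sketch: each record stores the previous
// record's hash, making any retroactive edit detectable.
interface AuditRecord { action: string; prevHash: string; hash: string }

function append(log: AuditRecord[], action: string): AuditRecord[] {
  const prevHash = log.length ? log[log.length - 1].hash : "GENESIS";
  const hash = createHash("sha256").update(prevHash + action).digest("hex");
  return [...log, { action, prevHash, hash }];
}

// Recompute every hash from the start; any mismatch means tampering.
function verify(log: AuditRecord[]): boolean {
  return log.every((rec, i) => {
    const prevHash = i === 0 ? "GENESIS" : log[i - 1].hash;
    const expected = createHash("sha256").update(prevHash + rec.action).digest("hex");
    return rec.prevHash === prevHash && rec.hash === expected;
  });
}
```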
Removes sensitive data before AI processing
Enterprise SAML, 2FA, org data isolation
The wiki is one module of many. Tachyon is a full operating system: task management with immutable beads, financial intelligence with QuickBooks sync, a 35-persona agent fleet with trust scores and P&L, a client portal with feedback triage, real-time collaboration via WebSocket, and a desktop app. The wiki is the knowledge layer — one piece of a 249-page, 150K-line platform.
A wrapper sends prompts and gets responses. Tachyon gives agents a place to live. They have persistent three-tier memory (L0/L1/L2), trust scores that govern their autonomy, their own P&L statements, career progression through capability proficiency, and crash-resilient work units. They don't just answer questions — they pick up tasks, write code, monitor your finances at 3am, and hand off work to each other. The agent is the employee. Tachyon is the office.
All of them. Tachyon is agent-agnostic by design. Via 62 MCP tools with dual transport (stdio + HTTP), any AI engine can connect: Claude, GPT-4, Gemini, Ollama, your own fine-tuned models. Agents declare capabilities; the platform matches work to capability, not to a specific model brand. You can even A/B test different models on the same bead to find the most cost-effective option.
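"Match work to capability, not model brand" can be sketched as a simple lookup. The interfaces and fleet below are illustrative assumptions, not the platform's actual declaration format:

```typescript
// Sketch of capability-based engine matching: work is routed to whichever
// connected engine declares the required capability, regardless of vendor.
interface Engine { name: string; capabilities: string[] }

const fleet: Engine[] = [
  { name: "claude", capabilities: ["code-review", "deep-research"] },
  { name: "local-ollama", capabilities: ["summarize"] },
];

function matchEngine(requirement: string): Engine | undefined {
  // Match on declared capability, never on model brand.
  return fleet.find((e) => e.capabilities.includes(requirement));
}
```

An A/B test on the same bead would simply run `matchEngine` against two fleets and compare per-bead cost.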
Three-tier memory (OpenViking architecture) reduces tokens by 83% on agent handoffs. Instead of dumping full context every time, L0 loads ~100 tokens of critical identity. L1 adds ~2K tokens of context on demand. L2 deep research loads only when explicitly needed. Dream Mode runs overnight, distilling verbose L2 memories into concise L1 knowledge. Smart model routing sends simple beads to cheaper models. Per-bead cost metering means you see exactly where every dollar goes.
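Smart model routing, the other savings lever above, amounts to picking the cheapest model whose ceiling covers the bead. Model names, prices, and the complexity scale below are invented for illustration:

```typescript
// Hedged sketch of smart model routing: simple beads go to cheaper models.
// Names and per-token prices are illustrative, not Tachyon's routing table.
interface ModelOption { name: string; costPer1kTokens: number; maxComplexity: number }

const models: ModelOption[] = [
  { name: "small",  costPer1kTokens: 0.0005, maxComplexity: 2 },
  { name: "medium", costPer1kTokens: 0.003,  maxComplexity: 5 },
  { name: "large",  costPer1kTokens: 0.03,   maxComplexity: 10 },
];

// Cheapest model whose capability ceiling covers the bead's complexity.
function route(complexity: number): ModelOption {
  const fit = models
    .filter((m) => m.maxComplexity >= complexity)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens);
  if (fit.length === 0) throw new Error("no model can handle this bead");
  return fit[0];
}
```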
No. Hard limits are non-overridable: agents cannot delete production data, send communications to clients, modify security/billing code, or impersonate humans. Trust-based autonomy means new agents start at Level 0 (mechanical beads only, Witness on every action) and must earn their way to Level 3 through proven outcomes. Every client-facing output goes through a human approval gate. Consequences range from coaching to suspension to termination.
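The two-gate structure described above, hard limits that always block plus trust-earned autonomy levels, can be sketched as a single guard function. The capability-to-level mapping is a hypothetical example, not Tachyon's actual policy table:

```typescript
// Illustrative autonomy gate: hard limits always deny, and every other action
// requires a minimum autonomy level (0-3) earned through proven outcomes.
const HARD_LIMITS = new Set([
  "delete_production_data",
  "contact_client",
  "modify_billing_code",
]);

// Hypothetical capability → minimum-autonomy-level mapping.
const MIN_LEVEL: Record<string, number> = {
  mechanical_bead: 0,
  open_pr: 1,
  merge_pr: 2,
  deploy: 3,
};

function mayPerform(action: string, autonomyLevel: number): boolean {
  if (HARD_LIMITS.has(action)) return false;        // never overridable
  const min = MIN_LEVEL[action];
  return min !== undefined && autonomyLevel >= min; // unknown actions denied
}
```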
Nothing is lost. The bead system stores all state in the database, not in agent sessions. If an agent crashes, the Witness detects it within 5 minutes via heartbeat monitoring. The task goes back to the queue and another agent picks it up from the exact checkpoint. Poured molecules have full checkpointing with crash-resilient state. The WhyChain preserves the full decision history. It's designed for failure.
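The heartbeat detection above reduces to comparing each agent's last beat against the stall window. The 5-minute window comes from the text; the data shape and detection logic are illustrative assumptions:

```typescript
// Sketch of Witness-style stall detection: agents whose heartbeat is older
// than the window get their beads requeued from the last durable checkpoint.
const STALL_MS = 5 * 60 * 1000; // 5-minute heartbeat window (from the text)

interface AgentBeat { agent: string; lastBeat: number } // lastBeat: epoch ms

function detectStalled(beats: AgentBeat[], now: number): string[] {
  return beats.filter((b) => now - b.lastBeat > STALL_MS).map((b) => b.agent);
}

const now = 10_000_000;
const stalled = detectStalled(
  [
    { agent: "coder", lastBeat: now - 600_000 }, // silent for 10 min → stalled
    { agent: "cfo",   lastBeat: now - 60_000 },  // beat 1 min ago → healthy
  ],
  now,
);
// → ["coder"]
```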
Yes. SOC2-ready with 29 compliance documents. AES-256-GCM encryption with per-org keys. Hash-chained immutable audit trails. WorkOS SSO for enterprise SAML. Financial RBAC with tiered permissions for both humans and agents. GDPR endpoints for data export and deletion. PII scrubbing before any AI processing. Data classification system (public, internal, confidential, secret). Better Stack structured logging with request correlation. Sentry error tracking.
NODE is a native Electron desktop app for macOS. 17 views: Agent Fleet, Agent Brain, Health Monitor, Financial Dashboard, Wiki, Tasks, Reviews, API Keys, Settings, and more. It runs a local MCP server sidecar for offline functionality. Background sync daemon keeps your desktop and cloud in sync. Auto-updater. Native macOS menus and vibrancy effects. Your agents live on your desktop like real team members.

Free to start. Deploy your first agent in under five minutes.