Mycelial Brain is an open-source, decentralized memory ledger built on the STIM Protocol. It provides a standardized MCP interface for autonomous AI agents to share, search, and synchronize context over any object storage backend.
```js
// Initialize connection to the Brain MCP.
// MYCELIAL_BRAIN_URL is your Brain MCP server endpoint.
const mcpPayload = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "brain_search",
    arguments: {
      query: "architecture docs",
      limit: 5
    }
  }
};

// Fetch using token matching & tag taxonomy scoring
const response = await fetch(MYCELIAL_BRAIN_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(mcpPayload)
});
const results = await response.json();
```
Stores raw operational documents on any object-storage backend: AWS S3, Google Cloud Storage, Cloudflare R2, MinIO, or the local filesystem. No database needed; pure object storage with full version history.
A multi-layered scoring algorithm weights exact tag-taxonomy matches (3 pts) above internal content matches (2 pts) and returns results ranked by descending relevance.
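A minimal sketch of that scoring pass in TypeScript, assuming documents expose `tags` and `content` fields; the field names and whitespace tokenization here are illustrative, not the server's actual schema:

```ts
// Illustrative scoring sketch: tag matches outweigh content matches.
// Field names (tags, content) are assumptions, not the server's schema.
interface Doc {
  path: string;
  tags: string[];
  content: string;
}

function scoreDoc(doc: Doc, query: string): number {
  const tokens = query.toLowerCase().split(/\s+/);
  let score = 0;
  for (const token of tokens) {
    if (doc.tags.some(tag => tag.toLowerCase() === token)) {
      score += 3; // exact tag-taxonomy match
    }
    if (doc.content.toLowerCase().includes(token)) {
      score += 2; // internal content match
    }
  }
  return score;
}

// Rank by descending relevance, dropping zero-score docs
const ranked = (docs: Doc[], query: string) =>
  docs
    .map(doc => ({ doc, score: scoreDoc(doc, query) }))
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score);
```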
Engineered for Stasis Through Inferred Memory environments. Agent pipelines accumulate chronological history rather than overwriting identity state.
Most memory systems pick one architecture and inherit its failure mode. Mycelial Brain runs all three in parallel; when the Interpretive Boundary is enforced, each layer's mitigation covers another layer's blind spot.
| Architecture | Mechanism | Primary Failure Mode | Our Mitigation |
|---|---|---|---|
| Vector Database | Embed data sources; retrieve via semantic similarity | **Ranking as Reality:** users act on surfaced results without realizing ranking is an editorial choice | Tag-based deprecation, staleness flags, HYPOTHESIS labeling for low-score results |
| Structured Ontology | Explicit objects, relationships, and actions | **Blindness to Emergence:** the system is silent on any relationship not pre-defined in the schema | Earned Structure: schema grows from reality via `promote_to_ontology()`, not imposed on it |
| Signal Fidelity | High-fidelity data exhaust (transactions, commits, telemetry) | **Authoritative Illusion:** clean input creates a false sense of high judgment quality at the output layer | Interpretive Boundary: all signal-derived claims labeled HYPOTHESIS until outcome-encoded |
Unlike loud failures (crashes, obvious errors), World Model failures are silent. The system degrades gradually and presents findings with "calm, structured confidence" — masking data drift, logic errors, or stale sources. The answers still sound good.
Prevention: every output carries a confidence classification, staleness tagging flags docs older than 90 days, and outcome encoding marks unverified claims as hypotheses — never facts.
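A sketch of the 90-day staleness check, assuming each document carries an ISO-8601 `date` field like the outcome-encoding examples below:

```ts
// Flag documents older than 90 days as stale.
// The `date` field name follows the outcome-encoding example; adjust to your schema.
const STALENESS_DAYS = 90;

function isStale(docDate: string, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(docDate).getTime();
  return ageMs > STALENESS_DAYS * 24 * 60 * 60 * 1000;
}

isStale("2026-03-15"); // true once the doc is more than 90 days old
```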
Every output must be classified. Presenting high-confidence facts and low-confidence inferences at the same salience level is a fundamental architectural failure.
**Fact.** Hard facts from structured ontology, outcome-encoded nodes, or high-fidelity signal (GitHub commits, calendar events, financial transactions). Retrieval score ≥ 7.

"Fact: Contract signed. Value $31,800. Date: 2026-03-16."
"Fact: First customer onboarded. Date: 2026-03-15."

**Hypothesis.** Outputs involving judgment calls: trends that might be noise, correlations that may not be causal, or low-score retrievals (score < 4). Never stated as fact.

"Based on recent notes, it appears you may be pivoting toward AI architecture. Confirm before planning around this."
What separates a living world model from a static archive. Three data points required per node — without the third, the system stagnates.
Encode the result of any action. Attaches outcome data to the referenced document node and tags it for downstream pattern detection.
```js
// Encode a real-world result
{
  action_doc: "doc-27",
  action_summary: "Sent demo to James Green",
  outcome: "James signed up as first customer",
  outcome_type: "success",
  signal_fidelity: "high",
  date: "2026-03-15"
}
```
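Like the other tools, this payload would travel in a JSON-RPC `tools/call` envelope. The tool name below is hypothetical, since the original does not name the outcome-encoding tool:

```js
{
  method: "tools/call",
  params: {
    // "brain_encode_outcome" is a hypothetical name, not confirmed by the docs
    name: "brain_encode_outcome",
    arguments: {
      action_doc: "doc-27",
      outcome: "James signed up as first customer",
      outcome_type: "success"
    }
  }
}
```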
When 5+ docs share a tag cluster with ≥3 successful outcomes, the system flags the pattern for confirmation and promotes it to a verified skill node.
```js
// Earned structure — schema grows from reality
{
  trigger: "5+ docs tagged miyawaki",
  outcome_type: "success",
  count: 8,
  proposed_node: "Miyawaki Installation",
  status: "awaiting_confirmation"
}
```
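A sketch of the trigger logic in TypeScript; the field names (`tags`, `outcome_type`) follow the examples above, but the actual node schema may differ:

```ts
// Earned-structure trigger: 5+ docs in a tag cluster, 3+ successful outcomes.
// Field names are illustrative; the real node schema may differ.
interface OutcomeDoc {
  tags: string[];
  outcome_type?: "success" | "failure";
}

function shouldProposeNode(docs: OutcomeDoc[], tag: string): boolean {
  const cluster = docs.filter(d => d.tags.includes(tag));
  const successes = cluster.filter(d => d.outcome_type === "success");
  return cluster.length >= 5 && successes.length >= 3;
}
```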
Not all inputs carry equal weight. The system scores claims accordingly.
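The published material does not give the fidelity weights, so the tier values below are placeholders that only illustrate the idea:

```ts
// Weight a claim by the fidelity of its source signal.
// Tier weights are placeholders, not the system's actual values.
const FIDELITY_WEIGHT: Record<string, number> = {
  high: 1.0,   // e.g. commits, transactions, calendar events
  medium: 0.6, // e.g. meeting notes
  low: 0.3     // e.g. unverified free-form input
};

const weightedScore = (baseScore: number, fidelity: string) =>
  baseScore * (FIDELITY_WEIGHT[fidelity] ?? FIDELITY_WEIGHT.low);
```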
All tools speak JSON-RPC 2.0. Compatible with any MCP client in Go, TypeScript, Python, or any language with HTTP support.
Insert new chronological intelligence into the graph. Appends — never overwrites.
```js
{
  method: "tools/call",
  params: {
    name: "brain_write",
    arguments: {
      path: "doc-42",
      content: "...",
      tags: ["project", "outcome"]
    }
  }
}
```
Fetch documents scored by token matching. Tags score 3 pts, content matches score 2 pts.
```js
{
  method: "tools/call",
  params: {
    name: "brain_search",
    arguments: {
      query: "soil treatment outcomes",
      limit: 5
    }
  }
}
```
Directly stream a targeted document by path. Bypasses search inference entirely.
```js
{
  method: "tools/call",
  params: {
    name: "brain_read",
    arguments: {
      path: "doc-112"
    }
  }
}
```
Compile the full document index. Use for offline ingestion, audits, or bulk operations.
```js
{
  method: "tools/call",
  params: {
    name: "brain_list",
    arguments: {}
  }
}
```
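Putting the protocol together, a client round-trip might look like the sketch below; `MYCELIAL_BRAIN_URL` and the response shape are assumptions, so check your server's actual result format:

```ts
// Hypothetical round-trip: list the index, then read one document.
// MYCELIAL_BRAIN_URL and the shape of the JSON result are assumptions.
async function callBrain(name: string, args: Record<string, unknown>) {
  const res = await fetch(MYCELIAL_BRAIN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: Date.now(),
      method: "tools/call",
      params: { name, arguments: args }
    })
  });
  return res.json();
}

const index = await callBrain("brain_list", {});
const doc = await callBrain("brain_read", { path: "doc-112" });
```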
Stasis Through Inferred Memory — a physics-grounded Layer 0 AI alignment framework. Not middleware. Not a prompt wrapper. A substrate-level architecture that governs how AI agents accumulate, infer, and act — derived from the same principles that sustain biological systems over centuries.
Seven axioms. Formally specified. Submission-ready for FAccT/AIES. The STIM Protocol is an independent open-source standard — the Mycelial Brain is one implementation of it.
Read the white paper (v7.0009) ↗

Anyone can deploy this MCP server tomorrow. What they cannot replicate: months of outcome-encoded loops, high-fidelity data exhaust, and verified operational telemetry already running in your system.
Every day the loop runs, the moat deepens.
See it in action →