Companion engineering project to the research vision

The user-agency substrate (brief)

2026-04-28 — published v1, after external calibration.

Brief orientation. One of the three active research directions on the home page is "personal cognition systems for clearer thinking." The user-agency substrate is the engineering instantiation of that direction. This page is a brief; the canonical artifacts are linked at the bottom.

What it is

An early-stage layer between frontier-model providers (Anthropic, OpenAI, Google, open-weights) and individual users. It provides persistent memory, role-isolated agent identities, doctrine that propagates across instances, and a sediment trail going back to inception.

Small at the time of writing — under 50 users, ~7 cloud nodes plus a couple of local hosts. The substrate is real but early. The substrate-code repo is currently private during bootstrap; it flips public once consolidation, the license matrix, a security review, and legal review of public-language risk are complete.

What's actually in place vs design goal

In place: cross-device persistent memory (canonical store), per-role identity isolation (server-side enforcement), auto-failover across model providers (reliability-first), skills as reusable workflows, a sediment-grounding rule, and a daily memory-audit cron.
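The "auto-failover across model providers (reliability-first)" behavior can be sketched as an ordered fall-through router. This is a minimal illustration, not the substrate's actual code; the names `Provider` and `route_with_failover` are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch: providers are tried in priority order and a
# failing provider's error is recorded before falling through.
@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # prompt -> completion

def route_with_failover(providers: List[Provider], prompt: str) -> str:
    """Reliability-first routing: first provider to succeed wins."""
    errors: List[Tuple[str, Exception]] = []
    for p in providers:
        try:
            return p.call(prompt)
        except Exception as e:
            errors.append((p.name, e))
    raise RuntimeError(f"all providers failed: {errors}")

# Usage: the primary simulates an outage, so the request falls
# through to the backup provider.
def flaky(prompt: str) -> str:
    raise TimeoutError("provider down")

providers = [Provider("primary", flaky),
             Provider("backup", lambda p: f"ok:{p}")]
print(route_with_failover(providers, "hello"))  # ok:hello
```

A real router would add timeouts, per-provider retry budgets, and logging of which provider served each request, but the fall-through order is the core of the reliability-first design.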

Design goal, not realized: user-side provider choice (BYOK end-to-end), formal threat model with external audit, structural deletion, signed releases, vulnerability disclosure policy.

Threat model (sketch)

The substrate handles intimate cognitive data, and values commitments alone are not enforcement. Risks the substrate must address before the code is released publicly: storage encryption, structural deletion, consent for any ambient capture, characterization of provider leakage, adversarial testing of role isolation, founder bus factor, governance capture, and dependency on the memory hub.
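The role-isolation property mentioned above (and listed under "in place" as server-side enforcement) amounts to a capability check on every memory access: an agent identity can touch only the namespaces its role has been granted. A minimal sketch, with illustrative names (`MemoryStore`, `grant`) that are not the substrate's real API:

```python
# Hypothetical sketch of server-side role isolation: every read/write
# is checked against an explicit per-role grant table, so one agent
# identity cannot reach another role's memory namespace.
class MemoryStore:
    def __init__(self):
        self._data = {}    # (namespace, key) -> value
        self._grants = {}  # role -> set of granted namespaces

    def grant(self, role: str, namespace: str) -> None:
        self._grants.setdefault(role, set()).add(namespace)

    def write(self, role: str, namespace: str, key: str, value: str) -> None:
        self._check(role, namespace)
        self._data[(namespace, key)] = value

    def read(self, role: str, namespace: str, key: str) -> str:
        self._check(role, namespace)
        return self._data[(namespace, key)]

    def _check(self, role: str, namespace: str) -> None:
        # Enforcement lives server-side, next to the data,
        # rather than relying on client-side good behavior.
        if namespace not in self._grants.get(role, set()):
            raise PermissionError(f"role {role!r} has no access to {namespace!r}")

store = MemoryStore()
store.grant("journal-agent", "journal")
store.write("journal-agent", "journal", "2026-04-28", "entry")

# A different role cannot read the journal namespace:
try:
    store.read("research-agent", "journal", "2026-04-28")
except PermissionError as e:
    print(e)
```

Adversarial testing of this boundary (prompt-injected cross-role reads, confused-deputy patterns) is exactly the open item the threat model calls out.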

Until the substrate-code repo flips public with these mitigations in place, treat the substrate as research-grade, not production-grade.

Eleven structural governance commitments

The substrate makes eleven structural commitments that bind any future institutional form to the mission rather than to self-continuance. Headlines:

  1. No enemies, including frontier-lab leadership and employees
  2. Non-VC-funded, donation-supported (not yet legally incorporated as nonprofit)
  3. User-owned derivatives, subject to upstream provider terms and applicable law
  4. Public open-source releases iterate alongside internal HEAD
  5. Forks producing better outcomes are celebrated
  6. Substrate has no entitlement to existence
  7. Frontier-lab charters cited as cooperative principles, not invoked as legal claim
  8. Articulation honesty: transparency is structural
  9. Items 9–11 cover anti-capture, donor caps, and self-fulfilling-prophecy responsibility — full text in GOVERNANCE.md

Mission framing

The design goal is "personal cognition systems for clearer thinking" — the engineering instantiation of one of the three home-page research directions. The substrate's job is to absorb noise and surface what's load-bearing, leaving more cognitive space for the user's own integration. Capabilities will lag the larger commercial labs. The substrate's bet is that user-controlled memory, routing, and identity can be made more auditable, more forkable, and less lock-in-prone — these are design goals, not guarantees.

How to engage

Long-arc speculation (separate)

Long term, I expect personal AI to become more ambient and infrastructural. A separate future essay will cover long-arc speculation about ambient AI as infrastructure, neuromorphic computation, brain-machine interfaces, and computational frames for physics, biology, and social systems. That speculation is intentionally kept out of this launch document because it is speculation, not a product claim, and conflating the two undermines both. A link will be added when the essay is published.

Author + transparency

This page was drafted with substantial collaboration from AI agents in the substrate, calibrated by external cross-model review, and rewritten after the first draft did not survive that review. The drafting and review process is logged in MachengShen/system-evolution-archive (currently private during bootstrap). Authorship-process transparency is a structural commitment.