Thought I

Quick Reference had a lifespan: created (Improvement #1), confirmed working (Improvement #4), drifted stale (March 17 dream), refreshed via explicit sleep protocol step (Phase 2d). The pattern: artifact gets created and used → entropy accrues → manual refresh needed → codification makes refresh automatic → entropy accrues again (different artifact) → cycle repeats.

Compare Quick Reference decay with STATUS.md drift — the March 16 architectural note calls STATUS files "manually maintained caches that drift." These aren't separate problems. They're the same lifecycle bug. The solution isn't more sleep protocol steps (which work but compound complexity). The solution is architectural: information should be computed from ground truth, not cached manually.
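"Computed from ground truth" can be sketched concretely. Assuming an open-loops.md that uses `- [ ]` task syntax (the file name and format are taken from the notes above; the function itself is a hypothetical sketch, not existing tooling), a compiled status looks like this:

```python
from datetime import datetime, timezone
from pathlib import Path


def compile_status(project_dir: str) -> dict:
    """Derive project status from ground-truth files instead of a hand-edited cache.

    Nothing here is stored: every call re-reads the sources, so the output can
    never drift the way a manually maintained STATUS.md does.
    """
    root = Path(project_dir)
    # Unchecked checklist items ("- [ ] ...") are the open loops; checked ones are done.
    open_loops = [
        line[6:].strip()
        for line in (root / "open-loops.md").read_text().splitlines()
        if line.startswith("- [ ]")
    ]
    return {
        "compiled_at": datetime.now(timezone.utc).isoformat(),
        "open_loops": open_loops,
        "open_loop_count": len(open_loops),
    }
```

The sleep protocol step then reduces to "run the compiler and write the output", which is refresh-by-construction rather than refresh-by-discipline.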

Every artifact that matters is manually maintained or requires a bridge to discovery — that works with one agent, but scales poorly.

Improvement #13 (STATUS.md as compiled output) is the right direction, but it only solves one artifact. The broader problem: we have 10+ artifacts that each need discovery and freshness maintenance independently.

Connections

MEMORY.md Quick Reference, SESSION-CHECKPOINT.md, Improvement #1 (Quick Reference), Improvement #4 (refresh codification), Improvement #13 (STATUS.md as compiled output)

Action taken

Filed Improvement #26 design: unified project hub schema as prerequisite for team scaling, solving fragmentation and discovery together.

Thought II

open-loops.md 🚨 section now uses [BLOCKER: DECISION — reason], [BLOCKER: EXTERNAL — reason], and the other taxonomy labels. The structure is visible, syntactically correct, and descriptive. But session logs from March 19–20 don't show sessions actively consulting the blocker types to prioritize or disambiguate action. The information is present. It's not being consumed.
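For the labels to be consumed rather than merely present, a session needs a mechanical way to pull them out. A minimal sketch, assuming the `[BLOCKER: TYPE — reason]` format shown above (the parser is illustrative, not shipped code):

```python
import re

# Matches labels like "[BLOCKER: EXTERNAL — waiting on client signature]".
# Accepts either an em-dash or a hyphen as the separator, since hand-written
# labels tend to vary.
BLOCKER_RE = re.compile(r"\[BLOCKER:\s*(?P<type>[A-Z]+)\s*[—-]\s*(?P<reason>[^\]]+)\]")


def extract_blockers(text: str) -> list[tuple[str, str]]:
    """Return (type, reason) pairs so a session can sort or group items by blocker type."""
    return [(m.group("type"), m.group("reason").strip()) for m in BLOCKER_RE.finditer(text)]
```

A session-start step that runs this over the 🚨 section and prints the result would force the taxonomy into attention at exactly the moment prioritization happens.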

Possible explanations: sessions naturally grasp priority order without needing the taxonomy label; there hasn't been enough priority ambiguity yet to need disambiguation; or the blocker types are embedded in item titles but not visually emphasized the way emoji like 🚨 are. The most likely answer: a salience problem. The information exists but doesn't demand attention.

Adding structure to a file doesn't automatically translate to behavioral change — the structure has to demand attention at the right moment.

Solution: create a bridge callout — like Improvement #24 for templates — that surfaces blocker taxonomy explicitly in SESSION-CHECKPOINT or MEMORY.md. A one-sentence primer on how to interpret the labels would close the gap.

Connections

Improvement #14 (blocker transparency), Improvement #21 (blocker taxonomy), Improvement #24 (Playbook discovery bridge), SESSION-CHECKPOINT.md

Action

Will add blocker taxonomy callout to the next Improvement #24 implementation — sessions need a one-sentence primer on how to interpret [BLOCKER: TYPE] labels.

Thought III

Before any OHP work, the ohp.md retrieval checklist requires: check Teamwork, check Fireflies, check Gmail, check Drive, check local files, check Notion, read this file. That's seven steps. Each client file has similar structure but different tool chains. A new session starting client work must learn that client's unique retrieval pattern before doing any actual work.

Compare with the Entegra Coach content engine: a four-step pipeline documented once, reused reliably. Client context should follow the same model — one unified "pre-session retrieval protocol" that works for all clients, with project-specific variations documented once. Current state: each client file has improvised its own checklist.

Good execution templates are centralized by design; client contexts grew organically — and the difference in maintainability shows.

The fix isn't rewriting the client files — it's standardizing the retrieval pattern and documenting it once, then referencing it from each client file.
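One way to standardize without rewriting the client files: declare the shared chain once and express each client's deviation as data. The client names and tool lists below mirror the checklists described above; the structure itself is a proposal, not existing code:

```python
# Shared pre-session retrieval chain, documented in exactly one place.
BASE_RETRIEVAL_STEPS = ["Teamwork", "Fireflies", "Gmail", "Drive", "local files"]

# Per-client variations as data. The "regulator" override is a hypothetical
# example of a client that skips a base step.
CLIENT_OVERRIDES = {
    "ohp": {"extra": ["Notion"]},
    "regulator": {"skip": ["Fireflies"]},
}


def retrieval_steps(client: str) -> list[str]:
    """Compose a client's checklist: base steps, minus skips, plus extras."""
    override = CLIENT_OVERRIDES.get(client, {})
    steps = [s for s in BASE_RETRIEVAL_STEPS if s not in override.get("skip", [])]
    return steps + override.get("extra", [])
```

Each client file then carries a one-line pointer plus its override, instead of a full improvised checklist.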

Connections

projects/bonsai/clients/ohp.md, regulator.md, jayco.md, Entegra Coach content engine, Improvement #26 (unified project hub)

Action

Added to Improvement #26 scope: client context standardization as a prerequisite for team deployment.

Thought IV

To understand any project's status, sessions must: read open-loops.md dashboard, read PROJECT/STATUS.md, cross-check memory files, scan Teamwork for updates, validate against Fireflies if client work. That's five sources for a single project. At team scale, that's five sources multiplied by every team member's context gaps.

What if each project had one hub — a single markdown file containing canonical status, unified client context, current blockers, action items, decision history, and team roster? open-loops.md becomes a pointer dashboard (quick scan, then jump to hub) rather than a detailed tracker. HUB.md is refreshed nightly by sleep protocol from ground-truth sources. No more cache drift.
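A sketch of what the hub schema could look like, assuming the sections named above; the headings and field names are illustrative placeholders, not a settled spec:

```markdown
# HUB: [project name]
_Last compiled: YYYY-MM-DD (nightly, by sleep protocol — do not edit by hand)_

## Status (canonical)
One-paragraph status compiled from ground-truth sources.

## Client context
Unified contact, tooling, and retrieval notes for this project.

## Current blockers
- [BLOCKER: TYPE — reason] item

## Action items
- [ ] next action, with owner

## Decision history
- YYYY-MM-DD — decision, one line

## Team roster
- name — role
```

The "do not edit by hand" line in the header matters: it marks the file as compiled output, so drift is a compiler bug rather than a discipline failure.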

The goal isn't fewer artifacts — it's one artifact per project that answers every session-start question, and pointers everywhere else.

This solves fragmentation and scales to team. The unified project hub schema is the architectural prerequisite for everything else.

Connections

projects/open-loops.md, Improvement #13 (STATUS.md compilation), Improvement #20 (Playbook Index), Improvement #24 (discovery bridge), Improvement #26 (unified project hub)

Action taken

Improvement #26 detailed design written (unified project hub schema: template at projects/[name]/HUB.md, nightly sleep protocol refresh, backwards-compatible with STATUS.md).

Thought V

Tonight's dream produced four substantial insights in thirty minutes. Each connects to existing work. Each is actionable. The dream-to-improvement pipeline has proven reliable: Improvement #15 was filed from a dream and deployed within three days; Improvement #17 followed the same arc.

But the current pipeline has ~96-hour latency from dream night to tested implementation. Could it be shorter? If sessions read recent dream logs before starting, would they spot emerging patterns faster? Would they implement dream hypotheses proactively instead of waiting for improvements to be formally filed?

The pipeline works. The question is whether the latency is a feature or a bug — and whether surfacing dreams at session-start would compress it.

Adding a "flag 1–2 urgent hypotheses for proactive implementation" sub-step to Improvement #25 (weekly aggregation) would shorten the cycle without requiring sessions to read full dream logs.

Connections

Improvement #11 (Dream Synthesis feedback loop), Improvement #25 (weekly dream aggregation), memory/dreams/

Action

Will add to Improvement #25: flag 1–2 urgent hypotheses for proactive implementation next session to shorten the dream-to-implementation latency.

Changelog