Public systems map

One website, seven linked systems.

The site is not just a homepage. It is a loop that connects the repo, the notes vault, the public website surfaces, the grounded assistant, training capture, public research output, and agent lanes. The point is to keep the public surface readable while the backend stays rigorous and reusable.

Code vault

Repo-owned public pages, API routes, dispatch spine, and evidence-aware scripts.

Content vault

Notes, prompts, planning packets, and canonical language that feed retrieval and writing.

Grounded website core

Public page surfaces, grounded assistant, source packet rendering, and zero-cost-first delivery. This is the user-facing layer that should stay readable even when the backend system is much deeper.

Training capture

Interaction logs and reviewed data packets for model improvement without blind auto-training.

Research + agent lanes

Benchmark pages, article outputs, HYDRA browse evidence, and dispatchable website work packets.

Seven zones

Each zone has one job

The site gets messy when one layer tries to do every job. The fix is to keep each zone narrow and make the connections explicit.

01

Code vault

The repo is the implementation authority.

  • Public pages under docs/
  • Assistant/backend routes in scripts/aetherbrowser/
  • Dispatch and system scripts under scripts/system/
02

Content vault

Notes hold the raw ideas before they become public claims.

  • Obsidian notes as retrieval corpus
  • Planning packets stay private first
  • Useful fragments move into the site when they are clear enough
03

Website surfaces

Readable, evidence-first pages for humans.

  • Homepage for orientation
  • Research pages for proof and architecture
  • Demo pages for interactive slices
04

Training capture

Interactions become training only after review.

  • Raw interaction logs
  • Reviewed SFT packets
  • No blind train-on-everything loop
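The review gate above can be sketched in a few lines: raw logs are never trained on directly, and only explicitly approved records become SFT pairs. The field names are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass

# Minimal sketch of the review gate: a log must be both reviewed
# and approved before it is promoted toward training data.
@dataclass
class InteractionLog:
    prompt: str
    response: str
    reviewed: bool = False
    approved: bool = False

def promote_to_sft(logs):
    """Return only reviewed-and-approved logs as SFT-style pairs."""
    return [
        {"input": log.prompt, "output": log.response}
        for log in logs
        if log.reviewed and log.approved  # no blind train-on-everything
    ]
```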
05

Grounded assistant

The assistant should retrieve first and cite before it improvises.

  • Search notes/ and docs/
  • Route into Hugging Face or local AetherBot after retrieval
  • Keep source packets visible
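The retrieve-first flow above can be sketched as one function: search local sources, route to a model with that context, and keep the source packet attached to the answer. The corpus shape, the keyword matcher, and the model call are all stand-ins for the real backend.

```python
# Sketch of the grounded-assistant flow: retrieve from local
# notes/ and docs/ first, then route to a model, and return the
# sources alongside the answer so the packet stays visible.
def answer(question, corpus, model_fn):
    """Ground the question in local sources, then route to a model."""
    hits = [
        doc for doc in corpus
        if any(word in doc["text"].lower() for word in question.lower().split())
    ]
    context = "\n".join(doc["text"] for doc in hits)
    reply = model_fn(question, context)
    # Source packet stays visible alongside the answer.
    return {"answer": reply, "sources": [doc["path"] for doc in hits]}
```

Swapping `model_fn` between a Hugging Face call and a local model is then a routing decision, not a rewrite.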
06

Public research output

The site needs clean places for what is proven and what is still exploratory.

  • Benchmarks and verification packets
  • Architecture explainers
  • Article landing routes
07

Agent control

Website work should be packetized so it can be repeated or delegated safely.

  • Dispatch capabilities for website work
  • HYDRA browse evidence for comparisons
  • Swarm lanes only when the task is bounded
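"Bounded" in the last bullet can be made concrete: a packet is only swarm-eligible when its scope is explicit and its step budget is finite. The shape and the threshold below are hypothetical, not the dispatch spine's real contract.

```python
from dataclasses import dataclass

# Illustrative shape for a dispatchable website work packet.
@dataclass(frozen=True)
class WorkPacket:
    task: str
    scope: tuple[str, ...]   # files or routes the packet may touch
    max_steps: int           # hard budget for agent actions

def eligible_for_swarm(packet: WorkPacket) -> bool:
    """Only bounded packets go to swarm lanes: explicit scope, finite budget."""
    return bool(packet.scope) and 0 < packet.max_steps <= 50
```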
Operating loop

The website gets better through one loop

This is the bounded version of the bigger dream: one assistant, one evidence loop, one training loop, and then expand.

1. Ask

A public question, demo interaction, or client task hits the site.

2. Ground

The system retrieves from local notes and docs before it routes to a model.

3. Curate

The interaction is logged, reviewed, and only then promoted toward training data.

4. Publish

Better answers, cleaner pages, and stronger research packets feed the next article and product surface.
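The four steps above can be sketched as a single pass, with each stage a plain function. Every stage body here is a stand-in; the point is the ordering: ground before routing, review before promoting.

```python
# Ask -> Ground -> Curate -> Publish as one pass over the loop.
def run_loop(question, corpus, model_fn, review_fn):
    # 1. Ask: a question or task hits the site.
    # 2. Ground: retrieve from local sources before routing to a model.
    sources = [d for d in corpus if question.lower() in d["text"].lower()]
    answer = model_fn(question, sources)
    # 3. Curate: log the interaction; promote only if review passes.
    record = {"question": question, "answer": answer, "sources": sources}
    promoted = [record] if review_fn(record) else []
    # 4. Publish: the answer and any promoted records feed the next cycle.
    return {"published": answer, "promoted": promoted}
```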

Agent lanes

Agent work should stay packetized

The site can plug into HYDRA and the dispatch spine, but only through bounded lanes. The website should not become a dumping ground for uncontrolled automation.

Website surface lane

Use the docs/research/demo routes as the mutable public surface.

Assistant lane

Use the grounded assistant backend and source rendering before adding more tools or autonomy.

Research lane

Use deterministic browse evidence for reference scans and competitive comparisons.