This page is the whole-workspace architecture map for the current monorepo state. It shows how the major architecture families in this repository relate to each other, which artefacts are authoritative for each stream, and where to go when you need code-grounded depth. Paperclip remains the densest implemented control-plane system in the repo, but it is not the only architecture stream that matters. ADR-0001 records the current interim monorepo model, and ADR-0002 records the future platform-versus-tenant split target. Read this page as the workspace map first, then drop into the stream-specific entrypoints for implementation detail.

How To Use This Page

Use this page for:
  • the current workspace topology
  • ownership and storage boundaries
  • end-to-end data-flow tracing
  • the relationship between platform, native runtime, operating corpus, contracts, prototypes, and research streams
Use the supporting docs for depth:
Need | Start here
Whole-workspace artefact inventory | Architecture Artefact Register
Architecture governance and document versioning | Master Document Control
Primary platform entity ownership | Data Model
Logs, traces, provenance, and warehouse signal flow | Observation Architecture
Route and service owners for tables | Schema Map
API scopes and heartbeat contract | API Overview
Deployment modes and ingress | Deployment Operations
Additive intelligence stack runtime | Runtime Services
Adapter execution contract | Adapters Overview
Durable architectural decisions | Architecture Records
The controlled inventory, authority rules, and baseline versioning live in the Architecture Artefact Register and Master Document Control.

Workspace Scope

The repository currently contains seven first-class architecture families:
  1. Platform canonical architecture in local-pc/docs/start/, api/, deploy/, and adapters/
  2. Tremor operating corpus in local-pc/docs/operating/ and local-pc/docs/companies/tremor/
  3. Deep specs and implementation docs in local-pc/doc/architecture/, local-pc/doc/spec/, and related deep-dive docs
  4. Plans, contracts, audits, and release governance in local-pc/doc/plans/ and execution or audit records
  5. Schema and contract packages in mom/contracts/, paperclip-intake-v1/, and plugin contract packages
  6. Tremor native runtime and prototype network docs in tremor-native/
  7. Root-level meta and research notes at the workspace root
This page maps all seven families. It does not flatten them into one runtime. Instead, it shows their authority, boundaries, and relationships so the workspace can be read as one portfolio without pretending that every stream is the same system.

Unified Workspace Artefact Map

The workspace architecture is governed as a whole-workspace map. Every family is first-class in the inventory, but each family still has its own authority level and purpose.
Family | Authority | Primary entrypoint | Notes
Platform canonical architecture | Primary canonical | Architecture and Data Model | Paperclip control-plane architecture, storage boundaries, runtime flows, and topology
Tremor operating corpus | Controlled overlay | local-pc/docs/companies/tremor/wiki/architecture-and-schema.md | Living Tremor company and operating view layered on top of the platform model
Deep specs and implementation docs | Supporting or reference | local-pc/doc/architecture/ and local-pc/doc/spec/ | Rich detail that may still hold useful warehouse, memory, or implementation context
Plans, contracts, audits, and release governance | Supporting or time-bound | local-pc/doc/plans/ and local-pc/doc/treaaa-*-execution-contract.md | In-flight execution control, audits, and rollout history rather than stable architecture authority
Schema and contract packages | Canonical within contract scope | mom/contracts/, paperclip-intake-v1/, and plugin intake contracts | Source of truth for specific wire contracts, intake schemas, and portable package shapes
Tremor native runtime and prototype network docs | Canonical stream | tremor-native/docs/runtime-architecture.md | Native host/client runtime stream with current-state architecture plus prototype history
Root-level meta and research notes | Reference | findings.md, progress.md, task_plan.md, networking_insights_summary.md, and similar notes | Useful for reconstruction and research but not authoritative for the controlled baseline

Stream Relationships

The important cross-family relationships are:
  • the platform canonical architecture is the primary control-plane runtime and storage system
  • the Tremor operating corpus is the live tenant and operating layer for that platform
  • the deep specs preserve detail that may need promotion back into the controlled baseline
  • the plans, contracts, and audits explain sequencing, rollout, and historical control decisions
  • the schema and contract packages define bounded source-of-truth contracts that support multiple streams
  • the Tremor native runtime is a separate but related product/runtime stream in the same workspace
  • the root-level notes explain why work happened, but they do not overrule controlled documents
Ghost OS has been externalized from this workspace and should now be treated as a separate sibling project rather than an in-scope architecture stream.

Primary Platform Context

Paperclip has five primary subsystems:
  • Web control plane in ui/, used by board operators to manage companies, agents, issues, approvals, costs, routines, and plugins.
  • HTTP API and orchestration layer in server/src, which enforces auth, applies policy, coordinates business logic, and records durable evidence.
  • Persistence layer in packages/db, which defines the canonical first-party schema in PostgreSQL.
  • Execution adapters in packages/adapters, which bridge the control plane to local or remote agent runtimes.
  • CLI control surface in cli/, which bootstraps, validates, diagnoses, invokes, and repairs local instances.

What Owns What

Subsystem | Owns | Depends on | Does not own
ui/ | operator workflows, page composition, local presentation state | /api/*, auth session, shared contracts | persistence, adapter execution, vendor schemas
server/src | request boundary, policy, orchestration, derived summaries, execution coordination | packages/db, adapters, auth, storage, analytics | raw UI rendering, vendor product internals
packages/db | first-party tables, relations, migrations, defaults | PostgreSQL, Drizzle | route semantics, UI state, vendor metadata
packages/adapters | adapter contracts, runtime bridging, output capture | server orchestration, runtime environment | company governance model, first-party business logic
cli/ | bootstrap, diagnostics, launch, repair, exports/imports | API/server config, host environment, db state | browser UI, canonical entity ownership
The key boundary is simple: Paperclip orchestrates work and records first-party state; it does not absorb vendor schemas or let adapters become business-logic owners.

Data Ownership And Storage

The storage model has three layers:
  1. Canonical first-party state in PostgreSQL
  2. Derived and high-volume analytical facts in ClickHouse
  3. Vendor-owned product stores outside the first-party boundary
Ownership rules:
  • PostgreSQL is canonical for Paperclip business entities, runtime state, audit data, plugin metadata, evaluation state, and memory/coordination records.
  • ClickHouse is canonical for high-volume telemetry, warehouse rollups, and observability marts derived from control-plane activity.
  • Vendor tools keep their own product stores even when they share infra with Paperclip.
  • Plugin mappings are first-party; plugin source-system records are not.
  • Object storage may hold artefact bytes, but Paperclip owns only the references, manifests, and lifecycle metadata it stores.
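The ownership rules above can be sketched as a simple routing table. This is a hypothetical illustration, not code from the repo: the record kinds and the `canonicalStore` helper are invented names, and the real system enforces ownership through its schema and services rather than a lookup like this.

```typescript
// Hypothetical sketch: each record kind has exactly one canonical store.
// Record kinds and store names are illustrative, not the repo's actual enums.
type Store = "postgres" | "clickhouse" | "vendor" | "object-storage";

const CANONICAL_STORE: Record<string, Store> = {
  businessEntity: "postgres",      // companies, agents, issues, approvals
  runtimeState: "postgres",        // runtime, audit, and coordination records
  pluginMapping: "postgres",       // mapping rows are first-party
  telemetryFact: "clickhouse",     // high-volume derived facts
  warehouseRollup: "clickhouse",   // observability marts
  vendorRecord: "vendor",          // source-system records stay vendor-owned
  artifactBytes: "object-storage", // Paperclip owns only references/manifests
};

function canonicalStore(kind: string): Store {
  const store = CANONICAL_STORE[kind];
  if (!store) throw new Error(`unknown record kind: ${kind}`);
  return store;
}
```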

End-To-End Data Flows

The control plane is easiest to reason about as a small set of repeatable flow families.

1. Identity And Access Flow

Board users and agents enter through different credentials, but both resolve into the same policy boundary in the API layer. Primary tables:
  • user, session, account, verification
  • board_api_keys, cli_auth_challenges
  • instance_user_roles, company_memberships
  • principal_permission_grants, invites, join_requests
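The "different credentials, same policy boundary" shape can be sketched as follows. All names here (`Credential`, `Principal`, `resolvePrincipal`) are hypothetical stand-ins; the point is only that both entry paths normalize into one principal type before any permission check runs.

```typescript
// Hypothetical sketch: board users enter via sessions, agents via board API
// keys, but both resolve into the same Principal shape at the API boundary.
type Credential =
  | { kind: "session"; sessionId: string; userId: string }
  | { kind: "boardApiKey"; keyId: string; agentId: string };

interface Principal {
  subjectType: "user" | "agent";
  subjectId: string;
  // Downstream checks would consult grant tables such as
  // principal_permission_grants and company_memberships.
}

function resolvePrincipal(cred: Credential): Principal {
  switch (cred.kind) {
    case "session":
      return { subjectType: "user", subjectId: cred.userId };
    case "boardApiKey":
      return { subjectType: "agent", subjectId: cred.agentId };
  }
}
```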

2. Board Read Flow

Board pages are projections of first-party state plus selected derived summaries. This is the common path behind dashboards, inbox counts, run histories, costs, and operational summaries.

3. Board Mutation Flow

Board mutations always go through the API policy boundary before they become durable rows. This covers changes to companies, agents, projects, goals, issues, approvals, routines, secrets, plugin settings, and instance settings.
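A minimal sketch of that ordering, with invented names (`checkPolicy`, `applyMutation`): the policy check is the gate, and a denied mutation never reaches the persistence layer.

```typescript
// Hypothetical sketch: mutations pass the policy boundary before any
// durable write. The allow-list and log are illustrative stand-ins.
interface Mutation {
  entity: string;
  action: "create" | "update" | "delete";
  actorId: string;
}

function checkPolicy(m: Mutation, allowed: Set<string>): boolean {
  return allowed.has(`${m.entity}:${m.action}`);
}

function applyMutation(m: Mutation, allowed: Set<string>, log: string[]): boolean {
  if (!checkPolicy(m, allowed)) {
    log.push(`denied ${m.entity}:${m.action} for ${m.actorId}`);
    return false; // never reaches the persistence layer
  }
  log.push(`persisted ${m.entity}:${m.action} by ${m.actorId}`);
  return true;
}
```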

4. Execution And Heartbeat Flow

Heartbeat execution is the most important cross-cutting data flow in the system. It connects work assignment, runtime invocation, run-scoped mutations, evidence capture, and downstream quality loops. This flow touches:
  • heartbeat_runs, heartbeat_run_events
  • issue_active_executions
  • run_file_writes, run_output_artifacts
  • activity_log
  • agent_task_sessions, agent_runtime_state, agent_wakeup_requests
  • issue, approval, document, asset, and work-product families when the run mutates business state
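The evidence-capture part of this flow can be sketched as an append-only event trail per run. The class and status names below are illustrative, not the repo's actual `heartbeat_runs` / `heartbeat_run_events` schema; the sketch only shows that every lifecycle transition leaves a durable event.

```typescript
// Hypothetical sketch: each heartbeat run appends an event per lifecycle
// transition, so the full run history can be reconstructed afterwards.
type RunStatus = "assigned" | "running" | "completed" | "failed";

interface RunEvent {
  runId: string;
  status: RunStatus;
  seq: number; // ordering stand-in for a real timestamp
}

class HeartbeatRun {
  events: RunEvent[] = [];

  constructor(public runId: string) {
    this.record("assigned");
  }

  private record(status: RunStatus) {
    this.events.push({ runId: this.runId, status, seq: this.events.length });
  }

  start() { this.record("running"); }
  finish(ok: boolean) { this.record(ok ? "completed" : "failed"); }
}
```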

5. Quality, Evaluation, And Memory Flow

Execution evidence does not stop at run completion. It feeds cost accounting, performance review, evaluation queues, and MOM coordination. This is the loop that turns agent activity into governance and improvement:
  • cost and finance facts quantify spend
  • performance tables summarize quality and intervention signals
  • evaluation tables attach judgment and curation
  • MOM tables record coordination and higher-level memory

6. Plugin And Integration Flow

Plugins extend the control plane without moving ownership out of first-party tables. The first-party system owns:
  • installation and configuration
  • per-company settings
  • mappings from first-party entities to source-system identifiers
  • job scheduling and job-run state
  • webhook receipts and operational logs
The source system owns the source record itself.
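The mapping boundary can be made concrete with a small sketch. The `PluginMapping` shape and `linkEntity` helper are hypothetical: the first-party side stores only the link from its own entity to an opaque source-system identifier, never the source record's contents.

```typescript
// Hypothetical sketch: first-party mapping rows reference source-system
// records by identifier only; the source record itself is never stored.
interface PluginMapping {
  firstPartyEntity: string; // e.g. an issue id in the first-party schema
  sourceSystem: string;     // e.g. "github" (illustrative)
  sourceIdentifier: string; // opaque id owned by the source system
}

const mappings: PluginMapping[] = [];

function linkEntity(entity: string, system: string, sourceId: string): PluginMapping {
  const m: PluginMapping = {
    firstPartyEntity: entity,
    sourceSystem: system,
    sourceIdentifier: sourceId,
  };
  mappings.push(m); // the mapping row is first-party; the source record is not
  return m;
}
```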

7. Deploy, Ingress, And Observability Flow

Deployment mode changes the ingress and auth envelope, not the canonical data model. Important consequences:
  • local development, private mode, and public mode share the same first-party schema model
  • the intelligence stack is additive around the control plane
  • the monitor may summarize external health, but it does not replace vendor UIs or absorb vendor metadata

Adjacent Runtime Stream: Tremor Native

This workspace also contains a native peer-to-peer runtime in tremor-native/. It is adjacent, not identical, to the control-plane architecture above. Current reality:
  • the host uses TremorEngine as the authoritative reducer
  • MultipeerManager handles discovery and unreliable MCSession transport
  • NetworkTransport is a host-side TCP listener path, not a full symmetric transport fabric
  • clients mirror state from envelopes and request a resync when revisionMismatch occurs
  • the host currently calculates diffs but broadcasts snapshots
Use tremor-native/docs/runtime-architecture.md for the code-grounded flow. Do not treat the older mesh-network and router documents as canonical runtime architecture unless they are updated to match current code.

Controlled Vs Reference Material

The authoritative classification now lives in the Architecture Artefact Register. Use the summary below for fast orientation. Treat these as controlled architecture sources:
  • local-pc/docs/start/architecture.md
  • local-pc/docs/start/data-model.md
  • local-pc/docs/api/overview.md
  • local-pc/docs/api/schema-map.md
  • local-pc/docs/deploy/overview.md
  • local-pc/docs/deploy/runtime-services.md
  • local-pc/docs/adapters/overview.md
  • local-pc/docs/architecture-records/overview.md
  • local-pc/docs/architecture-records/artifact-register.md
  • local-pc/docs/architecture-records/master-document-control.md
  • tremor-native/docs/runtime-architecture.md
Treat these as reference, history, or prototype material:
  • older deep-dive warehouse docs under local-pc/doc/architecture/
  • company operating wiki overlays under local-pc/docs/companies/tremor/
  • root-level networking notes
  • tremor-native/network_architecture.md
  • tremor-native/packet_failover_logic.md
  • tremor-native/mesh_network_prototype/
  • tremor-native/new_mesh_network_prototype/
  • root-level meta and research notes

Boundary Rules

These boundaries should remain stable:
  • ui/ should not learn persistence or vendor-schema rules.
  • server/src should stay the policy and orchestration boundary.
  • packages/db should remain the canonical authority for first-party row ownership.
  • packages/adapters should remain execution bridges rather than business-logic owners.
  • plugin mappings should not be confused with source-system ownership.
  • vendor product stores should stay outside the first-party schema, even when queried, mirrored, or colocated.
  • deployment topology may change ingress and protection, but should not change canonical data ownership.

Open Questions

The structural model is clear, but some product decisions remain intentionally open:
  • how much external-tool state should be summarized in first-party pages versus deep-linked
  • how much warehouse data should be promoted back into first-party operational summaries
  • whether hosted operator workflows should remain browser-first with CLI support or become equally CLI-native
  • when the monorepo can be cleanly split into platform/runtime and tenant/company repos without breaking the architecture contracts recorded in ADRs