How To Use This Page
Use this page for:

- the current workspace topology
- ownership and storage boundaries
- end-to-end data-flow tracing
- the relationship between platform, native runtime, operating corpus, contracts, prototypes, and research streams
| Need | Start here |
|---|---|
| Whole-workspace artefact inventory | Architecture Artefact Register |
| Architecture governance and document versioning | Master Document Control |
| Primary platform entity ownership | Data Model |
| Logs, traces, provenance, and warehouse signal flow | Observation Architecture |
| Route and service owners for tables | Schema Map |
| API scopes and heartbeat contract | API Overview |
| Deployment modes and ingress | Deployment Operations |
| Additive intelligence stack runtime | Runtime Services |
| Adapter execution contract | Adapters Overview |
| Durable architectural decisions | Architecture Records |
Workspace Scope
The repository currently contains seven first-class architecture families:

- Platform canonical architecture in `local-pc/docs/start/`, `api/`, `deploy/`, and `adapters/`
- Tremor operating corpus in `local-pc/docs/operating/` and `local-pc/docs/companies/tremor/`
- Deep specs and implementation docs in `local-pc/doc/architecture/`, `local-pc/doc/spec/`, and related deep-dive docs
- Plans, contracts, audits, and release governance in `local-pc/doc/plans/` and execution or audit records
- Schema and contract packages in `mom/contracts/`, `paperclip-intake-v1/`, and plugin contract packages
- Tremor native runtime and prototype network docs in `tremor-native/`
- Root-level meta and research notes at the workspace root
Unified Workspace Artefact Map
The workspace architecture is governed as a whole-workspace map. Every family is first-class in the inventory, but each family still has its own authority level and purpose.

| Family | Authority | Primary entrypoint | Notes |
|---|---|---|---|
| Platform canonical architecture | Primary canonical | Architecture and Data Model | Paperclip control-plane architecture, storage boundaries, runtime flows, and topology |
| Tremor operating corpus | Controlled overlay | local-pc/docs/companies/tremor/wiki/architecture-and-schema.md | Living Tremor company and operating view layered on top of the platform model |
| Deep specs and implementation docs | Supporting or reference | local-pc/doc/architecture/ and local-pc/doc/spec/ | Rich detail that may still hold useful warehouse, memory, or implementation context |
| Plans, contracts, audits, and release governance | Supporting or time-bound | local-pc/doc/plans/ and local-pc/doc/treaaa-*-execution-contract.md | In-flight execution control, audits, and rollout history rather than stable architecture authority |
| Schema and contract packages | Canonical within contract scope | mom/contracts/, paperclip-intake-v1/, and plugin intake contracts | Source of truth for specific wire contracts, intake schemas, and portable package shapes |
| Tremor native runtime and prototype network docs | Canonical stream | tremor-native/docs/runtime-architecture.md | Native host/client runtime stream with current-state architecture plus prototype history |
| Root-level meta and research notes | Reference | findings.md, progress.md, task_plan.md, networking_insights_summary.md, and similar notes | Useful for reconstruction and research but not authoritative for the controlled baseline |
Stream Relationships
The important cross-family relationships are:

- the platform canonical architecture is the primary control-plane runtime and storage system
- the Tremor operating corpus is the live tenant and operating layer for that platform
- the deep specs preserve detail that may need promotion back into the controlled baseline
- the plans, contracts, and audits explain sequencing, rollout, and historical control decisions
- the schema and contract packages define bounded source-of-truth contracts that support multiple streams
- the Tremor native runtime is a separate but related product/runtime stream in the same workspace
- the root-level notes explain why work happened, but they do not overrule controlled documents
Primary Platform Context
Paperclip has five primary subsystems:

- Web control plane in `ui/`, used by board operators to manage companies, agents, issues, approvals, costs, routines, and plugins.
- HTTP API and orchestration layer in `server/src`, which enforces auth, applies policy, coordinates business logic, and records durable evidence.
- Persistence layer in `packages/db`, which defines the canonical first-party schema in PostgreSQL.
- Execution adapters in `packages/adapters`, which bridge the control plane to local or remote agent runtimes.
- CLI control surface in `cli/`, which bootstraps, validates, diagnoses, invokes, and repairs local instances.
What Owns What
| Subsystem | Owns | Depends on | Does not own |
|---|---|---|---|
| ui/ | operator workflows, page composition, local presentation state | /api/*, auth session, shared contracts | persistence, adapter execution, vendor schemas |
| server/src | request boundary, policy, orchestration, derived summaries, execution coordination | packages/db, adapters, auth, storage, analytics | raw UI rendering, vendor product internals |
| packages/db | first-party tables, relations, migrations, defaults | PostgreSQL, Drizzle | route semantics, UI state, vendor metadata |
| packages/adapters | adapter contracts, runtime bridging, output capture | server orchestration, runtime environment | company governance model, first-party business logic |
| cli/ | bootstrap, diagnostics, launch, repair, exports/imports | API/server config, host environment, db state | browser UI, canonical entity ownership |
Data Ownership And Storage
The storage model has three layers:

- Canonical first-party state in PostgreSQL
- Derived and high-volume analytical facts in ClickHouse
- Vendor-owned product stores outside the first-party boundary
- PostgreSQL is canonical for Paperclip business entities, runtime state, audit data, plugin metadata, evaluation state, and memory/coordination records.
- ClickHouse is canonical for high-volume telemetry, warehouse rollups, and observability marts derived from control-plane activity.
- Vendor tools keep their own product stores even when they share infra with Paperclip.
- Plugin mappings are first-party; plugin source-system records are not.
- Object storage may hold artefact bytes, but Paperclip owns only the references, manifests, and lifecycle metadata it stores.
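The three-layer ownership rules above can be sketched as a single routing decision. This is illustrative only: the record kinds and the function name are assumptions for this sketch, not actual Paperclip code.

```typescript
// Illustrative only: record kinds and names are assumptions, not real Paperclip code.
type StorageLayer = "postgres" | "clickhouse" | "vendor" | "object-storage";

type RecordKind =
  | "business-entity" // companies, agents, issues, approvals
  | "runtime-state" // heartbeat runs, sessions, audit rows
  | "telemetry-fact" // high-volume events and warehouse rollups
  | "vendor-record" // rows owned by a vendor product store
  | "artefact-bytes"; // raw artefact content

// Canonical rows go to PostgreSQL; derived high-volume facts go to
// ClickHouse; vendor rows and raw bytes stay outside the first-party schema.
function canonicalStore(kind: RecordKind): StorageLayer {
  switch (kind) {
    case "business-entity":
    case "runtime-state":
      return "postgres";
    case "telemetry-fact":
      return "clickhouse";
    case "vendor-record":
      return "vendor";
    case "artefact-bytes":
      return "object-storage";
  }
}
```

The useful property of writing the rule down once is that it makes "Paperclip owns only the references" checkable: nothing routes vendor records or artefact bytes into the first-party store.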
End-To-End Data Flows
The control plane is easiest to reason about as a small set of repeatable flow families.

1. Identity And Access Flow
Board users and agents enter through different credentials, but both resolve into the same policy boundary in the API layer.

Primary tables:

- `user`, `session`, `account`, `verification`
- `board_api_keys`, `cli_auth_challenges`
- `instance_user_roles`, `company_memberships`
- `principal_permission_grants`, `invites`, `join_requests`
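A minimal sketch of that shared boundary, assuming hypothetical credential and principal shapes (the real table-backed types are not shown in this document):

```typescript
// Illustrative sketch: types and names here are assumptions, not the real API-layer code.
type Credential =
  | { kind: "session"; userId: string } // board user via `session`
  | { kind: "api-key"; keyId: string; agentId: string }; // agent via `board_api_keys`

interface Principal {
  principalId: string;
  source: "board-user" | "agent";
}

// Different entry credentials collapse into one principal shape, so every
// downstream policy check sees the same boundary regardless of who called.
function resolvePrincipal(cred: Credential): Principal {
  switch (cred.kind) {
    case "session":
      return { principalId: cred.userId, source: "board-user" };
    case "api-key":
      return { principalId: cred.agentId, source: "agent" };
  }
}
```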
2. Board Read Flow
Board pages are projections of first-party state plus selected derived summaries. This is the common path behind dashboards, inbox counts, run histories, costs, and operational summaries.

3. Board Mutation Flow
Board mutations always go through the API policy boundary before they become durable rows. This covers changes to companies, agents, projects, goals, issues, approvals, routines, secrets, plugin settings, and instance settings.

4. Execution And Heartbeat Flow
Heartbeat execution is the most important cross-cutting data flow in the system. It connects work assignment, runtime invocation, run-scoped mutations, evidence capture, and downstream quality loops.

This flow touches:

- `heartbeat_runs`, `heartbeat_run_events`
- `issue_active_executions`
- `run_file_writes`, `run_output_artifacts`
- `activity_log`
- `agent_task_sessions`, `agent_runtime_state`, `agent_wakeup_requests`
- issue, approval, document, asset, and work-product families when the run mutates business state
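One way to picture this flow is as a projection over an append-only run-event stream. The event and field names below are assumptions layered on the table names above, not the real schema:

```typescript
// Illustrative sketch: event types and shapes are assumptions, not the real tables.
interface HeartbeatRunEvent {
  runId: string;
  seq: number;
  type: "started" | "file-write" | "output-artifact" | "completed";
  payload?: Record<string, unknown>;
}

// A run is an append-only stream of events; evidence tables such as
// run_file_writes and run_output_artifacts behave like projections of it.
function projectEvidence(events: HeartbeatRunEvent[]) {
  return {
    fileWrites: events.filter((e) => e.type === "file-write").length,
    artifacts: events.filter((e) => e.type === "output-artifact").length,
    completed: events.some((e) => e.type === "completed"),
  };
}
```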
5. Quality, Evaluation, And Memory Flow
Execution evidence does not stop at run completion. It feeds cost accounting, performance review, evaluation queues, and MOM coordination. This is the loop that turns agent activity into governance and improvement:

- cost and finance facts quantify spend
- performance tables summarize quality and intervention signals
- evaluation tables attach judgment and curation
- MOM tables record coordination and higher-level memory
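As a hedged sketch of the cost leg of this loop, assuming a hypothetical per-token rate and fact shape (the real cost and finance tables are not described here):

```typescript
// Illustrative sketch: the fact shape and rate are assumptions, not the real
// cost or finance tables.
interface RunUsage {
  runId: string;
  inputTokens: number;
  outputTokens: number;
}

// Execution evidence feeds cost accounting: usage rows are folded into a
// spend figure (in micro-USD, to keep the arithmetic in integers) that a
// finance fact table could record per run.
function costFact(usage: RunUsage, ratePerTokenMicroUsd: number) {
  const totalTokens = usage.inputTokens + usage.outputTokens;
  return {
    runId: usage.runId,
    totalTokens,
    costMicroUsd: totalTokens * ratePerTokenMicroUsd,
  };
}
```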
6. Plugin And Integration Flow
Plugins extend the control plane without moving ownership out of first-party tables. The first-party system owns:

- installation and configuration
- per-company settings
- mappings from first-party entities to source-system identifiers
- job scheduling and job-run state
- webhook receipts and operational logs
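The ownership split can be illustrated with a hypothetical mapping row; the field names are assumptions, not the real schema:

```typescript
// Illustrative sketch: a first-party mapping row references a source-system id
// without owning the vendor record itself. Field names are assumptions.
interface PluginEntityMapping {
  companyId: string;
  firstPartyEntity: { table: string; id: string }; // owned by packages/db
  sourceSystem: string; // e.g. a vendor product
  sourceId: string; // opaque vendor identifier, never interpreted here
}

// The mapping is first-party; the vendor record behind sourceId is not, so
// nothing in this shape copies or interprets vendor-owned fields.
function mappingKey(m: PluginEntityMapping): string {
  return `${m.sourceSystem}:${m.sourceId}`;
}
```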
7. Deploy, Ingress, And Observability Flow
Deployment mode changes the ingress and auth envelope, not the canonical data model. Important consequences:

- local development, private mode, and public mode share the same first-party schema model
- the intelligence stack is additive around the control plane
- the monitor may summarize external health, but it does not replace vendor UIs or absorb vendor metadata
Adjacent Runtime Stream: Tremor Native
This workspace also contains a native peer-to-peer runtime in `tremor-native/`. It is adjacent, not identical, to the control-plane architecture above.
Current reality:
- the host uses `TremorEngine` as the authoritative reducer
- `MultipeerManager` handles discovery and unreliable `MCSession` transport
- `NetworkTransport` is a host-side TCP listener path, not a full symmetric transport fabric
- clients mirror state from envelopes and request a resync when `revisionMismatch` occurs
- the host currently calculates diffs but broadcasts snapshots
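The revision-mismatch rule can be modelled in a few lines. This sketch is written in TypeScript for brevity even though the runtime itself is native; the envelope shape and names are assumptions:

```typescript
// Illustrative model of the host/client resync rule; not the native code.
interface Envelope {
  revision: number;
  state: string; // the host currently broadcasts full snapshots
}

interface ClientState {
  revision: number;
  state: string;
}

// Apply an envelope if it is the next revision; drop stale envelopes; on a
// revision gap, keep the old state and flag that a resync is needed.
function applyEnvelope(
  client: ClientState,
  env: Envelope,
): { client: ClientState; resyncNeeded: boolean } {
  if (env.revision <= client.revision) {
    return { client, resyncNeeded: false }; // stale or duplicate: ignore
  }
  if (env.revision === client.revision + 1) {
    return {
      client: { revision: env.revision, state: env.state },
      resyncNeeded: false,
    };
  }
  return { client, resyncNeeded: true }; // revision mismatch: request full snapshot
}
```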
See `tremor-native/docs/runtime-architecture.md` for the code-grounded flow. Do not treat the older mesh-network and router documents as canonical runtime architecture unless they are updated to match current code.
Controlled Vs Reference Material
The authoritative classification now lives in the Architecture Artefact Register. Use the summary below for fast orientation.

Treat these as controlled architecture sources:

- `local-pc/docs/start/architecture.md`
- `local-pc/docs/start/data-model.md`
- `local-pc/docs/api/overview.md`
- `local-pc/docs/api/schema-map.md`
- `local-pc/docs/deploy/overview.md`
- `local-pc/docs/deploy/runtime-services.md`
- `local-pc/docs/adapters/overview.md`
- `local-pc/docs/architecture-records/overview.md`
- `local-pc/docs/architecture-records/artifact-register.md`
- `local-pc/docs/architecture-records/master-document-control.md`
- `tremor-native/docs/runtime-architecture.md`
Treat these as reference material:

- older deep-dive warehouse docs under `local-pc/doc/architecture/`
- company operating wiki overlays under `local-pc/docs/companies/tremor/`
- root-level networking notes
- `tremor-native/network_architecture.md`
- `tremor-native/packet_failover_logic.md`
- `tremor-native/mesh_network_prototype/`
- `tremor-native/new_mesh_network_prototype/`
- root-level meta and research notes
Boundary Rules
These boundaries should remain stable:

- `ui/` should not learn persistence or vendor-schema rules.
- `server/src` should stay the policy and orchestration boundary.
- `packages/db` should remain the canonical authority for first-party row ownership.
- `packages/adapters` should remain execution bridges rather than business-logic owners.
- plugin mappings should not be confused with source-system ownership.
- vendor product stores should stay outside the first-party schema, even when queried, mirrored, or colocated.
- deployment topology may change ingress and protection, but should not change canonical data ownership.
Open Questions
The structural model is clear, but some product decisions remain intentionally open:

- how much external-tool state should be summarized in first-party pages versus deep-linked
- how much warehouse data should be promoted back into first-party operational summaries
- whether hosted operator workflows should remain browser-first with CLI support or become equally CLI-native
- when the monorepo can be cleanly split into platform/runtime and tenant/company repos without breaking the architecture contracts recorded in ADRs