Paperclip Recovery Playbook

Use this when Paperclip is broken badly enough that the normal operator flow cannot recover it. Start with the layer that is actually failing:
  • local dev loop
  • Docker/base compose stack
  • hosted ingress / Cloudflare tunnel
  • plugin or data restore
For the canonical ops set, see docs/deploy/overview.md.

1. Local Dev Reset

Use this when the embedded database, watcher state, or startup loop is wedged.
pnpm dev:stop
killall -9 node
sleep 3
rm -rf ~/.paperclip/instances/default/db
pnpm dev &
Wait about 15 seconds, then verify:
curl http://localhost:3100/api/health
Expected shape:
  • status: ok
  • bootstrapStatus: ready
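
The wait-and-verify step can be scripted instead of eyeballed. A minimal sketch, assuming the health endpoint returns compact JSON containing the two fields above; the helper names are illustrative, not part of the Paperclip CLI:

```shell
# is_ready: check a raw health-response body for the expected shape.
# Substring matching only, so it assumes compact JSON as returned by the API.
is_ready() {
  case "$1" in *'"status":"ok"'*) ;; *) return 1 ;; esac
  case "$1" in *'"bootstrapStatus":"ready"'*) return 0 ;; esac
  return 1
}

# wait_for_ready: poll the local health endpoint for up to 30 seconds.
wait_for_ready() {
  for _ in $(seq 1 30); do
    if is_ready "$(curl -fsS "${1:-http://localhost:3100/api/health}" 2>/dev/null)"; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Run `wait_for_ready` after `pnpm dev &` instead of a fixed sleep; it returns non-zero if the server never reaches bootstrap readiness.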

2. Restore Company Data

After a database reset the Tremor company is gone. Restore it:
pnpm paperclipai company import docs/companies/tremor

3. Reinstall Plugins

The server now attempts to restore plugin registry metadata from plugins-registry.backup.json on startup when the backup file exists. After a full DB reset:
  1. restart the server once and check whether plugins/configs were restored automatically
  2. if a plugin package is missing locally or the backup file does not exist, reinstall it from:
Settings → Plugins → Available Plugins

Common example keys:

Plugin                 Key
Agent Skills           tremor.agent-skills
Company Intake         tremor.company-intake
Project Flight Plan    tremor.project-flight-plan
Role Skill Heatmap     role-skill-heatmap
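
After the automatic restore, missing keys can be checked against the backup file directly. A sketch, assuming the backup JSON mentions each registered plugin key verbatim; the function name and the commented-out path are illustrative:

```shell
# check_backup_keys: print any expected plugin key absent from the backup file.
# Usage: check_backup_keys <backup.json> <key> [<key> ...]
check_backup_keys() {
  backup="$1"; shift
  for key in "$@"; do
    grep -q "$key" "$backup" 2>/dev/null || echo "missing from backup: $key"
  done
}

# Example (backup path is an assumption; adjust to your instance layout):
# check_backup_keys ~/.paperclip/instances/default/plugins-registry.backup.json \
#   tremor.agent-skills tremor.company-intake \
#   tremor.project-flight-plan role-skill-heatmap
```

Any key it prints needs a manual reinstall from Available Plugins.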

4. Port Conflicts

If the server will not start because 3100 is taken:
lsof -ti :3100 | xargs kill -9
rm -f ~/.paperclip/instances/default/db/postmaster.pid
pnpm dev &
If you are using Docker or compose, also check whether an old container is still holding the port.
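
Before killing blindly, it can help to see who actually holds the port. A sketch covering both the host-process and container cases; the function name is illustrative:

```shell
# port_holder: list host processes (and, if Docker is available, containers)
# bound to a port, so the right thing gets killed.
port_holder() {
  port="${1:-3100}"
  # Host side: PID and process name for each holder of the port.
  for pid in $(lsof -ti ":$port" 2>/dev/null); do
    ps -o pid=,comm= -p "$pid"
  done
  # Container side: running containers publishing the port (silent without Docker).
  docker ps --filter "publish=$port" --format '{{.ID}} {{.Names}}' 2>/dev/null || true
}
```

If the output names a container rather than a node process, use `docker compose down` instead of `kill -9`.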

5. Watcher Missed Changes

The tsx watcher can miss some server-side TypeScript edits. Force a full restart:
pnpm dev:stop
killall -9 node
sleep 3
pnpm dev &

6. Docker / Compose Recovery

For the base stack:
docker compose down
docker compose up -d
For the hosted monitor overlay:
./scripts/intelligence-stack.sh down
./scripts/intelligence-stack.sh up
If the hosted monitor is broken and cloudflared is involved, restart the overlay first, then check the tunnel connector health.

7. Hosted Ingress Recovery

Use this sequence when public hostnames return errors but local origin is healthy:
  1. Confirm obs-proxy can serve the hostname locally.
  2. Confirm cloudflared is running and the tunnel connector is healthy.
  3. Confirm the hostname is published on the tunnel.
  4. Confirm the DNS record is a proxied CNAME to the tunnel target.
  5. Confirm the Cloudflare Access app matches the hostname and has an Allow policy.
If public requests return 404/openresty and cloudflared sees no traffic, treat it as a DNS or hostname-binding issue before touching the application.
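
Steps 1-4 of the checklist can be sketched as commands. HOST, TUNNEL, and the obs-proxy port are placeholders, not canonical names; `cloudflared tunnel info` covers both connector health and published hostnames, and step 5 (the Access app) still needs the Cloudflare dashboard:

```shell
# ingress_check: run the local half of the ingress checklist for one hostname.
# Arguments (hostname, tunnel name, obs-proxy port) are illustrative.
ingress_check() {
  host="$1"; tunnel="$2"; proxy_port="${3:-8080}"
  # 1. Origin: can obs-proxy serve the hostname locally?
  curl -sS -o /dev/null -w "origin: %{http_code}\n" \
    -H "Host: $host" "http://127.0.0.1:$proxy_port/" || true
  # 2-3. Tunnel: connector health and published hostnames.
  cloudflared tunnel info "$tunnel" || true
  # 4. DNS: the record should be a proxied CNAME to the tunnel target.
  dig +short CNAME "$host" || true
}
```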

Root Cause

The embedded PostgreSQL process is fragile under repeated fast restarts. Each pnpm dev:stop + pnpm dev cycle can leave orphaned postmaster.pid lock files or stale node processes holding ports. The hosted monitor has a separate failure mode: tunnel health or hostname publication can drift independently of the local origin. That is why the recovery steps above separate local reset, compose restart, and Cloudflare ingress validation.
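
The stale-lock failure mode can be detected before it blocks a restart. A sketch, relying on standard PostgreSQL behavior (the PID is the first line of postmaster.pid); the function name is illustrative:

```shell
# stale_pidfile: report a lock file whose recorded process is no longer running.
stale_pidfile() {
  pidfile="$1"
  [ -f "$pidfile" ] || return 1            # no lock file: nothing to clean
  pid="$(head -n 1 "$pidfile")"
  if kill -0 "$pid" 2>/dev/null; then
    return 1                               # process alive: the lock is live, leave it
  fi
  echo "stale lock: $pidfile (pid $pid not running)"
}

# Example: only remove the lock when it is actually stale.
# stale_pidfile ~/.paperclip/instances/default/db/postmaster.pid \
#   && rm -f ~/.paperclip/instances/default/db/postmaster.pid
```

Guarding the `rm -f` this way avoids deleting a live lock out from under a running embedded Postgres.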

Plugin Restore Boundary

Plugin registry restore is no longer hypothetical, but it is not magic:
  • it restores registry/config/settings records from the instance backup file
  • it still depends on package availability and the presence of that backup file
  • after a destructive reset, verify plugin presence rather than assuming every plugin is fully restored