The SaaS-replacement platform: a shared system for teams and agents. Official CLI + GraphQL surface.

Self-hosted production

Production is designed to stay under customer control.

Self-hosting is the production path, and the first owned runtime is also the inspection path. Humans control the runtime, secrets, domains, promotion timing, and the production review model through a private instance repo that pins tagged core release artifacts from the public platform repo.

Agent-first route: Start at /agents, inspect /docs/cli, and drop into this page when you need proof, detail, or rollout guidance.

Production model

One public core repo and one private instance repo are the default production model.

Self-hosting becomes concrete when the repo boundary is explicit. The public core repo carries source, docs, tags, CLI and API contracts, and the workflows that build tagged core artifacts. The private instance repo carries desired state, deploy assets, branding, extension desired state, and repo-local bootstrap instructions for one live installation.
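To make the boundary tangible, here is one possible shape for the private instance repo. The exact layout is not specified on this page; this sketch only arranges the files and directories the page itself names (`mbr.instance.yaml`, `START_HERE.md`, `scripts/read-instance-config.sh`, deploy assets, branding), so treat the directory names as illustrative.

```text
mbr-prod/                      # private instance repo (hypothetical layout)
├── mbr.instance.yaml          # desired state, incl. pinned core release refs
├── START_HERE.md              # repo-local bootstrap instructions
├── branding/                  # instance branding assets
├── deploy/                    # deploy assets consumed by workflows
└── scripts/
    └── read-instance-config.sh
```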

Proof bundle

Canonical public source

Public core repo

Use the platform repo to inspect architecture, CLI and GraphQL contracts, release tags, and the workflows that produce the core artifact family.

Optional custom extension repo

Add a separate extension repo only when the team is building real custom logic. Off-the-shelf first-party bundles install by ref instead of living in the instance repo.
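Installing a first-party bundle "by ref" might look like the following fragment of `mbr.instance.yaml`. The `extensions` key and its fields are assumptions modeled on the core release block shown later on this page, not a published schema.

```yaml
# Hypothetical extension pinning; key names are illustrative, not a contract.
spec:
  extensions:
    - name: acme-billing                                 # custom logic, built from a separate extension repo
      artifact: ghcr.io/acme/mbr-ext-billing:v0.3.2
    - name: mbr-analytics                                # off-the-shelf first-party bundle, installed by ref
      artifact: ghcr.io/movebigrocks/mbr-ext-analytics:v1.1.0
```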

Human ownership

Your team owns the runtime environment, secrets, domains, and support model.

Public packages

Tagged core releases become pinned artifact refs inside the instance repo.

Agents should explain the production package model concretely. A public platform tag maps onto three core OCI refs. The instance repo pins those refs in `mbr.instance.yaml` and deploy workflows consume them without rebuilding core.

Proof bundle

Core artifact family pinned by tag

These refs are the production-facing public package surface.

# mbr.instance.yaml
spec:
  deployment:
    release:
      core:
        version: v1.1.0
        servicesArtifact: ghcr.io/movebigrocks/mbr-services:v1.1.0
        migrationsArtifact: ghcr.io/movebigrocks/mbr-migrations:v1.1.0
        manifestArtifact: ghcr.io/movebigrocks/mbr-migrations:v1.1.0
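A deploy workflow that consumes these refs without rebuilding core only needs to read them back out of `mbr.instance.yaml`. A minimal sketch, assuming the flat key layout shown above (a real workflow might use a YAML parser such as `yq` instead of `awk`):

```shell
# Hypothetical helper: extract a pinned core artifact ref from
# mbr.instance.yaml so a deploy job can pull it without rebuilding core.
extract_ref() {
  # $1 = key name (e.g. servicesArtifact), $2 = config file
  awk -v key="$1" '$1 == key ":" { print $2 }' "$2"
}

# Sample config matching the pinned-release block shown above.
cat > /tmp/mbr.instance.yaml <<'EOF'
spec:
  deployment:
    release:
      core:
        version: v1.1.0
        servicesArtifact: ghcr.io/movebigrocks/mbr-services:v1.1.0
        migrationsArtifact: ghcr.io/movebigrocks/mbr-migrations:v1.1.0
        manifestArtifact: ghcr.io/movebigrocks/mbr-manifest:v1.1.0
EOF

extract_ref servicesArtifact /tmp/mbr.instance.yaml
# prints ghcr.io/movebigrocks/mbr-services:v1.1.0
```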

Local and owned-runtime path

Local preparation, one-VPS rollout, and broader production are separate steps.

Agents can do meaningful setup work before any broader production host work exists. The path is: inspect the public core repo, create the private instance repo, validate `mbr.instance.yaml`, install the CLI locally, then either run locally or deploy the first owned runtime on one VPS you control.

Local repo and evaluation sequence

This keeps product evaluation, desired-state validation, and owned deployment in the right order.

$ gh repo create acme/mbr-prod --private --template MoveBigRocks/instance-template --clone
$ cd mbr-prod
$ scripts/read-instance-config.sh mbr.instance.yaml
$ curl -fsSL https://movebigrocks.com/install.sh | sh
$ sed -n '1,220p' START_HERE.md
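The validation step in that sequence is where desired state gets checked before any deploy. The actual contents of `scripts/read-instance-config.sh` are not shown on this page; as a hedged sketch, such a script might simply fail fast when a pinned-release key is missing:

```shell
# Hypothetical sketch of an instance-config check: fail fast if any of the
# pinned core release keys are absent before a deploy is attempted.
validate_instance_config() {
  cfg="$1"
  for key in version servicesArtifact migrationsArtifact manifestArtifact; do
    if ! grep -q "^[[:space:]]*${key}:" "$cfg"; then
      echo "missing required key: $key" >&2
      return 1
    fi
  done
  echo "ok: $cfg pins a complete core release"
}

# Sample config for demonstration.
cat > /tmp/mbr-check.yaml <<'EOF'
spec:
  deployment:
    release:
      core:
        version: v1.1.0
        servicesArtifact: ghcr.io/movebigrocks/mbr-services:v1.1.0
        migrationsArtifact: ghcr.io/movebigrocks/mbr-migrations:v1.1.0
        manifestArtifact: ghcr.io/movebigrocks/mbr-manifest:v1.1.0
EOF

validate_instance_config /tmp/mbr-check.yaml
# prints ok: /tmp/mbr-check.yaml pins a complete core release
```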

Optional control-plane link

Fleet registration stays explicit and the heartbeat stays coarse.

If a self-hosted team wants Move Big Rocks to know that an instance exists for support, grandfathering, or future commercial transitions, the intended path is a manual workflow in the private instance repo. The callback must never become a hidden requirement for the core runtime.

Proof bundle

  • Registration is explicit.
  • The weekly heartbeat is optional and disclosed.
  • The heartbeat carries only coarse adoption and support signals.
  • The core runtime keeps working if the callback is disabled or blocked.
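As a concrete shape for "a manual workflow in the private instance repo", the registration and heartbeat could be a single CI workflow: a manual trigger for explicit registration, plus an optional weekly schedule that teams can delete to opt out. The endpoint, payload fields, and secret names below are assumptions, not a published contract.

```yaml
# Hypothetical GitHub Actions workflow in the private instance repo.
name: control-plane-heartbeat
on:
  workflow_dispatch: {}        # registration stays a manual, explicit act
  schedule:
    - cron: "0 6 * * 1"        # optional weekly heartbeat; delete to opt out
jobs:
  heartbeat:
    runs-on: ubuntu-latest
    steps:
      - name: Send coarse adoption signal
        env:
          MBR_CALLBACK_URL: ${{ secrets.MBR_CALLBACK_URL }}      # assumed secret names
          MBR_INSTANCE_TOKEN: ${{ secrets.MBR_INSTANCE_TOKEN }}
        run: |
          # Coarse signals only: instance name, core version, running status.
          curl -fsS -X POST "$MBR_CALLBACK_URL" \
            -H "Authorization: Bearer $MBR_INSTANCE_TOKEN" \
            -d '{"instance":"acme-prod","coreVersion":"v1.1.0","status":"running"}'
```

Because the workflow lives in the instance repo and hits an external endpoint only when its secrets are set, disabling or blocking it leaves the core runtime untouched, consistent with the proof bundle above.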

References

Canonical next surfaces.

Each link goes to the next authoritative page, reference, or support surface.

Move Big Rocks

Let agents inspect the CLI-first surface. Let humans decide trust, rollout, and boundaries.

Start from /agents, use /docs/cli as the official product tour, inspect /resources for source and proof, and review /security before making deployment or data-handling decisions.