Move Big Rocks: the SaaS-replacement platform. A shared system for teams and agents, with an official CLI and GraphQL surface.

Why this exists

Why I rebuilt this for the DemandOps portfolio.

Move Big Rocks was rebuilt because I needed one owned instance to run the DemandOps portfolio across multiple products, brands, and countries, instead of paying for separate SaaS tools, accounts, licenses, identities, and hidden workflows for each one. I also realized that building separate internal equivalents for support, forms, analytics, error tracking, and other operational needs would just create a new kind of sprawl: isolated vibe-coded apps, each repeating deployment work, secure endpoint exposure, TLS certificates, database management, auth, and integrations. So I used the earlier 2010 platform model as leverage, pushed the workspace model further, and built the extensions I needed on top of one shared operational base. I also needed a system that agents could use securely through one CLI and one controlled access model. The result now replaces roughly EUR 10,000 per year of SaaS spend across my own ventures and gives other teams and agents the same path.

Agent-first route: Start at /agents, inspect /docs/cli, and drop into this page when you need proof, detail, or rollout guidance.

The operating reason

The rebuild was driven by repeated cross-brand operational pressure, not abstract ambition.

Running a portfolio of products across brands and countries means the same operational needs appear over and over: customer support, intake forms, analytics, error tracking, knowledge management, and automation. Buying separate SaaS tools for each brand means separate accounts, separate licenses, separate vendor relationships, separate identity integrations, separate security reviews, and separate hidden workflows. That cost and complexity compounds with every new product. It also makes secure agentic operation much harder, because an agent ends up hopping across many vendors and auth models instead of using one approved contract. I needed one substrate I could run once and reuse across the portfolio.

Multi-tenant from the start

The earlier platform was built to support multiple organizations and departments rather than a single hard-coded workflow.

Cases across boundaries

A key design concern was letting cases move between people inside a department and across departments while preserving the same shared record.

Broader than IT support

It supported students with all kinds of issues, not just IT problems, across departments including student services.

Service model, not website theater

The important part was never brochureware. It was the operating model behind intake, routing, shared context, and durable resolution.

The key realization

Replacing SaaS one app at a time would have recreated the same fragmentation.

I could have built a support tool, a forms tool, an analytics tool, an error-tracking tool, and a series of internal helpers independently. That would still have left me with a mess, only now it would be my mess: vibe-coded app sprawl instead of SaaS sprawl. Each tool would need its own deployment path, secure endpoint exposure, TLS certificates, database management, auth wiring, admin UI, background jobs, secrets, monitoring, and upgrade story. Even if each individual app looked cheaper than SaaS, the operating burden would still compound. The leverage had to come from one shared operational substrate, not from building more standalone replacements faster.

Not just buy vs build

The real problem was not only vendor cost. The real problem was repeated operational fragmentation, whether the tool came from a vendor or from my own code.

Shared operational substrate

Deployments, routed endpoints, TLS, database ownership, auth, audit, background work, and event delivery needed to be solved once and reused, not rebuilt for every capability.

Extensions on one base

That is why product depth lives in bounded extensions instead of isolated apps. A new capability should inherit the same core routing, eventing, storage, UI shell, and review model.

Better agent leverage

Agents become more useful when they can work through one visible control plane and one operating model instead of brittle browser choreography across many SaaS products and homemade apps.

Production history

The earlier platform ran long enough to teach real lessons.

The original system ran in production for more than ten years at campuses across Australia, especially in the tertiary education sector. It became real infrastructure for supporting students with all kinds of issues across many departments, including student services, while moving work between teams without losing context. That matters because the current product inherits lessons from real service operations rather than relying on fresh theory alone.

  • It had to support real operational handoffs, not just demo scenarios.
  • It had to work for student-facing issues beyond IT support.
  • It had to keep context intact when work crossed teams and departments.
  • It had to stay useful over years, which forced the model to be more durable than a fashionable point tool.
  • The later move to ServiceNow came from a company change, not because the underlying operating problem disappeared.

Why rewrite now

The model survived. The old codebase did not need to stay the final implementation.

After the primary customer was acquired and moved onto ServiceNow, the earlier system stopped being the active product. What remained was a proven model sitting on an older codebase. AI-assisted migration made it practical to carry that Django and Python system forward into Go instead of treating it as a dead end. The point was not only to modernize the implementation. It was to give one owned instance a cleaner shared workspace model, a tighter agent-operable contract, and explicit extensions for the things I actually needed, while turning routing, eventing, endpoint exposure, storage, and deployment concerns into shared platform capability instead of repeated app-by-app boilerplate.

Why Go

Go offers a tighter deployment model, a single binary, strong typing, and a codebase that is easier to evolve with compiler feedback in the loop.

Why now

Operational work is fragmented across support tools, forms tools, analytics tools, recruiting tools, ticketing systems, inboxes, and internal glue. The important context is fragmented too, across decks, chats, local Markdown, prompts, runbooks, and personal folders.

What changed

The rewrite tightens the architecture, pushes the shared workspace model further, makes extensions explicit, and turns operational leverage into part of the product instead of leaving deployments, certificates, endpoint wiring, and event plumbing as repeated side work.

Agent operability

One goal of the rebuild was to make the system legible to capable agents.

Move Big Rocks is designed so a capable agent can operate it efficiently through the same approved surfaces humans use. That was not an abstract product goal. Part of the rebuild was realizing that scattered SaaS products were a poor operating model for secure agentic work. Give the agent the Move Big Rocks repo or the instance repo, the current `mbr` CLI and machine-readable contract, and the relevant workspace and team context, and it should be able to work through the system without inventing a second control plane.

One explicit contract

Agents work through one `mbr` CLI with `--json` on every command, one GraphQL API, machine-readable bootstrap endpoints, and explicit workspace and team context.
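As an illustration of what that contract enables, here is a minimal sketch of how an agent-side script might consume `--json` output from the CLI. Only the `mbr` binary name and the `--json` flag come from the product description; the `case list` subcommand and the payload shape are illustrative assumptions, not the real contract.

```python
import json

# Hypothetical example of what `mbr case list --json` might emit.
# The subcommand and field names are assumptions for illustration;
# consult the machine-readable contract for the real schema.
sample_output = """
{
  "cases": [
    {"id": "case-101", "status": "open",     "team": "support"},
    {"id": "case-102", "status": "resolved", "team": "support"}
  ]
}
"""

def open_case_ids(raw: str) -> list[str]:
    """Parse CLI JSON output and return the ids of open cases."""
    payload = json.loads(raw)
    return [c["id"] for c in payload["cases"] if c["status"] == "open"]

print(open_case_ids(sample_output))  # ['case-101']
```

The point is the shape of the loop, not the specific fields: every command returning structured JSON means an agent can parse, filter, and act without scraping human-oriented output.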

Practical operating scope

A capable agent should be able to deploy and configure Move Big Rocks, work conversations and cases, retrieve and publish knowledge, submit forms, install and configure extensions, and help teams author private extensions.
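To make the "submit forms" part of that scope concrete, here is a hedged sketch of the request body an agent might POST to the GraphQL API. Only the existence of one GraphQL API is stated above; the mutation name, argument names, and field selections are illustrative assumptions, not the real schema.

```python
import json

# Hypothetical GraphQL form-submission request. The SubmitForm mutation
# and its arguments are assumptions for illustration; the real schema
# should be taken from the API's own introspection or documentation.
def build_submit_form_request(form_id: str, answers: dict) -> str:
    query = """
    mutation SubmitForm($formId: ID!, $answers: JSON!) {
      submitForm(formId: $formId, answers: $answers) {
        id
        status
      }
    }
    """
    body = {"query": query, "variables": {"formId": form_id, "answers": answers}}
    return json.dumps(body)

payload = build_submit_form_request("intake-42", {"email": "user@example.com"})
print(json.loads(payload)["variables"]["formId"])  # intake-42
```

A standard GraphQL POST body (query plus variables) is all an agent needs to construct, which is exactly why a single typed API is easier to operate against than many vendor-specific surfaces.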

System of record stays shared

Move Big Rocks is where records live, permissions are enforced, approvals happen, and audit trails are recorded. Agents operate through that same system instead of around it.

Concrete expectation

A user should be able to tell an agent “create me a Move Big Rocks instance repo and deploy it to one Ubuntu VPS I control” and have the agent handle most of the work through documented surfaces.

Builder context

The product is shaped by running it operationally, not only by product theory.

Adrian McPhee built the original platform and has served as CTO at major firms including LeasePlan and Bol. He now runs Move Big Rocks operationally for the DemandOps portfolio, including products such as TuinPlan, across multiple brands and countries. The same pressure kept showing up at every scale: duplicated SaaS subscriptions, duplicated vendor relationships, duplicated identity integrations, duplicated hidden workflows, and weak footing for secure agentic work when each brand needed its own tool stack.

Used operationally by the builder

Move Big Rocks is not a side project. It runs the operational infrastructure for a real portfolio of products across brands and countries.

Replacing real SaaS spend

The rebuilt platform and its extensions replace roughly EUR 10,000 per year of SaaS spend across Adrian McPhee's ventures, which helps extend runway instead of funding duplicated tool stacks.

Experience at scale

The product is informed by experience running technology in larger businesses, not only by startup-era tooling preferences.

Becoming transferable

The extension model and first-party extensions were opened up after being refined in real use. Some startups the founder advises are now adopting the same model because repeated SaaS sprawl per brand is a common problem.

How to evaluate it

Evaluate Move Big Rocks as a new implementation of a proven operational model.

The codebase is new. The lessons are not. That is the right lens for evaluating the product.

  • Treat the provenance as a credibility signal, not as a substitute for technical proof.
  • Use `/agents`, `/docs/cli`, `/security`, and `/resources` to inspect the current implementation directly.
  • Use the origin story to understand why the product is shaped around shared context, handoffs, and bounded extensions.

References

Canonical next surfaces.

Each link goes to the next authoritative page, reference, or support surface.

Move Big Rocks

Let agents inspect the CLI-first surface. Let humans decide trust, rollout, and boundaries.

Start from /agents, use /docs/cli as the official product tour, inspect /resources for source and proof, and review /security before making deployment or data-handling decisions.