The SaaS-replacement platform
Shared system for teams and agents. Official CLI + GraphQL surface.
Why this exists
Move Big Rocks was rebuilt because I needed one owned instance to run the DemandOps portfolio across multiple products, brands, and countries, instead of paying for separate SaaS tools, accounts, licenses, identities, and hidden workflows for each one. I also realized that building separate internal equivalents for support, forms, analytics, error tracking, and other operational needs would just create a new kind of sprawl: isolated vibe-coded apps, each repeating deployments, secure endpoint exposure, TLS certificates, database management, auth, and integrations. So I used the earlier 2010 platform model as leverage, pushed the workspace model further, and built the extensions I needed on top of one shared operational base. I also needed a system that agents could use securely through one CLI and one controlled access model. The result replaces roughly EUR 10,000 per year of SaaS spend across my own ventures and gives other teams and agents the same path.
The operating reason
Running a portfolio of products across brands and countries means the same operational needs appear over and over: customer support, intake forms, analytics, error tracking, knowledge management, and automation. Buying separate SaaS tools for each brand means separate accounts, separate licenses, separate vendor relationships, separate identity integrations, separate security reviews, and separate hidden workflows. That cost and complexity compounds with every new product. It also makes secure agentic operation much harder, because an agent ends up hopping across many vendors and auth models instead of using one approved contract. I needed one substrate I could run once and reuse across the portfolio.
The earlier platform was built to support multiple organizations and departments rather than a single hard-coded workflow. A key design concern was letting cases move between people inside a department and across departments while preserving the same shared record. It was used to support students with all kinds of issues, not just IT support, across departments including student services. The important part was never brochureware; it was the operating model behind intake, routing, shared context, and durable resolution.
The key realization
I could have built a support tool, a forms tool, an analytics tool, an error-tracking tool, and a series of internal helpers independently. That would still have left me with a mess, only now it would be my mess: vibe-coded app sprawl instead of SaaS sprawl. Each tool would need its own deployment path, secure endpoint exposure, TLS certificates, database management, auth wiring, admin UI, background jobs, secrets, monitoring, and upgrade story. Even if each individual app looked cheaper than SaaS, the operating burden would still compound. The leverage had to come from one shared operational substrate, not from building more standalone replacements faster.
The real problem was not only vendor cost. The real problem was repeated operational fragmentation, whether the tool came from a vendor or from my own code.
Deployments, routed endpoints, TLS, database ownership, auth, audit, background work, and event delivery needed to be solved once and reused, not rebuilt for every capability.
That is why product depth lives in bounded extensions instead of isolated apps. A new capability should inherit the same core routing, eventing, storage, UI shell, and review model.
Agents become more useful when they can work through one visible control plane and one operating model instead of brittle browser choreography across many SaaS products and homemade apps.
Production history
The original system was used in production for more than 10 years across campuses around Australia, especially in the tertiary education sector. It became real infrastructure for supporting students with all kinds of issues across many departments, including student services, while moving work between teams without losing context. That matters because the current product inherits lessons from real service operations rather than relying on fresh theory alone.
Why rewrite now
After the primary customer was acquired and moved onto ServiceNow, the earlier system stopped being the active product. What remained was a proven model sitting on an older codebase. AI-assisted migration made it practical to carry that Django and Python system forward into Go instead of treating it as a dead end. The point was not only to modernize the implementation. It was to give one owned instance a cleaner shared workspace model, a tighter agent-operable contract, and explicit extensions for the things I actually needed, while turning routing, eventing, endpoint exposure, storage, and deployment concerns into shared platform capability instead of repeated app-by-app boilerplate.
Go offers a tighter deployment model, a single binary, strong typing, and a codebase that is easier to evolve with compiler feedback in the loop.
Operational work is fragmented across support tools, forms tools, analytics tools, recruiting tools, ticketing systems, inboxes, and internal glue. The important context is fragmented too: decks, chats, local Markdown, prompts, runbooks, and personal folders.
The rewrite tightens the architecture, pushes the shared workspace model further, makes extensions explicit, and turns operational leverage into part of the product instead of leaving deployments, certificates, endpoint wiring, and event plumbing as repeated side work.
Agent operability
Move Big Rocks is designed so a capable agent can operate it efficiently through the same approved surfaces humans use. That was not an abstract product goal. Part of the rebuild was realizing that scattered SaaS products were a poor operating model for secure agentic work. Give the agent the Move Big Rocks repo or the instance repo, the current `mbr` CLI and machine-readable contract, and the relevant workspace and team context, and it should be able to work through the system without inventing a second control plane.
Agents work through one `mbr` CLI with `--json` on every command, one GraphQL API, machine-readable bootstrap endpoints, and explicit workspace and team context.
A capable agent should be able to deploy and configure Move Big Rocks, work conversations and cases, retrieve and publish knowledge, submit forms, install and configure extensions, and help teams author private extensions.
Move Big Rocks is where records live, permissions are enforced, approvals happen, and audit trails are recorded. Agents operate through that same system instead of around it.
A user should be able to tell an agent “create me a Move Big Rocks instance repo and deploy it to one Ubuntu VPS I control” and have the agent handle most of the work through documented surfaces.
Builder context
Adrian McPhee built the original platform and has served as CTO at major firms including LeasePlan and Bol. He now runs Move Big Rocks operationally for the DemandOps portfolio, including products such as TuinPlan, across multiple brands and countries. The same pressure kept showing up at every scale: duplicated SaaS subscriptions, duplicated vendor relationships, duplicated identity integrations, duplicated hidden workflows, and weak footing for secure agentic work when each brand needed its own tool stack.
Move Big Rocks is not a side project. It runs the operational infrastructure for a real portfolio of products across brands and countries.
The rebuilt platform plus its extensions replace roughly EUR 10,000 per year of SaaS spend across Adrian McPhee's ventures, which helps extend runway instead of funding duplicated tool stacks.
The product is informed by experience running technology in larger businesses, not only by startup-era tooling preferences.
The extension model and first-party extensions were opened up after being refined in real use. Some startups the founder advises are now adopting the same model because repeated SaaS sprawl per brand is a common problem.
How to evaluate it
The codebase is new. The lessons are not. That is the right lens for evaluating the product.
References
Each link goes to the next authoritative page, reference, or support surface.
Move Big Rocks
Start from /agents, use /docs/cli as the official product tour, inspect /resources for source and proof, and review /security before making deployment or data-handling decisions.