How I Built a Heroku-like Platform in 2 Weeks with Claude Code
From internal operations tool to client-facing PaaS in 2 weeks
The Starting Point
I run a consulting agency serving businesses in AI, service, and franchising industries. Like most consultants, I needed a way to manage the chaos - tracking projects, hosting details, client communications, deliverables. So I built a customer portal.
For months, it was exactly that - an internal operations tool. My clients could log in to see their project status, review hosting environments I managed for them, and access documentation. Built with Next.js, Prisma, and Clerk.
Functional. Organized. Exactly what a small consulting shop needs.
But here's the thing about consulting in AI and tech-adjacent industries: my clients kept asking for the same things.
"Can you deploy this ML model API for us?"
"We need an application deployed, but we don't want to manage servers."
"This data pipeline needs to run every night - can you set that up?"
I was already managing infrastructure for half my clients anyway.
The deployments. The cron jobs. The databases.
Why not build that capability directly into the portal I'd already created?
In mid-January 2026, I decided to find out.
Two weeks later, I had a DevOps platform.
Enter Claude Code
Last year I used Claude (Sonnet and Opus) for code completion, review, and planning, but Claude Code changed everything.
Instead of describing requests in IDE chat, I ditched the IDE and went straight for the terminal.
Claude understands the patterns. It can see the multi-tenant architecture I'd already built for managing multiple client organizations.
My Workflow: Think, Plan, Implement
After working intensively with Claude Code over the last couple of months, I've settled into a rhythm that consistently produces good results. It's a three-step process with iteration baked in.
Step 1: Think
Before any code gets written, I ask Claude to think about the feature. Not plan it—just explore the problem space.
"Think about what a software catalog would need. Users should be able to deploy prebuilt apps like WordPress or Ghost without connecting a GitHub repo."
Claude explores the existing codebase, identifies relevant patterns, and surfaces considerations I might not have thought of:
- "The current deployment pipeline assumes a build step. Catalog items would skip that."
- "You have usage metering tied to CodeBuild minutes. Catalog deployments wouldn't trigger builds—how should they be metered?"
- "The ClientApp model expects a GitHub repo. We'd need to support Docker images as a source type."
This thinking phase often reveals the actual complexity of a feature. What sounds simple ("let users deploy WordPress") has tentacles reaching into billing, infrastructure, and data models.
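That last consideration is typical of what the thinking phase surfaces. A minimal sketch of the distinction it was pointing at - hypothetical names, not the actual ClientApp schema:

```typescript
// Hypothetical shape, not the real schema - just the distinction the thinking
// phase surfaced: repo-backed apps get a build step, prebuilt images don't.
type AppSource =
  | { kind: 'github'; repo: string; branch: string } // goes through CodeBuild
  | { kind: 'docker'; image: string; tag: string };  // prebuilt, skips the build
```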
Step 2: Plan
Once the problem space is understood, I ask for a concrete plan. This is where the real iteration happens.
The thinking step has already pulled the relevant information into the context window, so the planning starts primed.
"Plan the implementation for the catalog system."
Claude produces a structured plan: data models, API endpoints, UI components, migration strategy. But the first plan is rarely the final plan. I push back:
"Does this implementation make it harder to add dependency management later? A WordPress site needs MySQL."
Claude revises. The plan now includes a dependencies field in the catalog schema and a hook point for future provisioning logic.
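A rough sketch of the shape that revision implies - field names are illustrative, not the actual schema:

```typescript
// Illustrative only - the point is a dependencies field plus a hook where
// provisioning logic can attach later without reshaping the schema.
interface CatalogDependency {
  service: 'mysql' | 'redis' | 'postgres';
  version: string;
  provisioner?: string; // hook point for future provisioning logic
}

interface CatalogItemSketch {
  slug: string;                      // e.g. "wordpress"
  image: string;                     // prebuilt Docker image
  dependencies: CatalogDependency[]; // e.g. MySQL for WordPress
}
```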
"Consider how Temporal could fit into this architecture for workflow orchestration."
More iteration. I bring my own experience to the table - I've used Temporal before and knew it would be a good fit for reliable, long-running deployments. Claude adapts the plan to include Temporal workflows with compensation activities for rollback.
This back-and-forth might take 20 minutes or an hour depending on feature complexity. But it's cheap iteration. No code has been written yet. Changing direction is easy and I know it's a solid plan because I've reviewed it.
Step 3: Implement
Only after the plan is solid do I ask Claude to implement. By this point, the architecture is clear, edge cases are identified, and the implementation is almost mechanical.
"Implement the catalog deployment workflow according to the plan."
Because Claude has already thought through the problem and we've iterated on the plan together, the generated code tends to fit the first time. It follows existing patterns. It handles the edge cases we discussed. It integrates with the systems we identified during planning.
Why This Works
The three-step process prevents a common failure mode: generating code that works in isolation but doesn't fit the larger system.
When you skip straight to implementation, you get code that:
- Duplicates existing utilities
- Uses different patterns than the rest of the codebase
- Misses integration points
- Creates technical debt you'll pay for later
The thinking phase catches architectural mismatches early. The planning phase catches integration issues. By the time you're implementing, the hard problems are solved.
Real Example: Temporal Integration
When I wanted to add reliable workflow orchestration for catalog deployments, the conversation went like this:
Think: "Consider how we'd make catalog deployments reliable. Right now if the server restarts mid-deployment, the process is lost."
Claude identified the problem and suggested replacing the webhook-based approach with a queue system for better reliability. The thinking was sound - queues would decouple the request from the processing and survive restarts.
But I had experience with Temporal from previous projects and knew it offered something queues alone don't: built-in workflow state, automatic retries with backoff, and native support for long-running processes with human-readable history.
Plan (with iteration):
"Consider how Temporal could be used here instead of a queue"
Claude adapted. The plan shifted from a queue-based approach to Temporal workflows. But I kept pushing:
"What happens when we scale horizontally? Multiple Next.js instances all running workers?"
Revised plan: separate worker process, deployed as its own container, connecting to Temporal Cloud. Single worker pool handles all workflow execution regardless of how many web instances are running.
"How does this affect local development? I don't want to run Temporal server locally."
Another revision: Docker Compose configuration for local dev, with the worker connecting to remote Temporal Cloud using development credentials.
Implement: With the architecture settled, implementation was straightforward. Worker process, workflow definitions, activity functions, SSE status updates—all generated to match the plan we'd refined together.
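For flavor, here's a minimal sketch of the workflow shape this produces - a deployment with a compensation step, written with the Temporal TypeScript SDK. The activity names are illustrative stand-ins, not the actual 7-phase workflow:

```typescript
// workflows.ts - minimal sketch, assuming activities defined elsewhere in the
// worker (provisionDependencies, deployCatalogApp, rollbackDeployment are
// hypothetical names).
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { provisionDependencies, deployCatalogApp, rollbackDeployment } =
  proxyActivities<typeof activities>({
    startToCloseTimeout: '10 minutes',
    retry: { maximumAttempts: 3 }, // automatic retries with backoff
  });

export async function catalogDeploymentWorkflow(appId: string): Promise<void> {
  // Dependencies (e.g. MySQL for WordPress) are provisioned before the app.
  await provisionDependencies(appId);

  try {
    await deployCatalogApp(appId);
  } catch (err) {
    // Compensation: tear down the partial deployment, then surface the failure.
    await rollbackDeployment(appId);
    throw err;
  }
}
```

Because Temporal persists every step, a restarted worker picks the workflow back up where it left off - exactly the failure mode the thinking step flagged.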
This is the collaboration that works: I bring domain knowledge and technology preferences. Claude brings the ability to hold the relevant bits of the codebase in context and generate consistent implementations. Neither of us could do it as well alone.
Two Weeks of Feature Development
Week 1: Client Apps and Cron Jobs
Client Apps: The first major feature was letting my clients deploy their own applications - without filing a support ticket every time.
The thinking phase revealed that my existing DeploymentProject and DeploymentEnvironment models could be extended rather than replaced. The planning phase identified that I needed a new ClientApp facade to present a simpler interface to end users while the complexity lived underneath.
Claude Code generated the entire features/client-apps/ module:
- AWS CodeBuild for CI/CD pipelines
- ECR for container storage
- Kubernetes manifests via cdk8s (originally it suggested string templates, so I gently suggested cdk8s for sanity - see the sketch below)
- Real-time build logs streamed to the client dashboard
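The cdk8s piece is worth a quick illustration: manifests defined in code instead of string-templated YAML. A minimal sketch, assuming the standard bindings generated by `cdk8s import` - the chart name and image URI are placeholders, not the real module:

```typescript
import { App, Chart } from 'cdk8s';
import { Construct } from 'constructs';
// KubeDeployment comes from the bindings generated by `cdk8s import k8s`
import { KubeDeployment } from './imports/k8s';

class ClientAppChart extends Chart {
  constructor(scope: Construct, id: string, image: string) {
    super(scope, id);

    new KubeDeployment(this, 'deployment', {
      spec: {
        replicas: 1,
        selector: { matchLabels: { app: id } },
        template: {
          metadata: { labels: { app: id } },
          spec: {
            containers: [
              { name: id, image, ports: [{ containerPort: 3000 }] },
            ],
          },
        },
      },
    });
  }
}

const app = new App();
// Placeholder ECR image URI - the real one comes from the build pipeline.
new ClientAppChart(app, 'client-app', '<ecr-image-uri>');
app.synth(); // writes dist/client-app.k8s.yaml
```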
Cron Jobs: My AI and data-focused clients all needed scheduled tasks. ETL pipelines. Model retraining. Report generation.
During thinking, Claude identified that the same build pipeline could target Kubernetes CronJobs instead of Deployments. Planning added the details I'd asked about:
"What about timezone handling? My clients operate across regions."
The implementation included timezone-aware scheduling, execution history, and automatic state sync between the portal and Kubernetes.
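A minimal sketch of what a timezone-aware CronJob looks like through the same cdk8s approach - names are illustrative, and the timeZone field needs a reasonably recent Kubernetes version:

```typescript
import { Chart } from 'cdk8s';
import { Construct } from 'constructs';
// KubeCronJob comes from the bindings generated by `cdk8s import k8s`
import { KubeCronJob } from './imports/k8s';

export class NightlyEtlChart extends Chart {
  constructor(scope: Construct, id: string, image: string) {
    super(scope, id);

    new KubeCronJob(this, 'nightly-etl', {
      spec: {
        schedule: '0 2 * * *',       // 02:00 every night...
        timeZone: 'America/Chicago', // ...in the client's local timezone
        jobTemplate: {
          spec: {
            template: {
              spec: {
                restartPolicy: 'Never',
                containers: [{ name: 'etl', image }],
              },
            },
          },
        },
      },
    });
  }
}
```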
Week 2: Software Catalog and Temporal
Software Catalog: This is where the three-step process really proved its value. The catalog feature touched almost every part of the system.
Thinking surfaced:
- Catalog items skip the build pipeline entirely
- Dependencies (MySQL, Redis) need to be provisioned alongside the app
- Long-running deployments need reliability guarantees
- Users need real-time status updates
Planning (with iteration) produced:
- CatalogItem and CatalogVersion data models
- Temporal workflow with 7 phases (my suggestion, refined by Claude)
- Dependency provisioning as a child workflow
- SSE for browser updates
- Rollback logic for failure scenarios
Implementation generated thousands of lines of code that fit together because the architecture was already decided.
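To give a flavor of one of the smaller pieces, here's a minimal sketch of an SSE status endpoint, assuming a Next.js App Router route; the route path and the status lookup are hypothetical stand-ins, not the actual code:

```typescript
// app/api/deployments/[id]/status/route.ts (path illustrative)
// Hypothetical stand-in for the real status lookup (Temporal query or DB read).
declare function getDeploymentStatus(id: string): Promise<{ phase: string; message: string }>;

export async function GET(req: Request, { params }: { params: { id: string } }) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      let closed = false;
      const finish = () => {
        if (closed) return;
        closed = true;
        clearInterval(timer);
        controller.close();
      };

      // Poll the deployment status and push each update to the browser.
      const timer = setInterval(async () => {
        const status = await getDeploymentStatus(params.id);
        if (closed) return;
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(status)}\n\n`));
        if (status.phase === 'COMPLETED' || status.phase === 'FAILED') finish();
      }, 2000);

      // Stop streaming if the client disconnects.
      req.signal.addEventListener('abort', finish);
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}
```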
The Rebrand: At some point during week two, I looked at the header saying "Client Portal" and realized it was misleading. This wasn't just a place for clients to check project status anymore. It was infrastructure.
Claude Code helped me:
- Update branding from "Client Portal" to "DevOps"
- Restructure the home page to showcase platform capabilities
- Move consulting-focused content to a dedicated marketing page
- Update metadata for SEO
What Made This Work
1. Built on Real Needs
Every feature came from actual client requests. I wasn't guessing what a "DevOps platform" should have - I was solving problems I encountered weekly in my consulting work.
2. Iteration Before Implementation
The think-plan-implement cycle meant I caught architectural mistakes before they became technical debt. Asking "does this make X harder in the future?" during planning is free. Discovering it after implementation is expensive.
3. Bringing My Own Expertise
Claude doesn't know everything I know. I've built systems with Temporal, dealt with Kubernetes edge cases, seen what breaks at scale. The collaboration works because I can steer based on experience while Claude handles the implementation details. It's not about delegating decisions - it's about multiplying execution.
4. Multi-Tenancy From Day One
Because the portal was always designed to manage multiple client organizations, scaling to a platform was natural. Usage quotas, resource isolation, billing tiers - the organizational boundaries were already there.
5. Async Done Right
Every time I planned a feature, I'd ask about failure modes. What if the server restarts? What if the client closes their browser? The answers shaped the architecture toward server-orchestrated workflows rather than fragile browser-driven processes.
The Business Impact
For my AI clients: They deploy ML APIs and data pipelines without understanding Kubernetes. Focus on models, not infrastructure.
For my service clients: They get WordPress, Ghost, booking systems - managed and updated without calling me for every change.
For my franchising clients: Consistent infrastructure across locations. New site? Click a button.
For my consulting practice: Less time on repetitive deployment tasks. More time on actual consulting work.
The Numbers
- 2 weeks from consulting portal to DevOps platform
- 30+ database migrations during the transformation
- 20+ feature modules across the codebase
- 3 deployment modes: GitHub repos, Docker images, one-click catalog
- 4 plan tiers: Free, Starter, Pro, Enterprise
The Meta-Lesson
I didn't set out to build a PaaS. I set out to make my consulting operations more efficient. But the tool I built for myself turned out to be exactly what my clients needed too.
Two weeks. That's how long it took to go from "maybe I should add deployment features" to a working platform with CI/CD pipelines, scheduled jobs, a software catalog, and Temporal-orchestrated workflows.
The think-plan-implement cycle made this possible. It's tempting to jump straight to "build me this feature" - but the iteration during thinking and planning is where the real value lives. That's when you catch the mistakes that would cost days to fix later.
Claude Code isn't a replacement for experience. It doesn't know that Temporal is a better fit than queues for this use case - I had to bring that. But it takes that direction and runs with it, holding the entire codebase in context while generating implementations that fit.
The best results come from genuine collaboration: human judgment about what to build and which technologies to use, AI assistance with how to implement it consistently across a growing codebase.
The customer portal started as internal tooling. Two weeks later, it became a product.