The AI-Ready Redesign: How Spec-Driven Development Is Changing the Way Teams Build
Reading time: ~5 minutes
This entry is part six of the series: Website Re-design Series
Part 1: Don't Rush the Redesign: Start with Strategy
Part 2: Defining Redesign Roles: Designer, Developer, or UX/UI Expert?
Part 3: Design Meets Reality in Rails Redesigns
Part 4: Launch Ready: Communicating Across Teams for a Smooth Rollout
Part 6: The AI-Ready Redesign: How Spec-Driven Development Is Changing the Way Teams Build
Throughout this series, we've come back to the same theme: the teams that execute redesigns well aren't necessarily the ones with the most talent or the biggest budgets. They're the ones who invest in clarity before execution.
That principle hasn't changed. But how teams execute has changed, and rapidly.
AI-assisted development tools are now a real part of how both engineering and design work gets done. And the teams that are struggling to get value from them often share a common problem: they're feeding vague input to powerful tools and wondering why the output doesn't match what they had in mind.
The solution isn't better prompting. It's better preparation.
The Discovery Workshop is now the most important meeting on the project
In a traditional redesign, the discovery phase (the time spent aligning on goals, constraints, and requirements before any work begins) was valuable, but optional in practice. Teams could course-correct during implementation. The feedback cycles were slow enough that misalignment got caught somewhere along the way.
AI tools compress those cycles dramatically. An engineer using GitHub Copilot or Claude Code can move from requirement to working feature in a fraction of the time it used to take. A designer using AI-assisted tools can explore visual directions faster than a traditional mockup round. That speed is genuinely useful, but only if the input is right. When it isn't, teams don't save time. They build the wrong thing quickly, then rework it.
AI doesn't eliminate the cost of misalignment. It accelerates it.
This is why the discovery workshop, a focused session where the right people gather to clarify the mission before any tool touches a file, has become the most strategically valuable step in a modern redesign. Not a kickoff. Not a scope review. A real working session where designers, engineers, product owners, and client stakeholders sit in the same room (or the same call) and stress-test the "what" and "why" until it can be written down clearly enough to hand to a machine.
The output of that workshop becomes the foundation for everything that follows.
What changes for design teams
For designers, the shift is less about tools and more about timing. AI can generate component variations, explore visual directions, and produce design drafts at a pace that wasn't possible a few years ago. But those tools need direction. A vague brief produces a vague output.
What the discovery workshop gives designers is something more precise: a written definition of the user's goals, the product's purpose, and the constraints that matter. That document becomes the brief: not a mood board or a list of reference sites, but a structured description of intent. The designer's role in an AI-assisted workflow shifts toward shaping that intent clearly and then evaluating outputs critically, rather than producing every pixel manually.
The fundamentals from earlier in this series still apply. You still need to know whether you're solving a visual identity problem, a user flow problem, or a technical implementation problem, and who should lead. AI doesn't answer that question. It just executes faster once you do.
What changes for engineering teams
Most conversations about AI-assisted development have a hidden assumption baked into them: that you're starting from scratch. A blank repo. A clean schema. No prior decisions to untangle.
That's not the reality most teams are actually working in.
The applications we work with most often at Planet Argon are 10, 15, sometimes 20 years old. They've been through three, four, sometimes five redesigns as the business shifted. They have features that were built, deprioritized, and half-removed, but never fully cleaned up. They have architectural decisions that made sense in 2012 but create friction in 2026. They have knowledge baked into the code that isn't written down anywhere, held together by institutional memory that may have already walked out the door.
A system that has survived a decade of real users, real edge cases, and real production pressure has earned something. Call it institutional knowledge, call it battle-tested architecture. Either way, that history has value. Age alone tells you almost nothing. What matters is whether anyone has taken the time to actually understand what the system does and why.
A rewrite is what you choose when you don't understand what you have yet. Understanding comes first. Then you decide.
This is where Spec-Driven Development, and tools like GitHub's Spec Kit, become especially valuable. Not for greenfield projects, but for exactly these kinds of applications entering their next chapter.
The workflow moves through a deliberate sequence. First, /speckit.constitution establishes governing principles: code quality standards, testing expectations, and architectural guidelines. In a legacy Rails application, this step isn't just about new decisions but about documenting what the system currently is, so everyone is working from the same understanding before anyone changes anything.
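To make that concrete, a constitution for a long-lived Rails application might capture entries like the following. This is an illustrative sketch only; the section names and rules are hypothetical examples, not taken from any real project or from Spec Kit's own templates:

```markdown
# Project Constitution (excerpt)

## Code Quality
- New code follows the conventions already present in this codebase
  (service objects in app/services; no new patterns without review).

## Testing
- Every behavior change ships with a test. Legacy code paths get
  characterization tests before they are modified.

## Architecture (what the system currently is)
- Rails monolith on PostgreSQL, background jobs via Sidekiq.
- No new datastores or job frameworks without a written decision record.
```

Note that the "what the system currently is" section does double duty: it constrains the coding agent and documents the status quo for the humans in the room.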
From there, /speckit.specify captures requirements in plain language: what you're building and why. Then comes the step that matters most in a long-lived codebase: /speckit.clarify, a structured process that surfaces underspecified areas and forces them to be resolved before planning begins. In an aging application, "underspecified" often means undocumented: behaviors that exist because they were built that way, not because anyone consciously decided they should be. The AI will find these gaps. That's the point.
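As a hedged illustration of what that surfacing looks like, a clarification pass on a legacy feature might produce open questions along these lines. The feature and questions below are hypothetical, and the exact marker format may differ from what Spec Kit's templates emit:

```markdown
## User Story
As a returning customer, I can reactivate a lapsed subscription
without re-entering my payment details.

## Open Questions
- [NEEDS CLARIFICATION: Accounts canceled before 2019 have no stored
  payment token. Is reactivation supported for them at all?]
- [NEEDS CLARIFICATION: Some code paths apply a "win-back" discount on
  reactivation. Is that intentional, or leftover from an old promotion?]
```

Each of these is exactly the kind of undocumented behavior described above: something the codebase does, that no one ever wrote down as a decision.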
What makes this pairing powerful is that Spec Kit isn't operating in a vacuum. It works directly against the actual codebase. That means as it moves through /speckit.plan and into /speckit.tasks, it can identify where proposed changes touch code that was written for a feature that was later abandoned, or where a new pattern risks breaking a behavior that's been load-bearing for years without anyone realizing it. The spec-driven approach creates a paper trail (intent matched against implementation) that makes regressions visible before they ship.
Once requirements are clear and the plan is validated, /speckit.implement executes the task list in the correct sequence, respecting dependencies that a human might miss or rush past.
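Put together, the full sequence (run as slash commands inside an AI coding agent that supports Spec Kit) looks roughly like this; the one-line summaries are ours, not Spec Kit's:

```
/speckit.constitution   # governing principles + what the system currently is
/speckit.specify        # the requirement, in plain language: what and why
/speckit.clarify        # surface and resolve underspecified areas
/speckit.plan           # technical plan, grounded in the actual codebase
/speckit.tasks          # ordered, dependency-aware task list
/speckit.implement      # execute the tasks in sequence
```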
In spec-driven development, your primary role is to steer. The coding agent does the bulk of the writing.
AI tools can read a codebase faster than any human. But reading code and understanding it are different things. The discovery workshop, the constitution, and the clarification process are how you turn what the machine reads into something it can actually act on correctly. They are not overhead. They are the product. Everything downstream is only as good as the clarity that went into those earlier steps.
The part that doesn't change
What's easy to miss in the excitement around AI tooling is how much of what we've covered in this series remains entirely relevant.
Strategy still has to come first. If you don't know why you're redesigning and what success looks like, no tool, AI or otherwise, will figure that out for you. The right roles still have to be involved at the right time. A constitution written without an engineer in the room will miss technical constraints. A spec written without design input will miss user experience considerations. And alignment between vision and execution, between design and engineering, and between the team and the client still has to be built deliberately. AI surfaces misalignment faster, but it doesn't resolve it.
What AI changes is the leverage point. When clarity is high, the tools amplify output significantly. When clarity is low, they amplify confusion just as efficiently.
Where we're heading
The teams we see adapting most successfully to this new environment are investing earlier and more deliberately in the front end of the process. More workshops. More structured discovery. More time spent on the constitution, the spec, and the clarification pass before anyone runs a single /speckit.implement command or asks an AI to generate a component.
That investment pays off later. Not because AI tools are magic, but because they're fast. And fast, paired with clear intent and a well-understood codebase, is exactly what a well-run redesign looks like.
We've always believed that good software deserves a second act: not a funeral, and not a rewrite for its own sake. AI tools don't change that belief. If anything, they strengthen it. An AI agent that can read years of code history, surface undocumented patterns, and execute a spec-driven plan against a proven system is a powerful thing. But it still needs someone in the room who understands what that system is for, what it has survived, and where it's going next.
The through-line of this series has always been that good software outcomes come from alignment: between people, between purpose and execution, between what a team builds and what a product actually needs. AI accelerates the work. It doesn't replace the judgment. That part still belongs to the teams that are willing to do discovery before they do the building.