LeadDev Webinar Recap: “Why Software Maintenance Still Feels Stuck in 2015 (And What To Do About It)”
Reading time: ~ 7 minutes
In 2016, we were shipping Rails 4 apps, deploying to Heroku, and feeling pretty modern. Fast‑forward a decade, and we’ve brought a bunch of new technologies into our development workflows, so why does application maintenance still feel like it’s approached the way it was back then: reactive and singularly focused? How can we update our processes and workflows to accommodate new technologies, and what do we need to do to make sure we don’t face the same problem of stale techniques and opinions 10 or 15 years from now?
I went into the recent LeadDev panel, “Why Software Maintenance Still Feels Stuck in 2015 (And What To Do About It)”, with these questions, and was glad to get a lot of insight from the event’s panelists: Tal Kimhi (Draftt), Abiodun Olowode (Metrifox), Hila Fish (AWS), and Reggie Davis (Tech for Culture). Their perspectives align closely with what our Ruby on Rails agency deals with every day: legacy applications, version upgrades, code quality improvements, and long‑term maintenance across multiple clients.
Below are my five key takeaways from the conversation, reframed through the Planet Argon lens of a legacy Rails agency preparing a variety of codebases for the future.
Maintenance starts at line one, not after launch
The panelists argued that “maintenance” isn’t a phase that begins once you go live; it starts with the very first line of code. Every decision either compounds future complexity or reduces it. Most “tech debt” isn’t a conscious trade‑off; it’s just unmaintainable code written without a maintenance mindset. Because we can’t go back and change the decisions of previous contributors, we need to recognize that our clients’ apps aren’t broken; they’re just a little…mature. And that maturity changes what can and should be done to maintain the app.
Why this matters to a Ruby on Rails agency
If you’re a Rails agency inheriting and evolving legacy apps:
- You can’t change how the app was originally architected, but you can control what you do with that unique or outdated design.
- Your margins depend on predictability: surprise maintenance fires erode trust, profitability, and team morale.
- Every feature you add will either tame the legacy system or entangle your team deeper in it.
Putting it into practice
Bake maintenance readiness into definitions of done
- Require tests for all new/changed behavior.
- Require small, cohesive PRs that are reviewable and reversible (a CI sketch automating these first two checks follows this list).
- Require at least minimal documentation (inline comments or short ADRs) for non‑obvious decisions.
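The first two requirements above are straightforward to automate in CI. Here’s a minimal sketch using Danger’s Ruby DSL; the 400‑line threshold and the app/ and spec/ paths are placeholders to tune per project, not a fixed standard.

```ruby
# Dangerfile -- a sketch, assuming Danger (danger.systems/ruby) runs on each PR.
# Thresholds and paths are illustrative; adjust them per client project.

# Nudge authors toward small, reviewable, reversible PRs.
if git.lines_of_code > 400
  warn("This PR changes #{git.lines_of_code} lines; consider splitting it into smaller, cohesive PRs.")
end

# Flag application changes that arrive without any accompanying test changes.
changed      = git.modified_files + git.added_files
app_changes  = changed.grep(%r{\Aapp/})
test_changes = changed.grep(%r{\A(spec|test)/})
warn("App code changed, but no tests were added or updated.") if app_changes.any? && test_changes.none?
```

Keeping these as warnings rather than hard failures leaves room for judgment calls while still making the bar visible on every PR.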
Introduce guardrails for new work in legacy apps
- Establish a new code quality bar that is higher than the legacy baseline (linting, style guides, test coverage thresholds); see the coverage sketch after this list.
- Prefer new modules, services, or boundaries over extending existing sprawling classes.
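One way to make the higher bar concrete is a coverage floor that new work has to clear. Below is a minimal sketch, assuming SimpleCov with RSpec; the percentages and grouping path are placeholders you’d set just above the legacy baseline and ratchet up over time.

```ruby
# spec/spec_helper.rb -- coverage floor sketch (assumes the simplecov gem).
require "simplecov"

SimpleCov.start "rails" do
  enable_coverage :branch

  # Group newer, well-bounded code so its coverage is visible on its own.
  add_group "Services", "app/services"

  # Fail the suite if coverage drops below the agreed floor.
  minimum_coverage line: 80, branch: 60
end
```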
Include maintenance explicitly in proposals & SOWs
- Call out maintenance‑supporting work (test coverage, refactors, dependency upgrades) as named line items, not hidden extras.
- Frame these as necessary enablers for reliable delivery, not optional nice‑to‑haves.
Tech debt is usually just bad design, and it spreads by default
Panelists called out an uncomfortable truth: most tech debt was never an intentional shortcut. It’s just poor (or rushed) design decisions that became the de facto standard. New engineers copy what they see, and teams only prioritize fixes when something breaks or performance falls off a cliff.
Why this matters to a Ruby on Rails agency
In Rails legacy work, this is the daily reality:
- New developers joining a client project infer “how we do things here” from the existing code, not a style guide or an aspirational best-practices doc.
- A messy controller or service layer becomes a pattern you accidentally scale out.
- Untracked tech debt quietly compounds across multiple clients, with the team becoming less and less confident about the stability of various applications.
For an agency, unmanaged tech debt is a risk multiplier across our entire portfolio.
Putting it into practice
Create a visible, lightweight tech debt register per client
- Don’t let tech debt live only as TODOs in code or in developers’ heads.
- Instead, track issues as structured entries, including area, impact, proposed fix, rough effort, etc. (Cherrybomb, our internal tech debt-tracking tool, helps a lot with this!)
Tag and classify debt by business impact
- Example categories: “blocks upgrades,” “frequent source of bugs,” “performance hotspot,” “security posture risk.”
- We then use this to prioritize debt across clients and sprints.
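For illustration, here’s roughly what one structured entry could look like if you sketched it as a plain Ruby object. Cherrybomb has its own format; the fields and tags below simply mirror the bullets above and are not its actual schema.

```ruby
# A hypothetical tech debt register entry -- not Cherrybomb's real data model.
DebtEntry = Struct.new(:area, :impact, :proposed_fix, :rough_effort, :tags, keyword_init: true)

entry = DebtEntry.new(
  area:         "app/controllers/orders_controller.rb",
  impact:       "Frequent source of checkout bugs; every pricing change is risky",
  proposed_fix: "Extract pricing rules into a tested PORO",
  rough_effort: "2-3 days",
  tags:         ["frequent source of bugs", "blocks upgrades"]
)

# Tagging by business impact is what makes it possible to prioritize debt
# consistently across clients and sprints.
```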
Institutionalize the “Boy Scout Rule” for client work
- For any touched area, require leaving it slightly better: extract a method, add a test, remove duplication, etc. (an example follows below).
- Track micro‑refactors on tickets so effort is visible and justifiable.
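A hypothetical example of the kind of micro‑refactor we mean: while touching a controller action for a feature, extract the buried business rule into named, testable methods rather than rewriting the whole class.

```ruby
# Before: the rule is inlined in the action and hard to test in isolation.
def create
  if params[:quantity].to_i > 0 &&
     current_user.orders.where(created_at: Time.current.all_day).count < 10
    # ... place the order ...
  end
end

# After: two small, named methods that can each get a unit test.
def create
  if valid_quantity? && under_daily_order_limit?
    # ... place the order ...
  end
end

private

def valid_quantity?
  params[:quantity].to_i > 0
end

def under_daily_order_limit?
  current_user.orders.where(created_at: Time.current.all_day).count < 10
end
```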
Time‑box “scouting spikes” for scary areas
- For brittle hotspots (e.g., a 1,000‑line controller), schedule short investigations to understand risk and outline a staged refactor.
Documentation is the cheapest maintenance tool you’re not using
The panel repeatedly emphasized how the lack of documentation makes maintenance painful and risky. When nobody knows why something was implemented in a particular way, seemingly harmless changes can bring production to a halt. Documentation doesn’t have to mean 50‑page wikis. Inline comments, short design docs, and diagrams are all powerful maintenance assets.
Why this matters to a Ruby on Rails agency
You’re frequently:
- Onboarding new developers into unfamiliar domains.
- Switching team members between client projects.
- Debugging behavior in apps where original authors and product owners are no longer around.
Without documentation, every context switch has a high cognitive and financial cost. You’re flying blind when estimating work or redesigning key flows. Good (or good enough) docs aren’t so much about preserving the past as they are about making the future easier for code contributors.
Putting it into practice
Standardize documentation floors for every client project
At minimum, maintain:
- A living SYSTEM_OVERVIEW.md (key domains, main services, critical jobs).
- A “How this app makes money / delivers value” section to orient new devs.
- A “Gotchas & landmines” section (e.g., fragile parts, non‑obvious invariants).
Use lightweight architecture decision records (ADRs)
- One short Markdown file per significant decision (e.g., “Why we introduced dry‑rb here,” “Why we kept this old service instead of rewriting”).
- Store them in the repo next to the code.
Encourage code‑level breadcrumbs
- Add comments only where behavior is surprising or business logic is subtle.
- Link comments to relevant tickets/ADRs for deeper context.
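A quick sketch of what a useful breadcrumb looks like: the comment captures the surprising part and points at the context, rather than restating the code. The ADR number and ticket reference here are made up for illustration.

```ruby
def sync_inventory!
  # The vendor API returns quantities as strings, and occasionally as "N/A".
  # Context: ADR-0007 and ticket #1432 (illustrative references).
  quantity = Integer(raw_quantity, exception: false) || 0
  update!(quantity_on_hand: quantity)
end
```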
Bake documentation into onboarding and offboarding
- During onboarding, require newcomers to update docs as they learn (this surfaces gaps).
- When rolling off a client, schedule explicit “knowledge transfer + doc pass” time, not just hand‑off meetings.
AI accelerates code creation, but it doesn’t automate code maintenance
AI coding assistants make it easier than ever to generate code fast. The panel’s warning: AI has dramatically increased the volume and velocity of code, but teams have not similarly increased their capacity to maintain it. Without proper structure and constraints, AI‑generated code can amplify existing maintenance problems.
Why this matters to a Ruby on Rails agency
- AI can help you move faster across many client codebases, but only if it’s constrained by process.
- Your value proposition is quality and reliability at speed; abuse of AI erodes both.
- AI-generated code often passes linters and other automated checks because nothing is technically wrong with it, but it can still introduce patterns that conflict with your codebase’s existing conventions and are hard to untangle later.
Putting it into practice
Adopt a structured approach to AI usage
- Require a written spec or ticket description before using AI for non‑trivial changes.
- Treat AI as an assistant implementing an agreed design, not as the designer.
Define clear AI usage guidelines for client work
- What kinds of tasks are acceptable?
- Which kinds of AI-generated changes require extra scrutiny?
- How will you communicate AI usage and safeguards to clients?
Upgrade your review practices for AI‑assisted PRs
- Flag AI‑assisted changes clearly in PR descriptions.
- Increase review depth on PRs containing large AI‑generated diffs.
- Require tests and, where relevant, benchmarks for AI‑generated performance‑sensitive code (a quick benchmark sketch appears after this list).
- Set an internal ownership standard that covers AI tools. Engineers own all code they submit, regardless of whether AI helped write it or not.
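For the benchmarking point, even a rough comparison in the console is better than merging on faith. Here’s a sketch using Ruby’s standard Benchmark library; the “legacy lookup” and the AI‑assisted replacement are hypothetical stand‑ins for whatever hot path is under review.

```ruby
require "benchmark"

# Fake dataset standing in for whatever the hot path actually touches.
data = (1..50_000).map { |i| ["sku-#{i}", i] }

Benchmark.bm(25) do |x|
  # Existing implementation: linear scan per lookup.
  x.report("legacy lookup:") { 1_000.times { data.find { |sku, _| sku == "sku-49999" } } }

  # AI-assisted replacement under review: build a hash once, then constant-time lookups.
  x.report("AI-assisted lookup:") { lookup = data.to_h; 1_000.times { lookup["sku-49999"] } }
end
```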
Use AI where it’s least risky and most repetitive
- Generating tests, codemods, bulk renames, translations, or simple Rails upgrade fixes.
- Always run linters, formatters, and test suites as part of the workflow.
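For that last point, it helps to have a single command that bundles those checks so they never get skipped. A minimal Rakefile sketch, assuming RuboCop and RSpec are in the bundle; the task name is arbitrary.

```ruby
# Rakefile -- a sketch assuming the rubocop and rspec gems are installed.
require "rubocop/rake_task"
require "rspec/core/rake_task"

RuboCop::RakeTask.new            # defines `rake rubocop`
RSpec::Core::RakeTask.new(:spec) # defines `rake spec`

# One command to run before (and after) letting AI touch the codebase.
desc "Lint and test in one pass"
task quality: [:rubocop, :spec]
```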
Translate maintenance into business risk and velocity, not just engineering pain
A recurring theme from the panel: maintenance only gets prioritized when leaders understand its impact on business outcomes. Security teams solved this a while ago with clear risk language and SLAs. Engineering needs a similar shared language around maintenance.
When you ask leaders how fast they want to move and how reliable they need the system to be, maintenance becomes an obvious enabler, not an engineering indulgence.
Why this matters to a Ruby on Rails agency
Your work sits at the intersection of:
- Legacy systems that power revenue generation.
- Product organizations pushing for new features.
- Business stakeholders who don’t (necessarily) speak your team’s language, but do understand risk, uptime, and opportunity cost.
If you can’t explain maintenance in terms of timelines, risk, and dollars, your clients will under-invest, and you’ll fall into a reactive posture and struggle to justify the work you know you need to do to keep the app ecosystem secure and stable.
Putting it into practice
Define a simple maintenance risk model per client
- Classify issues as “critical, high, medium, low” based on:
  - Potential revenue impact.
  - Regulatory/security implications.
  - Impact on the ability to ship.
- Tie each level to recommended SLAs.
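As a rough illustration, the mapping from level to SLA can be as simple as a shared lookup; the levels, examples, and timeframes below are placeholders to agree on with each client, not a standard.

```ruby
# Hypothetical severity-to-SLA mapping -- agree on real values with each client.
MAINTENANCE_SLAS = {
  critical: { examples: "revenue-impacting outage, exploitable vulnerability",  respond_within: "same day"      },
  high:     { examples: "blocks a Rails/Ruby upgrade, recurring production bug", respond_within: "this sprint"   },
  medium:   { examples: "performance hotspot, brittle test suite",               respond_within: "this quarter"  },
  low:      { examples: "style drift, minor duplication",                        respond_within: "opportunistic" }
}.freeze
```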
Connect maintenance work to client OKRs and roadmap
- Example: “Upgrading to Rails 7 reduces security risk, unlocks performance work, and shortens time to ship new APIs.”
- Show how a maintenance initiative reduces future feature cost or cycle time.
Bundle maintenance with roadmap milestones
- Pair feature work with related refactors and dependency upgrades.
- Offer maintenance epics that align with big business goals, like internationalization, performance SLAs, or compliance changes.
Measure and report maintenance outcomes
- Track and share metrics with clients, such as incident count, mean time to recovery, deployment frequency, lead time for changes, and Rails/Ruby version status (a small calculation sketch follows this list).
- Regularly update clients: “Because we did X, you now get Y (faster releases, fewer outages, easier upgrades).” Don’t be afraid to brag about the work your team does and its impact on the health of your client’s company.
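None of this requires a fancy analytics stack to get started. Here’s a self‑contained sketch of the reporting idea, computing two of the metrics above from whatever incident and deploy records you already keep; the field names and figures are made up.

```ruby
incidents = [
  { opened_at: Time.new(2025, 1, 6, 9, 0),   resolved_at: Time.new(2025, 1, 6, 11, 30) },
  { opened_at: Time.new(2025, 1, 20, 14, 0), resolved_at: Time.new(2025, 1, 20, 15, 0) }
]
deploys_this_month = 23

# Mean time to recovery, in hours.
mttr_hours = incidents.sum { |i| (i[:resolved_at] - i[:opened_at]) / 3600.0 } / incidents.size

puts "Incidents: #{incidents.size}"
puts "MTTR: #{mttr_hours.round(1)} hours"
puts "Deploys per week: #{(deploys_this_month / 4.0).round(1)}"
```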
Preparing for what’s ahead
Software maintenance is not glamorous. It rarely makes conference keynotes and almost never features on product roadmaps. But as the panelists emphasized, it’s the most important part of software development because it’s where systems spend the majority of their lives. By treating maintenance as something that begins with a project’s first commit, we can pull our clients and ourselves out of the 2015 mindset.
The technology stack will keep evolving. The real question is whether your maintenance practices evolve with it to support the next phase of the organization.