Oops, I updated a file and everything broke
So, you’ve inherited a codebase. It’s a few years old and the original devs are no longer with the company. Or maybe they are and they’ve moved on to other projects, happily forgetting the awful code they wrote all those years ago.
The first ticket you’ve been assigned says: “fix calculation bug in core processing library.” A few of your peers wish you luck, like you’ve been given a death sentence. You shrug it off, do a little digging, find the poorly named calcnum2() function, spend a few hours figuring out what it actually does, and then you have the “a-ha” moment. A light shines down from the heavens on your keyboard and you see the error. Even better, it’s just a simple five-line fix.
You’re a genius.
A local commit, a few tests, and you feel good. You push your change up to the feature branch and—error messages start coming in on chat. Hmm… that’s weird. Then a couple people start asking, “Who updated the core processing library on the new feature branch? None of my code is working.”
You run the integration test suite and, yep, nothing’s working.
Local testing is local
It’s easy to find a scapegoat for what you just went through. Poor code quality (not yours, of course), legacy architecture, lack of documentation, too many dependencies—you name it. One factor that doesn’t quickly come to mind is insufficient testing.
You might reply, “Insufficient testing?! Our test suite takes two hours to finish!” or “We have 95% automated test code coverage.” But here’s the truth: test execution time, coverage, and other metrics can build a false sense of security.
- The tests might be non-deterministic. Here’s what Martin Fowler says about that: “Non-deterministic tests have two problems, firstly they are useless, secondly they are a virulent infection that can completely ruin your entire test suite.”
- The tests might be gaslighting you and your team, making everyone question whether the software works at all: “Maybe the tests are right, but we’re doing something else wrong.”
- The tests might not be idempotent, returning different results across repeated runs of the same code.
- Your codebase may lack the granularity to be tested effectively. If methods are too large and haven’t been broken into well-scoped subtasks, meaningful tests may not even be possible to write.
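To make the non-determinism and idempotency points concrete, here’s a minimal Python sketch (the function names and dates are invented for illustration): a test whose outcome depends on the wall clock will flip from green to red on its own someday, while a version that takes the date as a parameter checks the exact same thing on every run.

```python
import datetime

# Non-deterministic: the outcome depends on when the test runs.
# Green today, red on Jan 1, 2030 -- with no code change at all.
def test_discount_expires_flaky():
    expiry = datetime.date(2030, 1, 1)
    assert datetime.date.today() < expiry

# Deterministic: the "current" date is a parameter, so the test
# exercises the same logic, with the same inputs, every time.
def is_discount_active(today, expiry):
    return today < expiry

def test_discount_expires_deterministic():
    expiry = datetime.date(2030, 1, 1)
    assert is_discount_active(datetime.date(2029, 12, 31), expiry)
    assert not is_discount_active(datetime.date(2030, 1, 1), expiry)
```

The fix is almost always the same shape: push the hidden input (time, randomness, network state, leftover data) out of the test and make it an explicit argument.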
That hours-long test suite may not be providing any meaningful signals to help the dev team resolve problems. Just because it takes a long time doesn’t mean it’s providing a lot of value. In fact, we’re reducing test suite execution times from hours down to minutes while maintaining test and deployment success.
Bad testing slows devs down
Developers need the most immediate feedback possible. Write code, get feedback—it’s that simple. Unfortunately, it doesn’t work that way yet. Between the act of writing code and receiving feedback are all sorts of delays due to infrastructure, automation, and more. At YourBase, we envision a world where feedback on new code is provided immediately, but that’s a post for another day…
The longer developers wait to receive feedback, the lower their velocity. Not only is this counterproductive to the fundamental purpose of their role, but it’s demoralizing. Write a few lines of code (sometimes three lines, sometimes three hundred) and wait. And the waiting isn’t exclusive to humans.
Computers wait, too: single-threaded test suites, the aforementioned non-deterministic tests, environmental and data constraints, poor dependency mapping and planning, yada yada yada. Bad tests slow everyone down.
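As a toy illustration of the single-threaded point: if tests are truly independent, a runner can farm them out to workers instead of queuing them one at a time. This is just a sketch using Python’s standard library (real parallel runners such as pytest-xdist use worker processes); `slow_test` is a stand-in that simulates I/O-bound test work.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_test(name):
    """Stand-in for a real test that spends most of its time waiting."""
    time.sleep(0.2)
    return (name, "pass")

names = [f"test_{i}" for i in range(8)]

# Run serially, these would take roughly 8 * 0.2s = 1.6s of pure waiting.
# Run in parallel, all eight waits overlap.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(slow_test, names))
elapsed = time.perf_counter() - start

print(f"{len(results)} tests in {elapsed:.2f}s")
```

The catch, of course, is the “truly independent” clause: tests that share a database, a temp directory, or module-level state can’t be parallelized until that coupling is removed, which is exactly where the refactoring work comes in.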
Though test suites can be accelerated, sometimes refactoring and rearchitecting are required.
Can’t accelerate testing if testing can’t be accelerated
We accelerate test suites. It’s what we do. We help customers with hours- or days-long test suites and we cut those dang execution times by up to 90%. Sometimes we can do it in a single sprint, but sometimes it takes months. Why?
Well, like we’ve said, the architecture of the codebase, or the infrastructure itself, or the years-old legacy codebase issues, or one of a dozen things could be problematic. We’re also very thorough and we don’t move forward with test acceleration unless we’re abso-frikkin-lutely sure our accelerated tests provide the same value as the full-fat test suite we’re accelerating.
We have a variety of ways of accelerating test suites, including parallelization and mapping tests to lines of code (what we call a “dependency graph”). But sometimes there’s no silver bullet. We have to help our customers rearchitect their codebase or their test suite. At some point, we’re “givin’ her all we’ve got, captain!”
And this takes us back to the beginning of this article. Legacy codebases with poorly designed and unmaintained test suites are test acceleration killers, plain and simple. Like the muscles I haven’t exercised since high school, legacy codebases require maintenance, upkeep, lubrication, and a generous application of IcyHot (or Bengay if that’s your jam). Otherwise, it takes a two-hour test suite designed years and years ago to tell you what still works.
Need help with test acceleration?
Test acceleration definitely suffers from the weakest link problem. The system is only as fast as the slowest piece of the critical path from development to deployment.