Automated Testing is a First-Class Citizen
It blows my mind that to this day, in the year 2025, there are still a large number of software development shops that don’t employ automated testing. There are certainly companies out there who have done a good job with automated testing and are reaping the benefits, but I’d say more of them are missing some form of unit or integration testing as part of their flow. Considering the importance of the business functions that their software provides (otherwise, why are they writing it?), it’s mind-boggling that they’d leave mission-critical functionality untested, or at least not tested in an automated way. Let’s dive in!
Automated Testing can be Difficult
I get it. After two decades of writing code, I understand that writing automated tests can be difficult. I hope that writing your unit tests isn’t the difficult part - if it is, that likely speaks to either an extremely legacy code base (we’re talking pre-.NET) or some massive architecture problems. Integration tests are usually the harder task, often because of external dependencies.
The key point here is that just because the task is difficult doesn’t mean you shouldn’t do it! I know, easier said than done, particularly when you have deadlines to meet and code to ship. I’ve been there. But I’ve seen a lack of testing cause more problems in the long term than meeting short-term deadlines solves. Which brings me to my next point:
A Lack of Automated Testing Should Be Considered Tech Debt
If you can’t prove that your software is working (and continues to work) as designed, that’s a flaw. Sure, you could manually verify the software, and maybe that’s viable for a proof of concept or an MVP, but as soon as you have a paying customer, you should be able to say with confidence that the thing they’re paying for is working.
Manual Testing is Throw-Away Work
One of the places I worked used to have their testers make Word documents full of screenshots, documenting the fact that they tested each feature and that it gave the expected result via the UI. While this might be fine for validating UI-specific features, the document they produced is invalidated the second someone else merges code into the main branch, or the moment the feature branch itself is merged into main. Manual testing is throw-away work, and should be done as little as possible.
Of course, you’ll never be able to fully rid yourself of manual testing; developers often manually test while working on their code, and it’s a good stop-gap measure until automated tests are in place. But I’d argue a feature shouldn’t be considered “done” until there’s some level of automated acceptance testing proving that the success criteria of the bug or feature have been (and continue to be!) met.
The Three Levels of Automated Testing
Here’s my perspective on the minimum level of automated testing that all development teams with active users should pursue:
- If the failure of a business-critical function would result in you rolling back a release or calling someone after-hours, it should have an automated integration test
  - If something is so important to your end-users that its failure would make them freak out, flood the support desk with calls, or spam social media with complaints - or would fundamentally prevent them from using the system at all - it definitely should be verified with an automated test. Good examples are user login, a purchase flow, automated billing, or any other mission-critical functionality. You can prevent a whole bunch of pain by having your deployment process verify this functionality as part of the deployment, potentially rolling back or halting the deploy if any of those tests fail. These tests should also be part of your release branch acceptance testing (there’s a smoke-test sketch of this after the list).
- If you, as a developer, committed to deliver a feature with specific success criteria, you should be able to demonstrate that it works (and continues to work) with an integration test
  - If one of your analysts went to the trouble of writing specific success criteria for the feature, you can be pretty sure those criteria are critical to the feature working. They’re business logic - logic you were paid to implement - and to prove you successfully delivered it (and that it continues to work as expected), you should demonstrate it with an integration test (the second sketch after the list shows one way to do that).
- If you wrote business logic, it should be covered by a unit test
  - Anything beyond a simple transform between layers or an external function call should likely be covered by a unit test. A good rule of thumb: if you wrote a conditional, you likely need to test it. Not only does this provide code-level verification of expected behaviour, it also pushes you toward better architecture (poorly laid-out code is often harder to test). The third sketch after the list covers this one.
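Here’s a minimal sketch of what that first level could look like in C# with xUnit: a smoke test pointed at the freshly deployed environment and used as a deploy gate. The login endpoint, the seeded test account, and the SMOKE_BASE_URL / SMOKE_USER_PASSWORD environment variables are all placeholders for whatever your setup actually uses.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class LoginSmokeTests
{
    [Fact]
    [Trait("Category", "Smoke")]
    public async Task Login_WithKnownTestUser_ReturnsSuccess()
    {
        // The deployment pipeline points this at the environment it just deployed.
        var baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL")
                      ?? "https://staging.example.com";
        using var client = new HttpClient { BaseAddress = new Uri(baseUrl) };

        // Hypothetical login endpoint and seeded test account.
        var response = await client.PostAsJsonAsync("/api/login", new
        {
            username = "smoke-test-user",
            password = Environment.GetEnvironmentVariable("SMOKE_USER_PASSWORD")
        });

        // If this mission-critical path is broken, the failing test fails the deploy.
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

In a pipeline, something like `dotnet test --filter "Category=Smoke"` runs just these tests right after the deploy step, and a non-zero exit code can trigger your halt or rollback.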
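For the second level, here’s a sketch of an integration test that pins an analyst’s success criterion - in this case an imaginary “orders of $100 or more get a 10% discount” rule - using ASP.NET Core’s in-process test host. The Microsoft.AspNetCore.Mvc.Testing package, a visible Program entry point, and the endpoint and response shape are all assumptions about your app.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Hypothetical response shape for the quote endpoint.
public record QuoteResponse(decimal Subtotal, decimal Total);

public class OrderDiscountTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public OrderDiscountTests(WebApplicationFactory<Program> factory)
        => _client = factory.CreateClient();

    [Fact]
    public async Task OrdersOfOneHundredDollarsOrMore_ReceiveTenPercentDiscount()
    {
        // Hypothetical endpoint behind which the pricing rule lives.
        var response = await _client.PostAsJsonAsync("/api/orders/quote", new { subtotal = 100m });
        response.EnsureSuccessStatusCode();

        var quote = await response.Content.ReadFromJsonAsync<QuoteResponse>();

        // The analyst's success criterion, verified on every build: 10% off at $100.
        Assert.Equal(90m, quote!.Total);
    }
}
```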
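And for the third level, a unit test per branch of the conditional. The ShippingCalculator here is made up, but the shape is the point: two branches, two tests.

```csharp
using Xunit;

// A hypothetical calculator with a single business conditional.
public class ShippingCalculator
{
    public decimal FeeFor(decimal orderTotal)
        => orderTotal >= 50m ? 0m : 7.99m; // free shipping at $50 and up
}

public class ShippingCalculatorTests
{
    [Fact]
    public void OrdersBelowFiftyDollars_PayTheFlatShippingFee()
    {
        var calculator = new ShippingCalculator();

        Assert.Equal(7.99m, calculator.FeeFor(49.99m));
    }

    [Fact]
    public void OrdersOfFiftyDollarsOrMore_ShipFree()
    {
        var calculator = new ShippingCalculator();

        Assert.Equal(0m, calculator.FeeFor(50m));
    }
}
```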
Integration Tests Don’t Have to be End-To-End Tests
Where I think a lot of teams get caught up (and eventually abandon integration testing) is end-to-end testing, where reliance on flaky external or shared systems causes tests to fail unexpectedly, costing more maintenance hours than they’re worth. Getting frustrated with false positives and falling into patterns where the test results don’t matter, teams quickly fall out of the habit of integration testing, and test coverage (particularly of business-critical functionality) falls by the wayside.
I’m here to tell you that you don’t have to have end-to-end tests! While a full UI-to-storage test is nice to have, it likely relies on a lot of flaky components (browsers behaving properly, shared environments, volatile data) that will cause your test to be less about the functionality and more about making sure all the planets align. I argue that it’s just as valuable, if not more so, to test tightly integrated components and leave unit tests to cover the rest.
If you have unit tests covering your UI code (which you should!), it’s likely safe to leave the UI out of your integration tests - instead, you can call a web service directly, or even exercise the code like a unit test without mocks. Or you can leave the mocks in for your persistence layer or 3rd-party service integrations, allowing you to write more stable integration tests against most of your application while working with mocks that represent the data you expect to send to or retrieve from those storage layers.
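As a rough sketch of that second approach (C# with xUnit and Moq; the OrderService, OrderValidator, and IOrderRepository here are made-up stand-ins for your own code), the business rule and the service wiring below are real, and only the storage boundary is mocked - so the test never touches a shared database.

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

// Hypothetical domain types - stand-ins for your own code.
public record Order(string CustomerId, decimal Total);

public interface IOrderRepository
{
    Task<bool> SaveAsync(Order order);
}

public class OrderValidator
{
    public bool IsValid(Order order)
        => order.Total > 0m && !string.IsNullOrWhiteSpace(order.CustomerId);
}

public class OrderService
{
    private readonly OrderValidator _validator;
    private readonly IOrderRepository _repository;

    public OrderService(OrderValidator validator, IOrderRepository repository)
    {
        _validator = validator;
        _repository = repository;
    }

    public async Task<bool> PlaceOrderAsync(Order order)
    {
        if (!_validator.IsValid(order)) return false;
        return await _repository.SaveAsync(order);
    }
}

public class OrderServiceIntegrationTests
{
    [Fact]
    public async Task ValidOrder_PassesValidationAndIsPersisted()
    {
        // Real validator + real service wiring; only the storage boundary is mocked.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.SaveAsync(It.IsAny<Order>())).ReturnsAsync(true);

        var service = new OrderService(new OrderValidator(), repository.Object);

        var accepted = await service.PlaceOrderAsync(new Order("cust-123", 42m));

        Assert.True(accepted);
        // The mock stands in for the database and records exactly what we sent it.
        repository.Verify(r => r.SaveAsync(It.Is<Order>(o => o.Total == 42m)), Times.Once());
    }
}
```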
Is it perfect? No! Is it better than a flaky test that fails every 3rd run, or no test at all? Heck yes!