Software Development Team Standards - Testing and Code Quality

Post #3 in my series on software development team standards focuses on testing and code quality, two hot-button topics for most implementation teams. Although both can cause some disagreements and take a bit of effort to get right, I think they are critical aspects of high-performing teams and should be addressed as early as possible so that developers can focus on what they do best - implementing features.

I write all this with a huge caveat - just because your team doesn’t score highly in one of these categories doesn’t mean your team is bad! If anything, this might give you some ideas and goals to strive for as you address the pain points and issues you may be experiencing as a group.

This ended up being a huge post, so I’m breaking it up into a few separate articles.

Testing

Aah, testing. I don’t think I’ve worked at a single company where this wasn’t an issue in some form. The problems range from a complete lack of automated testing to “these tests always break so we ignore them”; there doesn’t seem to be one agreed-upon approach to test automation. I’m likely going to spin off a separate post on this specific topic, but here are a few notes.

There are three reasons to test code:

  1. Completion. This is testing to ensure the feature you’re adding has been implemented properly or the bug you’re fixing has actually been addressed. It often involves manual testing while the feature is being developed and verified, but it should also include automated tests.
  2. Integration. Here we’re testing to ensure that when we combine new features and fixes, we haven’t broken any interdependent components or flows. While I have seen a number of teams do this manually, it should be automated as much as possible, if not completely. On larger projects this is often implemented through nightly regression runs.
  3. Release. This is the late-stage testing we do right before shipping code to production. It’s generally a manual spot-check of the areas of the software where we’ve added or fixed functionality, but it should also lean on the results of the earlier integration tests.

There are two types of automated tests:

  • Unit Tests. These focus on a specific section of code, and should be present whenever business logic is implemented (a minimal sketch follows just after this list). You don’t need to test boilerplate or configuration code; you can rest assured that Microsoft has verified that ASP.NET Core controllers work fine, that C# assignment operators do what they’re told, and that Entity Framework can save to a database properly. You should be spending your time on the validations, view logic, and business logic.
  • Integration Tests. These focus on the connections between two or more services in code, verifying that they interact as expected. They can range from more complicated unit tests (e.g. the domain layer saving something to the database) to full end-to-end tests (e.g. opening a specific web URL shows data from a 3rd-party web service).
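
To make the unit-test side concrete, here’s a minimal sketch using xUnit. The `Invoice` record and `InvoiceValidator` class are hypothetical stand-ins for whatever business rules your project actually contains; the point is that the tests exercise your logic, not the framework underneath it.

```csharp
using Xunit;

// Hypothetical business rule: an invoice needs a positive total and at
// least one line item before it can be submitted.
public record Invoice(decimal Total, int LineItemCount);

public class InvoiceValidator
{
    public bool IsValid(Invoice invoice) =>
        invoice.Total > 0 && invoice.LineItemCount > 0;
}

public class InvoiceValidatorTests
{
    private readonly InvoiceValidator _validator = new();

    [Fact]
    public void Valid_invoice_passes_validation()
    {
        var invoice = new Invoice(Total: 100.00m, LineItemCount: 2);

        Assert.True(_validator.IsValid(invoice));
    }

    [Theory]
    [InlineData(0, 1)]   // zero total
    [InlineData(100, 0)] // no line items
    public void Invalid_invoice_fails_validation(int total, int lineItems)
    {
        var invoice = new Invoice(total, lineItems);

        Assert.False(_validator.IsValid(invoice));
    }
}
```

Tests like these run in milliseconds and cost almost nothing to maintain; the [Theory]/[InlineData] pattern lets you cover several invalid cases without duplicating the test body.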

I think where a lot of teams get hung up is that integration tests can be hard. I’ve seen teams throw in the towel on all integration testing because they couldn’t get end-to-end testing working in a consistent, repeatable way. It’s important to note that you don’t have to test everything end to end if it’s not feasible, and an integration test that covers 70% of your business flow is still much better than not covering the flow at all.
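To sketch that idea in an ASP.NET Core context: WebApplicationFactory (from the Microsoft.AspNetCore.Mvc.Testing package) can spin up your real request pipeline in memory, and you swap out only the pieces you can’t control. The `IExchangeRateClient` interface, `FakeExchangeRateClient`, and the `/api/quotes` endpoint below are hypothetical, and `Program` is assumed to be your web project’s entry point; the pattern - exercise everything you own, fake only the flaky third-party edge - is the part that carries over.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

// Hypothetical abstraction over a third-party currency service that is
// too slow or flaky to hit from an automated test.
public interface IExchangeRateClient
{
    Task<decimal> GetRateAsync(string fromCurrency, string toCurrency);
}

public class FakeExchangeRateClient : IExchangeRateClient
{
    public Task<decimal> GetRateAsync(string from, string to) => Task.FromResult(1.25m);
}

public class QuoteFlowTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public QuoteFlowTests(WebApplicationFactory<Program> factory)
    {
        // Run the real pipeline (routing, controllers, domain logic, persistence),
        // but register a fake for the one external dependency we can't rely on.
        _client = factory.WithWebHostBuilder(builder =>
                builder.ConfigureTestServices(services =>
                    services.AddSingleton<IExchangeRateClient>(new FakeExchangeRateClient())))
            .CreateClient();
    }

    [Fact]
    public async Task Requesting_a_quote_returns_success()
    {
        var response = await _client.GetAsync("/api/quotes?amount=100&from=USD&to=CAD");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

A test like this doesn’t cover the real third-party call, but it does cover the routing, validation, domain logic and persistence around it - that’s the 70% of the flow that would otherwise go untested.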

As a supporting point - no one likes manual testing. Developers hate it, business analysts hate it, heck, even quality assurance analysts hate it. It is non-repeatable, throwaway work that is invalidated as soon as someone pushes a commit. Do as little of it as possible.

Code Quality

I think almost every team I’ve worked on has been fine on code quality itself. There seems to be an unwritten standard most code follows, especially within a known project structure like a single-page application or a web API framework. Where things start to differ is in the little nit-picky details like tabs vs. spaces or brackets on new lines. All that said, there are a few metrics you can keep an eye on to get a bigger picture of the team’s code quality.

  • Reverts. I think in 20 years I have rarely, if ever, seen a full revert of a merged feature, but it’s certainly possible. More often than not I’ve seen teams spin off bugfix and hotfix branches to address any issues rather than perform full merge reverts. Reverts speak to failures in the testing, definition of done, and/or requirements gathering processes, and they should be avoided as much as possible.
  • Code churn. Using tools like codebase visualizations, you can see how much a feature’s code keeps being modified after it was merged into the main branch. A lot of post-merge churn indicates that significant changes had to be made after the work was initially considered complete, which points to a failure of requirements or acceptance criteria.
  • Bug rates, both pre- and post-merge. Another important reason to track bug tickets and their causes separately is to determine where bugs are entering the software. If they’re the result of a specific feature or features, you can spend more time digging into the cause (missed requirements, failed implementation, poor test coverage, etc.) - this should be even easier to do if you’ve categorized your bugs.
  • Linted coding standards. There are few things more annoying than having to both identify and fix linting-related comments in PRs. I doubt anyone enjoys writing out the pedantic comments, and as someone trying to push code through, it’s annoying to have to switch branches just because there was a missing space. If your team cares about those particular things at that level (and you should, since a shared style reduces cognitive overhead as well as potential merge conflicts), you should be using a linter - a tool that integrates into the build flow, ideally on both the developer machine and your build server, to catch and flag those issues so that other humans don’t have to. A sample configuration is sketched at the end of this post.
  • Don’t implement DIY frameworks. I’m a firm believer that as a software developer you should be spending your time implementing business value. A billing software provider can’t charge its customers more for the custom database persistence layer it wrote, nor can a point-of-sale system list “custom event messaging framework” as a feature on its product page. These tools already exist, and they’re better maintained and tested than anything you could hope to build on your own; in most cases it’s foolish to think your implementation team could produce an ORM better than what the Entity Framework team at Microsoft already has. Don’t write your own framework code.
  • Up-to-date frameworks. Where you rely on frameworks and other tools, make sure they’re reasonably up to date. More often than not, especially in today’s cybersecurity landscape, the updates framework providers push out are about security fixes and stability more than anything else, and those matter for keeping your own business secure. You clearly can’t refresh your entire codebase every week, but there should be a purposeful plan in place to address upgrades as they appear.
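
On the linting bullet above, here’s a rough sketch of what “let the tooling be the pedant” can look like on a .NET project. The specific rules are only examples; the .editorconfig settings and the EnforceCodeStyleInBuild / TreatWarningsAsErrors MSBuild properties are standard .NET knobs, but the choices themselves are up to your team.

```ini
# .editorconfig - shared, version-controlled style rules the IDE and the build both read
root = true

[*.cs]
indent_style = space
indent_size = 4
csharp_new_line_before_open_brace = all
dotnet_sort_system_directives_first = true

# Escalate a representative formatting rule so the build, not a reviewer, flags it
dotnet_diagnostic.IDE0055.severity = error
```

```xml
<!-- In the .csproj: fail the build (locally and on the build server) on style violations -->
<PropertyGroup>
  <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```

With something like this in place, a missing space breaks the local build the same way it breaks the CI build, and nobody has to leave that PR comment.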