I'm currently working through a task where I have to write unit tests. In theory, these unit tests should exercise the entire class in isolation from any other classes or functions, and verify the business logic of how its functions/methods are called.
But where do I draw the line on how much testing I need? I'm currently struggling with this PR because there are a lot of lines of code (bad, I know), but I do need to find out what a complete unit test looks like.
If you ask me, complete unit testing means the tests traverse all code paths and include both positive and negative tests to show how the code should be used.
If there's some sort of guidance online, that would be helpful. Otherwise I'm just mimicking unit tests that have already been approved, based on their style and usage.
You can think of the tests as protecting you and your teammates when one of you refactors the code a few years down the line, after much of the context behind it has been forgotten. A test that fails at that point has caught something that saves you all from an incident in production.
From that angle, can you think about the cases where if the class gets refactored, as long as the tests pass, the core functionality of the class will still be intact?
If you are writing unit tests for a class, you can create unit tests for every public method.
I like your thinking about having both positive and negative test cases, but try to avoid redundancy in your test cases.
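To make that concrete, here is a minimal sketch using Python's built-in unittest. The Discounter class and its rules are hypothetical, just to show one positive case, one boundary case, and one negative case without redundant variations:

```python
import unittest

# Hypothetical class under test (illustrative, not from this thread).
class Discounter:
    """Applies a percentage discount to a price."""
    def apply(self, price, percent):
        if price < 0:
            raise ValueError("price must be non-negative")
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

class DiscounterTest(unittest.TestCase):
    def test_applies_discount(self):            # positive case
        self.assertEqual(Discounter().apply(200, 25), 150)

    def test_zero_percent_is_identity(self):    # boundary case
        self.assertEqual(Discounter().apply(80, 0), 80)

    def test_rejects_negative_price(self):      # negative case
        with self.assertRaises(ValueError):
            Discounter().apply(-1, 10)

# Run the suite programmatically so the script exits normally.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscounterTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that there is exactly one test per distinct behavior; adding, say, five more discount percentages would be the kind of redundancy to avoid.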
If you ask me, complete unit testing means the tests traverse all code paths.
Yes, a complete test suite will traverse most code paths, but setting that as the goal can be dangerous: it may lead you to create tests that are tightly coupled to the implementation, making them harder to maintain. Instead of using the number of code paths as your main metric, test the behavior of each function.
But where do I draw the line on how much testing I need? I'm currently struggling with this PR because there are a lot of lines of code (bad, I know), but I do need to find out what a complete unit test looks like.
A class is a collection of functions that operate on some data. Each function has a specification: what values the client (whoever is using the class) provides and what values the function returns. Your goal is to validate the behavior of these functions using the smallest set of test cases. MIT's software engineering course has a pretty good example using abs() and max() (the page is pretty comprehensive):
https://web.mit.edu/6.102/www/sp23/classes/02-testing/
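Following that spec-based approach, the partitioning idea can be sketched directly against Python's built-in abs() and max(). The partitions below are my own summary of the technique, not quoted from the linked page:

```python
# Partition the input space by the spec and pick one representative
# value per partition, plus boundary values.

# abs(x): partitions are x < 0, x = 0, x > 0
assert abs(-5) == 5    # negative partition
assert abs(0) == 0     # boundary value
assert abs(7) == 7     # positive partition

# max(a, b): partitions are a < b, a == b, a > b
assert max(1, 2) == 2  # a < b
assert max(3, 3) == 3  # a == b
assert max(9, 4) == 9  # a > b
```

Six inputs cover the interesting regions of both specifications; more cases from the same partitions would add no new information.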
Another concept is the idea of class invariants: a set of rules that must hold before and after a client calls any function of the class. For example, a binary search tree has several invariants: every node in the left subtree is smaller than the parent, every node in the right subtree is larger than the parent, and every node has at most two children. You should write tests that also check these invariants.
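Here is a minimal sketch of invariant checking against a toy binary search tree in Python. Node, insert, and is_valid_bst are illustrative names I made up for this example, not from the thread:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert a key into the BST, returning the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def is_valid_bst(node, lo=float("-inf"), hi=float("inf")):
    """Invariant check: every key in the left subtree is smaller than
    the parent, every key in the right subtree is larger."""
    if node is None:
        return True
    if not (lo < node.key < hi):
        return False
    return (is_valid_bst(node.left, lo, node.key)
            and is_valid_bst(node.right, node.key, hi))

# Check the invariant after each mutation, not just on the final tree.
root = None
for key in [5, 2, 8, 1, 9]:
    root = insert(root, key)
    assert is_valid_bst(root)
```

The useful habit is the last loop: asserting the invariant after every operation catches a bug at the mutation that introduced it, rather than in some later test that fails mysteriously.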
So if I were you, I would do the following steps:
1. List the public methods of the class and write down each one's specification: what the client provides and what the method promises to return.
2. For each specification, pick the smallest set of positive and negative cases that covers the interesting partitions of the input space.
3. Add tests that check the class invariants before and after each operation.
As for resources, I think the test driven development community has developed a good sense of what makes a good test suite. You don’t need to do TDD (having practiced it myself, I found it tanks your dev velocity if you try to be perfect), but it helps to understand what makes a good test case and how to avoid common pitfalls.
I’d recommend reading Bob Martin’s Clean Code and Sandi Metz’s 99 Bottles of OOP. Kent Beck’s Test Driven Development by Example is also good.
Saving this post. One of the best answers. Thank you sir
I'm going to take a different approach here and talk more about the "meta" part of writing a good unit test (and just writing good tests in general). In short, a good unit test is one that matters. This sounds obvious, but I have seen countless engineering teams mess this up as they chase arbitrary test coverage goals.
So what does it mean to matter? Well, with every area that you could write tests for, ask yourself these 3 questions:
I led huge automated testing efforts at Instagram (both unit and integration tests spanning features that power billions in revenue), and I have found that the key to becoming a testing master is to focus on the question "Do I write a test or no test at all?" rather than "How do I write a good test instead of a bad test?" (though of course, if you are writing tests you should always try to write good ones, and the other advice in this thread is awesome for that).
Going even deeper, the overall orchestration around tests is very important, which I talk about in-depth here: "What do mobile testing strategies look like at top tech companies?"