“The only way to know that a test actually works is when it fails when you make a code change.” — Simon de Lang
There have been some really thoughtful questions raised, both while writing tests and while reading up on best practices and on what to make of test metrics like code coverage.
which parts of your code are TOTALLY UNTESTED?
Because we are currently in the era of React, the test runner and code coverage tool I constantly interact with are Jest and Istanbul (which Jest uses under the hood):
Jest - test runner, with helpers to describe your tests
Istanbul - code coverage calculator
it might be a good idea to try writing my own coverage tool, just to understand what those numbers actually measure
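As a starting point, here is a toy sketch of what a coverage tool does under the hood: instrument the code with hit counters, run the tests, then report which branches were never hit. All the names here (`hits`, `cover`, the counter ids) are made up for illustration; real Istanbul instruments the AST automatically rather than by hand.

```javascript
// Toy coverage: every branch bumps a named counter when it runs.
const hits = {};
const cover = (id) => { hits[id] = (hits[id] || 0) + 1; };

// A hand-"instrumented" function.
function abs(x) {
  cover("abs:entry");
  if (x < 0) {
    cover("abs:negative");
    return -x;
  }
  cover("abs:non-negative");
  return x;
}

// A test suite that only ever passes non-negative numbers...
abs(5);
abs(42);

// ...leaves the negative branch TOTALLY UNTESTED:
const untested = ["abs:entry", "abs:negative", "abs:non-negative"]
  .filter((id) => !hits[id]);
console.log(untested); // → [ 'abs:negative' ]
```

The reported coverage number is then just the ratio of counters that are non-zero, which is exactly why high coverage says the code *ran*, not that it was *checked*.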
HOW GOOD are the tests you have written at catching bugs?
This cool idea (probably quite old, because it makes so much sense to do) of mutation testing (e.g. Jumble, or any of the other tools) is about mutating your code to see if your tests still pass. If they do pass, the mutant "survived", which means your tests have been kinda shitty (because a test that passes ALL THE TIME doesn't catch anything)
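A minimal sketch of the idea (my own toy, not how Jumble or any real tool works): take the source of the function under test, apply a couple of classic mutation operators, and re-run the test against each mutant. A mutant the test still passes has survived, and that survival is the signal that the test is weak.

```javascript
// The code under test, kept as a string so we can mutate it.
const source = "(a, b) => a + b";

// Two classic mutation operators: swap an operator for a sibling.
const mutations = [
  { from: "+", to: "-" },
  { from: "+", to: "*" },
];

// The tests we are judging. A decent test:
const goodTest = (fn) => fn(2, 3) === 5;
// A lousy test that passes ALL THE TIME for these mutants:
const badTest = (fn) => fn(0, 0) === 0;

// Apply each mutation, build the mutant, and keep the ones that
// still pass the test -- those are the survivors.
function survivors(test) {
  return mutations.filter(({ from, to }) => {
    const mutant = eval(source.replace(from, to));
    return test(mutant); // true = the mutant survived
  });
}

console.log(survivors(goodTest).length); // → 0  (all mutants killed)
console.log(survivors(badTest).length);  // → 2  (both mutants survive)
```

The mutation score (killed mutants / total mutants) is the metric real tools report, and it measures exactly the thing coverage cannot: whether the assertions would notice a behavioural change.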
there's a paper on mutation testing with JavaScript
so for now the idea lives in this blogpost, with hopes that it will germinate and turn into something bigger down the line, if not by me, then by a reader of this blogpost.
what this idea wants to solve
in essence, think about framing the question of testing code as a generator-discriminator pair, like a Generative Adversarial Network: tests are generated and pitted against the mutator, with the KPI being generating tests that kill every mutant the mutation tester can produce.