One of the main value props of writing meaningful unit tests is that it makes the developer think differently about the code they are writing tests for, and that improves the quality of the code itself.
* It promotes actually looking at the code before considering it done
* It promotes refactoring
* It helps to prevent breaking changes for stuff that wasn't supposed to change
* The tests don't actually test functionality, edge cases, etc., just that things don't crash on the happy path.
* Any change to an implementation breaks a test needlessly, because the test tests specifics of the implementation, not correctness. That actually makes refactoring harder: the test says you broke something, you probably didn't, and now the refactor costs double because you also have to write a new test (see the sketch after this list).
* In codebases for dynamic languages, most of what these tests end up catching is stuff a compiler would catch in a statically typed language.
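To make the last two bullets concrete, here is a minimal pytest-style sketch. The `Cart` class, its `_sum_items` helper, and both tests are hypothetical and invented purely for illustration, not taken from any real codebase:

```python
from unittest.mock import patch

# Hypothetical code under test.
class Cart:
    def _sum_items(self, items):                  # internal helper, an implementation detail
        return sum(price for _, price in items)

    def total(self, items):
        return round(self._sum_items(items), 2)

# Implementation-coupled, happy-path test: it pins the internal helper and
# only checks the mocked value round-trips. Renaming _sum_items during a
# refactor breaks this test even though total() still returns correct numbers.
def test_total_uses_sum_items():
    cart = Cart()
    with patch.object(Cart, "_sum_items", return_value=10.0) as helper:
        assert cart.total([]) == 10.0
        helper.assert_called_once()

# Behaviour-focused alternative: asserts on observable output, including an
# edge case, and survives internal refactors.
def test_total_behaviour():
    cart = Cart()
    assert cart.total([("apple", 1.10), ("pear", 2.05)]) == 3.15
    assert cart.total([]) == 0        # empty cart edge case
```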
> The tests don't actually test functionality, edge cases, etc., just that things don't crash on the happy path.
This is low coverage.
> Any change to an implementation breaks a test needlessly, because the test tests specifics of the implementation, not correctness.
This is bad design.
> In codebases for dynamic languages, most of what these tests end up catching is stuff a compiler would catch in a statically typed language.
So they are not useless.
No, as a sibling comment to mine shows, it's actually easy to hit 100% coverage with bad tests, since they don't challenge the implementation to handle edge cases.
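For example (a hypothetical sketch, not from the thread): a single happy-path test can execute every line of a small function, so a line-coverage tool reports 100%, while the inputs that actually matter are never exercised.

```python
def clamp(value, low, high):
    # Hypothetical function under test: clamp value into [low, high].
    if value < low:
        return low
    if value > high:
        return high
    return value

# These three calls execute every line of clamp(), so a line-coverage tool
# reports 100%, yet no edge case is challenged: boundary values, inverted
# bounds (low > high), and non-numeric input are all untested.
def test_clamp_happy_path():
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10
```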
AFAIK, «high coverage» can mean different things to different people. For me it means «high quality»; for others it means «high percentage», e.g. «full coverage» or «80% coverage», which is easy to turn into an OKR.
Which is what makes this whole concept of code coverage such toxic nonsense...
Not to argue against writing 'quality' tests, but high 'coverage' actually decreases quality, objectively speaking, since erroneous coverage of code serves negative purposes: it obscures the testing that matters and enshrines bugs within the test suite (see the sketch below).
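A hypothetical sketch of what 'enshrining a bug' can look like (the function and test are invented for illustration): a test generated from the current, buggy behaviour asserts the wrong answer, so the bug is now protected and the eventual fix breaks the suite.

```python
def days_in_february(year):
    # Hypothetical buggy implementation: ignores the 100/400-year rules.
    return 29 if year % 4 == 0 else 28

# A test derived from current behaviour "covers" the function but asserts the
# wrong answer for 1900, so the defect is locked in: fixing the leap-year
# logic makes this test fail.
def test_days_in_february():
    assert days_in_february(2024) == 29
    assert days_in_february(2023) == 28
    assert days_in_february(1900) == 29   # wrong, but matches the code as written
```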
I would make the case here that CodePilot and all such 'AI' tools should be banned from production code, at least until they solve the above problem, since as it stands they will shovel out piles of useless or, worse, incorrect tests.
It is also important to remember what AI does, i.e. it produces networks that create results based on the desired metrics; if the metrics are wrong or incomplete, you produce and propagate bad design.
So yes, people use it now as a learning tool (fine) and it will get 'better' (sure), but as a tool, when it gets better, it will constrain more, not less, along whatever lines have been deemed better, and it will become harder, not easier, to adjust.