zlacker

[return to "GitHub Copilot available for JetBrains and Neovim"]
1. Spinna+ad[view] [source] 2021-10-27 18:42:37
>>orph+(OP)
How well can copilot write unit tests? This seems like an area where it could be really useful and actually improve software development practices.
2. manque+Qe[view] [source] 2021-10-27 18:49:51
>>Spinna+ad
Writing tests purely for the sake of coverage, which is what a lot of orgs do, is already practically useless. Copilot could probably generate such tests, but since they don't materially impact quality today, automating them wouldn't make much difference.

One of the main value props of writing meaningful unit tests is that it forces the developer to think differently about the code under test, and that improves the quality of the code's composition.

3. Graffu+Vl[view] [source] 2021-10-27 19:23:53
>>manque+Qe
Why is that useless? Codebases I have worked on that had high code coverage requirements had very few bugs.

* It promotes actually looking at the code before considering it done

* It promotes refactoring

* It helps to prevent breaking changes for stuff that wasn't supposed to change

4. matsem+Sm[view] [source] 2021-10-27 19:28:06
>>Graffu+Vl
My experience is the opposite in codebases where high coverage has been a priority:

* The tests don't actually test functionality, edge cases, etc., just that things don't crash on the happy path.

* Any change to the implementation needlessly breaks a test, because the test checks specifics of the implementation, not correctness. That actually makes refactoring harder: the test says you broke something, you probably didn't, and now you have the extra work of writing a new test.

* In codebases for dynamic languages, most of what these tests end up catching is stuff a compiler would catch in a statically typed language.

5. drran+8z[view] [source] 2021-10-27 20:28:07
>>matsem+Sm
> The tests don't actually test functionality, edge cases, etc., just that things don't crash on the happy path.

This is low coverage.

> Any change to the implementation needlessly breaks a test, because the test checks specifics of the implementation, not correctness.

This is bad design.

> In codebases for dynamic languages, most of what these tests end up catching is stuff a compiler would catch in a statically typed language.

So they are not useless.

6. matsem+Oz[view] [source] 2021-10-27 20:32:17
>>drran+8z
> This is low coverage.

No; as a sibling comment to mine shows, it's actually easy to hit 100% coverage with bad tests, since they don't challenge the implementation to handle edge cases.
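A minimal sketch of this claim (hypothetical function, not from the thread): a single happy-path test executes every line, so line coverage reports 100%, yet an obvious edge case still crashes.

```python
def average(values):
    # Bug: raises ZeroDivisionError on an empty list.
    return sum(values) / len(values)

def test_average_happy_path():
    # This one assertion executes every line of average(),
    # so a coverage tool reports 100% line coverage...
    assert average([2, 4, 6]) == 4

# ...while the untested edge case still fails:
#   average([])  raises ZeroDivisionError
```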

7. Number+QV[view] [source] 2021-10-27 23:01:00
>>matsem+Oz
I think maybe you two are using different definitions of coverage: textual (line) coverage vs. logic (branch) coverage.
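The distinction can be shown with a toy example (invented names, not from the thread): one test executes every line, so line coverage is 100%, but the branch where the `if` is not taken never runs, which branch coverage would flag.

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9  # 10% member discount
    return price

def test_member_only():
    assert apply_discount(100, True) == 90.0
    # Every line above has now executed: line coverage reports 100%.
    # But the is_member=False path (the branch that skips the discount)
    # was never exercised; branch coverage would report it as missed.
```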