zlacker

[parent] [thread] 4 comments
1. namelo+(OP)[view] [source] 2021-10-28 06:30:50
By the same logic you've presented, what's the value proposition of the plain-old auto-completion? What's the value proposition of a slick editor? All you need is the built-in notepad and a debugger.

Speaking from my personal experience, I usually write code in TDD style: I write tests for the properties I want the software to have up front, then make them pass with a minimal amount of effort. When I see a need for refactoring, I refactor. And I repeat this process until it's done.

The three parts take roughly equal portions of time. When I'm writing tests, I'm thinking about the functionality and value of the software. When I'm refactoring, I'm thinking about the design. When I'm writing the initial implementation, I just want it to Just Work™, and I find Copilot is great for that part: why not delegate the boring bit to the machine?
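
To make that loop concrete, here's a minimal sketch in JUnit-style Java (the PriceCalculator and its discount rule are invented for illustration): the test states the property first, the implementation does the least needed to make it pass, and refactoring comes once it's green.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import org.junit.jupiter.api.Test;

  // Step 1: state the desired property as a test, before any implementation exists.
  class PriceCalculatorTest {
      @Test
      void ordersOfOneHundredOrMoreGetTenPercentOff() {
          assertEquals(90.0, new PriceCalculator().total(100.0), 0.001);
      }
  }

  // Step 2: make it pass with minimal effort (the part worth delegating),
  // then refactor while the test stays green.
  class PriceCalculator {
      double total(double amount) {
          return amount >= 100.0 ? amount * 0.9 : amount;
      }
  }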

replies(1): >>Kronis+Ru
2. Kronis+Ru[view] [source] 2021-10-28 11:53:38
>>namelo+(OP)
You know, this is perhaps tangential at best to the point you're making, but i still couldn't help but notice:

> The three parts take roughly the same portion of time, and when I'm writing tests

that bit, and have some strong feelings about it. At my current day job, writing tests (if it were even done for all the code) would easily take anywhere between 50% and 75% of the total development time.

I wish things were easy enough for writing test code not to be a total slog, but sadly there are too many factors in place:

  - what should the test class be annotated with and which bits of the Spring context (Java) will get started with it
  - i can't test the DB because the tests don't have a local one with 100% automated migrations, nor an in-memory one because of the need to use Oracle, so i need to prevent the DB from ever being called
  - that said, the logic that i need to test involves at least 5 to 10 different service calls, which then use another 5 to 20 DB mappers (MyBatis) and possibly dozens of different DB calls
  - and when i finally figure out what i want to test, the logic for mocking will definitely fail the first time due to Mockito idiosyncrasies
  - after that's been resolved, i'll probably need to stub out a whole bunch of fake DB calls that return deeply nested data structures
  - of course, i still need all of this to make sense, since the DB is full of EAV and OTLT patterns (https://tonyandrews.blogspot.com/2004/10/otlt-and-eav-two-big-design-mistakes.html) as opposed to proper foreign keys (instead you end up with something like target_table and target_table_row_id, except named way worse and not containing a table name but some enum that's stored in the app, so you can't just figure out how everything works without looking through both)
  - and once i've finally mocked all of the service calls, DB calls and data initialization, there's also validation logic that does its own service calls which may or may not be the same, thus doubling the work
  - of course, the validators are initialized based on reflection and target types: an EntityValidator gets injected but is actually one of ~100 supported subclasses, which may or may not be the one you expect due to years of cruft, and you can't just Ctrl+click to open the definition, since that opens the superclass, not the subclass
  - and once all of that works, you have to hope that the 95% of the test code that only vaguely corresponds to what the application would actually be doing won't fail at any number of points, just so you can do one assertion

I'm not quite sure how things get that bad or how people architect systems to be coupled like that in the first place, but at the conclusion of my quasi-rant i'd like to suggest that many of the systems out there aren't easily testable, or testable at all - a rough sketch of what one such test ends up looking like is below.
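
Heavily simplified (all names are invented, and the real thing has 5-10 services, far deeper nesting, and many more stubs), a single-assertion test in that style ends up looking roughly like this:

  import static org.mockito.Mockito.verify;
  import static org.mockito.Mockito.when;

  import org.junit.jupiter.api.Test;
  import org.junit.jupiter.api.extension.ExtendWith;
  import org.mockito.InjectMocks;
  import org.mockito.Mock;
  import org.mockito.junit.jupiter.MockitoExtension;

  // Stand-ins for the real services and mappers; in reality there are 5-10 of
  // the former and 5-20 of the latter involved in a single test.
  interface CustomerService { Customer findCustomer(long id); }
  interface OrderMapper { Order selectOrderWithLines(long id); void updateStatus(long id, String status); }
  record Customer(long id, String name) {}
  record Order(long id, long customerId) {}

  class OrderProcessingService {
      private final CustomerService customers;
      private final OrderMapper orders;
      OrderProcessingService(CustomerService customers, OrderMapper orders) {
          this.customers = customers;
          this.orders = orders;
      }
      void process(long orderId) {
          Order order = orders.selectOrderWithLines(orderId);
          customers.findCustomer(order.customerId());   // validation etc. omitted
          orders.updateStatus(orderId, "PROCESSED");
      }
  }

  @ExtendWith(MockitoExtension.class)
  class OrderProcessingServiceTest {
      @Mock CustomerService customerService;
      @Mock OrderMapper orderMapper;
      @InjectMocks OrderProcessingService service;

      @Test
      void processesOrder() {
          // every collaborator the code would touch has to be stubbed by hand
          when(orderMapper.selectOrderWithLines(42L)).thenReturn(new Order(42L, 7L));
          when(customerService.findCustomer(7L)).thenReturn(new Customer(7L, "someone"));

          service.process(42L);

          verify(orderMapper).updateStatus(42L, "PROCESSED");   // the single assertion
      }
  }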

That said, it's nice that at least your workflow works out like that!

replies(2): >>disgru+5y >>namelo+hM
3. disgru+5y[view] [source] [discussion] 2021-10-28 12:18:19
>>Kronis+Ru
Have you read Working Effectively with Legacy Code?

It's transformative in situations like this; it has a bunch of recipes for solving exactly these kinds of problems.

While I don't use Java or C++, this book has probably been the most useful to me in working with larger bodies of code.
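
To give a flavour of the recipes (my own paraphrase with invented names, not the book's example): the "subclass and override" seam lets you cut off a hard-wired DB call without restructuring the class first.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import java.math.BigDecimal;
  import java.util.List;
  import org.junit.jupiter.api.Test;

  // Legacy class with the DB call buried inside; the only change made to it
  // is widening that method to protected so a test can override it.
  class InvoiceService {
      BigDecimal totalFor(long customerId) {
          return fetchInvoiceAmounts(customerId).stream()
                  .reduce(BigDecimal.ZERO, BigDecimal::add);
      }

      protected List<BigDecimal> fetchInvoiceAmounts(long customerId) {
          throw new UnsupportedOperationException("the real Oracle call lives here");
      }
  }

  class InvoiceServiceTest {
      @Test
      void sumsInvoiceAmounts() {
          // the seam: replace just the DB call, keep the logic under test
          InvoiceService service = new InvoiceService() {
              @Override
              protected List<BigDecimal> fetchInvoiceAmounts(long customerId) {
                  return List.of(new BigDecimal("10"), new BigDecimal("5"));
              }
          };
          assertEquals(new BigDecimal("15"), service.totalFor(123L));
      }
  }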

replies(1): >>Kronis+CI
4. Kronis+CI[view] [source] [discussion] 2021-10-28 13:24:04
>>disgru+5y
While the book is indeed good, it's pretty hard to do anything to improve that particular codebase because there are developers who are actively introducing more and more of the problematic patterns and practices even as i write this.

To them it isn't "legacy code" but just "code", and attempting to offer alternatives either earns you blank stares or concerns about anything new causing inconsistencies with the old (which is a valid concern, but it doesn't help when the supposedly consistent code is unusable).

To me it feels like it's also a social problem, not just a technical one, and if your hands are essentially tied in that situation and you fail to get the other devs and managers on board, then you'll simply have to either be very patient or let your legs do the work instead.

5. namelo+hM[view] [source] [discussion] 2021-10-28 13:43:49
>>Kronis+Ru
Thanks for sharing this. I feel for you, because I've been working on a similar project; it's slightly better, but still painful for me. I wrote a comment last month [0] that is more or less related to what you've said. Basically, you want to write fewer tests that really matter, while the test infrastructure should be fast and parallelizable.

Sadly that's easier said than done, since it's not an easy thing to fix in an existing system. We've spent quite some time improving things to ease the pain of writing tests; it has been getting better, but it will never reach the level it could have if we had been aware of the problem in the first place - there are tens of thousands of tests and we cannot rewrite them all.

I'm not too familiar with your tech stack, but there are two things you mentioned that are especially tricky to handle in tests: the DB and service calls.

For the DB, there are typically two ways to handle it: use a real DB, or mock it.

A real DB makes people more confident and means you don't need to mock as many things. The problem is that it can be slow and hard to parallelize, or worse, as in your case, there's no reproducible test environment at all. We had automated migrations, but the tests ran against the SQL Server on the same machine, so they couldn't run in parallel and took more than a day on a single machine. On CI there are tens of machines, but it still takes hours to finish. In the end, we generalized things a little and used SQLite so the tests could run in parallel. (Many people advise against this because it's different from production, but the tradeoff really saved us.) A more ideal approach is SQL sandboxing like Ecto's (written in Elixir). Another ideal approach is an in-memory implementation that stays close to the real DB; for example, the ORM Entity Framework has an in-memory provider, which is extremely handy because it's written in C# itself.
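
In Java terms, a minimal sketch of the same idea (assuming the xerial sqlite-jdbc driver is on the classpath; table and test names are made up): every connection to :memory: is its own private database, which is what makes the tests parallelizable.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;
  import org.junit.jupiter.api.Test;

  class CustomerQueryTest {
      @Test
      void countsCustomers() throws Exception {
          // each connection to :memory: is a brand-new private database,
          // so tests can run fully in parallel without sharing state
          try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
               Statement st = conn.createStatement()) {
              st.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)");   // or run the real migrations
              st.execute("INSERT INTO customer (id, name) VALUES (1, 'alice')");

              try (ResultSet rs = st.executeQuery("SELECT count(*) FROM customer")) {
                  rs.next();
                  assertEquals(1, rs.getInt(1));
              }
          }
      }
  }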

If there's no way to leverage a real DB, you have to mock it. One thing that might help is to use the Inversion of Control pattern for DB access; there are many doctrines like DDD repositories, Hexagonal, and Clean Architecture, but they're essentially similar on this point. That way you'll have a clean layer to mock, and you can hide patterns like EAV under those modules. As you lean on them enough, they will evolve, and helpers will emerge that simplify the mocking. Given your description, I'd say the best bet is to evolve in this direction if there's no hope of using real DBs, since you can tuck as much domain logic as possible into the "core" without touching any of the infrastructure, and the infrastructure tests themselves can stay simple and generic.
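
A rough sketch of that layering (all names invented; the real repository implementation would hold the MyBatis/EAV plumbing): the domain code only depends on a small interface, so a test only has to fake that.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import java.util.Optional;
  import org.junit.jupiter.api.Test;

  // The EAV/OTLT ugliness (target_table + target_table_row_id, app-side enums,
  // joins) stays hidden behind this interface in the infrastructure layer.
  interface AttachmentRepository {
      Optional<Attachment> findFor(String targetTable, long targetRowId);
  }

  record Attachment(long id, String fileName) {}

  class AttachmentService {
      private final AttachmentRepository repository;
      AttachmentService(AttachmentRepository repository) { this.repository = repository; }

      String describe(String targetTable, long rowId) {
          return repository.findFor(targetTable, rowId)
                  .map(Attachment::fileName)
                  .orElse("no attachment");
      }
  }

  class AttachmentServiceTest {
      @Test
      void describesMissingAttachment() {
          // the only thing to fake is the one-method repository, not the DB
          AttachmentRepository repo = (table, rowId) -> Optional.empty();
          assertEquals("no attachment", new AttachmentService(repo).describe("ORDER", 42L));
      }
  }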

For service calls, the obvious thing is to mock those calls. The not-so-obvious thing is to have well-defined service boundaries in the first place. I cannot stress this enough. When people fail to do this, they spend a lot of time mocking services while feeling they've tested nothing, because most things are mocked. Microservices have gotten too much hype over the years, but very few people pay enough attention to how service boundaries are defined. The ideal microservice is mostly independent and only occasionally calls others. DDD strategic design is a great tool for designing good service boundaries (while DDD tactical design is yet another hype, just like how people care more about Jira than real agility, making good things toxic). We are still struggling with this, because refactoring across microservices is substantially harder than refactoring code within a service, but we do try to avoid more mistakes by carefully designing the bounded contexts across the system.

With that said, when the service boundaries are well-defined and you have something like SQL sandboxing, testing becomes a breeze: most of the data you're testing against lives in the same service's DB, and there are very few service calls that need to be mocked.

[0] https://news.ycombinator.com/item?id=28642506#28679372
