Dependencies introduce unnecessary LOC and features, and more and more of that code is just written by LLMs themselves anyway. It is often easier to write the necessary functionality directly. Whether that is more maintainable is a bit YMMV at this stage, but I would wager it is improving.
For a decent number of relatively pedestrian tasks though, I can see it.
Maybe the smallest, most trivial packages (looking at you, is-even) are obsolete, but meaningful packages still abstract a lot of complexity that IMO isn't easier to one-shot with an LLM.
Don't get me wrong, I'm not a Luddite; I use Claude Code and Cursor, but the code either of them generates is nowhere near what I'd call good, maintainable code, and I end up having to rewrite/refactor a big portion before it's in any halfway decent state.
That said, with the most egregious packages like left-pad in the Node.js world, it was always a better idea to write your own instead of taking on the dependency.
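To illustrate how little is being abstracted there, here's a minimal sketch of a left-pad replacement (function name is mine; modern runtimes also ship String.prototype.padStart, which covers the common case):

```typescript
// Pad `input` on the left with `fill` until it reaches `width`.
// Returns the string unchanged if it is already wide enough.
function leftPad(input: string, width: number, fill: string = " "): string {
  if (input.length >= width || fill.length === 0) return input;
  const padLen = width - input.length;
  // Repeat the fill enough times, then trim to the exact length needed.
  return fill.repeat(Math.ceil(padLen / fill.length)).slice(0, padLen) + input;
}

leftPad("42", 5, "0"); // "00042"
```

And indeed, `"42".padStart(5, "0")` has been built into the language since ES2017, which is exactly why depending on a package for this never made sense.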
Scikit-learn
Pandas
Polars
This can be fixed in npm by publishing pre-compiled binaries, but that has its own problems.
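For anyone unfamiliar with the mechanics: the usual pattern (the one esbuild and similar projects use) is one platform-specific package per target, wired up through optionalDependencies, so npm installs only the entries whose declared os/cpu fields match the host. Package names here are hypothetical:

```json
{
  "name": "my-native-lib",
  "version": "1.0.0",
  "optionalDependencies": {
    "my-native-lib-linux-x64": "1.0.0",
    "my-native-lib-darwin-arm64": "1.0.0",
    "my-native-lib-win32-x64": "1.0.0"
  }
}
```

The trade-offs are real, though: opaque binaries are harder to audit than source, and you have to build and publish for every platform you want to support.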
Vanity metrics should not be used for engineering decisions.
Same goes for Rust. Sometimes one package implicitly pulls in another at a different version, and poring over cargo tree output to resolve the issue just doesn't seem very appealing.
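For what it's worth, cargo can surface those duplicates directly instead of making you eyeball the whole tree:

```sh
# Show only packages that appear at multiple versions,
# along with what pulls each one in (implies the inverted view).
cargo tree --duplicates
```

From there, cargo update -p <crate> can sometimes collapse them onto a single version, when the semver ranges allow it.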
I don't think it really affects the point discussed above for now, because we were discussing average users, and by definition the first person to code a plausible web browser with an agent isn't an average user - unless, of course, that can be reliably replicated by any average user.
But on that note, the takeaways in the post you linked are relevant, because the author bucked a few trends to do this and concluded, among other things, that "The human who drives the agent might matter more than how the agents work and are set up, the judge is still out on this one."
This will obviously change, but the areas that LLMs need to improve on here are ones they're notoriously weak on, so it could take a while.