An approach I tried recently is to use it as a friction remover instead of a solution provider. I do the programming but use it to remove pebbles such as that small bit of syntax I forgot, basically to keep up the velocity. However, I don't take the wholesale code it offers. I think keeping the active thinking cap on results in code I actually understand while avoiding skill atrophy.
This was my pre-AI experience anyway, so getting that first chunk of time back is helpful.
Related: One of the better takes I've seen on AI from an experienced developer was, "90% of my skills just became worthless, and the other 10% just became 1,000 times more valuable." There's some hyperbole there, but I like the gist.
I do think you're onto something with getting pebbles out of the road inasmuch as once I know what I need to do, AI coding makes the doing much faster. Just yesterday I was playing around with removing things from a List object using the Java streams API and I kept running into ConcurrentModificationExceptions, which happen when the list is structurally modified while something is still iterating over it, so the iterator can't guarantee it's seeing a consistent view of the list. I spent about an hour trying to write a method that deep copies the list, makes the change and then returns the copy, running into all sorts of problems, until I asked AI to build me a thread-safe list mutation method and it was like "Sure, this is how I'd do it but also the API you're working with already has a method that just... does this." Cases like this are where AI is supremely useful - intricate but well-defined problems.
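For what it's worth, here's a minimal sketch of both shapes of fix. I'm assuming the built-in the AI pointed me to was Collection.removeIf, which covers the "remove matching elements without tripping the iterator" case (for genuinely multi-threaded access you'd reach for something like CopyOnWriteArrayList instead):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    public class RemoveFromList {
        public static void main(String[] args) {
            List<Integer> numbers = new ArrayList<>(List.of(1, 2, 3, 4, 5, 6));

            // The pattern that throws ConcurrentModificationException:
            // structurally modifying the list while a for-each iterator is live.
            // for (Integer n : numbers) {
            //     if (n % 2 == 0) numbers.remove(n);
            // }

            // Option 1: don't mutate at all; filter into a fresh list.
            List<Integer> odds = numbers.stream()
                    .filter(n -> n % 2 != 0)
                    .collect(Collectors.toList());

            // Option 2: mutate in place with the built-in that handles
            // iteration safely (assuming this is the method the AI meant).
            numbers.removeIf(n -> n % 2 == 0);

            System.out.println(odds);     // [1, 3, 5]
            System.out.println(numbers);  // [1, 3, 5]
        }
    }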
Until AI can actually untangle our 14-year-old codebase full of hodge-podge code and read every commit message, JIRA ticket, and Slack conversation related to the changes in full context, it's not going to solve a lot of the hard problems at my job.
I just used it to write about 80 lines of new code like that, and there's no question it saves time.
I think this may become a long horizon harvest for the rigorous OOP strategy, may Bill Joy be disproved.
Gray goo may not [taste] like steel-cut oatmeal.
But nothing will make them stick to the one API version I use.
I don't mean to pick on your usage of this specifically, but I think it's noteworthy that the colloquial definition of "rubber ducking" seems to have expanded to include "using a software tool to generate advice/confirm hunches". I always understood the term to mean a personal process of talking through a problem out loud in order to methodically, explicitly understand a theoretical plan/process and expose gaps.
Based on a lot of articles/studies I've seen (admittedly I haven't dug into them too deeply), it seems like using chatbots for this type of task actually has negative cognitive impacts on some groups of users - the opposite of the personal value I thought rubber-ducking was supposed to provide.
Models trained for tool use can do that. When I use Codex for some Rust stuff, for example, it can grep the source files in the directory where dependencies are stored, so looking up the current APIs is trivial. The same works for JavaScript and a bunch of other languages, as long as the source is accessible somewhere via the tools the model has available.
Autocorrect is a scourge of humanity.
Otherwise he can shut the fuck up about being 1000x more valuable imo
I like to think of it this way: instead of having seemingly endless half-thoughts spinning around inside your head, you make an idea or thought more “fully formed” when you express it verbally or in written (or typed) words.
I believe this is part of why therapy can work: by actually expressing our thoughts, we're kind of forced to face realities, and after doing so it's often much easier to reflect on them. Therapists often recommend personal journals because they can work for this too.
I believe rubber ducking works because having to explain the problem forces you to gather your thoughts into something usable that you can then reflect on more effectively.
I see no reason why doing the same thing except in writing to an LLM couldn’t be equally effective.
This is what human language does though, isn't it? Evolves over time, in often weird ways; like how many people "could care less" about something they couldn't care less about.
I'm the biggest skeptic, but more and more I'm seeing it get me the bulk of the way with very little back-and-forth. If it was even more heavily integrated in my dev environment, it would save me even more time.
Most commenters on this paper seem not to respond to its strongest result: the developers wrongly thought and felt that using AI had sped up their work. So we need to be super cautious about what we think we know.
However, it is _fun_ to get over the barrier when it's a matter of chatting with a model to get a quick tutorial and produce working code for a prototype tailored to your specific needs, applying the understanding you just developed. The alternative (without LLMs) is to first do the groundwork of learning via tutorials in text/video form and then do the cognitive mapping of applying that learning to your prototype. On that path I would make a lot of mistakes that intermediate or expert React developers don't make.
One could argue that it shortcuts some learning, and perhaps the old way results in better retention. But our field changes so fast... and when it remains static for too long, projects die. I think of all this as an accelerant for adopting new ways of thinking about software and diffusing them more quickly across the developer population globally. Code is always fungible, anyway. The job is about all the other things one needs to do besides coding.