> This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
I wonder what could explain such a large gap between estimation and experience on one side and reality on the other. Any ideas?
Maybe our brains are measuring mental effort and distorting our experience of time?
What if agentic coding sessions trigger a dopamine feedback loop similar to social media apps? Obviously not to the same degree, since coding for work is still "work"... but there may be some similarity: each iterative solution you get back from the agent triggers something in your brain, yes?
If that were the case, wouldn't we expect developers to have an overly positive perception of AI because they're literally becoming addicted to it?
I wish there were a simple way to measure energy spent instead of time. Maybe nature is just optimizing for something else.