There are still significant limitations; no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models can finally replace web searches and Stack Overflow for a lot of my day-to-day programming.
I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).
I wish people would just stop holding on to what amounts to nothing, and instead think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.
Citation needed. In fact, I think this pretty clearly hits the "extraordinary claims require extraordinary evidence" bar.
My friend, we are living in a world of exponentially increasing AI capability, at least for the last few years; who knows what the future will bring!
Because exponentially growing costs paired with linear or unmeasurable improvements are not a great trajectory.
Metrics like training data set size are less interesting now given the utility of smaller synthetic data sets.
Once AI tech has diffused further into factory automation, robotics, educational systems, scientific discovery tools, etc., then we could measure efficiency gains.
My personal metric for the next 5 to 10 years: the US national debt and interest payments appear to be growing exponentially, and since nothing will change politically to address this, exponential AI capability growth will either juice productivity enough to save us economically, or it won't.