>>laiysb+(OP)
With how stochastic the process is, it's basically unusable for any large-scale task. What's the plan? Roll the dice until the answer pops up? That would maybe be viable if there were a way to automatically evaluate the output 100% of the time, but with a human in the loop required it becomes untenable.
>>ethano+l3
I look back over the past 2-3 years and am pretty amazed at how quickly change and progress have been made. The promises are indeed large, but so is the speed of progress. Not defending the promises, but "taking a very long time" does not seem like an accurate characterization.
>>infect+K5
I feel like we've made barely any progress. It's still good at the things ChatGPT was originally good at, and bad at the things it was bad at. There's been some small incremental refinement, but it doesn't represent a qualitative jump like ChatGPT originally was. I don't see AI replacing actual humans without another step change like that.
>>zeroon+p9
As a non-programmer non-software engineer, the programs I can write with modern SOTA models are at least 5x larger than the ones GPT-4 could make.
LLMs are like bumpers on bowling lanes. Pro bowlers don't get much utility from them, but total noobs are getting more and more strikes as these "smart" bumpers get better and better at guiding their ball.