zlacker

[return to "AI 2027"]
1. ahofma+ub 2025-04-03 17:04:12
>>Tenoke+(OP)
Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will remain tools that can automate some work, as they do today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to find out what the truth turns out to be.
2. mitthr+lf 2025-04-03 17:22:51
>>ahofma+ub
What's an example of an intellectual task that you don't think AI will be capable of by 2027?
3. coolTh+Of 2025-04-03 17:25:09
>>mitthr+lf
programming
4. lumenw+tg 2025-04-03 17:29:11
>>coolTh+Of
Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?
5. kody+Lh 2025-04-03 17:35:57
>>lumenw+tg
It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value.

It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP, and it's almost never capable of making a meaningful change to that MVP.

If your experiences have been different, that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.
