zlacker

[return to "AI 2027"]
1. ivraat+lo1 2025-04-04 00:49:03
>>Tenoke+(OP)
Though I think it is probably mostly science fiction, this is one of the more chillingly thorough descriptions of a potential AGI takeoff scenario that I've seen. I think part of the problem is that the world you get even in the "Slowdown"/somewhat-more-aligned ending is still pretty rough for humans: what's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out either to be impossible or much less useful than we think it will be. I hope we end up in a world where humans' value increases rather than decreases. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

2. joshda+6B1 2025-04-04 03:17:04
>>ivraat+lo1
> I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be.

Personally, I hope we do get AGI. I just don't want it by 2027; that feels way too fast to me. But AGI in 2070 or 2100? That sounds far preferable.
