I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.
Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is mostly LLM technology. But five years ago, I thought the idea of LLMs working as well as they do at conversational English was essentially fiction - so really, anything is possible, or at least worth considering.
"May you live in interesting times" is a curse for a reason.
For a sizable number of humans, we're already there. The vast majority of Hacker News users spend their time trying to make advertisements tempt people into spending money on stuff they don't need. That's an active societal harm. It doesn't contribute to the world in any positive way.
And yet, people are fine doing that, getting their dopamine hits from Instagram, arguing online on this cursed site, or watching TV.
More people will have bullshit jobs in this SF story, but a huge number of people already have bullshit jobs and manage to find a point in their existence just fine.
I, for one, would be happy to simply read books, eat, and die.