I hope we're wrong about a lot of this, and AGI turns out either to be impossible or much less useful than we think it will be. I hope we end up in a world where the value of humans increases instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.
Do I think this will actually happen in two years - or even in five, or ten, or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is mostly LLM technology. But five years ago, I thought the idea of LLMs speaking conversational English as well as they now do was essentially fiction - so really, anything is possible, or at least worth considering.
"May you live in interesting times" is a curse for a reason.
You may find this insightful: https://meltingasphalt.com/a-nihilists-guide-to-meaning/
In short, "meaning" is a contextual perception, not a discrete quality, though the author suggests it can be quantified based on the number of contextual connections to other things with meaning. The more densely connected something is, the more meaningful it is; my wedding is meaningful to me because my family and my partners family are all celebrating it with me, but it was an entirely meaningless event to you.
Thus, the meaningfulness of our contributions remains unchanged: their meaning comes from their connections within our own lives, not from the perspective of an external observer.