The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.
The point of these stories is to incite alarm: they're trying to provoke proactive responses while time is still on our side, rather than trusting self-interested actors to do the right thing in the middle of a crisis.
But it's par for the course. Write prompts for LLMs to complete? It's prompt engineering. Tell LLMs to explain their "reasoning" (lol)? It's Deep Research with Chain of Thought. Etc.
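(The rebranding is easy to demo. Here's a minimal sketch of both "techniques" side by side, assuming the `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and prompt wording are illustrative, not taken from any particular product.)

```python
# A plain prompt vs. the same prompt with a "think step by step"
# instruction bolted on -- i.e., "prompt engineering" and
# "chain of thought" in about ten lines.
from openai import OpenAI

client = OpenAI()

# "Prompt engineering": the prompt itself is the whole technique.
plain = "Summarize this study's methodology in two sentences."

# "Chain of thought": same request, plus an instruction to narrate
# intermediate steps before answering.
cot = (
    "Summarize this study's methodology in two sentences. "
    "First, think step by step and list the key design choices; "
    "then give the summary."
)

for prompt in (plain, cot):
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```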
There might be (strongly) diminishing returns past a certain point.
Most of the growth in AI capabilities has come from improving the interface and giving the models more flexibility: uploading PDFs, for example, or OpenAI's "deep research," which can browse the web for an hour and summarize publicly available papers and studies for you. Ask it questions about those studies, though, and it's hardly smarter than GPT-4, and it makes a lot of mistakes. It's like a goofy but earnest, hard-working intern.