zlacker

[return to "AI 2027"]
1. stego-+LK1 2025-04-04 05:17:35
>>Tenoke+(OP)
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.

The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.

The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.

2. andrep+JC2 2025-04-04 13:20:20
>>stego-+LK1
You said it right: science fiction. Honestly, it's exactly the tenor I would expect from the AI hype: this text is completely bereft of any rigour while being dressed up in scientific language. There's no evidence, nothing to support their conclusions, no explanation grounded in data or facts. It's purely vibes-based. Their premise is unironically "the CEOs of AI companies say AGI is 3 years away"! But it's somehow presented as this self-important study! Laughable.

But it's par for the course. Write prompts for LLMs to compete? It's "prompt engineering". Tell LLMs to explain their "reasoning" (lol)? It's "Deep Research Chain of Thought". Etc.
