The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.
The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.
> lest we run the very real risk of societal collapse or species extinction
Our part is here: to be replaced by machines, if this AI thing isn't just a fart advertised as mining equipment, which it likely is. We run this risk, not they. People worked to build their wealth, and now people can go f themselves. They are fine with all of that. The money (= more power) piles in either way.
No encouraging conclusion.
We know that Trump is not captured by corporations because his trade policies are terrible.
If anything, social media is the evil that's destroying the political center: Americans are no longer reading mainstream newspapers or watching mainstream TV news.
The EU is saying the election in Romania was manipulated through TikTok accounts and media.
If it’s somehow different for corporations, please enlighten me how.
Taxes are the best way to change behaviour (smaller cars, driving less, flying less, etc.). So governments and the people who vote for them are to blame.
I wonder if that's corporations' fault after all: shitty working conditions and shitty wages, so that Bezos can afford to send penises into space. What poor person would agree to a higher tax on gas? And the corps are the ones backing politicians who'll propagandize that "Unions? That's communism! Do you want to be China?!" (spread by those dickheads on corporate-owned TV and newspapers, drunk dickheads who end up becoming defense secretary)
Have you seen gas tax rates in the EU?
> We know that Trump is not captured by corporations because his trade policies are terrible.
Unless you think it's a long con for some rich people to be able to time the market by getting him to crash it.
> The EU is saying the election in Romania was manipulated through TikTok accounts and media.
More importantly, Romanian courts say that too. And it was all out in the open, so not exactly a secret
So corporations are involved in the sense that they pay people more than a living wage.
I think this view of humans - that they look at all the available information and then make calm decisions in their own interests - is simply wrong. We are manipulated all the damn time. I struggle to go to the supermarket without buying excess sugar. The biggest corporations in the world grew fat off showing us products to impulse buy before our more rational brain functions could stop us. We are not a little pilot in a meat vessel.
I'm pretty sure the election was manipulated, but the court only said so because it benefits the incumbents, which control the courts and would lose their power.
It's a struggle between local thieves and Putin, that's all. The local thieves will keep us in the EU, which is much better than the alternative, but come on. "More importantly, Romanian courts say so"? Really?
US corporate tax rates are actually very high. Partly due to the US having almost no consumption tax. EU members have VAT etc.
No, there is no risk of species extinction in the near future due to climate change, and repeating that line will just deepen the divide and make people stop listening, even to real climate scientists.
I like that it ends with a reference to Kushiel and Elua though.
That sounds like the height of folly.
Large corporations, governments, institutionalized churches, political parties, and other “corporate” institutions are very much like a hypothetical AGI in many ways: they are immortal, sleepless, distributed, omnipresent, and possess beyond human levels of combined intelligence, wealth, and power. They are mechanical Turk AGIs more or less. Look at how humans cycle in, out, and through them, often without changing them much, because they have an existence and a weird kind of will independent of their members.
A whole lot, perhaps all, of what we need to do to prepare for a hypothetical AGI that may or may not be aligned consists of things we should be doing to restrain and ensure alignment of the mechanical Turk variety. If we can’t do that we have no chance against something faster and smarter.
What we have done over the past 50 years is the opposite: not just unchain them but drop any notion that they should be aligned.
Are we sure the AI alignment discourse isn’t just “occulted” progressive political discourse? Back when they burned witches philosophers would encrypt possibly heretical ideas in the form of impenetrable nonsense, which is where what we call occultism comes from. You don’t get burned for suggesting steps to align corporate power, but a huge effort has been made to marginalize such discourse.
Consider a potential future AGI. Imagine it has a cult of followers around it, which it probably would, and champions that act like present day politicians or CEOs for it, which it probably would. If it did not get humans to do these things for it, it would have analogous functions or parts of itself.
Now consider a corporation or other corporate entity that has all those things but replace the AGI digital brain with a committee or shareholders.
What, really, is the difference? Both can be dangerously unaligned.
Other than perhaps in magnitude? The real digital AGI might be smarter and faster but that’s the only difference I see.
I agree that it's good science fiction, but this is still taking it too seriously. All of these "projections" are generalizing from fictional evidence - to borrow a term that's popular in communities that push these ideas.
Long before we had deep learning there were people like Nick Bostrom who were pushing this intelligence explosion narrative. The arguments back then went something like this: "Machines will be able to simulate brains at higher and higher fidelity. Someday we will have a machine simulate a cat, then the village idiot, but then the difference between the village idiot and Einstein is much less than the difference between a cat and the village idiot. Therefore accelerating growth[...]" The fictional part here is the whole brain simulation part, or, for that matter, any sort of biological analogue. This isn't how LLMs work.
We never got a machine as smart as a cat. We got multi-paragraph autocomplete as "smart" as the average person on the internet. Now, after some more years of work, we have multi-paragraph autocomplete that's as "smart" as a smart person on the internet. This is an imperfect analogy, but the point is that there is no indication that this process is self-improving. In fact, it's the opposite. All the scaling laws we have show that progress slows down as you add more resources. There is no evidence or argument for exponential growth. Whenever a new technology is first put into production (and receives massive investments) there is an initial period of rapid gains. That's not surprising. There are always low-hanging fruit.
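For what it's worth, the published scaling-law fits back this up: they are power laws, not exponentials. As I recall, the Chinchilla-style fit looks roughly like the following (the exponents are approximate, quoted from memory of that paper):

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \quad \alpha \approx 0.34, \; \beta \approx 0.28

where N is the parameter count and D the number of training tokens. In other words, each constant improvement in loss costs a multiplicative increase in model size and data, which is the opposite of a self-accelerating process.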
We got some new, genuinely useful tools over the last few years, but this narrative that AGI is just around the corner needs to die. It is science fiction and leads people to make bad decisions based on fictional evidence. I'm personally frustrated whenever this comes up, because there are exciting applications which will end up underfunded after the current AI bubble bursts...
And if Asian culture is better educated and more capable of progress, that’s a good thing. Certainly the US has announced loud and clear that this is the end of the line for us.
There is a non-zero chance that the ineffable quantum foam will cause a mature hippopotamus to materialize above your bed tonight, and you’ll be crushed. It is incredibly, amazingly, limits-of-math unlikely. Still a non-zero risk.
Better to think of “no risk” as meaning “negligible risk”. But I’m with you that climate change is not a negligible risk; maybe way up in the 20% range IMO. And I wouldn’t be sleeping in my bed tonight if sudden hippos over beds were 20% risks.
I think you misunderstood that argument. The simulate the brain thing isn't a "start from the beginning" argument, it's an "answer a common objection" argument.
Back around 2000, when Nick Bostrom was talking about this sort of thing, computers were simply nowhere near powerful enough to come close to outsmarting a human, except in very constrained cases like chess; we didn't even have the first clue how to create a computer program that would be even remotely dangerous to us.
Bostrom's point was that, "We don't need to know the computer program; even if we just simulate something we know works -- a biological brain -- we can reach superintelligence in a few decades." The idea was never that people would actually simulate a cat. The idea is, if we don't think of anything more efficient, we'll at least be able to simulate a cat, and then an idiot, and then Einstein, and then something smarter. And since we almost certainly will think of something more efficient than "simulate a human brain", we should expect superintelligence to come much sooner.
> There is no evidence or argument for exponential growth.
Moore's law is exponential, which is where the "simulate a brain" predictions have come from.
> It is science fiction and leads people to make bad decisions based on fictional evidence.
The only "fictional evidence" you've actually specified so far is the fact that there's no biological analog; and that (it seems to me) is from a misunderstanding of a point someone else was making 20 years ago, not something these particular authors are making.
I think the case for AI caution looks like this:
A. It is possible to create a superintelligent AI
B. Progress towards a superintelligent AI will be exponential
C. It is possible that a superintelligent AI will want to do something we wouldn't want it to do; e.g., destroy the whole human race
D. Such an AI would be likely to succeed.
Your skepticism seems to rest on the fundamental belief that either A or B is false: that superintelligence is not physically possible, or at least that progress towards it will be logarithmic rather than exponential.
Well, maybe that's true and maybe it's not; but how do you know? What justifies your belief that A and/or B are false so strongly, that you're willing to risk it? And not only willing to risk it, but try to stop people who are trying to think about what we'd do if they are true?
What evidence would cause you to re-evaluate that belief, and consider exponential progress towards superintelligence possible?
And, even if you think A or B are unlikely, doesn't it make sense to just consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?
I think the growth you are thinking of, self-improving AI, needs the AI to be as smart as a human developer/researcher to get going, and we haven't got there yet. But we quite likely will at some point.
But it's par for the course. Write prompts for LLMs to complete? It's prompt engineering. Tell LLMs to explain their "reasoning" (lol)? It's Deep Research Chain Of Thought. Etc.
Can you point to the data that suggests these evil corporations are ruining the planet? Carbon emissions are down in every Western country since the 1990s. Not down per capita, but down in absolute terms. And this holds even when adjusting for trade (i.e. we're not shipping our dirty work to foreign countries and trading with them). And this isn't because of some regulation or benevolence. It's a market system that says you should try to produce things at the lowest cost, and carbon usage is usually associated with a cost. Get rid of costs, get rid of carbon.
Other measures for Western countries suggest the water is safer and overall environmental deaths have decreased considerably.
The rise in carbon emissions is due to China and India. Are you talking about evil Chinese and Indian corporations?
Was Asian culture dominated by the west to any significant degree? Perhaps in countries like India where the legal and parliamentary system installed by the British remained intact for a long time post-independence.
Elsewhere in East and Southeast Asia, the legal systems, education, cultural traditions, and economic philosophies have been very different from the "west", i.e. post-WWII US and Western Europe.
The biggest sign of this is how they developed their own information networks, infrastructure, and consumer networking devices. Europe had many of these regional champions itself (Philips, Nokia, Ericsson, etc.), but now, outside of telecom infrastructure, Europe is largely reliant on American hardware and software.
The problem with this argument is that it's assuming that we're on a linear track to more and more intelligent machines. What we have with LLMs isn't this kind of general intelligence.
We have multi-paragraph autocomplete that's matching existing texts more and more closely. The resulting models are great priors for any kind of language processing and have simple reasoning capabilities in so far as those are present in the source texts. Using RLHF to make the resulting models useful for specific tasks is a real achievement, but doesn't change how the training works or what the original training objective was.
So let's say we continue along this trajectory and we finally have a model that can faithfully reproduce and identify every word sequence in its training data and its training data includes every word ever written up to that point. Where do we go from here?
Do you want to argue that it's possible that there is a clever way to create AGI that has nothing to do with the way current models work and that we should be wary of this possibility? That's a much weaker argument than the one in the article. The article extrapolates from current capabilities - while ignoring where those capabilities come from.
> And, even if you think A or B are unlikely, doesn't it make sense to just consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?
This is essentially https://plato.stanford.edu/entries/pascal-wager/
It might make sense to consider, but it doesn't make sense to invest non-trivial resources.
This isn't the part that bothers me at all. I know people who got grants from, e.g., Miri to work on research in logic. If anything, this is a great way to fund some academic research that isn't getting much attention otherwise.
The real issue is that people are raising ridiculous amounts of money by claiming that the current advances in AI will lead to some science fiction future. When this future does not materialize it will negatively affect funding for all work in the field.
And that's a problem, because there is great work going on right now and not all of it is going to be immediately useful.
Could you provide examples? I am genuinely interested.
A self-driving car would already be plenty.
Why do you think that's the only reason the court said so? The election law was pretty blatantly violated (he declared campaign funding of 0, yet tons of ads were bought for him and influencers were paid to advertise him).
This just isn't correct. Daniel and others on the team are experienced world class forecasters. Daniel wrote another version of this in 2021 predicting the AI world in 2026 and was astonishingly accurate. This deserves credence.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
> The arguments back then went something like this: "Machines will be able to simulate brains at higher and higher fidelity.
Complete misunderstanding of the underlying ideas. Just in not even wrong territory.
> We got some new, genuinely useful tools over the last few years, but this narrative that AGI is just around the corner needs to die. It is science fiction and leads people to make bad decisions based on fictional evidence.
You are likely dangerously wrong. The AI field is near universal in predicting AGI timelines under 50 years, with many under 10. This is an extremely difficult problem to deal with, and ignoring it because you think it's equivalent to overpopulation on Mars is incredibly foolish.
https://www.metaculus.com/questions/5121/date-of-artificial-...
https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predicti...
Can you point to data that this is 'because' of corporations rather than despite them?
Dude was spot on in 2021, hot damn.
https://x.com/RnaudBertrand/status/1901133641746706581
I finally watched Ne Zha 2 last night with my daughters.
It absolutely lives up to the hype: undoubtedly the best animated movie I've ever seen (and I see a lot, the fate of being the father of 2 young daughters).
But what I found most fascinating was the subtle yet unmistakable geopolitical symbolism in the movie.
Warning if you haven't yet watched the movie: spoilers!
So the story is about Ne Zha and Ao Bing, whose physical bodies were destroyed by heavenly lightning. To restore both their forms, they must journey to the Chan sect—headed by Immortal Wuliang—and pass three trials to earn an elixir that can regenerate their bodies.
The Chan sect is portrayed in an interesting way: a beacon of virtue that all strive to join. The imagery unmistakably refers to the US: their headquarters is an imposingly large white structure (and Ne Zha, while visiting it, hammers the point: "how white, how white, how white") that bears a striking resemblance to the Pentagon in its layout. Upon gaining membership to the Chan sect, you receive a jade green card emblazoned with an eagle that bears an uncanny resemblance to the US bald eagle symbol. And perhaps most telling is their prized weapon, a massive cauldron marked with the dollar sign...
Throughout the movie you gradually realize, in a very subtle way, that this paragon of virtue is, in fact, the true villain of the story. The Chan sect orchestrates a devastating attack on Chentang Pass—Ne Zha's hometown—while cunningly framing the Dragon King of the East Sea for the destruction. This manipulation serves their divide-and-conquer strategy, allowing them to position themselves as saviors while furthering their own power.
One of the most pointed moments comes when the Dragon King of the East Sea observes that the Chan sect "claims to be a lighthouse of the world but harms all living beings."
Beyond these explicit symbols, I was struck by how the film portrays the relationships between different groups. The dragons, demons, and humans initially view each other with suspicion, manipulated by the Chan sect's narrative. It's only when they recognize their common oppressor that they unite in resistance and ultimately win. The Chan sect's strategy of fostering division while presenting itself as the arbiter of morality is perhaps the key message of the movie: how power can be maintained through control of the narrative.
And as the story unfolds, Wuliang's true ambition becomes clear: complete hegemony. The Chan sect doesn't merely seek to rule—it aims to establish a system where all others exist only to serve its interests, where the dragons and demons are either subjugated or transformed into immortality pills in their massive cauldron. These pills are then strategically distributed to the Chan sect's closest allies (likely a pointed reference to the G7).
What makes Ne Zha 2 absolutely exceptional though is that these geopolitical allegories never overshadow the emotional core of the story, nor its other dimensions (for instance it's at times genuinely hilariously funny). This is a rare film that makes zero compromise, it's both a captivating and hilarious adventure for children and a nuanced geopolitical allegory for adults.
And the fact that a Chinese film with such unmistakable anti-American symbolism has become the highest-grossing animated film of all time globally is itself a significant geopolitical milestone. Ne Zha 2 isn't just breaking box office records—it's potentially rewriting the rules about what messages can dominate global entertainment.
I'm also struck by the extent to which the first series from 2021-2026 feels like a linear extrapolation while the second one feels like an exponential one, and I don't see an obvious justification for this.
And I think the neuroticism around this topic has led young people into some really dark places (anti-depressants, neurotic antisocial behavior, general nihilism). So I think it's important to fight misinformation about end-of-the-world doomsday scenarios with both facts and common sense.
This is a fundamental misunderstanding of the entire point of predictive models (and also of how LLMs are trained and tested).
For one thing, ability to faithfully reproduce texts is not the primary scoring metric being used for the bulk of LLM training and hasn't been for years.
But more importantly, you don't make a weather model so that it can inform you of last Tuesday's weather given information from last Monday, you use it to tell you tomorrow's weather given information from today. The totality of today's temperatures, winds, moistures, and shapes of broader climatic patterns, particulates, albedos, etc etc etc have never happened before, and yet the model tells us something true about the never-before-seen consequences of these never-before-seen conditions, because it has learned the ability to reason new conclusions from new data.
Are today's "AI" models a glorified autocomplete? Yeah, but that's what all intelligence is. The next word I type is the result of an autoregressive process occurring in my brain that produces that next choice based on the totality of previous choices and experiences, just like the Q-learners that will kick your butt in Starcraft choose the best next click based on their history of previous clicks in the game combined with things they see on the screen, and will have pretty good guesses about which clicks are the best ones even if you're playing as Zerg and they only ever trained against Terran.
A highly accurate autocomplete that is able to predict the behavior and words of a genius, when presented with never before seen evidence, will be able to make novel conclusions in exactly the same way as the human genius themselves would when shown the same new data. Autocomplete IS intelligence.
New ideas don't happen because intelligences draw them out of the aether, they happen because intelligences produce new outputs in response to stimuli, and those stimuli can be self-inputs, that's what "thinking" is.
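To make "autoregressive" concrete, here is a toy sketch (plain Python; the two-branch probability table is invented purely for illustration and has nothing to do with how a real LLM computes its distribution). The loop that feeds everything generated so far back in to pick the next token is the whole "autocomplete" mechanism under discussion:

    import random

    # Toy "language model": given the tokens so far, return a probability
    # distribution over the next token. The table is hand-invented for
    # illustration; a real LLM computes this with a neural net.
    def next_token_distribution(context):
        if context and context[-1] == "new":
            return {"data": 0.7, "ideas": 0.3}
        return {"new": 0.5, "data": 0.25, "ideas": 0.25}

    def sample(dist):
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Autoregressive generation: each step conditions on everything
    # produced so far, then appends exactly one more token.
    context = ["show", "me"]
    for _ in range(5):
        context.append(sample(next_token_distribution(context)))

    print(" ".join(context))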
If you still think that all today's AI hubbub is just vacuous hype around an overblown autocomplete, try going to Chatgpt right now. Click the "deep research" button, and ask it "what is the average height of the buildings in [your home neighborhood]"?, or "how many calories are in [a recipe that you just invented]", or some other inane question that nobody would have ever cared to write about ever before but is hypothetically answerable from information on the internet, and see if what you get is "just a reproduced word sequence from the training data".
The climate regulations are still quite weak. Without a proper carbon tax, a US company can externalize the costs of carbon emissions and get rich by maximizing their own emissions.
Not all brains function like they're supposed to; people getting the help they need shouldn't be stigmatized.
You also make no argument that your take on things is the right one; you just set their worldview against yours and call theirs wrong as if you know it is, rather than merely thinking yours is right.
To address only one thing out of your comment, Moore's law is not a law, it is a trend. It just gets called a law because it is fun. We know that there are physical limits to Moore's law. This gets into somewhat shaky territory, but it seems that current approaches to compute can't reach the density of compute power present in a human brain (or other creatures' brains). Moore's law won't get chips to be able to simulate a human brain with the same amount of space and energy as a human brain. A new approach will be needed to go beyond simply packing more transistors onto a chip - this is analogous to my view that current AI technology is insufficient to do what human brains do, even when taken to its limit (which is significantly beyond where it currently is).
There might be (strongly) diminishing returns past a certain point.
Most of the growth in AI capabilities has to do with improving the interface and giving them more flexibility, e.g. uploading PDFs. Further: OpenAI's "deep research", which can browse the web for an hour and summarize publicly available papers and studies for you. If you ask questions about those studies, though, it's hardly smarter than GPT-4. And it makes a lot of mistakes. It's like a goofy but earnest and hard-working intern.
No one is stigmatizing anything. Just that if you consume doom porn it's likely to affect your attitudes towards life. I think it's a lot healthier to believe you can change your circumstances than to believe you are doomed because you believe you have the wrong brain
https://www.nature.com/articles/s41380-022-01661-0
https://www.quantamagazine.org/the-cause-of-depression-is-pr...
https://www.ucl.ac.uk/news/2022/jul/analysis-depression-prob...
OK, I think I see where you're coming from. It sounds like what you're saying is:
E. LLMs only do multi-paragraph autocomplete; they are and always will be incapable of actual thinking.
F. Any approach capable of achieving AGI will be completely different in structure. Who knows if or when this alternate approach will even be developed; and if it is developed, we'll be starting from scratch, so we'll have plenty of time to worry about progress then.
With E, again, it may or may not be true. It's worth noting that this is a theoretical argument, not an empirical one; but I think it's a reasonable assumption to start with.
However, there are actually theoretical reasons to think that E may be false. The best way to predict the weather is to have an internal model which approximates weather systems; the best way to predict the outcome of a physics problem is to have an internal model which approximates the physics of the thing you're trying to predict. And the best way to predict what a human would write next is to have a model of a human mind -- including a model of what the human mind has in its model (e.g., the state of the world).
There is some empirical data to support this argument, albeit in a very simplified manner: They trained a simple LLM to predict valid moves for Othello, and then probed it and discovered an internal Othello board being simulated inside the neural network:
https://thegradient.pub/othello/
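In case "probed it" sounds mysterious: the usual recipe is to freeze the trained network, record its hidden activations on a set of game prefixes, and fit a small classifier (often just a linear one) per board square to see whether the square's state can be read off the activations. A minimal sketch of that probing step, with random placeholder arrays standing in for the real activations and board labels (just to show the shape of the method, not the actual Othello-GPT setup):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Placeholders: in the real experiment these come from the trained
    # Othello-playing transformer (hidden states) and from replaying the
    # game prefixes (true square contents). Random data here, shape only.
    n_positions, hidden_dim = 2000, 512
    activations = np.random.randn(n_positions, hidden_dim)    # model's hidden states
    square_state = np.random.randint(0, 3, size=n_positions)  # 0=empty, 1=black, 2=white for one square

    X_train, X_test, y_train, y_test = train_test_split(
        activations, square_state, test_size=0.2, random_state=0)

    # A linear probe: if a simple classifier can recover the square's state
    # from the hidden activations, the board is (linearly) encoded in them.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))  # ~33% on random data; far higher in the real paper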
And my own experience with LLMs better match the "LLMs have an internal model of the world" theory than the "LLMs are simply spewing out statistical garbage" theory.
So, with regard to E: Again, sure, LLMs may turn out to be a dead end. But I'd personally give the idea that LLMs are a complete dead end a less than 50% probability; and I don't think giving it an overwhelmingly high probability (like 1 in a million of being false) is really reasonable, given the theoretical arguments and empirical evidence against it.
With regard to F, again, I don't think this is true. We've learned so much about optimizing and distilling neural nets, optimizing training, and so on -- not to mention all the compute power we've built up. Even if LLMs are a dead end, whenever we do find an architecture capable of achieving AGI, I think a huge amount of the work we've put into optimizing LLMs will put us way ahead in optimizing this other system.
> ...that the current advances in AI will lead to some science fiction future.
I mean, if you'd told me 5 years ago that I'd be able to ask a computer, "Please use this Golang API framework package to implement CRUD operations for this particular resource my system has", and that the resulting code would 1) compile out of the box, 2) exhibit an understanding of that resource and how it relates to other resources in the system based on having seen the code implementing those resources 3) make educated guesses (sometimes right, sometimes wrong, but always reasonable) about details I hadn't specified, I don't think I would have believed you.
Even if LLM progress is logarithmic, we're already living in a science fiction future.
EDIT: The scenario actually has very good technical "asides"; if you want to see their view of how a (potentially dangerous) personality emerges from "multi-paragraph auto-complete", look at the drop-down labelled "Alignment over time", and specifically what follows "Here’s a detailed description of how alignment progresses over time in our scenario:".
I think they'll probably be able to finish at least 1-2 by 2027.
There has finally been progress here, which is why you see high-profile publications from, e.g., DeepMind about solving IMO problems in Lean. This is exciting, because if you're working in a system like Coq or Lean your progress is monotone: everything you prove actually follows from the definitions you put in. This is in stark contrast to, e.g., using LLMs for programming, where you end up with a tower of bugs and half-working code if you don't constantly supervise the output.
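To make the "monotone progress" point concrete: once a statement like the toy one below compiles, it has been checked by the kernel against the definitions, and nothing added later can silently invalidate it (a trivial Lean 4 snippet, not taken from the DeepMind work):

    -- Toy Lean 4 example: once this elaborates, the statement is proved
    -- relative to the kernel; later code can build on it but not break it.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b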
---
But well, the degree of excitement is my own bias. From other people I spoke to recently:

- Risk-assessment diagnostics in medicine. There are a bunch of tests that are expensive and complex to run and need a specialist to evaluate. Deep learning is increasingly used to make it possible to do risk assessments with cheaper automated tests for a large population and have specialists focus on actual high-risk cases. Progress is slow for various reasons, but it has a lot of potential.

- Weather forecasting uses a sparse set of inputs: atmospheric data from planes, weather balloons, measurements at ground stations, etc. This data is then aggregated with relatively stupid models to get the initial conditions to run a weather simulation. Deep learning is improving this part, but while there has been some encouraging initial progress, it needs to be better integrated with existing simulations (purely deep-learning-based approaches are apparently a lot worse at predicting extreme weather events). Those simulations are expensive, they're running on some of the largest supercomputers in the world, which is why progress is slow.
Natural language is a fuzzy, context-aware state machine of sorts that can theoretically represent any arbitrarily complex state in the outside world, given enough high-quality text.
And by reiterating and extrapolating the rules found in human communication, an AI could, through the sheer ability to simulate arbitrarily long discussions, discover new things, given the ability to independently verify outcomes.
FWIW, this guy thinks E is true, and that he has a better direction to head in:
https://www.youtube.com/watch?v=ETZfkkv6V7Y
HN discussion about a related article I didn't read:
For an AI controlled corporation, I don't know what it wants or what to expect. And if decision making happens at the speed of light, by the time we have any warning it may be too late to react. Usually with human concerns, we get lots of warnings but wait longer than we should to respond.