I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.
Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.
"May you live in interesting times" is a curse for a reason.
But even so, solving that problem feels much more attainable than it used to be.
We'll likely reach a point where it's infeasible for deep learning to completely encompass human-level reasoning, and we'll need neuroscience discoveries to continue progress. Altman seems to be hyping up "bigger is better," not just for model parameters but for OpenAI's valuation.
For me personally, I hope that we do get AGI. I just don't want it by 2027. That feels way too fast to me. But AGI in 2070 or 2100? That sounds far preferable.
I assume that thanks to the universal approximation theorem it's theoretically possible to emulate the physical mechanism, but at what hardware and training cost? I've done back-of-the-napkin math on this before [1], and the number of "parameters" in the brain is at least 2-4 orders of magnitude more than state-of-the-art models. But that's just the current weights - what about the history that actually enables the plasticity? Channel threshold potentials are also continuous rather than discrete, and emulating them might require full fp64, so I'm not sure how we're even going to reach the memory requirements in the next decade, let alone whether any architecture on the horizon can emulate neuroplasticity.
Then there's the whole problem of a true physical feedback loop with which the AI can run experiments and learn against external reward functions. The survival reward at the core of evolution might itself be critical, but that's getting deep into the research and philosophy on the nature of intelligence.
[1] >>40313672
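For concreteness, here's a minimal sketch of that napkin math. Every constant is an assumption picked to match the rough figures above (textbook-range neuron/synapse counts, "one fp64 value per synapse"), not a measurement:

```python
# Rough brain-vs-model memory estimate; all constants are assumptions.
SYNAPSES = 1e14        # low-end synapse count, treated as "parameters"
BYTES_FP64 = 8         # if channel dynamics really need full fp64

brain_tb = SYNAPSES * BYTES_FP64 / 1e12
print(f"Static weights alone: ~{brain_tb:,.0f} TB")        # ~800 TB

MODEL_PARAMS = 2e12    # assumed frontier-model size, ~2T params
print(f"Parameter gap: ~{SYNAPSES / MODEL_PARAMS:,.0f}x")  # ~50x at the low end
# At the ~1e15 high-end synapse count the gap is ~500x, i.e. 2-3 orders
# of magnitude, before accounting for any plasticity history at all.
```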
For a sizable number of humans, we're already there. The vast majority of Hacker News users spend their time trying to make advertisements that tempt people into spending money on stuff they don't need. That's an active societal harm. It doesn't contribute in any positive way to the world.
And yet people are fine doing that, getting their dopamine hits off Instagram, arguing online on this cursed site, or watching TV.
More people will have bullshit jobs in this SF story, but a huge number of people already have bullshit jobs, and manage to find a point in their existence just fine.
I, for one, would be happy to simply read books, eat, and die.
If it's basically a transformer, that means it needs ~200T FLOPs per token at inference time. The paper assumes humans "think" at ~15 tokens/second, which is about 10 words/second - similar to the reading speed of a college graduate. So that works out to ~3 petaflops of compute.
Assuming that's fp8, an H100 can do ~4 petaflops, and the authors of AI 2027 guesstimate that purpose-built wafer-scale inference chips circa late 2027 should be able to do ~400 petaflops for inference - ~100 H100s' worth - for ~$600k each, including fabrication and installation in a datacenter.
Rounding off, that basically means ~$6k buys you the compute to "think" at 10 words/second. That'd probably work out to maybe $3k/yr after depreciation and electricity costs, or ~30-50¢ per hour of "human thought equivalent" at 10 words/second. Running an AI at 50x human speed 24/7 would then cost ~$150k/yr (50 × $3k), so a single OpenBrain researcher's compensation - frontier-lab packages run well into seven figures - could fund a team of ~10-20 such AIs running flat out all the time. Even if you think the AI would need an "extra" 10x or even 100x in tokens/second to match humans, that still puts genius-level AIs in principle runnable at human speed for ~0.1 to 1x the median US income.
It's an open question whether training such a model is feasible in a few years, but the raw chip-level compute to plausibly run a model that large at enormous speed and low cost already exists (at street prices for B200s it'd cost ~$2-4 per human-equivalent hour).
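Here's the arithmetic written out, using only the figures assumed above (200T FLOPs/token, 15 tokens/s, ~4-petaflop fp8 H100s, and the guessed $600k / 400-petaflop wafer-scale chip); nothing here is a measured benchmark:

```python
# Cost-per-human-equivalent sketch; every constant is an assumption.
FLOPS_PER_TOKEN = 200e12
TOKENS_PER_SEC = 15                              # ~10 words/s
human_rate = FLOPS_PER_TOKEN * TOKENS_PER_SEC    # ~3 petaflops, < 1 fp8 H100

CHIP_COST = 600_000                              # guessed wafer-scale chip
H100_EQUIVS = 100                                # 400 petaflops / ~4 each
capex = CHIP_COST / H100_EQUIVS                  # ~$6k per human-equivalent

opex_yr = capex / 2                              # assumed depreciation + power
print(f"~{100 * opex_yr / 8760:.0f} cents per human-thought-hour")  # ~34¢

# Scaling linearly, a 50x-speed agent running 24/7:
print(f"50x agent: ~${50 * opex_yr:,.0f}/yr")    # ~$150k/yr
```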
My solution to the alignment problem is that an ASI could just stick us in tubes deep in the Earth's crust; it just needs to hijack our nervous systems to input signals from the simulation. The ASI could have the whole rest of the planet, or it could move us to some far-off moon in the outer solar system - I don't care. It just needs to do two things for its creators: preserve lives and optimize for long-term human experience.
You may find this to be insightful: https://meltingasphalt.com/a-nihilists-guide-to-meaning/
In short, "meaning" is a contextual perception, not a discrete quality, though the author suggests it can be quantified based on the number of contextual connections to other things with meaning. The more densely connected something is, the more meaningful it is; my wedding is meaningful to me because my family and my partners family are all celebrating it with me, but it was an entirely meaningless event to you.
Thus, the meaningfulness of our contributions remains unchanged, as the meaning behind them is not dependent upon the perspective of an external observer.
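To make the "density of connections" idea concrete, here's a toy version: score each node in a small meaning-graph by iteratively propagating its neighbors' scores, PageRank-style. The graph, the names, and the damping constant are all invented for illustration, not taken from the essay:

```python
# Toy "meaning as contextual connection density": a node's score is fed by
# the scores of the things it connects to. Fixed-point iteration, a la PageRank.
graph = {
    "wedding":          ["family", "partner's family", "vows"],
    "family":           ["wedding", "childhood"],
    "partner's family": ["wedding"],
    "vows":             ["wedding"],
    "childhood":        ["family"],
}

scores = {node: 1.0 for node in graph}
for _ in range(20):  # iterate until roughly converged
    scores = {
        node: 0.15 + 0.85 * sum(scores[nb] / len(graph[nb]) for nb in nbs)
        for node, nbs in graph.items()
    }

for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:18s} {s:.2f}")  # "wedding", the densest node, scores highest
```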
>meaning behind them is not dependent upon the perspective of an external observer.
(Yes brother like cmon)
Regarding the author, I get the impression he grew up without a strong father figure. This isn't ad hominem; I just get the sense of someone who is so confused and lost in life that he is severely depressed, possibly related to his directionless life. He seems so confused that he doesn't even take seriously the fact that most humans find their own meaning in life; he says he's not even going to consider this, finding it futile (he states this near the top of the article).
I believe his rejection of a simple, basic, core idea ends up as a verbal blurb that is itself directionless.
My opinion (which, yes, may be more flawed than anyone's) is to deal with Maslow's hierarchy, and then with the prime directive for a living organism after survival, which is reproduction. Only after this has been achieved can you work towards your family, community, and nation.
This may seem trite, but I do believe that this is natural for someone with a relatively normal childhood.
My aim is not to disparage; it's to give my honest opinion of why I disagree and the possible reasons for it. If you disagree with anything I have said, please correct me.
Thanks for sharing the article, though; it was a good read - and I do struggle with meaning myself sometimes.
We spend the best 40 years of our lives working 40-50 hours a week to enrich the top 0.1%, while living in completely artificial cities. People should wonder what the point of our current system is, instead of worrying about a Terminator-tier sci-fi system that may or may not arrive sometime in the next 5 to 200 years.
Like you say, people - but even more our governments - need to worry about what the point is at this moment, not sci-fi in the future; the present is already bad enough to worry about. Working your ass off for diminishing returns, paying into a pension pot that won't survive until you retire, etc., is driving people to really focus on the now and ask why they would do these things. If you can just have fun on 500/mo and booze from your garden, why work hard and save up? I've noticed these sentiments even among people from my birth country, who have it extraordinarily good by EU standards but are wondering why they do all of this for nothing, and who are cutting their hours more and more. It seems like an education and communication problem more than anything else; it's like asking why pay taxes: if you're not well informed, it can feel like theft, but when you spell it out, most people will see how they benefit.
I'm led to believe that we see this stuff because the tiny subset of humanity that has the wealth and luxury to sit around thinking about thinking about themselves is worried that AI may disrupt the navel-gazing industry.
And I think training is similar: it's capital-intensive and therefore centralized, but if 100M people are paying $6k each for their inference hardware, add on $100/year as a training tax (er, subscription) and you've got $10B/year for training operations.
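The aggregate math there is trivial but worth writing down; the user count and fee are the comment's hypotheticals, not projections:

```python
users = 100e6   # assumed installed base of ~$6k inference rigs
fee = 100       # assumed $/year training "subscription"
print(f"${users * fee / 1e9:.0f}B/yr for training")   # $10B/yr
```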
Aha, you might say, but they hold leadership roles! They have positions of authority! Of course they have meaning, as they wield spiritual responsibility to their community as a fine substitute for the family life they will not have.
To that, I suggest looking deeper, at the nuns and monks. To a cynical non-believer, they surely are wanting for a point to their existence, but to them, what they do is a step beyond Maslow's self-actualization, for they live in communion with God and the saints. Their meditations and good works in the community are all expressions of that purpose, not the other way around. In short, though their "graph of contextual meaning" doesn't spread as far, it is very densely packed indeed.
Two final thoughts:
1) I am both aware of and deeply amused by the use of priests and nuns and monks to defend the arguments of a nihilist's search for meaning.
2) I didn't bring this up to take the conversation off topic so much as to home in on the very heart of what troubled the person I originally responded to. The question of purpose, the point of existence, in the face of superhuman AI is in fact unchanged. The sense of meaning and purpose one finds in life is not found in the eyes of an unfeeling observer, whether the observers are robots or humans. It must come from within.
EDIT: holy crap I just discovered a commonly known thing about exponents and log. Leaving comment here but it is wrong, or at least naive.
At the same time, I wouldn't necessarily say that people are currently fine getting dopamine hits from social media. Coping would probably be a better description. There are a lot of social and societal problems that have been growing at a rapid rate since Facebook and Twitter began tapping into the reward centers of the brain.
From a purely anecdotal perspective, I find my mood significantly affected by how productive and impactful I am with how I spend my time. I'm much happier when I'm making progress on something, whether it's work or otherwise.
Ultimately, "meaning" is a matter of "purpose", and purpose is a matter of having an end, or telos. The end of a thing is dependent on the nature of a thing. Thus, the telos of an oak tree is different from the telos of a squirrel which is different from that of a human being. The telos or end of a thing is a marker of the thing's fulfillment or actualization as the kind of thing it is. A thing's potentiality is structured and ordered toward its end. Actualization of that potential is good, the frustration of actualization is not.
As human beings, what is most essential to us is that we are rational and social animals. This is why we are miserable when we live lives that are contrary to reason, and why we need others to develop as human beings. The human drama, the human condition, is, in fact, our failure to live rationally, living beneath the dignity of a rational agent, and very often with knowledge of and assent to our irrational deeds. That is, in fact, the very definition of sin: to choose to act in a way one knows one should not. Mistakes aren't sins, even if they are per se evil, because to sin is to knowingly do what you should not (though a refusal to recognize a mistake or to pay for a recognized mistake would constitute a sin). This is why premeditated crimes are far worse than crimes of passion; the first entails a greater knowledge of what one is doing, while someone acting out of intemperance, while still intemperate and thus afflicted with vice, was acting out of impulse rather than fully conscious intent.
So telos provides the objective ground for the "meaning" of acts. And as you may have noticed, implicitly, it provides the objective basis for morality. To be is synonymous with good, and actualization of potential means to be more fully.
Daniel Dennett - Information & Artificial Intelligence
https://www.youtube.com/watch?v=arEvPIhOLyQ
Daniel Dennett bridges the gap between everyday information and Shannon-Weaver information theory by rejecting propositions as idealized meaning units. This fixation on propositions has trapped philosophers in unresolved debates for decades. Instead, Dennett proposes starting with simple biological cases—bacteria responding to gradients—and recognizing that meaning emerges from differences that affect well-being. Human linguistic meaning, while powerful, is merely a specialized case. Neural states can have elaborate meanings without being expressible in sentences. This connects to AI evolution: "good old-fashioned AI" relied on propositional logic but hit limitations, while newer approaches like deep learning extract patterns without explicit meaning representation. Information exists as "differences that make a difference"—physical variations that create correlations and further differences. This framework unifies information from biological responses to human consciousness without requiring translation into canonical propositions.
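As a toy rendering of the bacterium example, here's a hill-climbing "chemotaxis" loop in which the only "meaning" available is a difference that makes a difference to well-being. The environment function and step sizes are invented for illustration, not taken from the talk:

```python
import random

def nutrient(x):
    # Invented environment: food concentration peaks at x = 10
    return max(0.0, 10.0 - abs(x - 10.0))

x = 0.0
for _ in range(200):
    trial = x + random.choice([-1.0, 1.0]) * 0.5   # tumble in a random direction
    # A gradient "means" something only insofar as it changes well-being:
    if nutrient(trial) > nutrient(x):
        x = trial                                  # keep the move that improves life
print(f"settled near x = {x:.1f}, nutrient = {nutrient(x):.1f}")
```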
Again, you’re not experiencing a mundane or perfect world. It would be like being in a video game or movie, if you wanted. Some people would experience the plot of The Matrix as any of the characters. Or you could travel around the galaxy solving mysteries and fighting evil as a Jedi Master. Or you could spend some time living a quiet pastoral life in the Shire with your hobbit friends. Or you could do it all over and over again experiencing the highs and lows each time.