Maybe your calibration isn't poor. Maybe they really are all wrong. But there's a tendency here to assume the people behind the scenes are all charlatans, fueling hype without equal substance and hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.
She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from: what's my friend's birthday? his agent tells yours) automagically, and she is like a friend, always there at your beck and call, like in the movie Her.
Zuckerberg's glasses, which can't take selfies, will only be complementary to our AI phones.
That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (you can't take selfies with glasses).
A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work toward, and your funders must be really damn convinced it's worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project scales of investment. Careers have been nuked for far less.
I can see your point re: running locally, but there's no reason OpenAI can't release a version 0.1, and how many times are you left without an internet connection on your current phone?
Overall, I hate Apple now; it feels so stale compared to the ChatGPT iPhone app. I nerd-rage at dumbass Siri.
The argument presented in the quote there is: "everyone at the AI foundation companies is putting money into AI, therefore we must be near AGI."
The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.
I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.
The two are qualitatively different.
But pulling out your phone to talk to it like a friend...
Maybe you haven't seen the 2013 movie "Her"? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.
Overall, this is what I see happening, and I'm excited for some or possibly all of it to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen ... we'll see :)
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
Re: the Her phone - I see people already trying to build this type of product; one example: https://www.aphoneafriend.com
So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.
I agree with you that there is significantly more there there with AI, but I agree with the parent that the hype cycles are essentially indistinguishable.
Hard to predict!
If we've already hit it, this has already been a very short period of time during which we've seen incredibly valuable new technology commercialized. That's nothing to sneeze at, and fortunes have been and will be rightly made from it.
If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.
But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.
Personally, I do expect a big correction at some point, even if it never reaches the point of bubble bursting. But I have no idea when I expect it to happen, so this isn't, like, an investable thesis.
Not this specifically, but this kind of thing. If I'm getting billions like this, I wanna keep this gravy train going. And it ultimately comes from shareholders.
Technically you are correct. A Ponzi scheme is a single entity paying returns from new marks. It is a straight con.
But some systems can be Ponzi-like in that they require more and more investment, and people get rich by selling into that. Bitcoin is an example.
All technological advances that get adopted are ones that made life easier and, for some, cooler than what they were using before (cell phone to iPhone put the web in our pocket; using your iPhone while driving is dangerous, but talking to your human-like friend isn't). Check out the movie Her - it's mostly what I'm describing above.
Time will tell if any of what I'm saying comes to fruition, but Silicon Valley has been all abuzz about AI agents in the last month or two, and going forward.
If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc.). It encourages people to misjudge the capabilities of the models and ascribe to them all the traits of AI they've seen in TV and movies.
Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.
[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...
[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...
Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?
If this is a bubble and it bursts in a few years, a lot of investors in specific companies, and in the market broadly, will lose a lot of money, but Sam Altman and Jensen Huang will remain very wealthy.
I'm a capitalist and I think there are good reasons for wealth to accrue to those who take risks and drive toward technological progress. But it is also the case that they are incentivized to hype their companies, even if that risks getting out over their skis and leads to a bubble that eventually bursts. There are just lots of ways to extract wealth before a bubble bursts, so the downsides of unwarranted hype are not as acute as they might otherwise be.
Whatever LeCun says (and really, even he said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and is still pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to focus their money on tells you a whole lot.
When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught up in the hype is blatantly false.
No, it doesn't have to be literally everybody to make the point.
Even now, when we have fully capable conversational models, we don't really have any great immediate applications. Our efforts at making them "think" are yielding marginal returns.
OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint... it's not model capability in a vacuum).
Yann does indeed believe that AGI will arrive within a decade, but the important thing is that he is honest that this is an uncertain estimate based on extrapolation.
I want to interact with real people, not bots. I'm already spending most of my time wasting my life in front of a fucking screen for work.
I mean, AI the tech can be spectacular and the hype can be overblown, right? I'm not even sure that the hype is overblown, but it sure feels like the kind of hype that we'll say, a few years from now, was overblown.
I just want an objective opinion from someone who has a deep understanding of the cutting edge.
It’s maddening to try to plan for a future which everyone is incentivised and permitted to fabricate.
This is because it's also a dystopia in disguise. It's social criticism and a cautionary tale about the way fetishizing technology emotionally cripples us, as individuals and as a society. It kind of amazes me that this aspect seems to go over some people's heads.
It's obviously true what Booker said: what one person considers an ideal dream might seem a nightmare to another.
Hype is extremely normal. Everyone with a business hypes it for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour in billions.
Satya just said he has his $80 billion ready. Is Microsoft an "AI foundation company"? Is Google? Is Meta?
The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.
And I'm not saying this means the investment is guaranteed to be worth it.
At any rate, I'm not saying this means that all this investment is guaranteed to pay off.
[0] With 300 million weekly active users, 1 billion messages per day, and a #8 ranking in worldwide site visits over the last few months, just two years after release, ChatGPT is the software product with the fastest adoption ever.
What I describe is much like the movie Her; Sam Altman, OpenAI's CEO, asked Scarlett Johansson (the voice of the AI in that movie) to be the voice of ChatGPT. ChatGPT is now a little like Her, in that you can have a full conversation with it, unlike Siri. It's just that, at the moment, you don't see what it looks like; it isn't a FaceTime-style call with a human-like AI friend/assistant (which could look and sound like a deceased loved one - that's my own crazy idea, not from the movie). Maybe in the future it will be; I'm betting it will, but it's only a guess, and time will tell.
I'm awaiting your downvote :) but will revisit this thread in a few years or more. Well, if I'm right ;)
I'm just jumping ahead, using what was seen in Her to envision where we are (possibly) headed, as well as adding my own crazy idea ... your AI assistant friend, seen on your lock screen via a FaceTime-style UI/UX call, looks and sounds like a deceased loved one. Mom still guiding you through life.
Edit: aaaand right after posting, I stumble across a documentary running on TV at this very moment, in which a dying guy trained an AI on himself to accompany his widow after his death. Seems you're not the only one to find that desirable...
'Well known' password notwithstanding, let's use the following as a password:
correct-horse-battery-staple
This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password is protected by SHA2-256 (assuming no cryptographic weaknesses exist; I haven't checked, this is purely academic), and that at least 50% of the keyspace would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).
I looked up a random hashcat benchmark and found an average of 20 gigahashes/second (GH/s) for a single RTX 4090.
If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s each (a combined firepower of 2,000 GH/s, i.e. 2 × 10^12 hashes per second) and absolutely perfect running conditions, it would take roughly eleven sextillion (1.1 × 10^22) years to hit that 50% mark. Earth will be long gone by the time that rolls around.
Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): roughly 1.1 quintillion (1.1 × 10^18) years.
Using some recommended password specifications from NIST - 15 characters comprising upper- and lower-case letters, numbers, and special characters - let's try:
dXIl5p*Vn6Gt#BH
Despite the higher per-character complexity, this shorter password only ekes out roughly 400 octillion (94^15, about 4 × 10^29) possible combinations - a tiny fraction of the passphrase's keyspace. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around 3.1 billion years to have a 50% chance of success. My calculator didn't even turn my answer into a scientific number!
More alarming still is when 1,000,000 RTX 4090s get sicced on this shorter hashed password: around 313,000 years to test half of its keyspace.
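For anyone who wants to check the arithmetic or swap in their own assumptions, here's a rough back-of-the-envelope sketch in Python. The passphrase keyspace is the ~1.39 × 10^42 estimate from above (which depends on what character set you assume the attacker searches), the 15-character keyspace assumes the full 94-character printable ASCII set, and the 20 GH/s per RTX 4090 is the benchmark figure quoted earlier.

  # Back-of-the-envelope brute-force estimate for a single SHA2-256 password hash.
  # Keyspace figures and the 20 GH/s-per-GPU rate match the comment above.

  SECONDS_PER_YEAR = 365.25 * 24 * 3600
  GH = 1e9  # one gigahash = 10^9 hash attempts per second

  def crack_time_years(keyspace, gpus, gh_per_gpu=20, fraction=0.5):
      """Years to test `fraction` of `keyspace` with `gpus` cards at `gh_per_gpu` GH/s."""
      hashes_per_second = gpus * gh_per_gpu * GH
      return (keyspace * fraction) / hashes_per_second / SECONDS_PER_YEAR

  # 28-char passphrase, using the ~1.39e42 keyspace estimate quoted above
  passphrase_keyspace = 1.39e42
  # 15-char password drawn from the 94-character printable ASCII set (no space)
  complex_keyspace = 94 ** 15  # ~4.0e29

  for gpus in (100, 1_000_000):
      years_long = crack_time_years(passphrase_keyspace, gpus)
      years_short = crack_time_years(complex_keyspace, gpus)
      print(f"{gpus:>9} GPUs: passphrase ~{years_long:.1e} years, "
            f"15-char password ~{years_short:.1e} years")

Swap 1.39e42 for 27**28 (attacker knows it's lowercase plus hyphens) or 94**28 (attacker assumes nothing) to see how sensitive the passphrase number is to that assumption; the "Earth will be long gone" conclusion holds either way.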
I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.
All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.
How so?
The genie is out of the bottle, and America must keep its momentum in AI up ... ahead of all other countries, for its continued prosperity and security!