zlacker

[parent] [thread] 11 comments
1. root_a+(OP)[view] [source] 2024-05-18 16:26:22
More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die: even if AGI is possible, it won't be achieved any time in the foreseeable future, and it certainly won't emerge from quadratic-time brute force over a fraction of the text and images scraped from the internet.
replies(2): >>MrScru+G2 >>dclowd+Ac
2. MrScru+G2[view] [source] 2024-05-18 16:56:53
>>root_a+(OP)
Clearly we don’t know when, or if, AGI will happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’. It probably won’t result from just scaling LLMs, but that’s why a lot of researchers are trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.
replies(3): >>troupo+O2 >>timr+Z5 >>zzzeek+2c
3. troupo+O2[view] [source] [discussion] 2024-05-18 16:58:17
>>MrScru+G2
> the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’

It was the expectation of many people in the field in the 1980s, too

4. timr+Z5[view] [source] [discussion] 2024-05-18 17:34:40
>>MrScru+G2
The same way that the expectation of many people working in the self-driving field in 2016 was that Level 5 autonomy was right around the corner.

Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).

replies(3): >>schmid+29 >>huevos+ke >>thayne+Gg
5. schmid+29[view] [source] [discussion] 2024-05-18 17:58:13
>>timr+Z5
Sure, but blanket pessimism isn't very insightful either. I'll use the same example you did: self-driving. The public (or "median nerd") consensus has shifted from "right around the corner" (back when cars struggled to lane-follow if the paint wasn't sharp) to "it's a scam and will never work," even as the technology has taken off alongside the other kinds of AI and started clearing hurdles every month that naysayers said would take decades. Negotiating right-of-way, inferring intent, handling obstructed and ad-hoc roadways... the nasty intractables turned out not to be intractable, but sentiment has not caught up.

For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.

Pessimism isn't insight. There is no substitute for the hard work of "try and see."

6. zzzeek+2c[view] [source] [discussion] 2024-05-18 18:20:50
>>MrScru+G2
>but the expectation of many people working in the field is that it will arrive in what qualifies as the ‘near future’.

They think this because it serves their interest in attracting an enormous amount of attention and money to an industry from which they personally hope to make millions of dollars.

My money is firmly on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like AGI could possibly arrive.

7. dclowd+Ac[view] [source] 2024-05-18 18:24:58
>>root_a+(OP)
Ah yes, the “our brains are somehow inherently special” coalition. Hand-waving away the capabilities of LLMs as dumb math while not having a single clue about the math that underlies our own brains’ functionality.

I don’t know if you’re conflating capability with consciousness, but frankly it doesn’t matter whether the thing knows it’s alive if it still makes everyone obsolete.

replies(1): >>root_a+4w
8. huevos+ke[view] [source] [discussion] 2024-05-18 18:42:04
>>timr+Z5
While I agree with your point, I take self-driving rides on a weekly basis, and you see them all over SF nowadays.

We overestimate short-term progress but underestimate medium- and long-term progress.

replies(2): >>timr+dh >>Kwpols+Pi
9. thayne+Gg[view] [source] [discussion] 2024-05-18 19:01:31
>>timr+Z5
The same thing happened with nuclear fusion. People working on it have been saying sustained fusion power is right around the corner for decades, and we still don't have it.

And it _could_ be just one clever breakthrough away, and that could happen tomorrow, or it could be centuries away. There's no way to know.

10. timr+dh[view] [source] [discussion] 2024-05-18 19:07:38
>>huevos+ke
I don't think we disagree, but I will say that "a handful of people in SF and AZ taking rides in cars that are remotely monitored 24/7" is not the drivers-are-obsolete-now, near-term future that was promised in 2016. Remember the panic over long-haul truckers being unemployed Real Soon Now? I do.

Back then, I said that the future of self-driving would likely be the growth in capability of "driver assistance" features to an asymptotic point that we would redefine as "level 5" in the distant future (or perhaps the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.

11. Kwpols+Pi[view] [source] [discussion] 2024-05-18 19:22:44
>>huevos+ke
Self-driving taxis are available in only a handful of cities around the world. This is far from progress. And how often are those taxis secretly controlled by an Indian call center?
12. root_a+4w[view] [source] [discussion] 2024-05-18 21:12:16
>>dclowd+Ac
This isn't a question of understanding the brain. We don't even have a theory of AGI; the idea that LLMs are somehow anywhere near approaching an existential threat to humanity is science fiction.

LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context, then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding, or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.
