zlacker

[parent] [thread] 56 comments
1. robwwi+(OP)[view] [source] 2023-05-16 20:36:34
Counterpoint: there is growing speculation that we are just about to transition to AGI.
replies(2): >>Eamonn+k >>causal+v3
2. Eamonn+k[view] [source] 2023-05-16 20:38:23
>>robwwi+(OP)
Growing? Or have the same voices who have been saying it since the aughts suddenly been platformed?
replies(3): >>TeMPOr+Bb >>jack_p+kf >>lostms+Tx
3. causal+v3[view] [source] 2023-05-16 20:53:00
>>robwwi+(OP)
Growing among whom? The more I learn about and use LLMs, the more convinced I am we're in a local maximum, and the only way they're going to improve is by getting smaller and cheaper to run. They're still terrible at logical reasoning.

We're going to get some super cool and some super dystopian stuff out of them but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.

replies(6): >>ben_w+67 >>TeMPOr+ba >>behnam+xb >>bernie+5q >>stuckk+7r >>ux-app+ws
◧◩
4. ben_w+67[view] [source] [discussion] 2023-05-16 21:16:08
>>causal+v3
> They're still terrible at logical reasoning.

Are they even trying to be good at that? Serious question; using an LLM as a logical processor is as wasteful and about as well-suited as using the Great Pyramid of Giza as an AirBnB.

I've not tried this, but I suspect the best way is more like asking the LLM to write a Coq script for the scenario, instead of trying to get it to solve the logic directly.
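
To make that concrete, here's a hand-written sketch of the kind of thing I mean (in Lean rather than Coq, purely illustrative and untested; the LLM emits the script, the proof checker does the actual logic):

    -- Toy scenario: "all men are mortal; Socrates is a man."
    -- The checker, not the LLM, verifies the inference.
    variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

    theorem socrates_mortal
        (h1 : ∀ p, Man p → Mortal p)
        (h2 : Man socrates) :
        Mortal socrates :=
      h1 socrates h2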

replies(2): >>staunt+ye >>fsckbo+Pq
◧◩
5. TeMPOr+ba[view] [source] [discussion] 2023-05-16 21:30:39
>>causal+v3
> The more I learn about and use LLMs, the more convinced I am we're in a local maximum

Not sure why you would believe that.

Inside view: the qualitative improvements LLMs made at scale took everyone by surprise; I don't think anyone understands them well enough to make a convincing argument that LLMs have exhausted their potential.

Outside view: what local maximum? Wake me up when someone else makes an LLM comparable in performance to GPT-4. Right now, there is no local maximum. There's one model far ahead of the rest, and that model is actually below its peak performance - a side effect of OpenAI lobotomizing it with aggressive RLHF. The only thing remotely suggesting we shouldn't expect further improvements is... OpenAI saying they kinda want to try some other things, and (pinky swear!) aren't training GPT-4's successor.

> and the only way they're going to improve is by getting smaller and cheaper to run.

Meaning they'll be easier to chain. The next big leap could in fact be a bunch of compressed, power-efficient LLMs talking to each other. Possibly even managing their own deployment.

> They're still terrible at logical reasoning.

So is your unconscious / system 1 / gut feel. LLMs are less like one's whole mind, and much more like one's "inner voice". Logical skills aren't automatic, they're algorithmic. Who knows what the limit is of a design in which an LLM as "system 1" operates a much larger, symbolic, algorithmic suite of "system 2" software? We're barely scratching the surface here.
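
A crude sketch of that division of labour, with a hypothetical llm() standing in for whatever completion API you use (any real system would need a proper tool-calling protocol, this is just the shape):

    # Hypothetical sketch: the LLM ("system 1") only routes the request;
    # a deterministic "system 2" tool does the actual algorithmic work.
    import json

    def llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in for any chat-completion API

    TOOLS = {
        # toy evaluator with builtins stripped; a real system would use a parser
        "calculator": lambda arg: str(eval(arg, {"__builtins__": {}})),
    }

    def answer(question: str) -> str:
        plan = json.loads(llm(
            'Reply as JSON {"tool": ..., "arg": ...} for: ' + question))
        result = TOOLS[plan["tool"]](plan["arg"])  # system 2 runs the algorithm
        return llm(f"Question: {question}\nTool result: {result}\nAnswer:")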

◧◩
6. behnam+xb[view] [source] [discussion] 2023-05-16 21:40:56
>>causal+v3
My thoughts exactly. It's hard to see the signal among all the noise surrounding LLMs. Even if they say they're gonna hurt you, they have no idea what it means to hurt, what "you" is, or how they're going to achieve that goal. They just spit out things that resemble what people have said online. There's no harm from a language model that's literally a "language" model.
replies(2): >>visarg+Wf >>forget+Ig
◧◩
7. TeMPOr+Bb[view] [source] [discussion] 2023-05-16 21:41:41
>>Eamonn+k
Yes, growing. It's not that the Voices have suddenly been "platformed" - it's that the field made a bunch of rapid jumps which made the message of those Voices more timely.

Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what one would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.

replies(4): >>b112+Lk >>rtkwe+Jn >>2muchc+sU >>dhkk+lY
◧◩◪
8. staunt+ye[view] [source] [discussion] 2023-05-16 21:58:21
>>ben_w+67
Indeed, AI reinforcement learning applied to formal verification is what I'm looking forward to the most. Unfortunately it seems a very niche endeavour at the moment.
◧◩
9. jack_p+kf[view] [source] [discussion] 2023-05-16 22:02:49
>>Eamonn+k
When the sky is turning a dark shade of red, it makes sense to hear out the doomsayers.
replies(1): >>matwoo+4g
◧◩◪
10. visarg+Wf[view] [source] [discussion] 2023-05-16 22:06:31
>>behnam+xb
A language model can do many things based on language instructions, some harmless, some harmful. They are both instructable and teachable. Depending on the prompt, they are not just harmless LLMs.
◧◩◪
11. matwoo+4g[view] [source] [discussion] 2023-05-16 22:07:05
>>jack_p+kf
And the vast majority of the time it's just a nice sunset.
replies(2): >>jack_p+Fi >>nullse+by
◧◩◪
12. forget+Ig[view] [source] [discussion] 2023-05-16 22:11:02
>>behnam+xb
You appear to be ignoring a few thousand years of recorded history around what happens when a demagogue gets a megaphone. Human-powered astroturf campaigns were all it took to get randoms convinced lizard people are an existential threat and then -act- on that belief.
replies(2): >>nullse+Tn >>goatlo+qT
◧◩◪◨
13. jack_p+Fi[view] [source] [discussion] 2023-05-16 22:22:15
>>matwoo+4g
a sunset at lunch time hits different
◧◩◪
14. b112+Lk[view] [source] [discussion] 2023-05-16 22:35:54
>>TeMPOr+Bb
> it's that the field made a bunch of rapid jumps

I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.

I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s Puritans painted a dress on it.

I often wonder if the filter is more to hide its true capabilities.

replies(4): >>stuckk+Cq >>andyba+gr >>ux-app+ds >>TeMPOr+FC
◧◩◪
15. rtkwe+Jn[view] [source] [discussion] 2023-05-16 22:58:50
>>TeMPOr+Bb
What are the actual new advancements? LLMs to me are great at faking AGI but are nowhere near actually being a workable general AI. The biggest example to me is that you can correct even the newest ChatGPT and ask it to be truthful, but it'll make up the same lie within the same continuous conversation. IMO the difference between being able to act truth-y and actually being truthful is a huge gap that involves the core ideas of what separates an actual AGI from a really good chatbot.

Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

replies(2): >>ux-app+wr >>canjob+0V
◧◩◪◨
16. nullse+Tn[view] [source] [discussion] 2023-05-16 22:59:34
>>forget+Ig
I think I'm just going to build and open source some really next gen astroturf software that learns continuously as it debates people online in order to get better at changing people's minds. I'll make sure to include documentation in Russian, Chinese and Corporate American English.

What would a good name be? TurfChain?

I'm serious. People don't believe this risk is real. They keep hiding it behind some nameless, faceless 'bad actor', so let's just make it real.

I don't need to use it. I'll just release it as a research project.

replies(2): >>forget+ZL >>edgyqu+101
◧◩
17. bernie+5q[view] [source] [discussion] 2023-05-16 23:13:14
>>causal+v3
I'm agreeing with this viewpoint more and more the more I use LLMs.

They’re text generators that can generate compelling content because they’re so good at generating text.

I don’t think AGI will arise from a text generator.

◧◩◪◨
18. stuckk+Cq[view] [source] [discussion] 2023-05-16 23:15:58
>>b112+Lk
Lately I have been picturing comments, and this is truly iconic haha.
◧◩◪
19. fsckbo+Pq[view] [source] [discussion] 2023-05-16 23:17:17
>>ben_w+67
> using the Great Pyramid of Giza as an AirBnB

, were you allowed to do it, would be an extremely profitable venture. Taj Mahal too, and yes, I know it's a mausoleum.

replies(1): >>ben_w+Ru1
◧◩
20. stuckk+7r[view] [source] [discussion] 2023-05-16 23:18:38
>>causal+v3
I was looking at the A100 80GB cards. $14k a pop. We're gonna see another GPU shortage when these models become less resource-dependent. CRYPTO era all over again.
◧◩◪◨
21. andyba+gr[view] [source] [discussion] 2023-05-16 23:19:23
>>b112+Lk
You seem to be making several points at once and I'm not sure they all join up?
◧◩◪◨
22. ux-app+wr[view] [source] [discussion] 2023-05-16 23:21:24
>>rtkwe+Jn
>Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

if you had described GPT to me 2 years ago, I would have said no way, we're still a long way away from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving. And yet here we are.

I very much doubt that in 5 years' time we'll be talking about how GPT peaked in 2023.

replies(2): >>q7xvh9+Rr >>rtkwe+MH
◧◩◪◨⬒
23. q7xvh9+Rr[view] [source] [discussion] 2023-05-16 23:24:18
>>ux-app+wr
Seriously. It's worth pausing for a minute to note that the Turing Test has been entirely solved.

In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.

And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"

replies(3): >>eroppl+zt >>canjob+GV >>2muchc+uW
◧◩◪◨
24. ux-app+ds[view] [source] [discussion] 2023-05-16 23:26:31
>>b112+Lk
>I often wonder if the filter is more to hide its true capabilities.

right now we're all sharing a slice of GPT. I wouldn't be at all surprised if there's some uber GPT (which requires a lot more processing per response) running in a lab somewhere that blows what's publicly available out of the water.

◧◩
25. ux-app+ws[view] [source] [discussion] 2023-05-16 23:29:13
>>causal+v3
>They're still terrible at logical reasoning.

2 years ago a machine that understands natural language and is capable of any arbitrary, free-form logic or problem solving was pure science fiction. I'm baffled by this kind of dismissal tbh.

>but LLMs are never going to go into a recursive loop of self-improvement

never is a long time.

replies(1): >>leonid+jD
◧◩◪◨⬒⬓
26. eroppl+zt[view] [source] [discussion] 2023-05-16 23:34:51
>>q7xvh9+Rr
It has not been solved. Even GPT-4, as impressive as it is for some use cases, is dumb and I can tell the difference between it and a human in a dozen sentences just by demanding sufficient precision.

In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.

replies(1): >>ben_w+DQ1
◧◩
27. lostms+Tx[view] [source] [discussion] 2023-05-17 00:03:05
>>Eamonn+k
Growing is quite apt here. No matter what you or I think, more and more people get the sense that AI is coming, and they talk about it.
◧◩◪◨
28. nullse+by[view] [source] [discussion] 2023-05-17 00:05:15
>>matwoo+4g
I'm so glad that we 100% know for sure that this too is the vast majority of the time.
◧◩◪◨
29. TeMPOr+FC[view] [source] [discussion] 2023-05-17 00:38:13
>>b112+Lk
> I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic

Try to get your hands on GPT-4, even if it means paying the $20/mo subscription for ChatGPT Plus. There is a huge qualitative jump between the two models.

I got API access to GPT-4 some two weeks ago; my personal experience is, GPT-3.5 could handle single, well-defined tasks and queries well, but quickly got confused by anything substantial. Using it was half feelings of amazement, and half feelings of frustration. GPT-4? Can easily handle complex queries and complex tasks. Sure, it still makes mistakes, but much less frequently. GPT-4 for me is 80% semi-reliable results, 20% trying to talk it out of pursuing directions I don't care about.

Also, one notable difference: when GPT-4 gives me bad or irrelevant answers, most of the time this is because I didn't give it enough context. I.e. it's my failure at communicating. A random stranger, put in place of GPT-4, would also get confused, and likely start asking me questions (something LLMs generally don't do yet).

> I don't understand their whole filter business.

Part preferences, part making its "personality" less disturbing, and part PR/politics - the last couple of times someone gave the general public access to an AI chatbot, it quickly got trolled, and much bad press followed. Doesn't matter how asinine the reaction was - bad press is bad press, stocks go down. Can't have it.

> I often wonder if the filter is more to hide its true capabilities.

I don't think it's to hide the model's capabilities, but it's definitely degrading them. Kind of expected - if you force-feed the model inconsistent and frequently irrational overrides on highly specific topics, don't be surprised if the model's ability to (approximately) reason starts to break down. Maybe at some point LLMs will start to compartmentalize, but we're not there yet.

◧◩◪
30. leonid+jD[view] [source] [discussion] 2023-05-17 00:42:23
>>ux-app+ws
Two years ago we already had GPT-2, which was capable of some problem solving and logic following. It was archaic, sure, and it produced a lot of gibberish, yes, but if you followed OpenAI's releases closely, you wouldn't think that something like GPT-3.5 was "pure science fiction"; it would just look like the inevitable evolution of GPT-2 in a couple of years, given the right conditions.
replies(2): >>ux-app+9E >>canjob+eW
◧◩◪◨
31. ux-app+9E[view] [source] [discussion] 2023-05-17 00:47:39
>>leonid+jD
That's pedantic. Switch 2 years to 5 years and the point still stands.
replies(1): >>edgyqu+QZ
◧◩◪◨⬒
32. rtkwe+MH[view] [source] [discussion] 2023-05-17 01:15:11
>>ux-app+wr
Again, I'm not really saying GPT has peaked; I'm saying there's a categorical difference between GPT and AGI. A good enough fake might perform well enough to function like one, but I have my doubts that it will. Without a way to deal with, and in some sense of the word understand, facts, I don't think LLMs are suitable for use as anything beyond an aide for humans (for starters, because they can't determine internally what is a fact vs. a hallucination, so you have to constantly check their work).
◧◩◪◨⬒
33. forget+ZL[view] [source] [discussion] 2023-05-17 01:56:50
>>nullse+Tn
It's not like there isn't a market waiting impatiently for the product...
replies(1): >>nullse+bc1
◧◩◪◨
34. goatlo+qT[view] [source] [discussion] 2023-05-17 03:12:22
>>forget+Ig
So the LLM demagogue is going to get people to create gray goo or make a lot of paper clips?
◧◩◪
35. 2muchc+sU[view] [source] [discussion] 2023-05-17 03:25:43
>>TeMPOr+Bb
How do we define a general intelligence?
◧◩◪◨
36. canjob+0V[view] [source] [discussion] 2023-05-17 03:31:05
>>rtkwe+Jn
The fact that it’s a system you’d even consider to be “lying” or “truthful” is a huge advance over anything available 5 years ago.
replies(1): >>rtkwe+cV1
◧◩◪◨⬒⬓
37. canjob+GV[view] [source] [discussion] 2023-05-17 03:37:45
>>q7xvh9+Rr
As impressive as GPT-4 is, it still makes mistakes humans would not, and so I wouldn't say it passes the Turing Test.

I generated this just now:

Me: Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

GPT4: Based on the information provided, it is not possible to determine who is sad and who is angry, as there is a contradiction in the given statements. If Jack and Jill are the only ones in the room and sitting next to each other, then the person sitting next to Jack would be Jill and the person sitting next to Jill would be Jack. The statements about their emotions conflict with each other, so it is not possible to accurately determine who is sad or angry.
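
(For what it's worth, the intended reading is consistent: next-to-Jack can only denote Jill, so Jill is sad and Jack is angry. A hand-written, untested Lean sketch of that reading, in the spirit of the proof-assistant idea upthread:)

    -- With only two people, "nextTo jack" denotes jill and vice versa,
    -- so the two emotion statements don't conflict at all.
    variable (Person : Type) (jack jill : Person)
    variable (nextTo : Person → Person) (Sad Angry : Person → Prop)

    example
        (h1 : nextTo jack = jill) (h2 : nextTo jill = jack)
        (hs : Sad (nextTo jack)) (ha : Angry (nextTo jill)) :
        Sad jill ∧ Angry jack := by
      rw [h1] at hs
      rw [h2] at ha
      exact ⟨hs, ha⟩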

replies(1): >>jdkee+0Z
◧◩◪◨
38. canjob+eW[view] [source] [discussion] 2023-05-17 03:43:35
>>leonid+jD
In hindsight it’s an obvious evolution, but in practice vanishingly few people saw it coming.
replies(1): >>leonid+ms2
◧◩◪◨⬒⬓
39. 2muchc+uW[view] [source] [discussion] 2023-05-17 03:45:55
>>q7xvh9+Rr
>And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?”

Well yeah. Imagine you tell a small child that knows about calculators, “Hey can you work out 18763 + 38284, for me?” They might struggle and then maybe fetch a calculator.

The LLMs attempt to predict the answer. WTF? It's a computer, and it can see that that is a plus sign. Just understand it's addition, and use the rest of your computer brain to do the sum. Hell, it is connected to the internet, and we just taught you everything since before 2021. Just call out to Wolfram and give me the answer.

But that’s not how computers work. And we keep saying “AI” but that I is doing a lot of heavy lifting.
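
The obvious patch is to intercept the stuff a CPU is already perfect at. A toy sketch (llm() is a hypothetical stand-in for whatever model API you have; a real router would cover far more than one pattern):

    # Toy sketch: catch plain addition and hand it to the ALU
    # instead of letting the model predict digits token by token.
    import re

    def llm(prompt: str) -> str:
        raise NotImplementedError  # whatever chat API you use

    def ask(question: str) -> str:
        m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*", question)
        if m:  # the "fetch a calculator" move
            return str(int(m.group(1)) + int(m.group(2)))
        return llm(question)

    # ask("18763 + 38284") -> "57047", computed, not predicted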

◧◩◪
40. dhkk+lY[view] [source] [discussion] 2023-05-17 04:10:14
>>TeMPOr+Bb
Sources please. Every expert interview I've seen with AI researchers who have been in the game since the beginning has said the same: GPTs are not a massive breakthrough in the field of AI research.
replies(1): >>TeMPOr+CH1
◧◩◪◨⬒⬓⬔
41. jdkee+0Z[view] [source] [discussion] 2023-05-17 04:17:35
>>canjob+GV
Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

ChatGPT: Since Jack and Jill are the only two people in the room and they are sitting next to each other, the person sitting next to Jack is Jill and the person sitting next to Jill is Jack. Given the conditions you provided:

Jill is the one who is sad because she is sitting next to Jack. Jack is the one who is angry because he is sitting next to Jill.

replies(1): >>ChatGT+Pi1
◧◩◪◨⬒
42. edgyqu+QZ[view] [source] [discussion] 2023-05-17 04:27:14
>>ux-app+9E
No it isn't. Even before transformers, people were doing cool things with LSTMs, and with RNNs before that. People following this space haven't really been surprised by any of these advancements. It's a straightforward path imo.
◧◩◪◨⬒
43. edgyqu+101[view] [source] [discussion] 2023-05-17 04:28:45
>>nullse+Tn
I just don’t see how it’s going to be significantly worse than existing troll farms etc. This prediction appears significantly overblown to me.
replies(2): >>forget+T51 >>nullse+5B1
◧◩◪◨⬒⬓
44. forget+T51[view] [source] [discussion] 2023-05-17 05:35:27
>>edgyqu+101
Does it really? You think LLM-powered propaganda distribution services can't out-scale existing troll farms? Or do a better job of evading spam filters?
replies(1): >>edgyqu+Sp2
◧◩◪◨⬒⬓
45. nullse+bc1[view] [source] [discussion] 2023-05-17 06:46:53
>>forget+ZL
It's definitely not something I would attempt to productize and profit off of. I'm virtually certain someone will, and I'm sure that capability is being worked on as we speak, since we already know this type of thing occurs at scale.

My motivation would be simply to shine a light on it. Make it real for people, so we have things to talk about other than just the hypotheticals. It's the kind of tooling that, if you're seriously motivated to employ it, you'd probably prefer remain secret or undetected, at least until after it had done its work for you. I worry that the 2024 US election will be the real litmus test for these things. All things considered, it'd be a shame if we go through another Cambridge Analytica moment that in hindsight we really ought to have seen coming.

Some people have their doubts, and I understand that. These issues are so complex that no one individual can hope to have an accurate mental model of the world that is going to serve them reliably again and again. We're all going to continue to be surprised as events unfold, and the degree to which we are surprised indicates the degree to which our mental models were lacking and got updated. That to me is why I'm erring on the side of pessimism and caution.

◧◩◪◨⬒⬓⬔⧯
46. ChatGT+Pi1[view] [source] [discussion] 2023-05-17 07:54:36
>>jdkee+0Z
Does the fact you both got different answers make it better or worse at reasoning?
◧◩◪◨
47. ben_w+Ru1[view] [source] [discussion] 2023-05-17 09:48:14
>>fsckbo+Pq
I can see the reviews in my head already:

1 star: No WiFi, no windows, no hot water

1 star: dusty

1 star: aliens didn't abduct me :(

5 stars: lots of storage room for my luggage

4 stars: service good, but had weird dream about a furry weighing my soul against a feather

1 star: aliens did abduct me :(

2 stars: nice views, but smells of camel

◧◩◪◨⬒⬓
48. nullse+5B1[view] [source] [discussion] 2023-05-17 10:49:31
>>edgyqu+101
Only time will tell.
◧◩◪◨
49. TeMPOr+CH1[view] [source] [discussion] 2023-05-17 11:47:03
>>dhkk+lY
> Sources please.

My own eyes? The hundreds of thousands of scientific papers, blog posts, news reports and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?

At this point I'd reconsider if the experts you listened to are in fact experts.

Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.

replies(1): >>dhkk+Nm3
◧◩◪◨⬒⬓⬔
50. ben_w+DQ1[view] [source] [discussion] 2023-05-17 12:48:30
>>eroppl+zt
"Dumb" isn't why the Turing Test isn't solved. (Have you seen unmoderated chat with normal people? Heck, even smart people outside the domain of expertise; my mum was smart enough to get into university in the UK in the early 60s, back when that wasn't the default, but still believed in the healing power of crystals, homeopathic sodium chloride and silicon dioxide, and Bach flower remedies…)

ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.

The problem with 3.5 is that it's simultaneously not mastered anything, and yet also beats everyone in whatever they've not mastered - an extremely drunk 50,000-year-old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Mustermann in a blind hour-long trial.

replies(1): >>rtkwe+OW1
◧◩◪◨⬒
51. rtkwe+cV1[view] [source] [discussion] 2023-05-17 13:15:32
>>canjob+0V
That's more a convenience of language than an actual "It's Alive!". Calling them hallucinations or inaccuracies is unwieldy, and the former has the same kind of implied attribution of a mind. We know for sure that's not there; my internal model for those is just a stupendously complex Markov chain, because to my understanding that's all LLMs are currently doing.
◧◩◪◨⬒⬓⬔⧯
52. rtkwe+OW1[view] [source] [discussion] 2023-05-17 13:23:25
>>ben_w+DQ1
The lack of an ability to take in new information is maybe the crux of my issues with the LLM-to-AGI evolution. To my understanding, the only way to have it even kind of learn something is to include it in a preamble it reprocesses every time, which is maybe workable for small facts but breaks down for updating it from the 202X corpus it was trained on.
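
In code terms, the preamble trick amounts to something like this (a sketch with a hypothetical llm callable; note nothing in the model's weights ever changes):

    # Toy sketch of context-window "learning": every taught fact is
    # re-sent on every request, so it can't scale to a whole new corpus.
    facts: list[str] = []

    def teach(fact: str) -> None:
        facts.append(fact)  # grows the prompt, not the weights

    def ask(llm, question: str) -> str:
        preamble = "Known facts:\n" + "\n".join(facts)
        return llm(preamble + "\n\nQuestion: " + question)
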
replies(1): >>ben_w+hy2
◧◩◪◨⬒⬓⬔
53. edgyqu+Sp2[view] [source] [discussion] 2023-05-17 15:27:20
>>forget+T51
No, I'm thinking that scaling trolls up has diminishing returns and we're already at peak troll.
replies(1): >>nullse+6y3
◧◩◪◨⬒
54. leonid+ms2[view] [source] [discussion] 2023-05-17 15:37:58
>>canjob+eW
Few people saw it coming in just two years, sure. But most people following this space were already expecting a big evolution like the one we saw in 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point something like ChatGPT would happen.

◧◩◪◨⬒⬓⬔⧯▣
55. ben_w+hy2[view] [source] [discussion] 2023-05-17 15:59:49
>>rtkwe+OW1
Mmm. Well, possibly.

On the one hand, what I was saying here was more about the Turing Test than about AGI. Sometimes it gets called AGI, sometimes "autocomplete on steroids", but even if it is fancy autocomplete, I think 3.5 has the skill to pass a short Turing Test, but not the personality, and it needs a longer "short-term memory"-equivalent than 3.5 has for a full Turing Test.

On the other hand, as I (sadly) don't get paid to create LLMs, I've only got the kind of superficial awareness of how they work that comes from podcasts and the occasional blog post, which means ChatGPT might very well understand ChatGPT better than I do.

Can GPT-[3.5, 4] be prompted to make itself?

◧◩◪◨⬒
56. dhkk+Nm3[view] [source] [discussion] 2023-05-17 19:51:01
>>TeMPOr+CH1
It was Yann LeCun. His professional experience and knowledge of the AI development timeline outweighs your opinions, imo. Thanks for confirming you have no sources.
◧◩◪◨⬒⬓⬔⧯
57. nullse+6y3[view] [source] [discussion] 2023-05-17 20:44:53
>>edgyqu+Sp2
Any evidence or sources for that? I just don't know how that would be knowable to any of us.

Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w
