zlacker

32 comments
1. auctor+(OP) 2023-02-09 13:13:49
I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, yet none of them managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self-worth.

replies(6): >>liveon+M1 >>Rivier+n3 >>addcom+H4 >>rsynno+75 >>moron4+o7 >>latexr+u8
2. liveon+M1 2023-02-09 13:23:56
>>auctor+(OP)
So, to you, ChatGPT is approaching AGI?
replies(4): >>bioeme+U2 >>tarsin+d4 >>ballen+B4 >>dr_dsh+Da
3. bioeme+U2 2023-02-09 13:30:26
>>liveon+M1
I do believe that if we're going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

Think about it.

What's the most expressive medium we have which is also absolutely inundated with data?

To broadly be able to predict human speech you need to broadly be able to predict the human mind. Broadly predicting a human mind requires that you build a model of it, and to have a model of a human mind? Welcome to general intelligence.

We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

replies(3): >>moron4+O8 >>hhjink+wb >>mattr4+lc
4. Rivier+n3 2023-02-09 13:34:07
>>auctor+(OP)
> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self-worth.

It's important to note that this is your assumption, and one I believe to be wrong (for most people here).

5. tarsin+d4 2023-02-09 13:39:49
>>liveon+M1
Why the obsession with AGI? The point is that ChatGPT is already useful.
replies(1): >>EVa5I7+s8
6. ballen+B4 2023-02-09 13:42:02
>>liveon+M1
Perhaps a more interesting question is "how much better do we understand what characteristics AGI will have due to ChatGPT?"

We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).

At the very least, ChatGPT helps us build increasingly better Turing tests.

7. addcom+H4 2023-02-09 13:42:28
>>auctor+(OP)
There's a fellow that kinda predicted it in 1950 [0]:

> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."

> [...]

> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.

Every time "learning machines" are able to do a new thing, there's a "wait, that's just mechanical; _real_ intelligence is the goalpost".

[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...

replies(1): >>foldr+Ha
8. rsynno+75 2023-02-09 13:44:20
>>auctor+(OP)
> ChatGPT and similar models are revolutionary

For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.

replies(3): >>chesch+f6 >>kriops+m6 >>falcor+Ca
9. chesch+f6 2023-02-09 13:50:27
>>rsynno+75
If you're the type of person that struggles to ramp up production of a knowledge product, but has great success in improving a knowledge product through an iterative review process, then these generative pre-trained transformers are fantastic tools in your toolbox.

That's about the only purpose I've found so far, but it seems a big one?

10. kriops+m6 2023-02-09 13:50:55
>>rsynno+75
If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.
replies(3): >>rsynno+97 >>EVa5I7+78 >>rchaud+Fg
11. rsynno+97 2023-02-09 13:54:07
>>kriops+m6
I can buy that it's a better/worse search engine (better in that it's easier to formulate a query and you get the response right there without having to parse the results; worse in that there's a decent chance the response is nonsense, and it's very confident even when it's wrong).

I can't really imagine asking it a question about anything I cared about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.

12. moron4+o7 2023-02-09 13:55:22
>>auctor+(OP)
The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.
replies(1): >>falcor+Bb
13. EVa5I7+78 2023-02-09 13:58:32
>>kriops+m6
But will it? After accounting for the time needed to fix all the bugs it introduces?
replies(1): >>Timwi+kd
14. EVa5I7+s8 2023-02-09 13:59:33
>>tarsin+d4
Is it? I see it mostly generates BS much faster.
replies(1): >>rhn_mk+v9
15. latexr+u8 2023-02-09 13:59:40
>>auctor+(OP)
> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self-worth.

Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹

It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self-worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.

¹ I’m not saying that’s your intention, but consider that this type of rhetoric may be counterproductive if you’re trying to make someone else understand your point of view.

² I came across that specific example on Mastodon but I’m not finding it now.

16. moron4+O8 2023-02-09 14:01:08
>>bioeme+U2
> I do believe that if we're going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

Language is way, way removed from intelligence. This is well known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose yet are so cognitively impaired that they can't even tell the difference between fantasy and reality.

replies(1): >>bioeme+Vo
17. rhn_mk+v9 2023-02-09 14:04:18
>>EVa5I7+s8
The Brothers Grimm would like a word with you about what "BS" means.

ChatGPT is good at making up stories.

18. falcor+Ca 2023-02-09 14:10:22
>>rsynno+75
It seems to me that the tendency to be confidently wrong is entirely baked into intelligence of all kinds. In terms of actual philosophical rationality, human reasoning is also much closer to cargo cults than to cogito ergo sum, and I think we're better for it.

I can't help but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path toward AGI than what we had before.

19. dr_dsh+Da 2023-02-09 14:10:27
>>liveon+M1
Yes. It is obviously already weak AGI (it would be obvious to anyone who had seen it 20 years ago).

It is also obvious that we are in the middle of a shift of some kind. It's very hard to see from within, but clearly we will look back at 2022 as the beginning of something.

20. foldr+Ha 2023-02-09 14:10:44
>>addcom+H4
>Every time "learning machines" are able to do a new thing, there's a "wait, that's just mechanical; _real_ intelligence is the goalpost".

Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.

21. hhjink+wb 2023-02-09 14:13:55
>>bioeme+U2
> To broadly be able to predict human speech you need to broadly be able to predict the human mind

This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

replies(2): >>Timwi+pe >>bioeme+0o
22. falcor+Bb 2023-02-09 14:14:31
>>moron4+o7
Even if ChatGPT only made us 10% better at solving the "easy" things, at a global scale that is already a colossal benefit to society.
23. mattr4+lc 2023-02-09 14:17:44
>>bioeme+U2
"The ability to speak does not make you intelligent." — Qui-Gon Jinn, The Phantom Menace.
24. Timwi+kd 2023-02-09 14:21:07
>>EVa5I7+78
Humans introduce bugs too. ChatGPT is still new, so it probably makes more mistakes than a human at the moment, but it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard (and several other important regards).
replies(2): >>EVa5I7+Sg >>rsynno+lp
25. Timwi+pe 2023-02-09 14:25:03
>>hhjink+wb
I think what the commenter is saying is that, in time, language models too will do a lot more than string words together. If a model is large enough, and you train it well enough to respond to “what's the best next move in this chess position?” prompts with good moves, it will inevitably learn chess.
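
A minimal sketch of the kind of probe that implies, in Python (ask_model is a hypothetical stand-in for whatever completion API you'd use; the python-chess library is only there to check that the answer is a legal move):

    import chess

    def ask_model(prompt: str) -> str:
        # Hypothetical LLM call; should return a move in UCI notation, e.g. "e2e4".
        raise NotImplementedError("wire up a model of your choice here")

    def model_move(board: chess.Board) -> chess.Move:
        # Hand the model the full position and ask for exactly one move.
        prompt = f"FEN: {board.fen()}\nBest next move in UCI notation:"
        reply = ask_model(prompt).strip()
        move = chess.Move.from_uci(reply)  # raises ValueError on garbage output
        if move not in board.legal_moves:
            raise ValueError(f"model suggested an illegal move: {reply}")
        return move

If a model keeps producing strong legal moves across arbitrary positions, it gets hard to argue it hasn't learned something about chess.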
replies(1): >>hhjink+pk
26. rchaud+Fg 2023-02-09 14:33:09
>>kriops+m6
How will it do that?

One of the major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, people who would consider themselves to be doing 'bullshit jobs'.

27. EVa5I7+Sg 2023-02-09 14:33:47
>>Timwi+kd
> it's only a matter of time

That reminds me how, in my youth, many were planning on vacations to Mars resorts and unlimited fusion energy. The stars looked so close, only a matter of time!

28. hhjink+pk 2023-02-09 14:47:20
>>Timwi+pe
I don't think that follows, necessarily. Chess has an unfathomable number of states. While the LLM might be able to play chess competently, I would not say it has learned chess unless it is able to judge the relative strength of various moves. From my understanding, an LLM will not evaluate future states of a chess game when responding to such a prompt. Without that ability, it's no different from someone receiving anal bead communications from Magnus Carlsen.
replies(1): >>bioeme+dm1
29. bioeme+0o 2023-02-09 14:59:41
>>hhjink+wb
Exactly. Since language is a compressed and transmittable result of our thought, predicting text as accurately as possible requires you to do the same kind of thinking. A model with an understanding of the human mind will outperform one without.

> Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

Why? Wouldn't you expect that technique to generally fail if the model isn't intelligent enough to know what's happening in the sentence?

30. bioeme+Vo 2023-02-09 15:03:44
>>moron4+O8
We don't judge AIs by their ability to produce language; we judge them by their coherence and ability to respond intelligently, to give us information we can use.
31. rsynno+lp 2023-02-09 15:04:53
>>Timwi+kd
> it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard

This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask, "but how much time?" A lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.

That's not to say AI is useless, but historically there's been a strong tendency to say of AI-ish things, "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.

As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.

(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)

replies(1): >>EVa5I7+zb2
32. bioeme+dm1 2023-02-09 18:19:12
>>hhjink+pk
An LLM could theoretically build a model with which to understand chess and predict a next move; you just need to adjust the training data and train the model until that behavior appears.

The expressiveness of language lets this be true of almost everything.
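
As a rough sketch of what "adjust the training data" could mean (the prompt/completion format here is made up for illustration; python-chess just does the bookkeeping):

    import chess.pgn

    def pgn_to_examples(path: str) -> list[dict]:
        # Turn archived games into (position, next move) pairs that a
        # language model could be fine-tuned on.
        examples = []
        with open(path) as handle:
            while (game := chess.pgn.read_game(handle)) is not None:
                board = game.board()
                for move in game.mainline_moves():
                    examples.append({
                        "prompt": f"FEN: {board.fen()}\nBest next move in UCI notation:",
                        "completion": move.uci(),
                    })
                    board.push(move)
        return examples

If the claim above is right, move quality then becomes a data and scale problem rather than an architectural one.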

33. EVa5I7+zb2 2023-02-09 21:33:33
>>rsynno+lp
Does anyone still remember the self-driving hype?