zlacker

[parent] [thread] 28 comments
1. reduce+(OP)[view] [source] 2024-05-15 06:23:14
Daniel “Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI”

“I think AGI will probably be here by 2029, and could indeed arrive this year”

Kokotajlo too.

We are so fucked

replies(1): >>Otomot+P
2. Otomot+P[view] [source] 2024-05-15 06:32:48
>>reduce+(OP)
I am sorry, but there must be some hidden tech, some completely different meaning behind how they speak about AGI.

I really, really doubt that transformers will become AGI. Maybe I am wrong, I am no expert in this field, but I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X

edit: maybe the term has changed again. AGI to me means truly understanding, maybe even some kind of consciousness, but not just probability... when I explain something, I have understood it. It's not that I have soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.

replies(8): >>_nalpl+x1 >>Taylor+M1 >>n_ary+Y1 >>bbor+k2 >>ben_w+Df >>trucul+pj >>Miralt+qj >>JKCalh+bt
◧◩
3. _nalpl+x1[view] [source] [discussion] 2024-05-15 06:40:04
>>Otomot+P
I think what's missing:

- The ability to fact-check the text, for example via the Wolfram math engine or internet access

- Something like an instinct to fight for life (seems dangerous)

- Some more subsystems: let's have a look at the brain: there's the amygdala, the cerebellum, the hippocampus, and so on, and there must be some evolutionary need for these parts

replies(1): >>t4ng0p+74
◧◩
4. Taylor+M1[view] [source] [discussion] 2024-05-15 06:43:22
>>Otomot+P
No I’m with you on this. Next token prediction does lead to impressive emergent phenomena. But what makes people people is an internal drive to attend to our needs, and an LLM exists without that.

A real AGI should be something you can drop in to a humanoid robot and it would basically live as an individual, learning from every moment and every day, growing and changing with time.

LLMs can’t even count the number of letters in a sentence.

replies(4): >>vinter+b5 >>astran+ib >>kgeist+di >>sebast+Jj1
◧◩
5. n_ary+Y1[view] [source] [discussion] 2024-05-15 06:45:44
>>Otomot+P
Don't worry, this is the "keeping the bridge intact" talk of people leaving a supposedly glorious workplace. I have worked at several places, and when people left (usually the best-paid ones), they posted LinkedIn/Twitter posts to say kudos and suggest that the business in question would be at the forefront of its particular niche this year or soon, and that they would always be proud of having been part of it.

Also, when they speak about AGI, it raises their (the person leaving) market value: others already assume they are brilliant for having worked on something cool, and that they might know some secret sauce which could be acquired at lower cost by hiring them immediately[1]. I have seen this kind of talk play out too many times. Last January, one of the senior engineers at my current workplace in aviation left, citing something super secret coming this year or soon, and was immediately hired by a competitor at generous pay to work on that very topic.

replies(1): >>reduce+h2
◧◩◪
6. reduce+h2[view] [source] [discussion] 2024-05-15 06:49:42
>>n_ary+Y1
> Also, when they speak about AGI, it raises their (the person leaving) market value

Why yes, of course Jan Leike just abruptly resigned and Daniel Kokotajlo just gave up 85% of his wealth rather than sign a resignation NDA, all in order to do what you're describing...

replies(1): >>Shrezz+H8
◧◩
7. bbor+k2[view] [source] [discussion] 2024-05-15 06:52:10
>>Otomot+P
As something of a (biased) expert: yes, it’s a big deal, and yes, this seemingly dumb breakthrough was the last missing piece. It takes a few dozen hours of philosophy to show why your brain is also composed of recursive structures of probabilistic machines, so forget that; it’s not necessary. Instead, take a glance at these two links:

1. Alan Turing on why we should never ever perform a Turing test: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

2. Marvin Minsky on the “Frame Problem” that led to one or two previous AI winters, and what an intuitive algorithm might look like: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

replies(1): >>re+9d
◧◩◪
8. t4ng0p+74[view] [source] [discussion] 2024-05-15 07:08:30
>>_nalpl+x1
AGI can’t be defined as autocomplete with a fact checker and an instinct to survive; there’s so, so much more hidden in that “subsystems” point. At least if we go by Bostrom’s definition…
◧◩◪
9. vinter+b5[view] [source] [discussion] 2024-05-15 07:18:59
>>Taylor+M1
By that definition of AGI, it is probably quite possible and reachable - but also something pointless, with no good reasons to "use" it and many good reasons not to.
◧◩◪◨
10. Shrezz+H8[view] [source] [discussion] 2024-05-15 07:58:12
>>reduce+h2
While he'll be giving up a lot of wealth, it's unlikely that any meaningful NDA will be applied here. Maybe for products, but definitely not for their research.

There are very few people who can lead in frontier AI research domains - maybe a few dozen worldwide - and there are many active research niches. Applying an NDA to a very senior researcher would be such a massive net-negative for the industry that it'd be a net-negative for the applying organisation too.

I could see some kind of product-based NDA, like "don't discuss the target release dates for the new models", but "stop working on your field of research" isn't going to happen.

replies(1): >>reduce+da
◧◩◪◨⬒
11. reduce+da[view] [source] [discussion] 2024-05-15 08:11:18
>>Shrezz+H8
Kokotajlo: “To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left.

Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.

Basically I wanted to retain my ability to criticize the company in the future.“

> but "stop working on your field of research" isn't going to happen.

We’re talking about an NDA; obviously non-competes aren't legal in CA.

https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...

replies(1): >>darkwa+ji
◧◩◪
12. astran+ib[view] [source] [discussion] 2024-05-15 08:19:14
>>Taylor+M1
LLMs could count the number of letters in a sentence if you stopped tokenizing them first.
replies(1): >>HarHar+bw
◧◩◪
13. re+9d[view] [source] [discussion] 2024-05-15 08:38:07
>>bbor+k2
> Alan Turing on why we should never ever perform a Turing test

Can you cite specifically what in the paper you're basing that on? I skimmed it as well as the Wikipedia summary but I didn't see anywhere that Turing said that the imitation game should not be played.

replies(1): >>bbor+0la
◧◩
14. ben_w+Df[view] [source] [discussion] 2024-05-15 09:08:00
>>Otomot+P
> maybe the term has changed again. AGI to me means truly understanding, maybe even some kind of consciousness, but not just probability... when I explain something, I have understood it.

The term, and indeed each initial, means different things to different people.

To me, even InstructGPT manages to be a "general" AI, so it counts as AGI — much to the confusion and upset of many like you who think the term requires consciousness, and others who want it to be superhuman in quality.

I would also absolutely agree LLMs are not at all human-like. I don't know whether they do or don't need the various missing parts in order to change the world into a jobless (u/dys)topia.

I also don't have any reason to be for or against any claim about consciousness, given that word also has a broad range of definitions to choose between.

I expect at least one more breakthrough architecture on the scale of Transformers before we get all the missing bits from human cognition, even without "consciousness".

What do you mean by "truly understanding"?

◧◩◪
15. kgeist+di[view] [source] [discussion] 2024-05-15 09:37:26
>>Taylor+M1
>LLMs can’t even count the number of letters in a sentence.

It's a consequence of tokenization. They "see" the world through tokens, and tokenization rules depend on the specific middleware you're using. It's like making someone blind and then claiming they are not intelligent because they can't tell red from green. That's just how they perceive the world, and it tells you nothing about their intelligence.
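
A rough sketch of what that means in practice, assuming OpenAI's tiktoken library and its cl100k_base encoding (both are my example choices here; the exact split varies by tokenizer):

  import tiktoken

  # Load a GPT-4-style tokenizer (cl100k_base is an assumption for illustration).
  enc = tiktoken.get_encoding("cl100k_base")

  word = "strawberry"
  ids = enc.encode(word)                    # the integer IDs the model actually sees
  pieces = [enc.decode([i]) for i in ids]   # multi-character chunks, not letters

  print(len(word))   # 10 letters
  print(len(ids))    # typically far fewer tokens
  print(pieces)      # chunks like 'str' + 'awberry' (exact split depends on the tokenizer)

Counting letters then means reconstructing spelling the model never directly observed.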

replies(1): >>Otomot+kI
◧◩◪◨⬒⬓
16. darkwa+ji[view] [source] [discussion] 2024-05-15 09:39:02
>>reduce+da
> Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.

Percentages are nice, but with money and wealth, absolute numbers matter just as much. You can live a very, very good life even after losing 85%, if the remaining 15% is USD $1M. And not signing that NDA may well help you land another richly paying job, plus the freedom to say whatever you feel is important to say.

◧◩
17. trucul+pj[view] [source] [discussion] 2024-05-15 09:53:21
>>Otomot+P
> truly understanding… when I explain something, I have understood it

When you have that feeling of understanding, it is important to recognize that it is a feeling.

We hope it’s correlated with some kind of ability to reason, but at the end of the day, you can have the ability to reason about things without realising it, and you can feel that you understand something and be wrong.

It’s not clear to me why this feeling would be necessary for superhuman-level general performance. Nor is it clear to me that a feeling of understanding isn’t what being an excellent token predictor feels like from the inside.

If it walks and talks like an AGI, at some point, don’t we have to concede it may be an AGI?

replies(1): >>quantu+Ms
◧◩
18. Miralt+qj[view] [source] [discussion] 2024-05-15 09:53:31
>>Otomot+P
This paper and other similar works changed my opinion on that quite a bit. They show that, to perform text prediction, LLMs build complex internal models.

>>38893456

◧◩◪
19. quantu+Ms[view] [source] [discussion] 2024-05-15 11:30:51
>>trucul+pj
I would say understanding usually means the ability to connect the dots and see the implications … not a feeling.
replies(1): >>trucul+fz2
◧◩
20. JKCalh+bt[view] [source] [discussion] 2024-05-15 11:33:20
>>Otomot+P
> when I explain something, I have understood it.

Yeah, that's the part I don't understand though - do I understand it? Or do I just think I understand it? How do I know that I am not probabilistic also?

Synthesis is the only thing that comes to mind as a differentiator between me and an LLM.

◧◩◪◨
21. HarHar+bw[view] [source] [discussion] 2024-05-15 11:54:57
>>astran+ib
Tokenization is not the issue - these LLMs can all break a word into letters if you ask them.
◧◩◪◨
22. Otomot+kI[view] [source] [discussion] 2024-05-15 13:11:20
>>kgeist+di
But it limits them; they cannot be AGI then, because a child who can count could do it :)
◧◩◪
23. sebast+Jj1[view] [source] [discussion] 2024-05-15 16:05:26
>>Taylor+M1
You seem generally intelligent. Can you tell how many letters are in the following sentence?

"هذا دليل سريع على أنه حتى البشر الأذكياء لا يمكنهم قراءة ”الرموز“ أو ”الحروف“ من لغة لم يتعلموها."

replies(2): >>omeze+v52 >>lewhoo+zt2
◧◩◪◨
24. omeze+v52[view] [source] [discussion] 2024-05-15 20:07:35
>>sebast+Jj1
I counted very quickly, but 78? I learned Arabic in kindergarten; I'm not sure what your point was. There are Arabic spelling bees and an alphabet song, just like in English.

The comment you replied to was saying that LLMs trained on English can't count letters in English.

replies(1): >>sebast+VW4
◧◩◪◨
25. lewhoo+zt2[view] [source] [discussion] 2024-05-15 22:34:18
>>sebast+Jj1
Is this even a fair comparison? Are we asking an LLM to count letters in an alphabet it never saw?
replies(1): >>trucul+uz2
◧◩◪◨
26. trucul+fz2[view] [source] [discussion] 2024-05-15 23:27:45
>>quantu+Ms
Okay, what if I put it like this: there is understanding (ability to reason about things), and there is knowing that you understand something.

In people, these are correlated, but one does not necessitate the other.

◧◩◪◨⬒
27. trucul+uz2[view] [source] [discussion] 2024-05-15 23:30:20
>>lewhoo+zt2
Yes, it sees tokens. Asking it to count letters is a little bit like asking that of someone who never learned to read/write and only learned language through speech.
◧◩◪◨⬒
28. sebast+VW4[view] [source] [discussion] 2024-05-16 20:26:57
>>omeze+v52
LLMs aren't trained in English with the same granularity that you and I are.

So my analogy here stands: OP was trained in "reading human language" with Roman letters as the basis of his understanding, and it would be a significant challenge (fairly unrelated to intelligence level) for OP to parse an Arabic sentence with the same meaning.

Or:

You learned Arabic, great (it's the next language I want to learn so I'm envious!). But from the LLM point of view, should you be considered intelligent if you can count Arabic letters but not Arabic tokens in that sentence?

◧◩◪◨
29. bbor+0la[view] [source] [discussion] 2024-05-19 01:30:39
>>re+9d
Sorry I missed this, for posterity:

I was definitely being a bit facetious for emphasis, but he says a few times that the original question — “Can machines think?” — is meaningless, and the imitation game question is solved in its very posing. As a computer scientist he was of course worried about theoretical limits, and he intended the game in that vein. In that context he sees the answer as trivial: yes, a good enough computer will be able to mimic human behavior.

The essay’s structure is as follows:

1. Propose theoretical question about computer behavior.

2. Describe computers as formal automata.

3. Assert that automata are obviously general enough to satisfy the theoretical question — with good enough programming and enough power.

4. Dismiss objections, of which “humans might be telepathic” was somewhat absurdly the only one left standing.

It’s not a very clearly organized paper IMO, and the fun description of the game leads people to think he’s proposing that. That’s just the premise, and the pressing conclusion he derives from it is simple: spending energy on this question is meaningless, because it’s either intractable or solved depending on your approach (logical and empirical, respectively).

TL;DR: the whole essay revolves around this quote, judge for yourself:

  We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question, "Can machines think?" and the variant of it quoted at the end of the last section… ["Are there discrete-state machines which would do well in the Imitation Game?"]

  It will simplify matters for the reader if I explain first my own beliefs in the matter.

  Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

  The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.