Ilya
Jan Leike
William Saunders
Leopold Aschenbrenner
All gone
“I think AGI will probably be here by 2029, and could indeed arrive this year”
Kokotajlo too.
We are so fucked
I really, really doubt that transformers will become AGI. Maybe I'm wrong, I'm no expert in this field, but I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X
edit: maybe the term has changed again. AGI to me means truly understanding, maybe even some kind of consciousness, not just probability: when I explain something, I have understood it. It's not that I've soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.
- A way to fact-check the text, for example via the Wolfram math engine or internet access
- Something like an instinct to fight for life (seems dangerous)
- Some more subsystems: have a look at the brain: there's the amygdala, the cerebellum, the hippocampus, and so on, and there must be some evolutionary need for these parts
A real AGI should be something you can drop into a humanoid robot, and it would basically live as an individual, learning from every moment of every day, growing and changing with time.
LLMs can’t even count the number of letters in a sentence.
Also, when they speak about AGI, it raises the leaver's marketing value: everyone already knows they were brilliant enough to have worked on something cool, and they might also know some secret sauce that could be acquired at lower cost by hiring them immediately[1]. I have seen this kind of talk play out too many times. Last January, one of the senior engineers at my current workplace in aviation left, citing something super secret coming this year or soon, and was immediately hired by a competitor with generous pay to work on that very topic.
Why yes, of course Jan Leike just impromptu resigned and Daniel Kokotajlo just gave up 85% of his wealth in order not to sign a resignation NDA to do what you're describing...
1. Alan Turing on why we should never ever perform a Turing test: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
2. Marvin Minsky on the “Frame Problem” that led to one or two previous AI winters, and what an intuitive algorithm might look like: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
There are very few people who can lead in frontier AI research domains - maybe a few dozen worldwide - and there are many active research niches. Applying an NDA to a very senior researcher would be such a massive net negative for the industry that it would be a net negative for the applying organisation too.
I could see some kind of product-based NDA, like "don't discuss the target release dates for the new models", but "stop working on your field of research" isn't going to happen.
Doesn't seem like they felt it was required.
Edit: I'd love to know the reason for the downvotes; it's an opinion, not a political statement. This community has been quite off lately.
Is this a highly controversial statement? Or are people truly worried about the future and this is just an anxiety-based reaction?
"Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.
Basically I wanted to retain my ability to criticize the company in the future."
> but "stop working on your field of research" isn't going to happen.
We're talking about an NDA; obviously non-competes aren't legal in CA.
https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...
Can you cite specifically what in the paper you're basing that on? I skimmed it as well as the Wikipedia summary but I didn't see anywhere that Turing said that the imitation game should not be played.
The term, and indeed each initial, means different things to different people.
To me, even InstructGPT manages to be a "general" AI, so it counts as AGI — much to the confusion and upset of many like you who think the term requires consciousness, and others who want it to be superhuman in quality.
I would also absolutely agree LLMs are not at all human-like. I don't know if they do or don't need the various missing parts in order to change the world into a jobless (u/dys)topia.
I also don't have any reason to be for or against any claim about consciousness, given that word also has a broad range of definitions to choose between.
I expect at least one more breakthrough architecture on the scale of Transformers before we get all the missing bits from human cognition, even without "consciousness".
What do you mean by "truly understanding"?
It's a consequence of tokenization. They "see" the world through tokens, and tokenization rules depend on the specific middleware you're using. It's like blinding someone and then claiming they are not intelligent because they can't tell red from green. That's just how they perceive the world, and it says nothing about intelligence.
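A minimal sketch of why this happens, using a hypothetical greedy longest-match tokenizer (real BPE merges are learned from data, but the effect is the same): the model receives whole-chunk tokens, so "how many r's are in strawberry?" asks about sub-token structure it never directly observes. The vocabulary below is made up for illustration.

```python
def toy_tokenize(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary.
    Falls back to single characters when no longer piece matches."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(len(text) - i, 0, -1):
            piece = text[i:i + size]
            if piece in vocab or size == 1:
                tokens.append(piece)
                i += size
                break
    return tokens

# Hypothetical vocabulary fragment, chosen only to illustrate the point.
vocab = {"straw", "berry", "str", "aw", "rry"}
word = "strawberry"

tokens = toy_tokenize(word, vocab)
print(tokens)           # ['straw', 'berry'] -- what the model "sees"
print(len(tokens))      # 2 tokens...
print(len(word))        # ...but 10 letters
print(word.count("r"))  # 3 r's, none individually visible at the token level
```

The letter count and the token count diverge, and the individual "r"s exist only inside opaque token IDs, which is one plausible reason letter-counting questions trip models up.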
Percentages are nice, but with money and wealth absolute numbers already matter enough. You can live a very, very good life even after losing 85% if the remaining 15% is USD $1M. And that's before considering that not signing the NDA may help you land another richly paying job, plus the freedom to say whatever you feel is important to say.
When you have that feeling of understanding, it is important to recognize that it is a feeling.
We hope it’s correlated with some kind of ability to reason, but at the end of the day, you can have the ability to reason about things without realising it, and you can feel that you understand something and be wrong.
It’s not clear to me why this feeling would be necessary for superhuman-level general performance. Nor is it clear to me that a feeling of understanding isn’t what being an excellent token predictor feels like from the inside.
If it walks and talks like an AGI, at some point, don’t we have to concede it may be an AGI?
I have no idea; I'm just saying I've seen that happen in companies.
Seriously asking: I've purchased a GitHub Copilot subscription, but I don't know how their sales numbers on AI in general are doing. It remains to be seen whether it can be made cost-efficient enough to deliver to consumers.
Yeah, that's the part I don't understand though - do I understand it? Or do I just think I understand it. How do I know that I am not probabilistic also?
Synthesis is the only thing that comes to mind as a differentiator between me and an LLM.
What can Mercedes, Porsche, and Audi do aside from continuing to produce the same cars over and over again until they are overtaken by somebody else? Hell, both the EU and the USA need tariffs to compete with Chinese automakers.
I’m not seeing “the good ones” leaving in this case.
"هذا دليل سريع على أنه حتى البشر الأذكياء لا يمكنهم قراءة ”الرموز“ أو ”الحروف“ من لغة لم يتعلموها."
(Translation: "This is quick proof that even intelligent humans cannot read the 'tokens' or 'letters' of a language they have not learned.")
The comment you replied to was saying that LLMs trained on English can't count letters in English.
In people, these are correlated, but one does not necessitate the other.
So my analogy here stands : OP was trained in "reading human language" with Roman letters as the basis of his understanding, and it would be a significant challenge (fairly unrelated to intelligence level) for OP to be able to parse an Arabic sentence of the same meaning.
Or:
You learned Arabic, great (it's the next language I want to learn so I'm envious!). But from the LLM point of view, should you be considered intelligent if you can count Arabic letters but not Arabic tokens in that sentence?
I was definitely being a bit facetious for emphasis, but he says a few times that the original question — “Can machines think?” - is meaningless, and the imitation game question is solved in its very posing. As a computer scientist he was of course worried about theoretical limits, and he intended the game in that vein. In that context he sees the answer as trivial: yes, a good enough computer will be able to mimic human behavior.
The essay’s structure is as follows:
1. Propose theoretical question about computer behavior.
2. Describe computers as formal automata.
3. Assert that automata are obviously general enough to satisfy the theoretical question — with good enough programming and enough power.
4. Dismiss objections, of which “humans might be telepathic” was somewhat absurdly the only one left standing.
It’s not a very clearly organized paper IMO, and the fun description of the game leads people to think he’s proposing that. That’s just the premise, and the pressing conclusion he derives from it is simple: spending energy on this question is meaningless, because it’s either intractable or solved depending on your approach (logical and empirical, respectively).
TL;DR: the whole essay revolves around this quote, judge for yourself:
We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question, "Can machines think?" and the variant of it quoted at the end of the last section… ["Are there discrete-state machines which would do in the Imitation Game?"]
It will simplify matters for the reader if I explain first my own beliefs in the matter.
Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.
The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.