https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
//edit: removed the referral tags from the URL
This forum has been behind on this for far too long.
Sama has been saying this for a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders: they all get it, which is why they're the smart cookies in those positions.
The first stage is denial. I get it; it's not easy to swallow the gravity of what's coming.
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you at least see the irony of quoting Sama's dire warnings about the development of AI without mentioning that he is at the absolute forefront of the push to build the very technology that could destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" while working faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems blatantly laughable to me. If your goal is to build "super intelligence", i.e. something way smarter than any human or group of humans, how do you expect to control that super intelligence when you're acting at the middling level of human intelligence?
While I'm skeptical of the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
Come on, be real. Do you honestly think that would make a lick of difference? Maybe, at best, delay things by a couple of months. But this is a worldwide phenomenon, and humans have shown time and time again that they are not able to self-organize globally. How successful do you think that political organization is going to be in slowing China's progress?
Nuclear deterrence, human cloning, bioweapon proliferation, Antarctic neutrality -- the list goes on.
> How successful do you think that political organization is going to be in slowing China's progress?
I wish people would stop with this tired war-mongering. China was not the one who opened up this can of worms. China has never been the one pushing the edge of capabilities. Before Sam Altman decided to give ChatGPT to the world, they were actively cracking down on software companies (in favor of hardware & "concrete" production).
We, the US, are the ones who chose to do this. We started the race. We put the world, all of humanity, on this path.
> Do you honestly think that would make a lick of difference?
I don't know, it depends. Perhaps we're lucky and the timelines are slow enough that 20-30% of the population loses their jobs before things become unrecoverable. Tech companies used to warn people not to wear their badges in public in San Francisco -- and that was what, 2020? Would you really want to work at "Human Replacer, Inc." when that means walking out and about among a population who you know hates you, viscerally? Or suppose we make it to 2028 in the same condition. The Bonus Army was bad enough -- how confident are you that the government would stand its ground and keep letting these labs advance capabilities when its electoral neck was on the line?
This defeatism is a self-fulfilling prophecy. The people have the power to make things happen, and rhetoric like this is the most powerful thing holding them back.