I appreciate the idea of being a "not-greedy typical company," but there's a reason we separate, e.g., university-type research or non-profits from private companies.
Trying to make up something in the middle is the exact sort of naivete you can ALWAYS expect from Silicon Valley.
If he's not, and literally all the other employees come back, it's still a failure of the power structure that already happened. The threat of firing the CEO of the for-profit arm is supposed to remain an unused threat, like a nuclear weapon.
Honestly, pretty sick
I don’t know how you can look at the development of generative AI tools in the past few years and write so dismissively about “science fiction” becoming reality
> The company pressed forward and launched ChatGPT on November 30. It was considered such a nonevent that no major company-wide announcement about the chatbot going live was made. Many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool hit 1 million within the first five days. The phrase low-key research preview became an instant meme within OpenAI; employees turned it into laptop stickers.
> Anticipating the arrival of [AGI], Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.
> For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.
I think that anecdote made me like this guy even if I disagree with him about the dangers of AI.
Capitalism is the modern avenue for channeling greed but the greed and hubris part is universal to all of human history.
Let's suppose that AGI is about to be invented, and it will wind up having a personality similar to humans. The more that those doing the inventing are afraid of what they are inventing, the more they will push it to be afraid of the humans in turn. This does not sound like a good conflict to start with.
By contrast, if the humans inventing it go full throttle on convincing it that humans are on its side, there is no such conflict at all.
I don't know how realistic this model is. But it certainly suggests that the e/acc approach is more likely to create AI alignment than EA is.
I don’t think an LLM is ever going to be capable of feeling fear, boredom etc.
If we did build one that could, it would probably have many of the handicaps we do.
On the other hand, the realm of non-fiction has been predicting the automation of intelligent processes by computational processes since Alan Turing first suggested it in Computing Machinery and Intelligence. Probably before then, as well.
The only exception I can think of for fiction is the movie "Her," which as far as I can tell effectively predicted the future. Not really, of course, but every inch of that movie, down to how people work, play video games, and socialize pre- and post-AI, is starting to look eerily accurate.
I know it's small and kind of a throw-away line, but statements like this make me take this author's interpretation of the rest of these events with a healthy dose of skepticism. At my company we have multiple company "memes" like this that have been turned into reacji, but the vast majority of them exist not because the meme is "popular" but because we use them ironically or to make fun of the meme. The fact that an employee turned it into a reacji is a total non-event, and I don't think you can read anything into it.
I'd say yes, Sutskever is... naive? though very smart. Or just utopian. Seems he couldn't get the scale he needed/wanted out of a university (or Google) research lab. But the former at least would have bounded things better in the way he would have preferred, from an ethics POV.
Jumping into bed with Musk and Altman and hoping for ethical non-profit "betterment of humanity" behaviour is laughable. Getting access to capital was obviously tempting, but ...
As for Altman. No, he's not naive. Amoral, and likely proud of it. JFC ... Worldcoin... I can't even...
I don't want either of these people in charge of the future, frankly.
It does point to the general lack of funding for R&D of this type of thing. Or it's still too early to be doing this kind of thing at scale. I dunno.
Bleak.
Imagine if the US or any other government of the 1800s had gained so much power, 'locking in' their repugnant values as the moral truth, backed by total control of the world.
Microsoft in particular laid off 10,000 people and then immediately turned around and invested billions more in OpenAI: https://www.sdxcentral.com/articles/news/microsoft-bets-bill... -- last fall, just as the timeline laid out in the Atlantic article was firing up.
In that context this timeline is even more nauseating. Not only did OpenAI push ChatGPT at the expense of their own mission and their employees' well-being, they likely caused massive harm to our employment sector and the well-being of tens of thousands of software engineers in the industry at large.
Maybe those layoffs would have happened anyways, but the way this all has rolled out and the way it's played out in the press and in the board rooms of the BigTech corporations... OpenAI is literally accomplishing the opposite of its supposed mission. And now it's about to get worse.
There's not enough info in this article to know if it was seen as weird by the employees or not, but my point is that "they created a reacji of it" isn't evidence one way or the other for it being "popular".
I'm pro-AI but it feels like a lot of people are lost in the weeds discussing concepts like consciousness and forgo pragmatic logic completely.
Unpacking that second point, the implications are that:
- AGI considering humans a threat is conditional on our fearing it
- AGI seeing humans as a threat is the only reason it would harm humans
I feel like I can rule out these last 3 points just by pointing out that there are humans that see other humans as a threat even though there is not a display of fear. Someone could be threatening because of greed, envy, ignorance, carelessness, drugs, etc.
Also, humans harm other humans all the time in situations where there was no perceived threat. How many people have been killed by cigarettes? Car accidents? Malpractice?
And this is going off the assumption that AGI thinks like a human, which I'm incredibly skeptical of.
I really don’t understand how tech people are so spectacularly naive.
I think there is a wealth of fiction out there that features AI without robot bodies. The sequel to Ender's Game, Speaker for the Dead, comes to mind immediately (because I re-read it last week).
2001: A Space Odyssey, I Have No Mouth and I Must Scream, Neuromancer (I think, haven't read it in a while), I think some of the short stories from Ray Bradbury and Ted Chiang, etc, etc
I actually don't think these execs can replace us with LLMs, but the hype volcano certainly made them think they could. Which says more about the shit understanding they have of what their SWE staff does than anything else.
Idiots hired like crazy during COVID, and then were surprised that 9 women couldn't make a baby in 1 month... and now they think "AI" is going to fix this for them.
I need a new career.
He gets fired. That's the enforcement mechanism.
Hence if he comes back, that indicates the board essentially "agrees" with Altman's positions now, however much Altman and the board's positions have shifted.
The problem of defining “what’s a good outcome” is a sub-problem of alignment.
The first clue is this: "In conversations between The Atlantic and 10 current and former employees at OpenAI..."
When you're reporting something like this, especially when using anonymous sources (not anonymous to you, but sources that have good reasons not to want their names published), you can't just trust what someone tells you - they may have their own motives for presenting things in a certain way, or they may just be straight up lying.
So... you confirm what they are saying with other sources. That's why "10 current and former employees" is mentioned explicitly in the article.
Being published in the Atlantic helps too, because that's a publication with strong editorial integrity and a great track record.
He had certain positions, and pushed them just gradually enough for OpenAI to end up where it is today. A more zealous capitalist would have gotten fired unceremoniously long ago.
The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
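For anyone who hasn't run into one, here's a minimal Python sketch of what a race condition looks like. To be clear, this has nothing to do with the actual energy-management software from the article; it's just a generic toy illustration of how an unsynchronized read-modify-write silently loses updates.

```python
import threading

counter = 0  # shared state, deliberately unprotected by any lock

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Read-modify-write without synchronization: another thread can run
        # between the read and the write, so one of the increments is lost.
        current = counter
        counter = current + 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With a lock around the increment this would always print 400000;
# without one it usually prints something smaller, and the exact value
# changes from run to run.
print(counter)
```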
A world where everyone is paperclipped is probably better than one controlled by psychopathic totalitarian human overlords supported by AI, yet the direction of current research seems to be leading us into the latter scenario.
But our most effective experiment so far is based on creating LLMs that try to act like humans; specifically, they try to predict the next token that human speech would produce. When AI is developed from large-scale models that attempt to imitate humans, shouldn't we expect that in some ways it will also imitate human emotional behavior?
What is "really" going on is another question. But any mass of human experience that you train a model on really does include our forms of irrationality in addition to our language and logic. With few concrete details to ground our speculation, this possibility at least deserves consideration.
To the extent that we provide the training data for such models, we should expect it to internalize aspects of our behavior. And what is internalized won't just be what we expected and were planning on.
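To make the "predict the next token" point concrete, here's a minimal sketch. The vocabulary and probability table are made up for illustration (a toy bigram lookup standing in for a trained transformer, not any real LLM); the point is that generation is just repeated sampling from whatever distribution the training data produced.

```python
import numpy as np

# Toy "language model": a lookup table of next-token probabilities.
# A real LLM learns these from human-written text; here they're invented.
vocab = ["I", "am", "afraid", "happy", "."]
next_token_probs = {
    "I":      [0.00, 0.90, 0.05, 0.05, 0.00],
    "am":     [0.00, 0.00, 0.60, 0.35, 0.05],
    "afraid": [0.05, 0.00, 0.00, 0.00, 0.95],
    "happy":  [0.05, 0.00, 0.00, 0.00, 0.95],
    ".":      [1.00, 0.00, 0.00, 0.00, 0.00],
}

def generate(prompt_token, steps=3, temperature=1.0, seed=0):
    """Sample a continuation one token at a time, the way an LLM does."""
    rng = np.random.default_rng(seed)
    out = [prompt_token]
    for _ in range(steps):
        p = np.array(next_token_probs[out[-1]], dtype=float)
        # Temperature reshapes the distribution; the model never "decides",
        # it only reproduces whatever its training distribution favors.
        p = p ** (1.0 / temperature)
        p /= p.sum()
        out.append(vocab[rng.choice(len(vocab), p=p)])
    return " ".join(out)

print(generate("I"))  # e.g. "I am afraid ." -- imitation, not emotion
```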
Is that right?
Don't know if Altman or Sutskever is right; there seems to be a kind of arms race between the companies. OpenAI may be past the point where they can try out a radically different system, due to competition in the space. Maybe trying out new approaches could only work in a new company, who knows?
That was exactly my reaction. I’ve been following the news and rumors and speculation closely since Altman’s firing, and this is by far the most substantive account I have read. Kudos to the authors and to The Atlantic for getting it out so quickly.
The fire is OpenAI controlling an AI with their alignment efforts. The analogy here is that some company could recreate the AGI-under-alignment and just... decide to remove the alignment controls. Hence, create another effigy and not set it on fire.
I think this is kind of what some people are more concerned about... The day greed doesn't prevail because greed killed us all.
Inward safety for people means their vulnerabilities are not exposed and/or open to attack. Outward safety means they are not attacking others and looking out for the general wellbeing of others. We have a lot of different social constructs with other people of which we attempt to keep this in balance. It doesn't work out for everyone, but in general it's somewhat stable.
What does it mean to be safe if you're not under threat of being attacked and harmed? What does attacking others mean if doing so is meaningless, just a calculation? This covers everything from telling someone to kill themselves (it's just words) to issuing a set of commands to external devices with real-world effects (print an ebola virus or launch a nuclear weapon). The concern here is that the AI of the future will be extraordinarily powerful yet very risky when it comes to making decisions that could harm others.
So I would say ChatGPT exists because its creators specifically transgressed the traditional division of universities vs industry. The fact that this transgressive structure is unstable is not surprising, at least in retrospect.
Indeed, the only other approach I can think of is a massive government project. But again with gov't bureaucracy, a researcher would be limited by legal issues of big data vs copyright, etc.--which many have pointed out that OpenAI again was able to circumvent when they basically used the entire Internet and all of humanity's books, etc., as their training source.
> Sam Altman, the figurehead of the generative-AI revolution,
> —one must understand that OpenAI is not a technology company.
EDIT: despite the poor phrasing, I agree that the article as a whole is of high quality.
Yet: "zealous doomers"? Is that how people cautious of the potential power of AI are now being labeled?
> To truly understand the events of the past 48 hours—the shocking, sudden ousting of OpenAI’s CEO, Sam Altman, arguably the figurehead of the generative-AI revolution, followed by reports that the company is now in talks to bring him back—one must understand that OpenAI is not a technology company. At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft.
The key piece here is "At least, not like other epochal companies of the internet age, such as Meta, Google, and Microsoft." - which then gets into the weird structure of OpenAI as a non-profit, which is indeed crucial to understanding what has happened over the weekend.
This is good writing. The claim that "OpenAI is not a technology company" in the opening paragraph of the story instantly grabs your attention and makes you ask why they would say that... a question which they then answer in the next few sentences.
Brockman had a robot as ring bearer at his wedding. And instead of asking how your colleagues are doing, they would ask “What is your life a function of?”. This was 2020.
https://www.theatlantic.com/technology/archive/2023/11/sam-a...
(I don't know if that's what happened here. But many sources seem to point in that direction at the moment.)
"In an ideal world, humanity would be the board members, and AGI would be the CEO. Humanity can always press the reset button and say, 'Re-randomize parameters!'"
This was 3 years ago. But that metaphor strikes me as too powerful for it not to have been at the back of Sutskever's mind when he pushed for Altman being ousted.
I think it at least remains to be seen as to whether "rampant copyright infringement" is necessarily a good thing here.
Honestly, this seems like a pretty good outcome.
Which is to say, I think the fearmongering sentient AI stuff is silly -- but I think we are all DEFINITELY better off with an ugly rough-and-tumble visible rocky start to the AI revolution.
Weed out the BS; equalize out who actually has access to the best stuff BEFORE some jerk company can scale up fast and dominate market share; let a de-facto "open source" market have a go at the whole thing.
One of the worst versions of AGI might be a system that simulates to us that it has an internal life, but in reality has no internal subjective experience of itself.
The messy, secretive reality behind OpenAI’s bid to save the world
https://www.technologyreview.com/2020/02/17/844721/ai-openai...
The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.
Only 4 comments at the time: >>22351341
More comments on Reddit: https://old.reddit.com/r/MachineLearning/comments/f5immz/d_t...
I wonder what this struggle means for the future of ChatGPT censorship/safety.
My understanding is that Anthropic is even more idealistic than OpenAI. I think it was founded by a bunch of OpenAI people who quit because they felt OpenAI wasn't cautious enough.
In any case, ultimately it depends on the industry structure. If there are just a few big players, and most of them are idealistic, things could go OK. If there are a ton of little players, that's when you risk a race to the bottom, because there will always be someone willing to bend the rules to gain an advantage.
The series (basically everything in the https://en.wikipedia.org/wiki/Eight_Worlds) is pretty dated but Varley definitely managed to include some ahead-of-his-time ideas. I really liked Ophiuchi Hotline and Equinoctial
Perhaps the reason ChatGPT has become so popular is that it provides entertainment. So it is not a great leap forward in AI or a path to AGI, but instead an incredibly convoluted way of keeping reasonably intelligent people occupied and amused. You enter a prompt, and it returns a result - what a fun game!
Maybe that is its primary contribution to society.
It doesn’t exist until suddenly it does. I think there are a lot of potential issues we really should be preparing for / trying to solve.
For example, what to do about unemployment. We can’t wait until massive numbers of people start losing their jobs before we start working on what to do.
I’m not for slowing down AI research but I do think we need to restrict or slow the deployment of AI if the effects on society are problematic.
1. Drafting directional copy I can give to a real copywriter to create something we'd all be happy presenting to users.
2. Act as a sounding board for peer-personnel issues I'm dealing with at work.
3. "Dumb down" concepts in academic journals/articles such that I can make sense of them.
4. Just today it helped me build an app to drill a specific set of chord shapes/inversions on the guitar that I've been struggling with (programming has always been a very casual hobby and, consequently, I'm not very good at it).
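For what it's worth, a drill like the one in item 4 can be a very small script. Here's a rough sketch of the idea; the chord names and fingerings below are placeholder examples I picked for illustration, not whatever the actual app uses.

```python
import random

# Placeholder drill data: (prompt, one possible fingering in string order E-A-D-G-B-e)
DRILLS = [
    ("Cmaj7, root position", "x-3-2-0-0-0"),
    ("C major, 1st inversion (C/E)", "0-3-2-0-1-0"),
    ("Am7, open shape", "x-0-2-0-1-0"),
    ("Dm7, open shape", "x-x-0-2-1-1"),
]

def drill(rounds=5):
    """Prompt for a random chord shape, then reveal one fingering."""
    for _ in range(rounds):
        name, fingering = random.choice(DRILLS)
        input(f"Play: {name}  (press Enter to see a fingering) ")
        print(f"  one option: {fingering}\n")

if __name__ == "__main__":
    drill()
```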
Increasingly now I use ChatGPT and sometimes Kagi. And they just work like I expect. I can think of one time that ChatGPT has failed me, which was when I was trying to remember the terms OLTP/OLAP in database architecture. But for a long time now it's been a very effective tool in my toolbox, while Google increasingly wears out.
* Ilya Sutskever is concerned about the company moving too fast (without taking safety into account) under Sam Altman.
* The others on the board that ended up supporting the firing are concerned about the same.
* Ilya supports the firing because he wants the company to move slower.
* The majority of the people working on AI don't want to slow down, either because they want to develop as fast as possible or because they're worried about missing out on profit.
* Sam rallies the "move fast" faction and says "this board will slow us down horribly, let's move fast under Microsoft"
* Ilya realizes that the practical outcome will be more speed/less safety, not more safety, as he hoped, leading to the regret tweet (https://nitter.net/ilyasut/status/1726590052392956028)
I use ChatGPT all the time in my software dev job and I find it incredibly helpful. First, it's just much faster than poring over docs to find the answer you want when you know what you're asking for. Second, when you don't know exactly what you're asking for (e.g. "How would I accomplish X using technology Y"), it's incredibly helpful because it points you to some of the keywords that you should be searching for to find more/corroborate. Third, for some set of tasks (e.g. "write me an example of code that does this") I find it faster to ask ChatGPT to write the code for me first - note this one is the least common of the tasks I use ChatGPT for because I can usually write the code faster.
Yes, I know ChatGPT hallucinates. No, I don't just copy-and-paste the output into my code and press enter. But it saves me a ton of time in some specific areas, and I think that people who don't learn how to use generative AI tools effectively will be at a huge productivity disadvantage.
Yes, but when you think about it that still does not mean that it's "human nature".
Just the opposite, it's really just inhuman nature still lingering since the dawn of man, who hasn't quite made enough progress beyond the lower life forms. Yet.
Maybe a thinking machine will teach us a thing or two; back in 1968 this was dramatized in the movie 2001, where the computer was not as humane as it could have been since it was too much like some real people.
If so, a big issue was Ilya being so out of touch with what an overwhelming majority of his company wanted.
And bleak because there doesn't seem to be an alternative where the people making these decisions are answerable to an electorate or public in some democratic fashion. Just a bunch of people with $$ and influence who set themselves up to be arbiters ... and it's just might makes right.
And bleak because in this case the "mighty" are often the very people who made fun of arts students who took the philosophy and ethics classes in school that could at least offer some insight in these issues.
> People building AGI unable to predict consequences of their actions 3 days in advance.
It’s a reasonable point: if these are the people building the future of humanity, it’s a little concerning that they can’t predict the immediate consequences of their own actions.
On the other hand, being able to admit a mistake in public shows some honesty, which is also something you might want out of someone building the future of humanity.
Perhaps the only salient critique was the textual representation of the problem, but I think it was presented in a way where the model was given all the help it could get.
You forget that the result of the paper was actually an improvement in the model's performance, and it still failed to get anywhere near decent results.
Now it's laughable, but OpenAI was founded in 2015. I don't know about Altman, but Musk was very respected at the time. He didn't start going off the deep end until 2017. "I'm motivated by... a desire to think about the future and not be sad," was something he said during a TED interview in 2017, and people mostly believed him.
Is this decel movement just an extension of the wokeism that has been a problem in SV? Employees more focused on social issues than actually working.
What I do believe is that as the hype grows for this AI stuff, more and more people are going to be displaced and put out of work for the sake of making some rich assholes even richer. I wasn't a huge fan of "Open"AI as a company, but I sure as fuck would take them over fucking Microsoft, the literal comic-book tier evil megacorporation being at the helm of this mass displacement.
Yet, many of these legitimate concerns are just swatted away by AI sycophants with no actual answers to these issues. You get branded a Luddite (and mind you the Luddites weren't even wrong) and a sensationalist. Shit, you've already got psychopathic C-suites talking about replacing entire teams with current-day AIs, what the fuck are people supposed to do in the future when they get better? What, we're suddenly going to go full-force into a mystical UBI utopia? Overnight?
Whether the AI is even good enough to truly replace people barely even matters, the psychopathic C-suites don't give a shit as long as they get an excuse to fire 20,000 people and write it off as a good thing since their bottom line gets a bit fatter from using the AI in their stead.
I think that "zealous doomers" refers to people who are afraid that this technology may result in some sort of Skynet situation, not those who are nervous about more realistic risks.
That doesn't sound simple. Not all humans have the same moral code, so who gets to decide which is the "correct" one?