[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
Apparently Microsoft was also blindsided by this.
https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/), which are very much in line with safeguarding against the risks of general intelligence.
Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress" and basically debunks that stance... and quite well, I might add.
The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.
Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".
It's a story about greed, vanity, and envy.
Impossible to be more human than that.
And why such controversial wording around Altman?
Why fire Brockman too?
Classic virtue signalling for the sake of personal power gains, as happens so often.
The hypocritical part is doing so right AFTER beginning to take off commercially.
An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).
Maybe you can find a better word for me, but the point should be easy to get...
Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!
They have still been operating pretty much like a for-profit for years now, so my point still stands.
But it's the honorable thing to do if you truly believe in something.
Otherwise it's just virtue signalling.
Seems reasonable; I mean, that's why Sutskever joined in the first place?
This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.
https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
The board is for the non-profit that ultimately owns and totally controls the for-profit company.
Everyone who works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.
This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; until then I remain sceptical, because lots of power and money are at stake here.
The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).
Web sites were quite stable back then, not really much less stable than they are now. E.g. Twitter now has more issues than the web sites I used often back in the 2000s.
They had "beta" sign because they had much higher quality standards. They warned users that things are not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.
Also, being down is one thing; sending random crap to a user is completely another. E.g. consider webmail: if it is down for an hour, that's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK - and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes, but that it returns erroneous answers.
Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.
If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
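To make that concrete, here is a minimal sketch of the standard next-token-prediction objective (a toy stand-in model for illustration, not OpenAI's actual training code; names and shapes are made up):

    import torch
    import torch.nn.functional as F

    # Toy stand-in for a transformer: the point is the objective, not the model.
    vocab_size, dim = 1000, 64
    model = torch.nn.Sequential(
        torch.nn.Embedding(vocab_size, dim),
        torch.nn.Linear(dim, vocab_size),
    )

    tokens = torch.randint(0, vocab_size, (8, 128))  # a batch of corpus text
    logits = model(tokens[:, :-1])                   # predict each next token
    # The loss is minimized by matching the statistics of the corpus,
    # plausible-sounding or not - there is no "truthfulness" term here.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))
    loss.backward()  # gradients push the model toward "guessing the Pile"

Whatever the corpus contains, imitating it is exactly what gets rewarded.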
Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on a high-quality dataset can beat much bigger models at code generation.
So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
The same way there's a big difference between firing a government employee and expulsion of a member of Congress.
The singularity folks have been continuously wrong in their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining traction while actively working against technological progress.
Then there's a smaller example, Matthias's cult, in the “Kingdom of Matthias” book. It started around the same time as Mormonism and led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their “utopian ideology”.
I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition about human nature (that with the right systems in place people will act perfectly - so it is the systems that are flawed, not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth - but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.
On top of that, there's far more to the world than China, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny). China itself will eventually catch up to the average; progress is rarely a non-stop hockey stick, it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).
The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted - but usually after 10 starts/stops and half-baked non-solutions papered on as real solutions, if at all.
To allow OpenAI to raise venture capital, which allows them to exchange equity for money (i.e., distribute [future] rights to profit to shareholders).
In a sense, board members have even less protection than rank and file. So no, nothing special happening at OpenAI other than a founder CEO being squeezed out - not the first nor the last one. And personal feelings never factor into that kind of decision.
Claiming that utopian ideologies have NEVER done good in the world would require some very careful boundary drawing.
What I do see is "classism is the biggest humanitarian crisis of our age" and "solving the class problem will improve people's lives," but nowhere do I see that non-class problems will cease to exist. People will still fight, get upset, struggle - just not on class terms.
Maybe you read a different set of Marx's writings. Share your reading list if possible.
Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.
Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.
Sure, that's been their modus operandi in the past, but to hold the opinion that a billion humans on the other side of the Pacific are capable only of copying, with no innovation of their own, is a rather strange generalization for a thread on general intelligence.
Many politically aligned folks will leave, and OAI will go back to focusing on the mission.
New company will emerge and focus on profits.
Overall probably good for everyone.
The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam put up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.
That said, my comment was looking mainly at the results of Marxist ideology in practice. In practice, millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin’s utopian ideal [2].
[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...
It is rather a cultural/political thing. Free thinking and stepping out of line are very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.
And this culture has not changed in China lately - rather the opposite. Look at what happened to the Alibaba founder, or why there is no more Winnie-the-Pooh in China.
We are quite OT here, but I would say Christianity in general is a utopian ideology as well: all humans could be living in peace and harmony, if they would just believe in Jesus Christ. (I know there are differences, but this is the essence of what I was taught.)
And well, how many were killed in the name of the Lord? Quite a lot, I think. Now you can argue those were not really Christians. Maybe. But Marxists argue the same of the people responsible for the gulags. (I am not a Marxist, btw.)
"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."
And it simply depends on the specific utopian ideal. A good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.
GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.
Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.
Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it the task of making a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441
Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.
Could see this
They'll fix hallucinations and such later.
Imagine people not driving the Model T because it didn't have an airbag, lmao. Things take time to be developed and perfected.
Just imagine if we all used only proven products and never tried out cool experimental or incomplete stuff.
These models are built to sound like they know what they are talking about, whether they do or not. This violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless.
If it had been, we wouldn't now be facing an extinction event.