Why? We would have more diversity in this space if he leaves: it would get us another AI startup with huge funding and know-how from OpenAI, while OpenAI would become less Sam Altman-like.
I think him staying is bad for the field overall compared to OpenAI splitting in two.
I want a second (first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAi, Stability and Mistral and more competing on foundation models will further increase the pressure to open source.
It seems like there is a lull in returns to model size; if that's the case, then there's even less basis for having all the resources under a single umbrella.
As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.
For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.
Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.
Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.
What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?
I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].
The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.
And that's it.
Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.
All opinions and recommendations follow from this doomsday cult belief.
But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI is so far beyond anything we have experience with and that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.
Which is that any AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, nor does it commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.
And you may think of it as a weasel word. But I assure you that companies and governments (e.g. the EU) very much don't.
That is a good point; I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of this kind of productization work from OpenAI in the future.
But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available for a fee.
You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.
Once Microsoft pulls support and funding and all their customers leave they will be decelerating alright.
Which seems like it probably is a self-fulfilling prophecy. The private-sector lottery winners seem to be awarded kingdoms at an alarming rate.
There have been lots of people asking what Sam's true value proposition to the company is, and...I haven't seen anything other than what could be described above.
But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well-paid management?
My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.
Not sure that sama and gdb starting their own company in the same space will produce similar results.
"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."
Personally, I would expect a lot more development of GPT-4+ once this is split up from one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.
Seriously. It’s stupid talk meant to encourage regulatory capture. If they were really afraid they were building a world-ending device, they’d stop.
Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?
Frankly, I've heard of worse loyalties, really. If I were Sam's friend, I'd definitely be better off in any world he had a hand in defining.
However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.
[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.
Write a thought. You’re not clever enough for a drive-by gotcha.
Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?
I'm not sure what you mean by your second paragraph.
Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.
Sortition on corporate leadership may not be a bad thing.
That said, a consistent hand at the wheel is also good, and this kind of unnecessary chaos does no one any good.
The whole open vs closed ai thing... the fact is Pandora's box is open now, it's shown to have an outsized impact on society and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.
SV will happily throw as much $$ as possible in their direction. The exodus from OpenAi has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while having an eye on safety will happily come on board, esp. given how much they stand to gain financially.
Compared to...
The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.
Interestingly, this is exactly what all financial advice tends to warn about rather than encourage: that past performance does not indicate future performance.
I suppose if they entered an established market and dominated it from the bootstraps, that'd build a lot of trust in me. But as others have pointed out, Sam went from dotcom fortune, to...vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him Wozniak, or even Jobs, as many have been saying (despite investors calling him such).
Sam Altman is seemingly becoming the new post-fame Elon Musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.
If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.
I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.
Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.
Ideology (and fear, and greed) can cause well-meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world-ending technology they wouldn’t stop; they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure a la Oppenheimer to right the balance again.
And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.
We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.
> they will be aligned with us because they are designed such that their motivation will be to serve us.
They won't hurt us, all we asked for is paperclips.
The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.
Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.
The only reason it's something we talk about is because of the nature of the outcome, not how seriously the physicists were in their fear.
The thing is, the cultural ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's Monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and about seeing that as the end of the world.
AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give it. Those things are products of us being biologically evolved beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, or air; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, with no evolution involved (checksums, hashes, and error-correcting codes see to that). AI is simply the next step in the tech tree: just another tool, a powerful and useful one, but a tool, not a rampaging monster.
The fact is that most people couldn't do what Sam Altman has done at all, so at the very least that past success puts him among the few percent of people who have a fighting chance.
Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.
Easy: his contacts list. He has everyone anyone could want in it (politicians, tech executives, financial backers) and a preexisting positive relationship with most of them. When would-be alternative entrepreneurs need to make a deal with a major company like Microsoft or Google, it will be upper middle management and lawyers; a committee or three will weigh in, present it to their bosses, and so on. With Sam, he calls up the CEO, has a few drinks at the golf course, they decide to work with him, and they make it happen.
It’s important to put those disclaimers in context, though. The rules that mandated them came out before the era of index funds, and those disclaimers are specifically talking about fund managers. It’s true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.
Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.
All who are a year plus behind OpenAI.
But that doesn’t mean you can’t get some useful ideas about future performance from a person’s past results compared to other humans. There is no such effect in play here.
Otherwise, time for me to go beat Steph Curry in a shooting contest.
Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.
From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...
To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...
So is “unsafe” just another word for buggy then?
JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?
Ilya is certainly world-class in his field, and it may be worth listening to what he has to say.
Was it? The US (and initially the UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.
Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of “WWIII was begun by…”?
The bigger concern is something like Paperclip Maximizer. Alignment is about how to ensure that a super intelligence has the right goals.
Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the Three Laws and on the Frankenstein Complex.