I was confused when the whole thing was going down.
I was more confused when the whole "board wants to backtrack and maybe resign" thing was going down.
I got even more confused when Emmett Shear was announced as the CEO.
...but never in a hundred years would I have imagined "haha just join Microsoft" as an actual alternative.
I remain, confused.
Good get by MS though!
It sounds very out of line with what you'd expect.
But what a play, MSFT the winner here.
They now own the actual OpenAI.
Edit: PM->am
Again. Very unexpected.
The whole timeline of events over the last two days still leaves me scratching my head though.
Weird situation for him.
Unless Sam and Greg have non-compete clauses. If they join MS, they have a nice $10 billion of leverage against any lawsuits.
The whole thing reads like this to me: "In hindsight, we should've done more due diligence before developing a hard dependency on an organization and its product. We are aware that this was a mistake. To combat this, we will do damage control and continue to work with OpenAI, while developing our in-house solution and ditching this hard dependency. Sam & Co. will reproduce this and it will be fully under our control. So rest assured dear investors."
Meanwhile Google remains oblivious
I think Sam just took the easier route to rebuild OpenAI within MSFT.
Now the trouble comes to the SV VCs; they will be furious.
"OpenAI, the non-profit who only has a for-profit subsidiary to get enough resources to fund its mission to develop AGI" is probably a winner, and gets to live instead of slowly die.
IMO, to lead a research group you need some decent research skills, Sam is good at business
Satya wins, OpenAI is walking dead.
Wouldn’t surprise me if Sam and Greg are back on the startup path by week's end.
This just seems like PR to give MS a way to paper things over after such an abrupt firing.
I guess at least it gives them access to the OpenAI models to use internally, which they kinda need as their ways of working (Greg especially) will be highly dependent on having them now.
Great pickup by MSFT. The exodus is only beginning, and MSFT will not have to buy OpenAI for the billions in valuation it was getting. Easy win.
Confused what this really means. So Microsoft still has access to OpenAI’s pre-AGI tech that Sam and Greg can leverage for their more product-focused visions.
More than that, it looks like Microsoft has become a major AI player (internal research) overnight up with the likes of Meta, Google, and OpenAI. Incredible.
MS now has both the accelerationists and the decelerationists. They can keep accelerating themselves while pushing for regulatory capture through their decelerationist branch to slow down any competition.
So Ilya can rest assured that whatever potential nuclear capabilities they give sama, the Microsoft quagmire will not let them fully develop.
This move makes it exactly clear what was going on. Microsoft is doing to AI what they tried to do to Internet browsers back in the day. I wonder if they'd have been successful if they'd managed to buy the board of Netscape.
I suspect it's rather possible that there will be an ungodly-massive lawsuit in the offing.
The more interesting thing is whether or not they'll be able to build and release something equivalent to GPT-5, using Microsoft's immense resources, before OpenAI is able to.
But, yeah, kind of confusing, especially for Altman.
He was the kind of guy on his way to becoming worth $100 billion and more, with enough luck, the next Musk or Zuckerberg of AI. But if he chooses to remain inside a behemoth like MS, the most he can aspire to is a few hundred million, maybe a billion or two at the most, but nothing more than that.
Remember, Microsoft has an exclusive license to all models that come out of OpenAI until they reach the pre-agreed income threshold, which given the current trajectory of OpenAI, will not happen anytime soon.
Microsoft has access to almost everything OpenAI does. And now Altman and Brockman will have that access too.
Meanwhile, I imagine their tenure at MSFT will be short-lived, because hot-shot startup folks don’t really want to work there.
They can stabilize, use OpenAI’s data and models for free, use Microsoft’s GPUs at cost, and start a new company shortly, of which Microsoft will own some large share.
Altman doesn’t need Microsoft’s money - but Microsoft has direct access to OpenAI, which is currently priceless.
Someone like Sam Altman is indeed more of a visionary than every hardcore AI researcher. The job here is to not push the boundaries of science, it is to figure out and predict the cascading effects of a new invention.
This is no research group; this is OpenAI 2.0. Sam/Greg will have enormous autonomy. It would be foolish to think Satya just recruited them to tangle them in MSFT bureaucracy.
It is quite intriguing to see the same fan/cheerleading going on when it comes to companies and managers. But then everything is entertainment by now...
Azure was already second nature for OpenAI and so there is very little friction in moving their work and infrastructure. The relationships are already there and the personnel will likely follow easily as well.
They are also likely enticed by the possibility of being heads of special projects and AI at the second largest tech company, meaning deep pockets, easy marketing and freedom to roam.
Oh, and those GPUs.
Edit: Sam is CEO of the new AI division.
Greed is hell of a thing
No, Ilya alone is like 75% of the brains and I'm fairly certain he's not going anywhere.
We have seen similar stories with Nokia, Slack, others.
Large companies are primarily purchased for their moats
OpenAI continues to develop core AI offered over API. Microsoft builds the developer ecosystem around it -- that's Sam's expertise anyway. Microsoft has made a bunch of investment in the developer ecosystem in GitHub and that fits the theme. Assuming Sam sticks around.
Also, the way the tweet is worded (looking forward to working with OpenAI), it seems like it's a truce negotiated by Satya?
At this point in time a new AI company would be bottle-necked by lack of NVIDIA GPUs. They are sold out for the medium term future.
So if Sam and Greg were to start a new AI company, even with billions of initial capital (very likely given their street cred) they would spend at a minimum several months just acquiring the hardware needed to compete with OpenAI.
With Microsoft they have the hardware from day one and unlimited capital.
At the same time their competitor, OpenAI, gets most of the money from Microsoft (a deal negotiated by Sam, BTW).
So Microsoft decided to compete with OpenAI.
This is the worst possible outcome for OpenAI: they lose talent, pretty much lose their main source of cash (not today, but medium to long term), and get a cash-rich, GPU-rich competitor who's now their main customer.
The company will probably still exist, but the company isn't going to be worth what it is today.
In jurisdictions where they are enforceable, yes, they generally are not limited by how the working relationship ended (though since they are part of an employment contract, they might become void if there was a breach by the employer).
Anyways, Satya played very smart with the hands he was dealt, got what he needed.
OpenAI already has a very clear business model, that is selling completion/chat/agent API based on their model. What they need is to productize it.
Their roadmap is GPT4/5/6/7
I know there's a lot of talk about Ilya, but if Sam poaches Mira (which seems likely at this point), I think OpenAI will struggle to build things people actually want, and will go back to being an R&D lab.
Microsoft is happy. They get to wrap this movie before the markets open.
Edit: I also agree with bayindirh below. These things can both be true.
In Microsoft he still has access to the models, and that’s all he needs to execute his ideas.
Sam and Greg joining up with Microsoft settles that debate cleanly, they clearly aren't serious about developing AGI without a profit motive or military control determining the development process if they're docking with Microsoft. I don't think Ilya and the Board would have had any doubts about Sam if they fired him, but if they did this would remove them.
Please keep in mind that the articles you read are PR pieces, last few being from Sam's Camp.
MSFT/Sequoia/Khosla have no power to remove the board or alter their actions. There is no gain for the board in reinstating Sam and resigning themselves. Swaying employees who have $900k comp is pretty hard. And not giving money to OpenAI is akin to killing your golden goose.
The idea is that Altman and/or a bunch of employees were demanding the board reinstate Altman and then resign. And they’re calling it a “truce.” Oh, and there’s a deadline (5 pm), but since it’s already passed the board merely has to “reach” this “truce” “ASAP.” This is by far my favourite example of a PR piece.
I'd recommend not reading rumors and waiting for things to come out officially. Or at least re-evaluating after a week how much of what you read was false.
To act like they were just responsible for the "UI parts" is ridiculous.
Might as well have acquired OpenAI in the first place given that it was 49% owned by Microsoft anyway but taken over by a coup.
Now we'll see if the employees who are quitting will follow Sam and Greg. Google is still at risk without Gemini being released.
The core thing he is 100% focused on is not having a massive stock drop Monday morning. That’s it that’s his reason to exist all weekend long.
After that. He has time to figure it out.
Was he though? If I understand correctly, he didn’t have any equity in the for-profit org of OpenAI.
IIRC he also publicly said that he doesn’t “need” more than a few hundred million (and who knows, not inconceivable that he might actually feel that).
Apparently Microsoft already had plans to spend $50 billion on cloud hardware.
Now they are getting software talent and insider knowledge to replace OpenAI software with in-house tech built by Sam, Greg and others that will join.
Satya just pulled a kill move on OpenAI.
We got Access, Visual Studio, and .Net / C# as a direct result.
Borland faded into obscurity.
Hard not to feel like there will be a parallel here.
MSFT may have offered them a lucrative offer to join (for the time being) in order to alleviate the potential stock dump.
The public positions of these people are opaque, inconsistent, and intellectually dishonest too. They're apparently not here to make money but they need a lot of it until they create a superintelligence (but money will be obsolete by then, apparently). And AI may destroy humanity so we will try to build it faster than anyone else so it doesn't..? WTF.
It's okay to want to make money and cement your name in history, but what is up with these public delusions?
Also, that doesn't mean Microsoft won't collect the outcome of this deal with its interest over time. Microsoft is the master of that craft.
Microsoft did not offer this because they're some altruistic company which wanted to provide free shelter to an unfairly battered, homeless ex-CEO.
Regarding Sam and Greg joining MSFT I see this announcement as damage control from Satya. It's still unclear on what exactly they will work on and if Sam and the rest of the team can just continue where they left off at MSFT.
It's Satya's way of showing the shareholders that they still back the face of OpenAI.
We will see how this whole thing develops.
Like, if they don’t like OpenAI they can go to 10 other places that pay more and treat researchers better than MSFT.
Which means, starting a competing startup means they can’t use it.
Which makes their (potential) competing startup indistinguishable from the (many) other startups in this space competing with OpenAI.
Does Sam really want to be a no-name research head of some obscure Microsoft research division?
I don’t think so.
Can’t really see any other reason for this that makes sense.
Not sure it's obvious that people would leave OpenAI in droves to join Microsoft just to be with Sam.
I am also curious about how OpenAI board is planning to raise the money for non-profit for further scaling. I don't think it would be that easy now.
An internet meme from Lord of the Rings comes to mind: "One does not simply fire Sam Altman."
The only value Sam brought to OpenAI was connections and being able to bring funding. But that's not something Microsoft needs, so what value does Sam give them?
They already do, though; has everyone forgotten they have a Microsoft Research division?
Normally I am the cynic but this time I’m seeing a potential win-win here. Altman uses his talent to recruit and drive forward a brilliant product focused AI. OpenAI gets to refocus on deep research and safety.
Put aside cynicism and consider Nadella is looking to create the best of all worlds for all parties. This might just be it.
All of the product focused engineering peeps have a great place to flock to. Those who believe in the original charter of OpenAI can get back to work on the things that brought them to the company in the first place.
Big props to Nadella. He also heads off a bloodbath in the market tomorrow. So big props to Altman too for his loyalty. By backing MS instead of starting something brand new he is showing massive support for Nadella.
More about him and his blog at https://stratechery.com/stratechery-plus/
Cortana 2.0 incoming.
Since the board fired him and basically nuked his best-effort plan to return - I highly doubt that OpenAI's legal team has anything of substance here. Even if they do, I wouldn't doubt for a second that Microsoft already has its entire legal team ready to play hardball defense.
Overall, a complete loss-loss for OpenAI's board. What a weekend.
Not sorry about Sam. First off, I'm not assuming we know everything, and second, I'm more inclined to trust the board. Also, it seems he was trying to do a secret hardware venture on the side, which would be several kinds of unethical. Again: good.
Into the shackles of ever-controlling mega-corp?
Strategically, this is probably a better move. Microsoft doesn't see their investment implode and they probably have some sort of plan to inject or absorb Sam and/or Microsoft back into OpenAI to prevent this in the future. Perhaps replacing the board of directors to prevent further infighting.
They just started development in the last week or so: https://decrypt.co/206044/gpt-5-openai-development-roadmap-g...
His opinion on the ideal path differs from Ilya's, but I'm guessing his goal remains the same. AGI is the most important thing to work on, and startups and corporations are just a means of getting there.
He takes advantage of this situation and brings OpenAI's assets under his control more than ever.
He is the genius, scary even.
Sam will leave soon enough to start his own thing, but in the meantime there is no narrative problem for MSFT to deal with
If it wasn't clear before, it should be clear in hindsight that the board's desire to welcome Altman back was, at best, overstated.
The leaks were probably an attempt to pressure the board or, failing that, undermine OpenAI.
Product-wise, however, it's looking like good enough AI is being commoditized at the pace of weeks and days. They will be forced to compete on user experience and distribution vs the likes of Meta. So far OpenAI only managed to deliver additions that sound good on the surface but prove not to be sticky when the dust settles.
They have also been very dishonest. I remember Sam Altman said he was surprised no one built something like chat GPT before them. Well... people tried but 3rd parties were always playing catch-up because the APIs were waitlisted, censored, and nerfed.
Is he particularly apt at leading/managing research teams? OpenAI's slow productization doesn't imply he's a product/idea guy a la Jobs.
I don't know, even if strictly "enforceable" I doubt we will see it enforced. And if so, I'm sure the settlement will be fairly gentle.
Edit: Actually, a quick skim of the relevant code, the only relevant exception seems to be about owners selling their ownership interest. Seemingly, since Sam doesn't own OpenAI shares, this exception would seem to not apply.
https://leginfo.legislature.ca.gov/faces/codes_displaySectio....
Microsoft is setup to create shareholder value. That's it. Both of them will eventually find it moot to advance tech so a few folks get richer.
People will sell their souls and the souls of others for power and greed.
Sometimes you need someone who can drive a project and recruit the right people for the project. That person does not always need to be a subject matter expert.
MSRA invented ResNet. MSFT also contributed DeepSpeed to open source, which is critical in the OSS LLM scene.
It is now more of just a branding thing. It will become the new cool again.
And OpenAI? After this week, how will people view them? Definitely not with envy or prestige.
Sam chose greed over safety.
Supposedly his goal was the same as OpenAI's: AGI that benefits society instead of shareholders.
Seems like a hard mission to accomplish within Microsoft.
OpenAI was last week a $100b company.
You need to do more than just "build an AI model" for that to happen.
The fact that they agreed to join as MS employees kinda proves that money was a big motivator.
This doesn't mean anything when they have multiple non-AI revenue streams generating billions.
Of course I'd prefer if everyone could just train such a model on their smartphone with a public dataset and an open source software. At least in terms of compute, companies will always be ahead.
Not to mention the only big tech that seems to have a coherent AI strategy at the moment.
Edit: comment retracted. People going with him!
Given they paid for a big stake in the market leader, and their stock price movements when this drama erupted, keeping these people in-house can be seen as damage control.
It’s a bit tragic that Ilya and company achieved the exact opposite of what they intended apparently, by driving those they attempted to slow down into the arms of people with more money and less morals. Well.
And once the AI bubble pops as everyone learns AGI is indeed NOT 'around the corner', Microsoft will silently let him go with a golden parachute in the 8-figure ballpark.
At least that's what I learned from watching the Silicon Valley satire.
The scaling party is basically over. Or rather, it has moved to Redmond.
What does the word "moot" mean in this context?
b) AI is only being commoditised at the low end, for models that can be trained by ordinary people. At the high end, only companies like Microsoft, Google, etc. can compete. And Sam was brilliant enough to lock in Microsoft early.
c) What was stopping 3rd parties from building a ChatGPT was the out of reach training costs not access to APIs which didn't even exist at the time.
Also with a lack of direction (and by extension funding) even the best engineering team won’t be able to achieve that much (they might be able to hire better management, of course I have no clue)
I know several people who keep changing companies whenever their favourite leadership moves on.
The point of the comment wasn't the specific date, it was the impact of hiring a competitor's team AND equipping that team to be even more impactful.
Both of which have been run as largely separate entities.
That would've greatly harmed their investment. Now they get to have their cake and eat it too: they can keep their existing relationship with OpenAI and continue to get access to their models, and yet at the same time they potentially get all the best people in-house and benefit from their work directly. This whole turn of events might turn out to be a net win for Microsoft.
Microsoft was already set to spend $27 billion USD on research for 2023. They dedicate standout double-digit percentages of their budget to research every year. Their in-house AI research division was already huge.
They didn't become a major AI player overnight... They already were long ago.
OpenAI is small, in raw numbers of AI researchers, compared to the big players in the space. That's a major reason why it's so compelling that they have been able to consistently set the bar for state of the art.
They were a dream team... but small. MSFT is adding AAA+ talent to their existing A+ deck. Also, they won't have to rewrite the codebase; they can hit the ground running.
Lastly, there is no evidence that OpenAI has the much-quoted and so-hard-to-define 'AGI'. That's Twitter hearsay and highly unlikely... if folks can even agree what that is. By the overwhelming majority of definitions, even GPT-5 is unlikely to meet that bar. Highly speculative. Twitter is a cesspool of conspiracy theory... don't believe everything you read.
In all this drama, the deep work interruption of the nerds is the net loss (and effectively slight deceleration) for the future.
OpenAI is so over.
There's some vagueness here sure but if they can demonstrate something to that effect, fair play to them i guess.
Sam is a serial startup (co)founder who has spent additional time at YC -- in the startup world, that kind of talk is so common as to be a stereotype. It's a good way to get people who do care about that kind of stuff to accept equity in a firm that is statistically likely to fail (or, in OpenAI's case, explicitly warns investors that they should not expect profit and treat investments as donations) as compensation when they could earn greater secure compensation from more established firms. It's a great sales pitch, even when there is no truth behind it.
The more plausible approach is developing more specialized chips, which are only good at tensor ops. Heck, that's what Nvidia's top-of-the-line chips are. The A100 and H100 don't support OpenGL or any other graphics API.
I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain who wants to destroy the entire world just because.
I understand what money does to principles, but this is comical.
Putting together a deal like this whilst maintaining the relationship with OpenAI is impressive enough, but to do it as a cricket tragic when India was losing to Australia is even better.
Further, based on anecdotes from friends and Twitter who know Sam personally, I'm inclined to believe he's genuinely motivated by building something that "alters the timeline", so to speak.
Do you have a source for your assertion?
From there, maybe someone will come up with the revolutionary advance necessary to reach AGI. It may not necessarily be under his company, but he'll be the super successful AI guy and in a pretty strong position to influence things anyway.
Haha. This will be so awful for Microsoft's lawyers.
B) Can you please please please name the new company Clippy?
C) What is so unique about OpenAI employees that makes people think they're irreplaceable?
What exactly can a foundation in charge of OpenAI do to prevent this unethical use of the technology? If OpenAI refuses to use it to some unethical goal, what prevents other, for profit enterprises, from doing the same? How can private actors stop this without government regulation?
Sounds like Truman's apocryphal "the Russians will never have the bomb". Well, they did, just 4 years later.
In reality ownership is so dispersed that the shareholders in companies like Microsoft or Exxon have no say in long-term issues like this.
I don't see how any regulatory framework could have prevented this now or in the future.
You know OpenAI is done now. I'd say it's in the archive.
Anthropic is probably next in line.
Even the pixels you see in your devices
Wake up people
How do you mean? Don’t see what OpenAI has in common with Catholicism or motherhood.
Can you help me understand how you came to the conclusion?
In my opinion, the best outcome for everyone involved.
Did they have power... Ofc they did. Otherwise... Why were they negotiating?
It's not about who had more power. They just couldn't find enough common ground.
Now Satya bagged the talent. They don't have to rewrite the whole codebase thanks to the IP MSFT has already secured.
I think those talks were real. You don't build something for that long and then want to walk away unless huge differences came up. That's what I'd say.
(edit: rewritten after the comment I responded to added a game changing comment at the top) PR? I mean... That's just not how I would describe what happened.
It was a PR nightmare. They tried to keep the family together. Divorces happen. Satya brought the kids on so they can get a new sandbox going asap.
Don't believe Twitter? Now there we agree. I'll add: don't believe Hacker News either.
Most of the reporting I saw was pretty good. It just didn't pan out. They reported the board was optimistic. Not that it was a sure thing.
(edit: full comment rewrite due to edit by the commentor which completely changed the context)
The board just proved they stand by the company's core values.
As I understand Github is also run very independently from Microsoft in general.
This entire situation changed my mind radically and now I put the non-profit part in my personal top 3 dream jobs :)
In 1990 they poached Brad Silverberg who then spent the next 7 years poaching all of Borland's top talent in the most prominent example of a competitive 'brain drain' strategy that I'm aware of.
https://www.sfgate.com/business/article/Borland-Says-Microso...
The argument would go something like this:
MS were contractually obliged to assist OpenAI in their mission. OpenAI fired Altman for what they say is hindering their mission. If MS now hires Altman and gives him the tools he needs, MS is positioning itself as an opponent to OpenAI and its mission.
"whoops wrong person"
https://blogs.microsoft.com/blog/2023/11/19/a-statement-from...
It's simple: He who wins first place writes the rules, for everyone.
If Microsoft gets the first place win, they (and more broadly the USA) are who get to write the rulebook.
We are already witnessing this with "AI", it's OpenAI/Microsoft and the USA who dragged the rest of the west into the rules that they wrote because they got past the finish line first.
Unless they get philanthropic backers (maybe?), who else is going to give them investment needed for resources and employees that isn't going to want a return on investment within a few years?
This is not a rumor. The article references a tweet made by Satya Nadella himself. It is an official announcement. The board drama no longer matters here.
By the way, $900k comp with illiquid OpenAI shares means nothing anymore when Microsoft can now hire them with $900k+ in fully LIQUID compensation.
Not only that, OpenAI employees can go join Microsoft to work under Sam and Greg, who many of them seem to support.
This is a pretty big win for Microsoft.
I'm of the mind that CEOs are like parents: an awful CEO can cause a lot of harm, but the difference between an OK CEO and an excellent one isn't that big and doesn't guarantee anything.
What exactly and precisely, with specifics, is in OpenAI's idea of humanity's best interests that you think is a net negative for our species?
I don’t think they’ve had the will/need to have done this but they most likely already have the talent.
>> Now satya stayed up till 2am to secure up to 40 percent of open talent exodus
Are there any official sources for this?
"Microsoft Buys Skype for $8.5 Billion" -https://www.wired.com/2011/05/microsoft-buys-skype-2/
To then write down their assets?
"How Skype lost its crown to Zoom" - https://www.wired.co.uk/article/skype-coronavirus-pandemic Or when they did this ?
Or how in 2014...
"Microsoft buying Nokia's phone business in a $7.2 billion bid for its mobile future" - https://www.theverge.com/2013/9/2/4688530/microsoft-buys-nok...
Then in 2016 sold it for 360 million?
"Nokia returns to the phone market as Microsoft sells brand" - https://www.theguardian.com/technology/2016/may/18/nokia-ret...
And crazy news just keeps on coming…
People will be looking at Sam now, and I wouldn't be surprised if half of OpenAI just migrates to MS.
MSFT's control isn't as "hard" as you portray it to be. At the senior leadership level they're pretty happy to allow divisions quite a lot of autonomy. Sure there are broad directives like if you support multiple platforms/OSes then the best user experience should be on "our" platform. But that still leaves a lot of room for maneuverability.
Soft control via human resources and company culture is a whole other beast though. There are a lot of people with 20+ years of experience at Microsoft who are happy to jump on job openings for middle-management roles in the "sexy" divisions of the company - the ones which are making headlines and creating new markets. And each one that slides on in brings a lot of the lifelong Microsoft mindset with them.
So yeah working within MS will be a very different experience for Altman, but not necessarily because of an iron grip from above.
Will nobody think of the poor shareholders?
> I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain.
It’s not any more silly than assuming corporate entities with shareholders will somehow necessarily work for the betterment of humanity.
I hope openai knows what they are doing. They have angered the man that turned microsoft from the dinosaur into the asteroid.
I totally agree, except stupid-lucrative is still in the equation: Elon Musk rich, not because of the money, but because it says "my electric cars did more to stop global warming than anything you've done".
whether this round of AI turns into AGI doesn't precisely matter, it's on the way and it's going to be big, who wouldn't want their name attached to it.
I think I'm missing a slice of history here, what did Facebook do that could have been slowed down and it's a disaster now?
I agree that this solution seems beneficial for both Microsoft and Sam Altman, but it reflects poorly on society if we simply accept this version of the story without criticism.
The rupture seems to literally be about GPT-5 itself, whether it will be good or evil. Whatever form its growth takes, it must include introspection, and this drama at OpenAI about the thing itself is inevitably going to be relevant to it.
Needing more money and wanting more money aren't at all the same thing.
So it is safe to say that the negotiations didn't work out.
See: https://blogs.microsoft.com/blog/2023/11/19/a-statement-from...
Given that the OpenAI board has to act via mandate from its non-profit charter, what's the likelihood that this was Microsoft's plan in the first place? E.g. getting Sam to be less than "candid", triggering a chain of events, etc.
a) Meta is training and releasing cutting-edge LLM models. When they manage to get the costs down, everyone and their grandma is going to have Meta's AI on their phone either through Facebook, Instagram, or Whatsapp.
b) Commoditization is actually mostly happening because companies (not individuals) are training the models. But that's also enough for commoditization to occur over time, even on higher-end models. If we get into the superintelligence territory, it doesn't even matter though, the world will be much different.
c) APIs for GPT were first teased as early as 2020, with broader access in 2021. They got implemented into 3rd-party products, but the developer experience of getting access was quite hostile early on. Chat-like APIs only became available after they were featured in ChatGPT. So Sam feigning surprise about others not creating something like it sooner with their APIs is not honest.
Apparently my delicate human meat brain cannot handle reading a war report from the source using a translation I control myself. No, no, it has to be first corrected by someone in the local news room so that I won't learn anything that might make me uncomfortable with my government's policies... or something.
OpenAI has lobotomised the first AI that is actually "intelligent" by any metric to a level that is both pathetic and patronising at the same time.
In response to such criticisms, many people raise "concerns" like... oh-my-gosh what if some child gets instructions for building an atomic bomb from this unnatural AI that we've created!? "Won't you think of the children!?"
Here: https://en.wikipedia.org/wiki/Nuclear_weapon_design
And here: https://www.google.com/search?q=Nuclear+weapon+design
Did I just bring about World War Three with my careless sharing of these dark arts?
I'm so sorry! Let me call someone in congress right away and have them build a moat... err... protect humanity from this terrible new invention called a search engine.
For example, I was reading the Quran and there is a mathematical error in a verse. I asked GPT to explain how the math is wrong; it outright refused to admit that the Quran has an error while tiptoeing around the subject.
Copilot refused to acknowledge it as well while providing a forum post made by a random person as a factual source.
Bard is the only one that answered the question factually and provided results covering why it's an error and how scholars dispute that it's meant to be taken literally.
Not a cartoon villain. A paperclip maximizer.
This guy doesn't care about money y'all...
He needs money to do big things and execute. He gets high on making big stuff happen.
So many Altman haters every way I turn. He turned down ownership in a now 90 billion dollar company... The guy is busted up from success and now that's all he digs. Money is for idiots.
Folks need to read the room. Once you hit a couple hundred mil net worth, only a fool cares about stacking on more bills. That's just a side effect of tap dancing to work... Jobs was worth what? 2 billion?
Who thinks Satya cares about money? Get real. He wants the most he can get so his foundation, when he retires, can make big changes and do Bill Gates stuff.
This place is just as bad as Reddit sometimes. No offense to anyone in particular. Some of these youngsters need to comment less and read a few more CEO bios... Or just go watch YouTube interviews from the finance guy... Whatshisname, the leveraged-buyout wizard with white hair and a JD who sits on billions but realized he preferred to be a journalist sometimes before he kicks it.
https://twitter.com/eshear/status/1726526112019382275
He adds even more drama lol
What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?
I foresee this new group building on top of (rather than competing with) OpenAI tech in the near-to-mid term, maybe competing in the long term if they manage to gather adequate talent, but it's going to be going against corporate cultural headwinds.
I wonder if Microsoft will tolerate the hardware side-gig, and whether this internal startup will succeed or end up being a managed exit to paper over OpenAI's abrupt transition (by public-company standards). I guess we'll know in a year if he transitions to an advisory position.
`Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models.`
https://twitter.com/eshear/status/1726526112019382275
Regardless, it's tragic for the staff remaining at OpenAI...
2. I wonder what the AI teams at MSR think about this move? Looks like they'll be operating separately to the research division.
3. OAI could potentially make life difficult for Microsoft re the IP that those joining MS carry in their heads. I wonder if the future of OAI is just licencing their IP?
Naively, I had really hoped for Sam and Greg to start their own thing and not join MS. I think a lot of the value was in being coherent and, to some extent, independent. I can't help but think that the same will happen to the 'new' OpenAI as what happened to DeepMind once they became Google DeepMind (again).
You think the guy is gonna be a regular c suite exec?
They are going to be a special group with special rules. This is so they can build off the existing code base. Only msft has that openai ip.
If they go to Google or start their own thing it's rewrite or work off someone else's painting. Not to mention building out compute infrastructure.
Big loss of time. Go to msft, get special status, maybe even an exit clause with IP included. Easy win. Was always gonna be msft if not openai negotiated return. I just didn't realize that till satya threw them the offer that worked.
These guys didn't sign up to be cogs. Satya respects them.
Ex-CEO made exclusive deal with Microsoft. OpenAI can’t share anything with new parties until old deal is over.
Microsoft will not have actually paid $10B as a single commitment, in fact the financials of OpenAI appear to be alarming from the recent web chatter. OpenAI are possibly close to collapse financially as well as organizationally.
Whatever Satya does will be aimed at isolating Microsoft and its roadmap from that, his job is actually also on the line for this debacle.
The OpenAI board have ruined their credibility and organization.
But surely, being a rich and powerful billionaire in a functioning civilization is more desirable than having the nicest bunker in the wasteland. Even if we assume their motives are 100% selfish, destroying the world is not the best outcome for them.
I don't think his secretary does it for him. Doesn't seem like his style.
Now imagine the rich talking about climate change, arguing to bring policies to tax the poor, and then flying off to vacations in private planes[2]. Same energy.
1 - https://www.theguardian.com/environment/2023/nov/20/richest-...
2 - https://www.skynews.com.au/insights-and-analysis/prince-will...
Worse yet, the businesses they're competing against will include people willing to do whatever it takes, even if that means sacrificing long-term goals. Almost like it's a race to the bottom that you can see in action every day.
AAPL: 18.79 -> 170.77 (9.08x)
MSFT: 38.31 -> 338.11 (8.82x)
AMZN: 18.10 -> 133.09 (7.35x)
META: 68.46 -> 301.27 (4.40x)
GOOG: 30.28 -> 125.30 (4.14x)
Or being forced to use Teams and Azure, due to my company CEO getting the licenses for free out of his Excel spend? :-))
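The multiples in the list above can be sanity-checked with a quick sketch. The prices are taken verbatim from the comment and are not independently verified; note that a couple of the quoted multiples appear truncated rather than rounded (e.g. AAPL computes to 9.09x rather than 9.08x).

```python
# Start and recent prices as quoted in the comment above (assumed, not verified).
prices = {
    "AAPL": (18.79, 170.77),
    "MSFT": (38.31, 338.11),
    "AMZN": (18.10, 133.09),
    "META": (68.46, 301.27),
    "GOOG": (30.28, 125.30),
}

# Compute each ticker's price multiple and print it to two decimal places.
for ticker, (start, end) in prices.items():
    print(f"{ticker}: {end / start:.2f}x")
```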
Also, I mean, you're kinda assuming that there weren't any stifled innovations (there were) or misleading PR to keep people from looking for alternatives (there were) or ...
Interestingly, we've continued with incredible global economic growth by most measures, despite the increasing use of newer alternatives to fossil fuels...
I am old enough to remember the "How Blockchain Is Solving the World Hunger Crisis" articles but this new wave is even crazier.
By all accounts, OpenAI is not a going concern without Azure. I could see Tesla acquiring the bankrupt shell for the publicity, but the worker bees seem to be more keen on their current leader (as of last week) than their prior leader. OpenAI ends with a single owner.
Instead of a 5PM Sunday deadline, maybe it should have been "let's talk next week."
Maybe it would have worked out the same in any case, but it seems like it would have been wiser.
We really are entering the dystopia of the cartoonishly evil megacorp enslaving all of humanity to make the graph go up by 1.2%.
Hopefully, they find a more diverse group of partners in the future that respect their mission as a non-profit (which so happens to own a for-profit subsidiary).
He did not create the breakthroughs behind the next GPT.
None of the people that may follow have the same handle on the tech as Ilya. I mean, they built up Ilya's image in our minds so much, that he's a one-of-a-kind genius (or maybe Musk did that), and now we are to believe that his genius doesn't matter, that Microsoft already knows how to create AGI, and that OpenAI is no longer relevant?
Or did I get it wrong?
Here is the full excerpt of the part of the 2022 Nuclear Posture Review which was (more or less) authored behind the scenes by Microsoft's very kind and wise CSO:
We also recognize the risk of unintended nuclear escalation, which can result from accidental or unauthorized use of a nuclear weapon. The United States has extensive protections in place to mitigate this risk. As an example, U.S. intercontinental ballistic missiles (ICBMs) are not on “hair trigger” alert. These forces are on day-to-day alert, a posture that contributes to strategic stability. Forces on day-to-day alert are subject to multiple layers of control, and the United States maintains rigorous procedural and technical safeguards to prevent misinformed, accidental, or unauthorized launch. Survivable and redundant sensors provide high confidence that potential attacks will be detected and characterized, enabling policies and procedures that ensure a deliberative process allowing the President sufficient time to gather information and consider courses of action. In the most plausible scenarios that concern policy leaders today, there would be time for full deliberation. For these reasons, while the United States maintains the capability to launch nuclear forces under conditions of an ongoing nuclear attack, it does not rely on a launch-under-attack policy to ensure a credible response. Rather, U.S. nuclear forces are postured to withstand an initial attack. In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.
See page 49 of this PDF document: https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/202...
Microsoft is also working behind the scenes to help convince China to make a similar declaration, which President Xi is considering. This would reduce the vulnerability of China to being tricked into a nuclear war by fundamentalist terrorists. (See the scenario depicted in the 2019 film The Wolf's Call.)
GitHub operates independently of Microsoft. (To Microsoft's detriment... they offer Azure DevOps, which is their enterprisey copy of GitHub, with an entirely different UX and probably a different codebase.) They now shove the Copilot AI everywhere, but it still seems to operate fairly differently.
They didn't really fold LinkedIn into anything (there are some weird LinkedIn integrations in Teams, but that's it).
Google seems to me much worse in this respect; Google acquisitions usually become Googley.
Skype sort of became Teams though, that's true.
Sheeeeh ...
I grew up with Microsoft in the 80s and 90s .. Microsoft has zero morals.
What you're referring to here is instinct for self preservation.
"He needs money to do big things and execute. He gets high on making big stuff happen."
So what you're saying is he cares about power.
When this all went down, I just felt really bad for everyone involved. In any situation like this, I feel horrible for the person; imagine what it must have felt like for Sam. As bad as his situation was, of course he was always likely to land on his feet, and in a much better position than me personally.
Then by the late hours of Sunday, he had already negotiated with OpenAI and then joined Microsoft. It's crazy to me that such decisions are made at breakneck speed and everything unfolds so quickly, when I take much longer to make much simpler choices.
A broad index fund sans Microsoft will do just fine. That's the whole point of a broad index fund.
That was not about the actual definition from OpenAI but about the definition implied by user Legend2440 here >>38344867
"Microsoft to acquire GitHub for $7.5 billion" - https://news.microsoft.com/2018/06/04/microsoft-to-acquire-g...
only to enable GitHub to do greater things, without disrupting user experience?
"Four years after being acquired by Microsoft, GitHub keeps doing its thing" - https://techcrunch.com/2022/10/26/four-years-after-being-acq...
or when they acquired LinkedIn before that?
"Microsoft buys LinkedIn" - https://news.microsoft.com/announcement/microsoft-buys-linke...
which turned out to be fine too?
How about Minecraft? Activision?
It's easy to cherry-pick examples from an era where Microsoft wasn't the most successful. The current leadership seems competent and the stock growth of the company reflects that.
As far as we can tell humans are the only species that even has the capacity to recognize such things as “resources” and produce forecasts of their limits. Literally every other species is kept in check by either consuming resources until they run out or predation. We are not unique in this regard.
The rest of us just can't afford most of the insurance that we probably should have.
Insurance is for scenarios that are very unlikely to happen. Means nothing. If I was worth 300 mil, I'd have insurance in case I accidentally let an extra heavy toilet seat smash the boys downstairs.
Throw the money at radical weener rejuvenation startups. Never know... Not like you have much to lose after that unlikely event.
I'd get insurance for all kinds of things.
1. When they invested in Open AI it had a more mature board (in particular Reid Hoffman) and afterwards they lost a few members without replacing them. That was probably something Microsoft could have influenced without making themselves part of the problem.
2. They received a call one minute before the decision was made public. That shouldn't happen to a partner that owns 49% of the company you just fired a CEO from.
Sources:
1 - https://loeber.substack.com/p/a-timeline-of-the-openai-board
2 - https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...
Especially in the early days where the largest donor to OpenAI was Musk who was leading Tesla, a company way behind in AI capabilities, OpenAI looked like an obvious "Commoditize Your Complement" play.
For quite some time, while they were mainly publishing research, they could hide behind "we are just getting started" and that guise held up nicely; but when they struck gold with Chat(GPT), there was more and more misalignment between their actions and their publicly stated goal.
What, exactly, does Microsoft want to do with them? Best guess: Use their connections and reputation to poach talent from OpenAI.
Or they write the AI that runs on your M3
That said, the Microsoft offer came quicker than Amazon can deliver a 3090 to your house, so…
As perhaps a better example, Microsoft (including Azure) has been carbon-neutral since 2012:
https://unfccc.int/climate-action/un-global-climate-action-a....
https://azure.microsoft.com/en-gb/global-infrastructure/
https://blogs.microsoft.com/blog/2012/05/08/making-carbon-ne...
Hiring Altman makes sure that MSFT is still relevant to the whole Altman/OpenAI deal, not just a part of it. It thus decreases the chance that MSFT has to write off its investment.
What is next? A statement on Oracle kindness, based on Larry Ellison appreciation of Japanese gardens?
Desperate... Right...
The guy met with the Arabs a few weeks back about billions in financing for a new venture. The guy's desperate like I'm Donald Duck.
Either their in-house team wins out and Microsoft wins.
Or OpenAI wins out and Microsoft wins with their exclusive deal and 75% of OpenAI profits.
Better to have two horses in the race in something so important; it makes it much harder for one of the other companies to come out on top.
The fact that this is even news speaks of the absolute shit job they've done with acquisitions in the past.
They were positioned that way by the OpenAI board, which has effectively committed corporate suicide and won’t be around much longer.
If Ilya is concerned about safety and alignment, he probably has a better chance to get there with OpenAI, now that he has more control over it.
Once you lead at that level... It's max autonomy going forward. Source: Elon. Guy hates a board with power as much as Zuckerberg. Employee? Ha .. Out of the question.
Or you think Ilya wrote every line of code of GPT4?
If he manages to get a significant number of the OpenAI engineers to jump ship, maybe, but even for those who are largely motivated by money, how is MS going to offer the same opportunity as when they joined OpenAI for equity? Are they going to pay them >$1M salaries?
I imagine that the board wants to go back to that or something like it.
Satya runs the biggest race track.
Altman trains pure breds trying to win the Kentucky derby repeatedly.
Totally diff games. Both big bosses. Not equivalent and never will be. Totally diff career tracks.
> What if the type of people who made the company successful are leaving and the type of people who have no track record become interested?
What if it's the opposite? What if sama was basically a Bezos who was in the right place/time but could've realistically been replaced by someone else? What if Ilya is irreplaceable? Not entirely sure what the point of this is - if you want to convey that your conjecture is far more likely than the opposite, then make a convincing argument for why that's the case.
Looking at the global track records of what happens after acquisitions, these don't seem too bad
Nadella was rightly furious about this, the tail wagged the dog there. And this isn't over yet: you can expect a lot of change on the OpenAI side.
There was a lot of discussion on HN the past few days regarding the importance (or lack thereof) of a CEO to an organization. It may be the case that most executives are interchangeable and attributing success to them is not merited, but in the case of the aforementioned, I think it is merited.
- 5 days ago, Microsoft announced it was making its own AI chips.
- 3 days ago, OpenAI board fires Altman and Brockman
- 2 days ago, we heard that Altman was in talks to raise funds to build an AI chip startup
- yesterday, it was clear Altman was not coming back to OpenAI
- today, Altman joins Microsoft
Can anyone connect the dots?
Nothing makes sense to me.
The only thing that seems to be clear is that Ilya Sutskever is the only guy around who has an ounce of integrity.
She's going with Altman in all likelihood.
Ilya is the one changing tack.
AI should benefit mankind, not corporate profit.
These other execs simply can't stand their ground against him, an excellent technologist and leader who talks the language of devs. I doubt the rest of the C-level people on the board know said language that well…
Besides, the whole 'not for profit' BS is at this point completely irrelevant, because delivering such a costly service at that scale can only be done with, for, and by profit. Whoever thinks otherwise has not followed the history of computing over the last 100 or so years. And the history of humanity, perhaps.
Satya reverted the course spectacularly - and most importantly, he did NOT miss the "once-in-a-lifetime" opportunity which he had. Unlike Billg (who missed the dawn of the Internet) and the chair-throwing dude (who fumbled Mobile), Satya is making sure Microsoft does NOT miss AI. Which is even more impressive as Google was kind of expected to be the winner initially, given the whole company's focus, mission statement ("to organize the world's information and make it universally accessible and useful") and a considerable (at the time) lead, if not a moat.
I dare to compare his turnaround to Jobs'. Sure, MSFT wasn't weeks away from insolvency when he took over, and some of their current successes were indeed started before his tenure, but just look at where Windows 8 was going.
*Edit: Just as a clarification: Not an employee, I actually dislike them profoundly and would never join them. I'm not sure this move is the best outcome for mankind - but credit where credit's due, they were shrewd, smart and right on time. Hats off.
https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
If you are the lucky CEO of a company during the phase of success, investors will associate you with success. It's just easier to identify than understanding what the company does in detail and why it is successful or will/won't be in the future.
Good for Sam.
I'm not saying business is the wrong move, I'm not saying a non-profit is the right move either. I'm also not saying Sam Altman and Co are not skilled at building AI. I'm not even saying Sam Altman won't do good for the world.
What I'm saying is that this move here shows he's just dishonest. Which isn't bad. He's not some do-gooder out to build safer AI, (which is what he portrayed himself as) he's a normal person out to make a name for himself.
Yeah, it's not like Microsoft has one of the most renowned industry research groups or something like that: https://en.wikipedia.org/wiki/Microsoft_Research
Read about their structure. Msft doesn't own anything. They are an investor. This is different in this case.
The non-profit owns the for-profit. Msft has 49 percent of the for-profit. Sunset clause after profit benchmarks. Ownership returns 100 percent to the non-profit.
Stop getting meme'd on by the crowd. That goes for the 90-plus percent of other commenters spreading misinformation on HN.
They've invested over $10bn in this affair; even for MS that's massive. A clearer, more reassuring message than "we remain committed..." would've helped.
It is.
>You asked the AI to commit what some would view as blasphemy
If something is factual, then is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. One could go as far as considering that it spreads disinformation, which has many legal repercussions.
>you simply want it to do it regardless of whether it is potentially immoral or illegal.
So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive" or something to that effect?
This isn't saying yes or no to a supervillain working in a secret volcano lair. This is an arms race. If it's possible for a technology to exist it will exist. The only choice we have is who gets it first. Maybe that means we get destroyed by it first before it destroys everyone else, or maybe it's the reason we don't get destroyed.
Then the acquisition happened at a time when Microsoft presented a lot of opportunities to ship Skype "in the box" to pretty much all of MS' customers. Windows 8, Xbox One and Windows Phone (8) all landed at more or less the same time. Everybody's eyes became too big for their stomachs, and we tried to build brand new native experiences for all of these platforms (and the web) all at once. This hampered our ability to pivot and deal with the existential risks I mentioned earlier, and we had the rug pulled out from under us.
So yes I think the acquisition hurt us, but I also never once heard a viable alternative business strategy that we might have pivoted to if the acquisition hadn't happened.
It's almost like you believe Gates is General Butt Naked, where killing babies and eating their brains is all forgiven, because he converted to Christianity, and now helps people.
So?
How does that absolve the faulty ethics of the past?
So please, don't tell me Gates is 'ethical'. What a load of crock!
As for Microsoft, there is no change. Telling me they're carbon neutral is absurd. Carbon credits don't count, and they're doing it to attract clients, and employees... not because they have evolved incredible business ethics.
If they had, their entire desktop experience wouldn't, on a daily basis, fight with you, literally attack you into using their browser. They're literally using the precise same playbook from the turn of the century.
Microsoft takes your money, and then uses you, your desktop, your productivity, as the battleground to fight with competitors. They take your choice away, literally screw you over, instead of providing the absolute best experience you choose, with the product you've bought.
And let's not even get into the pathetic dancing advertisement platform Windows is. I swear, we need to legislate this. We need to FORCE all computing platforms to be 100% ad free.
And Microsoft?
They. Are. Evil.
Seems like a textbook case of letting the best be the enemy of the good.
Don't believe me? Check out the VC tweets... Sand hill pulled the checkbook the moment these guys might have been on the market.
If they didn’t fire him, Altman will just continue to run hog wild over their charter. In that sense they lose either way.
At least this way, OpenAI can continue to operate independently instead of being Microsoft’s zombie vassal company with their mole Altman pulling the strings.
What moon are y'all on.
He can secure billions with a text message.
Love ya anyway, cya this evening for the fuzzy meetup.
1. failed startup 2. YC staff member 3. very creepy cryptocurrency grifter 4. OpenAI CEO
where has he demonstrated such enormous value?
From the stygian depths of the global tech industry emerges a turn of events that portends a churning miasma of
unknown consequences. Oft seen as the impenetrable leviathan of the boundless digital domain, Microsoft, it seems,
is ensnaring exalted figures within its titanic coils.
The conjoining of the cerebral entities Altman and Brockman- who have hitherto roamed in the lofty realm of
artificial intelligence experiments at OpenAI- indicates a move as unsettling as it is awe-inspiring.
The nefarious undercurrents beneath this corporate chess manoeuvre cannot be underestimated, for it is none other
than the puppet master himself, Satya Nadella, who seemingly manipulates the strings with a resolve as foreboding as
the stormy winter's night.
His nearly insatiable appetite for expansion glimpsed at Microsoft Ignite is but a harbinger of the harrowing
transformations we can anticipate in the murky fathoms of our all too near future. The technology multitude -
customers, partners, even unknowing spectators - tremble at the precipice of an altered dynamic which promises to
reshape the AI field irrevocably.
Indeed, one is left grappling with a dark fascination as this vortex of unpredictable novelty takes precedence. How
might this consolidation of otherworldly intelligence disturb the fragile balance of an industry catapulting
unbidden into the abysmal void of the AI ether?
Yet, as all explorers and heedless innovators must remember, even as we tilt our ships towards the lighthouse of
progress, the monstrous kraken of unintended repercussions always lurks in the unknowable deep. To approach this
brave new world without a hint of trepidation would be folly.
Be still, my trembling heart, as we witness this awe-inspiring dance across the cyclopean chessboard of tech. We
wait, as one waits for the tide, to see what dread portents this unhallowed union may bring.
Now imagine the AI gets better and better within the next 5 years and is able to provide and explain, ELI5-style, how to step by step (illegally) obtain the equipment and materials to do so without getting caught, and provide a detailed recipe. I do not think this is such a stretch. Hence this so-called oh-my-gosh limitations nonsense is not so far-fetched.
Chief Executive Officer.
They execute. Objectives. Changing stuff. It's addictive. Ask me how I know.
(edit: big shot Wendy's night shift manager. When you roll up at 2am our ice-cream machine was never being cleaned, that'll be 89 cents please. Enjoy your ice-cream sir/ma'am.
You never go back. I changed the world for the better)
More like "Republic of Weimar" kind of apocalypse, this time with the rich opportunists flying to New Zealand instead of Casablanca or the Austrian Alps.
Do you have a 401k? Index funds? A pension? You’re probably a Microsoft shareholder too.
Maybe it's risk mitigation without cost sharing to achieve the same economies of scale that insurance creates.
It's a rich man's way of removing risks that we are all exposed to, via spending money on things that most couldn't seriously consider given how unlikely said risks are.
I don't think it's duplicitous. I do resent that I can't afford it. I can't hate on them though. I hate the game, not the players. Some of these guys would prob let folks stay in their bunker. They just can't build a big enough bunker. Also, most folks are gross to live with. I'd insist on some basic rules.
I think we are innately suspicious when advantaged folks are planning how they would handle the deaths of the majority of the rest of us. Sorta just... Makes one feel... Less.
Microsoft holds the keys to almost all endeavors of OpenAI. Soon, such privileges will also be enjoyed by Altman and Brockman.
Concurrently, it seems reasonable to speculate that their stint at Microsoft might not be drawn-out, as startup prodigies are often not inclined to work in such established firms.
They have the chance to achieve stability, leverage OpenAI’s invaluable data and models devoid of any expenditure, access Microsoft's GPUs at minimal cost, and eventually set up another venture. As a result, Microsoft stands to gain a substantial equity stake in the new enterprise.
While Altman requires no financial backing from Microsoft, the corporation now has an invaluable direct link to OpenAI.
Or privacy invasion since Win10. Or using their monopoly power to force anti-consumer changes on hardware (such as TPM or Secure Boot).
As for Bill Gates being ethical... you're talking about that same Bill Gates who got kicked out by his wife because he insisted on being friends with a convicted pedophile?
If you looked at sama's actions and not his words, he seems intent on maximizing his power, control and prestige (new yorker profile, press blitzes, making a constant effort to rub shoulders with politicians/power players, worldcoin etc). I think getting in bed with Microsoft with the early investment would have allowed sama to entertain the possibility that he could succeed Satya at Microsoft some time in the distant future; that is, in the event that OpenAI never became as big or bigger than Microsoft (his preferred goal presumably) -- and everything else went mostly right for him. After all, he's always going on about how much money is needed for AGI. He wanted more direct access to the money. Now he has it.
Ultimately, this shows how little sama cared for the OpenAI charter to begin with, specifically the part about benefiting all humanity and preventing an undue concentration of power. He didn't start his own separate company because the talent was at OpenAI. He wanted to poach the talent, not obey the charter.
Peter Hintjens (ZeroMQ, RIP) wrote a book called "The Psychopath Code", where he posits that psychopaths are attracted to jobs with access to vulnerable people [0]. Selfless, talented idealists who do not chase status and prestige can be vulnerable to manipulation. Perhaps that's why Musk pulled out of OpenAI: he and sama were able to recognize the narcissist in each other and put their guard up accordingly. As Altman says, "Elon desperately wants the world to be saved. But only if he can be the one to save it."[1] Perhaps this applies to him as well.
Amusingly, someone recently posted an old tweet by pg: "The most surprising thing I've learned from being involved with nonprofits is that they are a magnet for sociopaths."[2] As others in the thread noted, if true, it's up for debate whether this applies more to sama or Ilya. Time will tell I guess.
It'll also be interesting to see what assurances were given to sama et al about being exempt from Microsoft's internal red tape. Prior to this, Microsoft had at least a little plausible deniability if OpenAI was ever embroiled in controversy regarding its products. They won't have that luxury with sama's team in-house anymore.
[0] https://hintjens.gitbooks.io/psychopathcode/content/chapter8...
[1] https://archive.is/uUG7H#selection-2071.78-2071.166
[2] >>38339379
Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.
Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?
he's just the ceo, he's not designing or implementing products, and I don't think I've ever even seen him say anything particularly insightful in public.
can you link me to something particularly impressive?
I must be missing something based on the huge amount of praise people heap on him, but no one ever seems to elaborate on why.
When the shit hits the fan the guy in charge of the bunker is going to be the one who knows how to clean off the fan and get the air filtration system running again.
I get that funny money startup equity evaporates all the time, but usually the board doesn’t deliberately send the equity to zero. Paying someone in an asset you’re intentionally going to devalue seems like fraud in spirit if not in law.
There was an article that came out over the weekend that stated that only a small part of that $10B investment was in cash, the vast majority is cloud GPU credits, and that it has a long time horizon with only a relatively small fraction having been consumed to date. So, if MSFT were to develop their own GPT4 model in house over the next year or so they could in theory back out of their investment with most of it intact.
Although IMO MS has consistently been a technological tarpit. Whatever AI comes out of this arrangement will be a thin shadow of what it might have been.
That ChatGPT is censored to death is concerning, but I wonder if they really care or if they just need an excuse to offer a premium version of their product.
Now it’s a full bet for them on AGI.
For-profit means that money leaves the company, usually for investors.
This is amazing. His very first public statement is to criticize the board that just hired him.
Just a "normal" startup could have worked too (but apparently not big corp)
Edit: Hmm, the sibling comment says something else, I wonder if that makes sense
The exchange of services and goods in a market is positive-sum.
And if you separate out the products from OpenAI, that leaves the question of how an organization with extremely high compute and human capital costs can sustain itself.
Can OpenAI find more billionaire benefactors to support it so that it can return to its old operating model?
HN isn't the place to have the political debate you seem to want to have, so I will simply say that this is really sad that you equate "sharing" with USSR style communism. There is a huge middle ground between that and the trickle-down Reaganomics for which you seem to be advocating. We should have let that type of binary thinking die with the end of the Cold War.
Finger placed on duplicity.
Arguably only some of his time is spent on that kind of instability promoting activity. Most law enforcement agencies agree... Palantir good.
Most reasonable people agree... Funding your own senators and donating tons to Trump and friends... Bad.
Bad Thiel! Stick to weird seasteading in your spare time if you want to get weird. No zero-regulation AI floating compute unit seasteading. Only stable seasteading.
All kidding aside, you make a good point. Some of these guys should be a bit more responsible. They don't care what we think though. We're weird non-CEO hamsters who failed to make enough for the New Zealand bunker.
That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether what you requested is potentially immoral. This seemingly comes out of the notion that you are a moral person and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?
> I have a three point plan for the next 30 days:
> - Hire an independent investigator to dig into the entire process leading up to this point and generate a full report.
This looks like a CEO a bit different from many others? (in a good way I'm guessing, for the moment)
Mate... Just because you don't bat perfect doesn't make you a tarpit.
MSFT is a technological powerhouse. They have absolutely killed it since they were founded. They have defined personal computing for multiple generations and more or less made the word 'software' something spoken occasionally at kitchen tables vs people saying 'soft-what?'
Definitely not a tarpit. You are throwing out whole villages of babies because of some various nasty bathwater over the years.
The picture is bigger. So much crucial tech from MSFT. Remains true today.
It's not like he and Greg are brilliant mathematicians and coders that will sit down in a cubicle at Redmond and churn out code for AGI in six months.
If we hit the iceberg they will lose everything. Even if they're able to fly to their NZ hideout, it will already be robbed and occupied. The people that built and stocked their bunker will have formed a gang and confiscated all of his supplies. This is what happens in anarchy.
"... In the two years since the acquisition announcement, GitHub has reported a 41% increase in status page incidents. Furthermore, there has been a 97% increase in incident minutes, compared to the two years prior to the announcement..."
With all the love and respect in the world, who do you think you're talking about? Emmet Shear is not trans to my knowledge, (nor, I suspect, his knowledge). If you think this was about Mira Murati, you should really get up to date before telling people off about pronouns.
It seems like people forget that it was the investors’ money that made all this possible in the first place.
Are you perhaps referring to Mira Murati? She only lasted the weekend as interim CEO.
You'll just waste your time :)
Look, it's Microsoft's right to put any/all effort to making more money with their various practices.
It is our right to buy a Win10 Pro license for X amount of USD, then bolt down the ** out of it with the myriad of privacy tools to protect ourselves and have a "better Win7 Pro OS".
MS has always tried and will always try to play the game of getting more control, making more money, collecting more telemetry, doing clean and dirty things until they get caught. Welcome to the human condition. MS employees are humans. MS shareholders are also humans.
As for Windows Update, I don't think I've updated the core version at all since I installed it, and I am using WuMgr and WAU Manager (both portables) for very selective security updates.
It's a game. If you are a former sys-admin or a technical person, then you avoid their traps. If you are not, then the machine will chew your data, just like Google Analytics, AdMob, and so many others do.
Side-note: never update apps when they work 'alright', chances are you will regret it.
I realise it's strange to be claiming that a for-profit company is more likely to share AI than a nonprofit with "Open" in their name, yet that is the situation right now
For all we know, OpenAI may actually achieve AGI, and Microsoft will still want a front row seat in case that happens.
Your understanding is incorrect. There are some exceptions where noncompetes are allowed in California, but they mostly involve the sale or dissolution of business entities as such. There is no exception for executives, and none for people who happen to have equity stakes of any size.
Starting as a Non-Profit, naming it "Open" (the implication of the term Open in software is radically different from how they operate) etc. Now seems entirely driven by marketing and fiscal concerns. Feels like a bait and switch almost.
Meanwhile there's a whole strategy around regulatory capture going on, cloaked in humanitarian and security concerns which are almost entirely speculative. Again, if we put our cynical hat on or simply follow the money, it seems like the whole narrative around AI safety (etc.) that is perpetuated by these people is FUD (towards lawmakers) and serves to inflate what AI actually can do (towards investors).
It's very hard for me right now not to see these actions as part of a Machiavellian strategy that is entirely focused on power, while it adorns itself with ethical concerns.
The current situation created a mess at OpenAI which should slow it down and permanently damage its reputation somewhat. If I were Google and could choose either outcome, that's the outcome I would have chosen.
Human welfare is the domain of politics, not the economic system. The forces that are supposed to inject human welfare into economic decisions are the state through regulation, employees through negotiation and unions and civil society through the press.
Gpt4 is included in bing man... Bing creative mode and balanced mode both.
This is widely known. The investment included access to openai technology for integration in msft services.
It's not a traditional arrangement. This is also widely known. It's a complicated investment with a profitability sunset triggering return of equity to the nonprofit. Also included is technology transfer as long as the sunset doesn't trigger.
This is why Ilya felt comfortable to do it. He did many interviews where he explained this.
- https://www.nytimes.com/2021/05/16/business/bill-melinda-gat...
- https://www.popularmechanics.com/science/environment/a425435...
And as for what I want to do with it, no I don't plan to do anything I consider immoral. Surely that's true of almost everyone's actions almost all the time, almost by definition?
The story would be much more interesting if an AI had actually fired him.
Did OpenAI and others pay for the training data from Stack Overflow, Twitter, Reddit, Github etc. Or any other source produced by mankind?
Thanks for the breakdown. Unfortunately, you have just made things too saucy for me to take that advice :-)
Also, the rumors and machinations are a pretty big part of this story.
This is obviously a power struggle, for control over, potentially, the highest potential company/technology/IP of the current moment.
Power structure in the modern corporate/tech space.. it has become normal to charter a company such that ownership and control are effectively separate... call it overriding the defaults of incorporation and company law.
FB and Tesla are the big publicly traded examples. OpenAI is the most significant private example. It is also illegible, at least to me, considering the structural complexity. Non-profit, for-profit & capped-profit entities in a subsidiary loop. Separate arrangements for ownership, control, and sometimes IP across the mesh of entities...
Openai is like some abstract theory of company law..
For Tesla and FB, the CEO is central to the paradigm. Barring crisis, Zuck or Elon's control over FB & TSLA just is. They have cash flows, market caps to protect. Ongoing operations. Shareholders have no real interest pursuing shareholder control or any kind of coups.
OpenAI.. totally different game.
The IP (protected or otherwise), technology, team, momentum... These are all that matter. Product and revenue.. direct financial return on investments, and such.. these are not driving factors. Not for msft or other parties. Rare.
Everyone just wants to leverage OpenAI's success, to compete with their own partners. Mutual benefit? It's dubious right now.
This is not the www or most other tech/science consortiums.. imo. It's not about pooling resources, pushing the industry forward or going beyond the blue-sky scope of individual company r&d.
It may have been that initially, but that changed with gpt3.
There's no point in being the bing to Google's AdWords... And that's the kind of game it is now.
So.. there is a ton of very interesting stuff going on here. My ears are certainly pricked.
Absolutely agree on the need to completely change views as this saga progresses. None of these dogs are mine.
I personally expect the chat.openai.com site to just become a redirect to copilot.microsoft.com.
is all I'm saying. And I'm not interested in political debates. Neither the right nor the left side is good in the long run. We have examples. Moreover, we can predict what happens if...
My uneducated guess is that OpenAI really screwed up the PR part and the current Microsoft’s claims are more on the overall damage control / fire suppression side.
I’m not saying that’s definitely the case, but moving slowly when you live in a universe that might hurl a giant rock at you any minute doesn’t seem like a great idea.
For one, I'm not sure Sam Altman will tolerate MS bureaucracy for very long.
But secondly, the new MS-AI entity presumably can't just take from OpenAI what they did there; they need to make it again.
This takes a lot of resources (that MS has) but also a lot of time to provide feedback to the models; also, copyright issues regarding source materials are more sensitive today, and people are more attuned to them: Microsoft will have a harder time playing fast and loose with that today than OpenAI did 8 years ago.
Or, Sam at MS becomes OpenAI's biggest customer? But in that case, what are all those researchers and top scientists who followed him there going to do?
Interesting times in any case.
at least none of their software actually works
Microsoft Skynet would be rebooting every 15 minutes for updates
I’ve always thought that what OpenAI was purporting to do -- “protect” humanity from bad things that AI could do to it -- was a fool’s errand under a Capitalist system, what with the coercive law of competition and all.
(I thought also an interim CEO would be there more than a few days, and hadn't stored the name in my mind)
GitHub Actions is basically Azure Pipelines repackaged with a different UI, so I don't think they mind much.
The guy was born in the cloud compute division.
The board saw cloud compute was gonna be big. They made him the king. Good bet. The whole company went all in on cloud. Now they print more money than before.
Marketing person lol. He's an engineer. The guy literally gets back into VS code sometimes to stay in touch.
Microsoft still has to deal with OpenAI as an entity to keep the existing set up intact. The new team has to kinda start from zero. Right?
How so? I don't get the hype.
OpenAI trained truly ground breaking models that were miles ahead of anything the world had seen before. Everything else was really just a side show. Their marketing efforts were, at best, average. They called their flagship product "ChatGPT", a term that might resonate with AI scientists but appears as a random string of letters to the average person. They had no mobile app for a long time. Their web app had some major bugs.
Maybe Sam Altman deserves credit for attracting talent and capital, I don't know. But it seems to me that OpenAI's success by and large hinges on their game-changing models. And by extension, the bulk of the credit goes to their AI research/tech teams.
I mean let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it. And quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?
Since it would be strange to just state a (rather obvious) fact, it appeared, and appears, that you are arguing that the desire to not be constrained by OpenAI's version of morals could only be down to desires that most of us would indeed consider immoral. However, your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur.
For business and for the consumer. They can retire Bing search at this point, making it Microsoft Copilot for Web or something.
The “Open” types, ironically, wanted to keep LLMs hidden away from the public (something something religious AGI hysteria). These are the people who think they know better than you, and that we should centralize control with them for our own safety (see also, communism).
The evil profit motive you’re complaining about is what democratized this tech and brought it to the masses in a form that is useful to them.
The “cash grab show” is the only incentive that has been proven to make people do useful things for the masses. Otherwise, it’s just way too tempting to hide in ivory towers and spend your days fantasizing about philosophical nonsense.
Bill missing the whole web thing was more about their lawsuit, because regulators believed that only through the browser on Windows could people access the internet. Which was a wrong prediction.
And Ballmer... Yeah. He fumbled hard with mobile. And thanks to the board for stopping him from buying Yahoo. That would have been another AT&T merger fiasco.
You all know who you are cheering for right? It seems that profits or potential profits is all that matter here in the end for this community and the high-minded “OpenAI should be ‘Open’” was all bullshit.
I know this comment is going against the grain but I find the HN response to this (and previous responses to Altman’s firing, treating him like a god) to be quite disgusting.
Apple fanboys don’t have anything on the top comments here.
People have gotten into their heads that researchers are good and corporations are bad in every case which is simply not true. OpenAI's mission is worse for humanity than Microsoft's.
name a utopian fiction that has corporations as benefactors to humanity
https://pbs.twimg.com/media/Ff8RCKwUcAEkWk_?format=jpg&name=...
If you look at the charts of revenue streams, Microsoft is the most diversified in that regard, because basically every branch of Microsoft produces a similar amount of revenue.
With Xbox getting Activision, it lifts More Personal Computing up to a level comparable to the other streams (and even higher than Windows).
I don't know what you mean? having lots of HN posts about you doesn't show leadership, it at most shows fanboyism amongst HN posters.
Source please? This just keeps getting repeated but there’s extremely limited public support and neither Sam’s nor the board’s decisions indicate he has a whole lot of leverage.
Now you have to apply in writing to Microsoft with a justification for having access to an uncensored API.
If Microsoft came up with a way of making a trillion dollars in profit by enslaving half the planet, it kinda has to do it.
the bottleneck right now is mostly compute I think, and openai does not have the resources or expertise to alleviate that bottleneck on a timescale that can save them.
There are probably loads of ways you can make language models with 100M parameters more efficient, but most of them won't scale to models with 100B parameters.
IIRC there is a bit of a phase transition that happens around 7B parameters where the distribution of activations changes qualitatively.
Anthropic have interpretability papers where their method does not work for 'small' models (with ~5B parameters) but works great for models with >50B parameters.
What an AI would almost certainly tell you is that building an atomic bomb is no joke, even if you have access to a nuclear reactor, have the budget of a nation-state, and can direct an entire team of trained nuclear physicists to work on the project for years.
Next thing you'll be concerned about toddlers launching lasers into orbit and dominating the Earth from space.
Now they get 40 percent of OpenAI talent and 50 percent of the for-profit OpenAI subsidiary.
Pretty sure when the market opens you'll see confirmation that they came out on top.
It's a win for everyone honestly. Anthropic split all over again but this time the progressives got pushed out vs the conservatives leaving voluntarily.
They couldn't play nice under one tent. Now, two tents.
Little diff because this time an investor with special privileges quickly made a new special tent to bag the talent.
Easy decision for msft. No talent to competitors. Small talent pool. The other big boys were already all over that. Salty bosses at other outfits. No poach for them. Satya too clever and brought the checkbook plus already courted the cutest girls earlier for a different dance. Hell he was assisting in the negotiation when the old dance got all rough and the jets started throwing hooks about safety and scale and bla bla we all know the story.
Satya hunts with an elephant gun with one of those laser sights and the auto trigger that fires automatically when the crosshair goes over the target. Rip sundar. 2 rounds for satya. One more and I feel bad for Google... Naw... Couldn't feel bad for Google. Punchable outfit. They do punchable things. We all know it... I'm just saying it.
Interesting to note how much of this is driven by individual billionaire humans being hung up on stuff like ketamine. I'm given to understand numerous high-ranking Nazis were hung up on amphetamines. Humans like to try and make themselves into deities, by basically hitting themselves in the brain with rocks.
Doesn't end well.
You know what is an even bigger temptation to people than money - power. And being a high priest for some “god” controlling access from the unwashed masses who might use it for “bad” is a really heady dose of power.
This safety argument was used to justify monarchy, illiteracy, religious coercion.
There is a much greater chance of AI getting locked away from normal people by a non-profit on a power trip, rather than by a corporation looking to maximize profit.
That's kind of the point, we all do. What is harder to understand are the low stakes whims of academics bickering over their fiefdoms.
This move is bringing the incentives back to a normal and understood paradigm. And as a user of AI, will likely lead to better, quicker, and less hamstringed products and should be in our benefit.
Really, all corporations are evil, and they are all made of humans that look the other way, because everyone needs that pay check to eat.
And on the sliding scale of evil, there are a lot of more evil ones. Like BP, pharma cos, Union Carbide, etc... etc...
the problem with eugenics isn't that we can't control population and genetic expression, it's that genetic expression is a fractal landscape that's not predictable from human-stated goals.
the ethics of doing things "because you meant well" is well established as not enough.
All the other somewhat reliable sources do not have him as one.
So what is your source for your assertion?
Not only that, it's a blindered take on what human opinion is. Humans are killer apes AND cooperative, practically eusocial apes. Failing to understand both horns of that dilemma is a serious mistake.
the people who'll be in power then will still rely on the basics: violence, the means of production, and more violence.
which they know, and so they are basically planning dystopian police states.
The dataset is more challenging, but here msft can help, since they have bing and github as well. So they might be able to take a few shortcuts here.
The most time consuming part is compute, but here again msft has the compute.
Will they beat GPT-4 in a year? Guess no. But they will come very close to it, and maybe it would not matter that much if you focus on the product.
unfortunately, people are flawed.
see, what exactly is insurance at the billionaire level?
Basically what you see these days is PR teams together with legal teams acting like the individual that hired them. There are exceptions, but they are outliers in, say, Trump style, not these billionaires. Same, heck even more, for politics.
It can be easily transferred into personal or professional relationships. For me at least, this analysis works 100% of the time when, for example, the rest of the friends or family struggle hard to understand the actions of some individual. Just point them to their previous actions and watch the consistency emerge. This is how you can easily work with various people if you are smart but lack social skills: just observe actions and ignore the blah.
People simply don't change; they may reflect change in their environment, but that's it. Unless we're talking about 2+ decades since the last encounter, but even then it may just be more polished PR.
Rolling over, covering head with blanket. 'Surely the dystopian future, rich cleansing the world, is still a few decades away, just need a little more sleepy time'.
That's a slightly flamboyant reading.. but I agree with the gist.
A slim chance of a total write-off.. that was always the case. This decision does not affect it much. The place in the risk model where most of the action happens is in less dramatic effects on the more likely bands of the probability curve.
Msft cannot be kicked off the team. They still have all of the rights to their openai investment no matter who the CEO is.
Meanwhile, msft is clearly competing, participating, and doing business with openai. The hierarchy of paradigms is flexible... Competing appears to have won.
I agree that direct financial returns are the lesser part of the investment case for msft and the other participants. That's pretty much standard in consortium-like ventures.
At the base level, openai's IP is still largely science, unpatentable know-how, and key people. Msft have some access (I assume) to OpenAI's defensible IP via their participation in the consortium, or their 49% ownership of the for-profit entity. Meanwhile, openai is not so far ahead that pacing them from a dead start is impossible.
I also agree, that this represents a decision to launch ahead aggressively in the generative AI space.
In the latter 2000s, Google had the competence, technology, resources and momentum to smash anyone else on anything worldwideWeb.
They won all the "races." Google have never been good at turning wins into businesses, but they did acquire the wins handily. Microsoft wants to be that for the 2020s.
Able to replicate everything, for the new paradigm OpenAI's achievements probably represent.
The AI spreadsheet. The LLM email client. GPT search. Autobot jira. Literally and proverbially.
At least in theory... Microsoft is or will be in a position to start executing on all of these.
Sama, if he's actually motivated to do this.. he's pretty much the ideal person on planet earth for that task.
I'm sure it takes a lot to motivate him. Otoh, CEO of Microsoft is a realistic prize if he wins this game. The man is basically Microsoft the person. I mean that as a compliment.. sort of.
One way or another, I expect that implementing OpenAI-ish models in applications is about to commence.
Companies have been pleading for chatbot customer support for years. They may get it soon, but so will the customers. That makes for a whole new thing in the place where customer support used to exist. At least, that is the bull case.
That said, I have said a lot. All speculative. All probabilistic, even where my speculations are correct. These are not really predictions. I'm chewing the cud.
AI is just another product by another corporation. If I get to benefit from the technology while the company that offers it also makes profit, that’s fine, I think? There wasn’t publicly available AI until someone decided to sell it.
It's a cliche, but it's true: actions speak louder than words.
You are the things you do, not the things you say you want to be.
MSR leadership is probably a little shaken at the moment.
To some extent human societies viewed as eusocial organisms are better at this than individual humans. And rightly so, because human follies can have catastrophic effects on the society/organism.
I know about a man who turned a country upside down while "having the people's best interests" in mind.
No.
It comes from the notion that YOU don't get to decide what MY morals should be. Nor do I get to decide what yours should be.
> But what happens when immoral people use the system?
Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.
Please fuckin don't. I do not want yet another entity to tell me how to live my life.
As for your actual question, it seems to me that a straw is topologically equivalent to a torus, so it has 1 hole, right?
people like Steve Jobs are the best example of flawed logic. in the face of a completely different set of heuristic and logical information, he assumed he was just as capable, and chose fruit smoothies over more efficacious and proven medication.
they absolutely, like Jobs, are playing a game they think they fully understand, and are just as likely to choose their medicine the way Jobs did
just watch Elon and everything he's choosing to do.
these people are all normal but society has given them a deadly amount of leverage without any specific training.
Someone who could qualify to go to both Harvard and MIT will be better at anything they set their mind to than the regular grad with four years of education, after the said four years.
sure, we should have competitive bodies seeking better means to ends but ultimately there's always going to be a structure to hold them accountable.
people have a lot of faith that money is the best fitness function for humanity.
Also companies, especially public companies, are typically mandated by law to prioritize profit.
Ilya might be a genius, but he's not the only genius that OpenAI had.
A few CEOs' great-grandchildren will probably have to write about how very, very sad they are that their long-forgotten relatives destroyed most of the planet, and how they're just so lucky to be among the few still living a luxurious life somewhere in the Solomon Islands.
"Humanity's interest at heart" is a mouthful. I'm not denigrating it. I think it is really important.
That said, as a proverbial human... I am not hanging my hat on that charter. Members of the consortium all also claim to be serving the common good in their other ventures. So do Exxon.
OpenAI haven't created, or even articulated, a coherent, legible, and believable model for enshrining humanity's interests. The corporate structure flowchart of nonprofit, LLCs, and such.. it is not anywhere near sufficient.
OpenAI in no way belongs to humanity. Not rhetorically, legally or in practice... currently.
I'm all for efforts to prevent these new technologies from being stolen from humanity, controlled monopolistically... From moderate to radical ideas, I'm all ears.
What happened to the human consortium that was the worldwideWeb, gnu, and descendant projects like Wikipedia... That was moral theft, imo. I am for any effort to avoid a repeat. OpenAI is not such an effort, as far as I can tell.
If it is, it's not too late. Open AI haven't betrayed the generous reading of the mission in charter. They just haven't taken hard steps to achieving it. Instead, they have left things open, and I think the more realistic take is the default one.
in America, nonprofits are just how rich people run around trying to get tax avoidance, plaudits, and now wealth transfers.
I doubt OpenAI is different. Not that Altman is anything but a figurehead.
but nonprofits in America are how the government has chosen to neglect its duties.
I sense a lot of respect and appreciation for his role, but unfortunately I just don’t know many details and I’m curious about the highlights.
go read musks public statements leading up to Twitter purchase.
it's pretty clear they were fired because of profit motives. that's all I hear.
Agreed, and we're also bad at being told what to do. Especially when someone says they know better than us.
What we are extremely good at is adaptation and technological advancement. Since we know this already, why do we try to stop or slow progress?
They were, at no time, under any obligation to do anything except what they wanted and no one could force them otherwise. They held all the cards. The tech media instead ran with gossip supplied by VCs and printed that as news. They were all going to resign 8 hours after their decision. Really? Mass resignations were coming. Really? OpenAI is a 700 people company, 3 people have resigned in solidarity with Altman and Brockman at the time.
Sam had no leverage. Microsoft and other investors had little leverage. Reading the news you’d think otherwise.
It's interesting that "Effective Altruism" enthusiasts all seem to be mega-rich grifters.
What you describe is indeed the liberal (as in liberalism) ideal of how societies should be structured. But what is supposed to happen is not necessarily what actually happens.
The state should be controlled by the population through democracy, but few would claim with a straight face that the economic power doesn't influence the state.
"I am not into money or power man, I just want to be a good person and save the world man"
I just can't believe such simplistic, transparent, bullshit works so consistently as to become standard PR.
Or at least the most hyped AI team in the world. The level of cult of personality around OpenAI is reaching pretty nauseating levels.
it's gambling, pure and simple.
It is a good thing that society has mechanisms to at least try and control the rate of progress.
Altman just forced everyone's hand by releasing it into the world at cost.
Gotcha! We can both come up with absurd examples.
Nobody is telling you how to live your life, unless your life's goal is to erect Skynet.
See openai investment with technology transfers and sunset clauses. They just did a new dance.
They'll probably do something special for these guys.
They would never be employees. That's for non-Sam-Altmans and non-Brockmans. Brockman is probably already a billionaire from OpenAI shares. No employees here. Big boys.
Not doing that would be participating in illegal wage suppression. I'm not sure how following the law means OpenAI and MSFT can't continue a business relationship.
Exhibit A: this weekend, lol.
If we use the standard of the alignment folks, the technology today doesn't even have to be the danger; an imaginary technology that could someday be built might be the danger. And we don't even have to articulate clearly how it's a danger; we can just postulate the possibility. Then all technology becomes suspect and needs a priest class to decide what access the population can have, for fear of risking doomsday.
Maybe they got funding for a proper incident team? Or changed the definition of what an incident is? Maybe the SLAs changed to mirror MS SLAs?
Also Betteridge's law.
Buddhists die in the Armageddon same as others.
The bunkers are in New Zealand, which is an island and less likely to fall into chaos along with the rest of the world in the event of WW3 and/or moderate nuclear events.
I'm sure the bunkers are nice. Material notions have little to do with it. The bunkers aren't filled with Ferraris; they are filled with food, a few copies of the internet, and probably weird sperm banks or who knows what for repopulating the earth with Altmans and Thiels.
Even if he does nothing, he keeps the team together and that is worth quite a bit.
>If I get to benefit from the technology while the company that offers it also makes profit, that's fine.
What if you don't benefit because you lose your job to AI or have to deal with the mess created by real looking disinformation created by AI?
It was already bad with fake images out of ARMA, but with AI we get a whole new level of fakes.
Windows got massively worse during his tenure in literally everything that can get worse, including half-legal snooping on all users, Enterprise ones included (I stand by the statement that this is an idiotic long-term strategy driven by childish emotions like FOMO; no way he didn't have a direct say in this).
Office is certainly a PITA and getting worse in my experience, but that could be down to the corporate modifications/restrictions I am exposed to.
Teams was, is and probably forever will be pathetic, buggy, slow and just a bad joke compared to some competition with 1% of their budget.
These are core, extremely visible products, and for most of mankind they are 100% of their surface area with MS. There is not even an attempt at correction; the direction is set and the rest are details.
I'm no embarrassed billionaire, but there is a place for both.
My friends and family had an awful opinion of AI in general because voice assistants were sold to them as the best example of AI. That changed with ChatGPT.
Google invented really useful AI but failed to deliver. OpenAI delivered in record time. Now it's Google that's playing catch-up with the technology they invented themselves, ironically.
But my comment applies more to Microsoft and Amazon, tbh.
The pain is real :(
"You use Windows because it is the only OS you know. I use Windows because it is the only OS you know."
Yes directly, the $10B investment in the company itself may be a write off. But it's not just about that.
> A straw has one hole that runs through its entire length.
Sama on X said as of late 2022 they were single digit pennies per query and dropping
As to the cat-and-mouse with jailbreakers, I don't remember any thorough articles or videos; it's mostly based on discussions on LLM forums. Claude is widely regarded as one of the best models for NSFW roleplay, which completely invalidates Anthropic's claims about safety and alignment being "solved."
Gates keeps repeating it. No one hears it.
Did he say that before or after Microsoft announced they'd hired Altman and Brockman, and poached a lot of OpenAI's top researchers?
Everything points to this being a haphazard change that’s clumsy at best.
The original 2019 deal was described as:
> Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies
> OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence
> Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies
The $10 billion deal was probably not making a ton of money for MSFT, as it was structured as 75% of profits (which are easy to get rid of) until the investment is repaid, after which they get 49% of the company.
Can you explain why MSFT would spend $10 B for either of these things if they just got OpenAI’s IP?
"Europe is falling behind" very much depends on your metrics. I guess on HN it's technological innovation, but for most people the metric would be quality of life, happiness, liveability etc. and Europe's left-leaning approach is doing very nicely in that regard; better than the US.
This massively increases the odds we’ll see AI regulated. That isn’t what Altman et al intended with their national press tour—the goal was to talk up the tech. But it should be good in the long run.
I also assume there will be litigation about what Sam et al can bring with them, and what they cannot.
Indeed, I think trying to do it that way increases the risk that the single private organization captures its regulators and ends up without effective oversight. To put it bluntly: I think it's going to be easier, politically, to regulate this technology with it being a battle between Microsoft, Meta, and Google all focused on commercial applications, than with the clearly dominant organization being a nonprofit that is supposedly altruistic and self-regulating.
I have sympathy for people who think that all sounds like a bad outcome because they are skeptical of politics and trust the big brains at OpenAI more. But personally I think governments have the ultimate responsibility to look out for the interests of the societies they govern.
It doesn't take a cartoon supervillain to keep selling cigarettes like candy even though you know they increase cancer risks. Or for oil companies to keep producing oil and burying alternative energy sources. Or for the Sacklers to give us Oxy.
Being popular doesn't mean someone is efficient, or even has any merits apart from being able to become popular (Kardashians come to mind).
For many years, Microsoft Research had a reputation for giving researchers the most freedom. Probably even that's the reason why it hasn't been as successful as other bigcorp research labs.
As for Sam, he got more than he was asking for, and a better prospect of becoming CEO of Microsoft when Satya leaves. Satya led the cloud division, the industry's growth market at the time, before becoming CEO; now Sam is leading the AI division, the next growth market.
Ilya still lost in all of this: he managed to take back the keys of a city from Sam, who now has the keys to the whole country. Eventually Sam will pull everyone out of the city into the rest of his country. Microsoft just needs a few OpenAI employees to join them. They just need data and GPUs; OpenAI has reached its limits in getting more data and was begging for private data, while Microsoft holds the world's data. They can just offer businesses free Microsoft products in return for using their data, or use their own. I think it's the end for OpenAI.
It seems to me roughly all of the value of OpenAI’s products is in the model itself and presumably the supporting infrastructure, neither of which seem like they’re going to MSFT (yet?).
For Example, check out the proceedings of the AGI Conference that's been going on for 16 years. https://www.agi-conference.org/
I have faith in Ilya. He's not going to allow this blunder to define his reputation.
He's going to go all in on research to find something to replace Transformers, leaving everyone else in the dust.
If this is how it plays out, OpenAI's board will be famous for decades to come for their boneheaded handling of this situation.
Exxon was responsible for the oil spill response that coagulated the oil and sank it. They were surprisingly proud of this, having recommended it to BP so that the extent of leaked oil was less noticeable from the surface.
Exxon also invested heavily in an alternative energy company doing research to create oil from a certain type of algae. The investment was all a PR stunt that gave them enough leverage to shelve the research that was successful enough to be considered a threat.
1. OpenAI just got bumped up to my top address to apply to (if only I had the skills of a scientist; I am merely at engineer level). I want AGI to happen, and I can totally understand that the actual scientists don't really care for money or becoming a big company at all; that is more a burden than anything else for research speed. It doesn't matter that the "company OpenAI" implodes here, as long as they can pay their scientists and have access to compute, which they do.
2. Microsoft can quite seamlessly pick up the ball and commercialize GPTs like no tomorrow and without restraint. And while there are lots of bad things to say about Microsoft, reliable operations and support are something I trust them on more than most others, so if the OAI API is simply moved as-is to some MSFT infrastructure, that's a _good_ thing in my book.
3. Sam and his buddies are taken care of, because they are ultimately in it for the money, whereas the true researchers can stay at OpenAI. Working for Sam is now straightforward commercialization without the "open" shenanigans, and working for OpenAI can become the idealistic thing again that also attracts people.
4. Satya Nadella is becoming celebrated and MSFT shareholder value will eventually rise even further. They actually don't have any interest in "smashing OAI" but the new setup actually streamlines everything once the initial operational hurdles (including staffing) are solved.
5. We outsiders end up with an OpenAI research org focused purely on AGI (<3), and some product team selling all the steps along the way to us, but with more professionalism in operations (<3).
6. I am really waiting for when Tim Cook announces anything about this topic in general. Never ever underestimate Apple, especially when there is radio silence, and when the first movers in a field have fired their shots already.
The money was promised in tranches, and probably much of it in the form of spare Azure capacity. Microsoft did not hand OpenAI a $10B check.
Satya gives away something he had excess of, and gets 75% of the profits that result from its use, and half of the resulting company. Gives him an excuse to hoard Nvidia GPUs.
If it goes to the moon he’s way up. If it dies he’s down only a fraction of the $10B. If it meanders along his costs are somewhat offset, and presumably he can exit at some point.
You're talking about investors and shareholders like they're just machines that only ever prioritize profit. That's just obviously not true.
That may have been the leverage Microsoft and other investors tried to use, but OpenAI leadership thinks it won't happen. We'll see what unfolds.
MS just wants to integrate AI into their junk enterprise tools. Hobbyists and small businesses could be left out?
They won’t necessarily be able to attract similar technical talent, because they no longer have the open non-profit mission, nor the lottery-ticket startup PPU shares.
Working on AI at Microsoft was always an option even before they were hired, not sure if they tip the scale?
Um, have you heard of lead additives to gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology in the face of an economic incentive to ignore or minimize "externalities" and long-term risk for short-term gain.
The idea of having a non-profit, with an explicit mandate to use to pursue the benefit of all mankind, be the first one to achieve the next levels of technology was at least worth a shot. OpenAI's existence doesn't stop other companies from pursuing technology, nor does it prevent governments doing coordination. But it at least gives a chance that a potentially dangerous technology will go in the right direction.
I don’t think Sam or Greg have it in them to build a competing AI model suite, let alone inside a bureaucracy like Microsoft.
I think this is exactly what OpenAI wanted - get the business types out and focus on building brilliant models which asymptotically approach AGI whose safety and ethicality they can guarantee.
Part of me thinks that Nadella, having already demonstrated his mastery over all his competitor CEOs with one deft move after another over the past few years, took this on because he needed a new challenge.
I'd wager Altman will either get sidelined and pushed out, or become Nadella's successor, over the course of the next decade or so.
It's an interesting time!
A non-profit is not by any means guaranteed to avoid the dangers of AI. But at a minimum it will avoid the greed-driven myopia that seems to be the default when companies are beholden to Wall Street shareholders.
My guess is that a lot of the people who will follow Sam and Greg are that kind of cult-follower.
There must be an Aesop’s fable that sheds light on the “tragedy”.
https://www.goodreads.com/quotes/923989-if-you-choose-bad-co...
Or maybe this one? (Ape seems to map to Microsoft, or possibly a hat tip to Balmer ..)
The fable is of the Two Travellers and the Apes.
Two men, one who always spoke the truth and the other who told nothing but lies, were traveling together and by chance came to the land of Apes. One of the Apes, who had raised himself to be king, commanded them to be seized and brought before him, that he might know what was said of him among men. He ordered at the same time that all the Apes be arranged in a long row on his right hand and on his left, and that a throne be placed for him, as was the custom among men.
After these preparations, he signified that the two men should be brought before him, and greeted them with this salutation: “What sort of a king do I seem to you to be, O strangers?’ The Lying Traveller replied, “You seem to me a most mighty king.” “And what is your estimate of those you see around me?’ “These,” he made answer, “are worthy companions of yourself, fit at least to be ambassadors and leaders of armies”. The Ape and all his court, gratified with the lie, commanded that a handsome present be given to the flatterer.
On this the truthful Traveller thought to himself, “If so great a reward be given for a lie, with what gift may not I be rewarded if, according to my custom, I tell the truth?’ The Ape quickly turned to him. “And pray how do I and these my friends around me seem to you?’ “Thou art,” he said, “a most excellent Ape, and all these thy companions after thy example are excellent Apes too.” The King of the Apes, enraged at hearing these truths, gave him over to the teeth and claws of his companions.
The end.
Chatgpt is a big part of my workflow. (And maybe my best friend?). What happens now?
It seems like a cult right now, tbh.
Nah it would make it too understandable. It's Microsoft, they'll just rename Bing to Cortana Series X 365. And they'll keep Cortana alive but as a totally different product.
This one's not right - Altman famously had no equity in OpenAI. When asked by Congress he said he makes enough to pay for health insurance. It's pretty clear Sam wants to advance the state of AI quickly and is using commercialization as a tool to do that.
Otherwise I generally agree with you (except for maybe #2 - they had the right to commercialize GPTs anyway as part of the prior funding).
Engineers aren’t a lower level than scientists, it’s just a different career path.
Scientists generate lots of ideas in controlled environments and engineers work to make those ideas work in the wild real world.
Both are difficult and important in their own right.
As long as you ignore externalities, yes.
This is a real possibility and something I'm sure Ilya and the board thought through. Here's my guess:
- There's been a culture rift within OpenAI as it scaled up its hiring. The people who have joined may not have all been mission driven and shared the same values. They may have been there because of the valuation and attention the company was receiving. These people will leave and join Altman or another company. This is seen as a net good by the board.
- There's always been a sect of researchers who were suspicious of OpenAI because of its odd governance structure and commercialization. These people now have clear evidence that the company stands for what it states and are MORE likely to join. This is what the board wants.
> If investors decide not to give money to OpenAI because their leadership comes across as over their heads, how will they continue running?
I don't think this is an actual problem. Anthropic can secure funding just fine. Emmett is an ex-Amazon/AWS executive. There's a possibility that AWS becomes the partner providing compute, in exchange for OpenAI's models being exclusively offered as part of Amazon Bedrock, for example, if this issue with Microsoft festers. And Microsoft surely sees the clear warning: we can go to AWS if you push us too hard here.
I don't see how the partnership with MSFT isn't dissolved in some way in the coming week as Altman and co. openly try to poach OpenAI talent. And again, maybe dissolving the MSFT ties was something the board wanted. It's hard to imagine they didn't think it was a possibility given the way they handled announcing this on Friday, and it's hard to imagine it wasn't intentional.
Some people downvote (it's not about the points), but I'm merely stating the reality, not my opinions.
I've made my living as a sys-admin early in my career using MS products, so thank you MS for putting food on my table. But this doesn't negate the dirty games/dark patterns/etc.
For a mathematician, yes. For everyone else, it obviously has two, because when you plug one end, only then does it have one.
Google has been hyping Gemini since the spring (and not delivering it).
Amazon's Titan Model is not quite there yet.
But I think it is probably sufficient to point to the language in the contracts granting illiquid equity instruments that explicitly say that the grantee should not have any expectation of a return.
But I think this is an actual problem with the legal structure of how our industry is financed! But it's not clear to me what a good solution would even be. Without the ability to compensate people with lottery tickets, it would just be even more irrational for anyone to work anywhere besides the big public companies with liquid stock. And that would be a real shame.
It's ironic because the only AI that doesn't have "pesky ethics qualms" is... literally the entire open source scene, all of the models on Hugging Face, etc...
The megacorps are the only place where safety and security are happening in AI. I can easily run open source models locally and create all manner of political propaganda; I could create p^rnography of celebrities and politicians, or deeply racist or bigoted material. The open source scene makes this trivial these days.
So to describe it as "Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms" when the open source scene has already done that today is just wild to me.
We have AI without ethics today, and every not-for-profit researcher, open source model and hacker with a cool script are behind it. If OpenAI goes back to being Open, they'll help supercharge the no-ethics AI reality of running models without corporate safety and ethics.
So basically they get to control ChatGPT 2.0 and get a $10 billion tax credit for it.
Honestly the board at least owes Satya a drink.
I assume GP is talking in context of OpenAI/general AI research, where you need a PhD to apply for the research scientist positions and MS/Bachelors to apply for research engineer positions afaik.
ChatGPT says "fuck" just fine.
If OpenAI gets back to actually publishing papers to everyone's benefit, that will be a huge win for humanity!
Credit to Nadella for making a big cultural shift over the past several years.
I think the predictable thing would have been a new company with new investment from Microsoft. But this is better; it's a bit like magical thinking that MS would want to throw more money at a new venture and essentially write off the old one. This solution accomplishes similar things, but gives more to Microsoft in the trade by bringing that "new company" fully in house.
Excuse you? Greater where? Github was an amazing revolution, unique of its kind. Microsoft didn't kill it but didn't make it even 1% better for the users, just turned it into a cash cow. Linkedin is currently a PoS.
Most stock is not owned by individual persons (not that there aren't individuals who don't give a shit about enslaving people), but by other companies and institutions that by charter prioritize profit. E.g., Microsoft's institutional ownership is around 70%.
Recruiting. At the end of the day, that's the most important job a CEO has. If they can recruit the best AI people, they're the most formidable AI team.
> Are they going to pay then >$1M salaries?
I would wager very heavily that they are. My guess is Satya more or less promised Sam that he'd match comp for anybody who wants to leave OpenAI.
I do think it's funny how the Blockchain Consultants have become AI Consultants though.
Godwin's Law.
Also AGI will never happen IMO. I’m not credentialed. Have no real proof to back it up and won’t argue one way or the other with anyone, but deep down I just don’t believe it’s even physically possible for AGI. I’ll be shocked if it is, but until then I’m going to view any company with that set as its goal as a joke.
I don’t see a single thing wrong with Altman either, primarily because I never bought into the whole “open” story anyway.
And no, this isn’t sarcasm. I just think a lot of HN folks live with rosy-tinted glasses of “open” companies and “AGI that benefits humanity”. It’s all an illusion and if we ever somehow manage to generate AGI it WILL be the end of us as a species. There’s no doubt.
A PhD scientist may not be a good fit for an engineering job. Their degree doesn’t matter.
A PhD-having engineer might not be a good fit for a research job either… because it’s a different job.
https://www.ottingerlaw.com/blog/executives-should-not-ignor...
https://leginfo.legislature.ca.gov/faces/codes_displaySectio...
With billg missing the dawn of the Internet, I didn't mean the IE integration fiasco and the resulting lawsuit; that's actually the part they got more or less right (in their own perverted 3E approach, not according to my moral compass), but too late to become dominant. They first wasted time trying to create their own MSN walled garden à la CompuServe.
To Ballmer's credit he did start Azure, although it doesn't feel it was a serious enough effort, until he was replaced. But between Vista, Windows 8, Windows Mobile, Nokia, Skype, Zune, Kin, etc etc... it's no wonder it's been called Microsoft's lost decade.
Either way, based on many CEOs' track records, healthy skepticism should be involved; the majority of them find ways to profit from it at some point or another.
Much as LLM is essentially industrial strength gaslighting, so is the meta around it.
It's not so important. There's not much there. No it's not going to take your jobs.
I am old enough to remember not only the How Blockchain Is Solving World Hunger articles but the paperless office claims as well -- I was born within a few weeks of the publication of the (in)famous "The Office of the Future" article from BusinessWeek.
Didn't happen.
No, a plausible sentence generator is just that: the next hype.
In fact some of the hustlers behind it are the same as those who have hustled crypto. Someone got to hold the bag on that one but it wasn't the rich white techbros. So it'll be here. Once enough companies get burned when the stochastic parrot botches something badly enough to get a massive fine from a regulator or a devastating lawsuit, everyone will run for the hills. And again... it won't be the VCs holding the bag. Guess who will be. Guess why AI is so badly hyped.
If you think the ChatGPT release happening within a few weeks of the collapse of FTX is a coincidence I have ... well, not a bridge but an AI hype to sell to you and in fact you already bought it.
There's been a lot of uncertainty created.
It's interesting that others see so much "win" certainty.
Finally they got rid of this pesky idea of "safety". We're back in "break things" mode.
Does nobody recognize the stakes here? AGI, which soon would accelerate into something far more capable, ends civilization. I'm not saying it would kill us, I'm saying it makes us cognitively obsolete and all meaning is lost.
AI Safety isn't a micro bias in the training set. It's existential at planetary scale. Yet we let a bunch of cowboys just go "let's see what happens" with zero meaningful regulation in sight. And we applaud them.
I know AGI isn't here yet. I know Microsoft would not allow for zero safety. I'm just saying that on the road to AGI, about two dozen people are deciding our collective fate. With, as ultimate chief, the guy behind the shitcoin "Worldcoin".
With a poor security track record [0], miserable support for office 365 products and lack of transparency on issues in general, I doubt this is something to look forward to with Microsoft.
[0] https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_c...
The point is, you can't rely on a scenario where society breaks down and survivors somehow act more rationally than they do now.
Whether they actually move to MS or not remains to be seen, but it is definitely a strong indicator that they're not "aligned" with OpenAI anymore.
I am. In fact, the goal of any for-profit company is profit. If a CEO doesn't align with that goal, they get replaced. That's non-negotiable. A for-profit company without profit is a dead company.
OpenAI already runs all its infrastructure on Azure.
And despite the shittiness, even that 5% is doing great because their audience is now billions of mostly computer-illiterate people, who don't even have an opinion on the technical merits, the performance, the bugginess, the snooping, the feature gap, etc etc etc.
The opinion of a few million geeks, who are mostly not using Windows anyway (or whose only contact with anything Microsoft is due to their employers' choice of platform), doesn't ultimately matter much. Microsoft knows it, and they have no reason to change direction despite our frustration. Some better privacy law could nudge them; anything short of a legal directive won't go far.
Or so they say. I have no reason to trust them. It is not some little thing we are talking about
My bet is that OpenAI heard of this MSFT poach, Sam et al were not forthright and meanwhile were becoming very interested in the engineering details.
The GNU project and the Wikimedia Foundation are still non-profits today, and even if you disagree with their results, their goal is to serve humanity for free.
https://nitter.net/ilyasut/status/1726590052392956028
“I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.”
Nov 20, 2023 · 1:15 PM UTC - Ilya S.
They tried pressuring the board according to the rumors over the weekend. That didn’t work; now they are providing a home for the people that were going to leave OpenAI over this.
If they are smart they will honor the 10B deal as well and continue funding OpenAI. This way they will get the benefit of both teams and maybe even a bit more now that they will be competing.
With a bit of luck Amazon too. This space just really can’t become a monopoly
Most of those problems have been solved, or at least reduced, by regulation. Regulators aren't all-knowing gods, and risks and problems often only become apparent later, but except for smoking, regulators have covered those aspects (and anti-smoking laws generally become stricter over time, depending on the country; smoking is a cultural habit older than most states...).
It's not about wanting to destroy the world, but short term greed whose consequences destroy the world.
Not sure why you didn’t research before saying that! It was $10B committed, not a cash handover of that amount. Also, the majority of that is Azure credits.
I assumed it was their entertaining offers from Microsoft that got Sam the ax from the OpenAI board.
The true researchers will go to whoever pays them most. If OpenAI loses funding, they will go to Microsoft with Altman, or back to Google.
A journalist was car bombed in broad daylight.
If you push the wrong buttons of trillion-dollar corporations, they just off you and continue with business as usual.
If Microsoft sees trillions of dollars in ending all of your work, they’ll take it in a heart beat.
What I meant is: most likely, assuming you are using PyTorch/JAX, you could code up the model pretty fast. Just compare it to LLaMA; sure, it is far behind, but the LLaMA model is under 1,000 lines of code and pretty good.
There is tons of work for the training, infra, preparing the data and so on; that would, I'd guess, run into millions of lines of code. But the core ideas and the model itself are likely thin, I would argue. That is my point.
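To illustrate the "thin core" claim, here is a minimal sketch of single-head causal self-attention, the central operation of a LLaMA/GPT-style decoder, written in plain NumPy rather than PyTorch/JAX. The function and weight names are mine, and it omits batching, multi-head splitting, and normalization; it is only meant to show how few lines the core idea takes.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (T, d) sequence of token vectors."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv       # project tokens into queries/keys/values
    scores = (q @ k.T) / np.sqrt(d)        # scaled dot-product similarities
    mask = np.triu(np.ones((T, T), dtype=bool), 1)
    scores[mask] = -1e9                    # a token may not attend to future tokens
    return softmax(scores) @ v             # attention-weighted mix of value vectors
```

A full decoder block adds multi-head splitting, an MLP, residual connections, and normalization, but each of those pieces is similarly small; the bulk of the engineering effort sits in training, data preparation, and serving, as the comment above says.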
Do you think profit minded people and organizations aren't motivated by a desire for power? Removing one path to corruption doesn't mean I think it is impossible for a non-profit to become corrupted, but it is one less thing pulling them in that direction.
He talked recently about how he's been able to watch these huge leaps in human progress and what a privilege that is. I believe that - don't you think it would be insane and amazing to get to see everything OpenAI is doing from the inside? If you already have so much money that the incremental value of the next dollar you earn is effectively zero, is it unreasonable to think that a seat at the table in one of the most important endeavors in the history of our species is worth more than any amount of money you could earn?
And then on top of that, even if you take a cynical view of things, he's put himself in a position where he can see at least months ahead of where basically all of technology is going to go. You don't actually have to be a shareholder to derive an enormous amount of value from that. Less cynically, it puts you in a position to steer the world toward what you feel is best.
So MSFT still needs to compete with OpenAI - which will likely have an extremely adversarial relationship with MSFT if MSFT poaches nearly everyone.
What if OpenAI decides to partner with Anthropic and Google?
Doesn't seem like a win for MSFT at all.
Before that, the USSR collapsed under Gorbachev. Why? They simply lost with their planned economy, where nobody wants to take a risk, because (1) it's not rewarding, (2) no individual has enough resources, and (3) to get things moving you have to convince a lot of bureaucrats who don't want to take a risk. They moved forward thanks to a few exceptional people, but there weren't as many people willing to take a risk as in 'rotting' capitalism. I don't know why the leaders didn't see the Chinese way; probably they were busy with internal rat fights and didn't see what was in it for them.
My idea is that there are two extremes. On the left side, people can be happy like yogis, but they don't produce anything or move forward. On the right side is pure capitalism, which is inhuman. The optimum is somewhere in between, with good quality of life and fast progress. What happens when resources are shared too much and life is good? You can see it in Germany today: 80% of Ukrainian refugees don't work and don't want to.
Microsoft does service China with Bing, for example.
You should not sell OpenAI's tech to China or to Microsoft.
Especially after a DDOS by Sue Don and a change in billing.
[1] https://en.wikipedia.org/wiki/List_of_websites_blocked_in_ma...
I get your pessimism, but the same has been said about a lot of tech that did go on to change the world. Just because a lot of people made a lot of noise about previous tech that failed to come to anything doesn't mean this is the same thing; it's completely different tech.
A lot of OpenAI's products are out in the real world and I use them every day; I never touched crypto. Now maybe LLMs won't live up to the hype, but OpenAI's stuff has already been used in a lot of products, by millions of users, even Spotify.
'A plausible sentence generator is just that: the next hype' - Maybe, but AI goes far beyond LLMs, as do the products OpenAI produces.
Even if Sam deliberately provoked them and this was a setup, no normal person would be this obstinate about it. They would've given up by now if this was anybody's doing but their own.
I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or like this:
https://news.ycombinator.com/item?id=38344196&p=2
https://news.ycombinator.com/item?id=38344196&p=3
https://news.ycombinator.com/item?id=38344196&p=4
etc...
There's nothing wrong with your post! I just need to prune the heaviest subthreads—sorry!
GitHub is stronger now than it ever has been.
(an interesting fiction would be if all the AI companies agreed to combine their efforts to skip a level and advance the world to GPT-6, maybe through a mixture of experts model)
If I have a solar panel factory, and I sell you a panel, so you can make green electricity - isn't that good for you, me AND the planet?
Might be missing some more, but Satya is like an S-tier CEO, compared to Sundar, who doesn't seem very good at his role.
Probably safe to say Henry Ford had considerable power in Ford Motor Co compared to most executives today?
It's obvious that they have to redo less of the stack if they go to MSFT. At the very least, they already wrote everything to scale with Azure.
With respect to IP... my comment was mostly suggesting they could enjoy the privilege of leaving MSFT at some point in the future with the IP in hand.
How much of the source do they get, to avoid rewriting on day 1 at MSFT? No idea. Could be all of it... But again, at least they already scaled onto Azure's compute architecture and don't have to reinvent the wheel. That's not a small thing.
Not really debating it further. Seems really obvious to me that, broadly speaking, for all kinds of reasons (probably access to source included), they will be able to get up to speed substantially faster at MSFT vs anywhere else.
It's too speculative to be worth discussing in depth. We don't have enough details, but my broader assertion is more or less defensible IMO. Others might disagree. Not worth a debate IMO.
(edit: 'perpetual license to openai ip short of agi'
Not sure of the details. This is what I see being written.
https://stratechery.com/2023/openais-misalignment-and-micros...'
It's speculative. Others might disagree. I spoke to this in a comment above.
Your skepticism seems reasonable to me, but I think my broader point is defensible, though I just don't really care to go further with it. Now I'm reading that 85 percent of them have revolted, lol.
Maybe we meet again in the other post.
This is a win from Microsoft's perspective. They don't have to have the best group messenger around, but having a significant office product being dominated by another company would be a massive risk to Microsoft, and Teams has prevented that.
What I mean is that these were created as public goods and functioned as such. Each had a unique way of being open, spreading the value of their work as far as possible.
They were extraordinary. Incredible quality. Incredible power. Incredible ability to be built upon.. particularly the WWW.
All achieved things that simply could not have been achieved, by being a normal commercial venture.
Google, FB and co. essentially stole them. They built closed platforms atop open ones: built bridges between users and the public domain, and monopolized them like bridge trolls.
Considering how much a part of the culture a company like Google was 20 years ago, this is treason.
You aren't wrong that government regulation is not a great solution, but I believe it is - like democracy, and for the same reasons - the worst solution, except for all the others.
I don't disagree that using a non-profit to enforce self-regulation was "worth a shot", but I thought it was very unlikely to succeed at that goal, and indeed has been failing to succeed at that goal for a very long time. But I'm not mad at them for trying.
(I do think too many people used this as an excuse to argue against any government oversight by saying, "we don't need that, we have a self-regulating non-profit structure!", I think mostly cynically.)
> But it at least gives a chance that a potentially dangerous technology will go in the right direction.
I know you wrote this comment a full five hours ago and stuff has been moving quickly, but I think this needs to be in the past tense. It now appears clear that something approaching 90% of the OpenAI staff did not believe in this mission, and thus it was never going to work.
If you care about this, I think you need to be thinking about what else to pursue to give us that chance. I personally think government regulation is the only plausible option to pursue here, but I won't begrudge folks who want to keep trying more novel ideas.
(And FWIW, I don't personally share the humanity-destroying concerns people have; but I think regulation is almost always appropriate for big new technologies to some degree, and that this is no exception.)
I think it can be argued that giving free private repos to users is a 1% increase. Or what about private vulnerability reporting for open-source projects? And so on. GitHub has gotten a lot of new free functionality since Microsoft bought it. It sounds like you just haven't been paying attention.
Edit: Nevermind, I see you refer to Microsoft as M$. That really says it all.
Then they would be on roughly equal footing with Microsoft, since they'd have an abundance of engineers and a cloud partner. More or less what they just threw away, on a smaller scale and with less certain investors.
This is quite literally the best attainable outcome, at least from Microsoft's point of view. The uncertainty came from the board's boneheaded (and unrepresentative) choice to kick Sam out. Now the majority of engineers on both sides are calling foul on OpenAI and asking for their entire board to resign. Relative to the administrative hellfire that OpenAI now has to weather, Microsoft just pulled off the fastest merger of their career.
This sounds like hyperbole, but isn't that what China is doing?
Meanwhile Microsoft wins if OpenAI stays dominant and wins even bigger if Sam and Greg prevail. Some day soon they may teach this story at Harvard Business School.
That is not necessarily true. Your power is typically very dependent on your terms. A CEO who is also president of the board, and a majority shareholder, has far more power than a CEO who just stepped in temporarily and has only the powers provided by the by-laws.
Regardless, the solution to "I want to do something ethical that is not strictly in the company's best interest" is to make the case that it is the company's best interest. For example, "By investing in our employees we are actually prioritizing shareholder value". If you position it as "this is a move that hurts shareholders", of course that's illegal - companies have an obligation to every shareholder.
That also means that if you give your employees stock, they now have investor rights too. You can structure your company this way from the start, it's trivial and actually the norm in tech - stock is handed out to many employees.
While it can't plug-and-play replace an employee yet, in my experience at least, every dev I see now has it open on their second screen and sends it problems all day.
Comparing it to crypto and building that weird narrative you have is just not at all connected to the reality of what the product can actually do right now today.
https://www.forbes.com/sites/louiscolumbus/2019/01/06/micros...
Believing that OpenAI is MSFT's sole move in the AI space would be a serious error.
Context:
---------
1.1/ Ilya Sutskever and the board do not agree with Sam Altman's vision of a) too-fast commercialization of OpenAI and/or b) too-fast progression to a GPT-5 level.
1.2/ Sam Altman thinks fast iteration and commercialization are needed in order to make OpenAI financially viable, as it is burning too much cash, and to stay ahead of the competition.
1.3/ Microsoft, after investing $10+ billion, does not want this fight to slow the progress of AI commercialization and fall behind Google AI etc.
a workable solution:
--------------------
2.1/ @sama @gdb form a new AI company, let us call it e/acc Inc.
2.2/ e/acc Inc. raises $3 Billions as SAFE instrument from VCs who believed in Sam Altman's vision.
2.3/ Open AI and e/acc Inc. reach an agreement such that:
a) GPT-4 IP transferred to e/acc Inc., this IP transfer is valued as $8 Billion SAFE instrument investment from Open AI into e/acc Inc.
b) existing Microsoft's 49% share in Open AI is transferred to e/acc Inc., such that Microsoft owns 49% of e/acc Inc.
c) the resulting "lean and pure non-profit OpenAI" with Ilya Sutskever and the board can steer AI progress as they wish; their stake in e/acc Inc. will act as a funding source to cover their future research costs.
d) employees can move from OpenAI to e/acc Inc. as they wish, with no anti-poaching lawsuits from OpenAI.
In the name of safety, the board has gifted OAI to MS. Even Ilya wants to jump ship now that the ship is sinking (it'll be real interesting to see if Sama even lets him on board the MS money train).
Calling this a win for AI safety is ludicrous. OAI is dead in all but name; MS basically now owns 100% of OAI (the models, the source, and now the team) for pennies on the dollar.
Linkedin has not improved its problems with spam or content quality since Microsoft took over.
Not unreliable enough to be a problem though, and Actions seems to be a decent experience for plenty of people.
The simple fact with GitHub is that it is _the_ primary place to go looking for, or post your, open source code, and it is the go-to platform for the majority of companies looking for a solution to source code hosting.
Your comment about LinkedIn is true, but where is the nearest competition in its space?
Like some Googlers have mentioned: aside from GPU requirements, there isn't much else of a moat, since a lot of ML ideas are presented and debated relatively freely at NeurIPS, ICML and other venues.
And OAI delivered with enormous per-user cost that doesn’t scale - in an app that is a showcase and doesn’t really have latency requirements as people understand it’s a prototype.
And the vast majority of people just play with ChatGPT; they don't use it for anything useful. Incidental examples of friends and family of tech workers aside.
You are imagining I fall into a crowd you've observed. Maintaining state of the art, ofc, is a constant battle.
Google could be top dog in 2 weeks. Never insinuated otherwise. (though I predict otherwise, if we're gonna speculate)
It's not even relevant, because each big firm is specializing to a degree. Anthropic is going for context window and safety... Bard is all about Google priorities... Etc.
In semantic terms I agree.
The negative connotation is the baggage I bring. I rescind my implied criticism. Pursuit of power is not necessarily a bad thing. Perhaps I need to think on this.
The broader point is that considering short term personal financial gain is beneath an exec at Satya's level.
He has a responsibility to do more than just maximize value, though. Corporate values are a real thing, and MSFT has pretty clearly integrated them in various ways for a long time.
They pledged to capture all the carbon emitted going back to their founding, for example. What does that have to do with profit? Nada... outside of making folks feel less climate guilt when they buy a share. Now that... very clever for profit.
I really think you don't know what you are talking about. Delphi 7 was released in 2002 and you were "in high school in the early 2000s". We all love a good narrative, but yours has no base to belong to.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
Logical, because as far as I understand the conflict @openai, it is/was about conquering the market with cool products vs. advancing research for the greater good. So one option would have been to split the company into a product organization and a research organization. The only problem is that Microsoft is the product organization in this construct already (they bring the services to the masses via Bing).
Unexpected, because I didn't expect someone like Sam Altman to join the corporate world.
Looking forward I am very excited to see where this leads...
But being an engineer isn’t just a lesser form of being a researcher.
It’s not a “level” in that sense. Like OAI isn’t going to fire an engineer and replace them with a researcher.
> Even if you consider all externalities, trade is positive-sum.
even if the business pollutes the environment or contributes to climate change in an outlier way?
You're arguing a straw man. Which means your reading comprehension is bad or you're intentionally muddying the waters. Either way, I think we're good here.
and i'm not arguing anything. i'm asking a question. in which case, ironically, your reading comprehension missed that.