I agree that this solution seems beneficial for both Microsoft and Sam Altman, but it reflects poorly on society if we simply accept this version of the story without criticism.
Or being forced to use Teams and Azure, due to my company CEO getting the licenses for free out of his Excel spend? :-))
A broad index fund sans Microsoft will do just fine. That's the whole point of a broad index fund.
AI should benefit mankind, not corporate profit.
Seems like a textbook case of letting the best be the enemy of the good.
If you looked at sama's actions and not his words, he seems intent on maximizing his power, control, and prestige (the New Yorker profile, press blitzes, making a constant effort to rub shoulders with politicians/power players, Worldcoin, etc.). I think getting in bed with Microsoft with the early investment would have allowed sama to entertain the possibility that he could succeed Satya at Microsoft some time in the distant future; that is, in the event that OpenAI never became as big as or bigger than Microsoft (his preferred goal, presumably) -- and everything else went mostly right for him. After all, he's always going on about how much money is needed for AGI. He wanted more direct access to the money. Now he has it.
Ultimately, this shows how little sama cared for the OpenAI charter to begin with, specifically the part about benefiting all humanity and preventing an undue concentration of power. He didn’t start his own separate company because the talent was at OpenAI. He wanted to poach the talent, not obey the charter.
Peter Hintjens (ZeroMQ, RIP) wrote a book called "The Psychopath Code", where he posits that psychopaths are attracted to jobs with access to vulnerable people [0]. Selfless, talented idealists who do not chase status and prestige can be vulnerable to manipulation. Perhaps that's why Musk pulled out of OpenAI: he and sama were able to recognize the narcissist in each other and put their guard up accordingly. As Altman says, "Elon desperately wants the world to be saved. But only if he can be the one to save it.”[1] Perhaps this applies to him as well.
Amusingly, someone recently posted an old tweet by pg: "The most surprising thing I've learned from being involved with nonprofits is that they are a magnet for sociopaths."[2] As others in the thread noted, if true, it's up for debate whether this applies more to sama or Ilya. Time will tell, I guess.
It'll also be interesting to see what assurances were given to sama et al about being exempt from Microsoft's internal red tape. Prior to this, Microsoft had at least a little plausible deniability if OpenAI was ever embroiled in controversy regarding its products. They won't have that luxury with sama's team in-house anymore.
[0] https://hintjens.gitbooks.io/psychopathcode/content/chapter8...
[1] https://archive.is/uUG7H#selection-2071.78-2071.166
[2] >>38339379
Although IMO MS has consistently been a technological tarpit. Whatever AI comes out of this arrangement will be a thin shadow of what it might have been.
A "normal" startup could have worked too (but apparently not a big corp).
Edit: Hmm, the sibling comment says something else; I wonder if that makes sense.
HN isn't the place to have the political debate you seem to want to have, so I will simply say that it is really sad that you equate "sharing" with USSR-style communism. There is a huge middle ground between that and the trickle-down Reaganomics for which you seem to be advocating. We should have let that type of binary thinking die with the end of the Cold War.
Mate... Just because you don't bat perfect doesn't make you a tarpit.
MSFT is a technological powerhouse. They have absolutely killed it since they were founded. They have defined personal computing for multiple generations and more or less made the word 'software' something spoken occasionally at kitchen tables vs people saying 'soft-what?'
Definitely not a tarpit. You are throwing out whole villages of babies because of various nasty bathwater over the years.
The picture is bigger. So much crucial tech from MSFT. Remains true today.
You'll just waste your time :)
Look, it's Microsoft's right to put any and all effort into making more money with their various practices.
It is our right to buy a Win10 Pro license for X amount of USD, then bolt down the ** out of it with the myriad of privacy tools to protect ourselves and have a "better Win7 Pro OS".
MS always has and always will try to play the game: getting more control, making more money, collecting more telemetry, doing clean and dirty things until they get caught. Welcome to the human condition. MS employees are humans. MS shareholders are also humans.
As for Windows Update, I don't think I've updated the core version at all since I installed it, and I am using WuMgr and WAU Manager (both portable) for very selective security updates.
It's a game. If you are a former sys-admin or a technical person, then you avoid their traps. If you are not, then the machine will chew your data, just like Google Analytics, AdMob, and so many others do.
Side-note: never update apps when they work 'alright', chances are you will regret it.
Did OpenAI and others pay for the training data from Stack Overflow, Twitter, Reddit, GitHub, etc., or any other source produced by mankind?
is all I'm saying. And I'm not interested in political debates. Neither the right nor the left is good in the long run. We have examples. Moreover, we can predict what happens if...
I’ve always thought that what OpenAI was purporting to do—“protect” humanity from bad things that AI could do to it—was a fool’s errand under a Capitalist system, what with the coercive law of competition and all.
name a utopian fiction that has corporations as benefactors to humanity
You know what is an even bigger temptation to people than money - power. And being a high priest for some “god” controlling access from the unwashed masses who might use it for “bad” is a really heady dose of power.
This safety argument was used to justify monarchy, illiteracy, religious coercion.
There is a much greater chance of AI getting locked away from normal people by a non-profit on a power trip, rather than by a corporation looking to maximize profit.
AI is just another product by another corporation. If I get to benefit from the technology while the company that offers it also makes profit, that’s fine, I think? There wasn’t publicly available AI until someone decided to sell it.
If we use the standard of the alignment folks - that the technology today doesn't even have to be the danger, but an imaginary technology that could someday be built might be the danger, and we don't even have to be able to articulate clearly how it's a danger, we can just postulate the possibility - then all technology becomes suspect and needs a priest class to decide what access the population can have, for fear of risking doomsday.
>If I get to benefit from the technology while the company that offers it also makes profit, that's fine.
What if you don't benefit because you lose your job to AI, or have to deal with the mess created by real-looking disinformation generated by AI?
It was already bad with fake images out of ARMA, but with AI we get a whole new level of fakes.
The pain is real :(
"You use Windows because it is the only OS you know. I use Windows because it is the only OS you know."
Gates keeps repeating it. No one hears it.
"Europe is falling behind" very much depends on your metrics. I guess on HN it's technological innovation, but for most people the metric would be quality of life, happiness, liveability etc. and Europe's left-leaning approach is doing very nicely in that regard; better than the US.
Some people downvote (it's not about the points), but I'm merely stating reality, not my opinions.
I've made my living as a sys-admin early in my career using MS products, so thank you MS for putting food on my table. But this doesn't negate the dirty games/dark patterns/etc.
Do you think profit minded people and organizations aren't motivated by a desire for power? Removing one path to corruption doesn't mean I think it is impossible for a non-profit to become corrupted, but it is one less thing pulling them in that direction.
Before that, the USSR collapsed under Gorbachev. Why? They simply lost with their planned economy, where nobody wants to take a risk, because (1) it's not rewarding, (2) no individual has enough resources, and (3) to get things moving they would have to convince a lot of bureaucrats who don't want to take a risk. They moved forward thanks to a few exceptional people, but there weren't as many people willing to take a risk as in 'rotting' capitalism. I don't know why, but the leaders didn't see the Chinese way. Probably they were busy with internal rat fights and didn't see what was in it for them.
My idea is that there are two extremes. On the left side, people can be happy like yogis, but they don't produce anything or move forward. On the right side is pure capitalism, which is inhuman. The optimum is somewhere in between, with good quality of life and fast progress. What happens when resources are shared too much and life is good? You can see it in Germany today: 80% of Ukrainian refugees don't work and don't want to.