zlacker

[parent] [thread] 25 comments
1. zoogen+(OP)[view] [source] 2023-11-20 08:25:24
All of the naysayers here seem convinced this is Altman and Microsoft looking to destroy OpenAI.

Normally I am the cynic, but this time I'm seeing a potential win-win here. Altman uses his talent to recruit and drive forward a brilliant product-focused AI effort. OpenAI gets to refocus on deep research and safety.

Put aside cynicism and consider that Nadella is looking to create the best of all worlds for all parties. This might just be it.

All of the product-focused engineering peeps have a great place to flock to. Those who believe in the original charter of OpenAI can get back to work on the things that brought them to the company in the first place.

Big props to Nadella. He also heads off a bloodbath in the market tomorrow. So big props to Altman too for his loyalty. By backing MS instead of starting something brand new he is showing massive support for Nadella.

replies(8): >>quickt+ad >>esoter+tj >>jaredk+Pk >>DalasN+3l >>bambax+er >>iandan+vx >>sander+LF >>sackfi+IX
2. quickt+ad[view] [source] 2023-11-20 09:34:38
>>zoogen+(OP)
Damn was looking forward to picking up some cheap MSFT
3. esoter+tj[view] [source] 2023-11-20 10:13:13
>>zoogen+(OP)
What about the people who got paid equity for the past few years of work and now might see all of their equity intentionally vaporized? They essentially got cheated into working for a much lower compensation than they were promised.

I get that funny-money startup equity evaporates all the time, but usually the board doesn't deliberately send the equity to zero. Paying someone in an asset you're intentionally going to devalue seems like fraud in spirit, if not in law.

replies(2): >>sander+uI >>Workac+PM
4. jaredk+Pk[view] [source] 2023-11-20 10:23:05
>>zoogen+(OP)
I’m going to go out on a limb and guess that going forward there won’t be much investor interest in OpenAI.

And if you separate out the products from OpenAI, that leaves the question of how an organization with extremely high compute and human capital costs can sustain itself.

Can OpenAI find more billionaire benefactors to support it so that it can return to its old operating model?

replies(2): >>layer8+An >>jacoop+pq
5. DalasN+3l[view] [source] 2023-11-20 10:24:25
>>zoogen+(OP)
Reading the statement, I am doubtful that Microsoft and OpenAI can continue their business relationship. I think the most aggressive part of this is the "[they will be joining] together with colleagues" clause. He is basically openly poaching the employees of a company with which he supposedly has a very close cooperation. This situation seems especially difficult since Microsoft basically houses all of OpenAI's infrastructure. How can they continue a trust-based relationship like this?
replies(3): >>l5870u+Gn >>stingr+zo >>shkkmo+DB
◧◩
6. layer8+An[view] [source] [discussion] 2023-11-20 10:44:22
>>jaredk+Pk
Wouldn't all Microsoft competitors be interested in boosting OpenAI?
replies(1): >>morale+cC
◧◩
7. l5870u+Gn[view] [source] [discussion] 2023-11-20 10:45:19
>>DalasN+3l
Because they need the chief scientist, Ilya Sutskever. Microsoft's commercial interests will push them to do whatever is needed to make it work.
replies(1): >>morale+ZB
◧◩
8. stingr+zo[view] [source] [discussion] 2023-11-20 10:52:07
>>DalasN+3l
In the end it’s all about business, and it’s not in Microsoft’s interest to destroy OpenAI. It’s in Microsoft’s interest to keep the relationship warm, because it’s basically two different philosophies that are at odds with each other, one of which is now being housed under Microsoft R&D.

For all we know, OpenAI may actually achieve AGI, and Microsoft will still want a front row seat in case that happens.

replies(1): >>fastba+3E
◧◩
9. jacoop+pq[view] [source] [discussion] 2023-11-20 11:05:01
>>jaredk+Pk
I think OpenAI will become the research lab, while the new group at Microsoft led by Sam will focus on creating products.

I personally expect the chat.openai.com site to just become a redirect to copilot.microsoft.com.

10. bambax+er[view] [source] 2023-11-20 11:10:50
>>zoogen+(OP)
I wonder how this will all work out in the end (the excitement around all of this is a little reminiscent of AOL buying Time Warner).

For one, I'm not sure Sam Altman will tolerate MS bureaucracy for very long.

But secondly, the new MS-AI entity presumably can't just take what they built at OpenAI; they need to build it again.

That takes a lot of resources (which MS has), but also a lot of time spent gathering feedback for the models. And copyright issues around source materials are more sensitive today, and people are more attuned to them: Microsoft will have a harder time playing fast and loose with that now than OpenAI did 8 years ago.

Or does Sam at MS become OpenAI's biggest customer? But in that case, what are all the researchers and top scientists who followed him there going to do?

Interesting times in any case.

replies(2): >>Michae+Jw >>sander+2H
◧◩
11. Michae+Jw[view] [source] [discussion] 2023-11-20 11:44:41
>>bambax+er
I think you overestimate the technical part. Just speculating (no inside knowledge, not an expert), but I would assume the models are pretty "easy" and could be coded in a few days. There are surely some tweaks to the standard transformer architecture, but I'd guess those tweaks are well known to Sam and co.

The dataset is more challenging, but here MSFT can help, since they have Bing and GitHub as well. So they might be able to take a few shortcuts there.

The most time-consuming part is the training compute, but here again MSFT has the compute.

Will they beat GPT-4 in a year? My guess is no. But they will come very close to it, and maybe it won't matter that much if you focus on the product.

replies(1): >>duhast+HA
12. iandan+vx[view] [source] 2023-11-20 11:50:48
>>zoogen+(OP)
Seems like it will create a Deepmind/Google Brain style split within MS.

MSR leadership is probably a little shaken at the moment.

replies(1): >>dudein+Mh2
◧◩◪
13. duhast+HA[view] [source] [discussion] 2023-11-20 12:14:52
>>Michae+Jw
You lost me at "can be coded in few days".
replies(1): >>Michae+po1
◧◩
14. shkkmo+DB[view] [source] [discussion] 2023-11-20 12:21:46
>>DalasN+3l
> He is basically openly poaching the employees of a company that he supposedly has a very close cooperation with

Not doing that would be participating in illegal wage suppression. I'm not sure how following the law means OpenAI and MSFT can't continue a business relationship.

◧◩◪
15. morale+ZB[view] [source] [discussion] 2023-11-20 12:24:03
>>l5870u+Gn
They don't. He's a smart guy but he's far from having the reins of AI in his hands as some people blindly believe.

Exhibit A: this weekend, lol.

replies(1): >>seattl+Q62
◧◩◪
16. morale+cC[view] [source] [discussion] 2023-11-20 12:25:48
>>layer8+An
No, because OpenAI is still effectively Microsoft. And also, all the other big players already have their own thing.
◧◩◪
17. fastba+3E[view] [source] [discussion] 2023-11-20 12:38:03
>>stingr+zo
Microsoft specifically does not get a front-row seat (in any meaningful sense) to an OpenAI AGI event, per their agreement.
18. sander+LF[view] [source] 2023-11-20 12:48:59
>>zoogen+(OP)
Agreed, I think this is an awesome outcome. We now have an extremely capable AI product organization in-house at each of Microsoft, Meta, and Google, and a couple strong research-oriented organizations in Anthropic and OpenAI. This sounds like a recipe for a thriving competitive industry to me.
◧◩
19. sander+2H[view] [source] [discussion] 2023-11-20 12:55:16
>>bambax+er
Altman reporting to Nadella is certainly going to be a fascinating political struggle!

Part of me thinks that Nadella, having already demonstrated his mastery over all his competitor CEOs with one deft move after another over the past few years, took this on because he needed a new challenge.

I'd wager Altman will either get sidelined and pushed out, or become Nadella's successor, over the course of the next decade or so.

It's an interesting time!

◧◩
20. sander+uI[view] [source] [discussion] 2023-11-20 13:03:55
>>esoter+tj
There is probably a lawsuit here, I would not disagree, but I don't think the board will have too much trouble arguing that they didn't intentionally send the equity to zero. I certainly haven't seen any of them state that that was their intention. The counter-argument that they should have known their actions would lead to that outcome may be a strong one, though.

But I think it is probably sufficient to point to the language in the contracts granting illiquid equity instruments, which explicitly says the grantee should not have any expectation of a return.

I do think this is an actual problem with the legal structure of how our industry is financed! It's just not clear to me what a good solution would even be. Without the ability to compensate people with lottery tickets, it would be even more irrational for anyone to work anywhere besides the big public companies with liquid stock. And that would be a real shame.

◧◩
21. Workac+PM[view] [source] [discussion] 2023-11-20 13:26:54
>>esoter+tj
The board would counter that that equity was for a stake in a non-profit open source research company and the board was simply steering the ship back towards those goals.
22. sackfi+IX[view] [source] 2023-11-20 14:07:40
>>zoogen+(OP)
I suppose I don't see the case where large numbers of OpenAI employees follow these two to Microsoft. Microsoft can't possibly cover the value of the OpenAI employees' equity as it was (and was imminently set to be), let alone what it could potentially have become. There is a big difference between being on a rocket ship and being on just a good team at a megacorp.
◧◩◪◨
23. Michae+po1[view] [source] [discussion] 2023-11-20 16:18:44
>>duhast+HA
Haha, agreed, it would take longer for sure.

What I meant is: assuming you are using PyTorch / JAX, you could most likely write the model itself pretty fast. Compare it to LLaMA: sure, it's far behind, but the LLaMA model is under 1,000 lines of code and pretty good.

There is tons of work for the training, infra, preparing the data, and so on; that would, I'd guess, run to millions of lines of code. But the core ideas and the model itself are likely thin, I would argue. That is my point.
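For a sense of how thin the core model code is, here's a minimal single-head causal self-attention in plain NumPy (my own sketch for illustration; the weights and sizes are made up, and real models use PyTorch/JAX, multi-head attention, and dozens of stacked layers):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention with a causal mask.
    x: (seq_len, d_model) token embeddings."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)           # (seq_len, seq_len)
    # causal mask: each position attends only to itself and the past
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9
    # numerically stable softmax over the last axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                        # (seq_len, d_v)

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((4, d))               # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                              # (4, 8)
```

The whole attention mechanism is a dozen lines; the surrounding data pipeline, distributed training, and serving infrastructure is where the real engineering effort (and code volume) lives.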

◧◩◪◨
24. seattl+Q62[view] [source] [discussion] 2023-11-20 19:01:55
>>morale+ZB
I know I’m not qualified to make that observation, but what exactly makes you think you are? Can you share what information you’re using to make such a confident determination?
replies(1): >>dudein+2h2
◧◩◪◨⬒
25. dudein+2h2[view] [source] [discussion] 2023-11-20 19:40:59
>>seattl+Q62
My simple take would be the credits for GPT-3.5/GPT-4/GPT-5. The key engineers were among those that have seemingly moved to Microsoft. I personally think Ilya is brilliant. I absolutely don't think he's the _sole_ brilliant mind behind OpenAI. He wasn't even one of the founders. He's a very brilliant and powerful mind and likely will be critical in the breakthroughs that lead to AGI. That said, AGI feels like one of those "way off in the distance" ideas that might be 5, 10, or 100 years away. I tend to think GPT-x is several orders of magnitude from AGI, and this drama was silly and unneeded. GPT-5/6/7/8 aren't likely to destroy the world.
◧◩
26. dudein+Mh2[view] [source] [discussion] 2023-11-20 19:44:02
>>iandan+vx
I don't think so. MSR is more like OpenAI, a research think tank: MSR doesn't create products, they create concepts. I think Sam wants to create products. There would also be a difference in velocity to market.