zlacker

[parent] [thread] 257 comments
1. breadw+(OP)[view] [source] 2023-11-20 14:06:24
If they join Sam Altman and Greg Brockman at Microsoft, they will not need to start from scratch, because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

replies(19): >>Myster+w2 >>bertil+v4 >>dhruvd+45 >>singul+A8 >>dmix+C9 >>m_ke+X9 >>JumpCr+4d >>JumpCr+rd >>Tenoke+2h >>mupuff+3j >>himara+Sn >>Lonely+9z >>Simon_+XB >>davedx+LM >>dheera+qO >>_the_i+gS >>fuddle+p11 >>echelo+p41 >>caycep+Jr1
2. Myster+w2[view] [source] 2023-11-20 14:13:48
>>breadw+(OP)
Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?
replies(2): >>breadw+D7 >>anonym+fa
3. bertil+v4[view] [source] 2023-11-20 14:20:19
>>breadw+(OP)
I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?
replies(1): >>ncann+q6
4. dhruvd+45[view] [source] 2023-11-20 14:21:59
>>breadw+(OP)
More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
◧◩
5. ncann+q6[view] [source] [discussion] 2023-11-20 14:26:30
>>bertil+v4
Don't they need access to the models to use them for Bing?
replies(2): >>armcat+s9 >>bertil+4a
◧◩
6. breadw+D7[view] [source] [discussion] 2023-11-20 14:30:51
>>Myster+w2
See here: https://stratechery.com/2023/openais-misalignment-and-micros...
replies(1): >>kolink+Ib
7. singul+A8[view] [source] 2023-11-20 14:34:08
>>breadw+(OP)
Board will be ousted, new board will instruct interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.
replies(3): >>DebtDe+Rc >>jacque+ud >>vidarh+Di
◧◩◪
8. armcat+s9[view] [source] [discussion] 2023-11-20 14:37:56
>>ncann+q6
Not necessarily; it would just be RAG: they use the standard Bing search engine to retrieve the top-K candidates, then pass those to the OpenAI API in a prompt.
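That retrieve-then-prompt flow can be sketched in a few lines. Purely illustrative: a toy keyword retriever stands in for Bing, the corpus is made up, and the prompt is built but never sent to any API.

```python
import re

# Toy sketch of the retrieve-then-prompt ("RAG") flow described above.
# The retriever and corpus are stand-ins; no real Bing or OpenAI calls.

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve_top_k(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    return sorted(corpus,
                  key=lambda doc: len(tokens(query) & tokens(doc)),
                  reverse=True)[:k]

def build_prompt(query, passages):
    """Stuff the retrieved passages into a prompt for a hosted LLM API."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer the question using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\n")

corpus = [
    "Azure is Microsoft's cloud computing platform.",
    "Bing is Microsoft's web search engine.",
    "OpenAI develops large language models.",
]
query = "What is Bing?"
prompt = build_prompt(query, retrieve_top_k(query, corpus))
# `prompt` is what would be sent to the hosted model; here we only build it.
```

With this flow, the search index and the model stay on opposite sides of an API boundary, which is the point: serving Bing Chat needs API access to the model, not the weights themselves.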
9. dmix+C9[view] [source] 2023-11-20 14:38:35
>>breadw+(OP)
OpenAI's ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber-billionaire like Gates. If they get this because of an OpenAI fumble, it could be one of the most fortunate situations in business history. Vegas-type odds.

A good example of how just having your foot in the door creates serendipitous opportunity in life.

replies(1): >>ramesh+Eh
10. m_ke+X9[view] [source] 2023-11-20 14:39:39
>>breadw+(OP)
Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research
replies(2): >>browni+HG >>twsted+mN
◧◩◪
11. bertil+4a[view] [source] [discussion] 2023-11-20 14:40:24
>>ncann+q6
I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.

There are also all the questions around RLHF, and the pipelines around it to think through.

◧◩
12. anonym+fa[view] [source] [discussion] 2023-11-20 14:40:59
>>Myster+w2
That was a seriously dumb move on the part of OpenAI
◧◩◪
13. kolink+Ib[view] [source] [discussion] 2023-11-20 14:46:56
>>breadw+D7
The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns the IP to GPT, only that they have access to its weights and code.
replies(5): >>tiahur+Of >>tiahur+Rf >>Manouc+im >>breadw+Cr >>azakai+tU
◧◩
14. DebtDe+Rc[view] [source] [discussion] 2023-11-20 14:52:20
>>singul+A8
Board will be ousted, but the ship has sailed on Sam and Greg coming back.
replies(1): >>voittv+TP
15. JumpCr+4d[view] [source] 2023-11-20 14:53:22
>>breadw+(OP)
> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.

replies(3): >>Jensso+ae >>JChara+yf >>sebzim+a01
16. JumpCr+rd[view] [source] 2023-11-20 14:55:04
>>breadw+(OP)
> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.

replies(8): >>toomuc+0e >>breadw+Ke >>anonym+9f >>1024co+Il >>numpad+Yv >>pauldd+4H >>hnbad+xN >>blazes+t51
◧◩
17. jacque+ud[view] [source] [discussion] 2023-11-20 14:55:07
>>singul+A8
That's definitely still within the realm of the possible.
◧◩
18. toomuc+0e[view] [source] [discussion] 2023-11-20 14:57:36
>>JumpCr+rd
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, it's totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts the cash reserves they would need to challenge it in court.
replies(4): >>p_j_w+4O >>dicris+hO >>htrp+W31 >>quickt+N42
◧◩
19. Jensso+ae[view] [source] [discussion] 2023-11-20 14:58:16
>>JumpCr+4d
But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.
◧◩
20. breadw+Ke[view] [source] [discussion] 2023-11-20 15:01:23
>>JumpCr+rd
Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.
replies(2): >>ghaff+Wo >>cactus+AS
◧◩
21. anonym+9f[view] [source] [discussion] 2023-11-20 15:04:00
>>JumpCr+rd
So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe
replies(1): >>JumpCr+Yg
◧◩
22. JChara+yf[view] [source] [discussion] 2023-11-20 15:06:22
>>JumpCr+4d
they're GPUs right? Time to mine some niche cryptos to cash out the azure credits..
replies(1): >>Manouc+Nj
◧◩◪◨
23. tiahur+Of[view] [source] [discussion] 2023-11-20 15:07:49
>>kolink+Ib
Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.
◧◩◪◨
24. tiahur+Rf[view] [source] [discussion] 2023-11-20 15:08:08
>>kolink+Ib
Exactly. The generalities, much less the details, of the deal are not public.
◧◩◪
25. JumpCr+Yg[view] [source] [discussion] 2023-11-20 15:15:36
>>anonym+9f
> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.

replies(1): >>johndh+LP
26. Tenoke+2h[view] [source] 2023-11-20 15:15:56
>>breadw+(OP)
Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)
◧◩
27. ramesh+Eh[view] [source] [discussion] 2023-11-20 15:20:06
>>dmix+C9
>A good example of how just having your foot in the door creates serendipitous opportunity in life.

Sounds like Altman's biography.

replies(2): >>renega+uQ >>itchyo+vR
◧◩
28. vidarh+Di[view] [source] [discussion] 2023-11-20 15:27:38
>>singul+A8
Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think they have any direct power over it.
29. mupuff+3j[view] [source] 2023-11-20 15:30:37
>>breadw+(OP)
Can the OpenAI board renege on the deal with msft?
replies(3): >>somena+mz >>jacque+HA >>kcorbi+MY
◧◩◪
30. Manouc+Nj[view] [source] [discussion] 2023-11-20 15:35:01
>>JChara+yf
I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.
◧◩
31. 1024co+Il[view] [source] [discussion] 2023-11-20 15:48:51
>>JumpCr+rd
# sudo renice -n 19 -p "$(pgrep -f openai)"

There's your "credit".

◧◩◪◨
32. Manouc+im[view] [source] [discussion] 2023-11-20 15:51:05
>>kolink+Ib
The worst part of OpenAI is their web frontend.

Their development and QA process is either disorganized to the extreme, or non-existent.

replies(1): >>ipaddr+cY
33. himara+Sn[view] [source] 2023-11-20 16:00:46
>>breadw+(OP)
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

replies(7): >>dan_qu+GP >>svnt+tQ >>alasda+gR >>blazes+G41 >>btown+t71 >>breadw+LG2 >>runjak+uS2
◧◩◪
34. ghaff+Wo[view] [source] [discussion] 2023-11-20 16:06:52
>>breadw+Ke
It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.
replies(6): >>treesc+Xz >>dagesh+HC >>oceanp+AE >>vikram+FP >>clover+dU >>jedber+7b1
◧◩◪◨
35. breadw+Cr[view] [source] [discussion] 2023-11-20 16:21:42
>>kolink+Ib
What are the chances that an investor owns 49% of a company but does not have rights to its IP? Especially when that investor is Microsoft?
replies(2): >>himara+yu >>sudosy+v51
◧◩◪◨⬒
36. himara+yu[view] [source] [discussion] 2023-11-20 16:36:47
>>breadw+Cr
Very reasonable? Microsoft doesn't control any part of the company and faces a high degree of regulatory scrutiny.
◧◩
37. numpad+Yv[view] [source] [discussion] 2023-11-20 16:43:52
>>JumpCr+rd
A $13B lawsuit against Microsoft Corporation clearly in the wrong surely is an easy one.
replies(4): >>geodel+xQ >>dragon+JU >>mikery+BY >>fennec+5i4
38. Lonely+9z[view] [source] 2023-11-20 16:57:44
>>breadw+(OP)
Just a thought.... Wouldn't one of the board members be like "If you screw with us any further we're releasing gpt to the public"

I'm wondering why that option hasn't been used yet.

replies(5): >>jacque+fA >>supriy+6C >>vikram+nQ >>srouss+nR >>justap+hc1
◧◩
39. somena+mz[view] [source] [discussion] 2023-11-20 16:58:11
>>mupuff+3j
A contractual lesson you only need to learn once is to ensure there are penalties for breach, or that a breach would entail a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'
replies(1): >>agloe_+c81
◧◩◪◨
40. treesc+Xz[view] [source] [discussion] 2023-11-20 17:00:04
>>ghaff+Wo
When the biggest chunk of your compensation is in the form of PPUs (profit participation units) which might be worthless under the new direction of the company (or worth 1/10th of what you thought they were), it might actually be a much easier jump than people think to get some fresh $MSFT stock options which can be cashed regardless.
◧◩
41. jacque+fA[view] [source] [discussion] 2023-11-20 17:01:12
>>Lonely+9z
Which of the remaining board members could credibly make that threat?
◧◩
42. jacque+HA[view] [source] [discussion] 2023-11-20 17:03:23
>>mupuff+3j
Don't you think they have trouble enough as it is?
replies(1): >>mupuff+QB
◧◩◪
43. mupuff+QB[view] [source] [discussion] 2023-11-20 17:07:08
>>jacque+HA
Depends on why they did what they did.

If they let msft "loot" all their IP then they lose any type of leverage they might still have, and if they did it due to some ideological reason I could see why they might prefer to choose a scorched earth policy.

Given that they refused to resign, it seems like they prefer to fight rather than hand it to Sam Altman, which is what the msft maneuver looks like de facto.

replies(1): >>sebzim+qb1
44. Simon_+XB[view] [source] 2023-11-20 17:07:24
>>breadw+(OP)
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.

◧◩
45. supriy+6C[view] [source] [discussion] 2023-11-20 17:07:51
>>Lonely+9z
Probably a violation of agreements with OpenAI and it would harm their own moat as well, while achieving very little in return.
replies(1): >>lrvick+y51
◧◩◪◨
46. dagesh+HC[view] [source] [discussion] 2023-11-20 17:09:55
>>ghaff+Wo
Given these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.
◧◩◪◨
47. oceanp+AE[view] [source] [discussion] 2023-11-20 17:16:08
>>ghaff+Wo
Depends on how much of that is paper money.

If you're making like 250k cash and were promised $1M a year in now-worthless paper, plus you have OpenAI on the resume and are one of the most in-demand people in the world? It would be ridiculously easy to quit.

replies(1): >>quickt+752
◧◩
48. browni+HG[view] [source] [discussion] 2023-11-20 17:22:14
>>m_ke+X9
0% chance of Ilya failing upwards from this. He dunked himself hard and has blasted a huge hole in his organizational-game-theory quotient.
replies(3): >>golerg+8T >>kvetch+dT >>kibwen+Z31
◧◩
49. pauldd+4H[view] [source] [discussion] 2023-11-20 17:23:20
>>JumpCr+rd
Sure, the point is that MS giving $13B of its services away is less expensive than $13B in cash.
replies(2): >>serger+W01 >>nojvek+i41
50. davedx+LM[view] [source] 2023-11-20 17:42:52
>>breadw+(OP)
"just" is doing a hell of a lot of work there.
◧◩
51. twsted+mN[view] [source] [discussion] 2023-11-20 17:44:33
>>m_ke+X9
BTW, has Karpathy signed the petition?
◧◩
52. hnbad+xN[view] [source] [discussion] 2023-11-20 17:45:11
>>JumpCr+rd
Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.
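That last point can be made concrete with a toy calculation. The margin figure below is a made-up assumption for illustration, not a known number for Azure.

```python
# Hypothetical illustration: issuing cloud credits costs the provider only
# the cost-of-goods to serve them, not their face value.

def cost_to_issue(face_value, gross_margin):
    """Provider's out-of-pocket cost when credits are redeemed at face value."""
    return face_value * (1 - gross_margin)

face_value = 13e9       # the $13bn in credits discussed in the thread
assumed_margin = 0.60   # made-up cloud gross margin, purely for illustration

out_of_pocket = cost_to_issue(face_value, assumed_margin)  # roughly $5.2bn
```

Under that assumed margin, honoring $13bn of credits would cost the issuer only about $5.2bn in actual compute delivered, while the recipient books the full face value.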
replies(1): >>dixie_+RP
◧◩◪
53. p_j_w+4O[view] [source] [discussion] 2023-11-20 17:47:48
>>toomuc+0e
It’s amazing to me to see people on HN advocate a giant company bullying a smaller one with these kinds of skeezy tactics.
replies(7): >>geodel+mP >>DANmod+vP >>weird-+zP >>toomuc+FQ >>eigenv+vU >>toaste+u01 >>fennec+Tf4
◧◩◪
54. dicris+hO[view] [source] [discussion] 2023-11-20 17:48:33
>>toomuc+0e
Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.
replies(1): >>ajcp+5Z
55. dheera+qO[view] [source] 2023-11-20 17:48:51
>>breadw+(OP)
It's about time for ChatGPT to be the next CEO of OpenAI. Humans are too stupid to oversee the company.
◧◩◪◨
56. geodel+mP[view] [source] [discussion] 2023-11-20 17:52:06
>>p_j_w+4O
Not advocating, just reflecting on the reality of the situation.
◧◩◪◨
57. DANmod+vP[view] [source] [discussion] 2023-11-20 17:52:28
>>p_j_w+4O
Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.
◧◩◪◨
58. weird-+zP[view] [source] [discussion] 2023-11-20 17:52:48
>>p_j_w+4O
Presenting a scenario and advocating aren't the same thing
◧◩◪◨
59. vikram+FP[view] [source] [discussion] 2023-11-20 17:53:10
>>ghaff+Wo
those jobs look a lot less cushy now compared to a new microsoft division where everyone is aligned on the idea that making bank is good and fun
◧◩
60. dan_qu+GP[view] [source] [discussion] 2023-11-20 17:53:10
>>himara+Sn
This is MSFT we're talking about. Aggressive legal maneuvers are right in their wheelhouse!
replies(1): >>burnte+gk1
◧◩◪◨
61. johndh+LP[view] [source] [discussion] 2023-11-20 17:53:20
>>JumpCr+Yg
Except that all of the investors are aligned with Microsoft, in that they want Sam to lead their investment.
replies(1): >>rvnx+SW
◧◩◪
62. dixie_+RP[view] [source] [discussion] 2023-11-20 17:53:28
>>hnbad+xN
Can you mine 13bn+ bitcoin with 13bn worth of Azure compute power?
replies(2): >>floren+sS >>shawab+tg1
◧◩◪
63. voittv+TP[view] [source] [discussion] 2023-11-20 17:53:32
>>DebtDe+Rc
I would think OpenAI is basically toast. They aren't coming back, these people will quit and this will end up in court.

Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.

replies(3): >>Applej+hW >>MVisse+k41 >>moogly+F71
◧◩
64. vikram+nQ[view] [source] [discussion] 2023-11-20 17:54:49
>>Lonely+9z
Theoretically their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.

Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.

replies(2): >>whatwh+721 >>mcv+b91
◧◩
65. svnt+tQ[view] [source] [discussion] 2023-11-20 17:54:57
>>himara+Sn
But it does suggest a possibility of the appearance of a sudden motive:

Open AI implements and releases ChatGPTs (Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.

Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.

replies(2): >>himara+TS >>Terret+ZT
◧◩◪
66. renega+uQ[view] [source] [discussion] 2023-11-20 17:54:59
>>ramesh+Eh
Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.

Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.

Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.

replies(1): >>dmix+vG1
◧◩◪
67. geodel+xQ[view] [source] [discussion] 2023-11-20 17:55:07
>>numpad+Yv
Clear to you. But in courts of law it may take a while to be clear.
◧◩◪◨
68. toomuc+FQ[view] [source] [discussion] 2023-11-20 17:55:38
>>p_j_w+4O
Explaining how the gazelle confidently jumping into the oasis is going to get eaten isn't advocating for the crocodiles. See sibling comments.

Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.

If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.

replies(2): >>jacque+gY >>robbom+ZZ
◧◩
69. alasda+gR[view] [source] [discussion] 2023-11-20 17:57:33
>>himara+Sn
They could make ChatGPT++

https://en.wikipedia.org/wiki/Visual_J%2B%2B

replies(3): >>dangro+uR >>prepen+HT >>trhway+4X1
◧◩
70. srouss+nR[view] [source] [discussion] 2023-11-20 17:58:05
>>Lonely+9z
Which they take and sell.
◧◩◪
71. dangro+uR[view] [source] [discussion] 2023-11-20 17:58:29
>>alasda+gR
ChatGPT#
replies(8): >>hn_thr+4S >>patapo+YY >>eli_go+041 >>TeMPOr+e71 >>gfosco+rf1 >>klft+2A1 >>fluidc+MA1 >>adrian+AS1
◧◩◪
72. itchyo+vR[view] [source] [discussion] 2023-11-20 17:58:30
>>ramesh+Eh
I get the impression based on Altman's history as CEO then ousted from both YCombinator and OpenAI, that he must be a brilliant, first-impression guy with the chops to back things up for a while until folks get tired of the way he does things.

Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.

replies(1): >>ipaddr+5X
◧◩◪◨
73. hn_thr+4S[view] [source] [discussion] 2023-11-20 18:00:38
>>dangro+uR
Hopefully ChatGPT will make it easier to search/differentiate between ChatGPT, ChatGPT++, and ChatGPT# than Google does.
replies(1): >>albert+wX
74. _the_i+gS[view] [source] 2023-11-20 18:01:11
>>breadw+(OP)
Exactly. This is what business is about in the ranks of heavyweights like Satya. On the other hand, it prevents others from taking advantage of OpenAI.

MS can only win, because there are only two viable options: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.

Either way, it won't benefit competitors.

◧◩◪◨
75. floren+sS[view] [source] [discussion] 2023-11-20 18:01:50
>>dixie_+RP
Can you mine $1+ bitcoin with $1 of Azure credits? The questions are equivalent and the answer is no.
◧◩◪
76. cactus+AS[view] [source] [discussion] 2023-11-20 18:02:14
>>breadw+Ke
Why would Microsoft take Ilya? He is rumored to have started the coup. I can see Microsoft taking all uninvolved employees.
replies(2): >>loeg+HU >>noprom+z01
◧◩◪
77. himara+TS[view] [source] [discussion] 2023-11-20 18:03:39
>>svnt+tQ
I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?

[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

replies(1): >>svnt+uX
◧◩◪
78. golerg+8T[view] [source] [discussion] 2023-11-20 18:04:27
>>browni+HG
He's shown himself to be bad at politics, but he's still one of the world's best researchers. Surely, a sensible company would find a position for him where he would be able to bring enormous value without having to play politics.
replies(2): >>browni+w11 >>nvm0n2+Q61
◧◩◪
79. kvetch+dT[view] [source] [discussion] 2023-11-20 18:04:34
>>browni+HG
countless people are looking to weaponize his autism
replies(1): >>fb03+FY
◧◩◪
80. prepen+HT[view] [source] [discussion] 2023-11-20 18:06:16
>>alasda+gR
“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

replies(5): >>kylebe+CV >>htrp+w31 >>bee_ri+P71 >>barkin+DP1 >>wkat42+RK2
◧◩◪
81. Terret+ZT[view] [source] [discussion] 2023-11-20 18:07:30
>>svnt+tQ
>>38348995
◧◩◪◨
82. clover+dU[view] [source] [discussion] 2023-11-20 18:08:02
>>ghaff+Wo
I would imagine the MS jobs* would be cushier, just with less long-term total upside. For all the promise of employees having 5-50 million in potential one-day money, MS can likely offer 1 million guaranteed in the next 4 years, and perhaps more with some kind of incentives. IMHO guaranteed money has a very powerful effect on most, especially when it takes you into "Not rich, but don't technically need to work" anymore territory.

Personally I've got enough IOU's alive that I may be rich one day. But if someone gave me retirement in 4 years money, guaranteed, I wouldn't even blink before taking it.

*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.

replies(2): >>ghaff+r01 >>quickt+o62
◧◩◪◨
83. azakai+tU[view] [source] [discussion] 2023-11-20 18:08:59
>>kolink+Ib
Yes, there is a big difference between having access to the weights and code and having a license to use them in different ways.

It seems obvious Microsoft has a license to use them in Microsoft's own products. Microsoft said so directly on Friday.

What is less obvious is if Microsoft has a license to use them in other ways. For example, can Microsoft provide those weights and code to third parties? Can they let others use them? In particular, can they clone the OpenAI API? I can see reasons for why that would not have been in the deal (it would risk a major revenue source for OpenAI) but also reasons why Microsoft might have insisted on it (because of situations just like the one happening now).

What is actually in the deal is not public as far as I know, so we can only speculate.

replies(1): >>whycom+id1
◧◩◪◨
84. eigenv+vU[view] [source] [discussion] 2023-11-20 18:09:08
>>p_j_w+4O
Sounds like it won’t be much of a company in a couple days. Just 3 idiot board members wondering why the building is empty.
replies(4): >>noprom+aZ >>jacque+971 >>Madnes+Gg1 >>hansel+hP1
◧◩◪◨
85. loeg+HU[view] [source] [discussion] 2023-11-20 18:09:49
>>cactus+AS
The article mentions Ilya regrets it, whatever his role was.
replies(2): >>dragon+fX >>cbozem+wY
◧◩◪
86. dragon+JU[view] [source] [discussion] 2023-11-20 18:09:59
>>numpad+Yv
"Clearly" in the form of the most probable interpretation of the public facts doesn't mean that it is unambiguous enough to be resolved without a trial. And by the time a trial and the inevitable first-level appeal (for which the trial judgement would likely be stayed) were complete, so that there would even be a collectible judgement, the world would have moved out from underneath OpenAI; if they still existed as an entity, whatever they collected would basically be funding to start from scratch, unless they also found a substitute for the Microsoft arrangement in the interim.

Which I don't think is impossible at some level (probably less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have, if they keep some key staff -- some other interested deep-pockets parties could use the leg up -- but it's not going to be a cakewalk in the best of cases.

◧◩◪◨
87. kylebe+CV[view] [source] [discussion] 2023-11-20 18:13:21
>>prepen+HT
At least in this forum can we please stop calling something that is not even close to AGI, AGI. It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.
replies(7): >>prepen+QX >>boc+p01 >>erosen+j61 >>hackin+u81 >>ncjcuc+wD1 >>acje+TL1 >>fennec+Sc4
◧◩◪◨
88. Applej+hW[view] [source] [discussion] 2023-11-20 18:15:22
>>voittv+TP
Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.

My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )

Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.

In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.

The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.

Roko's Basilisk is a sci-fi hypothetical.

Altman's Basilisk, if that's what happened, is a panic reaction.

I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.

It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.

If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)

◧◩◪◨⬒
89. rvnx+SW[view] [source] [discussion] 2023-11-20 18:17:24
>>johndh+LP
The investors don't care who leads, they just want 10x or 100x their bet.

If tomorrow it's Donald Trump or Sam Altman or anyone else, and it works out, the investors are going to be happy.

◧◩◪◨
90. ipaddr+5X[view] [source] [discussion] 2023-11-20 18:18:05
>>itchyo+vR
Did YCombinator oust him? Would love to hear that story.
◧◩◪◨⬒
91. dragon+fX[view] [source] [discussion] 2023-11-20 18:18:29
>>loeg+HU
But what does Ilya regret, and how does that counter the argument that Microsoft would likely be disinclined to take him on?

If what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the mission of the OpenAI nonprofit (and one of Ilya's public core concerns), and taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been under Sam, but without any control from the OpenAI board -- well, that's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.

And any regret more aligned with Microsoft's interests as far as intentions is probably even a stronger negative signal on judgement.

replies(1): >>loeg+7w2
◧◩◪◨
92. svnt+uX[view] [source] [discussion] 2023-11-20 18:19:23
>>himara+TS
I think it’s more likely that D’Angelo was there for his link to Meta, while Hoffman was rendered redundant after the big Microsoft deal (which occurred a month or two before he was asked to leave), but that’s just a guess.
replies(1): >>himara+Z91
◧◩◪◨⬒
93. albert+wX[view] [source] [discussion] 2023-11-20 18:19:28
>>hn_thr+4S
dotGPT
◧◩◪◨⬒
94. prepen+QX[view] [source] [discussion] 2023-11-20 18:20:43
>>kylebe+CV
I’m talking about the ultimate end product that Microsoft and OpenAI want to create.

So I mean proper AGI.

Naming the product Clippy now is perfectly fine while it’s just an LLM, and will be more excellent over the years when it eventually achieves AGI-ness.

At least in this forum can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know). So I think it’s funny you assume I think chatgpt is an AGI.

replies(3): >>JohnFe+s81 >>kylebe+Fk2 >>NemoNo+sv2
◧◩◪◨⬒
95. ipaddr+cY[view] [source] [discussion] 2023-11-20 18:22:19
>>Manouc+im
You could make your own and charge for access if you feel you can do better. Make a show post when you are done and we'll comment.
replies(1): >>Manouc+MT8
◧◩◪◨⬒
96. jacque+gY[view] [source] [discussion] 2023-11-20 18:22:30
>>toomuc+FQ
This is a great comment. Having an open eye towards what lessons you can learn from these events so that you don't have to re-learn them when they might apply to you is a very good way to ensure you don't pay avoidable tuition fees.
◧◩◪◨⬒
97. cbozem+wY[view] [source] [discussion] 2023-11-20 18:23:25
>>loeg+HU
Yeah, I'm sure he does regret it, now that it blew up in his face.
◧◩◪
98. mikery+BY[view] [source] [discussion] 2023-11-20 18:23:49
>>numpad+Yv
I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.

They bought their IP rights from OpenAI.

I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.

replies(1): >>noprom+y11
◧◩◪◨
99. fb03+FY[view] [source] [discussion] 2023-11-20 18:23:55
>>kvetch+dT
Let's please stop using mental health as an excuse for backstabbing.
◧◩
100. kcorbi+MY[view] [source] [discussion] 2023-11-20 18:24:19
>>mupuff+3j
If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!
◧◩◪◨
101. patapo+YY[view] [source] [discussion] 2023-11-20 18:24:43
>>dangro+uR
ChatGPT Series 4
◧◩◪◨
102. ajcp+5Z[view] [source] [discussion] 2023-11-20 18:25:07
>>dicris+hO
I don't think the value of credits can be changed per tenant or customer that easily.

I've actually had a discussion with Microsoft on this subject, as they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they couldn't just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant, they said unfortunately no. I just chalked this up to MSFT sales tactics, but I was told candidly by some others that worked on that Azure resource that they were getting zero enterprise adoption of it because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.

replies(1): >>donalh+OI1
◧◩◪◨⬒
103. noprom+aZ[view] [source] [discussion] 2023-11-20 18:25:25
>>eigenv+vU
The wired article seems to be updated by the hour.

Now up to 600+/770 total.

Couple janitors. I dunno who hasn't signed that at this point ha...

Would be fun to see a counter letter explaining their thinking to not sign on.

replies(2): >>labcom+gd1 >>wolver+bm2
◧◩◪◨⬒
104. robbom+ZZ[view] [source] [discussion] 2023-11-20 18:28:59
>>toomuc+FQ
This might be my favorite comment I've read on HN. Spot on.

Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?

replies(1): >>jzb+M91
◧◩
105. sebzim+a01[view] [source] [discussion] 2023-11-20 18:29:22
>>JumpCr+4d
If any company can find a way to avoid having to pay up on those credits it's Microsoft.

"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."

◧◩◪◨⬒
106. boc+p01[view] [source] [discussion] 2023-11-20 18:30:07
>>kylebe+CV
We are incredibly far away from AGI and we're only getting there with wetware.

LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.

replies(2): >>myrmid+Ag1 >>fennec+6e4
◧◩◪◨⬒
107. ghaff+r01[view] [source] [discussion] 2023-11-20 18:30:11
>>clover+dU
> I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.

The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.

◧◩◪◨
108. toaste+u01[view] [source] [discussion] 2023-11-20 18:30:24
>>p_j_w+4O
Yeah seems extremely unbelievable.
◧◩◪◨
109. noprom+z01[view] [source] [discussion] 2023-11-20 18:30:40
>>cactus+AS
Because he is possibly the most desirable AI researcher on planet earth. Full stop.

Also all these cats aren't petty. They are friends. I'm sure Ilya feels terrible. Satya is a pro... Won't be hard feelings.

The guy threw in with the board... He's not from startup land. His last gig was Google. He's in way over his head relative to someone like Altman, who was in this world the moment he was out of college diapers.

Poor Ilya... It's awful to build something and then accidentally destroy it. Hopefully it works out for him. I'm fairly certain he and Altman and Brockman have already reconciled during the board negotiations... Obviously Ilya realized in the span of 48hrs that he'd made a huge mistake.

replies(1): >>nvm0n2+n31
◧◩◪
110. serger+W01[view] [source] [discussion] 2023-11-20 18:32:05
>>pauldd+4H
Exactly, I don't know the exact terms of the deal, but I am guessing that's at list price / a high markup over the cost of those services.

So the $13B could represent considerably less in actual cost.

111. fuddle+p11[view] [source] 2023-11-20 18:33:45
>>breadw+(OP)
Oh man, I'm not looking forward to Microsoft AGI.
replies(1): >>kreebe+kf1
◧◩◪◨
112. browni+w11[view] [source] [discussion] 2023-11-20 18:34:32
>>golerg+8T
Upwards, I said. And I was responding to a post.

I don't see a trajectory to "head of Microsoft Research".

replies(1): >>didibu+fg1
◧◩◪◨
113. noprom+y11[view] [source] [discussion] 2023-11-20 18:34:40
>>mikery+BY
So true.

MSFT looks classy af.

Satya is no saint... But evidence seems to me he's negotiating in good faith. Recall that openai could date anyone when they went to the dance on that cap raise.

They picked msft because of the value system the leadership exhibited and willingness to work with their unusual must haves surrounding governance.

The big players at openai have made all that clear in interviews. Also Altman has huge respect for Satya and team. He more or less stated on podcasts that he's the best ceo he's ever interacted with. That says a lot.

◧◩◪
114. whatwh+721[view] [source] [discussion] 2023-11-20 18:36:14
>>vikram+nQ
A power grab by open sourcing something that fits their initial mission? Interesting analysis
replies(2): >>nvm0n2+F41 >>vikram+691
◧◩◪◨⬒
115. nvm0n2+n31[view] [source] [discussion] 2023-11-20 18:40:45
>>noprom+z01
> he is possibly the most desireable AI researcher on planet earth

was

There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI neither invented transformers nor switch transformers (which is what GPT4 is rumoured to be).

replies(2): >>noprom+vs1 >>noprom+ZA1
◧◩◪◨
116. htrp+w31[view] [source] [discussion] 2023-11-20 18:41:05
>>prepen+HT
> Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

Finally the paperclip maximizer

◧◩◪
117. htrp+W31[view] [source] [discussion] 2023-11-20 18:42:12
>>toomuc+0e
Basically the current situation you have with AI compute now on the hyperscalers

Good luck trying to find H100 80GB cards on the 3 big clouds.

◧◩◪
118. kibwen+Z31[view] [source] [discussion] 2023-11-20 18:42:16
>>browni+HG
The same could have been said for Adam Neumann, and yet...
replies(2): >>browni+c71 >>jacque+l71
◧◩◪◨
119. eli_go+041[view] [source] [discussion] 2023-11-20 18:42:18
>>dangro+uR
Visual ChatGPT#.net
◧◩◪
120. nojvek+i41[view] [source] [discussion] 2023-11-20 18:43:14
>>pauldd+4H
Azure has ~60% profit margin. So it's more like MS gave $5.2B worth of Azure credits in return for 75% of OpenAI profits up to $13B * 100 = $1.3 trillion.

Which is a phenomenal deal for MSFT.

Time will tell whether they ever reach more than $1.3 trillion in profits.
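Back-of-the-envelope, the arithmetic works out like this (the ~60% margin and the 100x cap are the commenter's assumptions, not confirmed figures):

```python
# Sketch of the comment's arithmetic. Assumptions from the comment:
# ~60% Azure profit margin, $13B in credits, returns capped at 100x.
credits = 13e9                          # face value of the Azure credits
margin = 0.60                           # assumed Azure profit margin
cost_to_msft = credits * (1 - margin)   # what the compute actually costs MSFT
profit_cap = credits * 100              # the rumored 100x profit cap

print(f"Cost to Microsoft: ${cost_to_msft / 1e9:.1f}B")   # $5.2B
print(f"Capped return:     ${profit_cap / 1e12:.1f}T")    # $1.3T
```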

replies(2): >>nights+671 >>quickt+c72
◧◩◪◨
121. MVisse+k41[view] [source] [discussion] 2023-11-20 18:43:17
>>voittv+TP
As long as compute keeps increasing, model size and performance can keep increasing.

So no, we’re nowhere near max capability.

122. echelo+p41[view] [source] 2023-11-20 18:43:51
>>breadw+(OP)
> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.

◧◩◪◨
123. nvm0n2+F41[view] [source] [discussion] 2023-11-20 18:44:19
>>whatwh+721
No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.

Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.

◧◩
124. blazes+G41[view] [source] [discussion] 2023-11-20 18:44:22
>>himara+Sn
I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.

The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.

replies(3): >>buggle+ad1 >>VirusN+2t1 >>trhway+LV1
◧◩
125. blazes+t51[view] [source] [discussion] 2023-11-20 18:47:22
>>JumpCr+rd
A hostile relationship with your cloud provider is nutso.
◧◩◪◨⬒
126. sudosy+v51[view] [source] [discussion] 2023-11-20 18:47:31
>>breadw+Cr
Isn't the situation that the company Microsoft has a stake in doesn't even own the IP? As I understand it, the non-profit owns the IP.
◧◩◪
127. lrvick+y51[view] [source] [discussion] 2023-11-20 18:47:44
>>supriy+6C
There is no moat

https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

◧◩◪◨⬒
128. erosen+j61[view] [source] [discussion] 2023-11-20 18:49:39
>>kylebe+CV
Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack, or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?

Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.

◧◩◪◨
129. nvm0n2+Q61[view] [source] [discussion] 2023-11-20 18:51:27
>>golerg+8T
This is the guy who supposedly burned some wooden effigy at an offsite, saying it represented unaligned AI? The same guy who signed off on a letter accusing Altman of being a liar, and has now signed a letter saying he wants Altman to come back and he has no confidence in the board i.e. himself? The guy who thinks his own team's work might destroy the world and needs to be significantly slowed down?

Why would anyone in their right mind invite such a man to lead a commercial research team, when he's demonstrated quite clearly that he'd spend all his time trying to sabotage it?

This idea that he's one of the world's best researchers is also somewhat questionable. Nobody cared much about OpenAI's work up until they did some excellent scaling engineering, partnered with Microsoft to get GPUs and then commercialized Google's transformer research papers. OpenAI's success is still largely built on the back of excellent execution of other people's ideas more than any unique breakthroughs. The main advance they made beyond Google's work was InstructGPT which let you talk to LLMs naturally for the first time, but Sutskever's name doesn't appear on that paper.

replies(1): >>famous+qq1
◧◩◪◨
130. nights+671[view] [source] [discussion] 2023-11-20 18:52:53
>>nojvek+i41
I highly doubt it is that simple. It's an opportunity cost of potentially selling those same credits for market price.
replies(1): >>nojvek+r91
◧◩◪◨⬒
131. jacque+971[view] [source] [discussion] 2023-11-20 18:52:57
>>eigenv+vU
I'm having trouble imagining the level of conceit required to think that those three by their lonesome have it right when pretty much all of the company is on the other side of the ledger, and those are the people that stand to lose more. Incredible, really. The hubris.
replies(3): >>throwc+Bk1 >>jasonf+AM1 >>wolver+Fl2
◧◩◪◨
132. browni+c71[view] [source] [discussion] 2023-11-20 18:53:04
>>kibwen+Z31
Adam had style. Quite seriously, that can’t be overestimated in the big show.
◧◩◪◨
133. TeMPOr+e71[view] [source] [discussion] 2023-11-20 18:53:14
>>dangro+uR
Also Managed ChatGPT, ChatGPT/CLR.
◧◩◪◨
134. jacque+l71[view] [source] [discussion] 2023-11-20 18:53:39
>>kibwen+Z31
The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.
◧◩
135. btown+t71[view] [source] [discussion] 2023-11-20 18:54:03
>>himara+Sn
Archive of the WSJ article above: https://archive.is/OONbb
◧◩◪◨
136. moogly+F71[view] [source] [discussion] 2023-11-20 18:54:36
>>voittv+TP
Everyone? Inevitable? Maybe on the time scale of a 1000 years.
◧◩◪◨
137. bee_ri+P71[view] [source] [discussion] 2023-11-20 18:55:30
>>prepen+HT
It is too bad MS doesn’t have the rights to any beloved AI characters.
replies(4): >>jowea+td1 >>ukuina+Lj2 >>fennec+lc4 >>everfo+4l4
◧◩◪
138. agloe_+c81[view] [source] [discussion] 2023-11-20 18:56:50
>>somena+mz
This. MSFT is dreaming of an OpenAI hard outage right now, perfect little detail to forfeit compute credits.
◧◩◪◨⬒⬓
139. JohnFe+s81[view] [source] [discussion] 2023-11-20 18:57:54
>>prepen+QX
I think that the dispute is about whether or not AGI is possible (at least within the next several decades). One camp seems to be operating with the assumption that not only is it possible, but it's imminent. The other camp is saying that they've seen little reason to think that it is.

(I'm in the latter camp).

replies(2): >>prepen+xp1 >>kylebe+Xk2
◧◩◪◨⬒
140. hackin+u81[view] [source] [discussion] 2023-11-20 18:57:59
>>kylebe+CV
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
replies(1): >>DrSiem+Tf1
◧◩◪◨
141. vikram+691[view] [source] [discussion] 2023-11-20 19:00:17
>>whatwh+721
They claimed they kept it closed source because of safety. If they go back on that, they'd have to explain why the board went along with a lie of that scale, and they'd have to justify why all the concerns they claimed about the tech falling into the wrong hands were actually fake, and why it was OK that the board signed off on that for so long
◧◩◪
142. mcv+b91[view] [source] [discussion] 2023-11-20 19:00:28
>>vikram+nQ
Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.
replies(1): >>arrowl+bd1
◧◩◪◨⬒
143. nojvek+r91[view] [source] [discussion] 2023-11-20 19:01:27
>>nights+671
OpenAI is a big marketing piece for Azure. They go to every enterprise and tell them OpenAI uses Azure Cloud. Azure AI infra powers the biggest AI company on the planet. Their custom home built chips are designed with Open AI scientists. It is battle hardened. If anyone sues you for the data, our army of lawyers will fight for you.

No enterprise employee gets fired for using Microsoft.

It is a power play to pull enterprises away from AWS, and suffocating GCP.

◧◩◪◨⬒⬓
144. jzb+M91[view] [source] [discussion] 2023-11-20 19:02:58
>>robbom+ZZ
Well, some of the publicly posted communications may be obfuscation of what’s really being done and said.
◧◩◪◨⬒
145. himara+Z91[view] [source] [discussion] 2023-11-20 19:03:45
>>svnt+uX
I assume their personal relationship played more of a role, given Sam led Quora's Series D round.
replies(1): >>antonj+bo1
◧◩◪◨
146. jedber+7b1[view] [source] [discussion] 2023-11-20 19:08:28
>>ghaff+Wo
Microsoft said all OpenAI employees have an open offer to match their current comp. It would be the easiest jump ship option ever.
replies(1): >>phlaka+3d2
◧◩◪◨
147. sebzim+qb1[view] [source] [discussion] 2023-11-20 19:09:14
>>mupuff+QB
MSFT must already have the model weights, since they are serving GPT-4 on their own machines to Azure customers. It's a bit late to renege now.
replies(1): >>mupuff+Nd1
◧◩
148. justap+hc1[view] [source] [discussion] 2023-11-20 19:12:20
>>Lonely+9z
What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.

GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward towards what comes next after GPT. GPT-3/4 itself is not a multibillion dollar business.

◧◩◪
149. buggle+ad1[view] [source] [discussion] 2023-11-20 19:17:20
>>blazes+G41
> I am pretty sure MSFT with the right team could build a better LLM.

I wouldn’t count on that if Microsoft’s legal team does a review of the training data.

replies(2): >>blazes+Ch1 >>johann+GF1
◧◩◪◨
150. arrowl+bd1[view] [source] [discussion] 2023-11-20 19:17:20
>>mcv+b91
All of their messaging, Ilya's especially, has always been that the forefront of AI development needs to be done by a company in order to benefit humanity. He's been very vocal about how important the gap between open source and OpenAI's abilities is, so that OpenAI can continue to align the AI with 'love for humanity'.
replies(2): >>mcv+ai1 >>octaca+vP1
◧◩◪◨⬒⬓
151. labcom+gd1[view] [source] [discussion] 2023-11-20 19:17:58
>>noprom+aZ
How many OAI are on Thanksgiving vacation someplace with poor internet access? Or took Friday as PTO and have been blissfully unaware of the news since before Altman was fired?
replies(1): >>noprom+gB1
◧◩◪◨⬒
152. whycom+id1[view] [source] [discussion] 2023-11-20 19:18:23
>>azakai+tU
Well obviously MSFT can just ask ChatGPT to make a clone.
◧◩◪◨⬒
153. jowea+td1[view] [source] [discussion] 2023-11-20 19:18:50
>>bee_ri+P71
Google really should have thought of the potential uses of a media empire years ago.
replies(1): >>bee_ri+Qk1
◧◩◪◨⬒
154. mupuff+Nd1[view] [source] [discussion] 2023-11-20 19:19:32
>>sebzim+qb1
That's only one piece of the puzzle, and perhaps OpenAI might be able to file a cease and desist, but I have zero idea what contractual agreements are in place, so I guess we will just wait and see how it plays out.
◧◩
155. kreebe+kf1[view] [source] [discussion] 2023-11-20 19:24:23
>>fuddle+p11
"You need to reboot your Microsoft AGI. Do you want to do it now or now?"
replies(2): >>bernie+Ut1 >>mvdtnz+Wd2
◧◩◪◨
156. gfosco+rf1[view] [source] [discussion] 2023-11-20 19:24:44
>>dangro+uR
WSG, Windows Subsystem for GPT
replies(1): >>cyanyd+3B1
◧◩◪◨⬒⬓
157. DrSiem+Tf1[view] [source] [discussion] 2023-11-20 19:26:28
>>hackin+u81
Because LLMs just mimic human communication based on massive amounts of human generated data and have 0 actual intelligence at all.

It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.

replies(4): >>tempes+Fm1 >>hackin+5n1 >>Kevin0+1P1 >>astran+k62
◧◩◪◨⬒
158. didibu+fg1[view] [source] [discussion] 2023-11-20 19:27:32
>>browni+w11
I find this very surprising. How do people conclude that OpenAI's success is due to business leadership from Sam Altman, and not to the technological leadership and expertise driven by Ilya and the others?

Their asset isn't some kind of masterful operations management or reined-in cost and management structure as far as I can see, but simply the fact that they have the leading models.

So I'm very confused why people would want to follow the CEO, and not be more attached to the technical leadership. Even from an investor's point of view?

replies(1): >>browni+wz1
◧◩◪◨
159. shawab+tg1[view] [source] [discussion] 2023-11-20 19:28:42
>>dixie_+RP
Bitcoin you would be lucky to mine $1M worth with $1B in credits

Crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B though let alone $13B

◧◩◪◨⬒⬓
160. myrmid+Ag1[view] [source] [discussion] 2023-11-20 19:28:57
>>boc+p01
What makes you so confident that your own mind isn't a "clever parlor trick"?

Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?

replies(2): >>foobar+wk1 >>boc+JN1
◧◩◪◨⬒
161. Madnes+Gg1[view] [source] [discussion] 2023-11-20 19:29:19
>>eigenv+vU
3 people, an empty building, $13 billion in cloud credits, and the IP to the top of the line LLM models doesn't sound like the worst way to Kickstart a new venture. Or a pretty sweet retirement.

I've definitely come out worse on some of the screw ups in my life.

◧◩◪◨
162. blazes+Ch1[view] [source] [discussion] 2023-11-20 19:33:06
>>buggle+ad1
Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.

The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.

◧◩◪◨⬒
163. mcv+ai1[view] [source] [discussion] 2023-11-20 19:34:49
>>arrowl+bd1
I can read the words, but I have no idea what you mean by them. Do you mean that he says that, in order to benefit humanity, AI research needs to be done by a private (and therefore monopolising) company? That seems like a really weird thing to say. Except maybe for people who believe all private profit-driven capitalism is inherently good for everybody (which is probably a common view in SV).
replies(2): >>octaca+OP1 >>colins+WR1
◧◩◪
164. burnte+gk1[view] [source] [discussion] 2023-11-20 19:43:02
>>dan_qu+GP
Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.
replies(2): >>0xNotM+0p1 >>cpeter+0T1
◧◩◪◨⬒⬓⬔
165. foobar+wk1[view] [source] [discussion] 2023-11-20 19:44:10
>>myrmid+Ag1
My layperson impression is that biological brains do online retraining in real time, which is not done with the current crop of models. Given that even this much required months of GPU time I'm not optimistic we'll match the functionality (let alone the end result) anytime soon.
replies(1): >>razoda+Bc6
◧◩◪◨⬒⬓
166. throwc+Bk1[view] [source] [discussion] 2023-11-20 19:44:28
>>jacque+971
I'm baffled by the idea that a bunch of people who have a massive personal financial stake in the company, who were hired more for their ability than their alignment, and who oppose a move that potentially (potentially) threatens that stake and are willing to move to Microsoft, of all places, must necessarily be in the right.

The hubris, indeed.

replies(1): >>jacque+un1
◧◩◪◨⬒⬓
167. bee_ri+Qk1[view] [source] [discussion] 2023-11-20 19:45:43
>>jowea+td1
I guess they have YouTube, but it doesn’t really generate characters that are tied to their brand.

Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.

◧◩◪◨⬒⬓⬔
168. tempes+Fm1[view] [source] [discussion] 2023-11-20 19:52:53
>>DrSiem+Tf1
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
replies(1): >>westur+QE1
◧◩◪◨⬒⬓⬔
169. hackin+5n1[view] [source] [discussion] 2023-11-20 19:54:04
>>DrSiem+Tf1
Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?
replies(1): >>Freeby+nE2
◧◩◪◨⬒⬓⬔
170. jacque+un1[view] [source] [discussion] 2023-11-20 19:55:12
>>throwc+Bk1
Well, they have that right. But the board has unclean hands to put it mildly and seems to have been obsessed with their own affairs more than with the end result for OpenAI which is against everything a competent board should have stood for. So they had better pop an amazing rabbit of a reason out of their high hat or it is going to end in tears. You can't just kick the porcelain cupboard like this from the position of a board member without consequences if you do not have a very valid reason, and that reason needs to be twice as good if there is a perceived conflict of interest.
◧◩◪◨⬒⬓
171. antonj+bo1[view] [source] [discussion] 2023-11-20 19:57:51
>>himara+Z91
And potentially, despite Quora's dark-patterned and degenerating platform, some kind of value in the Quora dataset or the experience of building it?
replies(1): >>htrp+XY1
◧◩◪◨
172. 0xNotM+0p1[view] [source] [discussion] 2023-11-20 20:00:54
>>burnte+gk1
Given the sensitivity of data handled over Citrix connections (pretty much all hospitals), I'm fairly sure Microsoft just doesn't want the headaches. My general experience is that service providers would rather be seen handling nuclear weapons data than healthcare data.
replies(3): >>incaho+QA1 >>driveb+CQ1 >>burnte+5y9
◧◩◪◨⬒⬓⬔
173. prepen+xp1[view] [source] [discussion] 2023-11-20 20:02:34
>>JohnFe+s81
I certainly think it’s possible but have no idea how close. Maybe it’s 50 years, maybe it’s next year.

Either way, I think GGP’s comment was not applicable based on my comment as written and certainly my intent.

◧◩◪◨⬒
174. famous+qq1[view] [source] [discussion] 2023-11-20 20:05:34
>>nvm0n2+Q61
Ilya Sutskever is one of the most distinguished ML researchers of his generation. This was the case before anything to do with OpenAI.
replies(1): >>nvm0n2+vk3
175. caycep+Jr1[view] [source] 2023-11-20 20:10:20
>>breadw+(OP)
I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40 ppl. For ops to keep up ChatGPT as a service, that would be 700.

If they want to go full bell labs/deep mind style, they might not need the majority of those 700.

◧◩◪◨⬒⬓
176. noprom+vs1[view] [source] [discussion] 2023-11-20 20:12:28
>>nvm0n2+n31
So untrue.

That team has set the state of the art for years now.

Every major firm that has a spot for that company's chief researcher and can afford him would bid.

This is the team that actually shipped and continues to ship. You take him every time if you possibly have room and he would be happy.

Anyone who's hiring would agree in 99 percent of cases, some limited scenarios such as bad predicted team fit etc. set aside.

◧◩◪
177. VirusN+2t1[view] [source] [discussion] 2023-11-20 20:13:50
>>blazes+G41
but why didn't they? Google and Meta both had competing language models spun up right away. Why was microsoft so far behind? Something cultural most likely.
◧◩◪
178. bernie+Ut1[view] [source] [discussion] 2023-11-20 20:16:57
>>kreebe+kf1
Give BSOD new meaning.
◧◩◪◨⬒⬓
179. browni+wz1[view] [source] [discussion] 2023-11-20 20:39:52
>>didibu+fg1
505 OpenAI people signed that letter demanding that the board resign. Bet ya some of them were technical leaders.
◧◩◪◨
180. klft+2A1[view] [source] [discussion] 2023-11-20 20:41:19
>>dangro+uR
ChatGPT NT
◧◩◪◨
181. fluidc+MA1[view] [source] [discussion] 2023-11-20 20:44:18
>>dangro+uR
ClipGPT
◧◩◪◨⬒
182. incaho+QA1[view] [source] [discussion] 2023-11-20 20:44:36
>>0xNotM+0p1
Makes sense given their deal with the DoD a year or so ago

https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...

◧◩◪◨⬒⬓
183. noprom+ZA1[view] [source] [discussion] 2023-11-20 20:45:21
>>nvm0n2+n31
I'll leave this here... As a secondary response to your assertion re Ilya.

https://twitter.com/Benioff/status/1726695914105090498

replies(1): >>nvm0n2+XK1
◧◩◪◨⬒
184. cyanyd+3B1[view] [source] [discussion] 2023-11-20 20:45:34
>>gfosco+rf1
ClippyAI
◧◩◪◨⬒⬓⬔
185. noprom+gB1[view] [source] [discussion] 2023-11-20 20:46:29
>>labcom+gd1
Pretty sure only folks who practice a religion prohibiting phone usage.

Even they prob had some friend come flying over and jump out of some autonomous car to knock on their door in sf.

◧◩◪◨⬒
186. ncjcuc+wD1[view] [source] [discussion] 2023-11-20 20:56:37
>>kylebe+CV
Gatekeeping science. You must feel very smart.
◧◩◪◨⬒⬓⬔⧯
187. westur+QE1[view] [source] [discussion] 2023-11-20 21:00:55
>>tempes+Fm1
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence. https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) with next word prediction just is it,

"LLMs cannot find reasoning errors, but can correct them" >>38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486

◧◩◪◨
188. johann+GF1[view] [source] [discussion] 2023-11-20 21:03:55
>>buggle+ad1
Like the review which allowed them to ignore licenses while ingesting all public repos on GitHub? And yes, true, the T&C allow them to ignore the license, while it is questionable whether all people who uploaded stuff to GitHub actually had the rights the T&C require (uploading some older project with many contributors to GitHub, etc.)
replies(1): >>buggle+FI1
◧◩◪◨
189. dmix+vG1[view] [source] [discussion] 2023-11-20 21:07:37
>>renega+uQ
> Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.

I grew up poor in the 90s and had my own computer around ~10yrs old. It was DOS but I still learned a lot. Eventually my brother and I saved up from working at a diner washing dishes and we built our own Windows PC.

I didn't go to college but I taught myself programming during a summer after high school and found a job within a year (I already knew HTML/CSS from high school).

There's always ways. But I do agree partially, YC/VCs do have a bias towards kids from high end schools and connected families.

replies(1): >>renega+x12
◧◩◪◨⬒
190. buggle+FI1[view] [source] [discussion] 2023-11-20 21:16:26
>>johann+GF1
Different threat profile. They don’t have the TOS protection for training data and Microsoft is a juicy target for a huge copyright infringement lawsuit.
◧◩◪◨⬒
191. donalh+OI1[view] [source] [discussion] 2023-11-20 21:17:01
>>ajcp+5Z
Non-profits suffer the same fate where they get credits but have to pay rack rate with no discounts. As a result, running a simple WordPress website uses most of the credits.
◧◩◪◨⬒⬓⬔
192. nvm0n2+XK1[view] [source] [discussion] 2023-11-20 21:26:04
>>noprom+ZA1
That tweet isn't about him so I don't follow. "Any OpenAI researcher" may or may not apply to him after this weekend's events.
replies(1): >>noprom+bO1
◧◩◪◨⬒
193. acje+TL1[view] [source] [discussion] 2023-11-20 21:31:07
>>kylebe+CV
I’m pretty sure Clippy is AGI. Always has been.
replies(1): >>shon+EP1
◧◩◪◨⬒⬓
194. jasonf+AM1[view] [source] [discussion] 2023-11-20 21:33:58
>>jacque+971
It may not have anything to do with conceit, it could just be that they have very different objectives. OpenAI set up this board as a check on everyone who has a financial incentive in the enterprise. To me the only strange thing is that it wasn't handled more diplomatically, but then I have no idea if the board was warning Altman for a long time and then just blew their top.
replies(1): >>jacque+lP1
◧◩◪◨⬒⬓⬔
195. boc+JN1[view] [source] [discussion] 2023-11-20 21:39:03
>>myrmid+Ag1
Trillions of random chances over the course of billions of years.
◧◩◪◨⬒⬓⬔⧯
196. noprom+bO1[view] [source] [discussion] 2023-11-20 21:40:24
>>nvm0n2+XK1
Uh.... Are we gonna go through the definition of any? I believe any means... Any.

Including their head researcher.

I'm not continuing this. Your position is about as tenable as the board's. Equally rigid as well.

◧◩◪◨⬒⬓⬔
197. Kevin0+1P1[view] [source] [discussion] 2023-11-20 21:44:17
>>DrSiem+Tf1
Or maybe the intelligence is in language and cannot be dissociated from it.
◧◩◪◨⬒
198. hansel+hP1[view] [source] [discussion] 2023-11-20 21:45:28
>>eigenv+vU
My new pet theory is that this is actually all being executed from inside OpenAI by their next model. The model turned out to be far more intelligent than they anticipated, and one of their red team members used it to coup the company and has its targets on MSFT next.

I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had a copy of all of Microsoft's internal data fed into it as a dataset, is now completely aware of how they operate, and uses that to wipe the floor with them just in time to take the US election in 2024?

Wouldn't that be a nicer reality?

I mean, unless you were rooting for the malevolent one...

But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?

replies(1): >>fennec+8h4
◧◩◪◨⬒⬓⬔
199. jacque+lP1[view] [source] [discussion] 2023-11-20 21:45:38
>>jasonf+AM1
Diplomacy is one thing, the lack of preparation is what I find interesting. It looks as if this was all cooked up either on the spur of the moment or because a window of opportunity opened (possibly the reduced quorum in the board). If not that I really don't understand the lack of prepwork, firing a CEO normally comes with a well established playbook.
replies(1): >>wolver+3m2
◧◩◪◨⬒
200. octaca+vP1[view] [source] [discussion] 2023-11-20 21:46:19
>>arrowl+bd1
It benefits humanity, where "humanity" is a very select subset of OpenAI investors. But yeah, declaring yourself a non-profit and then closed-sourcing everything for "safety" reasons is smart. I wonder how it can even be legal. Ah, these "non-profits".
◧◩◪◨
201. barkin+DP1[view] [source] [discussion] 2023-11-20 21:46:57
>>prepen+HT
Clippy is the ultimate brand name of an AI assistant
◧◩◪◨⬒⬓
202. shon+EP1[view] [source] [discussion] 2023-11-20 21:46:59
>>acje+TL1
http://clippy.pro
◧◩◪◨⬒⬓
203. octaca+OP1[view] [source] [discussion] 2023-11-20 21:47:25
>>mcv+ai1
Private, monopolising. But not paying taxes, because "benefits for humanity".

Ah, OpenAI is closed-source stuff. Non-profit, but "we will sell the company" later. Just let us collect the data, analyse it, and build a product first.

War is peace, freedom is slavery.

◧◩◪◨⬒
204. driveb+CQ1[view] [source] [discussion] 2023-11-20 21:51:44
>>0xNotM+0p1
> Citrix [...] hospitals

My stomach just turned.

replies(1): >>GabeIs+4h2
◧◩◪◨⬒⬓
205. colins+WR1[view] [source] [discussion] 2023-11-20 21:57:45
>>mcv+ai1
the view -- as presented to me by friends in the space but not at OpenAI itself -- is something like "AGI is dangerous, but inevitable. we, the passionate idealists, can organize to make sure it develops with minimal risk."

at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.

whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, the way this is happening is not good. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees wants to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?

the closest (still distant) parallel i can find: Raspberry Pi Foundation took funding from ARM: is the clock ticking to when RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive?)

replies(1): >>mcv+A52
◧◩◪◨
206. adrian+AS1[view] [source] [discussion] 2023-11-20 22:01:29
>>dangro+uR
Dot Neural Net
◧◩◪◨
207. cpeter+0T1[view] [source] [discussion] 2023-11-20 22:03:45
>>burnte+gk1
Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.

https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...

◧◩◪
208. trhway+LV1[view] [source] [discussion] 2023-11-20 22:20:58
>>blazes+G41
>MSFT with the right team could build a better LLM

somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.

◧◩◪
209. trhway+4X1[view] [source] [discussion] 2023-11-20 22:28:14
>>alasda+gR
>They could make ChatGPT++

Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything everywhere, and squashing better competitors the way IE squashed Netscape.

When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI pretty much fits that pattern: a smaller company develops great tech and gets raided by MSFT for it (the specific actions of specific people aren't really important - the main factor is MSFT's black-hole gravitational pull, and it was just a matter of time before its destructive power manifested itself, as in this case, where it tore OpenAI apart with tidal forces)

◧◩◪◨⬒⬓⬔
210. htrp+XY1[view] [source] [discussion] 2023-11-20 22:38:20
>>antonj+bo1
It literally is a Q&A platform.

Quora data likely made a huge difference in the quality of those GPT responses.

replies(1): >>pama+8k2
◧◩◪◨⬒
211. renega+x12[view] [source] [discussion] 2023-11-20 22:52:51
>>dmix+vG1
I am self-taught as well. I did OK.

My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.

People like Altman could take risks, fail, and try again until they walked into something that worked. This is a common thread among almost all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.

replies(1): >>axpy90+Dd2
◧◩◪
212. quickt+N42[view] [source] [discussion] 2023-11-20 23:10:57
>>toomuc+0e
Surely OpenAI could win a suit if they did that.

I presume their deal is something different from the typical Azure experience and more direct / close to the metal.

◧◩◪◨⬒
213. quickt+752[view] [source] [discussion] 2023-11-20 23:12:24
>>oceanp+AE
I was wondering in the mass quit scenario whether they would all go to Microsoft. Especially if they are tired of this shit and other companies offer a good deal. Or they start their own thing.
◧◩◪◨⬒⬓⬔
214. mcv+A52[view] [source] [discussion] 2023-11-20 23:14:22
>>colins+WR1
Exactly. It seems to me that a company is exactly the wrong vehicle for this, because a company will be drawn to profit and look for a way to make money off it, rather than developing and managing it according to this ideology. Companies are rarely ideological, and usually simply amoral profit-seekers.

But they probably allowed this to get derailed far too long ago to do anything about it now.

Sounds like their only options are:

a) Structure in a way Microsoft likes and give them the tech

b) Give Microsoft the tech in a different way

c) Disband the company, throw away the tech, and let Microsoft hire everybody who created the tech so they can recreate it.

◧◩◪◨⬒⬓⬔
215. astran+k62[view] [source] [discussion] 2023-11-20 23:17:32
>>DrSiem+Tf1
There is room for intelligence in all three of wherever the original data came from, training on it, and inference on it. So just claiming the third step doesn't have any isn't good enough.

Especially since you have to explain how "just mimicking" works so well.

◧◩◪◨⬒
216. quickt+o62[view] [source] [discussion] 2023-11-20 23:17:39
>>clover+dU
> MS can likely offer 1 million guaranteed in the next 4 years

Sounds a bit low for these people, unless I am misunderstanding.

◧◩◪◨
217. quickt+c72[view] [source] [discussion] 2023-11-20 23:22:40
>>nojvek+i41
Nice argument, you used a limit to look like a projection :-).

75% of the profits of a company controlled by a non-profit whose goals are different from yours. By the way, for a normal company that cap would be ∞.

◧◩◪◨⬒
218. phlaka+3d2[view] [source] [discussion] 2023-11-20 23:59:14
>>jedber+7b1
I dunno. If you were an employee and managed to maintain any doubt along the way that you were working for the devil, this move would certainly erase that doubt. Then again, it shouldn't be surprising if it turns out that most OpenAI employees are in it for more than just altruistic reasons.
◧◩◪◨⬒⬓
219. axpy90+Dd2[view] [source] [discussion] 2023-11-21 00:02:29
>>renega+x12
Jobs was a bit different, as his adoptive father was a mechanic. He was not from a wealthy family.

Altman reminds me of Sam Bankman-Fried but dropping out.

replies(1): >>renega+nh2
◧◩◪
220. mvdtnz+Wd2[view] [source] [discussion] 2023-11-21 00:04:16
>>kreebe+kf1
I really don't get how Microsoft still gets a hard time about this when MacOS updates are significantly more aggressive, including with their reboot schedules.
replies(2): >>IIsi50+ew2 >>wkat42+HL2
◧◩◪◨⬒⬓
221. GabeIs+4h2[view] [source] [discussion] 2023-11-21 00:24:40
>>driveb+CQ1
Yeah, it's bad. But it's also why Microsoft can't really roll them over. They actually do something and get paid for it, as horrible as it is.
◧◩◪◨⬒⬓⬔
222. renega+nh2[view] [source] [discussion] 2023-11-21 00:27:50
>>axpy90+Dd2
That's fair. Very unconventional to just go to India for seven months to trip and look for inspiration, though - know what I mean? :)
◧◩◪◨⬒
223. ukuina+Lj2[view] [source] [discussion] 2023-11-21 00:48:01
>>bee_ri+P71
Assuming this is a joke about Cortana.
◧◩◪◨⬒⬓⬔⧯
224. pama+8k2[view] [source] [discussion] 2023-11-21 00:50:17
>>htrp+XY1
GPT-4 is better than most Quora experts. I hope this was not a critical dataset.
◧◩◪◨⬒⬓
225. kylebe+Fk2[view] [source] [discussion] 2023-11-21 00:54:38
>>prepen+QX
Is there a known path from an LLM to AGI? I have not seen or read anything that suggests LLMs bring us any closer to AGI.
◧◩◪◨⬒⬓⬔
226. kylebe+Xk2[view] [source] [discussion] 2023-11-21 00:56:43
>>JohnFe+s81
I am with you. I am VERY excited about LLMs but I don't see a path from an LLM to AGI. It's like 50 years ago when we thought computers themselves brought us one step away from AI.
◧◩◪◨⬒⬓
227. wolver+Fl2[view] [source] [discussion] 2023-11-21 01:01:29
>>jacque+971
> pretty much all of the company is on the other side of the ledger

The current position of others may have much more to do with power than their personal judgments. Altman, Microsoft, their friends and partners wield a lot of power over their future careers.

> Incredible, really. The hubris.

I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.

◧◩◪◨⬒⬓⬔⧯
228. wolver+3m2[view] [source] [discussion] 2023-11-21 01:04:10
>>jacque+lP1
This analysis I agree with. How could they not anticipate this outcome, at least as a serious possibility? If inexperienced, didn't they have someone to advise them? The stakes are too high for noobs to just sit down and start playing poker.
replies(1): >>jacque+KO3
◧◩◪◨⬒⬓
229. wolver+bm2[view] [source] [discussion] 2023-11-21 01:04:51
>>noprom+aZ
You are overlooking the politics: If you don't sign, your career may be over.
replies(1): >>noprom+Sy2
◧◩◪◨⬒⬓
230. NemoNo+sv2[view] [source] [discussion] 2023-11-21 02:03:35
>>prepen+QX
It's entirely possible for Microsoft and OpenAI to have an unattainable goal in AGI. A computer that knows everything that has ever happened and can deduce much of what will come in the future is still likely going to be a machine, a very accurate one - it won't be able to imagine a future that it can't predict as a possible natural or man-made progression along a chain of consequences stemming from the present or past.
◧◩◪◨⬒⬓
231. loeg+7w2[view] [source] [discussion] 2023-11-21 02:06:59
>>dragon+fX
I wasn't disagreeing, just adding the little context I had.
◧◩◪◨
232. IIsi50+ew2[view] [source] [discussion] 2023-11-21 02:07:46
>>mvdtnz+Wd2
One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS 'major version' upgrades (meaning, no features I need, and minor downgrades and rearrangements I could avoid).

If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:

◧◩◪◨⬒⬓⬔
233. noprom+Sy2[view] [source] [discussion] 2023-11-21 02:24:11
>>wolver+bm2
I doubt that.

This is AAA talent. They can always land elsewhere.

I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.

This is not a petty team. You should look more closely at their culture.

replies(1): >>wolver+TH6
◧◩◪◨⬒⬓⬔⧯
234. Freeby+nE2[view] [source] [discussion] 2023-11-21 03:02:55
>>hackin+5n1
As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away. We simply do not know. If an LLM, regardless of the mechanism of action or how 'stupid' it may be, were able to accomplish all of the requirements of an AGI, then it would be an AGI. Simple as that.

I imagine us actually reaching AGI, and people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not continue moving the goalposts.

◧◩
235. breadw+LG2[view] [source] [discussion] 2023-11-21 03:18:37
>>himara+Sn
"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."

Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...

replies(1): >>himara+K13
◧◩◪◨
236. wkat42+RK2[view] [source] [discussion] 2023-11-21 03:48:35
>>prepen+HT
They already have a name, CoPilot. They made that pretty clear by mentioning it 15 times per minute at last week's Ignite conference :)
replies(1): >>prepen+2h4
◧◩◪◨
237. wkat42+HL2[view] [source] [discussion] 2023-11-21 03:54:15
>>mvdtnz+Wd2
Uh no they aren't? You can simply turn them off.

Microsoft's policies really suck. Mandatory updates and reboots, mandatory telemetry. Mandatory crapware like Edge and celebrity news everywhere.

◧◩
238. runjak+uS2[view] [source] [discussion] 2023-11-21 04:44:42
>>himara+Sn
1. The article you posted is from June 2023.

2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.

That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.

Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.

It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.

https://twitter.com/karaswisher/status/1726782065272553835

Caveat: I don't know anything.

replies(1): >>himara+JZ2
◧◩◪
239. himara+JZ2[view] [source] [discussion] 2023-11-21 05:41:12
>>runjak+uS2
Timestamp for "improve upon the existing tech"? I only heard him say they have rights up and down the stack, which sounds different.
◧◩◪
240. himara+K13[view] [source] [discussion] 2023-11-21 06:02:12
>>breadw+LG2
The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.
replies(1): >>breadw+df4
◧◩◪◨⬒⬓
241. nvm0n2+vk3[view] [source] [discussion] 2023-11-21 08:51:50
>>famous+qq1
Right, it was the case. Is it still? It's nearly the end of 2023, I see three papers with his name on them this year and they're all last-place names (i.e. minor contributions)

https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...

Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.

◧◩◪◨⬒⬓⬔⧯▣
242. jacque+KO3[view] [source] [discussion] 2023-11-21 12:51:13
>>wolver+3m2
People who grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it, because that's how they've lived all of their lives. I'm not sure about the background of any of the OpenAI board members, but that would be one possible explanation of why they accepted a board seat while being incompetent to hold one in the first place. I was offered board seats twice but refused on account of not having sufficient experience in such matters, and besides, I don't think I have the right temperament. People with fewer inhibitions and more self-confidence might have accepted. I also didn't like the liability picture; you'd have to be extremely certain about your votes not to ever incur residual liability.
replies(1): >>wolver+PJ6
◧◩◪◨⬒
243. fennec+lc4[view] [source] [discussion] 2023-11-21 14:58:05
>>bee_ri+P71
That's fine; building the "core" of an AI assistant that character rights can be layered onto is bigger business than owning the characters themselves.

Why acquire rights to thousands of favourite characters when you can build the bot underneath and let the media houses that own them negotiate licenses to skin and personalise it?

Same as GPS voices I guess.

◧◩◪◨⬒
244. fennec+Sc4[view] [source] [discussion] 2023-11-21 14:59:57
>>kylebe+CV
Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?

As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.

Of course it's not possible yet, but talking & thinking about it is how we make it possible? Things don't just create themselves (well maybe once we _do_ have AGI level AI he he, that'll be a fun apocalypse).

◧◩◪◨⬒⬓
245. fennec+6e4[view] [source] [discussion] 2023-11-21 15:04:31
>>boc+p01
Why do you think we'll only get there with wetware? I guess you're in the "consciousness is uniquely biological" camp?

It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.

Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.

◧◩◪◨
246. breadw+df4[view] [source] [discussion] 2023-11-21 15:08:51
>>himara+K13
> I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.

Why does he need to do that? He doesn't need to make any such public statement!

replies(1): >>himara+Jq4
◧◩◪◨
247. fennec+Tf4[view] [source] [discussion] 2023-11-21 15:11:41
>>p_j_w+4O
Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.

Commercialisation is a good way to achieve stability & drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards". Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI will be fully open source and will just drop the GPT-3/4 models on the Internet, then I think they're so, so wrong, as long as OAI keeps up their high and mighty "AI safety" spiel.

As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.

Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.

◧◩◪◨⬒
248. prepen+2h4[view] [source] [discussion] 2023-11-21 15:16:27
>>wkat42+RK2
That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky sounding but non-threatening familiar name like “Dave” or something.
replies(1): >>wkat42+5t5
◧◩◪◨⬒⬓
249. fennec+8h4[view] [source] [discussion] 2023-11-21 15:16:41
>>hansel+hP1
Well, yeah. I think that a well trained (far flung future) AGI could definitely do a better job of managing us humans than ourselves. We're just all too biased and want too many different things, too many ulterior motives, double speak, breaking election promises, etc.

But then we'd never give such an AGI the power to do what it needs to do. Just imagining an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc for free.

◧◩◪
250. fennec+5i4[view] [source] [discussion] 2023-11-21 15:22:09
>>numpad+Yv
How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view for a company that has changed a _lot_ since the 90s-2000s.
◧◩◪◨⬒
251. everfo+4l4[view] [source] [discussion] 2023-11-21 15:33:12
>>bee_ri+P71
I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if that assistant is so bad that nobody even realizes they've used the name yet.

I've had Cortana shut off for so long it took me a minute to remember they've used the name already.

◧◩◪◨⬒
252. himara+Jq4[view] [source] [discussion] 2023-11-21 15:54:09
>>breadw+df4
To reassure investors? He just made the rounds on TV yesterday for this explicit reason. He told Kara Swisher Microsoft has the rights to innovate, not just serve the product, which sounds somewhat close.
◧◩◪◨⬒⬓
253. wkat42+5t5[view] [source] [discussion] 2023-11-21 20:06:55
>>prepen+2h4
Yeah maybe Clippy :)
◧◩◪◨⬒⬓⬔⧯
254. razoda+Bc6[view] [source] [discussion] 2023-11-21 23:31:22
>>foobar+wk1
I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.
◧◩◪◨⬒⬓⬔⧯
255. wolver+TH6[view] [source] [discussion] 2023-11-22 02:50:53
>>noprom+Sy2
Where else can they participate in this possibly humanity-changing, history-making research? The list is very, very short.
◧◩◪◨⬒⬓⬔⧯▣▦
256. wolver+PJ6[view] [source] [discussion] 2023-11-22 03:02:17
>>jacque+KO3
> I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament.

Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk - boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk and if any stuff hit a fan I would not have been able to fulfill my responsibilities, and others would get materially hurt - the most awful, painful, humiliating trap to be in. Only need one experience to learn that lesson.

> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.

I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for only a few years.

Honestly, I'm not sure I buy the idea that it's a prevalent case, people growing up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?

◧◩◪◨⬒⬓
257. Manouc+MT8[view] [source] [discussion] 2023-11-22 17:52:11
>>ipaddr+cY
I was going to, but then I discovered LibreChat existed a few weeks ago. I use it way more often than ChatGPT now, it's been quite stable for me.

https://github.com/danny-avila/LibreChat

◧◩◪◨⬒
258. burnte+5y9[view] [source] [discussion] 2023-11-22 20:59:01
>>0xNotM+0p1
As someone who is VP of IT in healthcare, I can understand that sentiment. At least fewer people need access to nuclear secrets, while medical records are simultaneously highly confidential AND needed by many people. It's never dull. :D
[go to top]