zlacker

[parent] [thread] 34 comments
1. toomuc+(OP)[view] [source] 2023-11-20 14:57:36
"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.
replies(4): >>p_j_w+4A >>dicris+hA >>htrp+WP >>quickt+NQ1
2. p_j_w+4A[view] [source] 2023-11-20 17:47:48
>>toomuc+(OP)
It’s amazing to me to see people on HN advocating for a giant company bullying a smaller one with these kinds of skeezy tactics.
replies(7): >>geodel+mB >>DANmod+vB >>weird-+zB >>toomuc+FC >>eigenv+vG >>toaste+uM >>fennec+T14
3. dicris+hA[view] [source] 2023-11-20 17:48:33
>>toomuc+(OP)
Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.
replies(1): >>ajcp+5L
4. geodel+mB[view] [source] [discussion] 2023-11-20 17:52:06
>>p_j_w+4A
Not advocating, just reflecting on the reality of the situation.
5. DANmod+vB[view] [source] [discussion] 2023-11-20 17:52:28
>>p_j_w+4A
Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.
6. weird-+zB[view] [source] [discussion] 2023-11-20 17:52:48
>>p_j_w+4A
Presenting a scenario and advocating aren't the same thing.
7. toomuc+FC[view] [source] [discussion] 2023-11-20 17:55:38
>>p_j_w+4A
Explaining how the gazelle that confidently jumps into the oasis is going to get eaten isn't advocating for the crocodiles. See sibling comments.

Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.

If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.

replies(2): >>jacque+gK >>robbom+ZL
8. eigenv+vG[view] [source] [discussion] 2023-11-20 18:09:08
>>p_j_w+4A
Sounds like it won’t be much of a company in a couple days. Just 3 idiot board members wondering why the building is empty.
replies(4): >>noprom+aL >>jacque+9T >>Madnes+G21 >>hansel+hB1
9. jacque+gK[view] [source] [discussion] 2023-11-20 18:22:30
>>toomuc+FC
This is a great comment. Having an open eye towards what lessons you can learn from these events so that you don't have to re-learn them when they might apply to you is a very good way to ensure you don't pay avoidable tuition fees.
10. ajcp+5L[view] [source] [discussion] 2023-11-20 18:25:07
>>dicris+hA
I don't think the value of credits can be changed per tenant or customer that easily.

I've actually had a discussion with Microsoft on this subject, as they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they couldn't just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant, they said unfortunately no. I just chalked this up to MSFT sales tactics, but I was told candidly by some others who worked on that Azure resource that they were getting zero enterprise adoption of it because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.

replies(1): >>donalh+Ou1
11. noprom+aL[view] [source] [discussion] 2023-11-20 18:25:25
>>eigenv+vG
The wired article seems to be updated by the hour.

Now up to 600+/770 total.

Couple janitors. I dunno who hasn't signed that at this point ha...

Would be fun to see a counter letter explaining their reasoning for not signing on.

replies(2): >>labcom+gZ >>wolver+b82
12. robbom+ZL[view] [source] [discussion] 2023-11-20 18:28:59
>>toomuc+FC
This might be my favorite comment I've read on HN. Spot on.

Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?

replies(1): >>jzb+MV
13. toaste+uM[view] [source] [discussion] 2023-11-20 18:30:24
>>p_j_w+4A
Yeah seems extremely unbelievable.
14. htrp+WP[view] [source] 2023-11-20 18:42:12
>>toomuc+(OP)
That's basically the current situation with AI compute on the hyperscalers now.

Good luck trying to find H100 80s on the 3 big clouds.

15. jacque+9T[view] [source] [discussion] 2023-11-20 18:52:57
>>eigenv+vG
I'm having trouble imagining the level of conceit required to think that those three by their lonesome have it right when pretty much all of the company is on the other side of the ledger, and those are the people that stand to lose more. Incredible, really. The hubris.
replies(3): >>throwc+B61 >>jasonf+Ay1 >>wolver+F72
16. jzb+MV[view] [source] [discussion] 2023-11-20 19:02:58
>>robbom+ZL
Well, the public posting of some communications may be an obfuscation of what’s really being done and said.
17. labcom+gZ[view] [source] [discussion] 2023-11-20 19:17:58
>>noprom+aL
How many OAI folks are on Thanksgiving vacation someplace with poor internet access? Or took Friday as PTO and have been blissfully unaware of the news since before Altman was fired?
replies(1): >>noprom+gn1
18. Madnes+G21[view] [source] [discussion] 2023-11-20 19:29:19
>>eigenv+vG
3 people, an empty building, $13 billion in cloud credits, and the IP to the top-of-the-line LLMs doesn't sound like the worst way to kickstart a new venture. Or a pretty sweet retirement.

I've definitely come out worse on some of the screw ups in my life.

19. throwc+B61[view] [source] [discussion] 2023-11-20 19:44:28
>>jacque+9T
I'm baffled by the idea that a bunch of people with a massive personal financial stake in the company, hired more for their ability than their alignment, who oppose a move that potentially (potentially) threatens that stake and are willing to move to Microsoft, of all places, must necessarily be in the right.

The hubris, indeed.

replies(1): >>jacque+u91
20. jacque+u91[view] [source] [discussion] 2023-11-20 19:55:12
>>throwc+B61
Well, they have that right. But the board has unclean hands, to put it mildly, and seems to have been more obsessed with its own affairs than with the end result for OpenAI, which is against everything a competent board should stand for. So they had better pop an amazing rabbit of a reason out of their high hat or this is going to end in tears. You can't just kick the porcelain cupboard like this from the position of a board member without consequences if you do not have a very valid reason, and that reason needs to be twice as good if there is a perceived conflict of interest.
21. noprom+gn1[view] [source] [discussion] 2023-11-20 20:46:29
>>labcom+gZ
Pretty sure only folks who practice a religion prohibiting phone usage.

Even they prob had some friend come flying over and jump out of some autonomous car to knock on their door in sf.

22. donalh+Ou1[view] [source] [discussion] 2023-11-20 21:17:01
>>ajcp+5L
Non-profits suffer the same fate where they get credits but have to pay rack rate with no discounts. As a result, running a simple WordPress website uses most of the credits.
23. jasonf+Ay1[view] [source] [discussion] 2023-11-20 21:33:58
>>jacque+9T
It may not have anything to do with conceit, it could just be that they have very different objectives. OpenAI set up this board as a check on everyone who has a financial incentive in the enterprise. To me the only strange thing is that it wasn't handled more diplomatically, but then I have no idea if the board was warning Altman for a long time and then just blew their top.
replies(1): >>jacque+lB1
24. hansel+hB1[view] [source] [discussion] 2023-11-20 21:45:28
>>eigenv+vG
My new pet theory is that this is actually all being executed from inside OpenAI by their next model. The model turned out to be far more intelligent than they anticipated, and one of their red team members used it to coup the company and has its sights set on MSFT next.

I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had access to a copy of all of Microsoft's internal data as a dataset fed into it, is now completely aware of how they operate, and uses that to wipe the floor with them just in time to take the US election in 2024.

Wouldn't that be a nicer reality?

I mean, unless you were rooting for the malevolent one...

But yeah, coming back down to reality, the likelihood is that MS just bought a really valuable asset for almost free?

replies(1): >>fennec+834
25. jacque+lB1[view] [source] [discussion] 2023-11-20 21:45:38
>>jasonf+Ay1
Diplomacy is one thing; the lack of preparation is what I find interesting. It looks as if this was all cooked up either on the spur of the moment or because a window of opportunity opened (possibly the reduced quorum on the board). If not that, I really don't understand the lack of prep work; firing a CEO normally comes with a well-established playbook.
replies(1): >>wolver+382
26. quickt+NQ1[view] [source] 2023-11-20 23:10:57
>>toomuc+(OP)
Surely OpenAI could win a suit if they did that.

I presume their deal is something different from the typical Azure experience and more direct / close to the metal.

27. wolver+F72[view] [source] [discussion] 2023-11-21 01:01:29
>>jacque+9T
> pretty much all of the company is on the other side of the ledger

The current position of others may have much more to do with power than with their personal judgments. Altman, Microsoft, and their friends and partners wield a lot of power over their future careers.

> Incredible, really. The hubris.

I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.

28. wolver+382[view] [source] [discussion] 2023-11-21 01:04:10
>>jacque+lB1
This analysis I agree with. How could they not anticipate this outcome, at least as a serious possibility? If inexperienced, didn't they have someone to advise them? The stakes are too high for noobs to just sit down and start playing poker.
replies(1): >>jacque+KA3
29. wolver+b82[view] [source] [discussion] 2023-11-21 01:04:51
>>noprom+aL
You are overlooking the politics: If you don't sign, your career may be over.
replies(1): >>noprom+Sk2
30. noprom+Sk2[view] [source] [discussion] 2023-11-21 02:24:11
>>wolver+b82
I doubt that.

This is AAA talent. They can always land elsewhere.

I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.

This is not a petty team. You should look more closely at their culture.

replies(1): >>wolver+Tt6
31. jacque+KA3[view] [source] [discussion] 2023-11-21 12:51:13
>>wolver+382
People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives. I'm not sure about the background of any of the OpenAI board members, but that would be one possible explanation for why they accepted a board seat while being incompetent to do so in the first place. I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament. People with fewer inhibitions and more self-confidence might have accepted. I also didn't like the liability picture: you'd have to be extremely certain about your votes not to ever incur residual liability.
replies(1): >>wolver+Pv6
32. fennec+T14[view] [source] [discussion] 2023-11-21 15:11:41
>>p_j_w+4A
Well, I think it also has something to do with this: people really like the tech involved; it's cool, and most of us are here because we think tech is cool.

Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards." Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI will be fully open source and will just drop the GPT3/4 models on the Internet, then I think they're so, so wrong, at least as long as OAI keeps up their high and mighty "AI safety" spiel.

As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, albeit at the cost of changing the way we work. You know, like the industrial revolution and everything else that came before us whose fruits we enjoy.

Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamplighters, etc. back? They had to change careers as new technologies came about.

33. fennec+834[view] [source] [discussion] 2023-11-21 15:16:41
>>hansel+hB1
Well, yeah. I think that a well-trained (far-flung future) AGI could definitely do a better job of managing us humans than we do ourselves. We're just all too biased and want too many different things; too many ulterior motives, doublespeak, breaking election promises, etc.

But then we'd never give such an AGI the power to do what it needs to do. Just imagine an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc. for free.

34. wolver+Tt6[view] [source] [discussion] 2023-11-22 02:50:53
>>noprom+Sk2
Where else can they participate in this possibly humanity-changing, history-making research? The list is very, very short.
35. wolver+Pv6[view] [source] [discussion] 2023-11-22 03:02:17
>>jacque+KA3
> I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament.

Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk: boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk, and if any stuff hit a fan I would not have been able to fulfill my responsibilities, and others would have gotten materially hurt - the most awful, painful, humiliating trap to be in. You only need one experience to learn that lesson.

> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.

I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for a few years.

Honestly, I'm not sure I buy the idea that it's a prevalent case, people who grow up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?
