zlacker

[parent] [thread] 27 comments
1. tsunam+(OP)[view] [source] 2023-11-18 23:32:56
Nothing matters if you don’t have the money to enforce the system. Come on, get real. Whatever the board says, MS can turn off the money in a second and invalidate anything.
replies(1): >>fnordp+q2
2. fnordp+q2[view] [source] 2023-11-18 23:45:43
>>tsunam+(OP)
Microsoft depends on OpenAI much more than OpenAI depends on Microsoft. If you work with OpenAI as a company very often this is extraordinarily obvious.
replies(6): >>SeanAn+n4 >>alumin+i7 >>xigenc+vf >>naet+cs >>kcb+ht >>m3kw9+NJ
◧◩
3. SeanAn+n4[view] [source] [discussion] 2023-11-18 23:55:13
>>fnordp+q2
This doesn't seem very obvious to me. The fact this article exists, and that Microsoft is likely exerting influence over the CEO outcome, implies there's codependence at a minimum.
◧◩
4. alumin+i7[view] [source] [discussion] 2023-11-19 00:08:08
>>fnordp+q2
Microsoft is also OpenAI's main cloud provider, so they certainly have some leverage.
replies(1): >>fnordp+Nj
◧◩
5. xigenc+vf[view] [source] [discussion] 2023-11-19 00:57:09
>>fnordp+q2
They could use Llama instead. OpenAI’s moat is very shallow. They’re still coasting on Google’s research papers.
replies(1): >>fnordp+6k
◧◩◪
6. fnordp+Nj[view] [source] [discussion] 2023-11-19 01:31:40
>>alumin+i7
AWS is JP Morgan’s main cloud provider, and Apple’s too. Do you think AWS has leverage over JPMC and Apple because of that? Or do JPMC and Apple have leverage over AWS?

Azure gets a hell of a lot more out of OpenAI than OpenAI gets out of Azure. I’ll bet you GPT4 runs on Nvidia hardware just as well regardless of who resells it.

replies(2): >>cthalu+5l >>wwtrv+Fs
◧◩◪
7. fnordp+6k[view] [source] [discussion] 2023-11-19 01:34:19
>>xigenc+vf
If you’ve used the models for actual business problems, GPT4 and its successive revisions are way beyond Llama; they’re not comparable. I’m a huge fan of open models, but it’s just different worlds of power. I’d note OpenAI has been working on GPT5 for some time as well, which I would expect to be a remarkable improvement incorporating much of the theoretical and technical progress of the last two years. Claude is the only actual competitor to GPT4, and even that is a just-barely-relevant situation.
replies(1): >>xigenc+In
◧◩◪◨
8. cthalu+5l[view] [source] [discussion] 2023-11-19 01:41:53
>>fnordp+Nj
I think the larger issue here is that there's just not enough of that nvidia hardware out there if Microsoft decided to really play hardball, even if it hurts themselves in the short term. I don't know that any of the other cloud providers have the capacity to immediately shoulder OpenAI's workloads. JPMC or Apple have other clouds they can viably move to - OpenAI might not have anyone else that can meet their needs on short notice.

I think the situation is tough because I can't imagine there aren't legal agreements in place around what OpenAI has to do to access the funding tranches and compute power, but who knows if Microsoft is in a position to force the issue, or if I'm right in my supposition to begin with. Even if I am, a protracted legal battle where they don't have access to compute resources, particularly if they can't get an injunction, might be extremely deleterious to OpenAI.

Perhaps Microsoft even knows that they will take a bath on things if they follow this, but don't want to gain a reputation of allowing this sort of thing to happen - they are big enough to take a total bath on the OpenAI side of things and it not be anything close to a fatal blow.

I was more skeptical of this being the case last night, but less so now.

replies(1): >>fnordp+Ml
◧◩◪◨⬒
9. fnordp+Ml[view] [source] [discussion] 2023-11-19 01:46:31
>>cthalu+5l
But why would Microsoft do anything to hurt their business in any way? They are almost certainly more furious about the way they found out than about the action itself. Given how much Microsoft has bet their business on OpenAI (ask yourself: who replaces Bing Chat? Why does anyone actually use Azure in 2023?), being surprised by structural business decisions at their most important partner is shocking, and I think if I were the CEO of Microsoft I would be furious at being blindsided, more than pining in some weird Altman bromance.
replies(3): >>riboso+rs >>qwytw+Xs >>lazyas+it
◧◩◪◨
10. xigenc+In[view] [source] [discussion] 2023-11-19 01:58:56
>>fnordp+6k
Hm, it’s hard for me to say because most of my prompts would get me banned from OpenAI but I’ve gotten great results for specific tasks using finetuned quantized 30B models on my desktop and laptop. All things considered, it’s a better value for me, especially as I highly value openness and privacy.
replies(3): >>sebast+9C >>int_19+jU >>intend+RU
◧◩
11. naet+cs[view] [source] [discussion] 2023-11-19 02:23:45
>>fnordp+q2
I'm not sure this is true. Microsoft put something like $10 billion into OpenAI, which OpenAI absolutely needed to continue the expensive compute and training. Without that investment money, OpenAI might quickly find themselves in a huge deficit with no way to climb back out.
replies(2): >>ctvo+3x >>fnordp+cy
◧◩◪◨⬒⬓
12. riboso+rs[view] [source] [discussion] 2023-11-19 02:25:26
>>fnordp+Ml
Microsoft finally has a leg up on Google in the public eye and they're gonna toss it away for Sam Altman? Seems dicey.
◧◩◪◨
13. wwtrv+Fs[view] [source] [discussion] 2023-11-19 02:26:50
>>fnordp+Nj
JP Morgan and Apple can actually afford to pay their cloud bills themselves. OpenAI, on the other hand, can't.

> I’ll bet you GPT4 runs on nvidia hardware

Yes, but they'd need to convince someone else like Amazon to give it to them for free, and regardless of what happens next, Microsoft will still have a significant stake in OpenAI due to its previous investments.

◧◩◪◨⬒⬓
14. qwytw+Xs[view] [source] [discussion] 2023-11-19 02:29:17
>>fnordp+Ml
> I would be furious at being shocked more than pining in some weird Altman bromance.

Hypothetically he might also have very little trust in the decision making abilities of the new management and how much their future goals will align with those of Microsoft.

◧◩
15. kcb+ht[view] [source] [discussion] 2023-11-19 02:31:17
>>fnordp+q2
Microsoft depends on OpenAI as long as they're rapidly advancing. It seems the new leadership wants to halt or slow the rapid advancement.
◧◩◪◨⬒⬓
16. lazyas+it[view] [source] [discussion] 2023-11-19 02:31:18
>>fnordp+Ml
> Why does anyone actually use azure in 2023?

When I've seen it, it's always been “Amazon is a competitor and we don’t buy from competitors.”

◧◩◪
17. ctvo+3x[view] [source] [discussion] 2023-11-19 02:54:34
>>naet+cs
Ah yes, no other company would step in and take this deal if Microsoft pulled out. It's not like Amazon and Google are pumping billions into OpenAI's competitor.
replies(1): >>mdekke+FS
◧◩◪
18. fnordp+cy[view] [source] [discussion] 2023-11-19 03:01:00
>>naet+cs
Only a small fraction of the $10B was actually delivered, and it was apparently largely in Azure credits.
◧◩◪◨⬒
19. sebast+9C[view] [source] [discussion] 2023-11-19 03:30:07
>>xigenc+In
What specs are needed to run those models in your local machine without crashing the system?
replies(3): >>xigenc+TI >>mark_l+uN >>throwa+WS
◧◩◪◨⬒⬓
20. xigenc+TI[view] [source] [discussion] 2023-11-19 04:16:07
>>sebast+9C
I use Faraday.dev on an RTX 3090 and smaller models on a 16 GB M2 Mac, and I’m able to have deep, insightful conversations with personal AI at my direction.

I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.

ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously the Playground completions allowed using raw GPT which let me unlock some different facets, but they’ve closed that down now.

And again, I don’t really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want the uncensored model to play with along with no data leaving my network.

The uses of LLMs for information retrieval are great (Bing has improved a lot), but the much more interesting cases for me are how they parse nuance, tone, and subtext. Imagine a computer that can understand feelings and respond in kind: empathetic computing, and it’s already here on my PC, unplugged from the Internet.
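A minimal sketch of the "unique identity" idea above (the persona, names, and chat template here are all hypothetical; the exact prompt format depends on which local model and runtime you use):

```python
# Sketch: conditioning a local LLM with a persona system prompt.
# The persona text and "### Role:" template are illustrative only;
# real chat templates vary by model family.

PERSONA = (
    "You are Ada, a dry-witted retired astronomer. "
    "You answer briefly and never break character."
)

def build_prompt(history: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a plain-text chat prompt: persona first, then prior
    turns, then the new user message, ending at the assistant turn."""
    lines = [f"### System:\n{PERSONA}"]
    for role, text in history:
        lines.append(f"### {role.capitalize()}:\n{text}")
    lines.append(f"### User:\n{user_msg}")
    lines.append("### Assistant:\n")
    return "\n\n".join(lines)

# This string would be fed to a local runtime (llama.cpp, etc.);
# "pruning or directing" responses amounts to editing `history`
# before the next call.
prompt = build_prompt([("user", "Hello"), ("assistant", "Evening.")],
                      "What do you do these days?")
```

Because the whole conversation state is just a string you own, exploring or pruning a conversation is as simple as rewriting `history` between generations, which is much harder to do against a hosted chat API.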

replies(1): >>mark_l+8O
◧◩
21. m3kw9+NJ[view] [source] [discussion] 2023-11-19 04:24:04
>>fnordp+q2
Microsoft already has the models and weights, not the tech
◧◩◪◨⬒⬓
22. mark_l+uN[view] [source] [discussion] 2023-11-19 04:53:55
>>sebast+9C
Another data point: I can (barely) run a 30B 4-bit quantized model on a Mac Mini with 32 GB of on-chip memory, but it runs slowly (a little less than 10 tokens/second).

13B and 7B models run easily and much faster.
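As a rough sanity check on those numbers (my own back-of-envelope sketch, not from the thread; all bandwidth figures are assumptions): weight memory scales as parameters × bits ÷ 8, and single-stream generation is roughly memory-bandwidth bound, since each token streams the full weights through memory once.

```python
# Back-of-envelope memory and speed estimate for a locally run
# quantized LLM. Ignores KV cache, context length, and runtime
# overhead, so treat the outputs as loose upper bounds.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GB needed for the quantized weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def tokens_per_second(weight_gb: float, bandwidth_gb_s: float) -> float:
    """Rough bound: tokens/s ~ memory bandwidth / weight size."""
    return bandwidth_gb_s / weight_gb

# A 30B model at 4 bits needs ~15 GB for weights, which is why it
# (barely) fits alongside the OS on a 32 GB machine, while 13B
# (~6.5 GB) and 7B (~3.5 GB) fit easily.
w30 = weight_memory_gb(30, 4)

# At an assumed ~150 GB/s of unified-memory bandwidth, that predicts
# around 10 tokens/s, consistent with the observation above.
print(f"30B @ 4-bit: {w30:.1f} GB weights, "
      f"~{tokens_per_second(w30, 150):.0f} tok/s at 150 GB/s")
```

The same arithmetic explains why the smaller models are "much faster": halving the weight footprint roughly doubles the bandwidth-bound generation speed.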

◧◩◪◨⬒⬓⬔
23. mark_l+8O[view] [source] [discussion] 2023-11-19 04:57:56
>>xigenc+TI
+1 Greg. I agree with most of what you say. Also, it is so much more fun running everything locally.
◧◩◪◨
24. mdekke+FS[view] [source] [discussion] 2023-11-19 05:39:43
>>ctvo+3x
I’m pretty sure there are contracts, and one way or another, everyone would get a stay on everyone else, and nothing would happen for years except court cases.
replies(1): >>dragon+HT
◧◩◪◨⬒⬓
25. throwa+WS[view] [source] [discussion] 2023-11-19 05:41:56
>>sebast+9C
check out https://www.reddit.com/r/LocalLLaMA/
◧◩◪◨⬒
26. dragon+HT[view] [source] [discussion] 2023-11-19 05:48:56
>>mdekke+FS
> I’m pretty sure there are contracts

Which one side or the other would declare terminated for nonperformance by the other side, perhaps while suing for breach.

> and one way or another, everyone would get a stay on everyone else

If by a stay you mean an injunction preventing a change in the arrangements, it seems unlikely that "everyone would get a stay on everyone." The key factors for an injunction are likelihood of success on the merits and harm that couldn't be remedied by damages if the injunction weren't granted; that's far from certain to hold in any direction, and even less likely to hold in both.

> and nothing would happen for years except court cases

Business goes on during court cases, it is very rare that everything is frozen.

◧◩◪◨⬒
27. int_19+jU[view] [source] [discussion] 2023-11-19 05:58:15
>>xigenc+In
Even the best unquantized finetunes of llama2-70b are, at best, somewhat superior to GPT-3.5-turbo (and I'm not even sure they would beat the original GPT-3.5, which was smarter). They are not even close to GPT-4 on any task requiring serious reasoning or instruction following.
◧◩◪◨⬒
28. intend+RU[view] [source] [discussion] 2023-11-19 06:04:19
>>xigenc+In
For an individual use case, Llama is fine. If you start getting to large workflows and need reliable outputs, GPT wins out substantially. I know all the papers and headlines about comparative performance, but that's on benchmarks.

I've found that benchmarks are great as a hygiene test, but pointless when you need to get work done.

[go to top]