zlacker

[parent] [thread] 6 comments
1. hadloc+(OP)[view] [source] 2023-11-22 08:11:46
Branding counts for a lot, but LLMs are already a commodity. As soon as someone releases an LLM equivalent to GPT4 or GPT5, most cloud providers will offer it locally for a fraction of what OpenAI is charging, and the heaviest users will simply self-host. Go look at the company Docker. I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services" I suppose, but who is paying for it? Or go look at Redis or Elasti-anything. Or memcached. Or postgres. Or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.

It doesn't matter whether OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM; they can all write/complete a valid limerick about "There once was a man from Nantucket".

I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.
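
To make the "just another API" point concrete, here's a rough sketch of what self-hosting looks like from the client side, assuming a local server that speaks the OpenAI-compatible chat API (the URL, port, and model name below are made up):

    # sketch: same client code, just pointed at a self-hosted endpoint
    # (base_url, api_key and model name here are placeholder assumptions)
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",   # assumed local OpenAI-compatible server
        api_key="not-needed-locally",          # most local servers ignore the key
    )

    resp = client.chat.completions.create(
        model="local-model",                   # placeholder model name
        messages=[{"role": "user",
                   "content": "There once was a man from Nantucket..."}],
    )
    print(resp.choices[0].message.content)

Swap the base_url back to a hosted provider and nothing else changes, which is the whole point.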

replies(4): >>cyanyd+iw >>worlds+Ow >>iLoveO+YT2 >>rolisz+rL4
2. cyanyd+iw[view] [source] 2023-11-22 12:40:13
>>hadloc+(OP)
I think you're assuming that OpenAI is charging a $/compute price equal to what it costs them.

More likely, it's a loss leader: they're generating publicity by making it as cheap as possible.

_Everything_ we've seen come out of Silicon Valley does this, so why would they suddenly be charging the right price?

3. worlds+Ow[view] [source] 2023-11-22 12:44:49
>>hadloc+(OP)
> offer it locally for a fraction of what openAI is charging

I thought there was a somewhat clear consensus that OpenAI is currently running inference at a loss?

replies(1): >>hadloc+yg2
4. hadloc+yg2[view] [source] [discussion] 2023-11-22 21:07:38
>>worlds+Ow
Moore's law finally seems to have stalled for CPUs, but we've seen this pattern over and over. LLM-specific hardware will undoubtedly bring down the cost. The $10,000 A100 will not be the last GPU NVidia ever makes, nor will their competitors stand by and let them hold the market hostage.

Quake and Counter-Strike in the 1990s ran like garbage in software-rendering mode. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling, to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. It was almost two years after Quake's release before dedicated 3D video cards (the Voodoo 1 and 2 were accelerators that depended on a separate 2D VGA graphics card to feed them) began to hit the market.

Nowadays you can run those games (and their sequels) at thousands (tens of thousands?) of frames per second on a top-end modern card. I would imagine something similar will happen with LLM hardware. OpenAI is already prototyping their own hardware to train and run LLMs, and I would imagine NVidia hasn't been sitting on their hands either.

5. iLoveO+YT2[view] [source] 2023-11-23 00:41:34
>>hadloc+(OP)
> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.

You mean like they already do on Amazon Bedrock?

replies(1): >>hadloc+aX2
6. hadloc+aX2[view] [source] [discussion] 2023-11-23 01:02:31
>>iLoveO+YT2
Yeah, and it looks like they're going to offer Llama as well. They offer Red Hat Linux EC2 instances at a premium, and other paid-per-hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, while also offering a home-grown LLM at a lower rate once it's ready.
7. rolisz+rL4[view] [source] 2023-11-23 16:50:30
>>hadloc+(OP)
Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70b is more expensive to run than GPT-3.5, unless you can get 70+% utilization rate for your GPUs, which is hard to do.

So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them at similar speeds.
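
To see why utilization is the whole game, here's a toy back-of-the-envelope calculation. Every number below is an illustrative assumption, not a measurement:

    # toy math: cost per 1K generated tokens for a self-hosted 70B model
    # at different GPU utilization levels (all figures are assumptions)
    gpu_hourly_cost = 2 * 2.00        # assume 2 GPUs at ~$2/hr each to serve the model
    peak_tokens_per_sec = 500         # assumed aggregate throughput at full load

    for utilization in (0.1, 0.3, 0.7):
        tokens_per_hour = peak_tokens_per_sec * 3600 * utilization
        cost_per_1k = gpu_hourly_cost / tokens_per_hour * 1000
        print(f"{utilization:.0%} utilization -> ${cost_per_1k:.4f} per 1K tokens")

At low utilization the per-token cost is several times higher than at 70%, so a provider that can't keep its GPUs busy can't undercut anyone.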
