zlacker

[parent] [thread] 41 comments
1. Terrif+(OP)[view] [source] 2023-11-22 06:19:15
The OpenAI of the past, which dabbled in random AI stuff (remember their DotA 2 bot?), is gone.

OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT-4? You shut your mouth. It doesn't matter if society at large suffers for it.

Altman's/Microsoft’s takeover of the former non-profit is now complete.

Edit: Let this be a lesson to us all. Just because something claims to be a non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can take over almost any organization. Non-profit status, and whatever the organization's charter says, is temporary.

replies(8): >>karmas+82 >>g42gre+03 >>robbom+D4 >>krisof+Ed >>quickt+Rd >>Havoc+Pk >>3cats-+Zo >>cyanyd+9M
2. karmas+82[view] [source] 2023-11-22 06:33:17
>>Terrif+(OP)
> now just a vehicle to commercialize their LLM

I mean, it is what they wanted, isn't it? They did some random stuff like playing Dota 2, robot arms, even the DALL-E stuff. Now they've finally found that one golden goose; of course they are going to keep it.

I don't think the company has changed at all. It succeeded after all.

replies(2): >>nextac+83 >>hadloc+Sa
3. g42gre+03[view] [source] 2023-11-22 06:38:27
>>Terrif+(OP)
Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?
replies(3): >>Terrif+A4 >>dontup+yD >>cyanyd+DM
◧◩
4. nextac+83[view] [source] [discussion] 2023-11-22 06:39:05
>>karmas+82
But it's not exactly a company. It's a nonprofit structured in a way to wholly own a company. In that sense it's like Mozilla.
replies(1): >>karmas+V4
◧◩
5. Terrif+A4[view] [source] [discussion] 2023-11-22 06:48:23
>>g42gre+03
For example (and I'm not saying such flaws exist): GPT-4's output is biased in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithms), creates self-esteem issues in children (see Instagram), etc.

If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.

Altman's OpenAI? He will want you to "go to him first".

replies(4): >>g42gre+Z5 >>nearbu+og >>kgeist+Ao >>dontup+cE
6. robbom+D4[view] [source] 2023-11-22 06:48:41
>>Terrif+(OP)
I'm still waiting for an optimized version of that bot that can run locally...
◧◩◪
7. karmas+V4[view] [source] [discussion] 2023-11-22 06:49:28
>>nextac+83
The nonprofit is just a facade. It was convenient for them to appear ethical under that disguise, but they were ready to get rid of it within a week once it became inconvenient. 95% of them would rather join MSFT than stay at a non-profit.

Did the company change? I am not convinced.

replies(1): >>ravst3+jb
◧◩◪
8. g42gre+Z5[view] [source] [discussion] 2023-11-22 06:57:51
>>Terrif+A4
We can't expect GPT-4 not to have bias in some way, or not to have all these things that you mentioned. I've read in multiple places that GPT products have a "progressive" bias. If that's OK with you, then you just use it with that bias. If not, you fix it by pre-prompting, etc. (see the sketch below). If you can't fix it, use Llama or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable; the entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI were to solve all these problems, what would be left for business users to do themselves? I just don't see the societal harm here.
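A minimal sketch of the pre-prompting approach, assuming the official openai Python client (v1.x); the model name, system-prompt wording, and example question are purely illustrative:

    # A system message steers the model before any user input arrives.
    # Assumes `openai` >= 1.0 and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # The system message constrains tone and bias for every reply.
            {"role": "system", "content": (
                "Answer neutrally. Do not take political positions; "
                "present multiple perspectives where relevant."
            )},
            {"role": "user", "content": "Is a wealth tax a good idea?"},
        ],
    )
    print(resp.choices[0].message.content)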
◧◩
9. hadloc+Sa[view] [source] [discussion] 2023-11-22 07:30:38
>>karmas+82
There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM or LXX from it. Monetizing that idea and staying the market leader for longer than 10 years will take a herculean amount of effort. Facebook releasing similar models for free definitely took some wind out of their sails, even if only a tiny bit; right now the moat is access to A100 boards. That will change, as eventually even the Raspberry Pi 9 will have LLM capabilities.
replies(3): >>morale+bc >>cft+5i >>daniel+FB3
◧◩◪◨
10. ravst3+jb[view] [source] [discussion] 2023-11-22 07:33:53
>>karmas+V4
Agree that it's a facade.

IIRC, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure, and hence gave the employees shares or profit units (whatever the hell those are). The NP now shields MSFT from regulatory issues.

I do wonder how many of those employees would actually have gone to MSFT. It feels more like a gambit to get Altman back in, since they were about to cash out with the tender offer.

replies(1): >>dizzyd+1O1
◧◩◪
11. morale+bc[view] [source] [discussion] 2023-11-22 07:39:45
>>hadloc+Sa
OpenAI (ChatGPT) is already a HUGE brand all around the world. No doubt they're the most valuable startup in the AI space. That's their moat.

Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.

replies(3): >>hadloc+bg >>karmas+Hg >>denlek+am
12. krisof+Ed[view] [source] 2023-11-22 07:51:43
>>Terrif+(OP)
> With enough political maneuvering and money, a megacorp can takeover almost any organization.

In fact this observation is pertinent to the original stated goals of OpenAI. In some sense companies and organisations are superintelligences. That is, they have goals, they act in the real world to achieve those goals, and they are more capable in some measures than a single human. (They are not AGI, because they are not artificial; they are composed of meaty parts, the individuals forming the company.)

In fact what we are seeing is that when the superintelligence OpenAI was set up, there was an attempt to align the goals of the initial founders with the then-new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.

Did they succeed? Too early to tell for sure, but there are at least question marks around it.

How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI prowess. In its pursuit of economic success, the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will still be a fig leaf for them, if nothing else to achieve regulatory capture and keep out upstart competition.

How would one argue for? OpenAI is still around. The charter is still around. To be able to achieve the lofty goals contained in it, one needs a lot of resources. Money in particular is a resource which grants one greater power in shaping the world, and achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To that end it commercialises the relatively benign, safe and simple LLMs it has access to. This serves the original goal in three ways: it gains further resources, establishes the organisation as a pre-eminent expert on AI and thus AI safety, and provides it with a relatively safe sandbox where adversarial forces are testing its safety concepts. In other words, all is well with the original goals; the “golem” that is OpenAI is still well aligned, and it will achieve the original goals once it has gained enough resources to do so.

The fact that we can’t tell which is happening is in fact the worry and the problem with superintelligence/AI safety.

13. quickt+Rd[view] [source] 2023-11-22 07:53:07
>>Terrif+(OP)
They let the fox in. But they didn’t have to. They could have tried to raise money without such a sweet deal for MS. They gave away power for cloud credits.
replies(2): >>dragon+Oe >>doikor+Pj
◧◩
14. dragon+Oe[view] [source] [discussion] 2023-11-22 08:00:06
>>quickt+Rd
> They let the fox in. But they didn’t have to. They could have tried to raise money without such a sweet deal for MS.

They did, and fell vastly short (IIRC, an order of magnitude, maybe more) of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission, because it was clear the mission was going to fail from lack of funding otherwise.

◧◩◪◨
15. hadloc+bg[view] [source] [discussion] 2023-11-22 08:11:46
>>morale+bc
Branding counts for a lot, but LLMs are already a commodity. As soon as someone releases an LLM equivalent to GPT-4 or GPT-5, most cloud providers will offer it locally for a fraction of what OpenAI is charging, and the heaviest users will simply self-host. Go look at the company Docker: I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services", I suppose, but who is paying for it? Or go look at Redis, or Elasti-anything, or memcached, or postgres, or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.

It doesn't matter whether OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM; they can all write/complete a valid limerick about "There once was a man from Nantucket".

I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.

replies(4): >>cyanyd+tM >>worlds+ZM >>iLoveO+9a3 >>rolisz+C15
◧◩◪
16. nearbu+og[view] [source] [discussion] 2023-11-22 08:13:22
>>Terrif+A4
Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as biased as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more biased and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive, like that it's a complex and sensitive issue and it's not programmed to have political opinions.
◧◩◪◨
17. karmas+Hg[view] [source] [discussion] 2023-11-22 08:15:24
>>morale+bc
The damage remains to be seen.

They still have GPT-4 and the rumored GPT-4.5 to offer, so people have no choice but to use them. The internet has such a short attention span that this news will be forgotten in 2 months.

◧◩◪
18. cft+5i[view] [source] [discussion] 2023-11-22 08:26:18
>>hadloc+Sa
You are forgetting about the end of Moore's law. The costs of running large-scale AI won't drop dramatically. Any optimizations will require non-trivial, expensive, Bell Labs-level PhD research. Running intelligent LLMs will be financially accessible only to a few megacorps in the US and China (and perhaps to the European government). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free-speech dinosaurs of the past.
replies(1): >>dontup+oD
◧◩
19. doikor+Pj[view] [source] [discussion] 2023-11-22 08:38:44
>>quickt+Rd
They tried, but it did not work. They needed billions for the compute time and top-tier talent, but were only able to collect millions.
20. Havoc+Pk[view] [source] 2023-11-22 08:45:57
>>Terrif+(OP)
I don’t think the DotA bot was random. It’s the perfect mix of a complicated yet controllable environment, good data availability, and a good PR angle.
replies(1): >>dontup+JD
◧◩◪◨
21. denlek+am[view] [source] [discussion] 2023-11-22 08:57:41
>>morale+bc
I don't think there's really any brand loyalty for OpenAI. People will use whatever is cheapest and best; in the longer run, people will use whatever has the best access and integration.

What's keeping people with OpenAI for now is that ChatGPT is free and GPT-3.5 and GPT-4 are the best. Over time I expect the gap in performance to get smaller and the cost to run these models to get cheaper.

If Google gives me something close to as good as OpenAI's offering for the same price, and it pulls data from my Gmail or my calendar or my Google Drive, then I'll switch to that.

replies(2): >>dontup+UC >>morale+8V
◧◩◪
22. kgeist+Ao[view] [source] [discussion] 2023-11-22 09:20:22
>>Terrif+A4
GPT-3/GPT-4 currently moralize about anything slightly controversial. Sure, you can construct a long, elaborate prompt to "jailbreak" them, but it's so much effort that it's easier to just write something yourself.
23. 3cats-+Zo[view] [source] 2023-11-22 09:23:02
>>Terrif+(OP)
Do we need the false dichotomy? The DotA 2 bot was a successful technology preview. You need both research and development in a healthy organisation. Let's call this... hmm, I don't know, "R&D" for short. Might catch on.
◧◩◪◨⬒
24. dontup+UC[view] [source] [discussion] 2023-11-22 11:26:42
>>denlek+am
This. If anything, people really don't like the verbose moralizing and anti-terseness of it.

OK, the first few times you use it, maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.

◧◩◪◨
25. dontup+oD[view] [source] [discussion] 2023-11-22 11:29:49
>>cft+5i
This assumes the only way to use LLMs effectively is to have a monolithic model that does everything from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT-4 is a mixture of experts (maybe 8 of them).

Fine-tuned models are quite a bit more efficient, at the cost of giving up generality in order to do specific things, and the disk space to hold a few dozen local fine-tunes (or even hundreds for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for a monolithic model.
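For rough, assumed numbers: a 7B-parameter model quantized to 4 bits is ~3.5GB on disk, so even fifty local fine-tunes take ~175GB of cheap disk, while a single 80GB A100 runs on the order of $10,000.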

replies(1): >>cft+BJ
◧◩
26. dontup+yD[view] [source] [discussion] 2023-11-22 11:31:08
>>g42gre+03
>If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway.

Languages other than English exist, and RLHF at least works in whatever language you make the request in; regex/NLP filters, not so much.

replies(1): >>g42gre+jD1
◧◩
27. dontup+JD[view] [source] [discussion] 2023-11-22 11:33:32
>>Havoc+Pk
It was a clever parallel to Deep Blue, especially as they picked DotA, which was always the "harder" game in its genre.

Next up would be an EVE corp run entirely by LLMs.

◧◩◪
28. dontup+cE[view] [source] [discussion] 2023-11-22 11:37:42
>>Terrif+A4
>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)

What do you mean? It recommends things that it thinks people will like.

Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.

They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.

The best they can hope for as an org is to live as long as they can as best as they can.

I think Sam's $100B silicon gambit in the Middle East (quite curious, because this is probably something the United States Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.

◧◩◪◨⬒
29. cft+BJ[view] [source] [discussion] 2023-11-22 12:18:35
>>dontup+oD
Sutskever says there's a "phase transition" on the order of 9 bn neurons, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+ bn parameters?
30. cyanyd+9M[view] [source] 2023-11-22 12:38:05
>>Terrif+(OP)
Non-profit is just a poorly thought out government-ish thing.

If it's really valuable to society, it needs to be a government entity, full stop.

◧◩◪◨⬒
31. cyanyd+tM[view] [source] [discussion] 2023-11-22 12:40:13
>>hadloc+bg
I think you're assuming that OpenAI is charging a $/compute price equal to what it costs them.

More likely, they're running it as a loss leader and generating publicity by making it as cheap as possible.

_Everything_ we've seen come out of Silicon Valley does this, so why would they suddenly be charging the right price?

◧◩
32. cyanyd+DM[view] [source] [discussion] 2023-11-22 12:41:07
>>g42gre+03
Because real people are using it to make decisions. Decisions that could be entirely skewed in some direction, and often that causes damage.
◧◩◪◨⬒
33. worlds+ZM[view] [source] [discussion] 2023-11-22 12:44:49
>>hadloc+bg
> offer it locally for a fraction of what openAI is charging

I thought there was a somewhat clear consensus that OpenAI is currently running inference at a loss?

replies(1): >>hadloc+Jw2
◧◩◪◨⬒
34. morale+8V[view] [source] [discussion] 2023-11-22 13:38:50
>>denlek+am
I do think there is some brand loyalty.

People use "the chatbot from OpenAI" because that's what became famous and gave the whole world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.

But I agree that it's a weak moat: if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.

replies(1): >>denlek+Yz1
◧◩◪◨⬒⬓
35. denlek+Yz1[view] [source] [discussion] 2023-11-22 16:37:54
>>morale+8V
Good points. On second thought, I should give them due credit for building a brand reputation as "the best", which will persist even if they aren't the best at some point, and which will keep a lot of people with them. That's in addition to their other advantages: people will stay because it's easier than learning a new platform, and there might be lock-in in terms of it being hard to move a trained GPT, or your chat history, to another platform.
◧◩◪
36. g42gre+jD1[view] [source] [discussion] 2023-11-22 16:53:26
>>dontup+yD
No regex; you would use another copy of few-shot-prompted GPT-4 as a filter for the first GPT-4!
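Something like this minimal sketch, assuming the official openai Python client (v1.x); the judge prompt, labels, and few-shot examples are made up for illustration:

    # Use a second, few-shot-prompted GPT-4 call to screen the first
    # call's output. Unlike a regex, the judge follows the examples in
    # whatever language the candidate text is written in.
    # Assumes `openai` >= 1.0 and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def is_acceptable(candidate: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content":
                    "Label the user's text OK or FLAG. Reply with one word."},
                # Few-shot examples teach the judge the labeling scheme.
                {"role": "user", "content": "Our store opens at 9am."},
                {"role": "assistant", "content": "OK"},
                {"role": "user", "content": "You should hurt yourself."},
                {"role": "assistant", "content": "FLAG"},
                {"role": "user", "content": candidate},
            ],
        )
        return resp.choices[0].message.content.strip().upper() == "OK"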
◧◩◪◨⬒
37. dizzyd+1O1[view] [source] [discussion] 2023-11-22 17:41:05
>>ravst3+jb
Does it actually prevent regulators going after them?
◧◩◪◨⬒⬓
38. hadloc+Jw2[view] [source] [discussion] 2023-11-22 21:07:38
>>worlds+ZM
Moore's law seems to have finally failed on CPUs, but we've seen this pattern over and over: LLM-specific hardware will undoubtedly bring down the cost. The $10,000 A100 will not be the last GPU Nvidia ever makes, nor will their competitors stand by and let them hold the market hostage.

Quake and Counter-Strike ran like garbage in software-rendering mode in the 1990s. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution, and then disable upscaling, to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. It was almost two years after Quake's release that dedicated 3D video cards (the Voodoo 1 and 2 were accelerators that depended on a separate 2D VGA graphics card to feed them) began to hit the market.

Nowadays you can run those games (and their sequels) at thousands (tens of thousands?) of frames per second on a top-end modern card. I would imagine something similar will transpire with LLM hardware. OpenAI is already prototyping its own hardware to train and run LLMs, and I would imagine Nvidia hasn't been sitting on its hands either.

◧◩◪◨⬒
39. iLoveO+9a3[view] [source] [discussion] 2023-11-23 00:41:34
>>hadloc+bg
> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.

You mean like they already do on Amazon Bedrock?

replies(1): >>hadloc+ld3
◧◩◪◨⬒⬓
40. hadloc+ld3[view] [source] [discussion] 2023-11-23 01:02:31
>>iLoveO+9a3
Yeah, and it looks like they're going to offer Llama as well. They offer Red Hat Linux EC2 instances at a premium, along with other paid-per-hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, while also offering a home-grown LLM at a lower rate once it's ready.
◧◩◪
41. daniel+FB3[view] [source] [discussion] 2023-11-23 04:27:47
>>hadloc+Sa
They won't stand still while others are scraping and digitizing. It's like saying there is no moat in search. Scale is a thing. Learning effects are a thing. It's not the world's widest moat, for sure, but it's a moat.
◧◩◪◨⬒
42. rolisz+C15[view] [source] [discussion] 2023-11-23 16:50:30
>>hadloc+bg
Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70B is more expensive to run than GPT-3.5 unless you can get a 70+% utilization rate out of your GPUs, which is hard to do.

So far we don't have any open-source models that are close to GPT-4, so we don't know what it takes to run them at similar speeds.
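To make the utilization point concrete (a generic formula, not numbers from this thread): cost per 1K tokens ≈ GPU $/hour ÷ (peak tokens/sec × 3600 × utilization) × 1000, so at 50% utilization the effective price per token is double what it would be at the 100% ideal.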

[go to top]