zlacker

[parent] [thread] 73 comments
1. tomohe+(OP)[view] [source] 2023-11-22 06:08:13
So, Ilya is off the board, but Adam is still on it. I know this will raise some eyebrows, but whatever.

Still, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Time will tell, but in the long term I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust.

replies(10): >>wilg+a >>ayakan+d >>jatins+H >>sverha+N >>ilikeh+c1 >>gordon+j1 >>Terrif+I1 >>nathan+S2 >>lacker+d5 >>cowthu+A7
2. wilg+a[view] [source] 2023-11-22 06:09:15
>>tomohe+(OP)
I mean he's not irreplaceable so much as booting him suddenly for no good reason creates problems.
3. ayakan+d[view] [source] 2023-11-22 06:09:26
>>tomohe+(OP)
"I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust." How is this the case?
replies(1): >>quickt+dg
4. jatins+H[view] [source] 2023-11-22 06:12:29
>>tomohe+(OP)
> I doubt we will see OAI as one of the megacorps like Facebook or Uber. They lost the trust

Whose trust?

5. sverha+N[view] [source] 2023-11-22 06:13:36
>>tomohe+(OP)
Ah, yes, Facebook and Uber, brands known for consistent trustworthiness throughout their existences /s
6. ilikeh+c1[view] [source] 2023-11-22 06:15:42
>>tomohe+(OP)
OAI looks stronger than ever. The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea. Care to expand on your claim?
replies(2): >>neta13+82 >>6gvONx+T2
7. gordon+j1[view] [source] 2023-11-22 06:16:31
>>tomohe+(OP)
Facebook has lost trust more times than I can count, but it's still a megacorp, isn't it?
8. Terrif+I1[view] [source] 2023-11-22 06:19:15
>>tomohe+(OP)
The OpenAI of the past, that dabbled in random AI stuff (remember their DotA 2 bot?), is gone.

OpenAI is now just a vehicle to commercialize their LLM - and everything is subservient to that goal. Discover a major flaw in GPT4? You shut your mouth. Doesn’t matter if society at large suffers for it.

Altman's/Microsoft’s takeover of the former non-profit is now complete.

Edit: Let this be a lesson to us all. Just because something claims to be non-profit doesn't mean it will always remain that way. With enough political maneuvering and money, a megacorp can take over almost any organization. Non-profit status and whatever the organization's charter says are temporary.

replies(8): >>karmas+Q3 >>g42gre+I4 >>robbom+l6 >>krisof+mf >>quickt+zf >>Havoc+xm >>3cats-+Hq >>cyanyd+RN
9. neta13+82[view] [source] [discussion] 2023-11-22 06:21:38
>>ilikeh+c1
Please explain your claim as well. I don't see how this company looks stronger than ever; it looks more like a clown company.
replies(3): >>TapWat+R2 >>ilikeh+B3 >>GreedC+b9
10. TapWat+R2[view] [source] [discussion] 2023-11-22 06:26:33
>>neta13+82
They got rid of the clowns, though. They went from a board of lightweights and insiders to what, at least initially, looks like a strong three.
11. nathan+S2[view] [source] 2023-11-22 06:26:38
>>tomohe+(OP)
On the contrary, this saga has shown that a huge number of people are extremely passionate about the existence of OpenAI and its leadership by Altman, much more strongly and in larger numbers than most had suspected. If anything, this has solidified the importance of the company, and I think people will trust it more given the light speed with which the situation was resolved.
replies(1): >>willdr+ld
12. 6gvONx+T2[view] [source] [discussion] 2023-11-22 06:26:46
>>ilikeh+c1
> The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea

This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was

- Altman tries to push out another board member

- That board member escalates by pushing Altman out (and Brockman off the board)

- Altman's side escalates by saying they'll nuke the company

Altman's side won, but how can we say that his side didn't cause any of this instability?

replies(2): >>ilikeh+W3 >>WendyT+Y4
13. ilikeh+B3[view] [source] [discussion] 2023-11-22 06:31:39
>>neta13+82
I may have been overly eager in my comment, because the big bad downside of the new board is that none of the founders are on it. I hope the current membership sees reason and fixes this issue.

But I said this because: they've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly* unified one. It still remains to be seen how the board membership and overall org structure will change, but I have much more trust in the current 3 members to steer OpenAI toward long-term success.

replies(1): >>MVisse+i8
14. karmas+Q3[view] [source] [discussion] 2023-11-22 06:33:17
>>Terrif+I1
> now just a vehicle to commercialize their LLM

I mean, it is what they want, isn't it? They did some random stuff, like playing Dota 2, or robot arms, even the DALL-E stuff. Now that they've finally found that one golden goose, of course they are going to keep it.

I don't think the company has changed at all. It succeeded after all.

replies(2): >>nextac+Q4 >>hadloc+Ac
15. ilikeh+W3[view] [source] [discussion] 2023-11-22 06:33:36
>>6gvONx+T2
> Altman tries to push out another board member

That event wasn't some unprovoked start of this history.

> That board member escalates by pushing Altman out (and Brockman off the board)

and the entire company retaliated. Then this board member tried to sell the company to a competitor, who refused. Meanwhile the board went through two interim CEOs who refused to play along with the scheme, and one of the people who voted to fire the CEO publicly regretted it within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards, but not of organizations that actually execute well.

replies(1): >>emptys+79
16. g42gre+I4[view] [source] [discussion] 2023-11-22 06:38:27
>>Terrif+I1
Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?
replies(3): >>Terrif+i6 >>dontup+gF >>cyanyd+lO
17. nextac+Q4[view] [source] [discussion] 2023-11-22 06:39:05
>>karmas+Q3
But it's not exactly a company. It's a nonprofit structured in a way to wholly own a company. In that sense it's like Mozilla.
replies(1): >>karmas+D6
18. WendyT+Y4[view] [source] [discussion] 2023-11-22 06:39:55
>>6gvONx+T2
By recognizing that it didn't "start" with Altman trying to push out another board member; it started when that board member published a paper trashing the company she's on the board of, without speaking to the CEO of that company first, or trying in any way to effect change first.
replies(2): >>6gvONx+S6 >>croes+Y9
19. lacker+d5[view] [source] 2023-11-22 06:41:27
>>tomohe+(OP)
Let's see, Sam Altman is an incredibly charismatic founding CEO, who some people consider manipulative, but is also beloved by many employees. He got kicked out by his board, but brought back when they realized their mistake.

It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google. But somehow, I think it's still possible that a huge company could be created by a person like this.

(And of course, more important than creating a huge company, is creating insanely great products.)

replies(2): >>lovepa+E9 >>mkii+Mt
20. Terrif+i6[view] [source] [discussion] 2023-11-22 06:48:23
>>g42gre+I4
For example, and I'm not saying such flaws exist: GPT-4 output is biased in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithms), creates self-esteem issues in children (see Instagram), etc.

If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.

Altman's OpenAI? He will want you to "go to him first".

replies(4): >>g42gre+H7 >>nearbu+6i >>kgeist+iq >>dontup+UF
21. robbom+l6[view] [source] [discussion] 2023-11-22 06:48:41
>>Terrif+I1
I'm still waiting for an optimized version of that bot that can run locally...
22. karmas+D6[view] [source] [discussion] 2023-11-22 06:49:28
>>nextac+Q4
The nonprofit is just a facade. It was convenient for them to appear ethical under that disguise, but they got rid of it within a week when it became inconvenient. 95% of them would rather join MSFT than stay in a non-profit.

Did the company change? I am not convinced.

replies(1): >>ravst3+1d
23. 6gvONx+S6[view] [source] [discussion] 2023-11-22 06:52:16
>>WendyT+Y4
I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC way. And it's BS to say that she didn't speak to the CEO or try to effect change first. The safety camp inside OpenAI has been unsuccessfully trying to push him to slow down for years.

See this article for all that context (>>38341399 ) because it sure didn't start with the paper you referred to either.

replies(1): >>WendyT+p8
24. cowthu+A7[view] [source] 2023-11-22 06:57:07
>>tomohe+(OP)
I feel like history has shown repeatedly that having a good product matters way more than trust, as evidenced by Facebook and Uber. People seem to talk big smack about lost trust and such in the immediate aftermath of a scandal, and then quietly renew the contracts when the time comes.

All of the big ad companies (Google, Amazon, Facebook) have, like, a scandal per month, yet the ad revenue keeps coming. Meltdown was a huge scandal, yet Intel keeps pumping out the chips.

25. g42gre+H7[view] [source] [discussion] 2023-11-22 06:57:51
>>Terrif+i6
We can't expect GPT-4 not to have bias in some way, or not to have all the things you mentioned. I've read in multiple places that GPT products have a "progressive" bias. If that's OK with you, then you use it with that bias. If not, you fix it by pre-prompting, etc. If you can't fix it, use LLaMA or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable; the entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI were to solve all these problems, what would be left for business users to do themselves? I just don't see the societal harm here.
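
To make the pre-prompting point concrete, a minimal sketch (assuming the openai Python package's chat API; the model name and prompts are illustrative, not a documented bias-removal recipe):

    # Hypothetical "fix it by pre-prompting" sketch: pin down tone and
    # viewpoint with a system message before the user's input is seen.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            # The entrepreneur's own guardrail, applied client-side:
            {"role": "system",
             "content": ("Answer neutrally and concisely. Do not take "
                         "political positions; if a question is contested, "
                         "summarize the main viewpoints instead.")},
            {"role": "user",
             "content": "Is remote work better than office work?"},
        ],
    )
    print(response.choices[0].message.content)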
26. MVisse+i8[view] [source] [discussion] 2023-11-22 07:00:54
>>ilikeh+B3
If by “long-term success” you mean a capitalistic lap-dog of Microsoft, I'll agree.

It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and OpenAI was best positioned to at least make an effort to prevent that. Now, I'm not so sure anymore.

27. WendyT+p8[view] [source] [discussion] 2023-11-22 07:01:47
>>6gvONx+S6
Your "most recent" timeline is still wrong, and while yes the entire history of OpenAI did not begin with the paper I'm referencing, it is what started this specific fracas, the one where the board voted to oust Sam Altman.

It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.

She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.

replies(1): >>6gvONx+x9
28. emptys+79[view] [source] [discussion] 2023-11-22 07:05:56
>>ilikeh+W3
Something that's been fairly consistent here on HN throughout the debacle has been an almost fanatical defense of the board's actions as justified.

The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.

If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.

29. GreedC+b9[view] [source] [discussion] 2023-11-22 07:07:14
>>neta13+82
It was a clown board running an awesome company.

They fixed the glitch.

30. 6gvONx+x9[view] [source] [discussion] 2023-11-22 07:09:53
>>WendyT+p8
Honestly, I just don't believe that she didn't talk to Altman about her concerns. I'd believe that she didn't say "I'm publishing a paper about it now" but I can't believe she didn't talk to him about her concerns during the last 4+ years that it's been a core tension at the company.
replies(1): >>WendyT+qa
31. lovepa+E9[view] [source] [discussion] 2023-11-22 07:10:34
>>lacker+d5
I think "people following Sam Altman" is jumping to conclusions. It's just as likely that employees are simply following the money. They want to make $$$, and that's what a for-profit company does, which is what Altman wants. I think it's probably not really about Altman or his leadership.
replies(1): >>kareaa+pz
32. croes+Y9[view] [source] [discussion] 2023-11-22 07:12:16
>>WendyT+Y4
>trashing the company

So pointing out risks is trashing the company.

33. WendyT+qa[view] [source] [discussion] 2023-11-22 07:14:48
>>6gvONx+x9
That's what I mean; she should have discussed the paper and its contents specifically with Altman, and easily could have. It's a hugely damaging thing to have your own board member come out critically against your company. It's doubly so when it blindsides the CEO.

She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.

"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.

34. hadloc+Ac[view] [source] [discussion] 2023-11-22 07:30:38
>>karmas+Q3
There's no moat in giant LLMs. Anyone on a long enough timeline can scrape/digitize 99.9X% of all human knowledge and build an LLM or LXX from it. Monetizing that idea and staying the market leader for longer than 10 years will take a herculean amount of effort. Facebook releasing similar models for free definitely took some wind out of their sails, even if only a tiny bit; right now the moat is access to A100 boards. That will change, as eventually even the Raspberry Pi 9 will have LLM capabilities.
replies(3): >>morale+Td >>cft+Nj >>daniel+nD3
35. ravst3+1d[view] [source] [discussion] 2023-11-22 07:33:53
>>karmas+D6
Agree that it's a facade.

IIRC, the NP structure was implemented to attract top AI talent from FAANG. Then they needed investors to fund the infrastructure, and hence gave the employees shares or profit units (whatever the hell that is). The NP now shields MSFT from regulatory issues.

I do wonder how many of those employees would actually go to MSFT. It feels more like a gambit to get Altman back in since they were about to cash out with the tender offer.

replies(1): >>dizzyd+JP1
36. willdr+ld[view] [source] [discussion] 2023-11-22 07:35:55
>>nathan+S2
That's a misreading of the situation. The employees saw their big bag vanishing and suddenly realised they were employed by a non-profit entity that had loftier goals than making a buck, so they rallied to overturn it and they've gotten their way. This is a net negative for anyone not financially invested in OAI.
replies(1): >>nathan+Mx
37. morale+Td[view] [source] [discussion] 2023-11-22 07:39:45
>>hadloc+Ac
OpenAI (ChatGPT) is already a HUGE brand all around the world. No doubt they're the most valuable startup in the AI space. That's their moat.

Unfortunately, in the past few days, the only thing they've accomplished is significantly damaging their brand.

replies(3): >>hadloc+Th >>karmas+pi >>denlek+Sn
38. krisof+mf[view] [source] [discussion] 2023-11-22 07:51:43
>>Terrif+I1
> With enough political maneuvering and money, a megacorp can take over almost any organization.

In fact this observation is pertinent to the original stated goals of OpenAI. In some sense companies and organisations are superintelligences. That is, they have goals, they act in the real world to achieve those goals, and they are more capable by some measures than a single human. (They are not AGI, because they are not artificial; they are composed of meaty parts, the individuals forming the company.)

In fact what we are seeing is that when the superintelligence OpenAI was set up, there was an attempt to align the goals of the initial founders with the then-new organisation. They tried to “bind” their “golem” to make it pursue certain goals by giving it an unconventional governance structure and a charter.

Did they succeed? Too early to tell for sure, but there are at least question marks around it.

How would one argue against? OpenAI appears to have given up the lofty goals of AI safety and preventing the concentration of AI prowess. In the pursuit of economic success, the forces wishing to enrich themselves overpowered the forces wishing to concentrate on the goals. Safety will still be a fig leaf for them, if nothing else to achieve regulatory capture to keep out upstart competition.

How would one argue for? OpenAI is still around. The charter is still around. To achieve the lofty goals contained in it, one needs a lot of resources. Money in particular is a resource which grants one greater power in shaping the world. Achieving the original goals will require a lot of money. The “golem” is now in the “gain resources” phase of its operation. To achieve that, it commercialises the relatively benign, safe, and simple LLMs it has access to. This serves the original goal in three ways: it gains further resources, establishes the organisation as a pre-eminent expert on AI and thus AI safety, and provides it with a relatively safe sandbox where adversarial forces are testing its safety concepts. In other words all is well with the original goals; the “golem” that is OpenAI is still well aligned. It will achieve the original goals once it has gained enough resources to do so.

The fact that we can't tell which is happening is in fact the worry and the problem with superintelligence/AI safety.

39. quickt+zf[view] [source] [discussion] 2023-11-22 07:53:07
>>Terrif+I1
They let the fox in. But they didn't have to. They could have tried to raise money without such a sweet deal for MS. They gave away power for cloud credits.
replies(2): >>dragon+wg >>doikor+xl
40. quickt+dg[view] [source] [discussion] 2023-11-22 07:57:33
>>ayakan+d
Scandal a minute Uber lol
41. dragon+wg[view] [source] [discussion] 2023-11-22 08:00:06
>>quickt+zf
> They let the fox in. But they didn't have to. They could have tried to raise money without such a sweet deal for MS.

They did, and fell, IIRC, vastly short (an order of magnitude, maybe more) of their minimum short-term target. The commercial subsidiary thing was a risk taken to support the mission, because it was clear the effort was otherwise going to fail for lack of funding.

42. hadloc+Th[view] [source] [discussion] 2023-11-22 08:11:46
>>morale+Td
Branding counts for a lot, but LLMs are already a commodity. As soon as someone releases an LLM equivalent to GPT-4 or GPT-5, most cloud providers will offer it locally for a fraction of what OpenAI is charging, and the heaviest users will simply self-host. Go look at the company Docker: I can build a container on almost any device with a prompt these days using open source tooling. The company (or brand, at this point?) offers "professional services", I suppose, but who is paying for it? Or go look at Redis, or Elasti-anything. Or memcached. Or Postgres. Or whatever. Industrial-grade underpinnings of the internet, but it's all just commodity stuff you can lease from any cloud provider.

It doesn't matter if OpenAI or AWS or GCP encoded the entire works of Shakespeare in their LLM, they can all write/complete a valid limerick about "There once was a man from Nantucket".

I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it. To the end user it's just another locally hosted API. Like DNS.
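
For what it's worth, the "just another API" point can be made concrete: many self-hosted inference servers (vLLM, llama.cpp's server, etc.) expose OpenAI-compatible endpoints, so switching between a hosted model and a local one is often just a base-URL change. A sketch, with illustrative URLs and model names:

    # Same client code, two backends: hosted GPT-4 vs. a self-hosted
    # OpenAI-compatible server. Only the base URL and model name differ.
    from openai import OpenAI

    hosted = OpenAI()  # api.openai.com; needs OPENAI_API_KEY
    local = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    prompt = "Write a limerick that starts: There once was a man from Nantucket."
    for client, model in [(hosted, "gpt-4"), (local, "local-llama-70b")]:
        out = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(out.choices[0].message.content)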

replies(4): >>cyanyd+bO >>worlds+HO >>iLoveO+Rb3 >>rolisz+k35
43. nearbu+6i[view] [source] [discussion] 2023-11-22 08:13:22
>>Terrif+i6
Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as biased as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more biased and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive, like that it's a complex and sensitive issue and it's not programmed to have political opinions.
44. karmas+pi[view] [source] [discussion] 2023-11-22 08:15:24
>>morale+Td
The damage remains to be seen

They still have GPT-4 and the rumored GPT-4.5 to offer, so people have no choice but to use them. The internet has such a short attention span that this news will be forgotten in 2 months.

45. cft+Nj[view] [source] [discussion] 2023-11-22 08:26:18
>>hadloc+Ac
You are forgetting about the end of Moore's law. The costs of running large-scale AI won't drop dramatically. Any optimizations will require non-trivial, expensive, Bell Labs-level PhD research. Running intelligent LLMs will be financially accessible only to a few megacorps in the US and China (and perhaps to the European government). The AI "safety" teams will control the public discourse. Traditional search engines that blacklist websites with dissenting opinions will be viewed as the benevolent free-speech dinosaurs of the past.
replies(1): >>dontup+6F
46. doikor+xl[view] [source] [discussion] 2023-11-22 08:38:44
>>quickt+zf
They tried, but it did not work. They needed billions for compute time and top-tier talent but were only able to collect millions.
47. Havoc+xm[view] [source] [discussion] 2023-11-22 08:45:57
>>Terrif+I1
Don’t think the DotA bot was random. It’s the perfect mix of a complicated yet controllable environment, good data availability, and a good PR angle.
replies(1): >>dontup+rF
48. denlek+Sn[view] [source] [discussion] 2023-11-22 08:57:41
>>morale+Td
I don't think there's really any brand loyalty for OpenAI. People will use whatever is cheapest and best. In the longer run, people will use whatever has the best access and integration.

What's keeping people with OpenAI for now is that ChatGPT is free and GPT-3.5 and GPT-4 are the best. Over time I expect the gap in performance to get smaller and the cost to run these models to get cheaper.

If Google gives me something close to as good as OpenAI's offering for the same price, and it pulls data from my Gmail or my calendar or my Google Drive, then I'll switch to that.

replies(2): >>dontup+CE >>morale+QW
49. kgeist+iq[view] [source] [discussion] 2023-11-22 09:20:22
>>Terrif+i6
GPT-3/GPT-4 currently moralize about anything slightly controversial. Sure, you can construct a long, elaborate prompt to "jailbreak" it, but it's so much effort that it's easier to just write the thing yourself.
50. 3cats-+Hq[view] [source] [discussion] 2023-11-22 09:23:02
>>Terrif+I1
Do we need the false dichotomy? The DotA 2 bot was a successful technology preview. You need both research and development in a healthy organisation. Let's call this... hmm, I don't know, "R&D" for short. Might catch on.
51. mkii+Mt[view] [source] [discussion] 2023-11-22 09:47:33
>>lacker+d5
> It's true that this doesn't really pattern-match with the founding story of huge successful companies like Facebook, Amazon, Microsoft, or Google.

You forgot about Apple.

52. nathan+Mx[view] [source] [discussion] 2023-11-22 10:22:23
>>willdr+ld
What lofty goals? The board was questioned repeatedly and never articulated clear reasoning for firing Altman and in the process lost the confidence of the employees hence the "rally". The lack of clarity was their undoing whether there would have been a bag for the employees to lose or not.
replies(1): >>muraka+xS2
53. kareaa+pz[view] [source] [discussion] 2023-11-22 10:39:44
>>lovepa+E9
Given that over 750 people have signed the letter, it's safe to assume that their motivations vary. Some might be motivated by the financial aspects, some might be motivated by Sam's leadership (like considering Sam as a friend who needs support). Some might fervently believe that their work is crucial for the advancement of humanity and that any changes would just hinder their progress. And some might have just caved in to peer pressure.
replies(1): >>strike+LY
54. dontup+CE[view] [source] [discussion] 2023-11-22 11:26:42
>>denlek+Sn
This. If anything, people really don't like the verbose moralizing and anti-terseness of it.

Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.

55. dontup+6F[view] [source] [discussion] 2023-11-22 11:29:49
>>cft+Nj
This assumes the only way to use LLMs effectively is to have a monolithic model that does everything from translation (from ANY language to ANY language) to creative writing to coding to what have you. And supposedly GPT-4 is a mixture of experts (maybe 8x).

Finetuned models are quite a bit more efficient, at the cost of giving up the rest of the world to do specific things, and the disk space to hold a few dozen local finetunes (or even hundreds for SaaS services) is peanuts compared to acquiring 80GB of VRAM on a single device for a monomodel.
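
Rough numbers behind that VRAM point (a back-of-envelope sketch; bytes-per-parameter figures are assumed, and real deployments add overhead for activations and KV cache):

    # Memory needed for the weights alone: billions of params x bytes/param
    # conveniently equals gigabytes.
    def weight_gb(params_billion: float, bytes_per_param: float) -> float:
        return params_billion * bytes_per_param

    print(weight_gb(70, 2.0))  # 70B monolith at fp16: ~140 GB -> multi-GPU
    print(weight_gb(7, 2.0))   # 7B finetune at fp16:  ~14 GB -> one big consumer GPU
    print(weight_gb(7, 0.5))   # 7B at 4-bit:          ~3.5 GB -> storing dozens of
                               # finetunes on disk really is peanuts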

replies(1): >>cft+jL
56. dontup+gF[view] [source] [discussion] 2023-11-22 11:31:08
>>g42gre+I4
>If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway.

Languages other than English exist, and RLHF at least does work in any language you make the request in. Regex/NLP, not so much.

replies(1): >>g42gre+1F1
57. dontup+rF[view] [source] [discussion] 2023-11-22 11:33:32
>>Havoc+xm
It was a clever parallel to Deep Blue, especially as they picked DotA, which was always the "harder" game in its genre.

Next up would be an EVE corp run entirely by LLMs

58. dontup+UF[view] [source] [discussion] 2023-11-22 11:37:42
>>Terrif+i6
>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)

What do you mean? It recommends things that it thinks people will like.

Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.

They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.

The best they can hope for as an org is to live as long as they can as best as they can.

I think Sam's 100B silicon gambit in the Middle East (quite curious, because this is probably something the United States Federal Government Is Likely Not Super Fond Of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.

59. cft+jL[view] [source] [discussion] 2023-11-22 12:18:35
>>dontup+6F
Sutskever says there's a "phase transition" on the order of 9 bn parameters, after which LLMs begin to become really useful. I don't know much here, but wouldn't the monomodels become overfit, because they don't have enough data for 9+ bn parameters?
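
For rough scale on "enough data", a back-of-envelope sketch, using the roughly 20 training tokens per parameter suggested as compute-optimal by the Chinchilla paper (Hoffmann et al., 2022); purely illustrative:

    # ~20 tokens/parameter rule of thumb applied to the 9B figure above.
    params = 9e9
    tokens = 20 * params
    print(f"~{tokens / 1e9:.0f}B tokens")  # ~180B tokens for a 9B-parameter model

That bar applies to pretraining from scratch, though; the finetunes discussed above start from an already-pretrained base, which is partly why small specialized models can sidestep the overfitting worry.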
60. cyanyd+RN[view] [source] [discussion] 2023-11-22 12:38:05
>>Terrif+I1
Non-profit is just a poorly thought out government-ish thing.

If it's really valuable to society, it needs to be a government entity, full stop.

61. cyanyd+bO[view] [source] [discussion] 2023-11-22 12:40:13
>>hadloc+Th
I think you're assuming that OpenAI is charging a $/compute price equal to what it costs them.

More likely, they're a loss-leader and generating publicity by making it as cheap as possible.

_Everything_ we've seen come out of Silicon Valley does this, so why would they suddenly be charging the right price?

62. cyanyd+lO[view] [source] [discussion] 2023-11-22 12:41:07
>>g42gre+I4
Because real people are using it to make decisions. Decisions that could be entirely skewed in some direction, and often that causes damage.
63. worlds+HO[view] [source] [discussion] 2023-11-22 12:44:49
>>hadloc+Th
> offer it locally for a fraction of what OpenAI is charging

I thought there was a somewhat clear agreement that OpenAI is currently running inference at a loss?

replies(1): >>hadloc+ry2
64. morale+QW[view] [source] [discussion] 2023-11-22 13:38:50
>>denlek+Sn
I do think there is some brand loyalty.

People use "the chatbot from OpenAI" because that's what became famous and gave the whole world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.

But I agree that it's a weak moat; if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.

replies(1): >>denlek+GB1
65. strike+LY[view] [source] [discussion] 2023-11-22 13:48:43
>>kareaa+pz
Most are probably motivated by money, some by stability, and some by their loyalty to Sam, but I think most are motivated by money and stability.
66. denlek+GB1[view] [source] [discussion] 2023-11-22 16:37:54
>>morale+QW
Good points. On second thought, I should give them due credit for building a brand reputation as the "best", which will persist even if they aren't the best at some point, and that will keep a lot of people with them. That's in addition to their other advantages: people will stay because it's easier than learning a new platform, and there may be lock-in in that it's hard to move a trained GPT, or your chat history, to another platform.
67. g42gre+1F1[view] [source] [discussion] 2023-11-22 16:53:26
>>dontup+gF
No regex; you would use another copy of few-shot-prompted GPT-4 as a filter for the first GPT-4!
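
A minimal sketch of that two-model pattern (assuming the openai Python package; model names and prompts are illustrative):

    # One model generates; a second call to the same model acts as the
    # filter. Few-shot examples could be added as extra filter messages.
    from openai import OpenAI

    client = OpenAI()

    def generate(user_msg: str) -> str:
        out = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": user_msg}],
        )
        return out.choices[0].message.content

    def passes_filter(text: str) -> bool:
        verdict = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": ("Reply with only YES or NO: is this text free "
                             "of offensive, off-brand, or nonsense content?")},
                {"role": "user", "content": text},
            ],
        )
        return verdict.choices[0].message.content.strip().upper().startswith("YES")

    draft = generate("Describe our product in one paragraph.")
    print(draft if passes_filter(draft) else "[fallback: withheld by filter]")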
68. dizzyd+JP1[view] [source] [discussion] 2023-11-22 17:41:05
>>ravst3+1d
Does it actually prevent regulators going after them?
69. hadloc+ry2[view] [source] [discussion] 2023-11-22 21:07:38
>>worlds+HO
Moore's law seems to have finally failed for CPUs, but we've seen this pattern over and over. LLM-specific hardware will undoubtedly bring down the cost. The $10,000 A100 will not be the last GPU NVidia ever makes, nor will their competitors stand by and let them hold the market hostage.

Quake and Counter-Strike ran like garbage in software-rendering mode in the 1990s. I remember having to run Counter-Strike on my Pentium 90 at the lowest resolution and then disable upscaling to get 15fps, and even then smoke grenades and other effects would drop the framerate into the single digits. It was almost two years after Quake's release that dedicated 3D video cards (the Voodoo 1 and 2 were accelerators that depended on a separate 2D VGA graphics card to feed them) began to hit the market.

Nowadays you can run those games (and their sequels) at thousands (tens of thousands?) of frames per second on a top-end modern card. I would imagine similar hardware events will transpire with LLMs. OpenAI is already prototyping its own hardware to train and run LLMs, and I would imagine NVidia hasn't been sitting on its hands either.

70. muraka+xS2[view] [source] [discussion] 2023-11-22 22:56:56
>>nathan+Mx
My story: maybe they had lofty goals, maybe not, but it sounds like the whole thing was instigated by Altman trying to fire Toner (one of the board members) over the silly pretext of her coauthoring, during her day job, a paper that nobody read and that was very mildly negative about OpenAI. https://www.nytimes.com/2023/11/21/technology/openai-altman-...

And then presumably the other board members read the writing on the wall (especially after seeing how three other board members mysteriously resigned, including Hoffman https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...), and realized that if Altman could kick out Toner under such flimsy pretexts, they'd be out too.

So they allied with Helen to countercoup Greg/Sam.

I think the anti-board perspective is that this is all shallow bickering over a $90B company. The pro-board perspective is that the whole point of the board was to serve as a check on the CEO; if the CEO could easily appoint only loyalists, then the board would be a useless rubber stamp that lends unfair legitimacy to OpenAI's regulatory capture efforts.

71. iLoveO+Rb3[view] [source] [discussion] 2023-11-23 00:41:34
>>hadloc+Th
> I seriously doubt AWS is going to license OpenAI's technology when they can just copy the functionality, royalty free, and charge users for it. Maybe they will? But I doubt it.

You mean like they already do on Amazon Bedrock?

replies(1): >>hadloc+3f3
72. hadloc+3f3[view] [source] [discussion] 2023-11-23 01:02:31
>>iLoveO+Rb3
Yeah, and it looks like they're going to offer Llama as well. They offer Red Hat Linux EC2 instances at a premium, and other paid-by-the-hour AMIs. I can't imagine why they wouldn't offer various LLMs at a premium, while also offering a home-grown LLM at a lower rate once it's ready.
73. daniel+nD3[view] [source] [discussion] 2023-11-23 04:27:47
>>hadloc+Ac
They won't stand still while others are scraping and digitizing. It's like saying there is no moat in search. Scale is a thing. Learning effects are a thing. It's not the world's widest moat for sure, but it's a moat.
74. rolisz+k35[view] [source] [discussion] 2023-11-23 16:50:30
>>hadloc+Th
Why do you think cloud providers can undercut OpenAI? From what I know, Llama 70B is more expensive to run than GPT-3.5 unless you can get a 70+% utilization rate on your GPUs, which is hard to do.

So far we don't have any open source models that are close to GPT4, so we don't know what it takes to run them for similar speeds.
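
To see why utilization dominates the economics, a toy cost model (every number here is assumed, not measured):

    # A dedicated GPU bills 24/7, so $/token scales with 1/utilization.
    gpu_cost_per_hour = 2.0    # assumed $/hr for an A100-class card
    tokens_per_second = 1500   # assumed batched throughput, 70B-class model

    for utilization in (0.7, 0.3, 0.1):
        tokens_per_hour = tokens_per_second * 3600 * utilization
        usd_per_million = gpu_cost_per_hour / tokens_per_hour * 1e6
        print(f"{utilization:.0%} utilization -> ${usd_per_million:.2f} / 1M tokens")

The card bills around the clock whether or not requests arrive, which is why aggregators that can batch many tenants' traffic tend to have the pricing edge over a lone self-hoster.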

[go to top]