zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. airstr+r3[view] [source] 2023-11-18 23:07:37
>>medler+(OP)
This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. Boards are just appointed by shareholders aka investors aka capital holders to do their bidding.

The capped-profit / non-profit structure muddles that a little bit, but the reality is that the non-profit can't survive without the funding that flows into the for-profit piece.

And if current investors + would-be investors threaten to walk away, what can the board really do? They have no leverage.

Sounds like they really didn't "play the tape forward" and think this through...

◧◩
2. fnordp+f7[view] [source] 2023-11-18 23:25:21
>>airstr+r3
A non-profit board absolutely calls the shots at a non-profit, insofar as the CEO and their employment go. Non-profit boards are not structurally beholden to investors, and there are no shareholders.

No stakeholder would walk away from OpenAI for want of Sam Altman. They don’t license OpenAI technology or provide funding because of his personal contribution; they do it to get access to GPT-4. There is no comparable competitor available.

If anything they would be miffed about how it was handled, but to be frank, unless GPT-4 is Sam Altman furiously typing, I don’t know that he’s that important. The instability caused by the suddenness, that’s different.

◧◩◪
3. tsunam+A8[view] [source] 2023-11-18 23:32:56
>>fnordp+f7
Nothing matters if you don’t have the money to enforce the system. Come on, get real. Whatever the board says, MS can turn off the money in a second and invalidate anything.
◧◩◪◨
4. fnordp+0b[view] [source] 2023-11-18 23:45:43
>>tsunam+A8
Microsoft depends on OpenAI much more than OpenAI depends on Microsoft. If your company works with OpenAI at all regularly, this is extraordinarily obvious.
◧◩◪◨⬒
5. xigenc+5o[view] [source] 2023-11-19 00:57:09
>>fnordp+0b
They could use Llama instead. OpenAI’s moat is very shallow. They’re still coasting on Google’s research papers.
◧◩◪◨⬒⬓
6. fnordp+Gs[view] [source] 2023-11-19 01:34:19
>>xigenc+5o
If you’ve used the models for actual business problems, GPT-4 and its successive revisions are way beyond Llama. They’re not comparable. I’m a huge fan of open models, but they’re just different worlds of power. I’d note OpenAI has been working on GPT-5 for some time as well, which I would expect to be a remarkable improvement incorporating much of the theoretical and technical advances of the last two years. Claude is the only actual competitor to GPT-4, and even that is a “just barely relevant” situation.
◧◩◪◨⬒⬓⬔
7. xigenc+iw[view] [source] 2023-11-19 01:58:56
>>fnordp+Gs
Hm, it’s hard for me to say because most of my prompts would get me banned from OpenAI, but I’ve gotten great results for specific tasks using fine-tuned, quantized 30B models on my desktop and laptop. All things considered, it’s a better value for me, especially as I highly value openness and privacy.
◧◩◪◨⬒⬓⬔⧯
8. sebast+JK[view] [source] 2023-11-19 03:30:07
>>xigenc+iw
What specs are needed to run those models on your local machine without crashing the system?
◧◩◪◨⬒⬓⬔⧯▣
9. xigenc+tR[view] [source] 2023-11-19 04:16:07
>>sebast+JK
I use Faraday.dev on an RTX 3090, and smaller models on a 16 GB M2 Mac, and I’m able to have deep, insightful conversations with a personal AI at my direction.

I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.

ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously, the Playground’s completions mode allowed using raw GPT, which let me unlock some different facets, but they’ve closed that down now.

And again, I don’t really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want an uncensored model to play with, with no data leaving my network.

The uses of LLMs for information retrieval are great (Bing has improved a lot), but the much more interesting cases for me are how they parse nuance, tone, and subtext - imagine a computer that can understand feelings and respond in kind. Empathetic computing, and it’s already here on my PC, unplugged from the Internet.
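
Faraday is point-and-click, but if you want to script the same kind of local setup yourself, something like llama-cpp-python works too. Rough sketch only - the GGUF path and settings below are placeholders, not my actual config:

    # pip install llama-cpp-python (built with CUDA/Metal support for GPU offload)
    from llama_cpp import Llama

    # Load a quantized model file; n_gpu_layers=-1 offloads all layers to the GPU
    llm = Llama(
        model_path="models/some-30b-finetune.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,
        n_ctx=4096,
    )

    # Give it a persona and chat entirely offline
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a blunt, curious research assistant."},
            {"role": "user", "content": "What can you infer about my mood from this note?"},
        ],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])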

◧◩◪◨⬒⬓⬔⧯▣▦
10. mark_l+IW[view] [source] 2023-11-19 04:57:56
>>xigenc+tR
+1 Greg. I agree with most of what you say. Also, it is so much more fun running everything locally.
[go to top]