zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. jafitc+F91[view] [source] 2023-11-22 14:31:43
>>staran+(OP)
OpenAI's Future and Viability

- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products

- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs

- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well

---

Sam Altman's Leadership

- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight

- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned

---

Board Issues

- The board acted incompetently and destructively without clear reasons or communication

- The new board seems more reasonable but may struggle to govern given Sam's power

- There are still opposing factions on ideology and commercialization that will continue battling

---

Employee Motivations

- Employees followed the money trail and Sam to preserve their equity and careers

- Peer pressure and groupthink likely also swayed employees more than principles

- Mission-driven employees may still leave for opportunities at places like Anthropic

---

Safety vs Commercialization

- The safety faction lost this battle but still has influential leaders wanting to constrain the technology

- Rapid commercialization beat out calls for restraint but may hit snags with model issues

---

Microsoft Partnership

- Microsoft strengthened its power despite not appearing involved in the drama

- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity

◧◩
2. nuruma+Db1[view] [source] 2023-11-22 14:40:06
>>jafitc+F91
GPT-generated summary?
◧◩◪
3. Mistle+Gj1[view] [source] 2023-11-22 15:12:05
>>nuruma+Db1
That was my first thought as well. And now it is the top comment on this post. Isn’t this brave new world OpenAI made wonderful?
◧◩◪◨
4. nickpp+In1[view] [source] 2023-11-22 15:30:36
>>Mistle+Gj1
If it’s a good comment, does it really matter if a human or an AI wrote it?
◧◩◪◨⬒
5. makewo+Qq1[view] [source] 2023-11-22 15:44:30
>>nickpp+In1
Yes.
◧◩◪◨⬒⬓
6. nickpp+Vr1[view] [source] 2023-11-22 15:49:19
>>makewo+Qq1
Please expand on that.
◧◩◪◨⬒⬓⬔
7. Mistle+J02[view] [source] 2023-11-22 18:21:52
>>nickpp+Vr1
I think this summarizes it pretty well. Even if you don't mind the garbage, future AI will feed on this garbage, turning both AI and human brains into gray goo.

https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...

https://en.wikipedia.org/wiki/Gray_goo

◧◩◪◨⬒⬓⬔⧯
8. nickpp+Nb2[view] [source] 2023-11-22 19:09:18
>>Mistle+J02
Is this a real problem model trainers actually face, or is it an imagined one? The Internet is already full of garbage - 90% of the unpleasantness of browsing these days is filtering through mounds and mounds of crap. Some is generated, some is written, but it's still crap full of errors and lies.

I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful), so I imagine we could solve it similarly for AIs.

In the end, if it's quality content, learning from it is beneficial - no matter who produced it. Garbage needs to be eliminated, and the distinction is made either by human trainers or by already-trained AIs. I have no idea how to train the latter, but I am no expert in this field - just like (I suspect) the author of that blog.

[go to top]