zlacker

[parent] [thread] 8 comments
1. janeje+(OP)[view] [source] 2023-11-18 23:10:00
Honestly would be super interested to see what a hypothetical "SamAI" corp would look like, and what they would bring to the table. More competition, but also probably fewer ideological disagreements to distract them from building AI/AGI.
replies(2): >>apppli+J >>btown+P2
2. apppli+J[view] [source] 2023-11-18 23:14:09
>>janeje+(OP)
I mean this as an honest question, but what does Sam bring to the table that any other young and high performing CEO wouldn’t? Is he himself particularly material to OpenAI?
replies(5): >>janeje+w1 >>Solven+K1 >>Rivier+r4 >>coffee+F7 >>smegge+5h
3. janeje+w1[view] [source] [discussion] 2023-11-18 23:17:35
>>apppli+J
Experience heading a company that builds high performance AI, I presume. I reckon the lessons from that should be fairly valuable, especially since there probably aren't many people with such experience.
4. Solven+K1[view] [source] [discussion] 2023-11-18 23:18:21
>>apppli+J
Your first mistake is daring to question the cargo cult around CEOs.
5. btown+P2[view] [source] 2023-11-18 23:23:59
>>janeje+(OP)
From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."

Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.

Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.

replies(1): >>chasd0+eq
6. Rivier+r4[view] [source] [discussion] 2023-11-18 23:32:44
>>apppli+J
Ability to attract valuable employees, connections to important people, proven ability to successfully run an AI company.
7. coffee+F7[view] [source] [discussion] 2023-11-18 23:50:12
>>apppli+J
Funding, name recognition in the space
8. smegge+5h[view] [source] [discussion] 2023-11-19 00:38:55
>>apppli+J
You mean besides the business experience of already having gone down this path so he can speedrun while everyone else is still trying to find the path?

Easy; his contacts list. He has everyone anyone could want in his contacts list — politicians, tech executives, financial backers — and a preexisting positive relationship with most of them. When alternative would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, it will be upper middle management and lawyers, and a committee or three will weigh in on it, present it to their bosses, etc. With Sam, he calls up the CEO, has a few drinks at the golf course, they decide to work with him, and they make it happen.

9. chasd0+eq[view] [source] [discussion] 2023-11-19 01:45:52
>>btown+P2
> must be evaluated extensively for safety before being released to the public

JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?
