zlacker

1. btown+(OP) 2023-11-18 23:23:59
From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."

Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.

Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency into proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.

replies(1): >>chasd0+pn
2. chasd0+pn 2023-11-19 01:45:52
>>btown+(OP)
> must be evaluated extensively for safety before being released to the public

JFC, someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?
