- OpenAI approached Scarlett last fall, and she refused.
- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)
- Receiving no response, OpenAI demoed the product anyway, with Sam tweeting "her" in reference to Scarlett's film.
- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.
Perhaps Sam’s next tweet should read “red-handed”.
The really concerning part here is that Altman is, and wants to be, a large part of shaping AI regulation [0]. Quite the public contradiction.
[0] https://www.businessinsider.com/sam-altman-openai-artificial...
Conman plain and simple.
It's a Musk-error, not an SBF-error. (Of course, I realise many will say all three are the same, but I think it's worth separating the types of mistakes people make, because everyone makes mistakes, and only two of these three also did useful things.)
Sufficiently advanced incompetence is indistinguishable from malice.
It's still bad, don't get me wrong, it's just something I can distinguish.
I don't think the cookies thing is a good example. That's passive incompetence, to avoid the work of changing their business models. Altman actively does more work to erode people's rights.
> It's still bad, don't get me wrong, it's just something I can distinguish.
Can you? Plausible deniability is one of the first things in any malicious actor's playbook. "I meant well…" If there's no way to know, then you can only assess the pattern of behavior.
But realistically, nobody sapient accidentally spends multiple years building elaborate systems for laundering other people's IP, privacy, and likeness, and then accidentally continues after being made aware of the harms and explicitly asked, multiple times, to stop…