Like some intern’s idea to train the voice on their favorite movie.
And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
This could be a well-planned opening move of a regulation gambit. But unlikely.
If that were in fact the case, then OpenAI is not aligned with the statement they just put out about their utmost focus on rigor and careful consideration, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]
Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.
The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.
But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.
Ah, the famous rogue engineer.
The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.
> And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.
> This could be a well-planned opening move of a regulation gambit. But unlikely.
LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has one thing in common with Musk: he does not play 3D chess.
Any criticism of AI is met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter," or is flatly denied. You've got tech that's deeply flawed with no obvious path to fixing it, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on projected future growth.
It's becoming an extremely apparent bubble.