zlacker

[parent] [thread] 9 comments
1. startu+(OP)[view] [source] 2024-05-21 02:53:28
Most likely it was an unforced error; there's been a lot of chaos with the cofounders and the board revolt, so it's easy to lose track of something really minor.

Like some intern’s idea to train the voice on their favorite movie.

And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

This could be a well-planned opening move of a regulation gambit. But unlikely.

replies(6): >>mmastr+j >>windex+K >>Always+s1 >>Cheer2+I1 >>mbrees+i2 >>kergon+df
2. mmastr+j[view] [source] 2024-05-21 02:56:25
>>startu+(OP)
It makes a lot more sense that he was caught red-handed, likely hiring a similar voice actress and not realizing how strong identity protections are for celebs.
3. windex+K[view] [source] 2024-05-21 03:00:52
>>startu+(OP)
I don't think this makes any sense, at all, quite honestly. Why would an "intern" be training one of ChatGPT's voices for a major release?

If, in fact, that was the case, then OpenAI is not aligned with the statement they just put out about having the utmost focus on rigor and careful consideration, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]

[0] https://x.com/gdb/status/1791869138132218351

4. Always+s1[view] [source] 2024-05-21 03:08:04
>>startu+(OP)
At first I thought there might be a /s coming...
5. Cheer2+I1[view] [source] 2024-05-21 03:10:34
>>startu+(OP)
> easy to lose track of something really minor. Like some intern’s idea

Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.

6. mbrees+i2[view] [source] 2024-05-21 03:15:19
>>startu+(OP)
This is an unforced error, but it isn’t minor. It’s quite large and public.

The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.

But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.

replies(1): >>surfin+Vl
7. kergon+df[view] [source] 2024-05-21 05:25:40
>>startu+(OP)
> Like some intern’s idea to train the voice on their favorite movie.

Ah, the famous rogue engineer.

The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.

> And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.

> This could be a well-planned opening move of a regulation gambit. But unlikely.

LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has one thing in common with Musk: he does not play 3D chess.

8. surfin+Vl[view] [source] [discussion] 2024-05-21 06:40:34
>>mbrees+i2
I had a conversation with someone responsible for introducing LLMs into a process that involves personal information. He dismissed my concern that one person's data could appear in a report on another person, telling me it would be possible to train the AI to avoid that. The rest of the conversation convinced me that AI is seen as magic that can do anything. It seems to me we're seeing a split between those who don't understand it and fear it, and those who don't understand it but want to align themselves with it. The latter are the ones I fear the most.
replies(1): >>komboo+uw
9. komboo+uw[view] [source] [discussion] 2024-05-21 08:26:30
>>surfin+Vl
The "AI is magic and we should simply believe" is even being actively promoted because all these VC hucksters need it.

Any criticism of AI is being met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter" or flat out denied. You've got tech that's deeply flawed with no obvious way to get unflawed, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on proposed future growth.

It's becoming an extremely apparent bubble.

replies(1): >>surfin+rx
10. surfin+rx[view] [source] [discussion] 2024-05-21 08:36:30
>>komboo+uw
On the plus side, lots of cheap Nvidia cards heading for eBay once it bursts.