Frito-Lay copied a song by Waits (with different lyrics) and had an impersonator sing it. Witnesses testified they thought Waits had sung the song.
If OpenAI were to anonymously copy someone's voice by training AI on an imitation, you wouldn't have:
- a recognizable singing voice
- music identified with a singer
- market confusion about whose voice it is (since it's novel audio coming from a machine)
I don't think any of this is ethical and think voice-cloning should be entirely illegal, but I also don't think we have good precedents for most AI issues.
Company identifies a celebrity voice they want. (Frito=Waits, OpenAI=ScarJo)
Company comes up with a novel thing for the voice to say. (Frito=song, OpenAI=ChatGPT)
Company decides they don’t need the celebrity (Frito=Waits, OpenAI=ScarJo) and instead hires an impersonator (Frito=singer, {OpenAI=impersonator or OpenAI=ScarJo-public-recordings}) to get what they want (Frito=a-facsimile-of-Tom-Waits’s-voice-in-a-commercial, OpenAI=a-facsimile-of-ScarJo’s-voice-in-their-chatbot)
When made public, people confuse the facsimile for the real thing.
I don’t see how you don’t see a parallel. It’s literally beat for beat the same, particularly the part about using an impersonator as an excuse.