But hypothetically, if a lot of people would benefit from a GPT with more fake emotions, that might reasonably counterbalance concerns about harm to a mentally unwell minority. If we build a highway, we know that eventually it will lead to deaths from car crashes, but if the highway is actually adding value by letting people travel, those benefits might reasonably be expected to outweigh that harm. And the people getting into their cars and onto the highway agree that the benefits outweigh the costs, right up until they crash.
None of this is to say that I think OpenAI's choices here were benevolent rather than business decisions. But I think even if they were trying to do the ethically best thing for the world overall, it would be plausible to move forward with a feature like this.
I, for one, found the fake emotions in their voice demos really annoying though.
We know the risks of cigarettes, but they offer tangible benefits to their users, so people continue to use the product. So too with cars and emotionally manipulative AIs, I imagine.
(None of this negates your overall point, but I do think the initial tobacco comparison is very apt.)
Hmm, the tobacco industry is also famous for actively trying to deny and suppress evidence about its harms. They didn't want people to be in a position to make a fully informed decision. In cases where jurisdictions introduced policies requiring packaging etc. to carry factual information about health risks, the tobacco industry pushed back.