But hypothetically, if a lot of people would benefit from a GPT with more fake emotions, that might reasonably counterbalance concerns about harm to a mentally unwell minority. If we build a highway, we know that eventually it will lead to deaths from car crashes -- but if the highway is actually adding value by letting people travel, those benefits might reasonably be expected to outweigh that harm. And the people getting into their cars and onto the highway agree that the benefits outweigh the costs, right up until they crash.
None of this is to say that I think OpenAI's choice here was benevolent rather than a business decision. But even if they were trying to do the ethically best thing for the world overall, it would be plausible to move forward anyway.
I for one found the fake emotions in their voice demos really annoying, tho.