zlacker

[parent] [thread] 2 comments
1. dkjaud+(OP)[view] [source] 2023-11-21 00:55:09
"Zealous doomers" seems fair in the context of the vague and melodramatic claims they're pushing. But it makes sense because they're describing the threat of something that doesn't exist and may never exist. What is bad is that they are trying to claim that the threat is real and serious on that basis.
replies(2): >>crooke+H4 >>Terrif+2e
2. crooke+H4[view] [source] 2023-11-21 01:26:18
>>dkjaud+(OP)
Personally, I feel like the risks of future AI developments are real, but none of the stuff I've seen OpenAI do so far has made ChatGPT actually feel "safer" (in the sense of, e.g., preventing unhealthy parasocial relationships with the system, or actually being helpful when it comes to ethical conflicts). It just feels more stuck-up and excessively moralizing, in a way that seems 100% tuned for bland corporate PR bot usage.
3. Terrif+2e[view] [source] 2023-11-21 02:21:54
>>dkjaud+(OP)
> that doesn't exist and may never exist

It doesn’t exist until suddenly it does. I think there are a lot of potential issues we really should be preparing for / trying to solve.

For example, what to do about unemployment. We can’t wait until massive numbers of people start losing their jobs before we start working on what to do.

I’m not for slowing down AI research but I do think we need to restrict or slow the deployment of AI if the effects on society are problematic.