zlacker

[parent] [thread] 1 comment
1. Meekro+(OP)[view] [source] 2023-11-18 23:41:14
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+w3
2. resour+w3[view] [source] 2023-11-18 23:57:48
>>Meekro+(OP)
How the thing can be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident to trust the results blindly).