zlacker

1. resour+(OP)[view] [source] 2023-11-18 23:35:47
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
replies(3): >>Meekro+31 >>spacem+Z2 >>s1arti+Ta
2. Meekro+31[view] [source] 2023-11-18 23:41:14
>>resour+(OP)
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+z4
3. spacem+Z2[view] [source] 2023-11-18 23:51:25
>>resour+(OP)
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
replies(1): >>resour+g7
4. resour+z4[view] [source] [discussion] 2023-11-18 23:57:48
>>Meekro+31
How can the thing be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60%, since people will be more inclined to trust the results blindly.
5. resour+g7[view] [source] [discussion] 2023-11-19 00:10:59
>>spacem+Z2
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their own hype. Renaming the term to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
replies(1): >>autoex+gd
6. s1arti+Ta[view] [source] 2023-11-19 00:32:05
>>resour+(OP)
Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.

replies(1): >>resour+pc
7. resour+pc[view] [source] [discussion] 2023-11-19 00:42:03
>>s1arti+Ta
But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?
replies(1): >>s1arti+Zh
8. autoex+gd[view] [source] [discussion] 2023-11-19 00:47:53
>>resour+g7
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

replies(1): >>resour+Eg
9. resour+Eg[view] [source] [discussion] 2023-11-19 01:09:45
>>autoex+gd
I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole OpenAI debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.
10. s1arti+Zh[view] [source] [discussion] 2023-11-19 01:20:59
>>resour+pc
I don't think it has a fixed definition. It's an ambiguous idea that AI will not do or lead to bad things.