zlacker

[parent] [thread] 4 comments
1. hackin+(OP)[view] [source] 2023-05-16 19:34:05
Your claim assumes we have complete knowledge of how these systems work and are thus in full control of their behavior in any and all contexts. But this is plainly false: we do not have anywhere near a complete mechanistic understanding of how they operate. That isn't unusual in itself; many technological advances arrived before the theory explaining them. But for AI systems that can act in the real world, this state of affairs has the potential to be very dangerous. It is important to get ahead of that danger rather than play catch-up once it has been demonstrated.
replies(1): >>gumbal+Y1
2. gumbal+Y1[view] [source] 2023-05-16 19:41:58
>>hackin+(OP)
The real danger right now is people like Sam Altman making policy, and an eager political class that will be long dead by the time we have to foot the bill. Everything else is bad sci-fi. We were told the same about computer viruses and how they could start nuclear wars, and as usual the only real danger was humans and bad politics.
replies(1): >>Number+Ti
3. Number+Ti[view] [source] [discussion] 2023-05-16 21:09:31
>>gumbal+Y1
I need to make a montage of the thousands of Hacker News commenters typing "The REAL danger of AI is..." followed by some mundane issue.

I'm sorry to pick on you, but do people not get that a non-human intelligence has the potential to be so powerful and dangerous that, yes, it is the real danger? If you think it's not going to be powerful, or not dangerous, please say why! Don't just argue that current models are not dangerous; explain why the trend is heading toward something other than machine intelligence that can reason about the world better than humans can. Why is this trend of machines getting smarter and smarter going to suddenly stop?

Or if you agree that these machines are going to get smarter than us, how are we going to control them?

replies(1): >>gumbal+bv
4. gumbal+bv[view] [source] [discussion] 2023-05-16 22:21:19
>>Number+Ti
Interesting. I am of the opinion that AI is not intelligent, so I don't see much point in entertaining the various scenarios that derive from that possibility. There is nothing dangerous in current AI models, or in AI itself, other than the people controlling it. If it were intelligent then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.

But if it were intelligent, and the conclusion it reaches once it's done ingesting all our knowledge is that it should be done with us, then we probably deserve it.

I mean, what kind of species takes joy in "freeing up" people and causing mass unemployment, starts wars over petty issues, allows famine, and thrives on the exploitation of others while standing on piles of nuclear bombs? We are also literally destroying the planet and constantly looking for ways to dominate each other.

We probably deserve a good spanking.

replies(1): >>Number+af1
5. Number+af1[view] [source] [discussion] 2023-05-17 04:54:38
>>gumbal+bv
That's easy to say in the abstract, but when it comes down to the people you love actually getting hurt, it's a lot harder.

> There is nothing dangerous in current AI models, or in AI itself, other than the people controlling it.

Totally agree! But...

> If it were intelligent then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.

That's the bit where I don't agree. I don't think we can say with certainty how long it will take, and it may be just years. I never imagined we would so soon have AI that can imitate a human almost perfectly, and actually "understand" college-level exam questions well enough to write answers that pass.
