zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. kranke+cy1[view] [source] 2023-05-16 18:56:43
>>vforgi+(OP)
I did not expect this. Does Sam have any plans on what this could look like?
◧◩
2. ipaddr+6z1[view] [source] 2023-05-16 19:00:49
>>kranke+cy1
Sam is a crook
◧◩◪
3. gumbal+fB1[view] [source] 2023-05-16 19:11:27
>>ipaddr+6z1
Essentially. He is pushing these bad sci-fi scenarios because he knows politicians are old and senile while a good portion of voters is gullible. I find it difficult to believe that grown-ups are talking about an AI running amok in the context of a chatbot. Have we really become that dense as a society?
◧◩◪◨
4. hackin+yD1[view] [source] 2023-05-16 19:21:24
>>gumbal+fB1
No one thinks a chatbot will run amok. What people are worried about is the pace of progress being so fast that we cannot preempt the creation of dangerous technology without having sufficient guardrails in place long before the AI becomes potentially dangerous. This is eminently reasonable.
◧◩◪◨⬒
5. gumbal+MF1[view] [source] 2023-05-16 19:28:19
>>hackin+yD1
AI is software: it doesn't become, it is made. And this type of legislation won't prevent bad actors from training malicious tools.
◧◩◪◨⬒⬓
6. hackin+PG1[view] [source] 2023-05-16 19:34:05
>>gumbal+MF1
Your claim assumes we have complete knowledge of how these systems work and are thus in full control of their behavior in any and all contexts. But this is plainly false. We do not have anywhere near a complete mechanistic understanding of how they operate. That isn't so unusual; many technological advances arrived before the theory that explained them. But for AI systems that can act in the real world, this state of affairs has the potential to be very dangerous. It is important to get ahead of this danger rather than play catch-up once the danger has been demonstrated.
◧◩◪◨⬒⬓⬔
7. gumbal+NI1[view] [source] 2023-05-16 19:41:58
>>hackin+PG1
The real danger right now is people like Sam Altman making policy, and an eager political class that will be long dead by the time we have to foot the bill. Everything else is bad sci-fi. We were told the same about computer viruses and how they could trigger nuclear wars, and as usual the only real danger was humans and bad politics.
◧◩◪◨⬒⬓⬔⧯
8. Number+IZ1[view] [source] 2023-05-16 21:09:31
>>gumbal+NI1
I need to make a montage of the thousands of Hacker News commenters typing "The REAL danger of AI is ..." followed by some mundane issue.

I'm sorry to pick on you, but do people not get that a non-human intelligence has the potential to be so powerful and dangerous that, yes, it is the real danger? If you think it's not going to be powerful, or not dangerous, please say why! Not why current models are not dangerous, but why the trend is toward something other than machine intelligence that can reason about the world better than humans can. Why is this trend of machines getting smarter and smarter suddenly going to stop?

Or if you agree that these machines are going to get smarter than us, how are we going to control them?

◧◩◪◨⬒⬓⬔⧯▣
9. gumbal+0c2[view] [source] 2023-05-16 22:21:19
>>Number+IZ1
Interesting. I am of the opinion that AI is not intelligent, hence I don't see much point in entertaining the various scenarios that derive from that possibility. There is nothing dangerous in current AI models, or in AI itself, other than the people controlling it. If it were intelligent then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.

But if it were intelligent, and the conclusion it reaches once it's done ingesting all our knowledge is that it should be done with us, then we probably deserve it.

I mean, what kind of species takes joy in "freeing up" people and causing mass unemployment, starts wars over petty issues, allows famine, and thrives on the exploitation of others while sitting on piles of nuclear bombs? Also, we are literally destroying the planet and constantly looking for ways to dominate each other.

We probably deserve a good spanking.

◧◩◪◨⬒⬓⬔⧯▣▦
10. Number+ZV2[view] [source] 2023-05-17 04:54:38
>>gumbal+0c2
That's easy to say in the abstract, but when it comes down to the people you love actually getting hurt, it's a lot harder.

> There is nothing dangerous in current ai models or ai itself other than the people controlling it.

Totally agree! But...

> If it were intelligent then yeah maybe but we are not there yet and unless we adapt the meaning of agi to fit a marketing narrative we wont be there anytime soon.

That's the bit where I don't agree. I don't think we can say with certainty how long it will be; it may be just years. I never imagined we would so soon have AI that can imitate a human almost perfectly, and actually "understand" college-level examination questions well enough to write answers that pass them.

[go to top]