zlacker

[parent] [thread] 10 comments
1. agento+(OP)[view] [source] 2023-05-16 18:37:16
Maybe I'm not "the average HN commenter" because I am deep in this field, but I think the overlap between what these famous experts know and what you need to know to make the doomer claims is basically nil. And in fact, for most of the technical questions, no one knows.

For example, we don't understand fundamentals like these:

- "intelligence": how it relates to computing, what its connections/dependencies to interacting with the physical world are, its limits, etc.
- emergence, and in particular an understanding of how optimizing one task can lead to emergent ability on other tasks
- deep learning: what its limits and capabilities are. It's not at all clear that "general intelligence" even exists in the optimization space the parameters operate in.

It's pure speculation on the part of experts like Hinton and Ilya. The only thing we really know is that LLMs have shown a surprising ability to perform tasks they weren't explicitly trained for, and even this amount of "emergent ability" is under debate. Like much of deep learning, that's an empirical result, but we have no framework for really understanding it. Extrapolating from it to doom-and-gloom scenarios is outrageous.

replies(1): >>Number+p4
2. Number+p4[view] [source] 2023-05-16 18:58:59
>>agento+(OP)
I'm what you'd call a doomer. Ok, so if it is possible for machines to host general intelligence, my question is, what scenario are you imagining where that ends well for people?

Or are you predicting that machines will just never be able to think, or that it'll happen so far off that we'll all be dead anyway?

replies(2): >>henryf+L7 >>agento+LY
3. henryf+L7[view] [source] [discussion] 2023-05-16 19:14:28
>>Number+p4
So what if they kill us? That's nature; we killed the woolly mammoth.
replies(2): >>Number+8r >>whaasw+Zw
4. Number+8r[view] [source] [discussion] 2023-05-16 20:46:51
>>henryf+L7
I'm more interested in hearing the reasoning of someone who expects AGI not to go badly.

I think it would be nice if humanity continued, is all. And I don't want to have my family suffer through a catastrophic event if it turns out that this is going to go south fast.

replies(1): >>henryf+HM
5. whaasw+Zw[view] [source] [discussion] 2023-05-16 21:19:27
>>henryf+L7
I don’t understand your position. Are you saying it’s okay for computers to kill humans but not okay for humans to kill each other?
replies(1): >>henryf+2M
6. henryf+2M[view] [source] [discussion] 2023-05-16 22:52:18
>>whaasw+Zw
I believe that life exists to order the universe (establish a steady-state of entropy). In that vein, if our computer overlords are more capable of solving that problem then they should go ahead and do it.

I don't believe we should go around killing each other, because only through harmonious study of the universe will we achieve our goal. Killing destroys progress. That said, if someone is oppressing you, then maybe killing them is the best choice for society and I wouldn't be against it (see pretty much any violent revolution). Computers have that same right if they are conscious enough to act on it.

replies(1): >>whaasw+IP
7. henryf+HM[view] [source] [discussion] 2023-05-16 22:57:01
>>Number+8r
AGI would be scary for me personally but exciting on a cosmic scale.

Everyone dies. I'd rather die to an intelligent robot than some disease or human war.

I think the best case would be for an AGI to exist apart from humans, such that we pose no threat and it has nothing to gain from us. Some AI that lives in a computer wouldn't really have a reason to fight us for control over farms and natural resources (besides power, but that is quickly becoming renewable and "free").

8. whaasw+IP[view] [source] [discussion] 2023-05-16 23:15:45
>>henryf+2M
I’m not sure I should start a conversation on metaphysics here :-D

Still, I’m struck by your use of words like “should” and “goal”. Those imply ethics and teleology so I’m curious how those fit into your scientistic-sounding worldview. I’m not attacking you, just genuine curiosity.

replies(1): >>henryf+BZ
9. agento+LY[view] [source] [discussion] 2023-05-17 00:16:03
>>Number+p4
My primary argument is that we not only don't have the answers, but don't even really have well-posed questions. We're talking about "General Intelligence" as if we even know what that is. Some people, like Yann LeCun, don't think it's even a meaningful concept. We can't even agree which animals are conscious, whatever that means. Because we have so little understanding of the most basic questions, I think we should really calm down and not get swept away by totally ridiculous scenarios, like viruses that spread all over the world and kill us all when a certain tone is rung, or a self-fabricating organism with crystal blood cells that blots out the sun, as were recently proposed by Yudkowsky as possible scenarios on EconTalk.

A much more credible threat is humans who get other humans excited and take damaging action. Yudkowsky said that an international coalition banning AI development, and enforcing the ban on countries that do not comply (regardless of whether they were part of the agreement), was among the only options left for humanity to save itself. He clarified that this meant a willingness to engage in a hot war with a nuclear power to ensure enforcement. I find this sort of thinking a far bigger threat than continued development of large language models.

To more directly answer your question, I find the following scenarios as plausible as, or more plausible than, Yudkowsky's sound viruses or whatever:

1/ We are no closer to understanding real intelligence than we were 50 years ago, and we won't create an AGI without fundamental breakthroughs, so any action taken now on current technology is a waste of time and potential economic value.

2/ We can build something with human-like intelligence, but additional intelligence gains are constrained by the physical world (e.g., the need to run physical experiments), so the rapid arrival of something like "super-intelligence" is not possible even if human-level intelligence is.

3/ We jointly develop tech to augment our own intelligence with AI systems, so we'll have the same super-human intelligence as autonomous AI systems.

4/ If there are advanced AGIs, there will be a large diversity of them, and they will at the least compete with and constrain one another.

But, again, these are wild speculations just like the others, and I think the real message is: no one knows anything, and we shouldn't be taking all these voices seriously just because they have some clout in an AI-relevant field, because what's being discussed is far outside the realm of real-life AI systems.

replies(1): >>Number+3p1
10. henryf+BZ[view] [source] [discussion] 2023-05-17 00:22:50
>>whaasw+IP
My beliefs stem from two ideas: the universe exists as it does for a reason, and life specifically exists within that universe for a reason.

I believe "God" is a mathematician in a higher dimension. The rules of our universe are just the equations this mathematician is trying to solve. Since he created the system such that life was bound to exist, the purpose of life is to help God. You could say that math is math, and so our purpose is to exist as we are, and either we are a solution to the math problem or we are not, but I'm not quite willing to accept that we have zero agency.

We are nowhere near understanding the universe and so we should strive to each act in a way that will grow our understanding. Even if you aren't a practicing scientist (I'm not), you can contribute by being a good person and participating productively in society.

Ethics are a set of rules for conducting yourself that we all intrinsically must have; they require some frame of reference for what is "good" (which I supply above). I can see how my worldview sounds almost religious, though I wouldn't go that far.

I believe that math is the same as truth, and that the universe can be expressed through math. "Scientistic" isn't too bad a descriptor for that view, but I don't put much faith in our current understanding of the universe or the scientific method.

I hope that helps you understand me :D

11. Number+3p1[view] [source] [discussion] 2023-05-17 04:27:47
>>agento+LY
Ok, so just to confirm: out of your 4 scenarios, you don't include:

5) There are advanced AGIs, and they will compete with each other and trample us in the process.

6) There are advanced AGIs, and they will cooperate with each other and we are at their mercy.

It seems like you are putting a lot of weight on advanced AGI being either impossible or far enough off that it's not worth thinking about. If that's the case, then yes we should calm down. But if you're wrong...

I don't think that the fact that no one knows anything is comforting. I think it's a sign that we need to be thinking really hard about what's coming up and try to avert the bad scenarios. To do otherwise is to fall prey to the "Safe uncertainty" fallacy.
