zlacker

[parent] [thread] 1 comments
1. tim333+(OP)[view] [source] 2024-05-18 10:42:50
It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it. I mean ChatGPT is already intelligent in that it can do university exam type questions better than the average human, and general in that it can have a go at most things. And we still seem fine about turning it off - I think you are overestimating the ethics of a species that exploits or eats most other species.

For me the interesting test of computer intelligence would be whether it can replace us, in the sense that at the moment, if all humans disappeared, ChatGPT and the like would stop because there would be no electricity - but at some point maybe intelligent robots will be able to do that stuff and go on without us. That's kind of what I think of as AGI, rather than the Turing stuff. I guess you could call it the "computers don't need us" point. I'm not sure how far in the future it is. A decade or two?

replies(1): >>_wire_+t7
2. _wire_+t7[view] [source] 2024-05-18 12:10:53
>>tim333+(OP)
> It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it.

You have just echoed Turing from his seminal paper without adding any further observation.

> ...you are overestimating the ethics of a species that exploits or eats most other species...

Life would not exist without consumption of other life. Ethicists are untroubled by this. But they are troubled by conduct towards people and animals.

If the conventional measure of an artificial intelligence is a general inability to tell computers and humans apart, then ethics enters the field at exactly that point: once you can't tell, you are ethically required to extend the same protections to the AI construct as you would to a person.

To clarify my previous point: a pragmatic orientation towards AI technology will enforce a distinction through definition: the Turing test will become re-interpreted as the measure by which machines are reliably distinguished from people, not a measure of whether they surpass people.

To rephrase the central point of Turing's thought experiment: the question of whether machines think is too meaningless to merit further discussion, because we lack sufficiently formal definitions of "machine" and "thought."

> ...the computers don't need us point...

I see no reason to expect this at any point, ever. Whatever you mean by "computers" and "us" in your conjecture of "need" is so detached from today's notions of life that it too is meaningless. Putting a timeframe on the meaningless is pointless.

> ...go on without us...

This is a loopy conjecture about a new form of life which emerges from and transcends humanity, presumably to the ultimate point of obviating humanity. OK, so "imagine a world without humanity." Sure - but who's doing the imagining? It's absurd.

Turing's point was that we lack the vocabulary to discuss these matters, so he offered an approximation, with an overtly stated expectation that by about this time (50 or so years from his paper) technology for simulating thought would be sufficiently advanced as to demand a new vocabulary. And here we are.

If your contribution is merely a recapitulation of Turing's precepts from decades ago, you're a bit late to the imitation game.
