zlacker

[parent] [thread] 2 comments
1. _wire_+(OP)[view] [source] 2024-05-18 00:30:11
// COMPUTING MACHINERY AND INTELLIGENCE By A. M. Turing 1. The Imitation Game I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. //

AFAIK no consensus on what it means to think has developed past Turing's point above, and the "Imitation Game," a.k.a. the "Turing Test," which was Turing throwing up his hands at the idea of defining thinking machines, is today's de facto standard for machine intelligence.

IOW a machine thinks if you think it does.

And by this definition the Turing Test was passed by Weizenbaum's "Eliza" chatbot in the mid-60s.

Modern chatbots have been refined a lot since, and can accommodate far more sophisticated forms of interrogation, but their limits are still glaring, even if not obvious to the uninitiated.
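For context on how little machinery "passing" took in the 60s: Eliza worked by shallow pattern matching and pronoun reflection, no understanding required. A minimal sketch of the technique in Python (the rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re
import random

# A few ELIZA-style rules: (regex, candidate responses). "%s" is filled
# with the captured fragment after naive pronoun flipping. The final
# catch-all rule guarantees the bot always has something to say.
RULES = [
    (r"i need (.*)", ["Why do you need %s?", "Would it really help you to get %s?"]),
    (r"i am (.*)", ["How long have you been %s?", "Why do you think you are %s?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

def reflect(fragment):
    # Flip first/second person so "my job" echoes back as "your job".
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, responses in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            reply = random.choice(responses)
            return reply % reflect(match.group(1)) if "%s" in reply else reply

print(respond("I need a vacation"))
```

The trick, then as now, is that the interrogator does most of the work: the bot's vague, reflective replies invite the human to supply the meaning.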

A crucial next measure of an AGI must be attended by the realization that it's unethical to delete it, or maybe even to reset it or turn it off. We are completely unprepared for such an eventuality, so recourse to pragmatism will demand that no transformer technology be defined as intelligent in any human sense. It will always be regarded as a simulation or a robot.

replies(1): >>tim333+6C
2. tim333+6C[view] [source] 2024-05-18 10:42:50
>>_wire_+(OP)
It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it. I mean ChatGPT is already intelligent in that it can do university exam type questions better than the average human, and general in that it can have a go at most things. And we still seem fine about turning it off - I think you are overestimating the ethics of a species that exploits or eats most other species.

For me the interesting test of computer intelligence would be if it can replace us in the sense that at the moment if all humans disappeared ChatGPT and the like would stop because there would be no electricity but at some point maybe intelligent robots will be able to do that stuff and go on without us. That's kind of what I think of as AGI rather than the Turing stuff. I guess you could call it the computers don't need us point. I'm not sure how far in the future it is. A decade or two?

replies(1): >>_wire_+zJ
3. _wire_+zJ[view] [source] [discussion] 2024-05-18 12:10:53
>>tim333+6C
> It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it.

You have just echoed Turing from his seminal paper without adding any further observation.

> ...you are overestimating the ethics of a species that exploits or eats most other species...

Life would not exist without consumption of other life. Ethicists are untroubled by this. But they are troubled by conduct towards people and animals.

If the conventional measure of an artificial intelligence is a general inability to tell computers and humans apart, then ethics enters the field at exactly that point: once you can't tell, you are ethically required to extend to the AI construct the same protections offered to a person.

To clarify my previous point: a pragmatic orientation towards AI technology will enforce a distinction by definition. The Turing test will be re-interpreted as the measure by which machines are reliably distinguished from people, not a measure of whether they surpass people.

To rephrase the central point of Turing's thought experiment: the question of whether machines think is too meaningless to merit further discussion, because we lack sufficiently formal definitions of "machine" and "thought."

> ...the computers don't need us point...

I see no reason to expect this at any point, ever. Whatever you are implying by "computers" and "us" in your conjecture of "need" is so detached from today's notions of life as to be meaningless. Putting a timeframe on the meaningless is pointless.

> ...go on without us...

This is a loopy conjecture about a new form of life that emerges from and transcends humanity, presumably to the ultimate point of obviating humanity. OK, so "imagine a world without humanity." Sure, but who's doing the imagining? It's absurd.

Turing's point was that we lack the vocabulary to discuss these matters, so he offered an approximation with an overtly stated expectation that by about this time (50 or so years after his paper) technology for simulating thought would be sufficiently advanced as to demand a new vocabulary. And here we are.

If your contribution is merely a recapitulation of Turing's precepts from decades ago, you're a bit late to the imitation game.

[go to top]