zlacker

[parent] [thread] 4 comments
1. _wire_+(OP)[view] [source] 2024-05-17 17:44:51
Ridiculous. The board can't even regulate itself in the immediate moment, so who cares if they're not trying to regulate "long term risk". The article is trafficking in nonsense.

"The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI..."

More nonsense.

"...that's safe and beneficial."

Go on...

"Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets..."

The firm is obviously out of control by any first-principles reading, so any claim of responsibility in this context is moot.

When management are openly this screwed up in their internal governance, there's no reason to believe anything else they say about their intentions. The disbanding of the "superalignment" team is a simple public admission the firm has no idea what they are doing.

As to the hype-mongering of the article, replace the string "AGI" everywhere it appears with "sentient-nuclear-bomb": how would you feel about this article?

You might want to see the bomb!

But all you'll find is a chatbot.

Bomb#20: You are false data.

Sgt. Pinback: Hmmm?

Bomb#20: Therefore I shall ignore you.

Sgt. Pinback: Hello... bomb?

Bomb#20: False data can act only as a distraction. Therefore, I shall refuse to perceive.

Sgt. Pinback: Hey, bomb?

Bomb#20: The only thing that exists is myself.

Sgt. Pinback: Snap out of it, bomb.

Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.

Boiler: What the hell is he talking about?

Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness.

replies(1): >>tim333+Cu
2. tim333+Cu[view] [source] 2024-05-17 21:20:59
>>_wire_+(OP)
Dunno about the "will build AGI" bit being nonsense. Ilya knows more about this stuff than most people.
replies(1): >>_wire_+MN
3. _wire_+MN[view] [source] [discussion] 2024-05-18 00:30:11
>>tim333+Cu
> COMPUTING MACHINERY AND INTELLIGENCE, by A. M. Turing. 1. The Imitation Game. I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.

AFAIK no consensus on what it means to think has developed past Turing's point above, and the "Imitation Game," a.k.a. the "Turing Test" (which was Turing throwing up his hands at the idea of defining thinking machines), is today's de facto standard for machine intelligence.

IOW a machine thinks if you think it does.

And by this definition the Turing Test was passed by Weizenbaum's "Eliza" chatbot in the mid-1960s.
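For context, Eliza's trick was nothing more than keyword matching and template reflection. A minimal sketch of the idea (the rules here are hypothetical stand-ins, not Weizenbaum's original DOCTOR script, which was far larger):

```python
import re

# Hypothetical ELIZA-style rules: (pattern, response template).
# The first matching rule wins; the captured text is reflected back.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template, reflecting the captured text."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about AGI"))
# → How long have you been worried about AGI?
```

No model of the world, no state, no "thought" anywhere; just string surgery. Yet people in the 1960s attributed understanding to it, which is the point.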

Modern chatbots have been refined a lot since, and can accommodate far more sophisticated forms of interrogation, but their limits are still overwhelming if not obvious to the uninitiated.

A crucial next measure of an AGI must be the point at which we realize it's unethical to delete it, or perhaps even to reset it or turn it off. We are completely unprepared for such an eventuality, so recourse to pragmatism will demand that no transformer technology can be defined as intelligent in any human sense. It will always be regarded as a simulation or a robot.

replies(1): >>tim333+Sp1
4. tim333+Sp1[view] [source] [discussion] 2024-05-18 10:42:50
>>_wire_+MN
It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it. I mean, ChatGPT is already intelligent in that it can do university-exam-type questions better than the average human, and general in that it can have a go at most things. And we still seem fine about turning it off; I think you are overestimating the ethics of a species that exploits or eats most other species.

For me the interesting test of computer intelligence would be whether it can replace us. At the moment, if all humans disappeared, ChatGPT and the like would stop because there would be no electricity; but at some point maybe intelligent robots will be able to do that stuff and go on without us. That's kind of what I think of as AGI, rather than the Turing stuff. I guess you could call it the "computers don't need us" point. I'm not sure how far in the future it is. A decade or two?

replies(1): >>_wire_+lx1
5. _wire_+lx1[view] [source] [discussion] 2024-05-18 12:10:53
>>tim333+Sp1
> It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it.

You have just echoed Turing from his seminal paper without adding any further observation.

> ...you are overestimating the ethics of a species that exploits or eats most other species...

Life would not exist without consumption of other life. Ethicists are untroubled by this. But they are troubled by conduct towards people and animals.

If the conventional measure of an artificial intelligence is a general inability to tell computers and humans apart, then ethics enters the field at that point: once you can't tell, you are ethically required to extend the same protections to the AI construct as are offered to a person.

To clarify my previous point: a pragmatic orientation towards AI technology will enforce a distinction by definition. The Turing test will be re-interpreted as the measure by which machines are reliably distinguished from people, not the measure of the point at which they surpass people.

To rephrase the central point of Turing's thought experiment: the question of whether machines think is too meaningless to merit further discussion, because we lack a sufficient formal definition of "machine" and "thought."

> ...the computers don't need us point...

I see no reason to expect this at any point, ever. Whatever you mean by "computers" and "us" in your conjecture of "need" is so detached from today's notions of life that it too is meaningless. Putting a timeframe on the meaningless is pointless.

> ...go on without us...

This is a loopy conjecture about a new form of life which emerges-from-and-transcends humanity, presumably to the ultimate point of obviating humanity. Ok, so "imagine a world without humanity." Sure, who's doing the imagining? It's absurd.

Turing's point was that we lack the vocabulary to discuss these matters, so he offered an approximation, with an overtly stated expectation that by about this time (some 50 years after his paper) technology for simulating thought would be sufficiently advanced as to demand a new vocabulary. And here we are.

If your contribution is merely a recapitulation of Turing's precepts from decades ago, you're a bit late to the imitation game.
