zlacker

[parent] [thread] 13 comments
1. Veedra+(OP)[view] [source] 2022-05-24 00:25:12
I thought I was doing well after not being overly surprised by DALL-E 2 or Gato. How am I still not calibrated on this stuff? I know I am meant to be the one who constantly argues that language models already have sophisticated semantic understanding, and that you don't need visual senses to learn grounded world knowledge of this sort, but come on, you don't get to just throw T5 in a multimodal model as-is and have it work better than multimodal transformers! VLM[1] at least added fine-tuned internal components.

Good lord we are screwed. And yet somehow I bet even this isn't going to kill off the "they're just statistical interpolators" meme.

[1] https://www.deepmind.com/blog/tackling-multiple-tasks-with-a...
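
Since "as-is" sounds hand-wavy, here's a minimal sketch of what a frozen-text-encoder setup could look like, using the HuggingFace transformers API. The model name, prompt, and comments are placeholders of my own, and the image-side conditioning isn't shown, so read it as an illustration rather than anyone's actual pipeline:

    # Take a pretrained T5 encoder off the shelf, never update its weights,
    # and hand its embeddings to an image generator.
    import torch
    from transformers import T5Tokenizer, T5EncoderModel

    tokenizer = T5Tokenizer.from_pretrained("t5-large")
    encoder = T5EncoderModel.from_pretrained("t5-large")
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False          # frozen: no fine-tuned internals

    prompt = "a corgi playing a trumpet, studio lighting"
    tokens = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        text_emb = encoder(**tokens).last_hidden_state  # (1, seq_len, d_model)

    # text_emb would then condition the image model (e.g. via cross-attention);
    # that half is the actual research and isn't shown here.

The text side really is just a pretrained encoder with the gradients switched off.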

replies(4): >>benree+U >>axg11+13 >>skybri+A6 >>hooand+Nh
2. benree+U[view] [source] 2022-05-24 00:33:50
>>Veedra+(OP)
It’s just my opinion, but I think the meme you’re talking about is deeply related to other branches of science and philosophy, ranging from the trusty old saw about AI being anything a computer hasn’t done yet to deep meditations on the nature of consciousness.

They’re all fundamentally anthropocentric: people argue until they are blue in the face about what “intelligent” means but it’s always implicit that what they really mean is “how much like me is this other thing”.

Language models, even more so than the vision models that got them funded, have empirically demonstrated that knowing the probability of two things being adjacent in some latent space is, at the boundary, indistinguishable from creating and understanding language.

I think the burden is on the bright hominids with both a reflexive language model and a sex drive to explain their pre-Copernican, unique place in the theory of computation rather than vice versa.

A lot of these problems just aren’t problems anymore if performance on tasks supersedes “consciousness” as the thing we’re studying.

replies(1): >>ravi-d+fd
3. axg11+13[view] [source] 2022-05-24 00:52:06
>>Veedra+(OP)
I firmly believe that ~20-40% of the machine learning community will say that all ML models are dumb statistical interpolators all the way until a few years after we achieve AGI. Roughly the same groups will also claim that human intelligence is special magic that cannot be recreated using current technology.

I think it’s in everyone’s interest if we start planning for a world where a significant portion of the experts are stubbornly wrong about AGI. As a technology, generally intelligent ML has the potential to change so many aspects of our world. The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

replies(3): >>sineno+l3 >>woeiru+Ad >>Daishi+Gg
4. sineno+l3[view] [source] [discussion] 2022-05-24 00:55:35
>>axg11+13
> The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

Again, I think we should consider "The Human Alignment Problem" more in this context. The transformers in question are large, heavy and not really prone to "recursive self-improvement".

If the ML-AGI works out in a few years, who gets to enter the prompts?

replies(2): >>voz_+3i >>NickNa+ek
5. skybri+A6[view] [source] 2022-05-24 01:23:31
>>Veedra+(OP)
I think it's something like a very intelligent Borgesian Library of Babel. There are all sorts of books in there, by authors with conflicting opinions and styles, due to the source material. The librarian is very good at giving you something you want to read, but that doesn't mean it has coherent opinions. It doesn't know or care what's authentic and what's a forgery. It's great for entertainment, but you wouldn't want to do research there.

For image generation, it's obviously all fiction. Which is fine and mostly harmless if you know what you're getting. It's going to leak out onto the Internet, though, and there will be photos that get passed around as real.

For text, it's all fiction too, but this isn't obvious to everyone because sometimes it's based on true facts. There's often not going to be an obvious place where the facts stop and the fiction starts.

The raw Internet is going to turn into a mountain of this stuff. Authenticating information is going to become a lot more important.

6. ravi-d+fd[view] [source] [discussion] 2022-05-24 02:30:33
>>benree+U
I'd argue that there is probably at least one leap in terms of human-level writing which isn't just pure prediction. Humans write with intent, which is how we can maintain long run structure. I definitely write like GPT while I'm not paying attention, but with the executive on the task I outperform it. For all we know this is solvable with some small tweak to architecture, and I rather doubt that a model which has solved this problem need be conscious (though our own solution seems correlated with consciousness), but it is one more step.
replies(1): >>dougmw+s11
7. woeiru+Ad[view] [source] [discussion] 2022-05-24 02:33:48
>>axg11+13
You should be much more concerned about the prospect of nuclear war right now than the sudden emergence of an AGI.
replies(2): >>random+vr >>Poigna+FS
8. Daishi+Gg[view] [source] [discussion] 2022-05-24 03:08:02
>>axg11+13
These ML models aren't capable of generating novel thinking. They allow for extracting knowledge from an existing network. They cannot come up with new ideas, figure out how to validate them, gather data, and reach conclusions.
9. hooand+Nh[view] [source] 2022-05-24 03:22:57
>>Veedra+(OP)
I haven't been overly surprised by any of it. The final product is still the same, no matter how much they scale it up.

All of these models seem to require a human to evaluate and edit the results. Even Copilot. In theory this will reduce the number of human hours required to write text or create images. But I haven't seen anyone doing that successfully at scale or solving the associated problems yet.

I'm pessimistic about the current state of AI research. It seems like it's been more of the same for many years now.

10. voz_+3i[view] [source] [discussion] 2022-05-24 03:27:26
>>sineno+l3
Me.

... ... ...

Obviously "/s", obviously joking, but meant to highlight that there are a few parties that would all answer "me" and truly mean it, often not in a positive way.

11. NickNa+ek[view] [source] [discussion] 2022-05-24 03:55:33
>>sineno+l3
A DAO.
12. random+vr[view] [source] [discussion] 2022-05-24 05:16:54
>>woeiru+Ad
100 times this. There’s very little sign of AGI, but nuclear weapons exist, can definitely destroy the planet already, are designed to, have nearly done so in the past, and we’re at the most dangerous point in decades.
13. Poigna+FS[view] [source] [discussion] 2022-05-24 09:43:56
>>woeiru+Ad
Is it really that simple?

We can worry about two things at once. We can be especially worried that at some point (maybe decades away, potentially years away), we'll have nuclear weapons and rampant AGI.

14. dougmw+s11[view] [source] [discussion] 2022-05-24 11:07:28
>>ravi-d+fd
I agree that intent is the missing piece so far. GPT can respond better to prompts than most people, but does so with a complete lack of intent. The human provides 100% of it.