zlacker

[parent] [thread] 9 comments
1. holler+(OP)[view] [source] 2024-05-15 15:26:22
You're much more confident than I am that the researchers at OpenAI (or anyone else currently alive) are masters of their craft to such an extent that they could even predict whether their next big training run will result in a superintelligence. Another way of saying the same thing: the only way anyone knows that GPT-4 is not dangerously capable is that it has by now been deployed extensively enough that if it were going to harm us, it would have done so already. Not even the researchers who designed and coded up GPT-4, or watched it during training, could predict with any confidence how capable it would be. For example, everyone was quite surprised by its scoring in the 90th percentile on a bar exam.

Also, even if they never produce a superintelligence themselves, they are likely to produce insights that make it easier for other teams to do so. (Since employees are free to leave OpenAI and join some other team, there is no practical way to prevent the flow of insights out of OpenAI.)

replies(3): >>Michae+S52 >>anakai+mb2 >>inimin+pj2
2. Michae+S52[view] [source] 2024-05-16 08:25:34
>>holler+(OP)
Why do they need to be 'masters of their craft' to place directional bets?
replies(1): >>holler+Tj2
3. anakai+mb2[view] [source] 2024-05-16 09:38:13
>>holler+(OP)
Call me uninformed, but I do not see a way forward where a statistical model trained to recognise relationships between words or groups of words, with a front end coded to query that model, could suddenly develop its own independence. That's a whole other thing, where the code that interacts with it would have to allow for constant feedback loops of self-improvement and the vast amount of evolutionary activity that entails.

An interactive mathematical model is not going to run away on its own without some very deliberate steps to take it in that direction.
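To make that concrete, here is a minimal sketch of what I mean (query_model is a hypothetical stand-in for whatever inference call a front end actually makes, not any particular vendor's API):

    # Minimal sketch of how a front end typically drives an LLM.
    # query_model is a hypothetical stand-in for an inference call; it is a
    # pure function of its input: frozen weights, forward pass, sampled
    # tokens, nothing written back to the model.

    def query_model(prompt: str) -> str:
        return "some plausible continuation of: " + prompt

    def chat_turn(user_message: str, history: list[str]) -> str:
        prompt = "\n".join(history + [user_message])
        reply = query_model(prompt)
        # All "memory" lives in this wrapper code, not in the model itself.
        history.extend([user_message, reply])
        return reply

    # Anything resembling self-improvement would be an outer loop someone
    # deliberately writes (e.g. feeding outputs back into further training);
    # the model cannot bolt that loop onto itself.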

replies(2): >>HDThor+Ri2 >>fnordp+zT2
4. HDThor+Ri2[view] [source] [discussion] 2024-05-16 11:19:22
>>anakai+mb2
We have no idea how consciousness works. Just because you don't see a way forward doesn't mean it's not there.
replies(1): >>echoan+7F4
5. inimin+pj2[view] [source] 2024-05-16 11:27:31
>>holler+(OP)
As someone who has worked on LLMs somewhat extensively, the idea that we are going to accidentally make a superintelligence by that path is literally laughable.
6. holler+Tj2[view] [source] [discussion] 2024-05-16 11:34:53
>>Michae+S52
Hmm. It's hard for me to see why you think 'directional bet' helps us understand the situation.

Certainly, the researchers want the model to be as useful as possible, so there we have what I would call a 'directional bet'; but since usefulness is correlated with the capability to do harm (i.e., dangerousness), that bet is probably not what you are referring to.

7. fnordp+zT2[view] [source] [discussion] 2024-05-16 15:08:55
>>anakai+mb2
You're right. But are you saying LLMs couldn't be part of a more complex system, similar to how our brain appears to be several integrated systems with special purposes and interdependence? I assume you're not assuming everything is static and that OpenAI is incapable of doing anything other than offering incremental refinements to ChatGPT? Just because they released X doesn't mean Y+X isn't coming. And we are talking about a longer game than "right this very second": where do things go over 10 years? It's not like OpenAI is going anywhere.

Maybe the people who point out that tar in tobacco is dangerous, that nicotine is addictive, and that maybe we shouldn't add more of it for profit, would be useful to have around just in case we get there.

But even if we don't, an increasingly capable multimodal AI has a lot of utility for good and bad. Are we creating power tools with no safety? Or a safety designed by a bunch of engineers whose life experience extends to their PhD program at an exclusive school studying advanced mathematics? When their limited world collides with complex moral and ethical domains, they don't always have enough context to know why things are the way they are, or that our forefathers weren't idiots. They often blunder into mistakes out of hubris.

Put it another way: the chance they succeed is non-zero. The possibility that they succeed and create a powerful tool that's incredibly dangerous is non-zero too. Maybe we should try to hedge that risk?

replies(1): >>anakai+hE9
8. echoan+7F4[view] [source] [discussion] 2024-05-17 03:35:16
>>HDThor+Ri2
I think the point was that, on a purely technical level, LLMs as currently used can't do anything on their own. They only continue a prompt when given one. It's not like an LLM could "decide" to hack the NSA and publish the data tomorrow because it determined that this would help humanity. The only thing it can do is try to get people to do something when they read its responses.
replies(1): >>anakai+qE9
9. anakai+hE9[view] [source] [discussion] 2024-05-19 08:30:15
>>fnordp+zT2
I was not saying that LLMs could not be part of a more complex system. What I was saying is that the more complex system is what likely needs to be the focus of discussion rather than the LLM itself.

Basically, the LLM won't run away on its own.

I do agree with a safety focus and guardrails. I don't agree with Chicken Little "the sky is falling" claims.

10. anakai+qE9[view] [source] [discussion] 2024-05-19 08:32:07
>>echoan+7F4
This is a good interpretation of the point I was getting at, yes.