zlacker

[parent] [thread] 1 comments
1. fnordp+(OP)[view] [source] 2024-05-16 15:08:55
You’re right. But are you saying LLMs couldn’t be part of a more complex system, similar to how our brain appears to be several integrated systems with special purposes and interdependence? I assume you’re not assuming everything is static and that OpenAI is incapable of doing anything other than offering incremental refinements to ChatGPT? Just because they released X doesn’t mean Y+X isn’t coming. And we are talking about a longer game than “right this very second”: where do things go over 10 years? It’s not like OpenAI is going anywhere.

Maybe the people who point out that tar in tobacco is dangerous, that nicotine is addictive, and that maybe we shouldn’t add more of it for profit, would be useful to have around just in case we get there.

But even if we don’t, an increasingly capable multimodal AI has a lot of utility for good and for bad. Are we creating power tools with no safety guards? Or safety written by a bunch of engineers whose life experience extends to their PhD program at an exclusive school studying advanced mathematics? When their limited world collides with complex moral and ethical domains, they don’t always have enough context to know why things are the way they are, or that our forefathers weren’t idiots. They often blunder into mistakes out of hubris.

Put it another way: the chance they succeed is non-zero. The possibility that they succeed and create a powerful tool that’s incredibly dangerous is non-zero too. Maybe we should try to hedge that risk?

replies(1): >>anakai+IK6
2. anakai+IK6[view] [source] 2024-05-19 08:30:15
>>fnordp+(OP)
I was not saying that LLMs could not be part of a more complex system. What I was saying is that the more complex system is what likely needs to be the focus of discussion rather than the LLM itself.

Basically- the LLM won't run away on its own.

I do agree with a safety focus and guardrails. I don’t agree with Chicken Little sky-is-falling claims.

[go to top]