zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. happyt+ZB1[view] [source] 2023-05-16 19:14:04
>>vforgi+(OP)
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expand on that: this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors and is to be avoided at all costs.

Regulatory capture is also deeply troubling these days, so putting a government entity in front of the use and existence of this technology is a double whammy: it's not simply about innovation.

The current generation of AIs are "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue, I wrote this post an hour or two before I managed to take it live because Comcast went out at my house, and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.

◧◩
2. noneth+DG1[view] [source] 2023-05-16 19:32:58
>>happyt+ZB1
This is the definition of regulatory capture. Altman should be invited to speak so that we understand the ideas in his head but anything he suggests should be categorically rejected because he’s just not in a position to be trusted. If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody.

Although I assume if he's speaking on AI they actually intend to consider his thoughts more seriously than I suggest.

◧◩◪
3. pg_123+AP1[view] [source] 2023-05-16 20:16:15
>>noneth+DG1
There is also growing speculation that the current level of AI may have peaked in a bang for buck sense.

If this is so, and given the concrete examples of cheap derived models learning from the first movers and rapidly (and did I mention cheaply) closing the gap to this peak, the optimal self-serving corporate play is to invite regulation.

After the legislative moats go up, it is once again about who has the biggest legal team ...

◧◩◪◨
4. robwwi+uT1[view] [source] 2023-05-16 20:36:34
>>pg_123+AP1
Counterpoint: there is growing speculation we are just about to transition to AGI.
◧◩◪◨⬒
5. Eamonn+OT1[view] [source] 2023-05-16 20:38:23
>>robwwi+uT1
Growing? Or have the same voices who have been saying it since the aughts suddenly been platformed?
◧◩◪◨⬒⬓
6. TeMPOr+552[view] [source] 2023-05-16 21:41:41
>>Eamonn+OT1
Yes, growing. It's not that the Voices have suddenly been "platformed" - it's that the field made a bunch of rapid jumps which made the message of those Voices more timely.

Recent developments in AI only further confirm that the logic of the message is sound; it's just that people are afraid of the conclusions. Everyone has a limit for how far they'll extrapolate from first principles before giving up and believing what they'd like to be true. It seems that for a lot of people in the field, AGI X-risk now falls below that extrapolation limit.

◧◩◪◨⬒⬓⬔
7. rtkwe+dh2[view] [source] 2023-05-16 22:58:50
>>TeMPOr+552
What are the actual new advancements? LLMs to me are great at faking AGI but are nowhere near being a workable general AI. The biggest example to me: you can correct even the newest ChatGPT and ask it to be truthful, but it'll make up the same lie within the same continuous conversation. IMO the gap between being able to act truth-y and actually being truthful is huge, and it involves the core ideas of what separates an actual AGI from a really good chatbot.

Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

◧◩◪◨⬒⬓⬔⧯
8. ux-app+0l2[view] [source] 2023-05-16 23:21:24
>>rtkwe+dh2
>Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

if you had described GPT to me 2 years ago I would have said no way, we're still a long way away from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving, and yet here we are.

I very much doubt that in 5 years' time we'll be talking about how GPT peaked in 2023.

◧◩◪◨⬒⬓⬔⧯▣
9. q7xvh9+ll2[view] [source] 2023-05-16 23:24:18
>>ux-app+0l2
Seriously. It's worth pausing for a minute to note that the Turing Test has been entirely solved.

In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.

And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"

◧◩◪◨⬒⬓⬔⧯▣▦
10. eroppl+3n2[view] [source] 2023-05-16 23:34:51
>>q7xvh9+ll2
It has not been solved. Even GPT-4, as impressive as it is for some use cases, is dumb and I can tell the difference between it and a human in a dozen sentences just by demanding sufficient precision.

In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. ben_w+7K3[view] [source] 2023-05-17 12:48:30
>>eroppl+3n2
"Dumb" isn't why the Turing Test isn't solved. (Have you seen unmoderated chat with normal people? Heck, even smart people outside the domain of expertise; my mum was smart enough to get into university in the UK in the early 60s, back when that wasn't the default, but still believed in the healing power of crystals, homeopathic sodium chloride and silicon dioxide, and Bach flower remedies…)

ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.

The problem with 3.5 is that it has simultaneously mastered nothing, yet beats everyone at whatever they haven't mastered: an extremely drunk 50,000-year-old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Mustermann in a blind hour-long trial.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. rtkwe+iQ3[view] [source] 2023-05-17 13:23:25
>>ben_w+7K3
The lack of an ability to take in new information is maybe the crux of my issues with the LLM-to-AGI evolution. To my understanding, the only way to have it even kind of learn something is to include it in a preamble it reprocesses every time, which is maybe workable for small facts but breaks down if you want to update it beyond the 202X corpus it was trained on.
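To make the preamble point concrete, here's a minimal sketch (hypothetical helper names, no real LLM API) of what "learning" via context looks like: every new fact has to be glued onto the front of every single request, because the model's weights never change after training.

```python
# Hypothetical sketch: a stateless LLM can only "know" new facts if they
# are re-sent in the prompt on every turn. Nothing here is a real API.

def build_prompt(facts, history, user_msg):
    """Assemble the full text the model would have to re-read each turn."""
    preamble = "\n".join(f"Fact: {f}" for f in facts)
    dialogue = "\n".join(history + [f"User: {user_msg}"])
    return preamble + "\n\n" + dialogue

# Post-training knowledge the model was never trained on:
facts = ["The 2023 budget was approved in March."]
history = ["User: hello", "Assistant: hi"]

prompt = build_prompt(facts, history, "When was the budget approved?")
print(prompt)
```

Every fact costs context-window tokens on every turn, and nothing persists into the model itself, which is why this works for a handful of small facts but can't substitute for retraining on a whole new corpus.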