zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. happyt+ZB1 2023-05-16 19:14:04
>>vforgi+(OP)
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expand on that: this just seems like a power grab to me, a way to "lock in" the lead and keep AI controlled by the small number of corporations that can afford to license and operate the technology. Obviously, this would create a critical nexus of control for a small number of well-connected and well-heeled investors, and it should be avoided at all costs.

It's also deeply troubling how common regulatory capture is these days, so putting a government entity in front of the use and very existence of this technology is a double whammy: the stakes go well beyond innovation.

The current generation of AIs is "scary" to the uninitiated because it is uncanny valley material, but beyond impersonation these systems don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious, we need to be going in the EXACT OPPOSITE direction: open-source, "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue: I wrote this post an hour or two before I could actually submit it, because Comcast went out at my house and we have no viable competitors in my area. We're about to do the same thing with AI, except instead of Internet access it's future digital brains that can control all aspects of a society.

2. noneth+DG1 2023-05-16 19:32:58
>>happyt+ZB1
This is the definition of regulatory capture. Altman should be invited to speak so that we understand the ideas in his head but anything he suggests should be categorically rejected because he’s just not in a position to be trusted. If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody.

Although I assume that if he's speaking on AI, they actually intend to consider his thoughts more seriously than I suggest.

3. pg_123+AP1 2023-05-16 20:16:15
>>noneth+DG1
There is also growing speculation that the current generation of AI may have peaked in a bang-for-buck sense.

If this is so, and given the concrete examples of cheap derived models learning from the first movers and rapidly (and did I mention cheaply) closing the gap to this peak, the optimal self-serving corporate play is to invite regulation.

After the legislative moats go up, it is once again about who has the biggest legal team ...

4. robwwi+uT1 2023-05-16 20:36:34
>>pg_123+AP1
Counterpoint: there is growing speculation we are just about to transition to AGI.

5. causal+ZW1 2023-05-16 20:53:00
>>robwwi+uT1
Growing among whom? The more I learn about and use LLMs, the more convinced I am that we're at a local maximum, and that the only way they're going to improve is by getting smaller and cheaper to run. They're still terrible at logical reasoning.

We're going to get some super cool and some super dystopian stuff out of them, but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.

6. behnam+152 2023-05-16 21:40:56
>>causal+ZW1
My thoughts exactly. It's hard to see the signal among all the noise surrounding LLMs. Even if they say they're gonna hurt you, they have no idea what it means to hurt, what "you" is, or how they would achieve that goal. They just spit out things that resemble what people have said online. There's no harm from a language model that's literally a "language" model.

7. forget+ca2 2023-05-16 22:11:02
>>behnam+152
You appear to be ignoring a few thousand years of recorded history around what happens when a demagogue gets a megaphone. Human-powered astroturf campaigns were all it took to convince randoms that lizard people are an existential threat, and then to -act- on that belief.

8. nullse+nh2 2023-05-16 22:59:34
>>forget+ca2
I think I'm just going to build and open source some really next gen astroturf software that learns continuously as it debates people online in order to get better at changing people's minds. I'll make sure to include documentation in Russian, Chinese and Corporate American English.

What would a good name be? TurfChain?

I'm serious. People don't believe this risk is real. They keep hiding it behind some nameless, faceless 'bad actor', so let's just make it real.

I don't need to use it. I'll just release it as a research project.

9. forget+tF2 2023-05-17 01:56:50
>>nullse+nh2
It's not like there isn't a market waiting impatiently for the product...

10. nullse+F53 2023-05-17 06:46:53
>>forget+tF2
It's definitely not something I would attempt to productize and profit off of. I'm virtually certain someone will, and I'm sure that capability is being worked on as we speak, since we already know this type of thing occurs at scale.

My motivation would simply be to shine a light on it. Make it real for people, so we have things to talk about other than just the hypotheticals. It's the kind of tooling that, if you're seriously motivated to employ it, you'd probably prefer to keep secret or undetected, at least until it had done its work for you. I worry that the 2024 US election will be the real litmus test for these things. All things considered, it'd be a shame to go through another Cambridge Analytica moment that, in hindsight, we really ought to have seen coming.

Some people have their doubts, and I understand that. These issues are so complex that no single individual can hope to have an accurate mental model of the world that will serve them reliably again and again. We're all going to keep being surprised as events unfold, and the degree to which we are surprised indicates the degree to which our mental models were lacking and got updated. That, to me, is why I'm erring on the side of pessimism and caution.
