zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. happyt+ZB1[view] [source] 2023-05-16 19:14:04
>>vforgi+(OP)
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expound: this just seems like a power grab to me, a way to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors, and it is to be avoided at all costs.

It's also deeply troubling how much of an issue regulatory capture is these days, so putting a government entity in front of the use and existence of this technology is a double whammy; this isn't simply about innovation.

The current generation of AIs is "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of a GPI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build regulatory lock-in around their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious, we need to be going in the EXACT OPPOSITE direction: open-source, "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue: I wrote this post an hour or two before I managed to get it posted, because Comcast went out at my house and we have no viable competitors in my area. We're about to do the same thing with AI, except instead of Internet access it's future digital brains that can control all aspects of a society.

2. ameliu+tT1[view] [source] 2023-05-16 20:36:33
>>happyt+ZB1
> But goodness gracious, we need to be going in the EXACT OPPOSITE direction: open-source, "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them

Except ... when you look at the problem from a military/national security viewpoint. Do we really want to give this tech away just like that?

3. vinay_+jW1[view] [source] 2023-05-16 20:50:48
>>ameliu+tT1
If by 'we' you mean the US, that's problematic, because AI inventions are happening all over the globe, far more outside the US than inside.
4. behnam+IX1[view] [source] 2023-05-16 20:58:28
>>vinay_+jW1
Name one significant advance in the field of LLMs that happened outside the US. Basically all the scientific papers came from Stanford, CMU, and other US universities. And the major players in the field are all American companies (OpenAI + Microsoft, Google, AnthropicAI, etc.).
5. Improb+yO2[view] [source] 2023-05-17 03:31:50
>>behnam+IX1
DeepMind is owned by Google, but it's British, and they've been behind a lot of the significant conceptual results of the last couple of years. Most other significant progress is just "engineering", so that part is all done by US corporations.

Other than that, there are also things like roformer, but I'm going to assume you won't count that as significant. By that standard, though, US universities certainly don't produce anything significant either.

6. behnam+Z33[view] [source] 2023-05-17 06:27:04
>>Improb+yO2
> “just engineering”

This tells me the extent of your knowledge about the challenges with these models.
