zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. happyt+ZB1 2023-05-16 19:14:04
>>vforgi+(OP)
We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expand on that: this just seems like a power grab to me, an attempt to "lock in" the lead and keep AI controlled by the small number of corporations that can afford to license and operate the technology. Obviously, this would create a critical nexus of control for a small number of well-connected and well-heeled investors, and it should be avoided at all costs.

It's also deeply troubling how common regulatory capture is these days, so putting a government entity in front of the use and very existence of this technology is a double whammy; the concern isn't simply stifled innovation.

The current generation of AIs is "scary" to the uninitiated because they're uncanny valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build regulatory lock-in around their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think concentration isn't an issue: I wrote this post an hour or two before I could actually submit it, because Comcast went out at my house and we have no viable competitor in my area. We're about to do the same thing with AI, except instead of Internet access it's future digital brains that can control every aspect of a society.

2. gentle+5V2 2023-05-17 04:48:31
>>happyt+ZB1
- Most, but not all, of the scariest uses for AI are those potentially by governments against their own people.

- The next most scary uses are by governments against the people of other countries.

- After that, corporate use of AI against their employees and customers is also terrifying.

- Next, the potential for individuals or small organizations to use it for something terrorism-related: e.g., 3D printers or a lab, plus an AI researcher that helps you make dangerous things, I suppose.

- Near the bottom of the noteworthy list is probably ordinary crime: e.g., hacking, blackmail, gaslighting, etc.

These problems will probably all come up in a big way over the next decade; but limiting AI research to the government and its lackeys? That's extremely terrifying. To prevent the least scary problems, we're jumping into the scariest pool with both feet.

Look at how China has been using AI for the last 5-10 years: millions of facial-recognition cameras, a scary police force, and a social credit system. In 10-20 years, how much more sophisticated will that be? If the people wanted to rebel, how on Earth could they?

Hell, with generative AI, a sufficiently sophisticated future state could actually make the Dead Internet Theory a reality.

That's the future of AI: a personal, automated boot stomping on everybody, individually and collectively, forever, with no ability to resist.
