* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU (GDPR Article 22), although it's not clear whether it is enforced. This is the big one. Any company using AI to make decisions that affect external parties must not be allowed to require a waiver of the right to sue, to participate in class actions, or to have the case heard by a jury. The clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.
* All marketing content must be signed by a responsible party. AI systems substantially increase the amount of new content generated for marketing purposes. This is already required in the US, but weakly enforced; both spam and "influencers" tend to violate it. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and it writes better.
* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign.[1] This is, again, the troll farm problem, and, again, AI makes it worse.
That's probably enough to deal with the immediate problems.
[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech