zlacker

[parent] [thread] 123 comments
1. elil17+(OP)[view] [source] 2023-05-16 14:39:28
This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]

replies(16): >>letter+z3 >>freedo+48 >>kubota+Va >>dist-e+Yb >>circui+9c >>abeppu+mu >>JumpCr+mB >>jamesh+bS >>nerpde+H21 >>reaper+y41 >>deping+b91 >>brooks+ui1 >>simonh+Oj1 >>johnal+6B1 >>rlytho+VC1 >>cabaal+hK1
2. letter+z3[view] [source] 2023-05-16 14:55:30
>>elil17+(OP)
I made something just for writing your congress person / senator, using generative AI ironically: https://vocalvoters.com/
replies(2): >>samsol+h7 >>blibbl+rc1
◧◩
3. samsol+h7[view] [source] [discussion] 2023-05-16 15:13:52
>>letter+z3
Cool product! Your pay button appears to be disabled though.
replies(1): >>letter+Yv
4. freedo+48[view] [source] 2023-05-16 15:18:04
>>elil17+(OP)
> Altman and his ilk

IANA senator, but if I were one, you lost me there. The personal insults make it seem petty and completely overshadow the otherwise professional-sounding message.

replies(3): >>elil17+ya >>anigbr+Fg >>to11mt+v61
◧◩
5. elil17+ya[view] [source] [discussion] 2023-05-16 15:28:05
>>freedo+48
I don't mean it as a personal insult at all! The word ilk actually means "a type of people or things similar to those already referred to"; it is not an insult or a rude word.
replies(4): >>freedo+yb >>mitch3+ec >>helloj+Pu >>logdap+EF
6. kubota+Va[view] [source] 2023-05-16 15:30:07
>>elil17+(OP)
You lost me at "While AI regulation is important" - nope, congress does not need to regulate AI.
replies(6): >>silver+Sg >>tessie+Lk >>wnevet+yt >>haswel+lH >>runarb+uZ >>larati+Rb2
◧◩◪
7. freedo+yb[view] [source] [discussion] 2023-05-16 15:32:30
>>elil17+ya
TIL! https://www.merriam-webster.com/dictionary/ilk

still, there are probably a lot of people like me who have heard it used (incorrectly it seems) as an insult so many times that it's an automatic response :-(

replies(2): >>TheSpi+kl >>DANmod+2M
8. dist-e+Yb[view] [source] 2023-05-16 15:33:51
>>elil17+(OP)
Can you please share what ChatGPT prompt you used to generate this letter template?
replies(1): >>elil17+4f
9. circui+9c[view] [source] 2023-05-16 15:34:42
>>elil17+(OP)
The worries about AI taking over things are founded and important, even if many sci-go depictions of it are inaccurate. I’m not sure if this would be the best solution but please don’t dismiss the issue entirely
replies(2): >>circui+4Z >>alfalf+D21
◧◩◪
10. mitch3+ec[view] [source] [discussion] 2023-05-16 15:34:50
>>elil17+ya
It’s always used derogatorily. I agree that you should change it if you don’t mean for it to come across that way.
replies(3): >>elil17+Me >>anigbr+Ng >>TheSpi+Mm
◧◩◪◨
11. elil17+Me[view] [source] [discussion] 2023-05-16 15:44:16
>>mitch3+ec
That's simply untrue. Here are several recently published articles which use ilk in a neutral or positive context:

https://www.telecomtv.com/content/digital-platforms-services...

https://writingillini.com/2023/05/16/illinois-basketball-ill...

https://www.jpost.com/j-spot/article-742911

replies(3): >>fauxpa+ni >>jerry1+Dk >>dustyl+mn
◧◩
12. elil17+4f[view] [source] [discussion] 2023-05-16 15:45:09
>>dist-e+Yb
I used this old-fashioned method of text generation called "writing" - crazy, I know
◧◩
13. anigbr+Fg[view] [source] [discussion] 2023-05-16 15:51:56
>>freedo+48
Ilk is shorthand for similarity, nothing more. The 'personal insult' is a misunderstanding on your part.
replies(2): >>catiop+EH >>brooks+Lj1
◧◩◪◨
14. anigbr+Ng[view] [source] [discussion] 2023-05-16 15:52:34
>>mitch3+ec
Not true.
replies(1): >>happyt+gk1
◧◩
15. silver+Sg[view] [source] [discussion] 2023-05-16 15:52:39
>>kubota+Va
They might have lost you. But starting with "congress shouldn't regulate AI" would lose the senator.

Which one do you think is more important to convince?

replies(1): >>polski+tZ
◧◩◪◨⬒
16. fauxpa+ni[view] [source] [discussion] 2023-05-16 15:58:05
>>elil17+Me
Doesn’t matter. It won’t be well received. It sounds negative to most readers, and being technically correct wins you no points.
replies(1): >>elil17+Ll
◧◩◪◨⬒
17. jerry1+Dk[view] [source] [discussion] 2023-05-16 16:05:34
>>elil17+Me
Remember: you are doing propaganda. Feelings don't care about your facts.
◧◩
18. tessie+Lk[view] [source] [discussion] 2023-05-16 16:06:10
>>kubota+Va
"important" does not mean "good." if you are in the field of AI, AI regulation is absolutely important, whether good or bad.
◧◩◪◨
19. TheSpi+kl[view] [source] [discussion] 2023-05-16 16:07:40
>>freedo+yb
Don't worry, you're not a senator.

And, if there's one thing politicians are known for, it's got to be ad hominem.

◧◩◪◨⬒⬓
20. elil17+Ll[view] [source] [discussion] 2023-05-16 16:10:02
>>fauxpa+ni
Well, I don't think it really matters what most readers think of it, because I was writing it hoping that it would be read by congressional staffers, who I think will know what ilk means.
replies(1): >>ChrisC+6p
◧◩◪◨
21. TheSpi+Mm[view] [source] [discussion] 2023-05-16 16:13:34
>>mitch3+ec
I mean, at this point I'm going to argue that if you believe ilk is only ever used derogatorily, you're only reading and hearing people who have axes to grind.

I probably live quite distally to you and am probably exposed to parts of western culture you probably aren't, and I almost never hear nor read ilk as a derogation or used to associate in a derogatory manner.

◧◩◪◨⬒
22. dustyl+mn[view] [source] [discussion] 2023-05-16 16:15:55
>>elil17+Me
It is technically true that ilk is not always used derogatorily. But it is almost always derogatory in modern connotation.

https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

Also, note that all of the negative examples are politics related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It might be the case that ilk does "always mean" a negative connotation in politics.

You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.

"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.

"Altman claims that..." removes all connotation and sticks to just the facts.

replies(1): >>elil17+Hq
◧◩◪◨⬒⬓⬔
23. ChrisC+6p[view] [source] [discussion] 2023-05-16 16:23:14
>>elil17+Ll
It's also possible you could be wrong about something, and maybe people are trying to help you.
◧◩◪◨⬒⬓
24. elil17+Hq[view] [source] [discussion] 2023-05-16 16:28:59
>>dustyl+mn
Well even if ilk had a negative connotation for my intended audience (which clearly it does to some people), I am actually trying to invalidate and dismiss Altman's arguments.
replies(1): >>dustyl+bC
◧◩
25. wnevet+yt[view] [source] [discussion] 2023-05-16 16:41:57
>>kubota+Va
> nope, congress does not need to regulate AI.

Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?

replies(1): >>pizza+Av
26. abeppu+mu[view] [source] 2023-05-16 16:44:48
>>elil17+(OP)
> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

While in general I share the view that _research_ should be unencumbered but deployment should be regulated, I do take issue with your view that safety only matters once they are ready for "widespread use". A tool which is made available in a limited beta can still be harmful, misleading, or too-easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.

For example, suppose next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make e.g. highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs produced by your model, most people won't believe the victim when they assert that the footage is fake.

I would suggest that the first non-private (e.g. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

replies(10): >>nvegat+5z >>elil17+DT >>shon+3j1 >>verdve+4w1 >>chasd0+Qx1 >>mr_toa+DV1 >>13of40+272 >>8note+C72 >>kraf+io2 >>random+LU2
◧◩◪
27. helloj+Pu[view] [source] [discussion] 2023-05-16 16:46:35
>>elil17+ya
Don't fret too much. I once wrote to my senator about their desire to implement an unconstitutional wealth tax and told them that if they wanted to fuck someone so badly they should kill themself so they could go blow Jesus, and I still got a response back.
◧◩◪
28. pizza+Av[view] [source] [discussion] 2023-05-16 16:49:10
>>wnevet+yt
I think this is a great argument in the opposite direction.. atoms matter, information isn’t. A small group of people subjugated many others to poisonous matter. That matter affected their bodies and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure they’re reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean that the blame falls on the tool that cut the corners, but rather the corner-cutters themselves.

replies(3): >>ptsnev+GZ >>hkt+001 >>DirkH+Nk1
◧◩◪
29. letter+Yv[view] [source] [discussion] 2023-05-16 16:50:15
>>samsol+h7
Should enable once you add valid info — if not, let me know
◧◩
30. nvegat+5z[view] [source] [discussion] 2023-05-16 17:04:07
>>abeppu+mu
I think by "widespread use" he means the reach of the AI System. Dangerous analogy but just to get the idea across: In the same way there is higher tax rates to higher incomes, you should increase regulations in relation to how many people could be potentially affected by the AI system. E.G a Startup with 10 daily users should not be in the same regulation bracket as google. If google deploys an AI it will reach Billions of people compared to 10. This would require a certain level of transparency from companies to get something like an "AI License type" which is pretty reasonable given the dangers of AI (the pragmatic ones not the DOOMsday ones)
replies(1): >>abeppu+tC
31. JumpCr+mB[view] [source] 2023-05-16 17:14:45
>>elil17+(OP)
> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use

What would you say to a simple registration requirement? You give a point of contact and a description of training data, model, and perhaps intended use (could be binary: civilian or dual use). One page, publicly visible.

This gives groundwork for future rulemaking and oversight if necessary.

replies(1): >>elil17+JB
◧◩
32. elil17+JB[view] [source] [discussion] 2023-05-16 17:16:09
>>JumpCr+mB
Personally I think a simple registration requirement would be a good idea, if it were truly simple and accessible to independent researchers.
◧◩◪◨⬒⬓⬔
33. dustyl+bC[view] [source] [discussion] 2023-05-16 17:17:47
>>elil17+Hq
When someone is arguing from a position of strength, they don't need to resort to petty jibes.

You are already arguing from a position of strength.

When you add petty jibes, it weakens your perceived position, because it suggests that you think you need them, rather than relying solely on your argument.

(As a corollary, you should never use petty jibes. When you feel like you need to, shore up your argument instead.)

replies(1): >>elil17+FS
◧◩◪
34. abeppu+tC[view] [source] [discussion] 2023-05-16 17:19:08
>>nvegat+5z
But the "reach" is _not_ just a function of how many users the company has, it's also what they do with it. If you have only one user who generates convincing misinformation that they share on social media, the reach may be large even if your user-base is tiny. Or your new voice-cloning model is used by a single user to make a large volume of fake hostage proof-of-life recordings. The problem, and the reason for guardrails (whether regulatory or otherwise), is that you don't know what your users will do with your new tech, even if there's only a small number of them.
replies(2): >>elil17+YT >>nvegat+mb1
◧◩◪
35. logdap+EF[view] [source] [discussion] 2023-05-16 17:33:17
>>elil17+ya
> Kind; class; sort; type; -- sometimes used to indicate disapproval when applied to people.

> The American Heritage® Dictionary of the English Language, 5th Edition.

Yes, only sometimes used to indicate disapproval, but such ambiguity does not work in your favor here. It is better to remove that ambiguity.

◧◩
36. haswel+lH[view] [source] [discussion] 2023-05-16 17:40:52
>>kubota+Va
I’d argue that sweeping categorical statements like this are at the center of the problem.

People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.

I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.

◧◩◪
37. catiop+EH[view] [source] [discussion] 2023-05-16 17:42:07
>>anigbr+Fg
“ilk” has acquired a negative connotation in its modern usage.

See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

replies(1): >>anigbr+b31
◧◩◪◨
38. DANmod+2M[view] [source] [discussion] 2023-05-16 18:05:18
>>freedo+yb
Tone, intended or otherwise, is a pretty important part of communication!
replies(1): >>nerpde+D41
39. jamesh+bS[view] [source] 2023-05-16 18:38:46
>>elil17+(OP)
> Now, because we had the freedom to innovate, AI will be bringing new, high paying jobs to our factories in our state.

Do we really have to play this game?

If what you’re arguing for is not going to specifically advantage your state over others, and the thing you’re arguing against isn’t going to create an advantage for other states over yours, why make this about ‘your state’ in the first place?

The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

replies(5): >>Pet_An+VS >>elil17+cT >>hacker+nX >>amalco+5l1 >>dragon+gt1
◧◩◪◨⬒⬓⬔⧯
40. elil17+FS[view] [source] [discussion] 2023-05-16 18:41:20
>>dustyl+bC
Well I didn't intend it as a "petty jibe," but in general I disagree. Evocative language and solid arguments can and do coexist.
◧◩
41. Pet_An+VS[view] [source] [discussion] 2023-05-16 18:42:46
>>jamesh+bS
> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

That is painfully naive, a history of pork projects speaks otherwise.

replies(1): >>hkt+xZ
◧◩
42. elil17+cT[view] [source] [discussion] 2023-05-16 18:43:59
>>jamesh+bS
What I was thinking in my head (although I don't think I articulated this well) is that I hope that smaller businesses who build their own AIs will be able to create some jobs, even if AI as a whole will negatively impact employment (and I think that's going to happen even if just big businesses can play at the AI game).
◧◩
43. elil17+DT[view] [source] [discussion] 2023-05-16 18:46:10
>>abeppu+mu
I agree with you. I think that's an excellent and specific proposal for how AI could be regulated. I think you should share this with your senators/representatives.
◧◩◪◨
44. elil17+YT[view] [source] [discussion] 2023-05-16 18:48:01
>>abeppu+tC
I think this gets at what I meant by "widespread use" - if the results of the AI are being put out into the world (outside of, say, a white paper), that's something that should be subject to scrutiny, even if only one person is using the AI to generate those results.
◧◩
45. hacker+nX[view] [source] [discussion] 2023-05-16 19:04:14
>>jamesh+bS
Not to be ignored: the development of AI will wipe out jobs in the state.
◧◩
46. circui+4Z[view] [source] [discussion] 2023-05-16 19:12:42
>>circui+9c
*sci-fi but I can’t edit it now
◧◩◪
47. polski+tZ[view] [source] [discussion] 2023-05-16 19:14:08
>>silver+Sg
"Congress cannot regulate AI"

https://www.eff.org/deeplinks/2015/04/remembering-case-estab...

◧◩
48. runarb+uZ[view] [source] [discussion] 2023-05-16 19:14:12
>>kubota+Va
If AI is to be a consumer good—which it already is—it needs to be regulated, at the very least to ensure equal quality to a diverse set of customers and other users. Unregulated there is high risk of people being affected by e.g. employers and landlords using AI to discriminate. Or you being sold an AI solution which isn’t as advertised.

If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed for racist behavior.

replies(1): >>candio+9L1
◧◩◪
49. hkt+xZ[view] [source] [discussion] 2023-05-16 19:14:22
>>Pet_An+VS
To the best of my knowledge this doesn't happen so much in more functional democracies. It seems to be more of an anglophone thing.
replies(2): >>Aperoc+S01 >>titzer+sb1
◧◩◪◨
50. ptsnev+GZ[view] [source] [discussion] 2023-05-16 19:14:50
>>pizza+Av
You ended up providing examples that have no matter or atoms: protecting jobs, or oversight of complex systems.

These are policies which are purely imaginary. Only when they get implemented into human law do they get a grain of substance, but still imaginary. Failure to comply can be kinetic but that is a contingency, not the object (matter :D).

Personally I see good reasons for having regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. At least western society would be severely affected if these subjects were suddenly a free-for-all.

I am not convinced we need such regulation for AI at this point of technology readiness, but if social implications create unacceptable imbalances we can start by regulating in detail. If detailed caveats still do not work then broader law can come. Which leads to my own theory:

All this turbulence about regulation reflects a mismatch between technological, political and legal knowledge. Tech people don't know law nor how it flows from policy. Politicians do not know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for techs, the killing of a promising new field; for politicians, the prospect of a big majority of their constituency rendered useless.

Final point: if one sees AI as a means of production which can be monopolised by a capital-rich few, we may see a 19th century inequality remake. It created one of the most powerful ideologies known: Communism.

replies(1): >>mLuby+u72
◧◩◪◨
51. hkt+001[view] [source] [discussion] 2023-05-16 19:17:02
>>pizza+Av
> atoms matter, information isn’t

Algorithmic discrimination already exists, so um, yes, information matters.

Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance, just.. imagine AI underwriters. There's no court of appeal for insurance. It matters.

replies(2): >>pizza+If1 >>johnny+jn1
◧◩◪◨
52. Aperoc+S01[view] [source] [discussion] 2023-05-16 19:20:51
>>hkt+xZ
This is a product of incentives encouraged by the system (i.e. a federal republic), it has nothing to do with languages.
replies(2): >>hkt+e21 >>jamesh+js1
◧◩◪◨⬒
53. hkt+e21[view] [source] [discussion] 2023-05-16 19:25:12
>>Aperoc+S01
It has much to do with culture though - which is transmitted via language.
replies(1): >>Pet_An+kt1
◧◩
54. alfalf+D21[view] [source] [discussion] 2023-05-16 19:26:34
>>circui+9c
Seriously, I'm very concerned by the view being taken here. AI has the capacity to do a ton of harm very quickly. A couple of examples:

- Scamming via impersonation
- Misinformation
- Usage of AI in a way that could have serious legal ramifications for incorrect responses
- Severe economic displacement

Congress can and should examine these issues. Just because OP works at an AI company doesn't mean that company can't exist in a regulated industry.

I too work in the AI space and welcome thoughtful regulation.

replies(3): >>bcrosb+wj1 >>chasd0+7z1 >>Dalewy+FB1
55. nerpde+H21[view] [source] 2023-05-16 19:26:40
>>elil17+(OP)
Let's not focus on "the business" and instead focus on safety.

Altman can have an ulterior motive, but it doesn't mean that we shouldn't strive for having some sort of handle on AI safety.

It could be that Altman and OpenAI know exactly how this will look and the backlash that will ensue, such that we get ZERO oversight and rush headlong into doom.

Short term we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.

◧◩◪◨
56. anigbr+b31[view] [source] [discussion] 2023-05-16 19:28:13
>>catiop+EH
This is too subjective to be useful.
replies(2): >>shagie+491 >>kerowa+9g1
57. reaper+y41[view] [source] 2023-05-16 19:34:57
>>elil17+(OP)
> This is the message I shared with my senator

If you sent it by e-mail or web contact form, chances are you wasted your time.

If you really want attention, you'll send it as a real letter. People who take the time to actually send real mail are taken more seriously.

◧◩◪◨⬒
58. nerpde+D41[view] [source] [discussion] 2023-05-16 19:35:20
>>DANmod+2M
There is this idea that the shape of a word, how it makes your mouth and face move when you say it, connotes meaning on its own. This is called "phonosemantics"; just saying "ilk" makes one feel like they are flinging off some sticky aggressive slime.

Ilk almost always has a negative connotation regardless of what the dictionary says.

◧◩
59. to11mt+v61[view] [source] [discussion] 2023-05-16 19:43:23
>>freedo+48
reducto ad nounium is a poor argument.
◧◩◪◨⬒
60. shagie+491[view] [source] [discussion] 2023-05-16 19:56:01
>>anigbr+b31
I would be curious to see an example of 'ilk' being used in a modern, non-Scottish context where the association is shown in a neutral or positive light.

I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)

61. deping+b91[view] [source] 2023-05-16 19:56:30
>>elil17+(OP)
What's the point of these letters? Everyone knows this is rent-seeking behavior by OpenAI, and they're going to pay off the right politicians to get it passed.

Dear Senator [X],

It's painfully obvious that Sam Altman's testimony before the judiciary committee is an attempt to set up rent-seeking conditions for OpenAI, and to snuff out competition from the flourishing open source AI community.

We will be carefully monitoring your campaign finances for evidence of bribery.

Hugs and Kisses,

[My Name]

replies(3): >>verdve+ca1 >>kweinb+gd1 >>peripi+F92
◧◩
62. verdve+ca1[view] [source] [discussion] 2023-05-16 20:00:55
>>deping+b91
If you want to influence the politicians without money, this is not the way.
replies(2): >>mark_l+hy1 >>rlytho+DC1
◧◩◪◨
63. nvegat+mb1[view] [source] [discussion] 2023-05-16 20:07:02
>>abeppu+tC
Good point. As a non-native speaker I thought reach was related to a quantity, but that was wrong. Thanks for the clarification.
◧◩◪◨
64. titzer+sb1[view] [source] [discussion] 2023-05-16 20:07:22
>>hkt+xZ
Corruption is a kind of decay that afflicts institutions. Explicit rules, transparency, checks and balances, and consequences for violating the rules are the only things that can prevent, or diagnose, or treat corruption. Where you find corruption is where one or more of these things is lacking. It has absolutely nothing to do with the -cracy or -ism attached to a society, institution, or group.
replies(1): >>dragon+Bt1
◧◩
65. blibbl+rc1[view] [source] [discussion] 2023-05-16 20:12:20
>>letter+z3
AI generated persuasion is pretty much what they're upset about
◧◩
66. kweinb+gd1[view] [source] [discussion] 2023-05-16 20:17:13
>>deping+b91
Did you watch the hearing? He specifically said that licensing wouldn’t be for the smaller places and didn’t want to impede their progress. The pitfalls of consolidation and regulatory capture also came up.
replies(1): >>phpist+wf1
◧◩◪
67. phpist+wf1[view] [source] [discussion] 2023-05-16 20:27:40
>>kweinb+gd1
> He specifically said that licensing wouldn't be for the smaller places

This is not a rebuttal to regulatory capture. It is in fact built into the model.

These "small companies" are feeder systems for the large company: a place for companies to rise to the level where they would come under the burden of regulations, then be prevented from growing larger, thereby making them very easy for the large company to acquire.

The small company has to sell or raise massive amounts of capital to just piss away on compliance costs. Most will just sell.

replies(1): >>SoftTa+6h1
◧◩◪◨⬒
68. pizza+If1[view] [source] [discussion] 2023-05-16 20:28:42
>>hkt+001
I am literally agreeing with you but in a much more precise way. These are questions of “who gets what stuff”, “who gets which house”, “who gets which heart transplant”, “which human being sits in the big chair at which corporation”, “which file on which server that’s part of the SWIFT network reports that you own how much money”, “which wannabe operator decides their department needs to purchase which fascist predictive policing software”, etc.

Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?

Basically I think it’s totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy washy appeal to bad probability theory.

The -plogp that you use to judge the sense of some predicted action you take is just a model, it’s just numbers in RAM. Only when those numbers are converted into destructive social decisions does it convert into something of consequence.

I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.

◧◩◪◨⬒
69. kerowa+9g1[view] [source] [discussion] 2023-05-16 20:31:23
>>anigbr+b31
language is subjective
◧◩◪◨
70. SoftTa+6h1[view] [source] [discussion] 2023-05-16 20:37:33
>>phpist+wf1
The genie is out of the bottle. The barriers to entry are too low, and the research can be done in parts of the world that don't give $0.02 about what the US Congress thinks.
replies(1): >>enigmo+9B1
71. brooks+ui1[view] [source] 2023-05-16 20:44:31
>>elil17+(OP)
What specific ideas has Altman proposed that you disagree with? And where has he said AI could hack its way out of a laboratory?

I agree with being skeptical of proposals from those with vested interests, but are you just arguing against what you imagine Altman will say, or did I miss some important news?

◧◩
72. shon+3j1[view] [source] [discussion] 2023-05-16 20:47:15
>>abeppu+mu
> For example, suppose next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make e.g. highly realistic revenge porn.

You make a great point here. This is why we need as much open source and as much wide adoption as possible. Wide adoption = public education in the most effective way.

The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.

The benefit of adoption is education. The world is already adapting.

Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.

replies(1): >>abeppu+Cs1
◧◩◪
73. bcrosb+wj1[view] [source] [discussion] 2023-05-16 20:49:46
>>alfalf+D21
You're never going to be able to regulate what a person's computer can run. We've been through this song and dance with cryptography. Trying to keep it out of the hands of bad actors will be a waste of time, effort, and money.

These resources should be spent lessening the impact rather than trying to completely control it.

replies(2): >>staunt+tx1 >>ChatGT+mv2
◧◩◪
74. brooks+Lj1[view] [source] [discussion] 2023-05-16 20:50:47
>>anigbr+Fg
"Ilk" definitely has a negative or dismissive connotation, at least in the US. You would never use it to express positive thoughts; you would use "stature" or similar.

The denotation may not be negative, but if you use ilk in what you see as a neutral way, people will get a different message than you're trying to send.

75. simonh+Oj1[view] [source] 2023-05-16 20:50:50
>>elil17+(OP)
> ..is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice.

I’m sympathetic to your position in general, but I can’t believe you wrote that with a straight face. “I don’t know how it would do it, therefore we should completely ignore the risk that it could be done.”

I’m no security expert, but I’ve been following the field incidentally and dabbling since writing login prompt simulators for the Prime terminals at college to harvest user account passwords. When I was a Unix admin I used to have fun figuring out how to hack my own systems. Security is unbelievably hard. An AI eventually jailbreaking is a near certainty we need to prepare for.

replies(1): >>elil17+i63
◧◩◪◨⬒
76. happyt+gk1[view] [source] [discussion] 2023-05-16 20:51:50
>>anigbr+Ng
I'd argue that you're right that there's nothing intrinsically disparaging about ilk as a word, but in contemporary usage it does seem to have become quite negative. I know the dictionary doesn't say it, but in my discussions it seems to have shifted towards the negative.

Consider this: "Firefighters and their ilk." It's not a word that nicely describes a group, even though that's what it's supposed to do. I think the language has moved to where we just say "firefighters" now when it's positive, and "ilk" or "et al." when there's a negative connotation.

Just my experience.

◧◩◪◨
77. DirkH+Nk1[view] [source] [discussion] 2023-05-16 20:55:19
>>pizza+Av
Your argument could just as easily be applied to human cloning and argue for why human cloning and genetic engineering for specific desirable traits should not be illegal.

And it isn't a strong argument for the same reason that it isn't a good argument when used to argue we should allow human cloning and just focus on regulating the more direct causal links like non-clone employment loss from mass produced hyper-intelligent clones, and ensuring they have legal rights, and having proper oversight and non-clone human accountability.

Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.

It is not impossible that a similar call is also appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be the right call to outright ban some forms of advanced AI research just like we did with some forms of advanced genetic engineering research.

This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.

At least until our institutions can catch up.

replies(1): >>pizza+j52
◧◩
78. amalco+5l1[view] [source] [discussion] 2023-05-16 20:57:52
>>jamesh+bS
> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

A lot of said constituents' views are, in practice, that they should receive special advantages.

◧◩◪◨⬒
79. johnny+jn1[view] [source] [discussion] 2023-05-16 21:10:47
>>hkt+001
> Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance

Why do so many Americans think universal health care means there is no private insurance? In most countries, insurance is compulsory and tightly regulated. Some like the Netherlands and France have public insurance offered by the government. In other places like Germany, your options are all private, but underprivileged people have access to government subsidies for insurance (Americans do too, to be fair). Get sick in one of these places as an American, you will be handed a bill and it will still make your head spin. Most places in Europe work like this. Of course, even in places with nationalized healthcare like the UK, non-residents would still have to pay. What makes Germany and NL and most other European countries different from that system is if you're a resident without an insurance policy, you will also have to pay a hefty fine. You are basically auto-enrolled in an invisible "NHS" insurance system as a UK resident. Of course, most who can afford it in the UK still pay for private insurance. The public stuff blends being not quite good with generally poor availability.

Americans are actually pretty close to Germany with their healthcare. What makes the US system shitty can be boiled down to two main factors:

- Healthcare networks (and state incorporation laws) making insurance basically useless outside of a small collection of doctors and hospitals, and especially your state

- Very little regulation on insurance companies, pharmaceutical companies or healthcare providers in price-setting

The latter is especially bad. My experience with American health insurance has been that I pay more for much less. $300/month premiums and still even seeing a bill is outrageous. AI underwriters won't fix this, yeah, but they aren't going to make it any worse because the problem is in the legislative system.

> There's no court of appeal for insurance.

No, but you can of course always sue your insurance company for breach of contract if they're wrongfully withholding payment. AI doesn't change this, but AI can make this a viable option for small people by acting as a lawyer. Well, in an ideal world anyways. The bar association cartels have been very quick to raise their hackles and hiss at the prospect of AI lawyers. Not that they'll do anything to stop AI from replacing most duties of a paralegal of course. Can't have the average person wielding the power of virtually free, world class legal services.

replies(1): >>menset+IR1
◧◩◪◨⬒
80. jamesh+js1[view] [source] [discussion] 2023-05-16 21:39:28
>>Aperoc+S01
Seems like it’s under-studied (due to anglophone bias in the English language political science world probably) - but comparative political science is a discipline, and this paper suggests it’s a matter of single-member districts rather than the nature of the constitutional arrangement: https://journals.sagepub.com/doi/10.1177/0010414090022004004

(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)

◧◩◪
81. abeppu+Cs1[view] [source] [discussion] 2023-05-16 21:41:50
>>shon+3j1
I think the stance that regulation slows innovation and adoption, and that unregulated adoption yields public understanding is exceedingly naive, especially for technically sophisticated products.

Imagine if, e.g., drug testing and manufacture were subject to no regulations. As a consumer, you can be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product has the chemicals it says it has, that it was produced in a way that ensures a consistent product, or that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if wide adoption of drugs from a range of producers occurs, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications have their active ingredients in the amounts stated, and are produced with high quality assurance, and are actually shown to be effective? Oh, no, says a pharma industry PR person. "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."

If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or figure out "can we stop it from doing Z?", expecting "wide adoption" to force "public education" to be sufficient to defuse all harms such that no regulation whatsoever is necessary is ... beyond optimistic.

replies(3): >>verdve+Kw1 >>shon+jA1 >>komali+fL1
◧◩
82. dragon+gt1[view] [source] [discussion] 2023-05-16 21:45:04
>>jamesh+bS
> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

The views of their constituents are probably in favor of special advantages for their constituents, so the one may imply the other.

I mean, some elected representatives may represent constituencies consisting primarily of altruistic angels, but that is…not the norm.

◧◩◪◨⬒⬓
83. Pet_An+kt1[view] [source] [discussion] 2023-05-16 21:45:23
>>hkt+e21
I think it's more like culture carries language with it. Along with other things, but language is one of the more recognizable ones.
◧◩◪◨⬒
84. dragon+Bt1[view] [source] [discussion] 2023-05-16 21:47:18
>>titzer+sb1
> Corruption is a kind of decay that afflicts institutions.

It can be, but it's often the project of a substantial subset of the people creating institutions, so it's misleading and romanticizes the past to view it as "decay".

replies(1): >>titzer+Wx1
◧◩
85. verdve+4w1[view] [source] [discussion] 2023-05-16 22:01:02
>>abeppu+mu
why should we punish the model or the majority because some people might use a tool for bad things?
◧◩◪◨
86. verdve+Kw1[view] [source] [discussion] 2023-05-16 22:05:04
>>abeppu+Cs1
Regulation does slow innovation, but is often needed because those innovating will not account for externalities. This is why we have the Clean Air and Water Act.

The debate is really about how much and what type of regulation. It is of strategic importance that we do not let bad actors get the upper hand, but we also know that bad actors will rarely follow any of this regulation anyway. There is something to be said for regulating the application rather than the technology, as well as for realizing that large corporations have historically used regulatory capture to increase their moat.

Given it seems quite unlikely we will be able to stop prompt injections, what are we to do?

Provenance seems like a good option, but difficult to implement. It allows us to track who created what, so when someone does something bad, we can find and punish them.

There are analogies to be made with the Bill of Rights and gun laws. The gun analogy seems interesting because guns have to be registered, but criminals often won't register theirs, and the debate is quite polarized.

◧◩◪◨
87. staunt+tx1[view] [source] [discussion] 2023-05-16 22:09:57
>>bcrosb+wj1
> You're never going to be able to regulate what a person's computer can run.

You absolutely can. Maybe you can't effectively enforce that regulation but you can regulate and you can take measures that make violating the regulation impractical or risky for most people. By the way, the "crypto-wars" never ended and are ongoing all around the world (UK, EU, India, US...)

◧◩
88. chasd0+Qx1[view] [source] [discussion] 2023-05-16 22:11:48
>>abeppu+mu
> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

I can sell a webserver that gets used to host illegal content all day long. Should that be included? Where does the regulation end? I hate that the answer to any question seems to be just add more government.

replies(1): >>komali+PL1
◧◩◪◨⬒⬓
89. titzer+Wx1[view] [source] [discussion] 2023-05-16 22:12:38
>>dragon+Bt1
I am in no way suggesting that corruption is a new thing. It is an erosive force that has always operated throughout history. The amount of corruption in an institution tends to increase unless specifically rooted out. It goes up and down over time as institutions rise and fall or fade into obsolescence.
◧◩◪
90. mark_l+hy1[view] [source] [discussion] 2023-05-16 22:13:46
>>verdve+ca1
You are exactly correct.

I have sent correspondence about ten times to my Congressmen and Senators. I have received a good reply (although often just saying there is nothing that they can do) except for the one time I contacted Jon Kyl and unfortunately mentioned data about his campaign donations from Monsanto - I was writing about a bill he sponsored that I thought would have made it difficult for small farmers to survive economically and make community gardens difficult because of regulations. No response on that correspondence.

replies(3): >>StillB+7B1 >>verdve+oB1 >>anon84+jU1
◧◩◪
91. chasd0+7z1[view] [source] [discussion] 2023-05-16 22:19:06
>>alfalf+D21
> Congress can and should examine these issues

great, how does that apply to China or Europe in general? Or a group in Russia or somewhere else? Are you assuming every governing body on the surface of the earth is going to agree on the terms used to regulate AI? I think it's a fool's errand.

◧◩◪◨
92. shon+jA1[view] [source] [discussion] 2023-05-16 22:26:18
>>abeppu+Cs1
My argument isn't that regulation in general is bad. I'm an advocate of greater regulation in medicine, drugs in particular. But the cost of public exposure to potentially dangerous unregulated drugs is a bit different than trying to regulate or create a restrictive system around the development and deployment of AI.

AI is a very different problem space. With AI, even the big models easily fit on a micro SD card. You can carry around all of GPT4 and its supporting code on a thumb drive. You can transfer it wirelessly in under 5 minutes. It's quite different than drugs or conventional weapons or most other things from a practicality perspective when you really think about enforcing developmental regulation.

Also consider that criminals and other bad actors don't care about laws. The RIAA and MPAA have tried hard for 20+ years to stop piracy, and the DMCA and other laws have been built to support that, yet anyone reading this can easily download the latest blockbuster movie, even one still in theaters.

Even still, I'm not saying don't make laws or regulations on AI. I'm just saying we need to carefully consider what we're really trying to protect or prevent.

Also, I certainly believe that in this case, the widespread public adoption of AI tech has already driven education and adaptation that could not have been achieved otherwise. My mom understands that those pictures of Trump being chased by the cops are fake. Why? Because Stable Diffusion is on my home computer so I can make them too. I think this needs to continue.

93. johnal+6B1[view] [source] 2023-05-16 22:32:04
>>elil17+(OP)
I'm not American even, so I cannot, but what a good idea! I hope the various senators hear this message.
◧◩◪◨
94. StillB+7B1[view] [source] [discussion] 2023-05-16 22:32:18
>>mark_l+hy1
I'm 99% sure that the vast majority of federal congresspeople (who represent ~1 million people each) never see your emails/letters. You're largely speaking to the interns/etc. who work in the office, unless you happen to make a physical appointment and show up in person.

Those interns have a pile of form letters they send for about 99% of the (e)mail they get, and if you happen to catch their attention you might get more than the usual tick mark in a spreadsheet (for/against X). Which at best might be as much as a sentence or two in a weekly correspondence summary, which may/may not be read by your representative depending on how seriously they take their job.

replies(1): >>runsWp+nG1
◧◩◪◨⬒
95. enigmo+9B1[view] [source] [discussion] 2023-05-16 22:32:25
>>SoftTa+6h1
All the more reason to oppose regulation like this, since if it were in place the US would fall behind other countries without such regulation.
◧◩◪◨
96. verdve+oB1[view] [source] [discussion] 2023-05-16 22:33:40
>>mark_l+hy1
It applies more generally, if you want to change anyone's mind, don't attack or belittle them.

Everything has become so my team vs your team... you are bad because you think differently...

replies(1): >>komali+oK1
◧◩◪
97. Dalewy+FB1[view] [source] [discussion] 2023-05-16 22:35:45
>>alfalf+D21
I fear the humans engaging in such nefarious activities far more than some blob of code being used by humans engaging in such nefarious activities.

Likewise for activities that aren't nefarious too. Whatever fears that could be placed on blobs of code like "AI", are far more merited being placed on humans.

◧◩◪
98. rlytho+DC1[view] [source] [discussion] 2023-05-16 22:43:00
>>verdve+ca1
The way is not emails that some office assistant deletes when they do not align with the already chosen path forward; they just need cherry-picked support to leverage to manufacture consent.
99. rlytho+VC1[view] [source] 2023-05-16 22:44:39
>>elil17+(OP)
So you sent a letter saying “Mr. Congress, save my job that is putting others’ jobs at risk.”

You think voice actors and writers are not saying the same?

When do we accept capitalism as we know it is just a bullshit hallucination we grew up with? It’s no more an immutable feature of reality than a religion?

I don’t owe propping up some rich person’s figurative identity, or yours for that matter.

◧◩◪◨⬒
100. runsWp+nG1[view] [source] [discussion] 2023-05-16 23:10:08
>>StillB+7B1
If you get the eyes of the intern it can still help. They brief the senator/congressman, work on bills, etc.
101. cabaal+hK1[view] [source] 2023-05-16 23:33:28
>>elil17+(OP)
Sending this to my senator would just notify her of what company she should reach out to for a campaign contribution.
◧◩◪◨⬒
102. komali+oK1[view] [source] [discussion] 2023-05-16 23:33:54
>>verdve+oB1
Right so the most effective way to influence your politician is to disrupt their life, because they belittle their constituents' existence every day, by completely ignoring them and often working directly against their interests, unless they can further their own political goals.

In places like the usa I don't think politicians should expect privacy or peace. They have so much power compared to the citizen and they so rarely further the interests of the general population in good faith.

Given how they treat you, it's best to abandon politeness (which only helps them further belittle your meaninglessness in their decision making) and put a crowd in front of their house, accost them at restaurants, and find other ways of reminding them how accessible and functionally answerable they are to the people they're supposed to serve.

replies(1): >>selimt+Ec2
◧◩◪
103. candio+9L1[view] [source] [discussion] 2023-05-16 23:38:54
>>runarb+uZ
AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact so much that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it. AI is "loose". Oh and I don't know if you noticed, governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with timely defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days) and this is the normal situation now, it's not a New Year's Eve or even Friday evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it takes to fix it. And it can be fixed, I mean at this point you'd have to give physicians and nurses a 50% rise and double the number employed and 10x the number in training.

Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, a direct violation of your rights in most states, for the next 10 years minimum, but probably longer.

replies(1): >>runarb+ZO1
◧◩◪◨
104. komali+fL1[view] [source] [discussion] 2023-05-16 23:39:33
>>abeppu+Cs1
With the pharma example, what if we as a society circumvented the issue by not having closed source medicine? If the means to produce aspirin, including ingredients, methodology, QA, etc, were publicly available, what would that look like?

I met some biohackers at defcon that took this perspective, a sort of "open source but for medicine" ideology. I see the dangers of a massively uneducated population trying to 3d print aspirin poisoning themselves, but they already do that with horse paste so I'm not sure it's a new issue.

◧◩◪
105. komali+PL1[view] [source] [discussion] 2023-05-16 23:43:14
>>chasd0+Qx1
Just because there's a conversation about adding more government doesn't mean people are seeking a totalitarian police state. Seems quite the opposite for many of these commenters supporting regulation in fact.

Similarly it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or anarchist perhaps) utopia.

◧◩◪◨
106. runarb+ZO1[view] [source] [discussion] 2023-05-17 00:03:55
>>candio+9L1
Post hoc consumer protection is actually quite common. Just think how long after cars entered the market before they were regulated. Now we have fuel standards, lead bans, seat belts, crash tests etc. Even today we are still adding consumer protection to stuff like airline travel and medicine, even though commercial airliners and laboratory made drugs have been around for almost a century.
◧◩◪◨⬒⬓
107. menset+IR1[view] [source] [discussion] 2023-05-17 00:24:08
>>johnny+jn1
America could afford universal healthcare, but it would require convincing people to pay much higher taxes.
◧◩◪◨
108. anon84+jU1[view] [source] [discussion] 2023-05-17 00:42:30
>>mark_l+hy1
Well, it's not like getting a response means anything anyway. The contents of the response has no correlation with their future behavior.

Politicians just know that it's better to be nice to people who seem to like you or are engaged with the system, since they want to keep getting your vote. If not then the person isn't worth your time.

◧◩
109. mr_toa+DV1[view] [source] [discussion] 2023-05-17 00:51:22
>>abeppu+mu
> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.

What most people are concerned about is AI that performs too well.

◧◩◪◨⬒
110. pizza+j52[view] [source] [discussion] 2023-05-17 02:17:37
>>DirkH+Nk1
I can get your example about eugenics. I get that the worry is that it would become pervasive due to social pressure and make doing it the dominant position. And that this would passively, gradually strip personhood away from those who didn’t receive it. There’s a tongue-in-cheek conversation to be had about how people already choose their mating partners this way and making it truly actually outright illegal might not really reflect the real processes of reality, but that’s a tad too cheeky perhaps.

But even then, that’s a linear diffusion- one person, one body mod. I guess you could say that their descendants would proliferate and multiply so the alteration slowly grows exponentially over the generations.. but the FUD I hear from AI decelerationists is that it would be an explosive diffusion of harms, like, as soon as the day after tomorrow. One architect, up to billions of victims, allegedly. Not that I think it’s unwise to be compelled to precaution with new and mighty technologies, but what is it that some people are so worried about that they’re willing to ban all research, and choke all the good that has come from them, already? Maybe it’s just a symptom of the underlying growing mistrust in the social contract..

replies(1): >>DirkH+lo2
◧◩
111. 13of40+272[view] [source] [discussion] 2023-05-17 02:36:44
>>abeppu+mu
> revenge porn

I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying things illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.

replies(1): >>abeppu+ka2
◧◩◪◨⬒
112. mLuby+u72[view] [source] [discussion] 2023-05-17 02:40:48
>>ptsnev+GZ
Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.

replies(1): >>ptsnev+s63
◧◩
113. 8note+C72[view] [source] [discussion] 2023-05-17 02:42:25
>>abeppu+mu
I don't think AI regulation is the right tool to combat revenge porn?

The right one is to grant people rights over their likeness, so you could use something more like copyright law

Even if it's a real recording, you should still have control over it

◧◩
114. peripi+F92[view] [source] [discussion] 2023-05-17 03:05:33
>>deping+b91
Maybe I'm naive, but it isn't clear to me that this is rent-seeking behavior by OpenAI.
◧◩◪
115. abeppu+ka2[view] [source] [discussion] 2023-05-17 03:11:58
>>13of40+272
I have the right to use my camera to film adult content. I do not have the right to open a theater which shows porn to any minor who pays for a ticket. It's perfectly legal for me to buy a gallon of gasoline, and bunch of finely powdered lead, and put them into the same container, creating gasoline with lead content. It is _not_ fine for me to run a filling station which sells leaded gasoline to motorists. You want to drink unpasteurized milk fresh from your cow? Cool. You want to sell unpasteurized milk to the public? Shakier ground.

I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.

◧◩
116. larati+Rb2[view] [source] [discussion] 2023-05-17 03:30:14
>>kubota+Va
If someone doesn't agree with this, regulate what exactly?

Does scikit-learn count or we are just not going to bother defining what we mean by "AI"?

"AI" is whatever congress says it is? That is an absolutely terrible idea.

◧◩◪◨⬒⬓
117. selimt+Ec2[view] [source] [discussion] 2023-05-17 03:37:52
>>komali+oK1
In Pakistan, there was a provincial politician (Zulfiqar Mirza) who’s probably killed more than one person, who has been seen on TV going to police and bureaucrats saying “I’m a villain and you know it”
◧◩
118. kraf+io2[view] [source] [discussion] 2023-05-17 05:53:00
>>abeppu+mu
I like jobs too but what about the risks of AI? Some people I respect a lot are arguing - convincingly in my opinion - that this tech might just end human civilization. Should we roll the die on this?
◧◩◪◨⬒⬓
119. DirkH+lo2[view] [source] [discussion] 2023-05-17 05:53:32
>>pizza+j52
I mean, I imagine there are anti-genetic engineering FUD folks that go so far as to then say we should totally ban crispr cas9. I would caution against over-indexing on the take of only some AI decelerationists.

Totally agree we could be witnessing a growing mistrust in the social contract.

◧◩◪◨
120. ChatGT+mv2[view] [source] [discussion] 2023-05-17 07:10:18
>>bcrosb+wj1
I hate to say this because it would be shocking, but computers as we know them could be taken away from people.

Again, it sounds extreme, but in an extreme situation it could happen; it's not impossible.

◧◩
121. random+LU2[view] [source] [discussion] 2023-05-17 11:14:26
>>abeppu+mu
> For example, suppose next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make e.g. highly realistic revenge porn.

As I understand it, revenge porn is seen as being problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour as a product of witnessing material that is the concern.

◧◩
122. elil17+i63[view] [source] [discussion] 2023-05-17 12:41:02
>>simonh+Oj1
It’s less about how it could be hacked and more about why an AI would do that or have the capability to do it without any warning.
replies(1): >>simonh+j69
◧◩◪◨⬒⬓
123. ptsnev+s63[view] [source] [discussion] 2023-05-17 12:41:58
>>mLuby+u72
> Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

Exactly! A friend of mine who is into the communist ideology thinks that whichever society taps AI for productivity efficiency, and even policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.

I can definitely imagine LLMs based on political manifestos. A personal conversation with your senator at any time about any subject! That is the basic part though: the politician being augmented by the LLM.

The bad part is a party driven by an LLM or similar political model, where the human guy you see and elect is just a mouthpiece like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic and the LLM would provide the interface between the fundamental processing and the mouthpiece.

These developments will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.

◧◩◪
124. simonh+j69[view] [source] [discussion] 2023-05-19 03:44:35
>>elil17+i63
That’s the alignment problem. We don’t know what the actual goals of an AI trained neural net are. We know what criteria we trained it against, but it turns out that’s not at all the same thing.

I highly recommend Rob Miles channel on YouTube. Here’s a good one, but they’re all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.

https://youtu.be/hEUO6pjwFOo
