zlacker

18 comments
1. brooks+(OP)[view] [source] 2023-05-16 20:41:53
I'm not following this "good ideas must come from an ideologically pure source" thing.

Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

replies(6): >>briant+h3 >>samsta+F3 >>parent+cI >>dreamc+YM >>bcrosb+MR >>dTal+Sr1
2. briant+h3[view] [source] 2023-05-16 20:58:43
>>brooks+(OP)
> Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

The problem is that when only the entrenched industry players & legislators have a voice, there are many ideas & perspectives that are simply not heard or considered. Industrial groups have a long history of using regulations to entrench their positions & to stifle competition... creating a "barrier to entry," as they say. Going beyond that, industrial groups have shaped public perception & the regulatory apparatus to effectively create a company store, where the only solutions to some problems effectively (or sometimes legally) must go through a small set of large companies.

This concern is especially pertinent now, as these technologies are unprecedentedly disruptive to many industries & to private life. Using worst-case-scenario fear mongering as a justification to regulate the vast majority of usage that will never come close to these fears is disingenuous & almost always an overreach of governance.

replies(2): >>samsta+74 >>chii+yB
3. samsta+F3[view] [source] 2023-05-16 21:00:11
>>brooks+(OP)
Aside from who is saying them, the premise holds water.

AI is beyond borders, and thus regulating it is unenforceable in practice.

The top-minds-of-AI are a group that cannot be regulated.

-

AI isn't about the industries it shall disrupt; AI is the policy-makers it will expose.

THAT is what they are afraid of.

--

I have been able to run financial lenses over organizations - analyses that would have taken me weeks or months even with rudimentary BI - and find insights in minutes.

AI regulation right now, in this infancy, is about damage control.

---

It's the same as the legal weed market. Do you think Bain Capital just all of a sudden decided to jump into the market without setting up their spigot?

Do you think Halliburton under Cheney was able to set up its supply chains without Cheney as head of KBR/Halliburton/CIA/etc.?

Yeah, this is the same play; AI is going to be squashed until they can use it to profit off you.

Have you watched ANIME ever? Yeah... it's here now.

replies(2): >>turtle+ux >>throwa+DC
4. samsta+74[view] [source] [discussion] 2023-05-16 21:03:24
>>briant+h3
I can only say +1 - and I know how much HN hates that, but ^This.
5. turtle+ux[view] [source] [discussion] 2023-05-17 00:07:51
>>samsta+F3
Which anime(s)? If ANIME is the title, that's going to be hard to search.

Do you mean like Serial Experiments Lain?

replies(1): >>throwa+RC
6. chii+yB[view] [source] [discussion] 2023-05-17 00:37:48
>>briant+h3
> there are many ideas & perspectives that are simply not heard or considered.

Of course, but just because those ideas are unheard doesn't mean they are going to be any better.

An idea should stand on its own merits, and be evaluated objectively. It doesn't matter who was doing the proposing.

Also, the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to update laws that encoded a bad idea. Perhaps it isn't known that an idea is bad until after the fact, and the methods of democracy we have today aren't easily able to force updates to bad laws encoding bad ideas.

replies(1): >>briant+tI
7. throwa+DC[view] [source] [discussion] 2023-05-17 00:45:08
>>samsta+F3
This is a very interesting post. I don't understand this part: <<AI is the policy-makers it will expose>> Can you explain it a different way?

And hat tip to this comment:

    Have you watched ANIME ever? Yeah... it's here now.
The more I watch the original Ghost in the Shell, the more I think it has incredible foresight.
replies(2): >>samsta+0I2 >>samsta+pB3
8. throwa+RC[view] [source] [discussion] 2023-05-17 00:45:36
>>turtle+ux
One idea / suggestion: The original Ghost in the Shell.
9. parent+cI[view] [source] 2023-05-17 01:30:02
>>brooks+(OP)
The subtle difference between the original statement and yours:

Ideas that drive governing decisions should be globally good - meaning there should be more than just @sama espousing them.

replies(1): >>r_hood+u21
10. briant+tI[view] [source] [discussion] 2023-05-17 01:32:26
>>chii+yB
> of course, but just because those ideas are unheard, doesn't mean they are going to be any better.

It probably does mean the idea is better, at least for the person with that perspective. Too bad only a very few get a seat at the table to advocate for their own interests. It would be better if everyone had agency to advocate for their interests.

> Also the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to make updates to laws that encoded a bad idea

First, this is a hyped-up crisis where some people are claiming it will end humanity. There have been many doomsday predictions, & people scared by these predictions are effectively being scammed by those fomenting existential fear. It's interesting that the representatives of large pools of capital are suddenly existentially afraid when there is open source competition.

Second, once something is in the domain of government it will only get more bloated & more controlled by monied lobbyists. Legislatures controlled by lobbyists will never make it better, only worse. There have been so many temporary programs that continue to exist & expand, and so many bloated omnibus bills, too long to read, passed under some sort of "emergency". The government's tendency is to grow & to serve the interests of the corporations that pay the politicians. Fear is an effective tool to convince people to accept things against their interests.

11. dreamc+YM[view] [source] 2023-05-17 02:13:54
>>brooks+(OP)
The recent pattern of tech CEOs advocating for regulations that only they can obey has become so blatant that any tech CEO who advocates for regulation should be presumed guilty until proven innocent.
replies(1): >>brooks+B02
12. bcrosb+MR[view] [source] 2023-05-17 03:06:10
>>brooks+(OP)
Not when it comes to politics.

You'll be stuck in the muck while they're laughing their ass off all the way to the bank.

replies(1): >>jasonm+ES
13. jasonm+ES[view] [source] [discussion] 2023-05-17 03:15:22
>>bcrosb+MR
It doesn't even matter if "his heart is pure" ... Companies are not run that way.

We have lawyers.

14. r_hood+u21[view] [source] [discussion] 2023-05-17 05:08:01
>>parent+cI
You're defending an argument that is blatantly self-contradictory within the space of two sentences.

A) "anything he suggests should be categorically rejected because he’s just not in a position to be trusted."

B) "If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody."

These sentences directly follow each other and directly contradict each other. Logically, you can't categorically reject a conclusion because it is espoused by someone you dislike ("categorically" matters here; it means something like "treat it as a universal law") while at the same time saying you will accept that conclusion if it is arrived at by some other route.

"I will reject P if X proposes P, but will accept P if Y proposes P." is just poor reasoning.

replies(1): >>brooks+012
15. dTal+Sr1[view] [source] 2023-05-17 09:33:34
>>brooks+(OP)
I think what they are trying to say is that Sam Altman is very smart, but misaligned. If we assume that he is 1) sufficiently smart and 2) motivated to see OpenAI succeed, then his suggestions must be assumed to lead to a future where OpenAI is successful. If that future looks like it contradicts a future we want (for instance, user-controlled GPT-4 level AIs running locally on every machine), his suggestions should therefore be treated as reliably radioactive.
16. brooks+B02[view] [source] [discussion] 2023-05-17 13:51:44
>>dreamc+YM
Sure, go execute him for all I care.

My point was that an idea should not need attribution for you to know whether it's good or bad, for your own purposes. I can't imagine looking at a proposal and deciding whether to support or oppose it based on the author rather than the content.

If Altman is that smart and manipulative, all he has to do is advocate the opposite of what he wants and you'll be insisting that we must give him exactly what he wants, on principle. That's funny with kids but no way to run public policy.

17. brooks+012[view] [source] [discussion] 2023-05-17 13:53:29
>>r_hood+u21
More clearly said than I managed, yep.

But I suppose it comes down to priorities: if good policy is less important than contradicting X, that approach makes sense.

18. samsta+0I2[view] [source] [discussion] 2023-05-17 16:43:14
>>throwa+DC
ANIME predicted the exact corporate future...

Look at all the anime cyber cities...

It's not as high-tech as you may imagine, but the surveillance is there.

EDIT: your "company" is watching everything.

19. samsta+pB3[view] [source] [discussion] 2023-05-17 21:11:25
>>throwa+DC
> I don't understand this part: <<AI is the policy-makers it will expose>> Can you explain it a different way?

===

Policy makers do not understand what they are doing, and AI will expose that.
