zlacker

[parent] [thread] 17 comments
1. baby+(OP)[view] [source] 2023-11-22 18:02:54
Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.
replies(4): >>jejeyy+X >>hacker+Y1 >>mi_lk+P3 >>DirkH+3h
2. jejeyy+X[view] [source] 2023-11-22 18:06:36
>>baby+(OP)
Of course the employees are motivated by $$$ - is that even a question?
replies(1): >>Xelyne+YV1
3. hacker+Y1[view] [source] 2023-11-22 18:11:11
>>baby+(OP)
The large majority of people are motivated by $$$ (or fame), and if they all tell me otherwise, I know many of them are lying.
4. mi_lk+P3[view] [source] 2023-11-22 18:17:55
>>baby+(OP)
It's you who's being naive if you really think the majority of those 7xx employees care more about safe AGI than about their own equity upside.
replies(2): >>nh2342+l8 >>concor+Oh
◧◩
5. nh2342+l8[view] [source] [discussion] 2023-11-22 18:37:22
>>mi_lk+P3
Why would anyone care about safe AGI? It's vaporware.
replies(2): >>mecsre+na >>stillw+2e
◧◩◪
6. mecsre+na[view] [source] [discussion] 2023-11-22 18:44:23
>>nh2342+l8
Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

Lucky for us, this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every facet of our lives. So we're all safe here!

replies(1): >>supert+zi
◧◩◪
7. stillw+2e[view] [source] [discussion] 2023-11-22 18:59:17
>>nh2342+l8
Exactly what an OpenAI developer would understand. All the more reason to ride the grift that brought them this far.
8. DirkH+3h[view] [source] 2023-11-22 19:15:26
>>baby+(OP)
Assuming employees are not incentivized by $$$ here is the extraordinary claim; with this much money involved, it needs a pretty robust argument to show money isn't playing a major factor.
◧◩
9. concor+Oh[view] [source] [discussion] 2023-11-22 19:18:21
>>mi_lk+P3
Uh, I reckon many do. Money is easy to come by for that type of person, and avoiding killing everyone matters to them.
◧◩◪◨
10. supert+zi[view] [source] [discussion] 2023-11-22 19:22:30
>>mecsre+na
> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.

How can you even design safety into something if it doesn't exist yet? You'd have ended up with a plane where everyone sat on the wings with a parachute strapped on if you had designed planes safety-first instead of letting them evolve naturally and regulating the resulting designs.

replies(4): >>FartyM+yk >>bcrosb+wn >>mecsre+Lw >>jonono+pQ2
◧◩◪◨⬒
11. FartyM+yk[view] [source] [discussion] 2023-11-22 19:31:16
>>supert+zi
The difference between unsafe AGI and an unsafe plane or car is that planes and cars are not existential risks.
replies(1): >>optymi+Nv1
◧◩◪◨⬒
12. bcrosb+wn[view] [source] [discussion] 2023-11-22 19:45:40
>>supert+zi
The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here, then safety efforts and the federal government need to catch up. There are already commercial offerings that any random internet user can use.

replies(1): >>supert+Po
◧◩◪◨⬒⬓
13. supert+Po[view] [source] [discussion] 2023-11-22 19:51:31
>>bcrosb+wn
I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.

There should be regulations on existing products (and on similar products released later), because at that point they exist and you know what you're applying regulations to.

◧◩◪◨⬒
14. mecsre+Lw[view] [source] [discussion] 2023-11-22 20:33:22
>>supert+zi
I understand where you're coming from, and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions, but with this strategy you have to make an unsafe version first. If you had gotten into one of the first airplanes ever made, the likelihood of crashing would have been pretty high.

At some point, our try-it-until-it-works approach will bite us. Consider the calculations done to determine whether fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially, we're going to run into that situation more and more frequently. Regardless of whether you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced?

◧◩◪◨⬒⬓
15. optymi+Nv1[view] [source] [discussion] 2023-11-23 02:30:43
>>FartyM+yk
How is it an 'existential risk'? Its body of knowledge is publicly available, no?
replies(1): >>FartyM+kh2
◧◩
16. Xelyne+YV1[view] [source] [discussion] 2023-11-23 06:53:03
>>jejeyy+X
No, it's just counter to the idea that it was "employee power" that brought Sam back.

It was capital and the pursuit of more of it.

It always is.

◧◩◪◨⬒⬓⬔
17. FartyM+kh2[view] [source] [discussion] 2023-11-23 10:48:45
>>optymi+Nv1
What do you mean by "its"? There isn't any AGI yet. ChatGPT is far from that level.
◧◩◪◨⬒
18. jonono+pQ2[view] [source] [discussion] 2023-11-23 15:16:53
>>supert+zi
The principles, best practices, and tools of safety engineering can be applied to new projects. We have decades of experience now. I'm not saying it will be perfect on the first try, or that we know everything that's needed. But the novel aspects of AI are not an excuse not to try.