zlacker

4 comments
1. c_cran+(OP) 2023-07-06 12:10:17
I imagine some corporations might toy with the idea of letting an LLM or AI manage operations, but this would still be under some person's oversight. AIs don't have the legal means to own property.
replies(1): >>trasht+xA2
2. trasht+xA2 2023-07-06 23:06:35
>>c_cran+(OP)
There would probably be a board. But a company run by a superintelligent AI would quickly become so complex that the inner workings of the company would become a black box to the board.

And as long as the results improve year over year, they would have little incentive to make changes.

replies(1): >>c_cran+fs4
3. c_cran+fs4 2023-07-07 13:48:16
>>trasht+xA2
>But a company run by a superintelligent AI would quickly become so complex that the inner workings of the company would become a black box to the board.

The AI is still doing a real-world job: allocating resources, hiring and firing people, and so on. That job is not so complex as to be opaque. When an AI plays chess, the overall strategy might not be clear, but the moves it makes are still plain to see.

replies(1): >>trasht+p3c
4. trasht+p3c 2023-07-10 01:23:28
>>c_cran+fs4
> The AI is still doing a real-world job: allocating resources, hiring and firing people, and so on.

When we have superintelligence, the AI is not going to hire a lot of people, only fire them.

And I fully expect that the technical platform it runs on, 50 years after the last human engineer is fired, will be as incomprehensible to humans as the complete codebase of Google is to a regular 10-year-old, at best.

The "code" it would be running might include some code written in a human readable programming language, but would probably include A LOT of logic hidden deep inside neural networks with parameter spaces many orders of magnitude greater than GPT-4.

And on the hardware side, the situation would be similar. Chips created by superintelligent AGIs are likely to be just as difficult to reverse-engineer as the neural networks that created them.

replies(1): >>c_cran+SXi
5. c_cran+SXi 2023-07-11 21:31:31
>>trasht+p3c
The outputs LLMs produce have never been the opaque part; it's the inputs and inner workings that went into making them. Some things are also inherently harder to obfuscate: a mass firing is obviously a cost-cutting move.