zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. MattHe+bx[view] [source] 2023-11-22 10:10:09
>>staran+(OP)
I was hopeful for a private-industry approach to AI safety, but that looks unlikely now, and given the slow pace of state investment in public AI R&D, every other approach to AI safety looks unlikely too.

Safety research on toy models will continue to yield results, but the industry expectation appears to be that emergent properties put a low ceiling on what can be learned about safety without working on cutting-edge models.

Altman touted the governance structure of OpenAI as a mechanism for ensuring the organisation's prioritisation of safety, but the reports of internal reallocation away from safety towards keeping ChatGPT running under load concern me. Now that the board has demonstrated it was technically capable but insufficiently powerful to keep these interests in line, it is unclear how any safety-oriented organisation, including Anthropic, can avoid the accelerationist influence of funders.

◧◩
2. sgt101+Pz1[view] [source] 2023-11-22 16:23:56
>>MattHe+bx
There are no emergent properties, just a linear increase in knowledge that can be retrieved.

- It can't plan

- It can't do arithmetic

- It can't reason

- It can approximately retrieve knowledge with a natural language query (there are some issues with this, but it's very good)

- It can encode data into natural languages and other modalities

I'm not worried about it; I am worried about how badly people have misunderstood what it can do and then tried to use it for things that matter.

But I'm not surprised.

◧◩◪
3. quickt+MA3[view] [source] 2023-11-23 03:53:25
>>sgt101+Pz1
I don't think AI safetyists are worried about any model created so far. But if we went from letter-soup "ooh look, that almost seems like a sentence, SOTA!" to GPT-4 in 20 years, where will we go in the next 20? And at what point do they become powerful enough to worry about? Let alone all the crazy ways people are trying to augment them with RAG, function calls, getting them to run on less compute, and so on.
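For anyone unfamiliar with what "augment them with RAG" means in practice: the model itself isn't changed, you just bolt a retrieval step onto the prompt. A toy sketch of the idea (call_llm is a stand-in for whatever model API you use, and the keyword-overlap scoring is a placeholder for a real embedding search):

    # Toy sketch of retrieval-augmented generation (RAG).
    # call_llm is a placeholder for a real completion API;
    # keyword overlap stands in for a real embedding/vector search.

    def retrieve(query, documents, k=2):
        """Return the k documents sharing the most words with the query."""
        q_words = set(query.lower().split())
        return sorted(documents,
                      key=lambda d: len(q_words & set(d.lower().split())),
                      reverse=True)[:k]

    def call_llm(prompt):
        # Placeholder: swap in an actual model call here.
        return f"[model answer based on a prompt of {len(prompt)} chars]"

    def rag_answer(query, documents):
        context = "\n".join(retrieve(query, documents))
        prompt = f"Use only this context:\n{context}\n\nQuestion: {query}"
        return call_llm(prompt)

    docs = ["The board fired the CEO on Friday.",
            "An agreement in principle was reached for his return."]
    print(rag_answer("Did the CEO return?", docs))

None of that makes the model smarter, it just gives it more relevant text to condition on, which is exactly why people keep finding new ways to wire it into things.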

Also, being better than humans at everything is not a prerequisite for danger. A scary moment is probably when one can look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit as a worm, especially if it can do that on everyday hardware rather than top-end GPUs (either because the algorithms are made more efficient, or because every iPhone has a tensor unit).
