1. rockem+(OP) 2023-05-17 03:37:20
Since no one here watched the actual hearings, I feel like I should point out that he said nothing at the level of what they've created today should be subject to any "licensing".

If you did watch the hearings, it would have been pretty clear that the goal of any such licensing would be to prevent a runaway AI scenario, or AGI from being created unknowingly. It's obvious that some sort of agency would need to be set up far in advance of when runaway AI becomes possible. Regulatory capture was also specifically brought up as a potential downside.

This article is just pushing a cynical narrative for clicks and y'all are eating it up.

replies(2): >>quailf+39 >>iNic+Pz
2. quailf+39 2023-05-17 05:21:20
>>rockem+(OP)
I watched the hearing but didn't read the article, and I was going to say the same thing: these comments are far more vitriolic than I would have expected.
3. iNic+Pz 2023-05-17 09:52:25
>>rockem+(OP)
Having also watched the hearing, I was pretty surprised at all the negativity in the comments. My view of Sam Altman has improved after watching the hearings. He seems to sincerely believe that he is doing the right thing. He owns zero equity in OpenAI and has no financial incentive. Of course, if you don't buy the argument that AI might be dangerous, then this all seems like theatrics. But there are clear threats with the existing models [1], and I believe there will be even greater threats in the future (see Superintelligence, The Precipice, or Human Compatible). See also this [2], and this master list of failures [3].

[1]: https://arxiv.org/abs/2305.06972
[2]: https://arxiv.org/abs/2210.01790
[3]: https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3Hs...
