zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it takes the betrayal of OpenAI's foundational claim, one still brazenly present in its name, out of the obscurity of years of HN comments and into the mainstream public light.

OpenAI has achieved marvellous things, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of holding the advantage in the field they are leading.

◧◩
2. adamsm+r61[view] [source] 2023-03-01 16:30:49
>>mellos+pe
>This seems like an important article, if for no other reason than that it takes the betrayal of OpenAI's foundational claim, one still brazenly present in its name, out of the obscurity of years of HN comments and into the mainstream public light.

The thing is, it's probably a good thing. The "Open" part of OpenAI was always an almost suicidally bad mistake. The point of starting the company was to try to reduce the p(doom) of creating an AGI, and it's almost certain that the more people who have access to powerful and potentially dangerous tech, the higher p(doom) gets. I think OpenAI is one of the most dangerous organizations on the planet, and becoming more closed reduces that danger slightly.

◧◩◪
3. fragsw+291[view] [source] 2023-03-01 16:39:34
>>adamsm+r61
I'm curious: what do you think makes them dangerous?
◧◩◪◨
4. therea+ve1[view] [source] 2023-03-01 16:59:20
>>fragsw+291
For a long-form treatment, I'd suggest the Cold Takes blog; its author is a very systematic thinker and has been focusing on AGI risk recently. https://www.cold-takes.com
◧◩◪◨⬒
5. fragsw+Si1[view] [source] 2023-03-01 17:15:20
>>therea+ve1
I see a lot of "we don't know how it works therefore it could destroy all of us" but that sounds really handwavy to me. I want to see some concrete examples of how it's dangerous.
◧◩◪◨⬒⬓
6. pixl97+8X1[view] [source] 2023-03-01 20:00:53
>>fragsw+Si1
Are people dangerous? Yes or no question.

Do we have shitloads of regulations on what people can or cannot do? Yes or no question.

◧◩◪◨⬒⬓⬔
7. fragsw+TY1[view] [source] 2023-03-01 20:10:45
>>pixl97+8X1
Sometimes yes, and sometimes yes.

I can be convinced, I just want to see the arguments.

◧◩◪◨⬒⬓⬔⧯
8. pixl97+Ud4[view] [source] 2023-03-02 14:42:29
>>fragsw+TY1
The best argument I can make is: don't come at the issue with black-and-white thinking. I try to look at it more in terms of 'what's the probability of it, and what if it happens'.

Myself, I think the probability of a human-level-capable intelligence/intellect/reasoning AI is near 100% in the next decade or so, maybe two decades if things slow down. I'm talking about taking information from a wide range of sensors, being able to use it in short-term thinking, and then being able to reincorporate it into long-term memory as humans do.

So that gets us human-level AGI, but why would it be capped there? Science, as far as I know, hasn't come up with a theorem that says once you're as smart as a human you hit some limit and it doesn't get any better than that. So now you have to ask: by producing AGI in computer form, have we actually created an ASI? A machine with vast reasoning capability, but also submicrosecond access to a vast array of different data sources, for example every police camera. How many companies will allow such an AI into their transaction systems for optimization? Will government-controlled AIs come with laws saying your data must be accessible to and monitored by the AI? Already you can see how this can spiral into dystopia...

But that is not the limit. If AI can learn and reason like a human, and humans can build and test smarter and smarter machines, why can't the AI? Don't think of AI only as the software running on a chip somewhere; think of it also as every peripheral controlled by that AI. If we can have an AI create another AI (hardware + software), the idea of AI alignment is gone (and it's already pretty busted as it is).

Anyway, I've already written half a book here and haven't even touched on any number of the other arguments. Maybe reading something by Ray Kurzweil (pie in the sky, but interestingly we're following that trend) or Nick Bostrom would be a good place to start, just to make sure you haven't missed the arguments that are already out there.

Also, if you prefer video, check out Robert Miles' YouTube channel.

◧◩◪◨⬒⬓⬔⧯▣
9. mrtran+xv4[view] [source] 2023-03-02 15:59:17
>>pixl97+Ud4
> Myself, I think the probability of a human-level-capable intelligence/intellect/reasoning AI is near 100% in the next decade or so, maybe two decades if things slow down.

How is this supposed to work, given that we're reaching the limit of how small transistors can get? Fully simulating a human brain would take a vast amount of computing power, far more than is needed to train a large language model. Maybe we don't need to fully simulate a brain for human-level artificial intelligence, but even at a tenth of the brain that's still a giant, inaccessible amount of compute.
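
To put "a vast amount of computing power" into rough numbers, here is a back-of-envelope sketch; every figure in it is an assumption (ballpark synapse counts and average firing rates, plus a guessed ops-per-synaptic-event), not anything established in this thread:

    # Naive brain-simulation compute estimate; all inputs are rough assumptions.
    def flops_per_second(synapses, firing_hz, ops_per_event):
        return synapses * firing_hz * ops_per_event

    low  = flops_per_second(1e14, 0.1, 10)    # ~1e14 FLOP/s
    high = flops_per_second(1e15, 10, 1000)   # ~1e19 FLOP/s
    print(f"naive estimate range: {low:.0e} to {high:.0e} FLOP/s")

Depending on the assumptions, the answer spans about five orders of magnitude: the low end is within reach of existing accelerators, while the high end is around or beyond today's largest machines, which is roughly why such estimates get cited on both sides of this argument.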

For general, reason-capable AI we'll need a fundamentally different approach to computing, and there's nothing out there that'll be production-ready in a decade.
