zlacker

[parent] [thread] 16 comments
1. 015a+(OP)[view] [source] 2023-11-19 00:19:27
And any talented engineer or scientist who actually wants to build safe AGI in an organization that isn't obsessed with boring B2B SaaS would align with Ilya. See, there are two sides to this. Sam isn't a god, despite what the media makes him out to be; none of them are.
replies(8): >>andy99+o6 >>branda+B6 >>fisf+U7 >>wwtrv+4g >>foota+yi >>m3kw9+yB >>rvba+kp1 >>laurel+cq1
2. andy99+o6[view] [source] 2023-11-19 00:59:48
>>015a+(OP)
AGI has nothing to do with transformers. It's a hypothetical towards which there has been no progress other than finding things that didn't work. It's a cool thing to work on, but it's so different from what the popular version of OpenAI is, and it has such different timescales and economics... if some vestigial OpenAI wants to work on that, cool. There is definitely also room in the market for the current OpenAI centered around GPT-x et al., even if some people consider SaaS beneath them, and I hope they (OpenAI) find a way to continue with that mission.
replies(1): >>015a+KF
3. branda+B6[view] [source] 2023-11-19 01:01:07
>>015a+(OP)
The problem is it already became the other thing in a very impactful way.

If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.

replies(1): >>015a+mI
4. fisf+U7[view] [source] 2023-11-19 01:10:53
>>015a+(OP)
Sure, but without funding and/or massive support from MS this is not going to happen.
5. wwtrv+4g[view] [source] 2023-11-19 02:09:16
>>015a+(OP)
Would those talented engineers or scientists be content with significantly lower compensation and generally significantly fewer resources to work with? However good their intentions might be, this probably won't make them too attractive to future investors, and antagonizing MS doesn't seem like a great idea.

OpenAI is far from being self-sustainable, and without significant external investment they'll probably soon be overtaken by someone else.

replies(1): >>015a+vK
6. foota+yi[view] [source] 2023-11-19 02:22:14
>>015a+(OP)
I don't think the people who want to move slowly and do research are necessarily working at OpenAI.
7. m3kw9+yB[view] [source] 2023-11-19 04:32:25
>>015a+(OP)
It’s just an illusion that Sam is trying to be unsafe about it; it’s a scare tactic of sorts to get what they want. For example: regulations, and now, internally, power. It’s all BS, man; this "AI will end the world" stuff is pushed for an agenda and you all are eating it up.
◧◩
8. 015a+KF[view] [source] [discussion] 2023-11-19 05:06:30
>>andy99+o6
It's been, like, two years, dude. This mindset is entirely why any organization which has a chance at inventing/discovering ASI can't be for-profit and needs to be run by scientists. You've got TikTok brain. Google won't be able to do it, because they're too concerned about image, and they've also got a bad case of corpo TikTok brain. Mistral and Anthropic won't be able to do it, because they have VC expectations to meet. Sam's next venture, if he chooses to walk that path, also won't, for the same reason. Maybe Meta? Do you want them being the first to ASI?

If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement, and it is the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence, on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.

My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be its second; if Satya influences Sam back into the boardroom, then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.

The hard reality that everyone needs to accept at this point is that OpenAI is probably finished. Unless they made some massive breakthrough a few weeks ago, which Sam did hint at three days ago; that should be the last hope we all hold on to, that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best-case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.

replies(1): >>mjburg+BZ
◧◩
9. 015a+mI[view] [source] [discussion] 2023-11-19 05:30:55
>>branda+B6
And if Microsoft had major concerns about OpenAI's board and governance, those concerns should have been voiced and addressed before they invested. Yet here we are, slaves to our past decisions.
◧◩
10. 015a+vK[view] [source] [discussion] 2023-11-19 05:53:22
>>wwtrv+4g
I don't know, on a lot of those questions. I tend to think that there was more mission and ideology at OAI than at most companies, and that's a very powerful motivational force.

Here's something I feel higher confidence in, but still don't know: it's not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) technology always evolves forward, and (2) more money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at them and get better results.

And here's something I have even higher confidence in: "being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.

replies(1): >>tjsrjh+cl1
◧◩◪
11. mjburg+BZ[view] [source] [discussion] 2023-11-19 08:31:41
>>015a+KF
The problem is that this "AGI research group" is staffed by people who build statistical models, call them AI, and are delusional enough to think this is a route to general intelligence.

There is no alternative, if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm' -- the only thing that can come of it is a commercialised trinket.

So either the whole org starts staffing top level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.

If Sam is the only one interested in the app, it's because he's the only sane guy in the room.

replies(1): >>sudosy+6e1
◧◩◪◨
12. sudosy+6e1[view] [source] [discussion] 2023-11-19 10:48:11
>>mjburg+BZ
There is little evidence that conditional statistical models can never be a route to AGI. There's limited evidence that they can, but far less that they can't.

You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
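
A minimal sketch of what that predictor-corrector loop looks like (tabular TD(0); the chain environment, reward placement, and learning constants are all assumed for illustration):

    # Tabular TD(0) value learning on a 5-state chain: 0 -> 1 -> ... -> 4.
    # The prediction error `delta` is the "corrector" term; it plays the
    # role the dopamine literature ascribes to reward-prediction error.
    n_states = 5
    alpha, gamma = 0.1, 0.9            # learning rate, discount (assumed)
    V = [0.0] * n_states               # learned value predictions

    for _ in range(500):               # episodes
        s = 0
        while s < n_states - 1:
            s2 = s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the end
            delta = r + gamma * V[s2] - V[s]         # prediction error
            V[s] += alpha * delta                    # correct the prediction
            s = s2

    print([round(v, 2) for v in V])    # ~[0.73, 0.81, 0.9, 1.0, 0.0]

The whole loop is just a conditional expectation being nudged toward observed outcomes.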

replies(1): >>mjburg+Mf1
◧◩◪◨⬒
13. mjburg+Mf1[view] [source] [discussion] 2023-11-19 11:04:18
>>sudosy+6e1
I am well aware of the literature in the area. 'Trained by reinforcement' in the case of animals includes direct causal contact with the environment, as well as sensory-motor adaption, and organic growth.

The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.

I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.
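
To make the causation point concrete, here is a toy sketch (all probabilities are made-up values): two worlds with different causal structures yield similar observational P(A|B), and only an intervention tells them apart:

    import random
    random.seed(0)

    def sample(world, do_b=None):
        c = random.random() < 0.5                    # hidden variable
        if world == "b_causes_a":
            b = (random.random() < 0.5) if do_b is None else do_b
            a = random.random() < (0.9 if b else 0.1)   # B causes A
        else:                                        # common-cause world
            b = (random.random() < (0.9 if c else 0.1)) if do_b is None else do_b
            a = random.random() < (0.9 if c else 0.1)   # C causes A; B is inert
        return a, b

    def p_a(world, do_b=None, n=100_000):
        samples = [sample(world, do_b) for _ in range(n)]
        if do_b is None:                             # observational P(A | B=1)
            return sum(a for a, b in samples if b) / sum(1 for _, b in samples if b)
        return sum(a for a, _ in samples) / n        # interventional, do(B=1)

    for world in ("b_causes_a", "common_cause"):
        print(world, round(p_a(world), 2), round(p_a(world, do_b=True), 2))
    # Observing B=1 predicts A in both worlds, but forcing B=1
    # raises A only in the world where B actually causes A.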

The idea that P(A|B) is even relevant to AGI is a sign of a fundamental basic lack of curiosity beyond what is on-trend in computer science.

We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.
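
In the simplest case that encoding is just a conditional frequency table (toy data, purely illustrative):

    from collections import Counter, defaultdict

    # A lookup of P(A|Q) "solves" any task that is a fixed mapping
    # from prompts to replies -- memorisation, not understanding.
    pairs = [("2+2?", "4"), ("capital of France?", "Paris"), ("2+2?", "4")]

    counts = defaultdict(Counter)
    for q, a in pairs:
        counts[q][a] += 1

    def p_a_given_q(q, a):
        total = sum(counts[q].values())
        return counts[q][a] / total if total else 0.0

    print(p_a_given_q("2+2?", "4"))    # 1.0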

But who cares. The burden-of-proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counter-factual reasoning, deduction, abduction, sensory-motor adaption, etc.

The gap between what has been provided and this burden-of-proof is laughable.

◧◩◪
14. tjsrjh+cl1[view] [source] [discussion] 2023-11-19 11:59:26
>>015a+vK
Citation needed on the ideology being a powerful motivational force in this context. People who think they're doing groundbreaking work that'll impact the future of humanity are going to be pretty ideologically motivated either way, regardless of whether they're also drinking the extra flavor from the mission statement's Kool-Aid.
15. rvba+kp1[view] [source] 2023-11-19 12:33:35
>>015a+(OP)
Where do you go if you want to build an unsafe AGI with no morals? Military? China? Russia?

(I am aware that conceptually it can lead to a skynet scenario)

16. laurel+cq1[view] [source] 2023-11-19 12:41:13
>>015a+(OP)
There are significantly fewer people who would want to work with Ilya than there are who would want to work with Sam/Greg.

If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping, and how they will make it accessible to others, then maybe there would be more support.

replies(1): >>incogn+UD1
◧◩
17. incogn+UD1[view] [source] [discussion] 2023-11-19 14:36:14
>>laurel+cq1
Wrong. Ilya is the goose that laid the golden egg. Do you think other orgs don’t have engineers and data scientists?