zlacker

[return to "Sam Altman, Greg Brockman and others to join Microsoft"]
1. 9dev+w9[view] [source] 2023-11-20 08:37:33
>>JimDab+(OP)
I don’t quite buy your Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms (or "shackles", as you phrased it). Microsoft can now proceed without the guidance of a council that actually has humanity's interests in mind, not only those of Microsoft shareholders. I don’t know whether all that caution will turn out to have been necessary, but I guess we’re just gleefully heading into whatever lies ahead without any concern whatsoever, and will learn it the hard way.

It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, by driving those they attempted to slow down into the arms of people with more money and fewer morals. Well.

◧◩
2. Terrif+Fn[view] [source] 2023-11-20 09:55:52
>>9dev+w9
> It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, by driving those they attempted to slow down into the arms of people with more money and fewer morals. Well.

If they hadn’t fired him, Altman would just have continued to run hog wild over their charter. In that sense they lose either way.

At least this way, OpenAI can continue to operate independently instead of being Microsoft’s zombie vassal company with their mole Altman pulling the strings.

◧◩◪
3. stingr+pu[view] [source] 2023-11-20 10:43:44
>>Terrif+Fn
How will they be able to continue doing their thing without money?

It seems like people forget that it was the investors’ money that made all this possible in the first place.

◧◩◪◨
4. jampek+yB[view] [source] 2023-11-20 11:32:04
>>stingr+pu
Developing new algorithms and methods doesn't necessarily, or even typically, take billions.
◧◩◪◨⬒
5. sebzim+cC[view] [source] 2023-11-20 11:36:37
>>jampek+yB
Yeah, but testing whether they work does; that's the problem.

There are probably loads of ways you can make language models with 100M parameters more efficient, but most of them won't scale to models with 100B parameters.

IIRC there is a bit of a phase transition that happens around 7B parameters where the distribution of activations changes qualitatively.

Anthropic have interpretability papers where their method does not work for 'small' models (with ~5B parameters) but works great for models with >50B parameters.

◧◩◪◨⬒⬓
6. kvetch+sM[view] [source] 2023-11-20 12:47:18
>>sebzim+cC
Deep NNs aren't the only path to AGI... They actually could be one of the worst paths.

For example, check out the proceedings of the AGI Conference, which has been going on for 16 years: https://www.agi-conference.org/

I have faith in Ilya. He's not going to allow this blunder to define his reputation.

He's going to go all in on research to find something to replace Transformers, leaving everyone else in the dust.

[go to top]