zlacker

[parent] [thread] 14 comments
1. dkjaud+(OP)[view] [source] 2023-11-20 18:01:01
> There's no way to read any of this other than that the entire operation is a clown show.

In that reading, Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision: customers, employees, and the board.

replies(3): >>lambic+F3 >>topspi+H7 >>sebzim+Z8
2. lambic+F3[view] [source] 2023-11-20 18:14:12
>>dkjaud+(OP)
I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which was obviously the case here, since everyone except the clowns was caught completely off-guard).
replies(2): >>Terrif+7a >>Jeremy+Ad
3. topspi+H7[view] [source] 2023-11-20 18:28:52
>>dkjaud+(OP)
> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...

replies(1): >>dkjaud+Aa
4. sebzim+Z8[view] [source] 2023-11-20 18:33:10
>>dkjaud+(OP)
He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.
5. Terrif+7a[view] [source] [discussion] 2023-11-20 18:36:52
>>lambic+F3
Can't help but feel it was Altman that struck first. MS effectively Nokia-ed OpenAI - i.e. buy out executives within the organization and have them push the organization towards making deals with MS, giving MS a measure of control over said organization - even if not in writing, they achieve some political control.

Bought-out executives eventually join MS after their work is done, or, in this case, they get fired.

A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.

6. dkjaud+Aa[view] [source] [discussion] 2023-11-20 18:38:41
>>topspi+H7
AGI hype is a powerful hallucinogen, and some are smoking way too much of it.
replies(1): >>93po+Sr
7. Jeremy+Ad[view] [source] [discussion] 2023-11-20 18:48:32
>>lambic+F3
Consider that Altman was a founder of OpenAI and has been the only consistent member of the board for its entire run.

The board as currently constituted isn't some random group of people - Altman was (or should have been) involved in the selection of the current members. To the extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.

And of course this is all assuming that Altman is "right" in this conflict and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear from this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.

replies(2): >>93po+hr >>random+nm1
8. 93po+hr[view] [source] [discussion] 2023-11-20 19:40:03
>>Jeremy+Ad
There's a LOT that goes into picking board members beyond competency and whether you actually want them there. They're likely there for political reasons, and Sam didn't care because he didn't see it impacting him at all - until they got stupid and thought they actually held some leverage.
9. 93po+Sr[view] [source] [discussion] 2023-11-20 19:42:38
>>dkjaud+Aa
I think it’s overly simplistic to make blanket statements like this unless you’re on the bleeding edge of the work in this industry and have some sort of insight that literally no one else does.
replies(1): >>dkjaud+Bw
10. dkjaud+Bw[view] [source] [discussion] 2023-11-20 20:00:19
>>93po+Sr
I can be on the bleeding edge of whatever you like and be no closer to having any insight into AGI than anyone else. Anyone who claims they do should be treated with suspicion (Altman is a fine example here).

There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.

replies(1): >>coder-+Ic1
11. coder-+Ic1[view] [source] [discussion] 2023-11-20 23:11:46
>>dkjaud+Bw
You don't think AGI is feasible? GPT is already useful. Scaling reliably and predictably yields increases in capabilities. As its capabilities increase, it becomes more general. Multimodal models and the use of tools further increase generality. And that's within the current transformer architecture paradigm; once we start reasonably speculating, there are a lot of avenues to further increase capabilities, e.g. a better architecture than transformers, better architectures in general, better/more GPUs, better/more data, etc. Even if capabilities plateau, there are other options, like specialised fine-tuned models for particular domains like medicine/law/education.

I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.

replies(2): >>jacobm+gk1 >>NemoNo+my1
12. jacobm+gk1[view] [source] [discussion] 2023-11-20 23:55:50
>>coder-+Ic1
This is exactly what the previous poster was talking about: these definitions are so circular and hand-wavy.

AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these terms are supposed to mean; people just hear them thrown around so much that they forget the actual definitions.

AGI means a computer that can actually think and reason and have original thoughts like humans, and no, I don't think it's feasible.

replies(1): >>93po+xX6
13. random+nm1[view] [source] [discussion] 2023-11-21 00:09:04
>>Jeremy+Ad
If he has great sway with Microsoft and OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more Reddit every day.
14. NemoNo+my1[view] [source] [discussion] 2023-11-21 01:34:38
>>coder-+Ic1
It's not about feasibility or level of intelligence per se - I expect AI to be able to pass a Turing test long before an AI actually "wakes up" to a level of intelligence that establishes an actual conscious self-identity comparable to a human's.

For all intents and purposes, the glorified software of the near future will appear to be people, but it will not be, and it will continue to have issues that simply don't make sense unless it's just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.

This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI. They may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. And again: even an AI capable of original and innovative thinking, with an appearance of self-identity, doesn't guarantee that the AI is an AGI.

I'm not sure we could ever truly know for certain.

15. 93po+xX6[view] [source] [discussion] 2023-11-22 12:41:44
>>jacobm+gk1
Intelligence is the gathering and application of knowledge and skills.

Computers have been gathering and applying information since their inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".

> AGI means a computer that can actually think and reason and have original thoughts like humans, and no, I don't think it's feasible.

Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We just don't yet have that total computational power, or the organization/structure to implement it. But brains aren't magic that is incapable of being reproduced.
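
For what it's worth, simulating a single neuron really is a few lines of code. Here's a toy leaky integrate-and-fire sketch (all the constants and names are made up for illustration; this claims nothing about whole-brain fidelity):

    # Toy leaky integrate-and-fire neuron (Euler integration, arbitrary units).
    def simulate(input_current, dt=0.001, tau=0.02,
                 v_rest=-65.0, v_thresh=-50.0):
        v = v_rest                      # membrane potential starts at rest
        spike_times = []
        for step, current in enumerate(input_current):
            # potential leaks back toward rest while the input drives it up
            v += dt * ((v_rest - v) / tau + current)
            if v >= v_thresh:           # threshold crossed: spike and reset
                spike_times.append(step * dt)
                v = v_rest
        return spike_times

    # A constant drive produces regular spiking.
    print(simulate([1000.0] * 200))

The hard part isn't this; it's scale and wiring, which is exactly the computational power and organization/structure gap mentioned above.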
