Although it would be beautiful if they named it Clippy and finally made Clippy into the all-powerful AGI it was destined to be.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it's just an LLM, and it will be even more excellent over the years when it eventually achieves AGI-ness.
At least in this forum, can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren't AGI (which I assume 98% of people here know)? So I think it's funny you assume I think ChatGPT is an AGI.
LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.
Finally the paperclip maximizer
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent: some sort of primordial cell structure that is but one piece of the puzzle.
(I'm in the latter camp).
It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.
Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?
Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.
Either way, I think GGP’s comment was not applicable based on my comment as written and certainly my intent.
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) Or is next-word prediction just it?
"LLMs cannot find reasoning errors, but can correct them" >>38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors the way IE squashed Netscape.
When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI fits the pattern: a smaller company develops great tech and gets raided by MSFT for it. The specific actions of specific people aren't really important; the main factor is MSFT's black-hole gravitational pull, and it was just a matter of time before its destructive power manifested itself, as in this case where it tore OpenAI apart with tidal forces.
Especially since you have to explain how "just mimicking" works so well.
I imagine that when we actually reach AGI, people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities, not process. If expectations of its capabilities are clear, then we will get there eventually, provided we allow it to happen and do not keep moving the goalposts.
Why acquire rights to thousands of favourite characters when you can build the bot underneath, and let the media houses that own them negotiate licenses to skin and personalise it?
Same as GPS voices I guess.
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking and thinking about it is how we make it possible. Things don't just create themselves (well, maybe once we _do_ have AGI-level AI, he he; that'll be a fun apocalypse).
It's my belief that we're not special; we humans are just meat bags, and our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.
Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.
I've had Cortana shut off for so long it took me a minute to remember they've used that name already.