I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self-worth.
Think about it.
What's the most expressive medium we have which is also absolutely inundated with data?
To broadly predict human speech, you need to broadly predict the human mind. Predicting a human mind requires building a model of it, and to have a model of a human mind? Welcome to general intelligence.
We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.
It's important to note that this is your assumption, one I believe to be wrong (for most people here).
We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).
At the very least, ChatGPT helps us build increasingly better Turing tests.
> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."
> [...]
> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.
Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".
[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.
That's about the only purpose I've found so far, but it seems a big one?
I can't really imagine asking it a question about anything I cared about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.
Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹
It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.
¹ I’m not saying that’s your intention, but consider that this type of rhetoric may be counterproductive if you’re trying to make someone understand your point of view.
² I passed by that specific example on Mastodon but I’m not finding it now.
Language is way, way removed from intelligence. This is well-known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose yet are so cognitively impaired that they can't even tell the difference between fantasy and reality.
ChatGPT is good at making up stories.
I can't help but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path towards AGI than what we had before.
It is also obvious that we are in the middle of a shift of some kind. Very hard to see from within, but clearly we will look back at 2022 as the beginning of something…
Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.
This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.
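To make that concrete, here's a minimal sketch (Python, with a made-up toy corpus) of a bigram Markov chain. It produces a plausible next word purely from co-occurrence counts, with no model of a mind anywhere:

    import random
    from collections import defaultdict

    def train(corpus):
        # Record which words follow which in the corpus.
        table = defaultdict(list)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def next_word(table, word):
        # Sample a next word in proportion to how often it followed `word`.
        followers = table.get(word)
        return random.choice(followers) if followers else None

    table = train("the cat sat on the mat and the cat slept")
    print(next_word(table, "the"))  # e.g. "cat" (2/3) or "mat" (1/3)

Modern models are incomparably more capable, of course, but the training objective has the same shape: predict the next token.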
One of the major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, people who would consider themselves to be doing 'bullshit jobs'.
That reminds me of how, in my youth, many were planning on vacations to Mars resorts and unlimited fusion energy. The stars looked so close; it was only a matter of time!
> Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.
Why? Wouldn't you expect that technique to generally fail if it isn't intelligent enough to know what's happening in the sentence?
This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask "but how much time?" A lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.
That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things, "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.
As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.
(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)
The expressiveness of language makes this true of almost everything.