But yes, indeed, there are many, many AI products being launched in this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market, with all the bullshit and all the awesome at once, is a sign of very rapid progress in this space. It probably won't always be like this, and who knows what we are approaching.
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or, to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
In terms of closing the gap between AI hype and useful general-purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.
It's just not a daily driver for technical experts yet.
sounds like most people tbf
ChatGPT isn't as good as a human who puts in a lot of effort, but in many jobs it can easily outperform humans who don't care very much.
Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are calling this into question, we realize our definitions may not work well.
Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.
For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.
I honestly couldn't imagine something like this being this good only three years ago.
I'll also throw random programming questions into it, and it's been hit and miss. SO is probably still faster, and I like seeing the discussion. The problem with ChatGPT right now is that it delivers answers with total certainty even when it's often wrong.
I can see the benefits of this interaction model (basically summarizing all the things from a search into what feels like a person talking back), but I don't see anything justifying change-the-world-level hype at the moment.
I also wonder if LLMs will get worse over time through error propagation, as more and more of their training content is generated by other LLMs.
- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends (see the sketch after this list)
- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
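These map onto a pretty standard embed-then-model workflow. Here's a minimal sketch of the first two bullets, assuming the OpenAI Python client and scikit-learn; the example strings, model names, prompt wording, and cluster count are all illustrative assumptions, not my actual pipeline:

```python
# Rough sketch, not my exact pipeline: embed free text, cluster it,
# auto-label the clusters with a completion model, then fit a classifier
# on embeddings of records humans have already categorized.
from openai import OpenAI
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    # One embedding vector per input string
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# --- Use case 1: cluster safety observations, auto-label the clusters ---
observations = [
    "Worker near crane lift without a hard hat",
    "Hydraulic fluid spilled on walkway, slip hazard",
    "Guard rail missing on mezzanine level",
]
X = embed(observations)
k = 2  # in practice, pick k with a silhouette score or similar
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

for c in range(k):
    members = [o for o, lbl in zip(observations, clusters) if lbl == c]
    prompt = ("Give a short category label for these safety observations:\n"
              + "\n".join(members))
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"cluster {c}: {chat.choices[0].message.content}")

# --- Use case 2: predict categories for uncategorized equipment failures ---
# labeled_texts / labels stand in for the human-categorized failures
labeled_texts = ["Pump seal leak", "Motor bearing seized"]
labels = ["seal", "bearing"]
clf = LogisticRegression(max_iter=1000).fit(embed(labeled_texts), labels)
print(clf.predict(embed(["Bearing noise on conveyor motor"])))
```

The third bullet is the same pattern again: embed the goal statements, cluster them, and label the clusters to surface common themes.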
My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.
Consciousness is about self-reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals and especially in humans, but they’re not the same thing at all.
“Is chatgpt conscious” is a totally different question than “is chatgpt intelligent”.
We will know chatgpt is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.
I have no idea if/when we will know whether chatgpt is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.
(2) Counter. I asked it the other day "how many movies were Tom Hanks and Meg Ryan in together" and the answer ChatGPT gave was 2 ... not only is that wrong, it is astonishingly wrong (IMO). You could be forgiven for forgetting Ithaca from 2015. I could forgive ChatGPT for forgetting that one. But You've Got Mail? That's a very odd omission. So much so that I'm genuinely curious how it could possibly get the answer wrong in that way. And for the record, Google presents the correct answer (4) in a cut-out segment right at the top, a result and presentation very close to what one would expect from ChatGPT.
I don't know about other use cases like generating stories (or tangentially art of any kind) for inspiration, etc. But as a search engine things like ChatGPT NEED to have attributions. If I ask the question "Does a submarine appear in the movie Battlefield Earth?" it will confidently answer "no". I _think_ that answer is right, but I'm not really all that confident it is right. It needs to present the reasons it thinks that is right. Something like "No. I believe this because (1) the keyword submarine doesn't appear in the IMDb keywords (<source>), (2) the word submarine doesn't appear in the wikipedia plot synopsis (<source>), (3) the film takes place in Denver (<source>) which is landlocked making it unlikely a submarine would be found in that location during the course of the film."
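Concretely, something like this response shape is what I'm wishing for; the field names are just illustrative assumptions of mine, and nothing like this exists in ChatGPT today:

```python
# Illustrative only: a response schema a search-style assistant could
# fill in, so every answer carries its supporting claims and sources.
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str    # e.g. "'submarine' is absent from the IMDb keywords"
    source: str   # URL of the page the claim was checked against

@dataclass
class SourcedAnswer:
    answer: str               # e.g. "No"
    evidence: list[Evidence]  # the numbered reasons above
```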
The Tom Hanks / Meg Ryan question/answer would at least be more interesting if it explained how it managed to be so uniquely incorrect. That question will haunt me, though ... there's some rule about this, right? Asking about something you have above-average knowledge in and watching someone confidently answer it incorrectly. How am I supposed to ever trust ChatGPT again about movie queries?
Well, I'm no fan of ChatGPT. But it appears most people are worse than ChatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.