We are now exposed to companies hyping huge general purpose models with whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.
This is impressive only at the surface level. Take a specific application: prompt one of these models to write you an algorithm, and outside of anything it can copy and paste from a textbook, it will generate bad or incorrect code and then confidently explain why it works.
It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.
That's not to say "AI" doesn't have a purpose, but currently it seems hyped up mostly by salespeople looking for Series-A funding or an IPO cash-out. I want to see models developed for specific tasks that will have a big impact, rather than the sleight-of-hand or circus tricks we currently get.
Maybe that time has passed, and general models are the future, and we will just have to wait until they're as good at any task you ask of them as a model built specifically for that task.
It will be interesting to see what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still hold companies culpable?
Personally, I care very little about whether the machine is intelligent or not. If it actually happens in my lifetime, I believe it will be unmistakable.
I am interested in how people solve problems. If you built and trained a model that solves a challenging task, THAT is something I find noteworthy and want to read about.
Apparently utility is boring, and "just ML" now. There are tons of academic papers I see fly under the radar, probably because they solve specific problems the average person doesn't know exist. Much of ML doesn't cross over into "popular science" enough to hold general public interest.