AI is great. ChatGPT is incredible. But I feel tired when I see so many new products being built that bolt on AI in some way: "AI for this...", "AI for that...". I think it misapplies AI. But more than that, it's just too much, right? Anyone else feel like this? Everything is about ChatGPT, AI, prompts, or startups we can build on top of them. It's like the crypto craze all over again, and I dread the return of the shysters, the waste, the opportunity cost of people chasing this like a mad crowd rather than being a little more thoughtful about where to go next. Not a great look for the "scene", methinks. Am I alone in this view?
When I first got into the topic I watched a documentary about Joseph Weizenbaum ([1]) and felt weirded out that someone would step away from such an interesting and future-shaping field. But the older I get, the more I feel that technology is not the solution to everything, and AI might actually create more problems than it solves. I still think Bostrom's paperclip maximizer ([2]) lacks a fundamental understanding of the status quo and just generates unnecessary commotion.
[1] http://www.plugandpray-film.de/en/ [2] https://www.lesswrong.com/tag/paperclip-maximizer
And you're not alone, I feel the same since ~2015
[1] The synonyms of "artificial" include "faked": https://www.thesaurus.com/browse/artificial
[2] The synonyms of "fake" include "artificial": https://www.thesaurus.com/browse/fake
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
> The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.
I think the AI hype cycle isn't done building. A few days ago, Paul Graham tweeted[2] this:
> One of the differences between the AI boom and previous tech booms is that AI is technically more difficult. That combined with VC funds' shift toward earlier stage investing with less analysis will mean that, for a while, money will be thrown at any AI startup.
[1]: https://twitter.com/kevin2kelly/status/718166465216512001
> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."
> [...]
> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.
Every time "learning machines" manage to do a new thing, the goalposts move: "wait, that's just mechanical; _real_ intelligence is something else."
[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
Imagine how the HN users who disagree with that feel. It is beyond fatiguing. I'm frequently reminded of the companies that added "blockchain" to their names and saw massive jumps in their stock price, despite having nothing to do with blockchains¹.
¹ https://www.theverge.com/2017/12/21/16805598/companies-block...
Which it occasionally mistypes. Then you're off chasing a small error buried in a tub of boilerplate. Great stuff! For an actual example, see [0].
[0] https://blog.ploeh.dk/2022/12/05/github-copilot-preliminary-...
Setting that aside (only because it's hard to judge whether I should expect it to handle state for such a small state space, even in a representation different from the one it's directly used to), I know I saw a post where it failed at rock, paper, scissors. Just found it:
https://www.reddit.com/r/OpenAI/comments/zjld09/chat_gpt_isn...
I think we're already there. A legion of AI-based startups seems to appear daily (https://www.futuretools.io/), most offering little more than gimmicks.
I came here to make this comment. Thank you for doing it for me.
I remember feeling shocked when this article appeared in the Atlantic in 2008, "Is Google Making Us Stupid?": https://www.theatlantic.com/magazine/archive/2008/07/is-goog...
The existence of the article broke Betteridge's law for me. The fact that this phenomenon is not more widely discussed says something about the limits of human intelligence. Which brings me back around to the other side... perhaps we were never as intelligent as we suspected?
It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.
https://arxiv.org/abs/2210.05189 but all NNs _are_ if statements!
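The quip has a literal reading: a ReLU unit is a branch, so a ReLU network is a nest of branches, which is roughly what the linked paper formalizes as an equivalent decision tree. A minimal sketch (my own illustration, not code from the paper):

```python
# A single ReLU neuron written two ways: as arithmetic, and as an
# explicit if-statement. Stacking layers of such units nests the
# branches, which is the intuition behind "NNs are if statements".

def relu_neuron(x, w, b):
    """Standard formulation: max(0, w*x + b)."""
    return max(0.0, w * x + b)

def relu_neuron_as_if(x, w, b):
    """The same function, written as an explicit branch."""
    pre_activation = w * x + b
    if pre_activation > 0:
        return pre_activation  # the "active" branch
    else:
        return 0.0             # the "dead" branch

# The two formulations agree everywhere.
for x in [-2.0, 0.0, 3.0]:
    assert relu_neuron(x, 1.5, -1.0) == relu_neuron_as_if(x, 1.5, -1.0)
```

Each input region where the set of active/dead branches stays fixed makes the whole network a plain linear function, which is why the paper can flatten it into a tree of threshold tests.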
https://twitter.com/marvinvonhagen/status/162365814434901197...