zlacker

ChatGPT Is a Gimmick

submitted by blueri+(OP) on 2025-05-22 04:04:25 | 115 points 159 comments

12. max_+ih 2025-05-22 07:54:00
>>notepa+Xf
The millennial tech bros have mostly made their money off gimmicks like Instagram, TikTok et al.

I was very disgusted when I saw VC firms with billions in AUM put money into things like FartCoin, Digital Twins

The Boomer VCs financed stuff that is genuinely useful: MRI scanners, Google, Apple Computers, Genentech (brought insulin to the masses).

The millennial VCs fund stuff that is at best convenient to have (Airbnb, Uber) but usually gimmicks: Instagram, TikTok.

Sam Altman is the master of gimmicks.

He took the GPT model that already existed and wrapped it in a chat format, similar to ELIZA [0]

Took neural style transfer, which had existed for a long time, and paired it with Studio Ghibli fandom. [1]

[0]: https://en.m.wikipedia.org/wiki/ELIZA_effect

[1]: https://en.m.wikipedia.org/wiki/Neural_style_transfer

32. ohxh+8j 2025-05-22 08:11:20
>>blueri+(OP)
This seems unusually shallow for The Hedgehog Review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs, therefore nobody can" style of essay -- not to mention the water-use argument! They've published far better writing on LLMs too: see "Language Machinery" from Fall 2023 [1]

[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...

46. cess11+Hk 2025-05-22 08:26:06
>>ddxv+zf
"It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?"

Which we already had, it's just a 'git clone https://github.com/whatevs/huh' away, or doing one of millions of tutorials on whatever topic. Pretty much everyone who can build something out of Elixir/Phoenix has a chat app, an e-commerce store and a scraping platform just lying around.

57. Andrew+El 2025-05-22 08:35:00
>>wiseow+2k
Turns out that AI is not good at summarising things:

https://futurism.com/ai-chatbots-summarizing-research

68. blixt+Tm 2025-05-22 08:45:07
>>mort96+Gl
My list of uses of AI includes:

- Turning a lot of data into a small amount of data: extracting facts from a text, translating and querying a PDF, cleaning up a data dump (e.g. getting a clean Markdown table from the copy/pasted HTML source of a web page), etc. (IMO it often goes wrong when you go the other way and try to turn a small prompt into a lot of data)

- Creating illustrations representing ephemeral data (e.g. my daily weather report illustration, which I enjoy looking at every day even if the data it produces is not super useful: https://github.com/blixt/sol-mate-eink)

- Using Cursor to perform coding tasks that are tedious but where I know what the end result should look like (so I can spend low effort verifying it) -- it has an 80% success rate and I deem it to save time, but it's not perfect

- Exploration of a topic I'm not familiar with (I've used o3 extensively while double checking facts, learning about laws, answering random questions that would be too difficult to Google, etc etc) -- o3 is good at giving sources so I can double check important things

Beyond this, AI is also a form of entertainment for me, like using realtime voice chat, or video/image generation to explore random ideas and seeing what comes out. Or turning my ugly sketches into nicer drawings, and so forth.
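For what it's worth, the HTML-to-Markdown-table cleanup in the first bullet is also doable deterministically when the HTML is well-formed; a minimal sketch using only Python's stdlib `html.parser` (the class and function names here are made up for illustration):

```python
from html.parser import HTMLParser

class TableToMarkdown(HTMLParser):
    """Collect rows and cells from the first <table> in an HTML snippet."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.cell = [], [], None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.cell = ""  # start accumulating text for this cell

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self.cell is not None:
            self.row.append(self.cell.strip())
            self.cell = None
        elif tag == "tr" and self.row:
            self.rows.append(self.row)

    def handle_data(self, data):
        if self.cell is not None:
            self.cell += data

def html_table_to_markdown(html: str) -> str:
    """Render the parsed rows as a GitHub-style Markdown table."""
    p = TableToMarkdown()
    p.feed(html)
    header, *body = p.rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)

print(html_table_to_markdown(
    "<table><tr><th>City</th><th>Temp</th></tr>"
    "<tr><td>Oslo</td><td>12</td></tr></table>"
))
```

The trade-off is the usual one: this breaks on messy real-world markup (nested tables, colspans), which is exactly where an LLM's fuzziness can be worth the occasional hallucinated cell.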

71. panstr+Kn 2025-05-22 08:51:35
>>blueri+(OP)
If anyone is interested in AI in relation to learning, I think the best take on that I've seen so far was from Derek (Veritasium) in this recent talk: https://www.youtube.com/watch?v=0xS68sl2D70

It's a lot more balanced compared to the doomy attitude in the primary post.

75. sausag+6o 2025-05-22 08:53:51
>>danlit+ag
I've had the same feeling for a while. I tried to articulate it last night actually, I don't know with how much success: https://pid1.dev/posts/ai-skeptic/

86. wiseow+uq 2025-05-22 09:19:37
>>mort96+Rl
https://www.theatlantic.com/technology/archive/2015/03/when-...

https://www.today.com/money/are-smartphones-making-us-lazy-t...

Etc., etc.

87. wazoox+vq 2025-05-22 09:19:42
>>mnky98+5j
Perplexity often quotes references that simply don't exist. Recent examples provided by Perplexity:

Google Cloud. (2024). "Broadcast Transformation with Google Cloud." https://cloud.google.com/solutions/media-entertainment/broad...

Microsoft Azure. (2024). "Azure for Media and Entertainment." https://azure.microsoft.com/en-us/solutions/media-entertainm...

IBC365. (2023). "The Future of Broadcast Engineering: Skills and Training." https://www.ibc.org/tech-advances/the-future-of-broadcast-en...

Broadcast Bridge. (2023). "Cloud Skills for Broadcast Engineers." https://www.thebroadcastbridge.com/content/entry/18744/cloud...

SVG Europe. (2023). "OTT and Cloud: The New Normal for Broadcast." https://www.svgeurope.org/blog/headlines/ott-and-cloud-the-n...

None of these exist, neither at the provided URLs nor elsewhere.

138. Tade0+e01 2025-05-22 14:38:55
>>terhec+dj
Not OP, but here's one instance over which I already had an internet fistfight with a person swearing by LLMs[0], meaning it should serve as a decent example:

> Suppose I'm standing on Earth and suddenly gravity stopped affecting me. What would be my trajectory? Specifically what would be my distance from Earth over time?

https://chatgpt.com/c/682edff8-c540-8010-acaa-8d9b5c26733d

It gives the "small distance approximation" in its examples, even if I ask for the solution after two hours, where its 879 km is already quite far off the correct ~820 km.

An approximation that holds well from seconds up to hours is pretty simple:

  s(t) = sqrt(R^2 + (Vt)^2) - R
And it's even plotted in the chart, but again - numbers are off.

[0] Their results were giving wildly incorrect numbers at less than 100 seconds already, which was what originally prompted me to respond - they didn't even match the formula.
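The formula above is easy to check numerically. A minimal sketch, assuming V ≈ 465 m/s (Earth's equatorial rotation speed, which is an assumption about the setup; the commenter doesn't state a latitude) and R ≈ 6371 km:

```python
import math

R = 6.371e6  # Earth's mean radius, m
V = 465.0    # equatorial surface rotation speed, m/s (assumed)

def exact(t):
    """Distance from the surface: with gravity off, you move along the
    tangent at constant speed V, so s(t) = sqrt(R^2 + (Vt)^2) - R."""
    return math.sqrt(R**2 + (V * t)**2) - R

def small_t(t):
    """The "small distance approximation" (Vt)^2 / (2R): the Taylor
    expansion of exact(t), only valid while Vt << R."""
    return (V * t)**2 / (2 * R)

t = 2 * 3600  # two hours
print(f"exact:  {exact(t)/1e3:.0f} km")
print(f"approx: {small_t(t)/1e3:.0f} km")
```

With these constants the exact formula gives ~826 km after two hours, in line with the ~820 km the commenter cites, while the small-distance approximation inflates it to ~880 km, matching the 879 km figure ChatGPT kept producing.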
