
"Is AI the next crypto? Insights from HN comments"
1. Closi+g8 2023-11-08 18:14:47
>>kcorbi+(OP)
Interesting analysis. Surprised HN is so negative towards AI (and that the positive:negative ratio for AI is about the same as it was for Crypto a few years ago!)

The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.

Maybe there is added negativity because this is a technology that poses a clear potential threat to jobs on a personal level (e.g. lift operators were very negative towards automatic lifts).

2. kcorbi+Nc 2023-11-08 18:32:47
>>Closi+g8
(author here) Yes, I was surprised that AI didn't have more positive sentiment overall!

Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.

I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.

3. devjab+3p 2023-11-08 19:21:10
>>kcorbi+Nc
I’m curious about your conclusions on point (2). We use GPT daily but see nothing in it that threatens tech workers. At least not in any sense we haven’t seen already, given how two or three developers can basically do what it took one or two teams to do 25 years ago.

In terms of actually automating any form of "thinking" tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation, except for some very limited parts of our code base that can’t legally be shared with it. It has also largely replaced our code-generation tools for "repetitive" work, and it auto-generates a lot of our data models from various forms of input (see the sketch below). But the actual programming? It’s so horrible at it that it’s mostly used as a joke. Well, except that people who aren’t CS-educated do use it for programming in earnest. The thing is, though, we’ve already had to replace some of the "wonderful" automation cooked up by Product Owners, BI engineers and so on. Things which work, until they need to scale.
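For flavor, the data-model generation looks roughly like this (a hypothetical sketch using the OpenAI Python client, not our actual pipeline; the sample payload and model name are made up):

    # Sketch: asking GPT to derive a data model from a sample payload.
    # Hypothetical example; assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    sample = '{"cvr": "12345678", "name": "Example ApS", "city": "Copenhagen"}'
    resp = client.chat.completions.create(
        model="gpt-4",  # model name is illustrative
        messages=[{
            "role": "user",
            "content": "Generate a Python dataclass for this JSON payload:\n" + sample,
        }],
    )
    print(resp.choices[0].message.content)  # generated class still goes through review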

This is obviously very anecdotal, but I’m very underwhelmed and very impressed by AI at the same time. On one hand it’s frighteningly good at writing documentation… seriously, given nothing but a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry), GPT wrote documentation better than what I could’ve written. But tasked with writing some fairly basic computation, it fails horribly. And I mean, where are my self-driving cars?
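To give a sense of it, the output read something like this (a reconstruction from memory, not our actual code; the signature and wording here are hypothetical):

    def getCompanyInfoFromCVR(cvr_number: str) -> dict:
        """Look up a company in the Danish CVR register.

        Queries the central business register (CVR) by the given
        eight-digit CVR number and returns the company's registered
        details, e.g. legal name, address, industry code and status.

        Args:
            cvr_number: The company's Danish CVR number.

        Returns:
            A dict with the company's registered information.

        Raises:
            LookupError: If no company matches the CVR number.
        """
        ...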

So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.

4. idonot+6I 2023-11-08 20:47:00
>>devjab+3p
Try out a local language model for the docs you can't get ChatGPT to write.

You can run small quantized models on Apple silicon if you have it.

I've been using a 70B local model for things like this and it works well.
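For example, with llama.cpp's Python bindings it only takes a few lines (a minimal sketch; assumes you've installed llama-cpp-python with Metal support and downloaded a quantized GGUF model, whose filename here is made up):

    # Sketch: running a quantized local model on Apple silicon via llama.cpp.
    # n_gpu_layers=-1 offloads all layers to the GPU (Metal on macOS).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-70b-chat.Q4_K_M.gguf",  # illustrative filename
        n_ctx=4096,
        n_gpu_layers=-1,
    )

    out = llm(
        "Write reference documentation for the following function:\n"
        "def getCompanyInfoFromCVR(cvr_number: str) -> dict: ...",
        max_tokens=512,
        temperature=0.2,
    )
    print(out["choices"][0]["text"])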
