zlacker

[parent] [thread] 17 comments
1. kcorbi+(OP)[view] [source] 2023-11-08 18:32:47
(author here) Yes, I was surprised that AI didn't have more positive sentiment overall!

Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.

I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.

replies(5): >>cft+G1 >>fallin+m2 >>btilly+Z2 >>Manfre+z5 >>devjab+gc
2. cft+G1[view] [source] 2023-11-08 18:39:21
>>kcorbi+(OP)
The most negative argument about GPT-like AI, for me, is that it has summed up all the world's information into only one "correct" opinion. Bing Chat, for example, terminates the conversation and invites you to apologize when you reply that it's incorrect and you don't like its biases. With the old search engines you could at least, in theory, get links to websites with different points of view.
replies(1): >>HPsqua+23
3. fallin+m2[view] [source] 2023-11-08 18:42:26
>>kcorbi+(OP)
Would be interesting to compare with overall sentiment on HN over time. I feel like it has gotten more and more bitter and negative over the time I've been here.
replies(1): >>JohnFe+T7
4. btilly+Z2[view] [source] 2023-11-08 18:43:57
>>kcorbi+(OP)
I would be curious.

What happens if you divide it not by comments, but by commenters? How much is sentiment being shaped by a vocal minority that keeps saying the same thing, and how much is it a broad-based sentiment among the overall audience that only occasionally responds?
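
A minimal sketch of that comparison in pandas (the column names and scores here are made up for illustration):

    import pandas as pd

    # Hypothetical data: one row per comment, sentiment scored in [-1, 1].
    df = pd.DataFrame({
        "author":    ["a", "a", "a", "a", "b", "c"],
        "sentiment": [-0.9, -0.8, -0.9, -0.7, 0.5, 0.6],
    })

    # Per-comment mean: dominated by prolific commenter "a".
    print(df["sentiment"].mean())                           # ~ -0.37

    # Per-commenter mean: each account counts once, however often it posts.
    print(df.groupby("author")["sentiment"].mean().mean())  # ~ 0.09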

replies(2): >>jay_ky+t9 >>johnny+5O
5. HPsqua+23[view] [source] [discussion] 2023-11-08 18:44:14
>>cft+G1
Microsoft is going to be especially sensitive in this area, given their experience with Tay and "user-guided sentiments":

https://en.m.wikipedia.org/wiki/Tay_(chatbot)

6. Manfre+z5[view] [source] 2023-11-08 18:53:47
>>kcorbi+(OP)
There was nothing about the ethical question of training models on copyrighted content? Nothing about centralizing power even further in big tech companies? Nothing about AI flooding the world with mediocre content and wrong information?

These are genuine questions, not a critique of your statement.

replies(1): >>apeter+Vg
7. JohnFe+T7[view] [source] [discussion] 2023-11-08 19:03:27
>>fallin+m2
My personal outlook on how AI has been developing has certainly grown darker with time. I still hold out hope, though, that things won't be as bad as I fear.
replies(1): >>ies7+V61
8. jay_ky+t9[view] [source] [discussion] 2023-11-08 19:10:11
>>btilly+Z2
Yeah, really good point. I've noticed that on some topics a few users come out of the woodwork and post a lot.
9. devjab+gc[view] [source] 2023-11-08 19:21:10
>>kcorbi+(OP)
I’m curious as to your conclusions on point (2). We use GPT daily but see nothing in it that threatens tech workers. At least not in any sense we haven’t seen already, with how two or three developers can basically do what it took one or two teams to do 25 years ago.

In terms of actually automating any form of “thinking” tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation, except for some very limited parts of our code base which can’t legally be shared with it. It has also increasingly replaced our code-generation tools for most “repetitive” work, and it auto-generates a lot of our data models from various forms of input. But the actual programming? It’s so horrible at it that it’s mostly used as a joke. Well, except by people who aren’t CS educated, who use it in earnest. The thing is, though, we’ve already had to replace some of the “wonderful” automation that gets cooked up by Product Owners, BI engineers and so on. Things which work, until they need to scale.

This is obviously very anecdotal, but I’m very underwhelmed and very impressed by AI at the same time. On one hand it’s frighteningly good at writing documentation… seriously, it wrote some truly amazing documentation based on nothing more than a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry), and it was better than what I could’ve written myself. But tasked with writing some fairly basic computation, it fails horribly. And I mean, where are my self-driving cars?

So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.

replies(2): >>idonot+jv >>wizzwi+OD
10. apeter+Vg[view] [source] [discussion] 2023-11-08 19:39:33
>>Manfre+z5
Centralizing power is a good point.

It feels like a huge dependency with a bunch of money involved.

I can’t help seeing it converging on a sentiment comparable to “you either AWS or have no idea what cloud/network/cluster means”.

We use these things like it’s actually "something". It’s not. We don’t build things with it. We configure other people’s software.

It’s born to be promoted as the next big enterprise thing. You either know how to configure it or you’re not enterprise-worthy.

And that stinks. Being dependent on someone else’s stuff has never turned out well.

Well, I mean, you could also not give a duck and just squeeze out all the money. Work a job, abandon it, and jump on the next train.

Feels useless, doesn’t it?

11. idonot+jv[view] [source] [discussion] 2023-11-08 20:47:00
>>devjab+gc
Try out a local language model for the docs you can't get ChatGPT to write.

You can run small quantized models on Apple Silicon if you have it.

I've been using a 70B local model for things like this and it works well.
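
A minimal sketch of that setup, assuming the llama-cpp-python bindings and a quantized GGUF file (the model filename here is just a placeholder):

    from llama_cpp import Llama

    # Load a quantized model; on Apple Silicon, n_gpu_layers=-1
    # offloads all layers to the GPU via Metal.
    llm = Llama(model_path="llama-2-70b-chat.Q4_K_M.gguf",
                n_ctx=4096, n_gpu_layers=-1)

    out = llm.create_chat_completion(messages=[
        {"role": "user",
         "content": "Write a short docstring for a function that fetches "
                    "company info from the Danish CVR registry."},
    ])
    print(out["choices"][0]["message"]["content"])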

12. wizzwi+OD[view] [source] [discussion] 2023-11-08 21:21:21
>>devjab+gc
> I say this as someone who works in a place where GPT writes all our documentation

> But the actual programming? It’s so horrible at it that it’s mostly used as a joke.

Please, for the sake of your future selves, hire someone who can write good documentation. (Or, better still but much harder, develop that skill yourself!) GPT documentation is the new auto-generated Javadoc comments: it looks right to someone who doesn't get what documentation is for, and it might even be a useful summary to consult (if it's kept up-to-date), but it's far less useful than the genuine article.

If GPT's better than you at writing documentation (not just faster), and you don't have some kind of language-processing disability, what are you even doing? Half of what goes into documentation is stuff that isn't obvious from the code! Even if you find writing hard, at least write bullet points or something; then, if you must, tack those on top of that (clearly marked) GPT-produced summary of the code.

replies(2): >>famous+aO >>devjab+IL1
13. johnny+5O[view] [source] [discussion] 2023-11-08 22:12:44
>>btilly+Z2
Absolutely. One fun thing I learnt on Reddit is how blocking just a few hyper-commenters can suddenly make a post 10x calmer. You should definitely take the frequency of individual users' comments into account with data like this.
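
A sketch of that filter, under the same made-up author/sentiment framing as above (the cutoff is arbitrary):

    import pandas as pd

    # Hypothetical thread where "a" is a hyper-commenter.
    df = pd.DataFrame({
        "author":    ["a"] * 8 + ["b", "c", "d"],
        "sentiment": [-0.8] * 8 + [0.2, 0.4, 0.1],
    })

    counts = df["author"].value_counts()
    hyper = counts[counts >= 5].index           # "hyper" cutoff is arbitrary
    calmer = df[~df["author"].isin(hyper)]

    print(df["sentiment"].mean())      # ~ -0.52, with the hyper-commenter
    print(calmer["sentiment"].mean())  # ~  0.23, without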
14. famous+aO[view] [source] [discussion] 2023-11-08 22:13:15
>>wizzwi+OD
Have you actually tried to use GPT-4 for documentation?

Whether it's obvious from the code or not is kind of irrelevant. It gets non-obvious things as well.

replies(1): >>wizzwi+J11
15. wizzwi+J11[view] [source] [discussion] 2023-11-08 23:24:32
>>famous+aO
It's not going to know that this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta – nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system. Documentation is supposed to explain why, not just what, the code does – and how it can be used to do what it is for: all things that you cannot derive from the source code, no matter how clever you are.
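
To put the same point in code rather than prose (every name and value here is invented for illustration):

    # The "what" is one line; the "why" is the part no model can
    # recover from the source alone.
    WIDGET_GREEN = "#2e8b7a"  # deliberately blue-ish
    # Matched by eye to the nth-generation photocopied manual (one copier
    # in the chain ran low on magenta). Must STAY blue-ish: "lime" and
    # "moss" are separate categories defined elsewhere in the system.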

Honestly, I don't actually care what you do. The more documentation is poisoned by GPT-4 output, the less useful future models built by the “big data” approach will be, but the easier it'll be to spot and disregard their output as useless. If this latest “automate your documentation” fad paves the way for a teaching moment or three, it'll have served some useful purpose.

replies(1): >>famous+aa1
16. ies7+V61[view] [source] [discussion] 2023-11-09 00:01:14
>>JohnFe+T7
Because the majority of those AI "startups/founders" were blockchain startups before this, and big data startups before that, and "any buzzword tech" startups before that too.

They will pivot their vision to the next toy after this too.

17. famous+aa1[view] [source] [discussion] 2023-11-09 00:23:17
>>wizzwi+J11
I guess we'll just have to disagree.

Every now and then, the why is useful information that sheds needed light. Most of the time, however, it's just unnecessary information taking up valuable space.

Like this example.

>this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta

I'm sorry, but unless matching the manual is a company mandate, this is not at all necessary to know and is wasted space.

Knowing the "low magenta" bit is especially useless information, company mandate or not.

>nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system.

Now this is actually useful information. But it's also information GPT can intuit if the code that defines these separate categories is part of the context.

Even if it's not, and you need to add it yourself (assuming you're even aware of it; not every human writing documentation is aware of every moving part), you've still saved a lot of valuable time by passing it through 4 first and then adding anything else.

18. devjab+IL1[view] [source] [discussion] 2023-11-09 05:39:01
>>wizzwi+OD
> Half of what goes into documentation is stuff that isn't obvious from the code!

I’d say that greatly depends on your code. I’ve had GPT write JSDoc where it explains exactly why a set of functions calculates the German green energy tariffs the way they do. Some of what it wrote went into great detail about how the tariff is not applied if your plant goes over a specific level of production, and why we try to prevent that.

I get your fears, but I don’t appreciate your assumptions about things you clearly know nothing about (our code and documentation) and apparently haven’t had much luck with yourself (LLM documentation).

You’re not completely wrong, of course. If you write code with bad variable names and functions that do more than they need to, then GPT is rather prone to hallucinating the meaning. But it’s not like we blindly let it write our documentation without reading it.
