zlacker

[parent] [thread] 69 comments
1. Closi+(OP)[view] [source] 2023-11-08 18:14:47
Interesting analysis. Surprised HN is so negative towards AI (and that the positive:negative ratio for AI is about the same as it was for Crypto a few years ago!)

The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.

Maybe there is added negativity considering it is a technology where there is clearly a potential threat to jobs on a personal level (e.g. lift operators were very negative towards automatic lifts).

replies(16): >>greent+w4 >>kcorbi+x4 >>wnevet+35 >>monero+s5 >>mdgrec+C5 >>adverb+a6 >>jl6+N6 >>mattlo+p7 >>jampek+D7 >>__loam+e8 >>JohnFe+Eb >>jejeyy+Zh >>tester+mm >>themag+jD >>verdve+pN >>gumbal+q71
2. greent+w4[view] [source] 2023-11-08 18:32:40
>>Closi+(OP)
HN isn't negative on "AI" in a vacuum. HN is negative on AI hype which is completely out of control.
3. kcorbi+x4[view] [source] 2023-11-08 18:32:47
>>Closi+(OP)
(author here) Yes, I was surprised that AI didn't have more positive sentiment overall!

Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.

I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.

replies(5): >>cft+d6 >>fallin+T6 >>btilly+w7 >>Manfre+6a >>devjab+Ng
4. wnevet+35[view] [source] 2023-11-08 18:35:14
>>Closi+(OP)
>Interesting analysis. Surprised HN is so negative towards AI (and that the positive:negative ratio for AI is about the same as it was for Crypto a few years ago!)

I would be curious to know how many HNers were previously burned by crypto. Fool me once, etc.

5. monero+s5[view] [source] 2023-11-08 18:36:30
>>Closi+(OP)
Crypto has objectively increased my net worth by many zeroes, while AI has a lot of hand-wavey "productivity gains" while the self driving cars I was promised are being banned for homicide...
replies(3): >>__loam+t6 >>rynean+x8 >>joegib+OJ
6. mdgrec+C5[view] [source] 2023-11-08 18:37:04
>>Closi+(OP)
I feel like the value of crypto going up is backed by solid economics. There is only ever going to be so much of it, so losing your money to inflation isn't a risk. People die without telling anyone their keys, and people forget their keys, so we're constantly losing crypto too; with fiat currency this is taken into account and new dollars are printed, but not with crypto. I also think it reflects a general sentiment, here in the states and probably China too, that we're barreling toward a fucked up world, and crypto is some kind of safeguard against that. In times of uncertainty people turn to gold, and this is very much a digital gold.
replies(4): >>dragon+R6 >>z3phyr+g7 >>acdha+T7 >>NBJack+09
7. adverb+a6[view] [source] 2023-11-08 18:39:12
>>Closi+(OP)
> The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.

Hacker News comment sentiment is not a reliable measure of what the average Hacker News developer thinks.

For one, only people who are very invested in something will post about it.

For two, many comments are probably not from developers and instead from fake accounts.

It does not seem surprising to me that both of these factors would be in favor of a more positive sentiment for crypto. People that like it seem to really like it and talk about it a lot, and there is a large financial incentive for numerous actors to create fake accounts and comments.

◧◩
8. cft+d6[view] [source] [discussion] 2023-11-08 18:39:21
>>kcorbi+x4
The most negative argument about GPT-like AI for me is that it has summed up all the world's information into only one "correct" opinion. Bing chat, for example, terminates the conversation and asks you to apologize when you reply that it's incorrect and you don't like its biases. With the old search engines, at least in theory, you could get links to websites with different points of view.
replies(1): >>HPsqua+z7
◧◩
9. __loam+t6[view] [source] [discussion] 2023-11-08 18:40:29
>>monero+s5
Congratulations on finding your greater fools.
replies(1): >>kridsd+gy
10. jl6+N6[view] [source] 2023-11-08 18:42:06
>>Closi+(OP)
I’m not negative towards AI, but I know how hype cycles work - even for genuinely good tech. I’m not looking forward to the coming waves of scammers and grifters and snake oil salesmen in this space.
replies(1): >>tracer+Fl
◧◩
11. dragon+R6[view] [source] [discussion] 2023-11-08 18:42:14
>>mdgrec+C5
> I feel like the value of crypto going up is backed by solid economics. There is only ever going to be so much of it so losing your money to inflation isn't a risk.

This is solid economics iff you assume that crypto has a utility for which there is no substitute that doesn't share the same supply-constraint feature, and even then it's not solid economics for a current investment unless you also assume that that utility is the entire basis for its current valuation. Because even if it has a nonsubstitutable utility, if that's not the basis of its current value, then the "solid economics" only says there is some price it could fall to at which supply (of substitutes) will no longer erode its value, but there is no guarantee of what that level is.

replies(1): >>Closi+Hd
◧◩
12. fallin+T6[view] [source] [discussion] 2023-11-08 18:42:26
>>kcorbi+x4
Would be interesting to compare with overall sentiment on HN over time. I feel like it has gotten more and more bitter and negative over the time I've been here.
replies(1): >>JohnFe+qc
◧◩
13. z3phyr+g7[view] [source] [discussion] 2023-11-08 18:43:27
>>mdgrec+C5
Gold does not require much infrastructure or utility to keep existing, and it is operable by anyone with hands or less.
14. mattlo+p7[view] [source] 2023-11-08 18:43:50
>>Closi+(OP)
There is a lot of negativity about the way it is used I think.

Most people will agree that LLMs are pretty neat, but now instead of every startup being "like Uber but for ..." they are "like chatGPT but for ...".

Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds essentially zero value. HN is fairly negative on that sort of thing I think (rightly so IMO).

replies(1): >>__loam+Wd
◧◩
15. btilly+w7[view] [source] [discussion] 2023-11-08 18:43:57
>>kcorbi+x4
I would be curious.

What happens if you divide it not by comments, but by commenters? How much is sentiment being shaped by a vocal minority who is always saying the same thing, and how much does it seem to be a broad-based sentiment among the overall audience that occasionally responds?
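A minimal sketch of the per-commenter vs. per-comment split being suggested here, with made-up usernames and sentiment scores (+1 positive, -1 negative); real data would need a sentiment classifier, which is out of scope:

```python
from collections import defaultdict

# Hypothetical (commenter, sentiment) pairs: one prolific negative
# voice (alice) and two occasional positive ones.
comments = [
    ("alice", -1), ("alice", -1), ("alice", -1), ("alice", -1),
    ("bob", 1),
    ("carol", 1),
]

# Per-comment average: dominated by the prolific commenter.
per_comment = sum(s for _, s in comments) / len(comments)

# Per-commenter average: each voice counted once.
by_user = defaultdict(list)
for user, s in comments:
    by_user[user].append(s)
per_commenter = sum(sum(v) / len(v) for v in by_user.values()) / len(by_user)

print(per_comment)    # ≈ -0.33: thread "looks" negative
print(per_commenter)  # ≈ +0.33: most commenters are positive
```

The two aggregates can point in opposite directions on the same data, which is exactly the vocal-minority effect being asked about.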

replies(2): >>jay_ky+0e >>johnny+CS
◧◩◪
16. HPsqua+z7[view] [source] [discussion] 2023-11-08 18:44:14
>>cft+d6
Microsoft is going to be especially sensitive in this area, given their experience with Tay and "user-guided sentiments"

https://en.m.wikipedia.org/wiki/Tay_(chatbot)

17. jampek+D7[view] [source] 2023-11-08 18:44:21
>>Closi+(OP)
I'd guess HN is negative towards hype in general. "AI" is very much full of hype.

What happened with previous AI hypes is that the term AI was abandoned and the techniques and disciplines were "rebranded".

Probably will happen again. When something works and we start to understand how and when it works (and especially when it doesn't) it stops being "AI" and becomes something more boring.

◧◩
18. acdha+T7[view] [source] [discussion] 2023-11-08 18:45:01
>>mdgrec+C5
> There is only ever going to be so much of it so losing your money to inflation isn't a risk.

This comment feels like it's 2013 and there hasn't been a decade of people creating thousands of other tokens and forks, or realizing that high volatility in liquidity or exchange rates is more of a problem than the levels of currency inflation we commonly see (the price increases of the last couple of years account for most of the inflation we've seen, and that would be unaffected).

It especially misses the understanding that deflation is much worse for anyone who isn’t already rich. The model that anyone who bought a decade ago deserves to be fabulously rich is … unlikely to be popular with the rest of the world.

19. __loam+e8[view] [source] 2023-11-08 18:46:17
>>Closi+(OP)
It's not obvious to me whether we've actually found a good killer app for generative AI yet, unless you consider chatgpt or sites like midjourney the killer apps. A lot of the new wave of ai startups are just wrappers for gpt. I don't think that adds a ton of value over just asking gpt-4 your query as opposed to subscribing to a billion different ai services. I also question the value of ai art in general when everyone involved in creative labor thinks you're a piece of shit for using it.
replies(3): >>Closi+kb >>tayo42+fd >>wruza+8j
◧◩
20. rynean+x8[view] [source] [discussion] 2023-11-08 18:47:21
>>monero+s5
Dan Olson is FAR more eloquent than I will ever be: https://www.youtube.com/watch?v=5pYeoZaoWrA&t=9s
replies(1): >>monero+ua
◧◩
21. NBJack+09[view] [source] [discussion] 2023-11-08 18:49:25
>>mdgrec+C5
That's a poor comparison.

Crypto is an umbrella term for a number of solutions, including blockchains (roughly 1,000+ as of right now) and cryptocurrencies (roughly 22,000+). While a given blockchain may be limited in terms of how much can be 'mined' or grow, you or I could very easily create a new cryptocurrency or even a new blockchain. Assuming we got traction with it, there would now be N+1 more out there.

Gold is not something we can so easily create. It also has intrinsic value through practical applications.

◧◩
22. Manfre+6a[view] [source] [discussion] 2023-11-08 18:53:47
>>kcorbi+x4
There was nothing about the ethical question of training models on copyrighted content? Nothing about centralizing power even further in big tech companies? Nothing about AI flooding the world with mediocre content and wrong information?

These are genuine questions, not critique on your statement.

replies(1): >>apeter+sl
◧◩◪
23. monero+ua[view] [source] [discussion] 2023-11-08 18:55:34
>>rynean+x8
2.5 hour video to tell me why you are smarter than me, no thanks
◧◩
24. Closi+kb[view] [source] [discussion] 2023-11-08 18:59:22
>>__loam+e8
I certainly consider ChatGPT to be one killer app.

Copilot to be another.

Midjourney to be another - or at least diffusion based image editing tools which can be brought into photo and video editing workflows. The killer app here is probably integration of diffusion models into apps like Photoshop (and eventually video).

Some real virtual assistant applications seem right around the corner (i.e. a real life J.A.R.V.I.S seems like an inevitability within the year rather than a pipe dream, and to me would be a killer app)

And then lots of other killer apps are pretty obvious to imagine with development (e.g. customer service applications like IT helpdesks, Computer game dialogue where you can really influence interactions...)

replies(2): >>__loam+2f >>thisgo+Qr
25. JohnFe+Eb[view] [source] 2023-11-08 19:00:34
>>Closi+(OP)
> Maybe there is added negativity considering it is a technology where there is clearly a potential threat to jobs on a personal level

I'm not worried about this on a personal level, but I'm very worried about the wider risk of too many people being put out of work too quickly. That's my biggest concern with these tools.

◧◩◪
26. JohnFe+qc[view] [source] [discussion] 2023-11-08 19:03:27
>>fallin+T6
My personal outlook about how AI has been developing has certainly become increasingly dark with time. I still have a hope, though, that things won't be as bad as I fear.
replies(1): >>ies7+sb1
◧◩
27. tayo42+fd[view] [source] [discussion] 2023-11-08 19:06:24
>>__loam+e8
AI advancements have had small but positive impacts on things I've done.

But my not so informed opinion is text as an interface is only a small feature of bigger useful products, not the main focus. Instead of learning sql, you can ask a regular question. It feels like inventing the mouse to use with computers.

◧◩◪
28. Closi+Hd[view] [source] [discussion] 2023-11-08 19:08:32
>>dragon+R6
Agreed! It's backed by solid economics if you believe a load of things that aren't backed up by solid economics.
◧◩
29. __loam+Wd[view] [source] [discussion] 2023-11-08 19:10:00
>>mattlo+p7
I think a major problem that is going to become more and more obvious is that AI is actually pretty expensive compared to good old deterministic computing. If there's a way to solve a problem without resorting to sending an inference request to a gpu cluster, we should do it that way. Otherwise you're wasting electricity.
replies(7): >>willsm+qs >>Closi+Vw >>hiAndr+oJ >>verdve+UN >>everfr+sR >>TeMPOr+z11 >>__Matr+bx1
◧◩◪
30. jay_ky+0e[view] [source] [discussion] 2023-11-08 19:10:11
>>btilly+w7
yeah, really good point. I've noticed on some topics a few users come out of the woodwork and post a lot.
◧◩◪
31. __loam+2f[view] [source] [discussion] 2023-11-08 19:14:04
>>Closi+kb
I guess I'm wondering if LLMs as customer service agents are actually going to be good, or if it's just going to be another layer of indirection I need to get through to talk to a human. Is the video game dialog actually going to be good, or will it fall flat compared to hand-crafted narrative? Do I actually like Copilot butting in with suggestions when I'm trying to program something?
replies(2): >>Closi+ps >>TeMPOr+n21
◧◩
32. devjab+Ng[view] [source] [discussion] 2023-11-08 19:21:10
>>kcorbi+x4
I’m curious as to your conclusions on point (2). We use GPT daily but see nothing in it that threatens tech workers. At least not in any sense that we haven’t seen already, with how two or three developers can basically do what it took one or two teams to do 25 years ago.

In terms of actually automating any form of “thinking” tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation except for some very limited parts of our code base which can’t legally be shared with it. It also increasingly replaces our code-generation tools for most “repetitive” work, and it auto-generates a lot of our data models based on various forms of input. But the actual programming? It’s so horrible at it that it’s mostly used as a joke. Well, except that it’s also not used like that by people who aren’t CS educated. The thing is though, we’ve already had to replace some of the “wonderful” automation that’s being cooked up by Product Owners, BI engineers and so on. Things which work, until they need to scale.

This is obviously very anecdotal, but I’m very underwhelmed and very impressed by AI at the same time. On one hand it’s frighteningly good at writing documentation… seriously, it wrote some truly amazing documentation based on a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry) and the documentation GPT wrote based on just that was better than what I could’ve written. But tasked with writing some fairly basic computation it fails horribly. And I mean, where are my self driving cars?

So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.

replies(2): >>idonot+Qz >>wizzwi+lI
33. jejeyy+Zh[view] [source] 2023-11-08 19:25:46
>>Closi+(OP)
HN is made of real people. People with emotions - like FOMO/jealousy/threatened etc. I get the sense that a lot of people feel like they missed the boat be it crypto, or AI, etc.
◧◩
34. wruza+8j[view] [source] [discussion] 2023-11-08 19:30:08
>>__loam+e8
A lot of startups in general are just wrappers over something. The reason is that people on average (not "an average person"; it includes all of us) have many out-of-their-scope issues and aren't even aware of something generic that could help solve them. Dividing a common [al]mighty resource into hundreds of packaged solutions and marketing them separately is how people get hold of the unknown. They can then choose to migrate to the root tech/resource, or to stay where they are for various reasons.
◧◩◪
35. apeter+sl[view] [source] [discussion] 2023-11-08 19:39:33
>>Manfre+6a
Centralizing power is a good point.

It feels like a huge dependency with a bunch of money involved.

I cannot _not_ see it clumping to a sentiment comparable to "you either AWS' or have no idea what cloud/network/cluster means".

We use these things like it’s actually "something". It’s not. We don’t build things with it. We configure other people’s software.

It’s born to be promoted as the next big enterprise stuff. You either know how to configure it or are not enterprise-worthy.

And that farts. Being dependent on someone else’s stuff has never turned out good.

Well, I mean. You can also not give a duck and squeeze out all the money. Work a job, abandon it and jump on the next train.

Feels useless, doesn’t it?

◧◩
36. tracer+Fl[view] [source] [discussion] 2023-11-08 19:40:45
>>jl6+N6
Wait you mean you don't like having every single Data adjacent professional on LinkedIn sending you their personal newsletter to let you know how much they know and how incredibly "involved" and expert at "AI" they are?
37. tester+mm[view] [source] 2023-11-08 19:44:59
>>Closi+(OP)
>Surprised HN is so negative towards AI

I feel like it is overrated and overhyped.

It sucks because it's an impressive field, but over a decade of hype on self-driving cars, and now the naive idea of experts being replaced by a chatbot, is annoying.

Don't get me wrong, I'm not saying those things don't work, just not as well as people try to convince us.

replies(1): >>pixl97+Z01
◧◩◪
38. thisgo+Qr[view] [source] [discussion] 2023-11-08 20:09:39
>>Closi+kb
I don't think Midjourney can keep up with ChatGPT with DALL-E as it is. The experience of creating an image is so much better with ChatGPT-4 now, since you don't have to memorize commands.
◧◩◪◨
39. Closi+ps[view] [source] [discussion] 2023-11-08 20:12:09
>>__loam+2f
LLMs will be good replacements for a portion of customer service queries (eg where’s my order, help me fix this common computer issue etc), and will probably be a bad replacement for other queries (eg complaint handling), but it doesn’t have to fix all for it to transform the sector and be a killer app.

Video game dialogue remains to be seen, but I already find ChatGPT based text adventures super fun! So I suspect there will be demand for both handcrafted static stories and AI dynamically-generated stories (ie they can be different things, one doesn’t have to replace the other, just like email didn’t immediately replace the post service).

I don’t know if you enjoy Copilot, but for me it definitely supercharges my productivity.

◧◩◪
40. willsm+qs[view] [source] [discussion] 2023-11-08 20:12:23
>>__loam+Wd
For now it is. If it continues to be the best way to solve problems, the cost will drop with time
◧◩◪
41. Closi+Vw[view] [source] [discussion] 2023-11-08 20:32:22
>>__loam+Wd
More expensive to run but cheaper to write.

Engineers are expensive, so actually the cost/benefit analysis is a little more complex and different problems will have different solutions.

replies(1): >>__loam+tW
◧◩◪
42. kridsd+gy[view] [source] [discussion] 2023-11-08 20:39:31
>>__loam+t6
Exactly. The guy was like "Well, I found a huge nugget of gold in the Klondike while everyone around me was made destitute, therefore abandoning your family to stand in a freezing river for two years is objectively good."
◧◩◪
43. idonot+Qz[view] [source] [discussion] 2023-11-08 20:47:00
>>devjab+Ng
Try out a local language model for the docs you can't get chatgpt to write.

You can run small quantized models on apple silicon if you have it.

I've been using a 70B local model for things like this and it works well

44. themag+jD[view] [source] 2023-11-08 21:00:46
>>Closi+(OP)
> Surprised HN is so negative towards AI (and that the positive:negative ratio to AI is about the same as it was for Crypto a few years ago!)

I'm not. AI tools will have huge benefits in some industries. But the main use case that people will experience (at least, the use case they recognize) on a daily basis will be scams and frustration. That's why people are negative. Not because the technology is bad or does not have uses, but because the average experience that people will consciously have will be negative.

It's already impossible to know what's real and what's not. Customer service is already majority bots. You'll never be able to talk to a human again if you have an issue with something. Blackmail and ransomware scams are going to get dialed up to 11. Everything is going to be automated in the most annoying ways possible. People are going to lose their jobs. Most of the jobs that will be lost are "meaningless," but our society revolves around meaningless jobs because they provide order, income and—as a consequence—dignity. All of that is going out the window.

Crypto had a purpose that no one actually cared about. No one cared until people started to see the scam potential and then it took off. AI is going to do the same thing.

AI tools will revolutionize medicine, engineering, manufacturing, and logistics. There will be huge benefits for all of humanity. But you won't think about this day-to-day. You'll just be bombarded by more (and better) scams more quickly.

I am amazed at what AI tools can do already. Had these tools existed 10 or 15 years ago my entire life would be different. Better? I have no idea. Maybe, maybe not. But even if it would have been better I know enough to know that I would not recognize that.

replies(1): >>pixl97+YY
◧◩◪
45. wizzwi+lI[view] [source] [discussion] 2023-11-08 21:21:21
>>devjab+Ng
> I say this as someone who work in a place where GPT writes all our documentation

> But the actual programming? It’s so horrible at it that it’s mostly used as a joke.

Please, for the sake of your future selves, hire someone who can write good documentation. (Or, better still but much harder, develop that skill yourself!) GPT documentation is the new auto-generated Javadoc comments: it looks right to someone who doesn't get what documentation is for, and it might even be a useful summary to consult (if it's kept up-to-date), but it's far less useful than the genuine article.

If GPT's better than you at writing documentation (not just faster), and you don't have some kind of language-processing disability, what are you even doing? Half of what goes into documentation is stuff that isn't obvious from the code! Even if you find writing hard, at least write bullet points or something; then, if you must, tack those on top of that (clearly marked) GPT-produced summary of the code.

replies(2): >>famous+HS >>devjab+fQ1
◧◩◪
46. hiAndr+oJ[view] [source] [discussion] 2023-11-08 21:26:13
>>__loam+Wd
Let's zeroth-order a single GPT-4 query as using 0.01 kWh (which is probably massive overkill for most queries but we'll roll with it).

Let's high ball US residential electricity prices are about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries. $25 gets us 10_000.

Let's low ball average US developer salaries at a cool $100_000/yr. 50 40 hour weeks in a year makes 2_000 working hours makes $50 per hour. So with our very generous margins all working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, in order to end up costing in electricity what he is making salary-wise.

I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 only-three-quarters-full runs.

Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
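The back-of-envelope numbers above can be checked in a few lines of Python, underscore separators included (all inputs are the comment's own rough assumptions, not real pricing):

```python
# Rough assumptions from the comment above.
kwh_per_query = 0.01          # generous high-ball energy per GPT-4 query
usd_per_kwh = 0.25            # high-ball US residential electricity price
usd_per_query = kwh_per_query * usd_per_kwh   # $0.0025 per query

salary = 100_000              # low-ball US developer salary, $/yr
hourly = salary / (50 * 40)   # 2_000 working hours/yr -> $50/hr

# Queries per hour at which electricity cost matches salary cost.
breakeven_queries_per_hour = hourly / usd_per_query
print(breakeven_queries_per_hour)          # 20000.0
print(breakeven_queries_per_hour / 3600)   # ≈ 5.6 per second
```

Which reproduces the figure in the comment: about 20,000 queries an hour, a little over 5 per second, before electricity catches up with the salary.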

replies(1): >>willsm+ZV
◧◩
47. joegib+OJ[view] [source] [discussion] 2023-11-08 21:27:32
>>monero+s5
Check out my next world-changing invention, a Stripe form where a million people can send me twenty bucks each…
48. verdve+pN[view] [source] 2023-11-08 21:45:43
>>Closi+(OP)
Crypto also had many use-cases, and what we are seeing now is the reality of LLMs. They are not the magic pixie dust you can just sprinkle on and get better outcomes. There are a lot of complications and downsides that come with them.
◧◩◪
49. verdve+UN[view] [source] [discussion] 2023-11-08 21:49:07
>>__loam+Wd
This was the gist of my PhD: a deterministic algo to replace a wasteful genetic (evolutionary) algo. It was multiple orders of magnitude less wasteful.
replies(1): >>__loam+xW
◧◩◪
50. everfr+sR[view] [source] [discussion] 2023-11-08 22:06:37
>>__loam+Wd
People said that about virtualized code, but then computers got 100x faster and now we're running 10 megabyte web apps in a 500 megabyte client to display a simple page of text, and it still loads acceptably fast.

The AI algos will get 100x faster through a combination of hardware and software optimizations. Then, deterministic vs AI will mean the unnoticeable difference between displaying some info to the user in 0.001s vs 0.1s. Then, AI will become the default.

replies(1): >>__loam+2W
◧◩◪
51. johnny+CS[view] [source] [discussion] 2023-11-08 22:12:44
>>btilly+w7
absolutely. One fun thing I learnt about reddit is how blocking just a few hyper commenters can suddenly make a post 10x calmer. You should definitely take into account the frequency of the users commenting with data like this.
◧◩◪◨
52. famous+HS[view] [source] [discussion] 2023-11-08 22:13:15
>>wizzwi+lI
Have you actually tried to use GPT-4 for documentation?

Whether it's obvious from the code or not is kind of irrelevant. It gets non-obvious things as well.

replies(1): >>wizzwi+g61
◧◩◪◨
53. willsm+ZV[view] [source] [discussion] 2023-11-08 22:28:13
>>hiAndr+oJ
I actually would have guessed a full load dishwasher would cost less than that, maybe 15-20 cents.
◧◩◪◨
54. __loam+2W[view] [source] [discussion] 2023-11-08 22:28:24
>>everfr+sR
I'm not sure if this is actually correct. Performance increases were reliable and consistent for a long time, but we're reaching the physical limitations of Moore's law. Unless you have new physics or new models of computation, we might reach an actual speed limit this decade when transistors are limited by the size of atoms.

I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of AI is unacceptable.

replies(1): >>everfr+T11
◧◩◪◨
55. __loam+tW[view] [source] [discussion] 2023-11-08 22:30:21
>>Closi+Vw
The proliferation of extremely expensive algorithms isn't necessarily good. A lot of ink has been spilled about how much useless work crypto does. We should consider the impact of AI on the total computational resources of the species carefully.
◧◩◪◨
56. __loam+xW[view] [source] [discussion] 2023-11-08 22:30:51
>>verdve+UN
Show us the paper, that sounds sick
replies(1): >>verdve+BY
◧◩◪◨⬒
57. verdve+BY[view] [source] [discussion] 2023-11-08 22:40:36
>>__loam+xW
I'll do you one better

https://github.com/verdverm/pypge

https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...

The reviews had awesome and encouraging comments

◧◩
58. pixl97+YY[view] [source] [discussion] 2023-11-08 22:42:32
>>themag+jD
The one issue with AI, and you not thinking about it day to day, is that you'll only see it where it makes mistakes or performs criminal actions.

It's the IT effect. When IT does its job right, everyone asks why you pay them; then when IT screws up, everyone asks why you pay them. Things just working is transparent, and we don't notice it's even there.

◧◩
59. pixl97+Z01[view] [source] [discussion] 2023-11-08 22:51:52
>>tester+mm
The problem with self-driving cars is that we are putting the cart before the horse.

That is, a lot of the hard issues with driving are preemptive-knowledge issues. I see a ball rolling towards the road from the left. As a human I know that, one, the ball will likely roll out in front of me, and two, a kid/person may be following it. Whereas if you see a blowing trash bag, you probably aren't going to take any risky corrective action to avoid it.

The problem is that to a pure vision system, a ball and a blowing trash bag are just objects with the same priority. You have no categorization of the relative meaning and danger behind each.

But things start getting interesting when you couple LLMs with vision systems. Currently it's much too slow, but in multi-modal systems objects get depth of meaning. The trash bag can be identified and assigned a low risk, while the ball can be identified and assigned a high risk, along with a bunch of the other generalizations that humans typically make.
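As a toy illustration of the risk-assignment idea above (class names and scores are entirely made up): a vision-only pipeline sees interchangeable "objects", while a semantic layer lets each class carry its own risk.

```python
# Hypothetical semantic risk scores per detected object class.
RISK = {
    "ball": 0.9,        # high: a child may be chasing it into the road
    "trash_bag": 0.1,   # low: not worth a risky swerve
    "pedestrian": 1.0,
}

def should_brake(detected_objects, threshold=0.5):
    """Brake if any detected object's semantic risk crosses the threshold.

    Unknown classes default to a middling 0.5, i.e. err toward caution.
    """
    return any(RISK.get(obj, 0.5) >= threshold for obj in detected_objects)

print(should_brake(["trash_bag"]))  # False
print(should_brake(["ball"]))       # True
```

A real multi-modal system would of course derive these scores from learned world knowledge rather than a hand-written table; the table just makes the ball-vs-trash-bag distinction concrete.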

◧◩◪
60. TeMPOr+z11[view] [source] [discussion] 2023-11-08 22:54:47
>>__loam+Wd
I agree, but then I expect the major benefit of current AI will be in providing reference solutions to previously intractable problems - it'll be much easier to develop more deterministic, classical / GOFAI methods of solving those problems once we have a wasteful but working solution to play with and test against.
◧◩◪◨⬒
61. everfr+T11[view] [source] [discussion] 2023-11-08 22:56:49
>>__loam+2W
New models of computation are a given, and improved application-specific circuits for the most widely-used models are also a given (I believe current models run mostly on enterprise GPUs). Together these could easily make AI models 100x more efficient even without any advancements in the underlying chipmaking processes.

> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.

For high-assurance apps, I agree there will always be a need, sure. Of course, these high-assurance apps will be supervised by AI that can inspect it and raise alarm bells if anything unexpected happens.

For consumer apps though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they are trying to accomplish when they perform certain actions in the app (much like a friendly tech-savvy teacher sitting down with you to help you accomplish something in the app).

replies(1): >>__loam+s91
◧◩◪◨
62. TeMPOr+n21[view] [source] [discussion] 2023-11-08 23:00:25
>>__loam+2f
> I guess I'm wondering if LLMs as customer service agents are actually going to be good or if it's just going to be another layer of indirection I need to get through to talk to a human.

As always, the tech isn't the problem - the way business applies it is. Customer service automation isn't done to help you better - it's done to make it cheaper to make you go away without making too big of a fuss. Companies building and employing customer service systems will find ways to make even GPT-4 incapable of providing anything the customer would find remotely useful.

◧◩◪◨⬒
63. wizzwi+g61[view] [source] [discussion] 2023-11-08 23:24:32
>>famous+HS
It's not going to know that this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta – nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system. Documentation is supposed to explain why, not just what, the code does – and how it can be used to do what it is for: all things that you cannot derive from the source code, no matter how clever you are.

Honestly, I don't actually care what you do. The more documentation is poisoned by GPT-4 output, the less useful future models built by the “big data” approach will be, but the easier it'll be to spot and disregard their output as useless. If this latest “automate your documentation” fad paves the way for a teaching moment or three, it'll have served some useful purpose.

replies(1): >>famous+He1
64. gumbal+q71[view] [source] 2023-11-08 23:32:31
>>Closi+(OP)
I don't think people are negative about AI, but rather about how it's built and what it's used for. At least that's my case. AI is a great tool, but training it on people's property, without permission or recognition, is harmful. Equally, using it to manipulate people, spread misinformation, and generate spam is detrimental to tech overall. Worse, spreading fear and claiming to want to use it to replace people - after stealing their work - instead of creating value and new industries is just petty. Therefore I will hold the people who promote AI in that manner in contempt to the end of time. To see so many smart people simply fall for and parrot the idiocy that AI will replace workers is sad. Sam Altman and his bros knew that FUD works and went down that path. The sheeple followed.

Instead, AI should be promoted as what it is - a creator of jobs and growth - and it should be built honouring people's property. It can and should be done that way.

65. __loam+s91[view] [source] [discussion] 2023-11-08 23:47:04
>>everfr+T11
You have a lot of faith in this ai stuff. It's not magic.
replies(1): >>everfr+8b1
66. everfr+8b1[view] [source] [discussion] 2023-11-08 23:59:37
>>__loam+s91
AI is already considerably more knowledgeable and easier to communicate with than the customer service representatives I interact with day to day. Interacting with an API through ChatGPT, I would have a lot more faith that my inquiry would be solved given the tools available at that customer service tier.

It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.

AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.

All useful technology improves over time, and I see no reason to believe AI will be any different.

67. ies7+sb1[view] [source] [discussion] 2023-11-09 00:01:14
>>JohnFe+qc
Because the majority of those AI "startups/founders" were blockchain startups before this, and big data startups before that, and "any buzzword tech" startups before that too.

They will pivot their vision to the next toy after this too.

68. famous+He1[view] [source] [discussion] 2023-11-09 00:23:17
>>wizzwi+g61
I guess we'll just have to disagree.

Every now and then, the why is useful information that sheds needed light. Most of the time, however, it's just unnecessary information taking up valuable space.

Like this example.

>this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta

I'm sorry but unless matching the manual is a company mandate, this is not necessary at all to know and is wasted space.

Knowing the "low magenta" bit is especially useless information, company mandate or not.

>nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system.

Now this is actually useful information. But it's also information GPT can intuit if the code that defines these separate categories is part of the context.

Even if it's not, and you need to add it yourself (assuming you're even aware of it yourself - not every human writing documentation is aware of every moving part), then you've still saved a lot of valuable time by passing it through GPT-4 first and then adding anything else.

69. __Matr+bx1[view] [source] [discussion] 2023-11-09 02:37:30
>>__loam+Wd
I think that's why there's a big focus on its ability to write code: spend the GPU-cluster cost once, generate code, run that code on a tiny instance. Need to make changes? Warm up the cluster...
70. devjab+fQ1[view] [source] [discussion] 2023-11-09 05:39:01
>>wizzwi+lI
> Half of what goes into documentation is stuff that isn't obvious from the code!

I’d say that greatly depends on your code. I’ve had GPT write JSDoc where it explains exactly why a set of functions calculates the German green energy tariffs the way they do. Some of what it wrote went into great detail about how the tariff is not applied if your plant goes over a specific level of production, and why we try to prevent that.
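For illustration, the kind of "why"-explaining JSDoc described here might look something like the sketch below. The function name, the cap value, and the rate are all invented for the example - they are not the real tariff rules or the poster's actual code:

```javascript
/**
 * Calculates the subsidized payout for a plant's energy production.
 *
 * The subsidy is NOT applied once production exceeds the cap for the
 * billing period: staying under it keeps the plant in the subsidized
 * bracket, which is why callers throttle output near the limit.
 *
 * (Illustrative only: the 750 kWh cap and rate are made-up numbers.)
 *
 * @param {number} producedKwh - Energy produced in the billing period.
 * @param {number} ratePerKwh - Subsidized rate in EUR per kWh.
 * @returns {number} Payout in EUR, or 0 if the cap was exceeded.
 */
function greenTariffPayout(producedKwh, ratePerKwh) {
  const CAP_KWH = 750; // hypothetical subsidy cap
  if (producedKwh > CAP_KWH) return 0; // over the cap: no subsidy
  return producedKwh * ratePerKwh;
}

console.log(greenTariffPayout(500, 0.08)); // 40
console.log(greenTariffPayout(800, 0.08)); // 0
```

The point being: the business reason (don't exceed the cap) lives in the comment, not the code, and that's exactly the part a model can only produce if it's in the surrounding context.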

I get your fears, but I don’t appreciate your assumptions into something you clearly both don’t know anything about (our code/documentation) and something you apparently haven’t had much luck with compared to us (LLM documentation).

You’re not completely wrong of course. If you write code with bad variable names and functions that do more than they need to, then GPT is rather prone to hallucinating the meaning. But it’s not like we just blindly let it auto-write our documentation without reading it.
