The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.
Maybe there is added negativity because it is a technology that clearly poses a potential threat to jobs on a personal level (e.g. lift operators were very negative towards automatic lifts).
Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.
I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.
I would be curious to know how many HNers were previously burned by crypto. Fool me once, etc.
Hacker News comment sentiment is not a reliable measure of what the average Hacker News developer thinks.
For one, only people who are very invested in something will post about it.
For two, many comments are probably not from developers and instead from fake accounts.
It does not seem surprising to me that both of these factors would be in favor of a more positive sentiment for crypto. People that like it seem to really like it and talk about it a lot, and there is a large financial incentive for numerous actors to create fake accounts and comments.
This is solid economics iff you assume that crypto has a utility for which there is no substitute that doesn't share the same supply-constraint feature, and even then it's not solid economics for a current investment unless you also assume that that utility is the entire basis for its current valuation. Even if it has a non-substitutable utility, if that utility is not the basis of its current value, then the "solid economics" only says there is some price it could reach below which supply (of substitutes) can no longer erode value, but there is no guarantee of what that level is.
Most people will agree that LLMs are pretty neat, but now instead of every startup being "like Uber but for ..." they are "like chatGPT but for ...".
Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds essentially zero value. HN is fairly negative on that sort of thing I think (rightly so IMO).
What happens if you divide it not by comments, but by commenters? How much is sentiment being shaped by a vocal minority who is always saying the same thing, and how much does it seem to be a broad-based sentiment among the overall audience that occasionally responds?
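A rough way to check this, assuming the comments are in a table with (hypothetical) author and sentiment columns, is to average per commenter instead of per comment, so a prolific account only counts once:

    import pandas as pd

    # Toy data: one very vocal negative account, two occasional commenters.
    comments = pd.DataFrame({
        "author": ["a", "a", "a", "b", "c"],
        "sentiment": [-1, -1, -1, 1, 0],
    })

    per_comment = comments["sentiment"].mean()                             # -0.4
    per_commenter = comments.groupby("author")["sentiment"].mean().mean()  #  0.0

    print(per_comment, per_commenter)

If the two numbers diverge a lot, the headline sentiment is being carried by a vocal minority.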
What happened with previous AI hype cycles is that the term AI was abandoned and the techniques and disciplines were "rebranded".
Probably will happen again. When something works and we start to understand how and when it works (and especially when it doesn't) it stops being "AI" and becomes something more boring.
This comment feels like it’s 2013 and there hasn’t been a decade of people creating thousands of other tokens and forks, or realizing that high volatility in liquidity and exchange rates is more of a problem than the levels of currency inflation we commonly see (the price increases of the last couple of years account for most of the inflation we’ve seen, and that would be unaffected).
It especially misses the understanding that deflation is much worse for anyone who isn’t already rich. The model that anyone who bought a decade ago deserves to be fabulously rich is … unlikely to be popular with the rest of the world.
Crypto is an umbrella term for a number of solutions, including blockchains (roughly 1,000+ as of right now) and cryptocurrencies (roughly 22,000+). While a given blockchain may be limited in terms of how much can be 'mined' or grow, you or I could very easily create a new cryptocurrency or even a new blockchain. Assuming we got traction with it, there would now be N+1 more out there.
Gold is not something we can so easily create. It also has intrinsic value through practical applications.
These are genuine questions, not a critique of your statement.
Copilot to be another.
Midjourney to be another - or at least diffusion based image editing tools which can be brought into photo and video editing workflows. The killer app here is probably integration of diffusion models into apps like Photoshop (and eventually video).
Some real virtual assistant applications seem right around the corner (i.e. a real-life J.A.R.V.I.S. seems like an inevitability within the year rather than a pipe dream, and to me would be a killer app)
And then lots of other killer apps are pretty obvious to imagine with development (e.g. customer service applications like IT helpdesks, Computer game dialogue where you can really influence interactions...)
I'm not worried about this on a personal level, but I'm very worried about the wider risk of too many people being put out of work too quickly. That's my biggest concern with these tools.
But my not-so-informed opinion is that text as an interface is only a small feature of bigger useful products, not the main focus. Instead of learning SQL, you can ask a regular question. It feels like inventing the mouse for use with computers.
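Something like this rough sketch is what I have in mind (using the OpenAI Python client; the schema, prompt and model name are just illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    schema = "orders(id, customer_id, total, created_at)"  # made-up example table
    question = "What was our total revenue last month?"

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into a SQL query "
                        f"against this schema: {schema}. Return only the SQL."},
            {"role": "user", "content": question},
        ],
    )

    print(response.choices[0].message.content)  # review before running it

The text box is just the front end; the useful product is still the database and whatever sits around it.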
In terms of actually automating any form of “thinking” tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation except for some very limited parts of our code base which can’t legally be shared with it. It increasingly also replaces our code-generation tools for most “repetitive” work, and it auto-generates a lot of our data models based on various forms of input. But the actual programming? It’s so horrible at it that it’s mostly used as a joke. Well, except that it’s not used like that by people who aren’t CS educated. The thing is, though, we’ve already had to replace some of the “wonderful” automation that’s being cooked up by Product Owners, BI engineers and so on. Things which work, until they need to scale.
This is obviously very anecdotal, but I’m very underwhelmed and very impressed by AI at the same time. On one hand it’s frighteningly good at writing documentation… seriously, it wrote some truly amazing documentation based on a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry), and the documentation GPT wrote based on just that was better than what I could’ve written. But when tasked with writing some fairly basic computation, it fails horribly. And I mean, where are my self-driving cars?
So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.
It feels like a huge dependency with a bunch of money involved.
I cannot _not_ see it converging on a sentiment comparable to "you either use AWS or have no idea what cloud/network/cluster means".
We use these things like it’s actually "something". It’s not. We don’t build things with it. We configure other people’s software.
It’s born to be promoted as the next big enterprise stuff. You either know how to configure it or are not enterprise-worthy.
And that farts. Being dependent on someone else’s stuff has never turned out good.
Well, I mean. You can also not give a duck and squeeze out all the money. Work a job, abandon it and jump on the next train.
Feels useless, doesn’t it?
I feel like it is overrated and overhyped
It sucks because it's an impressive field, but after over a decade of hype about self-driving cars, the naivety of thinking experts will now be replaced by a chatbot is annoying
Don't get me wrong, I'm not saying those things don't work, just not as well as people try to convince us
Video game dialogue remains to be seen, but I already find ChatGPT based text adventures super fun! So I suspect there will be demand for both handcrafted static stories and AI dynamically-generated stories (ie they can be different things, one doesn’t have to replace the other, just like email didn’t immediately replace the post service).
I don’t know if you enjoy Copilot, but for me it definitely supercharges my productivity.
Engineers are expensive, so actually the cost/benefit analysis is a little more complex and different problems will have different solutions.
You can run small quantized models on Apple silicon if you have it.
I've been using a 70B local model for things like this and it works well.
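For anyone curious, a minimal sketch of what that looks like with llama-cpp-python built with Metal support (the model path is just a placeholder for whatever GGUF-quantized checkpoint you've downloaded):

    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-2-70b-chat.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal)
        n_ctx=4096,       # context window
    )

    out = llm("Explain what a quantized model is in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

The heavier quantizations trade some quality for fitting in memory, so it's worth trying a couple of variants.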
I'm not. AI tools will have huge benefits in some industries. But the main use case that people will experience (at least, the use case they recognize) on a daily basis will be scams and frustration. That's why people are negative. Not because the technology is bad or does not have uses, but because the average experience that people will consciously have will be negative.
It's already impossible to know what's real and what's not. Customer service is already majority bots. You'll never be able to talk to a human again if you have an issue with something. Blackmail and ransomware scams are going to get dialed up to 11. Everything is going to be automated in the most annoying ways possible. People are going to lose their jobs. Most of the jobs that will be lost are "meaningless," but our society revolves around meaningless jobs because they provide order, income and—as a consequence—dignity. All of that is going out the window.
Crypto had a purpose that no one actually cared about. No one cared until people started to see the scam potential and then it took off. AI is going to do the same thing.
AI tools will revolutionize medicine, engineering, manufacturing, and logistics. There will be huge benefits for all of humanity. But you won't think about this day-to-day. You'll just be bombarded by more (and better) scams more quickly.
I am amazed at what AI tools can do already. Had these tools existed 10 or 15 years ago my entire life would be different. Better? I have no idea. Maybe, maybe not. But even if it would have been better I know enough to know that I would not recognize that.
> But the actual programming? It’s so horrible at it that it’s mostly used as a joke.
Please, for the sake of your future selves, hire someone who can write good documentation. (Or, better still but much harder, develop that skill yourself!) GPT documentation is the new auto-generated Javadoc comments: it looks right to someone who doesn't get what documentation is for, and it might even be a useful summary to consult (if it's kept up-to-date), but it's far less useful than the genuine article.
If GPT's better than you at writing documentation (not just faster), and you don't have some kind of language-processing disability, what are you even doing? Half of what goes into documentation is stuff that isn't obvious from the code! Even if you find writing hard, at least write bullet points or something; then, if you must, tack those on top of that (clearly marked) GPT-produced summary of the code.
Let's high-ball it: US residential electricity prices are about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries, and $25 gets us 10_000.
Let's low-ball average US developer salaries at a cool $100_000/yr. Fifty 40-hour weeks in a year makes 2_000 working hours, which makes $50 per hour. So with our very generous margins all working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, to end up costing in electricity what he is making salary-wise.
I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 only-three-quarters-full runs.
Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
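The break-even arithmetic from above, written out (and showing off the underscores):

    cost_per_query = 0.25 / 100    # 25¢ of electricity per 100 GPT-4 queries -> $0.0025
    hourly_wage = 100_000 / 2_000  # $100k over 2_000 working hours -> $50/hr

    queries_per_hour = hourly_wage / cost_per_query  # 20_000
    queries_per_second = queries_per_hour / 3_600    # ~5.6

    print(queries_per_hour, round(queries_per_second, 1))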
The AI algos will get 100x faster through a combination of hardware and software optimizations. Then, deterministic vs AI will mean the unnoticeable difference between displaying some info to the user in 0.001s vs 0.1s. Then, AI will become the default.
Whether it's obvious from the code or not is kind of irrelevant. It gets non-obvious things as well.
I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.
https://github.com/verdverm/pypge
https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...
The reviews had awesome and encouraging comments
It's the IT effect. When IT does its job right, everyone asks why you pay them; then when IT screws up, everyone also asks why you pay them. Things just working is transparent, and we don't notice it's even there.
That is, a lot of the hard issues with driving are preemptive-knowledge issues. I see a ball rolling towards the road from the left. As a human, I know that, one, the ball will likely roll out in front of me, and two, a kid or other person may be following it. Now, if you see a blowing trash bag, you probably aren't going to take any risky corrective action to avoid it.
The problem with just a vision knowledge system is that a ball and a blowing trash bag are just objects with the same priority. You have no categorization system for the relative meaning and dangers behind each.
But things start getting weird when you couple LLMs with vision knowledge. Really, it's much too slow currently, but in multi-modal systems objects get depth of meaning. That trash bag can be identified, and a low risk can be assigned to it. While the ball can also be identified and a high risk assigned to it. Along with a bunch of other generalization that humans typically do.
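As a toy sketch of what I mean (the class names and risk numbers are made up, not from any real system): the vision model hands over object classes, and a layer on top maps them to a risk the planner can act on.

    # Made-up class names and risk scores, purely to illustrate the idea.
    DETECTED_OBJECT_RISK = {
        "ball_rolling_into_road": 0.9,  # a kid may follow it -> high risk
        "plastic_bag_blowing": 0.1,     # harmless -> no evasive action
        "pedestrian": 1.0,
        "unknown_object": 0.5,          # unrecognized -> treat cautiously
    }

    def plan_action(detections: list[str]) -> str:
        risk = max((DETECTED_OBJECT_RISK.get(d, 0.5) for d in detections), default=0.0)
        if risk >= 0.8:
            return "brake"
        if risk >= 0.4:
            return "slow_down"
        return "continue"

    print(plan_action(["plastic_bag_blowing"]))     # continue
    print(plan_action(["ball_rolling_into_road"]))  # brake

The hard part is making that risk assignment reliable and fast enough, which is exactly where the current multi-modal models are still too slow.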
> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.
For high-assurance apps, I agree there will always be a need, sure. Of course, these high-assurance apps will be supervised by AI that can inspect it and raise alarm bells if anything unexpected happens.
For consumer apps though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they are trying to accomplish when they perform certain actions in the app (much like a friendly tech-savvy teacher sitting down with you to help you accomplish something in the app).
As always, the tech isn't the problem - the way business applies it is. Customer service automation isn't done to help you better - it's done to make it cheaper to make you go away without making too big of a fuss. Companies building and employing customer service systems will find ways to make even GPT-4 incapable of providing anything the customer would find remotely useful.
Honestly, I don't actually care what you do. The more documentation is poisoned by GPT-4 output, the less useful future models built by the “big data” approach will be, but the easier it'll be to spot and disregard their output as useless. If this latest “automate your documentation” fad paves the way for a teaching moment or three, it'll have served some useful purpose.
Instead, AI should be promoted as what it is - a job and growth creator - and should be built honouring people’s property. It can be done and should be done that way.
It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.
AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.
All useful technology improves over time, and I see no reason to believe AI will be any different.
They will pivot their vision to the next toy after this too.
Every now and then, the why is useful information that sheds needed light. Most of the time however, it's just unnecessary information taking up valuable space.
Like this example.
>this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta
I'm sorry but unless matching the manual is a company mandate, this is not necessary at all to know and is wasted space.
Knowing the "low magenta" bit is especially useless information, company mandate or not.
>nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system.
Now this is actually useful information. But it's also information GPT can intuit if the code that defines these separate categories is part of the context.
Even if it's not and you need to add it yourself (assuming you are even aware of it yourself; not every human writing documentation is aware of every moving part), then you've still saved a lot of valuable time by passing it through GPT-4 first and then adding anything else.
I’d say that greatly depends on your code. I’ve had GPT write JSDoc where it explains exactly why a set of functions is calculating the German green energy tariffs the way they do. Some of what it wrote went into great detail about how the tariff is not applied if your plant goes over a specific level of production, and why we try to prevent that.
I get your fears, but I don’t appreciate your assumptions into something you clearly both don’t know anything about (our code/documentation) and something you apparently haven’t had much luck with compared to us (LLM documentation).
You’re not completely wrong, of course. If you write code with bad variable names and functions that do more than they need to, then GPT tends to hallucinate the meaning. But it’s not like we just blindly let it auto-write our documentation without reading it.