I was always a bit sketched out by the macroeconomics classes I took in college.
The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.
Maybe there is added negativity because this is a technology that clearly poses a potential threat to jobs on a personal level (e.g. lift operators were very negative towards automatic lifts).
Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.
I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.
I would be curious to know how many HNers were previously burned by crypto. Fool me once, etc.
Related, and only partly in parody: most technical analysis is no better than chicken-bone divination.
It would be interesting to see a broader analysis across subjects, to check whether that shared movement wasn't really about AI and crypto but largely just a fluctuation in general tone across HN, or whether, relative to overall HN sentiment, crypto and AI sentiment were genuinely correlated prior to the recent divergence.
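One way to sketch that check (a toy example with invented month labels and scores, not the article's actual pipeline): subtract each month's overall HN baseline from the topic's average, so a site-wide mood shift cancels out and only topic-specific movement remains.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-comment records: (month, topic, sentiment in [-1, 1]).
comments = [
    ("2022-01", "ai", 0.4), ("2022-01", "crypto", 0.3), ("2022-01", "other", 0.5),
    ("2023-01", "ai", -0.1), ("2023-01", "crypto", -0.4), ("2023-01", "other", 0.1),
]

# Overall baseline per month, across all topics.
by_month = defaultdict(list)
for month, _topic, score in comments:
    by_month[month].append(score)
baseline = {m: mean(scores) for m, scores in by_month.items()}

def relative_sentiment(topic):
    """Topic sentiment minus the site-wide baseline for each month."""
    diffs = defaultdict(list)
    for month, t, score in comments:
        if t == topic:
            diffs[month].append(score - baseline[month])
    return {m: round(mean(v), 3) for m, v in diffs.items()}

print(relative_sentiment("crypto"))  # {'2022-01': -0.1, '2023-01': -0.267}
```

In this made-up data, crypto is below the baseline in both months, but the gap widens over time: that widening, not the raw average, would be the evidence of a topic-specific divergence.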
Hacker News comment sentiment is not a reliable measure of what the average Hacker News developer thinks.
For one, only people who are very invested in something will post about it.
For two, many comments are probably not from developers and instead from fake accounts.
It does not seem surprising to me that both of these factors would be in favor of a more positive sentiment for crypto. People that like it seem to really like it and talk about it a lot, and there is a large financial incentive for numerous actors to create fake accounts and comments.
The barrier to entry for AI/ML is far lower, and the tools are vastly more user-friendly, compared to crypto. The instant value return or gratification from ML products (GPTs and the rest) is far more mainstream-friendly.
Another view is the "loss" factor. Nobody, thus far, has had their funds stolen or lost using ML products. I understand that content creators and those who unwillingly contributed knowledge to learning systems did get circumvented, but I'm talking about users/customers. Compare that to the negative stigma of crypto frauds and the stereotypical association with illegal transactions.
Apples vs. rotten oranges in my opinion!
This is solid economics iff you assume that crypto has a utility for which there is no substitute that doesn't share the same supply-constraint feature. And even then it's not solid economics for a current investment unless you also assume that this utility is the entire basis for its current valuation. Because even if it has a nonsubstitutable utility, if that's not the basis of its current value, then the "solid economics" is only that there is some price level at which supply (of substitutes) will no longer erode value, but there is no guarantee of what that level is.
It's interesting to see how AI and crypto relate to each other, but for the sentiment analysis we would need to check whether HN got more negative _overall_, or whether the trend is actually specific to the two topics you chose.
Most people will agree that LLMs are pretty neat, but now instead of every startup being "like Uber but for ..." they are "like ChatGPT but for ...".
Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds essentially zero value. HN is fairly negative on that sort of thing I think (rightly so IMO).
It’s also interesting to see sentiment trending downward in time for both topics, even as the real-world benefits of AI become more obvious. My gut feeling is that this shows some of the contrarian bias on HN: Comments here are more optimistic about things that aren’t yet mainstream, but lean negative as soon as something becomes too popular or mainstream.
Interesting article. Thanks for including the details about fine tuning your own model.
What happens if you divide it not by comments, but by commenters? How much is sentiment being shaped by a vocal minority who is always saying the same thing, and how much does it seem to be a broad-based sentiment among the overall audience that occasionally responds?
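A toy illustration of why that distinction matters (invented usernames and scores, not real data): per-comment averaging lets one prolific commenter dominate, while per-commenter averaging weights each account once.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical comments: (commenter, sentiment score in [-1, 1]).
# One prolific detractor posts five times; four others post once each.
comments = [
    ("grumpy", -0.8), ("grumpy", -0.9), ("grumpy", -0.7),
    ("grumpy", -0.8), ("grumpy", -0.8),
    ("alice", 0.2), ("bob", 0.3), ("carol", 0.1), ("dave", 0.4),
]

# Naive per-comment average: dominated by the single vocal account.
per_comment = mean(score for _, score in comments)

# Per-commenter average: collapse each account to its own mean first.
by_user = defaultdict(list)
for user, score in comments:
    by_user[user].append(score)
per_commenter = mean(mean(scores) for scores in by_user.values())

# per-comment mean is negative, per-commenter mean is slightly positive
print(round(per_comment, 3), round(per_commenter, 3))
```

Same nine comments, opposite sign: if the article's per-comment trend flips under per-commenter weighting, the "sentiment shift" is really a vocal minority.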
That's what happened with previous AI hype cycles: the term AI was abandoned and the techniques and disciplines were "rebranded".
Probably will happen again. When something works and we start to understand how and when it works (and especially when it doesn't) it stops being "AI" and becomes something more boring.
To that end, I'm very curious: how does AI compare against other major tech advancements that are relevant in the HN community? The AI vs crypto comparison is the one I see the most, or ChatGPT/LLM vs iPhone, but surely there are some other less splashy or controversial comparisons.
What about something like React/Angular/SPAs vs AI? Less exciting, I know. I'm just really curious about how AI stacks up against something other than the obvious ("obvious," because of what I've been reading - again, biases) comparisons.
This comment feels like it's 2013 and there hasn't been a decade of people creating thousands of other tokens and forks, or of realizing that high volatility in liquidity and exchange rates is more of a problem than the levels of currency inflation we commonly see (the price increases of the last couple of years account for most of the inflation we've seen, and that practice would be unaffected).
It especially misses the understanding that deflation is much worse for anyone who isn’t already rich. The model that anyone who bought a decade ago deserves to be fabulously rich is … unlikely to be popular with the rest of the world.
edit: Looks like this is the notebook used in the article: https://github.com/corbt/hn-analysis/blob/main/analysis.ipyn...
The classification seems pretty fraught; one cell shows a sample of articles classified as crypto, which appears to include a bunch of cryptography and other unrelated articles.
*EDIT*
Added an addendum to the post with the sentiment analysis graph including both Rust and remote work.
Interestingly, there is in fact a noticeable downward slope in average sentiment over time for those topics as well, although they both remain far more popular than either AI or crypto.
Crypto is an umbrella term for a number of solutions, including blockchains (roughly 1,000+ as of right now) and cryptocurrencies (roughly 22,000+). While a given blockchain may be limited in terms of how much can be 'mined' or grow, you or I could very easily create a new cryptocurrency or even a new blockchain. Assuming we got traction with it, there would now be N+1 more out there.
Gold is not something we can so easily create. It also has intrinsic value through practical applications.
AI has plenty of use cases. ChatGPT alone is helping me every day in many different ways. I don't think it's remotely comparable.
Crypto's sole usefulness remains in providing money transfers/liquidity in parts of the world where the local systems are failing or off-limits to the users.
Everyone can appreciate a photo of an astronaut on a horse.
Few can grasp the concept and significance of a store of value. Let alone a DAO.
It's the same reason why Harry Potter is more popular than Einstein's "On the Electrodynamics of Moving Bodies", in which he introduced special relativity.
When others talk about something we don't understand, we tend to get angry and dismissive. Like boys in elementary school who think girls are stupid. Because they can't figure out why they act the way they do. So on top of the lower popularity, we also get the hate towards crypto.
These are genuine questions, not critique on your statement.
https://github.com/corbt/hn-analysis/blob/main/analysis.ipyn...
The deep learning wave began before 2010 when this analysis starts. When I was looking for a job in 2009, there was already a big deep learning hype wave, and my new employer sent me around to look for industrial funding.
Copilot to be another.
Midjourney to be another - or at least diffusion based image editing tools which can be brought into photo and video editing workflows. The killer app here is probably integration of diffusion models into apps like Photoshop (and eventually video).
Some real virtual assistant applications seem right around the corner (i.e. a real life J.A.R.V.I.S seems like an inevitability within the year rather than a pipe dream, and to me would be a killer app)
And then lots of other killer apps are pretty obvious to imagine with development (e.g. customer service applications like IT helpdesks, computer game dialogue where you can really influence interactions...)
I'm not worried about this on a personal level, but I'm very worried about the wider risk of too many people being put out of work too quickly. That's my biggest concern with these tools.
The way I see it, we are taught from a young age to fear finance and money. We're told that we need to hand it to someone else to store safely (banks). We're told that we need someone else to invest it for us for our future (401k). We're told that our government is responsible for protecting us (military and taxes).
As a result, we constantly hand off the responsibility to others and we don't make the effort to learn and understand it ourselves. We get angry and dismissive when we don't want to talk about something we are fearful of.
Of course, the people who take the different approach, of diving in to understand it, are the ones who are benefiting the most from it. Those are the wealthy people on Wall Street. The bankers. The CEOs. The people who keep pushing the fear.
AI isn't about money or finance, it is about knowledge.
Could you check the overall HN sentiment over time, to see if we got more pessimistic?
But my not-so-informed opinion is that text as an interface is only a small feature of bigger useful products, not the main focus. Instead of learning SQL, you can ask a regular question. It feels like inventing the mouse to use with computers.
Molly White’s comment at the end is superb.
1. Initial introduction or release
2. Major hype and influx of greed money. <- AI is here now
3. Failure to live up to the hype, resulting in the tech becoming a punchline and gobs of money lost
4. Renaissance of the tech as its true potential is eventually realized, which doesn't match the original hype but ends up very useful
5. Iteration and improvement with no clear "done" or "achieved" milestone, it just becomes part of society
The bombardment of charlatans taking advantage of the term, coupled with commercials everywhere, suggests we will soon hit stage 3 for AI. The Super Bowl commercials are usually the tipping point. Crypto is at stage 3 now.
Not all technologies make it to steps 4-5.
Hell, I remember when social media followed the same path. And ecommerce before it. Or the web in general before that. And on and on it goes.
I'm a crypto sceptic but I wasn't always like that. There was a time many years ago when Ethereum was brand new and I was an eager early adopter. I tried creating wallets, tried running a node to see what it does, put in some money through an exchange, and then... Nothing. There was nothing to do after jumping through all those hoops. In fact, turns out the only thing to do with the crypto wallet was to wait for its value to maybe increase over time. (Hence the "number go up" meme.) And for that to be realized, I would need to sell the coins to a new sucker to get real money out again — suspiciously pyramid-like.
And it's still like that today. There's no reason for me to ever open those old wallets again (and surely I don't even have the passwords anymore because self-custody is such a terrible idea UX-wise). And there's no reason to try any of the new stuff because it still obviously does nothing I'd need.
The early Internet wasn't like that. There was plenty to see and try, and interesting people to interact with. Once you tried it, you probably wanted to go back.
Today's early AI is like the early Internet in all the ways that crypto isn't and never will be. There's plenty you can do with ChatGPT and other models, right off the bat. You can install interesting stuff locally or run it on somebody else's server. You don't need to run the crypto-style terrible UX gauntlet and buy coins from a shady operator. AI is already so much easier and more useful and more powerful than crypto-web3-anything, it's competing in a completely different race.
OpenSea has lost 99% of their transaction volume in the past year, and even more of their revenue. I'd be shocked if the same happens to OpenAI. One was a fad, the other isn't.
The bar for AI should be whether it can learn and comprehend inputs without simply scraping Reddit, Twitter, and other social platforms and then parsing the responses. AI as a term needs an updated definition before things become less hype and more meaningful. What is being incorrectly marketed as AI these days is similar to how crypto overpromised and underdelivered, while also funneling money out of everyone's pockets, creating a frenzy of bad investment, and creating unrealistic fears and genuinely stupid problems in society based on low-brow overconfidence in tech.
In terms of actually automating any form of "thinking" tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation except for some very limited parts of our code base which can't legally be shared with it. It increasingly also replaces our code-generation tools for most "repetitive" work, and it auto-generates a lot of our data models based on various forms of input. But the actual programming? It's so horrible at it that it's mostly used as a joke. Well, except that it's also taken seriously by people who aren't CS educated. The thing is, though, we've already had to replace some of the "wonderful" automation that's been cooked up by product owners, BI engineers and so on. Things which work, until they need to scale.
This is obviously very anecdotal, but I'm very underwhelmed and very impressed by AI at the same time. On one hand it's frighteningly good at writing documentation... seriously, it wrote some truly amazing documentation based on nothing but a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry), and what GPT wrote from just that was better than what I could have written. But tasked with writing some fairly basic computation, it fails horribly. And I mean, where are my self-driving cars?
So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.
I suspect there are two things being measured at once here: people's sentiments changing, and the content being discussed changing.
Once a topic becomes "trendy", the average article quality seems to drop. You go from research articles and niche blogs to the general press and businesses trying to cash in on the trend.
A ChatGPT user could just be someone who popped onto the website and submitted the chat form.
The only demonstration crypto has is 'look, more money today' and that only works sometimes.
AI's demonstration is followed by explanation of how it will kill us all, or why it won't work in context X, or take our job, etc, but you can kind of just ignore that and use it.
Most "AI startups" are close to scams, i.e. they are often just interfaces to proprietary APIs that monetize the impressiveness of LLMs.
Is OCR "a scam just like crypto"? How about voice recognition, used daily all over the world? What about spam filters? Clearly useless, overhyped technology, right?
Even if you wanted to limit the term AI to large language models, which by the way would make your use of the term incredibly wrong, it STILL has many common and useful applications. You can use LLMs to classify text (sentiment, toxicity, etc.), they can be paired with voice models to improve speech recognition or translation services, and so on.
I think it's better to ask what you think the major similarity is between AI and crypto, because it's hard to find any other than a subset of the crypto fanatics now jumping on LLMs as the solution to every problem. But this group isn't actually part of the AI community.
Meanwhile millions of people pay for tools that are now integrating AI to enhance their value add.
Free ChatGPT is just a loss leader for the API and paid accounts, and a way to better train the model.
It's a constant conversation now, with ChatGPT, over hard problems. The AI doesn't always get it right, but it's a great partner, with so many great suggestions.
I cannot imagine going back.
Virtually nothing else has this dynamic, even VC funded startups during ZIRP.
It feels like a huge dependency with a bunch of money involved.
I cannot _not_ see it converging to a sentiment comparable to "you either use AWS or have no idea what cloud/network/cluster means".
We use these things like it’s actually "something". It’s not. We don’t build things with it. We configure other people’s software.
It’s born to be promoted as the next big enterprise stuff. You either know how to configure it or are not enterprise-worthy.
And that farts. Being dependent on someone else’s stuff has never turned out good.
Well, I mean. You can also not give a duck and squeeze out all the money. Work a job, abandon it and jump on the next train.
Feels useless, doesn’t it?
Marketing terms vary, before it was "big data", now it's "AI".
The problem with AI is that it is being shoved into places without any thought of what the benefit actually is or whether it actually works.
As someone else said, I feel like much of what is coming out of this AI boom is basically a scam.
For example, I was looking at task management apps and was intrigued by some "AI"-powered ones. All they really did was let you create a task and ask ChatGPT to generate subtasks. The subtasks it generated were basically useless.
No "AI" to help manage my schedule or any other benefit. We are automating the easiest parts of the task with unhelpful content. This is because ChatGPT is limited: it doesn't have API hooks into your application, so it can't really provide any real benefit.
Some of the uses of AI are real and beneficial (like Amazon using AI to summarize reviews). But the vast majority are just shoving AI somewhere it doesn't need to be (or at least ChatGPT doesn't need to be, since it's just an LLM at the end of the day).
This bubble is going to burst once people finally realize that ChatGPT is not the "AI" science fiction has sold us. It is being used as a generally smart AI when it's honestly dumb as nails except for certain use cases.
You need exchanges to do anything useful in crypto. And as we've seen most recently in the FTX case, all the exchanges are wretched hives of scum and villainy.
Personally I don't partake - but they get a value in it, I guess. Who am I to judge?
I feel like it is overrated and overhyped
It sucks, because it's an impressive field, but after over a decade of hype on self-driving cars, the naive idea of experts being replaced by a chatbot is annoying.
Don't get me wrong, I'm not saying those things don't work, just not as well as people try to convince us.
"VCs have entered the chat"
The trail of broken crypto startups serve as counter evidence. There were plenty of merchants initially dragged in by the appeal of cutting out at least Visa/Mastercard's cut, and in many cases governments too.
And then the consumer adoption wasn't there, and the prices for merchants were also too high, so many ripped them out again.
Maybe I am wrong, but it seemed like there were a lot of people talking about it and in it (same with NFT) and then it plummeted.
That’s why I kinda felt like AI is the same. The bubble is going to pop as we hit limitations on what this can actually do.
But I will also admit that some of this could be living within a tech bubble.
AI weaponisation (socially, politically, militarily, financially, etc..) will be a pretty big deal going forward, and will potentially see shady shit happen at a scale and effectiveness that will make crypto's little niche-scams look like child's play. The same old story we've seen with every major new system we introduce throughout human history.
Crypto is merely a usage token. It's like comparing "the world's banking system" with "the world".
OpenAI cannot make every product and market them to every segment. If you wrap their API and provide a novel UX with precise positioning, there's value there.
OpenAI can copy the underlying collection of features tomorrow morning, but if the positioning is precise enough, you will easily outcompete them.
For an example developers can understand, see managed SaaS: a collection of companies raking in billions in revenue by simply wrapping AWS/GCP/Azure, in ways the underlying platforms even end up copying anyway, and succeeding because their developer experience is better, their feature set is better focused, or they're just plain nicer to work with.
If you want to ask "are LLMs the next metaverse" or something, implying that LLMs are over-promising on their utility and the hype is all driven by the companies controlling the tech... even that is a stretch, but it makes more sense.
Anyone selling you AGI, or "make money on youtube by typing a sentence into this prompt" is probably a crypto scammer who found a new group of people to scam.
No, I don't mean it in a derogatory way: crypto leaves a lot of loose ends to be tied up by everyday humans who just want to be left alone - oh, and if anything goes wrong once, your fortune could be lost and there's no one to complain to.
AI, on the other hand, is already making its way into browsers like Microsoft's Edge, where I can ask it to generate all kinds of ideas, images, summaries, etc. via the chat format everyone is already friendly with. Likewise, the first major application of GPT-3 and GPT-4 to take off was ChatGPT, which brought AI chat to noobs; you don't need to be a hacker bro to use it.
In contrast, the first time I downloaded Metamask and tried to buy USDC, I quickly found out that there are different versions (correct me if I'm wrong) of this single cryptocurrency hosted on the Polygon, Ethereum, Avalanche, etc. blockchains.
What's that even supposed to mean to a beginner who wants to send money to a third-world country in minutes? And, remember: one wrong step and you could possibly lose everything.
From that perspective, AI is not the new crypto. If you ignore the noise and focus on the actual work, you will find a lot of good things about this field, and I might say even breakthrough advances, that help us reconsider what intelligent life really means.
Disclaimer: we are one of the first "chat to your docs" companies that came out as soon as ChatGPT was released when all we had at our disposal was text-davinci-003 and basic vector stores. Now, we do mostly other things.
Edit: fixed typos
Similarly with IRC. ChatGPT responds, IRC is hit or miss.
It's an imperfect discussion, and can always lead to weird rabbit holes and dead ends, but it sure feels more efficient than the alternatives.
Not really. There are plenty of decentralized exchanges which are proven, reliable, auditable, generally used by many without issues.
see: https://uniswap.org https://curve.fi/ https://1inch.io
It's the centralized exchanges, which are more akin to traditional financial institutions (their records are not on a publicly visible blockchain but in private databases or... apparently spreadsheets), that fall victim to the same issues we have seen in the traditional financial world.
Even if OpenAI doesn't, if it's a thin wrapper with no deep proprietary edge, someone else can; your offering is ripe for commoditization, even if that doesn't come from OpenAI themselves.
Likewise when breathless reporters keep asking non-AI companies what their AI strategy is, you know you're firmly in step 2. Remember when Walmart was expected to have a "metaverse strategy"?
Also worth noting that many (most?) technologies do not have a step 4 or 5. They're just permanently/indefinitely dead after the hype train goes off the rails (see: personal jetpacks)
If that's not possible, it's useless for the proposed use case: "send and receive money globally without any intermediary".
I propose we fork the site and send out an invite for HNGoodVibesOnly for anyone whose post history is in the top half of the median sentiment distribution.
I ran full nodes, wrote smart contracts, even had 200 GPUs mining ethereum at one point. I still have a bunch of wallets, exchange accounts, ENS names, you name it. Interesting, kind of fun, but then a big "Ok, now what?". Turns out not much other than writing some crypto thing to do another crypto thing that does another crypto thing.
Since getting generally disgusted with the sleaze I saw from the inside I haven't touched any of it in years.
How much difference has this made in my life? Zero (other than not being grossed out on a regular basis). How many times have I had to dust off a wallet or write a smart contract to do something I couldn't do better, faster, and cheaper elsewhere? Zero. How many times have I wanted to buy something and needed crypto? Zero. My experience is an anecdote for the entire space - a lot of time, money, and energy spent with no tangible value and nothing to show for it.
Ethereum is over eight years old, bitcoin nearly 15. ChatGPT has been out for less than a year and I use it on a daily basis to save time and come up with fairly novel things I'm not sure I could on my own. Of course the roots of ChatGPT go back quite far but then again so do merkle trees.
I wish I had saved the time, money, and grey hairs spent on crypto for "AI"; I have way more fun with Llama, Whisper, and dozens of other models with immediate and real use cases on a daily basis.
Video game dialogue remains to be seen, but I already find ChatGPT based text adventures super fun! So I suspect there will be demand for both handcrafted static stories and AI dynamically-generated stories (ie they can be different things, one doesn’t have to replace the other, just like email didn’t immediately replace the post service).
I don't know if you enjoy Copilot, but for me it definitely supercharges my productivity.
When Neha Narula said (roughly) crypto is speedrunning the entire history of traditional finance and repeating the same mistakes she was pointing out the arrogance and (possibly willful) ignorance of many crypto promoters. Take that and add on justifiable anger at proof-of-work schemes during a developing climate crisis and much of the dismissive attitude and hate is well earned.
They also seem to work very well for summarizing large amounts of data, for automating the generation of basic legal texts, for extracting key data points from paperwork (invoices, mortgage applications, bank statements, etc).
It's useful to separate whether there is a lot of dubious hype (true of any new foundational technology) from whether useful things are being done. Both can be true at the same time. Lots of fraud and stupidity, but also lots of valuable work happening. With crypto, there was none of the latter, other than criminal applications.
The internet also attracted lots of hype and poor ROI consulting projects...but here we are.
If I want to send someone money, I can send anyone in the world BTC securely and instantly without any intermediary.
If the other party wants to convert to fiat, they can do so through an exchange, of which there are many.
Most people who own crypto exchanged dollars for it.
Engineers are expensive, so actually the cost/benefit analysis is a little more complex and different problems will have different solutions.
You can run small quantized models on Apple silicon if you have it.
I've been using a 70B local model for things like this and it works well.
That's why "technical novelty" ranks ridiculously low on the scale of things that make the most successful software businesses these days: if anything, technical novelty is more of an albatross for most software businesses than a saving grace.
Building traction in software is more about the 100s of other concerns that apply to every business: brand recognition, communicating your value proposition effectively, being able to sell to the target customer effectively, having the correct UX, the right proofs, the list goes on.
Copying that is just as hard as ever, if not harder.
Then came ChatGPT, and the tens of billions of dollars poured into the next hype, often by the same hustlers who hyped crypto (though they were certainly not the ones who lost the billions or stood in front of a court). The same environmental damage from electricity wasted on pointless computation. The same hype without any actual usefulness. In both cases this sentiment is met with a huff, but it's factual.
Then came the AI-written mushroom-hunting guides.
Then came AI Modi singing trending songs, garnering votes for him: our worst fears of democracy ending in favor of the candidate with the most processing power, aka the most money.
In this pandemic (for COVID hasn't ended yet), 300,000 people have already died because of hand-written propaganda (https://www.npr.org/sections/health-shots/2022/05/13/1098071...). When the next pandemic comes, the AI-produced, oh-so-plausible bullshit will flood everything and millions will die.
Aza Raskin compared it to a zero-day vulnerability for the operating system of humanity, and he is oh so right.
Who is laughing now?
I'm not. AI tools will have huge benefits in some industries. But the main use case that people will experience (at least, the use case they recognize) on a daily basis will be scams and frustration. That's why people are negative. Not because the technology is bad or does not have uses, but because the average experience that people will consciously have will be negative.
It's already impossible to know what's real and what's not. Customer service is already majority bots. You'll never be able to talk to a human again if you have an issue with something. Blackmail and ransomware scams are going to get dialed up to 11. Everything is going to be automated in the most annoying ways possible. People are going to lose their jobs. Most of the jobs that will be lost are "meaningless," but our society revolves around meaningless jobs because they provide order, income and—as a consequence—dignity. All of that is going out the window.
Crypto had a purpose that no one actually cared about. No one cared until people started to see the scam potential and then it took off. AI is going to do the same thing.
AI tools will revolutionize medicine, engineering, manufacturing, and logistics. There will be huge benefits for all of humanity. But you won't think about this day-to-day. You'll just be bombarded by more (and better) scams more quickly.
I am amazed at what AI tools can do already. Had these tools existed 10 or 15 years ago my entire life would be different. Better? I have no idea. Maybe, maybe not. But even if it would have been better I know enough to know that I would not recognize that.
Or you could simply use a traditional wire transfer and currency would be converted automatically. USA and Italy exchange millions of dollars every day - it's nothing special.
Not a single average human I know has even tried crypto...
Governments were never going to allow some unfettered parallel monetary system. They invested a lot of political capital in monitoring and controlling monetary flows (with reasons/excuses like fraud, terrorist financing, and money laundering). No way they were going to relinquish that power.
They might use the technology itself as CBDC, but that is about it.
It was sold as the new web 4.0 or whatever. NFTs were the future. Smart contracts... And all it seemed to deliver was a repeat of the mistakes and scams the monetary system went through in the 19th century.
And whenever you played around with something, the next and better version was already around the corner!
I've never seen anything like it :)
Lumping everything 'AI' together doesn't make much sense to me.
Gold, shares etc.
I believe the criticism is correct, as the current driver of crypto is either an "I put that much money in, I'm not selling until it goes back up" mindset or gambling losses.
After all the miners want to get paid.
But hey, Binance and others are struggling; let's see if there's a collapse soon.
Essentially, crypto led to huge investment in GPUs and GPU technology, and once crypto collapsed it left a huge surplus of compute capacity, which was then put to work on AI.
I doubt that investment in GPU technology would have been driven by the idea of AI alone. Something manic like crypto had to drive it initially. Without crypto, I imagine the AI revolution we're seeing today would not have taken place.
Come on, OP! Spill the beans!
> But the actual programming? It’s so horrible at it that it’s mostly used as a joke.
Please, for the sake of your future selves, hire someone who can write good documentation. (Or, better still but much harder, develop that skill yourself!) GPT documentation is the new auto-generated Javadoc comments: it looks right to someone who doesn't get what documentation is for, and it might even be a useful summary to consult (if it's kept up-to-date), but it's far less useful than the genuine article.
If GPT's better than you at writing documentation (not just faster), and you don't have some kind of language-processing disability, what are you even doing? Half of what goes into documentation is stuff that isn't obvious from the code! Even if you find writing hard, at least write bullet points or something; then, if you must, tack those on top of that (clearly marked) GPT-produced summary of the code.
Let's high-ball US residential electricity prices at about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries, and $25 gets us 10_000.
Let's low-ball average US developer salaries at a cool $100_000/yr. 50 40-hour weeks in a year makes 2_000 working hours, which makes $50 per hour. So with our very generous margins all working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, to cost in electricity what he makes in salary.
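The back-of-envelope math above can be checked in a few lines (using the comment's round numbers; the 100-queries-per-25¢ figure is the commenter's assumption, not real pricing):

```python
# Back-of-envelope check of the figures above. All inputs are the
# comment's generous round numbers, not real pricing data.
PRICE_PER_KWH = 0.25              # high-balled US residential rate, $/kWh
QUERIES_PER_25_CENTS = 100        # assumed GPT-4 queries per 25¢ of electricity
COST_PER_QUERY = PRICE_PER_KWH / QUERIES_PER_25_CENTS   # $0.0025

SALARY = 100_000                  # low-balled US developer salary, $/yr
WORKING_HOURS = 50 * 40           # 50 weeks of 40 hours = 2_000 hours
HOURLY = SALARY / WORKING_HOURS   # $50/hour

queries_per_hour = HOURLY / COST_PER_QUERY     # 20_000
queries_per_second = queries_per_hour / 3600   # ~5.6

print(f"{queries_per_hour:,.0f} queries/hour ({queries_per_second:.1f}/second)")
# prints: 20,000 queries/hour (5.6/second)
```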
I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load, until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 only-three-quarters-full runs.
Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
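For what it's worth, the tip checks out in Python 3.6+, and format specs can mirror the separators on output:

```python
# Underscores are legal separators in numeric literals (Python 3.6+).
million = 1_000_000
print(million)           # separators are ignored by the parser: 1000000
print(f"{million:_}")    # format with underscores: 1_000_000
print(f"{million:,}")    # or with commas: 1,000,000
```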
Crypto and AI both attract get rich quick bullshitters but I think AI right now is actually a crazy unexpected sci fi tech while crypto wasn't good for anything except fraud, gambling and the black market.
No, it wasn't. Crypto made GPUs scarce and expensive, and prices haven't come down yet. Crypto set AI back in many ways, including the infestation of grifters who have moved from crypto to AI.
AFAIK the actual features of the GPU that are used to accelerate AI (fp16, int8 and mixed-precision tensor processing) had nothing to do with the features used by miners. And no miners were buying/financing-the-creation-of the class of server GPUs used by folks like OpenAI because they didn’t make financial sense for mining (we’re talking 10x the price of a high-end consumer GPU and nowhere near that increase in hashrate).
OCR is a technology.
Cryptocoins are a community. The _SAME_ people who pushed crypto have now moved into the AI sphere and are hawking AI.
Its like all the snake oil salesmen of the 1800s suddenly discovered that cars are selling and have become car salesmen. That doesn't mean that cars are a scam, it means that many, many people trying to sell cars to you are scammers.
Having our guard up against hucksters matters, especially when the great community of hucksters is obviously moving in lockstep to say the same thing (and coordinating their arguments thanks to the internet / meme culture); that lockstep makes it easy to pick out when to be on guard.
With the degree of data available I have wondered if you could determine a point at which you could suggest users see professional help. Then if you could do such a thing, would it be ethical to do so? Would it be ethical _not_ to do so?
I think a good story can be told for both sides.
- You will need to get permission from your bank to send international wire transfers (sign forms/agreements).
- Takes a long time (on the order of days).
- Expensive (~$50-$75 for an outgoing international wire, and $25-$50 to receive one).
Crypto was booming as an investment vehicle, buying it was trivial, and many people received very tangible value.
The first papers that used GPUs to train neural networks were from the end of the 2000s and the beginning of the 2010s, before the Bitcoin price hike of 2013. But years before that, Nvidia had already introduced the CUDA architecture to GPUs in 2006 [1], which were used, among others things, to speed up algorithms to analyze seismic data for oil and gas exploration [2].
So with or without the "crypto fever", I believe the same advancements in GPU technology would have followed - but maybe not the scarcity brought by the investments in crypto mining. Because of this, we may also argue the opposite, that crypto got in the way of AI development and was one of the culprits of the "GPU rich vs GPU poor" division we hear/read about nowadays.
In a very similar fashion, though, I do tend to believe that PC gaming holds far more importance to the rise of both AI and crypto...
[1] https://www.gamesindustry.biz/nvidia-unveils-cuda-the-gpu-co...
Cryptocoins are a technology. Users of a particular cryptocoin protocol form a community.
But AI isn't a scam; that's their point. The more apt comparison here may be to horse buggies instead of snake oil. We probably will inevitably move more towards generative content, and everyone's trying to find their place as livelihoods are impacted.
And of course there's the legalities of what's used to train AI. Crypto was completely decoupled and economic concepts aren't exactly copyright to begin with. So this doesn't apply much here.
The AI algos will get 100x faster through a combination of hardware and software optimizations. Then, deterministic vs AI will mean the unnoticeable difference between displaying some info to the user in 0.001s vs 0.1s. Then, AI will become the default.
It stands to reason therefore that the people who remain online to comment may have lower levels of mental hygiene by virtue of their ongoing exposure to the internet and social media, thus resulting in a gradual decrease in sentiment over time.
[0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7364393/#:~:tex...
We need to draw more distinct lines.
I remember that in the beginning people were dreaming about self-contained crypto economy where exchanges would not be needed - that didn't really work out.
Anything can be a scam when a scammer is saying it. It's not so hard to make an AI scam.
Step 1: Say that you're an AI specialist.
Step 2: Take people's money.
Step 3: Done. You now have their money. Don't even "profit", just take their money.
As long as dumbasses give their money to the latest-and-greatest crap and "technological fashion statements", this scam will continue over and over again. I mean, at least the cryptocoin community had a word-babble of blockchain technologies and hashing. AI is so new that they barely have any language for this scam, and people are still forking money over. It's almost laughable how little defense people have against this.
---------
The SBF and FTX saga is your template. Just do the same thing except with AI-like words and you'll get pretty far these days.
Whether it's obvious from the code or not is kind of irrelevant. It gets non obvious things as well.
The forms are for KYC activity, and agreements on what the limitations of liability are. The delays are to validate that the transfers are handled and secured, and ideally can't be charged back. The fees are to cover the costs of the people who do the work for that.
It's not perfect, but it's quite a bit better than the checks and balances that exist for folks who get hit by a scam and are convinced to go to a crypto kiosk and pay a scammer because they have been frightened by a threat to a loved one, or are taken in by a scammer about services being cut off, or desperately paying off a ransomware demand in the hopes that your business or personal records won't be leaked or published.
Each also has "true believers" who can ignore the facts before them and incessantly hype an imaginary future.
Within the eurozone (the 20 countries using the euro), there’s SEPA instant credit which clears in less than ten seconds, is available 24/7, and costs practically nothing (a few cents). It’s a fine example of how thoughtful regulation can enable a system that is better than any crypto solution.
That is true, however I'd say that, for example, the Venezuelan and Turkish people who managed to scoop up Bitcoin (or Ethereum) didn't do too badly:
Inflation in Venezuela, 2022 actual and 2023 estimate: 210% and 51%.
Inflation in Turkey, 2022 actual and 2023 estimate: 70% and 50%.
These aren't the only countries.
I personally know a doctor from Iran who tried semi-recently to convert his savings into Bitcoin (and failed: bank didn't let him). And he basically lost all his savings (inflation and bank defaults: double whammy).
From the comfort of countries using strong currencies it's easy to dismiss Bitcoin but there are many countries where shit did hit the fan really hard.
No, it's not a panacea: for example, many African countries are experiencing ultra-high inflation but cannot use Bitcoin because fees are way too high for these people ($6 USD to move Bitcoin today; I just checked).
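Compounding the inflation figures quoted above gives a feel for the scale of the losses (treating the rates as illustrative only; the 2023 numbers are estimates):

```python
# Rough purchasing-power math from the inflation figures quoted above.
def remaining_value(*annual_inflation_rates):
    """Fraction of purchasing power left after compounding inflation."""
    value = 1.0
    for rate in annual_inflation_rates:
        value /= 1 + rate
    return value

venezuela = remaining_value(2.10, 0.51)  # 210% in 2022, then 51%
turkey = remaining_value(0.70, 0.50)     # 70% in 2022, then 50%

print(f"Venezuela: {1 - venezuela:.0%} of purchasing power lost")
print(f"Turkey:    {1 - turkey:.0%} of purchasing power lost")
```

By this rough math, bolívar savings lost roughly four-fifths of their purchasing power over the two years, and lira savings roughly three-fifths.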
> Compare that to the negative stigma of crypto frauds and stereotypical association to illegal transactions.
Seeing all the people and exchange owners busted and going to jail, seeing moves like the EU soon de-anonymizing every single wallet out there (as soon as a transaction is made), and seeing the public ledger, I don't even know if that bad reputation is going to stay for long.
Look at venezuela and turkey inflation rate these last two years (and the estimates for the coming years). Look at the SNAFU that happened in Iran and banks defaulting and now inflation kicking in.
It may be an ultra-risky bet (and there are serious opsec risks too), but when your savings are going to lose 90% of their value in two years anyway, why not take it?
Bitcoin was, after all, created as a gigantic middle finger in response to infinite money printing.
The world is big and there are average consumers in countries other than the US or the EU.
I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of AI is unacceptable.
FTX was also a $32B company until it wasn’t.
Or in other words, we already have flying cars - but the form of a flying car that's compatible with reality is called a helicopter, and piloting one comes with a fuck ton of expensive hoops to jump through.
That's the overall problem with all the cool sci-fi tech - it's cool in an action movie, when the protagonists are the only ones who get to use it. It stops being cool and becomes either useless or dangerous, once every rando gets to use it in their daily lives.
It'll be interesting when there is more distinction between the two in utility. LLMs already have a fair amount of utility in their relatively early stages and I've certainly seen meaningful adoption of diffusion model generated images to replace stock photo usage.
Are they really?
A lot of these fly-by-night operations are just glorified SaaS apps sticking a few tokens in front of your text before it goes to ChatGPT and calling the whole thing a new AI application.
There's definitely a lot of low-effort crap in the "AI" market today. There are some real gems out there for sure, but... my guard is up. Some of these businesses have no actual business model and are coasting purely on hype.
And that's probably the _better_ class of AI startup these days, in that it actually has a product and actually has a business plan (a crappy one, but one exists). There's even worse crap than this out there.
What is really hype right now is how AI is going to upset all our day to day lives and earn billions for startups.
It is probably going to be a bit more incremental than the hype right now, and most of the profits will likely go to the already established tech companies.
ChatGPT's AI assistants are already seen as a sign that a good number of AI startups that are just a thin layer around someone else's LLM will collapse due to having zero moat.
It also isn't even clear where the profits from GPT/LLMs are going to come from, other than NVIDIA selling shovels to the miners. Beyond that, it will probably be the existing tech companies, and they may be running these models at a considerable loss for a long time to come.
I don't think we'll ever even know how many NFT projects there were out there, all taking up space on the various chains, all shilling garbage artwork, all promising all manner of shit from video games to magazines to comics to television series, many of which raised huge sums of money, virtually all of which is now gone. And it's easy to point and laugh at the people who thought these things were anything but scams, but also, in a better world, we wouldn't let tons of people be scammed like this. Being vulnerable to certain kinds of hype shouldn't give other people permission to rob you blind.
Not convinced. Why did Google beat Yahoo? Why is Facebook huge while Friendster and Myspace are jokes? At some point - perhaps further down the line than most of us are used to thinking of - technical ability matters.
Yes, absolutely. The money is in selling companies an excuse to keep customers on hold until they give up and stop trying to get the refund they're legally entitled to; "voice recognition" is valuable only because it's a legally acceptable smokescreen.
https://github.com/verdverm/pypge
https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...
The reviews had awesome and encouraging comments
It's the IT effect. When IT does its job right, everyone asks why you pay them; when IT screws up, everyone asks why you pay them. Things just working is invisible, and we don't notice it's even there.
Car crash detection, automatic photo editing, heart rate sensing, etc. We use this stuff daily but there's generally little hype about the underlying tech (though some hype about specific applications).
What's in step 2 is "Generative AI", which IMO is also a misnomer for "large language models". The viability and uses of these models is far from proven out yet.
Oh yeah, imagine a transportation technology that killed people every week. No way that would be legal. Except if it's cars, for some reason they magically get a pass.
> Or in other words, we already have flying cars - but the form of a flying car that's compatible with reality is called a helicopter, and piloting one comes with a fuck ton of expensive hoops to jump through.
We could get rid of those hoops and flying cars would still have a lower death rate than the regular kind. But they can't replicate the "our oopsies are someone else's problem" field that cars have. That's the hard part.
Were volumes and convenience (e.g. liquidity, automation, etc.) comparable to crypto?
Investment is a huge market. It's hard for me to track the current volumes of crypto trading and holdings, but they can still be significant.
Nah. Cryptocoin users fluidly switch between cryptocoins. You could be into... I dunno... Lunacoin... and by next week you'd have changed all your money into Mooncoin before anyone notices. In fact, I can pump Lunacoin while I'm selling it and no one would be wise to my tricks.
OCR users don't really. It's a lot of friction.
There's a reason why cryptocoins were so good at scamming: they were fluid enough that you could be a dishonest snake. But if you write a 10,000-line codebase using tesseract-ocr, you ain't switching off of that without some serious amount of effort. (Certainly not a week's worth of effort, IMO.)
Imagine a transportation technology that killed orders of magnitude more of people every week. That's the reality if you just magically s/car/jetpack/g for everyone.
> We could get rid of those hoops and flying cars would still have a lower death rate than the regular kind.
Not really. Driving a car is trivial compared to flying a helicopter; the hoops in question are mostly about ensuring pilots are properly trained (vs. half-ass bullshit trained, "you'll learn the real thing on the road" that is getting a driver's license) and actually meet some health standards. Number and difficulty of hoops differ in various areas of aviation, but they all recognize just how much easier it is to kill yourself with an aircraft, and how much more death and destruction an aircraft can cause.
That is, a lot of the hard issues with driving are preemptive-knowledge issues. I see a ball rolling towards the road from the left. As a human I know that, one, the ball will likely roll out in front of me, and two, a kid/person may be following it. Whereas if you see a blowing trash bag, you probably aren't going to take any risky corrective action to avoid it.
The problem is that to a pure vision system, a ball and a blowing trash bag are just objects with the same priority. It has no categorization system for the relative meaning and danger behind each one.
But things start getting weird when you couple LLMs with vision systems. Right now it's much too slow, but in multi-modal systems objects get depth of meaning. That trash bag can be identified and assigned a low risk, while the ball can be identified and assigned a high risk, along with a bunch of the other generalization that humans typically do.
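A toy sketch of that coupling, with made-up labels and risk scores (not any real perception stack):

```python
# Toy illustration of the idea above: a vision model gives us object
# labels, and a separate layer maps labels to risk, so a ball and a
# trash bag are no longer interchangeable "obstacles". All labels and
# scores here are invented for the sketch.
RISK_BY_LABEL = {
    "ball": 0.9,        # high: a child may follow it into the road
    "trash_bag": 0.1,   # low: avoidable, implies no hazard behind it
    "pedestrian": 1.0,
    "unknown": 0.5,     # unrecognized objects get a cautious default
}

def assess(detections):
    """Return detected labels sorted by assigned risk, highest first."""
    scored = [(RISK_BY_LABEL.get(label, RISK_BY_LABEL["unknown"]), label)
              for label in detections]
    return sorted(scored, reverse=True)

print(assess(["trash_bag", "ball"]))  # the ball outranks the trash bag
```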
Crypto hasn't made me more productive in any way. To me, it's had the same utility as an online casino.
I also agree that there's something there there with LLMs... but also that it's hopelessly overhyped right now.
Smartphones are a good example of this - nowadays we tend to think about iPhone or BlackBerry as the start of the smartphone "era" - but that wasn't the actual start of smartphones.
The first smartphones were called PDAs, and there was a hype cycle around that! Lots of companies wanted in! But adoption was abysmal and the whole thing fizzled out. BlackBerry and iPhone were the steps 4-5 of that cycle.
The state of LLMs right now is the Palm Pilot. Whiz-bang. Cool. Tons of press. Lots of imagined applications and attempts at mainstream adoption - but honestly nowhere near good enough to achieve mainstream breakout. Died a slow death without fulfilling its most lofty promises, and the space was relegated to a niche status until the actual entrants arrived to actually achieve mainstream success.
I think LLMs will have a step 4-5 with actual mainstream success. I just don't think the current players are it, and also that the vast majority of the current players have no pants on and are just pure grift.
> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.
For high-assurance apps, I agree there will always be a need, sure. Of course, these high-assurance apps will be supervised by AI that can inspect it and raise alarm bells if anything unexpected happens.
For consumer apps though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they are trying to accomplish when they perform certain actions in the app (much like a friendly tech-savvy teacher sitting down with you to help you accomplish something in the app).
As always, the tech isn't the problem - the way business applies it is. Customer service automation isn't done to help you better - it's done to make it cheaper to make you go away without making too big of a fuss. Companies building and employing customer service systems will find ways to make even GPT-4 incapable of providing anything the customer would find remotely useful.
We now have some experience of how technology created a lot more problems when we rushed into solutions without thinking of the consequences. It's experience-based technoscepticism.
If social media can polarize countries, imagine what a readily available reasoning engine can do.
I think being able to spot where these diverge is really important to understanding the world and where we should spend our limited time on it.
It’s not a money printer when everybody else also thinks it’s going down.
I don’t think AI as a general computing platform or as a replacement for coders is particularly close but there are lots of game changing incremental things LLMs do extremely well today. Something I could never find with crypto.
I do think we’ll find a lot of these aren’t defensible companies (like Lensa) but sometimes you get instagram even when the value prop seems slim.
Honestly, I don't actually care what you do. The more documentation is poisoned by GPT-4 output, the less useful future models built by the “big data” approach will be, but the easier it'll be to spot and disregard their output as useless. If this latest “automate your documentation” fad paves the way for a teaching moment or three, it'll have served some useful purpose.
It's also really quite good at transforming language A into language B if you're learning a new programming language.
Instead, AI should be promoted as what it is: a job and growth creator, and it should be built honouring people's property. It can be done and should be done that way.
So the question becomes: what information are you interested in proving to someone on the internet? Say you want to ask an Israeli on Twitter about some bomb stuff and you want to prove you are a reporter. Say you want to prove, in a comment on HN, that a repository on GitHub is yours.
However, one problem arises. The digital identity or identities have to be stored somewhere. What happens if there is an outage? OpenAI had a multi-hour outage just today, and an ISP in Australia had a 12-hour outage yesterday. In that case, people cannot digitally prove their identity or identities (hundreds of them if they like), even in real life.
The Greek government requires internet access for the digital identity to be proven[1]. I was just researching that right now.
Last, Estonia tries to secure the digital identities of their citizens on the blockchain[2]. Why digital identities need to be secured on a blockchain? Just a server or two, in a government building are not enough? How could a globally competitive network of miners, each one holding the digital information independent of any other, be more secure than the one or two servers solution?
[1] https://wallet.gov.gr/ [2] https://www.pwc.com/gx/en/services/legal/tech/assets/estonia...
It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.
AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.
All useful technology improves over time, and I see no reason to believe AI will be any different.
They will pivot their vision to the next toy after this too.
"Interestingly, there is in fact a noticeable downward slope in average sentiment over time for those topics as well"
I would speculate total sentiment on HN is trending down. It's the disillusionment with tech.
Of course, this is not quite true because many people were harmed indirectly when criminal theft of their money was facilitated by the low barrier to entry that cryptocurrency presents to the would-be money launderer.
Its secondary value is buying and selling legal goods and services on the internet without having to deal with credit card companies, but only for techbros.
The article was a pretty good demonstration of this, I thought. That kind of sentiment analysis would be very difficult using a rule based model.
Every now and then, the why is useful information that sheds needed light. Most of the time however, it's just unnecessary information taking up valuable space.
Like this example.
>this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta
I'm sorry but unless matching the manual is a company mandate, this is not necessary at all to know and is wasted space.
Knowing the "low magenta" bit is especially useless information, company mandate or not.
>nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system.
Now this is actually useful information. But it's also information GPT can intuit if the code that defines these separate categories is part of the context.
Even if it's not and you need to add it yourself (assuming you're even aware of it yourself; not every human writing documentation is aware of every moving part), you've still saved a lot of valuable time by passing it through GPT-4 first and then adding anything else.
Nah. Far more people use crypto for speculation than for actually illicit purposes.
Interesting turn of phrase, as rotten apples vs oranges would be much more natural to my ear.
Original HN title was actual title, then it was changed.
Your opinion on the validity or ethics of that utility has no impact on the fact that for some people they have utility.
And even Facebook definitely didn't win because of some technical choice... no one cared what tech stack powered a social media site, and if anything Facebook was less advanced than MySpace as far as users were concerned.
—
If you're talking about how it matters further down the line, then you're walking away from the wrapper thesis too: the whole line being parroted is that it's just an API wrapper ripe for the copying. Good luck getting to the "further down the line" reliably as a company, let alone down the line and then killing your competitor with a game plan that mostly consists of copying them.
~~~
> In 2021, at the height of the investor frenzy for crypto startups, entrepreneur Chris Horne raised $2 million in seed funding for Filta, a marketplace on which customers could buy and sell custom nonfungible token face filters that could digitally augment their face, say, by adding cat whiskers or a block head. But by the time the company launched in late summer of 2022, enthusiasm for crypto had waned and Filta was faltering.
>
> So Horne pivoted to the new hottest sector: artificial intelligence. He ditched the NFT idea, and this year relaunched Filta as a generative AI-powered digital pet, one that talks and can offer its owner emotional support. The technology behind his new company is OpenAI’s large language model, ChatGPT. And Horne is running his new Filta venture off the capital he raised for his original concept.
That is probably the most cynical version of “crypto guy pivots to AI” I have ever read, but even here it’s an obvious improvement. Before, he was going to sell people pictures of cats on the blockchain. Now, he is going to sell people pictures of cats that will talk and offer emotional support and not be on the blockchain. Strictly better!
~~~

Seems like maybe a little bit?
The camera that was pointed down at an angle was the worst. Both models would only identify a dog and a person correctly about 15% of the time (missing me or my partner as I walked by and waved), and produced a false object detection about 80% of the time even when nothing from its ground-truth classes was in frame (usually desks, beds or chairs; I don't recall exactly, but it was furniture, and the camera was pointed at my empty back lot). It had just as many shadow/sunspot/tree failures as Motion. The other camera at eye level did a great job with cars, but not so much with people's side profiles, only head-on.
It was laughably bad. And I have no intention of training my own models on my datasets because I don't have time to label. I did this in 2018-2019 so I don't know what the state of the art object detection models are like today, maybe they got their shit together for non-canonical angles.
I eventually switched back to full-time recording on a 2 TB HDD, and if I need to scan back I can jog the livestream because it saves weeks of data.
I have coworkers with family in Turkey (and Lebanon, Iran, Argentina...): they want USD. They don't care about Bitcoin, they want stablecoins. Most stablecoins are inherently dangerous, because you need to trust sketchy (when not outright criminal) and centralized entities to issue quasi-dollars that can get shut down by the US DoJ at any time. If they don't collapse on their own before that.
Venezuela is an exception because a few people manage to mine Bitcoin illegally, given that electricity is virtually free. Other than that, it seems the most practical currencies in Venezuela right now are contraband gasoline sold in Colombia, drugs, kidnapping, prostitution, ...
CBDCs are the absolute antithesis of cryptocurrencies.
What "existing adoption" are you talking about? In 15 years I've seen only speculation and a grand total of zero products used day-to-day by average people.
The fact you lump all these things together is a perfect illustration. Most blockchain and cryptocurrencies proponents understand nothing about technology.
There is no moat. Anything even moderately profitable will be implemented in 4 hours by the whales.
ChatGPT in the first month had more demonstrated utility than Crypto since it was founded.
They are certainly impressive, but their utility-to-hype/gimmick ratio is incredibly low right now, which could cause a crash. The greater the disappointment the greater the crash.
I'm reminded of 3D TVs. Remember those? Avatar came out in 2009. By 2016 the trend was dead. Despite the cries of "this time it's different." Of course, that time it was not different. The tech was impressive. Much more than the previous time the fad was around in the '80s. Remember the blue/red glasses? Absolutely not a single person talks about 3D TV today.
The 3D TV was a technical success, but it was enough of a gimmick that it died out. My Facebook feed is a never-ending stream of AI-generated garbage. I think people are going to tire of it, realize the images it makes are about as goofy as a 2004 MySpace page, and maybe it will stick around to fill out the useless corporate email and document bureaucracy and boilerplate framework code-monkey BS.
But ChatGPT isn't writing Breaking Bad or The Sopranos anytime soon.
The majority of people don't have formal training in probability and statistics, not to mention limit theorems and finance, so who cares how they view the stock market? I mean, I care, in the sense of educating people, but most people don't really want to put the time in.
stocks and gambling both have risk, but only stocks reward many/most types of risk; gambling does not. The expected value of stocks is positive; gambling is not.
What people are trying to say about stocks is that they are stochastic, and so is gambling.
on the larger topic, crypto also does not reward risk or offer a positive expected value. Its stochastic nature is driven by people's changing opinions about it, or by secondary effects from other stochastic markets that rely on it. Mining bitcoins is stochastic from the point of view of a miner, but not really from the point of view of the market or at any scale. And without a productive use case providing a reward, there is no positive expected value, and the reward for risk ("you got a coin") does not stay above the cost of mining, at least not for long.
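The expected-value distinction above can be made concrete with a toy simulation. This is a hedged sketch, not a model of any real market: the per-round "edge" numbers are purely illustrative, and the only point is that a small positive drift compounds in your favor over many rounds while a small negative one (a house edge) grinds you down.

```python
import random

def simulate(edge, rounds=100_000, stake=1.0, seed=0):
    """Simulate repeated even-money bets. `edge` is the per-round
    expected return: positive for a stocks-like exposure with drift,
    negative for a house-favored casino game. Returns the average
    payoff per round, which converges toward `edge`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        # win probability shifted by the edge around a fair coin flip:
        # E = p*(+1) + (1-p)*(-1) = 2p - 1 = edge
        p_win = 0.5 + edge / 2
        total += stake if rng.random() < p_win else -stake
    return total / rounds

# Illustrative numbers only: a diversified index has historically had a
# positive drift, while every casino game has a built-in house edge.
print(simulate(+0.05))  # positive: risk is rewarded on average
print(simulate(-0.05))  # negative: risk is not rewarded on average
```

Same variance in both cases, opposite sign of expectation; that sign is the whole difference the comment is pointing at.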
Where is the problem? People who don't have this risk affinity don't need to buy/use a jetpack. Similarly, not everybody should go ice climbing or BASE jumping. Thus I see no reason to outlaw jetpacks just because of their danger.
Clearly the AI being compared here is the recent boom in generative AI. OCR didn’t have companies chucking billions at experiments or send chip manufacturing stocks soaring.
This is problematic, when the nature of the hyped thing is distorted. Cryptoassets for example were useful mostly for high risk speculation, shady money transfers and for pyramid schemes. As a currency these things totally sucked, but that was the promise: "Soon money is going to be replaced by this thing". Additionally they hyped the underlying technology ("everything must be on the blockchain!") and some idiots went along and made it part of their tech stack without any rational reason to do so.
Machine learning is different in that it already showed some incredible value. That value comes with potentially huge societal impacts as it will destroy entire classes of jobs and distort the concept of truth even further. But having a thing that does what it does is genuinely useful, outside of get-rich-quick schemes and speculation.
Now machine learning is in danger of being overhyped into something it is not. As impressive as some of the results are, this is not artificial intelligence in the traditional "artificial consciousness" sense of the word. It is a way to come up with plausible outputs to a given input.
LLMs -- I've gradually been using them more and more, with tangible benefits (less effort to complete a task / quicker turnaround on projects). Some workflows that were unimaginable are now possible, because of this bi-directional bridge between structured information and human language.
One is a pyramid scheme, the other is a digital exoskeleton / ironman suit. It really doesn't compare.
There’s a difference between having competition and having a business that can be trivially cloned. The challenge for a lot of AI startups is to show that they are adding something and are not just a dumb wrapper.
The killer app for generative AI is going to be propaganda. This hasn't entered the discourse yet because nobody wants to advertise that they're running a propaganda mill. I suspect they already exist though - there've been a number of news articles I've read recently where I'm like "I'm pretty sure somebody fed a tweet or police blotter into GPT-4 instead of writing this."
This works now because people are accustomed to trusting what they read. Once the channel has been flooded and it becomes cheap to make it look like your views are echoed by 1000 mainstream news media outlets and millions of people online, people will just stop believing everything they read. Similarly once any idiot can have ChatGPT write a college-level term paper, the skill of writing at the college level won't be worth anything. When you can have ChatGPT write a recommendation letter with a 15-second prompt, it ceases to be a useful signal for how much you believe in the person you're recommending. When you have GMail expand your one-sentence e-mail into 4 paragraphs with generative AI and then the recipient summarizes the 4 paragraph e-mail back into one-sentence, maybe you should've just written the one sentence to begin with.
The value in blockchain technologies is in unforgeability, scarcity, and forced consensus. In a world where forgery is trivially easy, content is trivially abundant, and nobody believes anybody else, a technology that ensures that mutually-distrusting computer systems all represent the same data gets quite valuable.
I’d say that greatly depends on your code. I’ve had GPT write JSDoc where it explains exactly why a set of functions is calculating the German green energy tariffs the way they do. Some of what it wrote went into great detail about how the tariff is not applied if your plant goes over a specific level of production, and why we try to prevent that.
I get your fears, but I don’t appreciate your assumptions into something you clearly both don’t know anything about (our code/documentation) and something you apparently haven’t had much luck with compared to us (LLM documentation).
You’re not completely wrong of course. If you write code with bad variable names and functions that do more than they need to, then GPT is rather bad at hallucinating the meaning. But it’s not like we just blindly let it auto write our documentation without reading it.
The negativity in tech is largely scapegoating driven in my opinion. The slanderers behind the non-existent 'Techlash' haven't stopped any more than the idiots trying to ban actual non-backdoored cryptography. It is all so incredibly stupid to me yet people keep on falling for the crap often enough that I disengage with them entirely. And people basically look at me like I'm the crazy one for pointing it out.
Binance let go 1,000 people over the summer and another 100 just in September.
Yeah, even with a bunch of safety features... Well, this Mitchell & Webb skit sums up the human-factor. [0]
Crypto is very likely neither cheaper nor faster, since you can't spend the crypto directly, and need to FX it through an exchange on the sending side and the receiving side, each of which will take a cut (often percentages of the total). You also need to fund the account sending, and you need to transfer from the exchange receiving to a bank account. Both of those transfers could also cost money. You're also doing FX twice (USD -> crypto, crypto -> Yen), rather than once (USD -> Yen).
If you fuck up an international wire transfer, it may take a month or two for the funds to make it back, and you may need to have numerous conversations with both banks (I've been through this pain more than once and it sucks). If you fuck up a crypto transfer you lose your money with no recourse.
All-in-all the wire transfer is the better (and probably cheaper/faster) experience.
i agree with the bubble sentiment that apparently many people have. i recognize how it would be at my own career's expense. but i feel that many arguments made here miss the forest for the trees.
applied statistics or statistical learning has been around long enough and we have seen its innovations and rebranding over the decades. i clearly see the theoretical point to it and hence decided to find my place in this field.
however, the "AI" movement as of late, including the generative AI bits, fall into the bubble territory for me. just like those who are serious about blockchain and its wide implications will still toil towards it, so would companies serious about machine learning.
however, most people riding the wave are in it for short-term gains, just like many in the crypto space were there for speculative money-making.
the LLMs to me are an evolution (albeit a macro one) of the predictive functionality of smartphone keyboards of the past, but they are touted to be the holy grail. their capabilities are impressive, but they only scale up so much in their current form. those just making an app on top of the api provided by these services will not last. moreover, the explosion of advancements means there will be no stability for those maintaining the infrastructure in the near future.
at least the pursuit of making the largest models has shed light on the need to optimize the deep learning stack, which is the only silver lining for me.
i would love to be wrong and see what comes next, but i believe the general public will lose interest soon and we will have another winter before a major breakthrough. the "AGI" claims are just like those made by vr enthusiasts in the 2010s...i mean the 1980s.
And who said I am not addicted? I don't do hard drugs, but I am certainly addicted to coffee, sugar and maybe other habits I (moderately) indulge in, and I would be very pissed off if someone else tried to take them away from me.
This point of view is too simplistic, not everything needs to be a billion dollar idea or differentiated. You can make an absolute good living with no moat and good product senses. Even if you get outgunned, you move on to the next opportunity.
The trust issue is real, China's Great Firewall is real, and the crackdown by the SEC is also real.
And without anyone exchanging your bitcoins no one cares.
And in comparison to our fiat, a ton of critical features are missing, like money laundering.
Not saying that it’s wrong or right, but it would be like asking someone in a cult to tell you if the cult is a scam. You might find a few dissenting voices, but most are going to support it, or it would lack the critical mass required to stay afloat.
The people declaring it dead either don't grasp it or got burnt by buying in during the last bull run.
Take a step back, guys. If you trade, trade anticyclically; now is a better time to buy than two years ago - just sayin.
But outside of a couple of meme articles about how "someone bought a house with BTC!" the only use case I can find for crypto is money laundering or ransomware.
And how does blockchain make this work? By making authenticity too expensive for spammers, you've made it too expensive for 90+% of the population. The spammers/propagandists have orders of magnitudes more money than me.
The thing about gambling is it's a zero sum game. It doesn't enable any "real" productivity, it's just passing money around (with skimming off the top).
ML/AI isn't necessarily like that, it can be actually useful. Nevermind chatbots, we've already see how "AI" is useful in products for the last decade (e.g. google search results and extracting structured data out of emails, just to name a couple).
The only similarity is the hype/confusion cycle. Lots of crypto people got rich because they were in the right place at the right time, and they want to be there with the chatbot wave next.
The fact that AI/ML can be judged on real utility will limit some of this, and I think these crypto people will be in for a rude awakening if they think they can replicate their success here. With crypto the "game" of gambling / speculating meant that there was a lot of demand for ongoing endeavors, but once people realize that low effort ChatGPT reskins don't deliver anything tangible it'll be pretty obvious the emperor has no clothes.
You can't buy/trade ChatGPT prompts, after all - unless, perhaps, you were to create prompt NFTs?
I suspect that the actual cryptocurrency that wins out here hasn't been invented yet, or it'll be a layer on top of Ethereum. It needs to actually function like a currency, and it needs to give you mechanisms to trade items of value in the real world, goods and services, for future goods and services. None of this "it's just a wildly variable front over USD that you can profit off of swing trades."
AI is way too overhyped and also completely not understood. I think most people here immediately think of some kind of genetic algorithm when they hear AI but even a simple thermostat could be marketed as AI even if all it does is turn on the furnace when some thermometer provides a low signal. The only thing reading AI on a product tells you is that there is software.
I'm unconvinced GPT will remain as a mass market tool. Google Docs got super popular because people don't want to fork out like $50 for microsoft word; they're not going to fork over $15/month to do web searches.
For an apples:apples comparison we need to compare the AI of today with the cryptocurrencies of 2063, or the cryptocurrencies of today with the AI of 1984.
What you saw was probably some light astroturfing, backed by wave after wave of non-tech celebrity sponsors, and a pump-n-dump shill bidding scheme.
The issue is that this LLM boom, coinciding with the crypto-bro crash, has caused a lot of scammers to pivot out of cryptocoins and into AI.
There's a real subset of R&D happening with AI no doubt. But keep your guard up, there's also a flood of cryptobros who have lost everything after FTX who are trying to pivot into another field. That's all that I'm saying.
"Lihe Pharmaceutical Technology Company, based in Wuhan, Hebei Province, China, was charged with fentanyl trafficking conspiracy and international money laundering, along with Chinese nationals Mingming Wang, 34, who is the alleged holder for three bitcoin accounts shared by sales agents for Lihe Pharmaceutical, and Xinqiang Lu, 40, the alleged recipient of funds via Western Union on the company’s behalf." [1]
[1] https://www.justice.gov/opa/pr/justice-department-announces-...
Fiat money derives its value from what you say, indeed. It is traded on exchanges because of that. But fungibility is only one of several factors. Money should also serve as a store of value, but for how long is highly debatable, and that is exactly the point.
Regardless, HN has always seemed to me to have a more pessimistic view, so it is interesting to see the converse. It would also be interesting to bucket by timezone.