Though I'm not a "vibe coder" myself, I very much recognize this as part of the "appeal" of GenAI tools more generally. Trying to get image generators to do what I want has a very "gambling-like" quality to it.
If it doesn't work the first time you pull the lever, it might the second time, or it might not. Either way, the house wins.
It should be regulated as gambling, because it is. There's no metaphor: the only difference from a slot machine is that AI will never output cash directly, only the possibility of an output that could make money. So if you're lucky with your first gamble, it'll give you a second one to try.
Gambling all the way down.
That's wild. Anything with non-deterministic output will have this.
Anything with non-deterministic output that charges money ...
Edit: Added words to clarify what I meant.
Brain scans have revealed that waiting for a potential win stimulates the same areas as the win itself. That's the "appeal" of gambling. Your brain literally feels like it's winning while waiting because it _might_ win.
All those laid-off coders gambled on a career that didn't pan out.
Want more certainty in life? Gonna have to get political.
And even then there is no guarantee the future gives a crap. Society may well collapse in 30 years, or 100…
This is all just role play to satisfy the prior generations' story-driven illusions.
Every prompt and answer contributes value toward the final solution, even if that value is just narrowing the latent space of potential outputs by keeping track of failed paths in the context window, so that the model can avoid those paths in a future answer after you provide follow-up feedback (a rough sketch of what I mean is below).
The vast majority of slot machine pulls produce no value to the player. Every single prompt into an LLM tool produces some form of value. I have never once had an entirely wasted prompt unless you count the AI service literally crashing and returning a "Service Unavailable" type error.
One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.
It also kind of breaks the whole argument that they're designed to be addictive in order to make you spend more on tokens.
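To make that failed-path point concrete, here is a minimal sketch using the OpenAI Python client; the model name, prompt, and feedback text are placeholder assumptions for illustration, not anything I actually ran:

    # Minimal sketch: the failed answer and your feedback stay in the message
    # history, so the next completion is conditioned on what already didn't work.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    messages = [{"role": "user", "content": "Write a SQL query that deduplicates rows by email, keeping the newest row."}]

    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = first.choices[0].message.content

    # Suppose the answer fails in testing; record the failed path plus the feedback.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "That query drops rows with NULL emails. Keep them, and don't repeat the previous approach."})

    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(second.choices[0].message.content)

Even the "wasted" first answer is doing work here: it sits in the context and steers the second one away from the dead end.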
This has not been my experience; maybe sometimes, but certainly not always.
As an example: asking ChatGPT/Gemini how to accomplish some SQL data transformation set me back in finding the right answer, because the answer it gave me was so plausible but also super duper not correct in the end. I would have been better off not using it in that case.
Brings to mind "You can't build a ladder to the moon"
That assumes that the value of a solution is linear with the amount completed. If the Pareto Principle holds (80% of effects come from 20% of causes), then not getting that critical 10+% likely has an outsized effect on the value of the solution. If I have to do the 20% of the work that's hard and important after taking what the LLM did for the remainder, I haven't gained as much because I still have to build the state machine in my head to understand the problem-space well enough to do that coding.
Though you could say the same thing about pretty much any VC-funded sector in the "Growth" phase. And I probably will.
- I buy stock that doesn't perform how I expected.
- I hire someone to produce art.
- I pay a lawyer to represent me in court.
- I pay a registration fee to play a sport expecting to win.
- I buy a gift for someone expecting friendship.
Are all gambas.
You aren't paying for the result (the win); you are paying for the service that may produce the desired result, and in some cases one of many possibly desirable results.
Hence the adage "sir, this is a casino"
Especially when you try to get them to generate something they explicitly tell you they won't, like nudity. It feels akin to hacking.
Neither is GenAI; the grandparent comment is dumb.
I almost can't believe this idea is being seriously considered by anybody. By that logic buying any CPU is gambling because it's not deterministic how far you can overclock it.
Just so you know, not every LLM use case requires paying for tokens. You can even run a local LLM and use cline with it for all your coding needs. Pull that slot machine lever as many times as you like without spending a dollar.
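For what it's worth, here's a rough sketch of that zero-cost loop, assuming Ollama is serving its OpenAI-compatible API on the default port and you've already pulled a coding model (the model name is just an example):

    # Rough sketch: reuse the standard OpenAI client against a local Ollama server.
    # Assumes Ollama is listening on its default port (11434) and a model has been
    # pulled locally; the api_key value is ignored by Ollama but must be non-empty.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="qwen2.5-coder",  # whichever local model you pulled
        messages=[{"role": "user", "content": "Refactor this function to avoid the nested loop."}],
    )
    print(resp.choices[0].message.content)

As I understand it, cline can be pointed at the same local server through its provider settings, so the whole loop stays on your machine.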
That's still not gambling and it's silly to pretend it is. It feels like gambling but that's it.
You only lose those rights in the contracts you sign (which, in terms of GPT, you've likely clicked through a T&C that waives all rights to dispute or reclaim payment).
If you ask an artist to draw a picture and decide it's crap, you can refuse to take it and refuse to pay for it. They won't be too happy about it, but they'll own the picture and can sell it on the market.
If you don't get something good the first time you buy a book, you might with the next book, or you might not. Either way, the house wins.
It should be regulated as gambling, because it is. There's no metaphor: the only difference from a slot machine is that books will never output cash directly, only the possibility of an insight or idea that could make money. So if you're lucky with your first gamble, you'll want to try another.
Gambling all the way down.
Maybe art is special, but there are other professions where someone can invest heaps of time and effort without delivering the expected result. A trial attorney, treasure hunter, oil prospector, app developer. All require payment for hours of service, regardless of outcome.
When it comes to work that requires craftsmanship, it's pretty common to be able to not pay them if they do a poor job. It may cost you more than you paid them to fix their mistake, but you can generally reclaim the money you paid them if the work they did was egregiously poor.
AI is not an excuse to turn off your brain. I find it ironic that many people complain that they have a hard time identifying the hallucinations in LLM-generated content, and then also complain that LLMs are making LLM users dumber.
The problem here is also the solution. LLMs make smarter people even smarter, because they get even better at thinking about the hard parts while not wasting time thinking about the easy parts.
But people who don't want to think at all about what they are doing... well, they do get dumber.
I personally think of AI tools as an incremental aid that enables me to focus more of my efforts on the really hard 10-20% of the problem, and get paid more to excel at doing what I do best already.
But with an LLM, I was able to eliminate this bad path faster and earlier. I also learned more about my own lack of knowledge and improved myself.
I truly mean it when I say that I have never had an unproductive experience with modern AI. Even when it hallucinates or gives me a bad answer, that is honing my own ability to think, detect inconsistencies, examine solutions for potential blindspots, etc.
When you get deep into engineering with AI you will find yourself spending a dramatically larger percentage of your time thinking about the hardest things you have ever thought about, and dramatically less time thinking about basic things that you've already done hundreds of times before.
You will find the limits of your abilities, then push past those limits like a marathon runner gaining extra endurance from training.
I think the biggest lie in the AI industry is that AI makes things easier. No, if anything you will find yourself working on harder and harder things because the easy parts are done so quickly that all that is left is the hard stuff.