I'm tired. I'm tired of developers/techies not realizing their active role in creating a net negative in the world. And acting like they are powerless and blameless for it. My past self is not innocent in this; but I'm trying to make progress, and I make a concerted effort to challenge people to think about it whenever I can.
Countless times now, the tech industry (and developers specifically) has gone from taking on an interesting technical challenge, to quickly making some sort of ethical or moral tradeoff, to absolutely shaping the fabric of society for the worse.
Creating powerful search engines to feed information to all who want it; but we'll need to violate your privacy in an irreversible way to feed the engine. Connecting the world with social media; while stealing your information and mass-exposing you to malicious manipulation. Hard problems to solve without the ethical tradeoff? Sure. But every other technical challenge was also hard and got solved; why can't we also focus on the social problems?
I'm tired of the word "progress" being used without a qualifier of what kind of progress and at the cost of what. Technical progress at the cost of societal regression is still seen as progress. And I'm just tired of it.
Every time "AI skeptics" are brought up as a topic, the focus is entirely on the technical challenges. They never mention the "skeptics" who get that label despite not being skeptical of what AI is and could be capable of. I'm skeptical of whether the tradeoffs being made will benefit society overall, or just a few. Because at literally every previous turn for as long as I've been alive, the impact has been a net negative to the total population, without developers questioning their role in it.
I don't have an answer for how to solve this. I don't have an answer for how to stop the coming shift from destroying countless lives. But I'd like developers to start being honest about their active role in not just accepting this new status quo, but proactively pushing us in a regressive direction. And about our power to push back on this coming wave.
But, tech was not always a net negative.
As far as I can tell, the sharpest negative inflection came around the launch of the iPhone. Facebook was kind of fine when it was limited to universities and they weren't yet doing mobile apps, algorithmic feeds, or extensive A/B testing.
It seems "optimizing engagement" was a grave initial sin...
Maybe some engineers should go back to their childhoods, watch some Outer Limits, and pay attention to the missed lessons.
Our lives are not our own. From womb to tomb, we are bound to others. Past and present. And by each crime and every kindness, we birth our future.
I am tired of people blaming rank-and-file developers, while CEOs get millions for "the burden of responsibility".
No "we" don't want it. And those who do want it, let them go live in the early industrial England whete the lack of regulation degenerated masses.
Also, for some reason people still portray capitalism with and without regulation as two completely different things; it's like saying a man is a completely different person in a swimsuit and in a suit.
> We, in the western world, were in the privileged position of having a choice, and we chose individual profit over the communal good
Again, "we" did not have a gathering a choose anything. Unless you have records of that zoom session.
> given the fact we're essentially animals.
This is a reductionist statement that doesn't get us anywhere. Yes, we are animals, but we are more than that, just as we are made of quarks but are also more than quarks.
We developers are not blameless. If we accept that we are playing a role, then we can be proactive in preventing this and influencing the direction things go. CEOs need developers to achieve what they want.
I'm not saying it's easy. I won't even hold it against folks who decide to go in a different direction than mine. But I at least hope we can be open about the impact we each have; and that we are not powerless here.
This is because most people on HN who say they are skeptical about AI mean skeptical of AI capabilities. This is usually paired with statements that AI is "hitting a wall." See e.g.
> I'm very skeptical. I see all the hype, listen to people say it's 2 more years until coding is fully automated but it's hard for me to believe seeing how the current models get stuck and have severe limitations despite a lot of impressive things it can do. [>>44015865 ]
> As someone who is mildly skeptical of the current wave of LLM hype and thinks it's hitting a wall... [>>43634169 ]
(that was what I found with about 30 seconds of searching. I could probably find dozens of examples of this with more time)
I think software developers need to urgently think about the consequences of what you're saying, namely what happens if the capabilities that AI companies are saying are coming actually do materialize soon? What would that mean for society? Would that be good, would that be bad? Would that be catastrophic? How crazy do things get?
Or, to put it more bluntly: "if AI really goes crazy, what kind of future do you want to fight for?"
Pushing back on the wave because you take AI capabilities seriously is exactly what more developers should be doing. But dismissing AI as a skeptic of its capabilities is a great way to cede the ground on actually shaping where things go for the better.
Everything else around it is a glamorous party, because everyone's money is riding on it and one needs to appreciate it or risk being deserted by the crowd.
The basis of science is questioning things until you are convinced. People who depend too much on models may end up losing the ability to triangulate information from multiple sources before being convinced of something.
Programming can get complicated above a certain threshold even for humans, so it will be interesting to see how the models handle that complexity. I am skeptical, but then again, I don't know the future either.
I’m definitely not skeptical of its abilities, I’m concerned by them.
I'm also skeptical that the AI hype is going to pan out the way people say it will. If most engineers write average or crappy code, then how are they going to know if the code they are using is a disaster waiting to happen?
Verifying an output to be safe depends on expertise. That expertise is gained through the creation of average or bad code.
This is a conflict in process needs that will have to be resolved.
For AI companies, the objective is to get a model that does better on benchmarks and vibes, so it can be SOTA and earn a higher valuation for stakeholders.
For coders, they just want the shit done. Everyone wants the easy way if their objective is to complete a project; but for some the objective is learning, and they may not choose the easy way.
Why do they want the easy way? Speaking as someone whose cousins and brothers are in the CS field (I am still in high school), they say that if they get paid x, the company extracts at least 10x worth of work from them (figuratively, of course). One must wonder why they should be the ones morally on the hook if AI goes bonkers.
Also, the best developers not using AI would probably slow it down a little, but the AI world moves so fast that it's unpredictable; DeepSeek was unpredicted. I might argue that it's now a matter of us vs. China in this new AI arms race. Would that stop if you stopped using it? Many people already hate AI, but has that done much to stop it? That is, if you can even call AI stoppable at the moment.
It's paradoxical. But to be frank, LLMs were created for exactly what they're excelling at. It's a technological advancement and a moral degradation.
It's already affecting the supply chain, tbh. And to be frank, I am still using AI to build projects that I just want to experiment with, to see if they can really work without me acquiring the domain-specific knowledge. I do want to learn more and am curious, but I just don't have much time in high school.
I don't think people cared about privacy, and I don't think people would care about it now. And it's the same as not using some big social media giant: you can't escape it. The tech giants made things easier but less private. People chose the easier part, and they would still choose the easy part, i.e. LLMs. So I guess the future is bleak, eh? Well, the present isn't that great either. Time to just enjoy life while the world burns from the regret of its past actions, all for 1% shareholder profit. (For shareholders, it was all worth it though, am I right?)
My $0.02
Political-economic analysis of technology is not a super popular thing in mainstream media, but disabling, sabotaging, or vandalising anti-human tech might be.
And I am not sure, but like, I have got this one life. Why can't I just be a good guy who wants to help others while still being in the system?
Why do I have to suffer for other people's decisions and bear the mental responsibility too?
Nobody's perfect. Nor do I intend to be. We are all gonna die. I just want to leave my community a little bit more charming a place, not bring about a revolution.
I can't escape the system, because that thought terrifies me. It terrifies me because you have to pick your battles wisely. I won't leave my coding job because of LLMs.
Instead, if I really feel like trying to do good, I can live a frugal life for some time and donate extensively to people who are dying of hunger and the like.
And I would still have the freedom to go back at any instant and stop donating.
The same can't be said about leaving a job. It's hard to re-enter.
I am not sure, lol. But I would much rather build stuff that I like with LLMs and then donate, instead of the project not existing or taking way longer, imo.
Tech has always been a tool for control, power and accumulation of capital.
You counterbalance it with social and civic laws (i.e. counter-power).
https://fred.stlouisfed.org/series/RSAHORUSQ156S
(This is somewhat but not entirely tautological.)
These LLMs may not be inherently evil, but their impact on society could be destabilising.
Most of them got into tech because it's fun and because it pays royally. Morals have little to do with it for lots of folks.
I'm not saying there is no evil, but that argument at least holds little ground.
Some would say "The Industrial Revolution and its consequences have been a disaster for the human race."
These systems (LLMs, diffusion) yield imitative results just powerful enough to eventually threaten the jobs of most non-manual laborers, while simultaneously being not powerful enough (in terms of capability to reason, to predict, to simulate) to solve the hard problems AI was promised to solve, like accelerating cancer research.
To put it another way, in their present form, even with significant improvement, how many years of life expectancy can we expect these systems to add? My guess is zero. But I can already see a huge chunk of the graphic designers, the artists, the actors, and the programmers or other office workers being made redundant.
When in actuality, I personally believe that it doesn't.
Anyway, that's strictly better than renting the same house for the same rent, because you can sell it. The downside of homeownership is that extra expenses, like repairs, are now your problem.
Oh, and you don't get fixed-rate mortgages like Americans do, I guess.