zlacker

[parent] [thread] 9 comments
1. dehrma+(OP)[view] [source] 2025-01-03 06:02:33
> used/exploited by things like AI training bots

How is this worse than a human reading your blog/code, remembering the key parts of it, and creating something transformative from it?

replies(6): >>moron4+v >>dend+H1 >>sifar+V2 >>ykonst+ja >>mrweas+gg >>camgun+EO
2. moron4+v[view] [source] 2025-01-03 06:07:54
>>dehrma+(OP)
Seriously? How is rule utilitarianism different from act utilitarianism?
3. dend+H1[view] [source] 2025-01-03 06:20:33
>>dehrma+(OP)
In the grand scheme of things and at this point, it probably doesn't matter. I know for me it certainly is not in any shape a discouragement to continue writing on my blog and contributing code to open source communities (my own and others).

But if we're going to dig into this a bit, one person reading my code, internalizing it, processing it themselves, tweaking it and experimenting with it, and then shipping something transformative means that I've enhanced the knowledge of some individual with my work. It's a win. They got my content for free, as I intended it to be, and their life got a tiny bit better because of it (I hope).

The opposite of that is some massively funded company taking my content, training a model off of it, and then reaping profits while the authors don't even get as much as an acknowledgement. You could theoretically argue that in the long run, an LLM would likely help other people through my content that it trained on, but ethically this is most definitely a more-than-gray area.

The (good/bad) news is that this ship has sailed and we now need to adjust to this new mode of operation.

replies(1): >>dehrma+12
4. dehrma+12[view] [source] [discussion] 2025-01-03 06:24:27
>>dend+H1
> The opposite of that is some massively funded company taking my content, training a model off of it, and then reaping profits while the authors don't even get as much as an acknowledgement.

Taking out the "training a model" part, the same thing could happen with a human at the company.

replies(2): >>dend+k2 >>thieaw+r3
5. dend+k2[view] [source] [discussion] 2025-01-03 06:29:20
>>dehrma+12
Oh, 100%. I mentioned this in another comment (>>42582518 ) - I've dealt with a fair share of stolen content (thankfully nothing too important, just a random blog post here and there), and it definitely stings. The difference is that this is now done at a massive scale.

But again - this doesn't stop me from continuing to write and publish in the open. I am writing for other people reading my content, and as a sounding board for myself. There will always be actors of some shape or form that try to piggyback off of that effort, but that's the trade-off of the open web. I am certainly not planning to lock all my writing behind a paywall to stop that.

6. sifar+V2[view] [source] 2025-01-03 06:36:40
>>dehrma+(OP)
Scale.
7. thieaw+r3[view] [source] [discussion] 2025-01-03 06:42:56
>>dehrma+12
This is already a scenario that people generally accept as bad; could you elaborate on the point you are making?
8. ykonst+ja[view] [source] 2025-01-03 08:00:27
>>dehrma+(OP)
Scale makes all the difference in the world.
9. mrweas+gg[view] [source] 2025-01-03 09:08:58
>>dehrma+(OP)
Attribution. If you read a book, blog, or code and others ask you where you got your ideas/inspiration, you can refer them back to the original author. This helps people build a reputation. Even if it just happens every once in a while, it still helps the original author.

Once an AI has hoovered up your work and regurgitated it as its own, all links back to the original creator are lost.

10. camgun+EO[view] [source] 2025-01-03 14:52:00
>>dehrma+(OP)
One of the--admittedly many--things that puts me off AI is the pitch starts off as "you will have abilities you never had before and probably never would have had, be excited", then when critics are like, "woof it's a little worrying you can <generate a million deepfakes>, <send a million personalized phishing emails>, <scrape a million websites and synthesize new ones>, etc.", the pitch switches to "you could always have done this, calm down".

The whole point of software engineering is to do stuff faster than you could before. It is THE feature. We could already add, we could already FMA, we could already do matrix math, etc. etc. Doing it billions of times faster than we could before at far less energy expenditure--even including what it takes to build and deliver computers--has led to an explosion of productivity, discovery, and prosperity. Scale is the point. It changes everything and we know it; we shouldn't pretend otherwise.

[go to top]