zlacker

[return to "Itch.io Taken Down by Funko"]
1. leafo+W4[view] [source] 2024-12-09 08:19:52
>>spiral+(OP)
I'm the one running itch.io, so here's some more context for you:

From what I can tell, some person made a fan page for an existing Funko Pop video game (Funko Fusion), with links to the official site and screenshots of the game. The BrandShield software is probably instructed to eradicate all "unauthorized" use of their trademark, so they sent reports independently to our host and registrar claiming there was "fraud and phishing" going on, likely to cause escalation instead of doing the expected DMCA/cease-and-desist. Because of this, I honestly think they're the malicious actor in all of this. Their website, if you care: https://www.brandshield.com/

About 5 or 6 days ago, I received these reports from our host (Linode) and from our registrar (iwantmyname). I expressed my disappointment in my responses to both of them, but told them I had removed the page and disabled the account. Linode confirmed and closed the case. iwantmyname never responded. This evening I got a downtime alert, and while debugging I noticed that the domain status had been set to "serverHold" in iwantmyname's domain panel. We have no abuse reports from iwantmyname other than this one. I'm assuming no one on their end "closed" the ticket, so it went into an automated system that disabled the domain after some number of days.
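For anyone who wants to check a domain themselves: a registry hold like this shows up as an EPP status code in WHOIS output. A minimal sketch of pulling those codes out of raw WHOIS text (the sample text below is invented for illustration; real output varies by registry and registrar):

```python
# Sketch: extract EPP "Domain Status" codes from raw WHOIS output.
# SAMPLE_WHOIS is illustrative only; real WHOIS output varies by registry.
import re

SAMPLE_WHOIS = """\
Domain Name: EXAMPLE.IO
Registrar: iwantmyname
Domain Status: serverHold https://icann.org/epp#serverHold
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
"""

def epp_statuses(whois_text: str) -> list[str]:
    """Return the bare EPP status codes (e.g. 'serverHold') from WHOIS text."""
    return re.findall(r"^Domain Status:\s*(\S+)", whois_text, flags=re.MULTILINE)

statuses = epp_statuses(SAMPLE_WHOIS)
# "serverHold" means the registry has pulled the domain out of the DNS zone,
# which matches the downtime symptom described above.
print(statuses)  # ['serverHold', 'clientTransferProhibited']
```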

I've been trying to get in touch with them via their abuse and support emails, but have received no response, likely due to the time of day, so I decided to "escalate" the issue myself on social media.

◧◩
2. Captai+kb[view] [source] 2024-12-09 09:28:52
>>leafo+W4
I really wish BrandShield didn't use AI as a marketing term. It just looks like it's doing a generic ctrl-F on webpages?
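That "generic ctrl-F" can be sketched in a few lines. This is purely illustrative of the naive approach being described, not BrandShield's actual logic; the watch list and page text are made up:

```python
# Illustrative only: a naive substring scan for brand mentions, i.e. the
# "generic ctrl-F" described above. Real brand-protection tooling is more
# involved; the terms and page text below are invented for this sketch.

BRAND_TERMS = ["funko", "funko pop", "funko fusion"]  # hypothetical watch list

def flag_brand_mentions(page_text: str, terms=BRAND_TERMS) -> list[str]:
    """Return every watched term that appears in the page (case-insensitive)."""
    haystack = page_text.lower()
    return [t for t in terms if t in haystack]

page = "A fan page for Funko Fusion, with screenshots and a link to the official site."
print(flag_brand_mentions(page))  # ['funko', 'funko fusion']
```

Note that a scan like this has no notion of context, so it cannot distinguish a fan page from phishing, which is the whole complaint.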

Then things like this happen, and people think "ooh, AI is bad, the bubble must burst," when this has nothing to do with AI in the first place; the real issue is that they sent a "fraud/phishing report" rather than a "trademark infringement" report.

I also wish that people who know better, who realize this really has nothing to do with AI (it's obviously not making decisions autonomously any more than a regular program is), would stop blindly parroting and blaming it as a way to get more clicks, support, and rage.

◧◩◪
3. Captai+Qc[view] [source] 2024-12-09 09:41:46
>>Captai+kb
It's possible they were using LLMs (or even just traditional ML algorithms) to decide whether a given webpage was fraud/phishing rather than mere trademark infringement, though. In that case it makes sense to be angry that no sapient being checked whether the report was accurate before sending it off.
◧◩◪◨
4. acka+Og[view] [source] 2024-12-09 10:29:12
>>Captai+Qc
More than the hypothetical risk of Earth being consumed by a paperclip-making machine, I believe the real and present danger in the use of ML and AI technology lies in humans making irresponsible decisions about where and how to apply these technologies.

For example, in my country we are still dealing with the fallout of a decision the Tax Department made over a decade ago: it used a poorly designed ML algorithm to screen social-benefit claimants for fraud. This led to several public inquiries and even contributed to the collapse of a government coalition. Tens of thousands of people are still suffering from being wrongly labeled as fraudsters, facing hefty fines and being forced to repay benefits that were deemed fraudulent.

◧◩◪◨⬒
5. Captai+Bh[view] [source] 2024-12-09 10:35:06
>>acka+Og
Perhaps in certain cases requiring someone to sign off, and take the blame if anything happens, would help alleviate this problem. Much like how engineers need to sign off on construction plans.

(Layman here, obviously.)

◧◩◪◨⬒⬓
6. jerf+cI[view] [source] 2024-12-09 14:20:45
>>Captai+Bh
If the legal system is not itself fundamentally corrupted or completely razzle-dazzled by the AI hype (and I mean those as serious caveats that are at least somewhat in question), then some very disappointed people are going to lose a lot of money, or even go to jail, when they discover that as far as the legal system is concerned, there already is, legally speaking, some person or entity composed of persons (a corporation) responsible for these actions. It is already not legally possible to act like a bull in a china shop and then cover it over by pointing to your internal AI and disclaiming all responsibility.

The legal system already acts that way when the issue is in its own wheelhouse: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us... The lawyers did not escape by chuckling in amusement, throwing up their hands, and saying "AIs, amirite?"

The system is slow and the legal tests haven't happened yet, but personally I see no reason to believe the legal system won't decide that "the AI" never does anything, and that "the AI did it!" provides absolutely zero cover for any action or liability. If anything the effect will be negative: hooking an AI directly up to some action and then providing no human oversight will come to be seen as ipso facto negligence.

I actually consider this one of the more subtle reasons this AI bubble is substantially overblown. The premise of the bubble is that AI will simply replace humans wholesale: huzzah, cost savings galore! But suppose companies staff something like customer support with AIs and deploy their wildest fantasies with no humans in the loop to turn whistleblower: making it literally impossible to contact a human, literally impossible to get solutions, and so forth. If customers then push these AIs into giving false or dangerous answers, or into agreeing to certain bargains or whathaveyou, and the end result is that you've traded lots of expensive support calls for a company-ending class-action lawsuit, then the utility of buying AI services to replace your support staff drops sharply. Not necessarily to zero; it doesn't have to go to zero. It just makes replacing your support staff with a couple dozen graphics cards a much more incremental advantage rather than a multiplicative one, while the bubble is priced as if it's hugely multiplicative.
