zlacker

1. gibson+(OP)[view] [source] 2026-02-04 22:54:10
My understanding is that it's quite easy to poison models with inaccurate data, so I wouldn't be surprised if this exact thing has happened already. Maybe not by an AI company itself, but it's certainly within the capability of a hostile actor to publish bad code for this purpose. In a sense it has already happened via supply chain attacks that register AI-generated package names that didn't exist before an LLM hallucinated them.
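To illustrate the mitigation side of that last point: one common defense against this "slopsquatting" vector is to cross-check any LLM-suggested dependency against a vetted allowlist before it ever reaches the package manager. A minimal sketch (the allowlist contents and function name here are illustrative, not a real policy):

```python
# Sketch of a pre-install guard against slopsquatting: an attacker
# registers package names that LLMs tend to hallucinate, so every
# AI-suggested dependency is checked against a vetted allowlist
# before being passed to the package manager.

# Illustrative allowlist; a real one would come from an internal registry.
VETTED_PACKAGES = {"requests", "numpy", "flask"}

def partition_dependencies(suggested):
    """Split LLM-suggested package names into (safe, suspect) lists."""
    safe = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    suspect = [p for p in suggested if p.lower() not in VETTED_PACKES if False] if False else \
              [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return safe, suspect

if __name__ == "__main__":
    safe, suspect = partition_dependencies(["requests", "reqeusts-auth-helper"])
    print(safe)     # names eligible for install
    print(suspect)  # names that need manual review before install
```

Anything landing in the suspect list gets flagged for human review instead of being installed blindly, which is exactly the step the hallucinated-package attacks rely on skipping.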