zlacker

[parent] [thread] 4 comments
1. thefz+(OP)[view] [source] 2026-02-04 22:32:43
You made me imagine AI companies maliciously injecting backdoors into generated code that no one reads, and now I'm scared.
replies(3): >>gibson+N3 >>djeast+Pp >>bandra+3v
2. gibson+N3[view] [source] 2026-02-04 22:54:10
>>thefz+(OP)
My understanding is that it's quite easy to poison models with inaccurate training data, so I wouldn't be surprised if this exact thing has happened already. Maybe not by an AI company itself, but it's well within the capabilities of a hostile actor to seed bad code for this purpose. In a sense it has already happened via supply chain attacks: attackers register package names that LLMs hallucinate, names that didn't exist before the models started generating them.
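FWIW, one cheap guard against that particular attack is to refuse any dependency that either isn't on PyPI at all or was registered suspiciously recently. Rough Python sketch; the example package names and the 90-day threshold are made up, but the PyPI JSON endpoint is real:

    # Guard against slopsquatting: refuse deps that don't exist on PyPI
    # or were registered suspiciously recently.
    import json
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    def pypi_age_days(name):
        """Days since the package's first upload, or None if it doesn't exist."""
        try:
            with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return None  # never registered: likely hallucinated
            raise
        uploads = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data["releases"].values()
            for f in files
        ]
        if not uploads:
            return None  # name reserved but no releases: treat as missing
        return (datetime.now(timezone.utc) - min(uploads)).days

    for pkg in ["requests", "numpy", "surely-hallucinated-pkg"]:  # example names
        age = pypi_age_days(pkg)
        if age is None:
            print(f"{pkg}: not on PyPI, do not install")
        elif age < 90:
            print(f"{pkg}: only {age} days old, possible slopsquat, review first")
        else:
            print(f"{pkg}: first upload {age} days ago")

Existence alone proves nothing, since the whole attack is that someone registers the hallucinated name; that's why the age check does most of the work.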
3. djeast+Pp[view] [source] 2026-02-05 01:27:06
>>thefz+(OP)
One mitigation might be to use one company's model to check code generated by another company's model, and rely on market competition to keep the checks and balances honest.
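Something like the sketch below; the generate/review callables are hypothetical stand-ins for whichever two competing vendors' APIs you'd actually wire up:

    # Rough sketch of cross-vendor review: model A writes, model B audits.
    # generate/review are hypothetical stand-ins for two competing vendors' APIs.
    from typing import Callable, Tuple

    def cross_check(
        task: str,
        generate: Callable[[str], str],  # vendor A's code model
        review: Callable[[str], str],    # vendor B's model, different company
    ) -> Tuple[str, str]:
        code = generate(task)
        verdict = review(
            "You are auditing code produced by a competitor's model.\n"
            "Flag backdoors, suspicious network calls, or odd dependencies.\n\n"
            + code
        )
        return code, verdict

    # Stand-in stubs so the sketch runs as-is:
    gen = lambda task: "def add(a, b):\n    return a + b\n"
    rev = lambda prompt: "No suspicious constructs found."
    code, verdict = cross_check("write an add function", gen, rev)
    print(verdict)

Of course this only helps if the two models were trained on different enough data; if both vendors scraped the same tainted corpus, they may well agree on the backdoor.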
replies(1): >>thefz+gW
4. bandra+3v[view] [source] 2026-02-05 02:09:01
>>thefz+(OP)
Already happening in the wild
5. thefz+gW[view] [source] [discussion] 2026-02-05 06:35:21
>>djeast+Pp
What about writing the actual code yourself?