zlacker

[parent] [thread] 5 comments
1. weikju+(OP)[view] [source] 2026-02-04 22:24:51
Don’t read the code, test for desired behavior, miss out on all the hidden undesired behavior injected by malicious prompts or AI providers. Brave new world!
replies(1): >>thefz+q1
2. thefz+q1[view] [source] 2026-02-04 22:32:43
>>weikju+(OP)
You made me imagine AI companies maliciously injecting backdoors in generated code no one reads, and now I'm scared.
replies(3): >>gibson+d5 >>djeast+fr >>bandra+tw
3. gibson+d5[view] [source] [discussion] 2026-02-04 22:54:10
>>thefz+q1
My understanding is that it's quite easy to poison the models with inaccurate data; I wouldn't be surprised if this exact thing has happened already. Maybe not an AI company itself, but it's well within reach of a hostile actor to seed bad code for this purpose. In a sense it has already happened via supply chain attacks that register AI-hallucinated package names that didn't exist before the LLM generated them.
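To make the hallucinated-package angle concrete, here's a rough sketch of one cheap guard (this assumes Python and PyPI's public JSON API; the package names are made up for illustration):

    # Rough sketch: flag dependencies from generated code that don't exist on PyPI.
    # An existence check is only a first filter; it says nothing about whether a
    # package that *does* exist is trustworthy.
    import urllib.error
    import urllib.request

    def exists_on_pypi(package: str) -> bool:
        """Return True if PyPI's JSON API knows the package, False on a 404."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise  # other HTTP errors aren't evidence either way

    # "requests" is real; "reqwests-pro-toolkit" is a made-up stand-in for the
    # kind of hallucinated name an attacker could register after the fact.
    deps = ["requests", "reqwests-pro-toolkit"]
    unknown = [d for d in deps if not exists_on_pypi(d)]
    print("possibly hallucinated packages:", unknown)

It doesn't replace an audit, it just catches the dumbest failure mode before anything gets installed.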
4. djeast+fr[view] [source] [discussion] 2026-02-05 01:27:06
>>thefz+q1
One mitigation might be to use one company's model to review the code generated by another company's model, and rely on market competition to provide the checks and balances.
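Roughly, something like this sketch: hand code produced by one provider to a second, independent provider for an adversarial review before running it. The openai/anthropic pairing, the model names, and the prompt below are placeholders for the idea, not a recommendation, and the reviewing model can of course miss things too.

    # Rough sketch of cross-provider review: one vendor's model reviews code
    # generated by another vendor's model. Client choice and model names are
    # illustrative placeholders.
    from openai import OpenAI
    import anthropic

    REVIEW_PROMPT = (
        "You are reviewing code written by a different AI system. List any "
        "backdoors, data exfiltration, obfuscated logic, or suspicious "
        "dependencies. Answer CLEAN only if you find none.\n\n{code}"
    )

    def generate_code(task: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": task}],
        )
        return resp.choices[0].message.content

    def review_code(code: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=1024,
            messages=[{"role": "user", "content": REVIEW_PROMPT.format(code=code)}],
        )
        return resp.content[0].text

    if __name__ == "__main__":
        generated = generate_code("Write a Python function that uploads a file over SFTP.")
        print(review_code(generated))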
replies(1): >>thefz+GX
5. bandra+tw[view] [source] [discussion] 2026-02-05 02:09:01
>>thefz+q1
Already happening in the wild
6. thefz+GX[view] [source] [discussion] 2026-02-05 06:35:21
>>djeast+fr
What about writing the actual code yourself?