zlacker

1. bossyT+ (OP) 2026-01-14 23:35:19
> I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.

Indeed. Between this fundamental unsecurability and the unsolved alignment problem, I struggle to see how OpenAI, Anthropic, etc. will manage to give their investors enough ROI to justify the investment.
