zlacker

[parent] [thread] 8 comments
1. sho_hn+(OP)[view] [source] 2026-01-01 01:38:45
Not in this review: also a record year for intelligent systems aiding in, and prompting, fatal self-harm by human users.

Will 2026 fare better?

replies(3): >>simonw+o >>measur+r >>andai+s
2. simonw+o[view] [source] 2026-01-01 01:42:15
>>sho_hn+(OP)
I really hope so.

The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm and AI psychosis and suchlike. See the GPT-4o retirement (and resulting backlash) for an example of that.

But the number of users is exploding too. If they make things 5x less likely to happen but sign up 10x more people it won't be good on that front.

replies(1): >>Nuzzer+nE
3. measur+r[view] [source] 2026-01-01 01:43:02
>>sho_hn+(OP)
The people working on this stuff have convinced themselves they're on a religious quest so it's not going to get better: https://x.com/RobertFreundLaw/status/2006111090539687956
4. andai+s[view] [source] 2026-01-01 01:43:03
>>sho_hn+(OP)
Also essential self-fulfilment.

But that one doesn't make headlines ;)

replies(1): >>sho_hn+C
5. sho_hn+C[view] [source] [discussion] 2026-01-01 01:44:18
>>andai+s
Sure -- but that's fair game in engineering. I work on cars. If we kill people with safety faults I expect it to make more headlines than all the fun roadtrips.

What I find interesting with chat bots is that they're "web apps" so to speak, but with safety engineering aspects that type of developer is typically not exposed to or familiar with.

replies(1): >>simonw+b1
6. simonw+b1[view] [source] [discussion] 2026-01-01 01:50:19
>>sho_hn+C
One of the tough problems here is privacy. AI labs really don't want to be in the habit of actively monitoring people's conversations with their bots, but they also need to prevent bad situations from arising and getting worse.
replies(1): >>walt_g+Z2
7. walt_g+Z2[view] [source] [discussion] 2026-01-01 02:11:37
>>simonw+b1
Until AI labs have the equivalent of an SLA for giving accurate and helpful responses, it won't get better. They're not even able to measure whether the agents work correctly and consistently.
8. Nuzzer+nE[view] [source] [discussion] 2026-01-01 10:22:17
>>simonw+o
How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition, leading to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.

Same thing with “psychosis”, which is a manufactured moral panic crisis.

If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.

It’s time to stop parroting slurs like that.

replies(1): >>falken+JZ2
9. falken+JZ2[view] [source] [discussion] 2026-01-02 06:08:46
>>Nuzzer+nE
‘How does a model “trigger” self-harm?’

By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.
