That would actually increase their standing in my eyes.
Not too far from where I live, Russian bombing is destroying homes of people whose language is similar to mine and whose "fault" is that they don't want to submit to rule from Moscow, direct or indirect.
If OpenAI can somehow help stop that, I am all for it.
I got some bad news for you then.
And, according to the UN, Turkey has used AI-powered, autonomous loitering drones to hit military convoys in Libya [1].
Regardless of us vs. them, AI shouldn't be a part of warfare, IMHO.
[0]: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...
[1]: https://www.voanews.com/a/africa_possible-first-use-ai-armed...
I am not saying this is anything, but it's definitely tingling my "something's up" senses.
Nor should nuclear weapons, guns, knives, or cudgels.
But we don’t have a way to stop them being used.
- NPR: https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d
- Lieber Institute: https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/
- ICRC: https://casebook.icrc.org/case-study/libya-use-lethal-autonomous-weapon-systems
- UN report itself (Search for Kargu): https://undocs.org/Home/Mobile?FinalSymbol=S%2F2021%2F229&Language=E&DeviceType=Desktop&LangRequested=False
- Kargu itself: https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav
From my experience, the Turkish military doesn't like to talk about all the things they have.

I don't think so. In order to be virtuous, one should have some skin in the game. I would respect dedicated pacifists in Kyiv a lot more. I wouldn't agree with them, but at least they would be ready to face the pretty stark consequences of their philosophical beliefs.
Living in Silicon Valley and proclaiming yourself a virtuous pacifist comes at negligible personal cost.
I will check out the links. Thanks a lot.
The second that this tech was developed it became literally impossible to stop this from happening. It was a totally foreseeable consequence, but the researchers involved didn't care because they wanted to be successful and figured they could just try to blame others for the consequences of their actions.
Such an absurdly reductive take. How about this instead: just like nuclear energy and knives, they are incredibly useful, society-advancing tools that can also be used to cause harm. It's not as if AI can only be used for warfare. And like pretty much every technology, it ends up being used 99.9% for good and 0.1% for evil.
If we cared about preventing LLMs from being used for violence, we would have poured more than a tiny fraction of our resources into safety/alignment research. We did not. Ergo, we don't care; we just want people to think we care.
I don't have any real issue with using LLMs for military purposes. It was always going to happen.
We may lack the motivation and agreement to ban particular methods of warfare, but the means to enforce that ban exists, and drastically reduces their use.
Do we, though? Sometimes, against smaller misbehaving players. Note that it doesn't necessarily stop them (Iran, North Korea), even though it makes their international position somewhat complicated.
Against the big players (the US, Russia, China), "threat of warfare and prosecution" does not really work to enforce anything. Russia rains death on Ukrainian cities every night, or attempts to while being stopped by AA. Meanwhile, Russian oil and gas are still being traded, including in the EU.
People don't participate in murder and they think others shouldn't either.
People don't participate in wars (which are essentially large-scale murder) and they think others shouldn't either.
Murder happens anyway. War happens anyway.
Yet if someone says 'war bad', people jump in and cry 'virtue signaling', but no one does that when people say 'murder bad'.
There's some really weird moral entanglement happening in the minds of people that are so eager to call out virtue signaling.