My impression is that it was invented solely to dispel the idea that Windows is insecure and prone to viruses, which would explain why it can be overzealous and CPU-hungry.
I would only enable it for family members who don't know what they are doing. For some reason, I haven't needed any form of active virus scanning in something like 15 years. If it turns out I've been infected this entire time, the criminals sure are taking their time stealing my money, etc.
A great example: PyTorch just recently had a supply chain attack, and installing the nightly version between December 25th and December 30th, 2022 would result in your home directory, including ssh keys, getting uploaded.
Chrome also just had a 0-day in 2022: CVE-2022-3075
PyTorch supply chain attack via Triton, 2022/2023: https://www.bleepingcomputer.com/news/security/pytorch-discl...
EDIT: Also, there's a misconception that Linux somehow doesn't get viruses - however, the PyTorch attack affected Linux users. Making a virus for Windows gives you far more targets than Linux, which is why they're far more common.
I think this would describe the majority of computer users. And the majority of computer users are also using Windows.
> I haven't needed any form of active virus scanning in something like 15 years
Microsoft Defender antivirus was released alongside Windows 8 in 2012, and it's essentially a rewrite of Microsoft Security Essentials, which came included starting with Vista. If you haven't been explicitly disabling it (and it doesn't sound like you have), you've been running one without knowing it for 16 years.
Not quite.
Windows Defender was released together with Windows Vista. That version was very rudimentary and only handled malware and spyware, not unlike Malwarebytes; it did not handle viruses.
Microsoft Security Essentials was released as a standalone download during the Windows 7 era; this was a fully fledged antivirus.
Microsoft Security Essentials was renamed Microsoft Defender and bundled with Windows starting with Windows 8, where it has stayed to this day.
On the other side, you install a very invasive piece of AV software, which runs as a privileged user and intercepts everything that's happening on your system. AV products even make a great target for malware by themselves. Just recently ClamAV had a bug in its file scanner which led to an RCE: CVE-2023-20032
And they're almost exclusively used in targeted attacks against valuable targets, because burning a 0-day to hack grandma's old laptop and steal her facebook password isn't a particularly good investment.
The version of Windows Defender that came with Vista was a bit different and included realtime scanning when executables were run.
At this point the only other antivirus I bother keeping an install of on my personal system is Malwarebytes free in case things really go tits up and I need to run it and rkill from safe mode.
The problem is that this also includes most people who think they know what they’re doing. We’re in the middle of a big change in how general-purpose computers work, and it’s basically driven by accepting that people make mistakes, that trusted sites or things like their URL shorteners or social media accounts are compromised periodically, etc. Maybe you’re really good at never visiting dodgy websites, always use an ad blocker, etc. … but have you never installed the wrong Python, NPM, etc. package by mistake?
Short term, something like Defender makes sense for most devices used for web or email. Longer term, I think we need more focus on sandboxing, hardware MFA, etc. so we aren’t using systems so brittle that everything just falls apart if you make a mistake. I don’t want the entire world to be iOS but the status quo sucked more.
README.md : "to get this to work, curl or wget the following script and run it as sudo"
Linux users: Aye
Well, in the Windows XP days, if you connected to a LAN with compromised devices (in some countries it was popular to just hook up the entire neighborhood to a series of switches, or a poorly managed office network) before installing every single update possible - too late, your machine was part of the botnet.
Also, some environments require antivirus running for certification even if the machine in question is a linux server with read-only volumes.
Installing software to the system should be handled by a package manager, but if you must install something like this, just throw it in a tmpfile and inspect the script before running it.
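Something like this works (just a sketch - the URL is a stand-in for whatever the README actually points at):

    # download to a temp file instead of piping straight into the shell
    tmp="$(mktemp)"
    curl -fsSL https://example.com/install.sh -o "$tmp"
    # read what it's actually going to do before handing it root
    less "$tmp"
    sudo bash "$tmp"
    rm -f "$tmp"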
I know the response to this will be "but the things the script downloads and installs could be malicious", and while this is true, so long as the sources in the install script are fine, I consider this to be a separate issue (but still a big issue).
The issue of trusting source code or binaries is a thing but it doesn't justify copy pasta'ing random scripts in the shell.
Another thing to take note of: in the past there have been bugs in terminal emulators where pasting certain characters made the text look completely different from what it actually was, so pasting "ls $HOME" could actually have been "rm -rf ~/", for example.
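A cheap habit that helps (rough sketch, assuming an X11 clipboard with xclip installed; macOS has pbpaste instead) is to dump the clipboard somewhere inert and look at it before it ever touches the shell:

    # show the clipboard with control characters made visible
    xclip -selection clipboard -o | cat -A
    # or park it in a file and open it in an editor first
    xclip -selection clipboard -o > /tmp/clip.txt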
The original team that worked on this was awesome but a bunch of bad managers came over from Exchange and ruined it.
source: worked on this several years ago
Do you think Defender would have helped with that? I'm highly doubtful.
What would probably have helped is if MS's implementation of protected folders, or whatever it's called, weren't completely brain-dead.
> EDIT: Also, there's a misconception that Linux somehow doesn't get viruses - however, the PyTorch attack affected Linux users. Making a virus for Windows gives you far more targets than Linux, which is why they're far more common.
That's correct. But at least on Linux, if you're so inclined, you can spend a couple of hours setting up some AppArmor or SELinux profiles to prevent random crap from accessing ~/.ssh and ~/top-secret.
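As a rough sketch of what that can look like with AppArmor (the profile name, binary path and "top-secret" directory are just placeholders, and it assumes AppArmor is enabled with apparmor_parser available):

    sudo tee /etc/apparmor.d/untrusted-tool >/dev/null <<'EOF'
    profile untrusted-tool /home/*/bin/untrusted-tool {
      #include <abstractions/base>
      # it may read the rest of $HOME...
      owner @{HOME}/** r,
      # ...but never the sensitive bits
      deny @{HOME}/.ssh/** mrwkl,
      deny @{HOME}/top-secret/** mrwkl,
    }
    EOF
    sudo apparmor_parser -r /etc/apparmor.d/untrusted-tool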
This is what makes it so doable since you don't need any privilege escalation.
The reason this is a big deal for a lot of people is that your ssh keys will give access to your git repos and other servers, unless you have them password protected or use gpg/sk ssh keys, which I think a lot of people don't do.
And of course, if you can see the known_hosts file / bash_history, you'll likely have access to more servers to propagate to.
Also, things like your browser cache are stored there.
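Both of those are cheap to partially mitigate (a sketch, assuming default OpenSSH paths):

    # add a passphrase to an existing key
    ssh-keygen -p -f ~/.ssh/id_ed25519
    # hash hostnames so known_hosts doesn't read like a target list
    printf 'HashKnownHosts yes\n' >> ~/.ssh/config
    ssh-keygen -H -f ~/.ssh/known_hosts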
This is why I store keys on a hardware key that requires me to touch it when it's used, and manually start ssh-agent when doing a lot of `git push`.
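For reference, generating such a key is one command (needs OpenSSH 8.2+ and a FIDO2 security key plugged in; the filename is arbitrary):

    # the private part never leaves the token, and every use needs a touch
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk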
Originally it was a lot less hostile; over the years it itself became the villain it tried to fight.