You can explain this to them, they don't care, you can even demonstrate how you can access their data without permission, and they don't get it.
Their app "works" and that's the end of it.
Ironically enough, even cybersecurity doesn't catch them for it; they're too busy harassing other teams about out-of-date versions of services that are either not vulnerable, or already patched in a way their scanning tools don't recognize.
Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don’t exist anymore.
My team where I work is responsible for sending frivolous newsletters via email and SMS to over a million employees. We use an OTP so employees can verify they gave us the right email/phone number to send them to. Security sees "email/sms" and "OTP" and therefore tickets us at the highest "must respond in 15 minutes" priority every time an employee complains about having lost access to an email or phone number.
Doesn't matter that we're not sending anything sensitive. Doesn't matter that we're a team of 4 managing more than a million data points. Every time we push back security either completely ignores us and escalates to higher management, or they send us a policy document about security practices for communication channels that can be used to send OTP codes.
Security wields their checklist like a cudgel.
Meanwhile, through our bug bounty program, someone found that a dev had opened a globally accessible instance of the dev employee portal with sensitive information, and reported it. Security wasn't auditing for those, since it's not on their checklist.
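For what it's worth, the flow you describe (and the "ten characters alphanumeric" demand that came later in threads like this) is a sprint's worth of work for a reason. A minimal sketch, assuming a Python service; all names here are illustrative, and a real deployment would use a shared datastore and a delivery channel instead of the in-memory dict:

```python
# Hypothetical sketch of a contact-verification flow: generate a
# short-lived code, record it against the claimed email/phone, and
# compare the user's reply in constant time.
import secrets
import string
import time

ALPHABET = string.ascii_uppercase + string.digits
CODE_LENGTH = 10          # ten alphanumeric characters
TTL_SECONDS = 600         # code expires after 10 minutes

def generate_code() -> str:
    # secrets, not random: cryptographically secure selection
    return "".join(secrets.choice(ALPHABET) for _ in range(CODE_LENGTH))

_pending = {}  # contact -> (code, expiry); stand-in for a real datastore

def issue(contact: str) -> str:
    code = generate_code()
    _pending[contact] = (code, time.time() + TTL_SECONDS)
    return code  # in practice, sent out over the email/SMS channel

def verify(contact: str, submitted: str) -> bool:
    code, expiry = _pending.get(contact, ("", 0.0))
    if time.time() > expiry:
        return False
    # constant-time comparison to avoid timing side channels
    return secrets.compare_digest(code, submitted)
```

The whole thing is stdlib; nothing here justifies a two-month architecture review.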
Security heard "OTP" and forced us through a 2-month security/architecture review process for this sign-off feature that we built with COTS libraries in a single sprint.
We pushed back, and initially they agreed with us and gave us an exception, but about a year later some compliance audit told them it was no longer acceptable and we had to change it ASAP. About a year after that they told us it needed to be ten alphanumeric characters, so we did a find-and-replace in the code base for "verification code" and "OTP", called them "verification strings", and security went away.
I guess it's fine if you get rid of sysadmins and have devs splitting their focus across dev, QA, sec, and ops. It's also fine if you have devs focus on dev, QA, and the code part of sec, and sysadmins focus on ops and the network part of sec. Bottom line is - someone needs to focus on sec :) (and on QAing and DBAing)
Wow, this really hits home. I spend an inordinate amount of time dealing with false positives from cybersecurity.
True, but over the last twenty years, simple mistakes by developers have caused so many giant security issues.
Part of being a developer now is knowing at least the basics of standard security practices. But you still see people ignoring things as simple as SQL injection, mainly because it's easy to ignore and they might never have been taught otherwise. Many of these people can't even read a Python error message, so I'm not surprised.
And your cybersecurity department likely isn't auditing source code. They are just making sure your software versions are up to date.
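And the SQL-injection mistake really is that simple, which is why it's depressing that scanners miss it while flagging patched version strings. A sketch using the stdlib sqlite3 module (table and column names made up):

```python
# The classic mistake and its fix, side by side.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is spliced straight into the SQL string,
    # so name = "' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, so the
    # same payload matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A version scanner can't see the difference between these two functions; only source review (or a bug bounty report) will.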
> My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees.
"frivolous newsletters" -- Thank you for your honesty! Real question: One million employees!? Even Foxconn doesn't have one million employees. That leaves only Amazon and Walmart according to this link: https://www.statista.com/statistics/264671/top-50-companies-...
And you go home at 5pm having had a good work day.
They might be a third party service for companies to send mail to _their_ employees
Docker, AWS, Kubernetes, some wrapper they've put around Kubernetes, a bunch of monitoring tools, etc.
And none of it will be their main job, so they'll just try to get something working by copying a working example, or reading a tutorial.