I see some core team on this thread, so just wanted to say THANK YOU! Awesome job! Keep fighting for the users!
I'm totally the wrong person to offer recommendations on mobile, but so far it works very well for me; then again, I use almost no third-party apps, and none of them are Play Store only. My only complaint is the hardware (outside of their control).
I recommend putting proprietary Play Store apps grabbed with Aurora Store in the work profile with Shelter[5].
[1] https://obtainium.imranr.dev/
[3] https://f-droid.org/packages/com.aurora.store/
[4] https://f-droid.org/packages/de.marmaro.krt.ffupdater/
So then what's the point of having a Play Store without Google Play services?
Also "private space" is now available with Android 15 and can provide the same separation within a single user profile.
Signal brings its own notifications, so they work perfectly.
The only app which was broken to the point of unusability was Too Good To Go, which demands that you pick locations on a map which relies on Play Services; the manual city entry is broken.
I use Google Maps only in Firefox Focus, but I've heard that builds of Google Maps up to about a year or so ago didn't rely on Play Services, and with Aurora Store you can manually enter a build number to install.
tl;dr: 10/10, fabulous experience.
Install Droidify, enable the repos, and install "microG Services" and "microG Companion".
I am personally more than okay with using the official, proprietary GP services from time to time if they abide by the same rules, especially since I can make these rules as strict as I want.
I wish that were true, but if you delete the hundreds of binary blobs (many with effectively root access) copied from a stock donor vendor partition, the phone won't function at all.
There is no such thing as a fully open source and user controlled Android device today.
Check if yours is on the list.
After opening the application, it complains about being installed through an "insecure method", and bails. Reinstalling through Google Play magically fixes that.
These "security checks" are spreading like measles, so expect to see this sooner or later.
I am alright with things that allow for improvement, at least in theory
Even more FOSS friendly graphics vendors like AMD and Intel rely on binary firmware.
I did do about three weeks of research, as I worried that maybe a number of apps wouldn't run on it or needed some form of deep attestation. Didn't find much; OpsGenie and other work apps are happy with the level of attestation GOS provides.
Great to have Google kicked off the phone. So nice to shut off the network permission for any apps that only require an internet connection to serve ads.
One tip from me, if you came from stock Pixel: You can download the default Pixel sounds and set them up like it was. Have a look for "Your New Adventure" online, the message sound is "Eureka".
> It doesn't matter that the app is trustworthy, because F-Droid are extremely incompetent with security and the apps you install from F-Droid are signed by F-Droid rather than the developer.
https://discuss.grapheneos.org/d/20212-f-droid-security-in-s...

https://discuss.grapheneos.org/d/18731-f-droid-vulnerability...
They also say, if you use F-Droid, at least use F-Droid Basic:
> Don't use the main F-Droid client. Android is pretty strict about SDK versions and as F-Droid targets legacy devices, it is very outdated.
https://discuss.grapheneos.org/d/11439-f-droid-vsor-droid-if...
> If the app is only available on F-Droid / third party F-Droid repo, use F-Droid Basic and use the third party repo rather than the main repo if available.

> If the app is available on GitHub then install the APK first from GitHub then auto-update it using Obtainium. Be sure to check the hash using AppVerifier which can be installed from Accrescent (available on the GrapheneOS app store).
https://discuss.grapheneos.org/d/16589-obtainium-f-droid-bas...
By the way, while GrapheneOS recommends Accrescent, I don't use it anymore because they can't even add apps like CoMaps, while some of the apps they actually added are proprietary.
Different use cases. User profiles are only active when you manually switch to them, while work profiles are active _alongside_ your main profile.
So for untrusted apps that you only use occasionally and on-demand (like the myriads of travel / shopping / random services apps), user profiles are great. For apps that you want to keep in the background, such as the proprietary messaging apps that all your friends use, a work profile is much nicer.
For those of us who aren't ready to cut the umbilical cord to the mothership, you can also root/firewall on normal android to stop this. In fact I choose to not be able to use banking apps in order to cut out the crappy ads.
https://grapheneos.org/faq#baseband-isolation
Sure, it's not perfect, but it's still really, really good. Even with the binary blobs that are on it, Graphene phones have been impossible to unlock via commercial cracking tools since 2022.
https://osservatorionessuno.org/blog/2025/03/a-deep-dive-int...
That doesn't seem like a con if you take into account the context: F-Droid is not shipping pre-built binaries from the developer; it asks for a buildable project from the developer.
If the source repo of the upstream dev is compromised, so will his own binaries be anyway.
That's because apps that aren't published only on the Play Store but also on other stores or for direct sideloading (for example for users on Huawei devices, which don't have the Play Store) need to be able to detect the installation method so they can handle updates on their own if there is no backing store.
It comes with some minor usability issues with captive Wi-Fi portals sometimes, but the trade-off of not having ads in apps or while browsing is well worth it IMHO.
Having recently gone through the F-Droid release process, I learned that this is not necessarily the case anymore.
F-Droid implements the reproducible builds concept. They re-build the developer's app, compare the resulting binary sans signature block, and if it matches they distribute the developer-signed binary instead of their re-built binary.
This is opt-in for developers so not all apps do it this way. I'd sure like to know how common this is, I wonder if there are any statistics.
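For anyone curious what "compare sans signature" looks like in practice, here is a simplified sketch of the idea in Python (it only skips v1 signature files under META-INF, whereas F-Droid's real verification also strips the APK Signing Block used by v2/v3 signatures; the file names are just placeholders):

    import hashlib
    import zipfile

    # Entries holding v1 (JAR) signature data, which are expected to differ
    SIG_SUFFIXES = (".RSA", ".DSA", ".EC", ".SF", "MANIFEST.MF")

    def content_hashes(apk_path):
        """Hash every zip entry in the APK except signature metadata."""
        hashes = {}
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                if name.startswith("META-INF/") and name.endswith(SIG_SUFFIXES):
                    continue  # ignore the signature files
                hashes[name] = hashlib.sha256(apk.read(name)).hexdigest()
        return hashes

    def reproducible_match(dev_apk, rebuilt_apk):
        """True if both APKs have identical contents apart from signatures."""
        return content_hashes(dev_apk) == content_hashes(rebuilt_apk)

    # Hypothetical file names for illustration
    print(reproducible_match("app-developer-signed.apk", "app-fdroid-rebuild.apk"))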
What kind of issues did you have? I think it does require google play services (which can be installed easily).
I have used GOS on a pixel 6 for the past two years with no issues. The phone finally died on me last weekend, so I'm in the market for a new pixel which will be getting GOS right away.
As opposed to "being free in all senses of the word", which is what the comment was talking about.
It all seems like security theater with the consequence that, oops, we just vendor-locked all our customers into running a less secure OS made by a company whose business is collecting personal data and showing ads that people don't want to see.
It looks like there's an app on F-Droid called "Rethink" that promises to do firewalling and DNS blocking, and also offers a WireGuard VPN. That seems promising, though I must add that I haven't tested it myself.
And now you're running a two year old phone and it's effectively obsolete.
If they would just upstream their firmware into the Linux kernel, you could upgrade these phones for years and years. Until the hardware is actually physically incapable of running the latest features.
Some vendors, like Google, promise to provide updates for a long time. But it's just that - a promise. There's no technical guarantee or mechanism for this, it's purely based on trust.
It's the responsible thing to do. Apple has done it a few times.
With the right hardware choices running blob-free linux is pretty straightforward.
On the other hand, the functionality is top notch. Easily the best integration of consumer-level DNS + firewall blocking in any application on any platform. Just block everything for an application by default, then watch the app's connection logs and start unblocking things via IPs, domains or wildcards until the app starts working again.
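To picture that default-deny model, here is a toy sketch of rule matching in Python (not Rethink's actual implementation; the allow rules are placeholders you would build up while watching the log):

    from fnmatch import fnmatch
    from ipaddress import ip_address, ip_network

    # Placeholder allow rules accumulated while watching the connection log
    ALLOW_DOMAINS = ["api.example.com", "*.cdn.example.net"]
    ALLOW_NETWORKS = [ip_network("203.0.113.0/24")]

    def allowed(domain=None, ip=None):
        """Default deny: only traffic matching an allow rule gets through."""
        if domain and any(fnmatch(domain, pattern) for pattern in ALLOW_DOMAINS):
            return True
        if ip and any(ip_address(ip) in net for net in ALLOW_NETWORKS):
            return True
        return False

    print(allowed(domain="tracker.ads.example.org"))  # False -> stays blocked
    print(allowed(domain="assets.cdn.example.net"))   # True  -> unblocked by wildcard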
The Precursor is promising, but software is not there yet.
I sit down at my desktop computer and send emails and type messages like this one. Then I get up from my desk and spend time with my family offline and present. It's pretty great.
And even if you install Google play on your graphene phone, it is still more isolated by default. Add that to the concept of storage scopes and more permissions control (apps have to ask for access to the network) and you have a more secure platform.
Which Nvidia card do you have, and at which clock speed does your GPU run?
> With the right hardware choices running blob-free linux is pretty straightforward.
Unfortunately no. Features like SSE are pretty amazing and have made CPUs really fast and efficient, but they're unfortunately also large attack vectors, so vulnerabilities like Spectre or Meltdown occur. You need proprietary microcode blobs to fix those security vulnerabilities in your CPU.
You can use Google apps and apps depending on them on GrapheneOS via sandboxed Google Play. The vast majority of Android apps can be used. You don't need to stop using Google apps/services or other mainstream apps to use GrapheneOS. It's likely nearly all the apps you use or even all of them work on GrapheneOS. There's a per-app exploit protection compatibility mode toggle (and finer-grained toggles) to work around buggy apps with memory corruption bugs. We avoid turning on features breaking non-buggy apps by default and hardware memory tagging is temporarily opt-in for user installed apps not marked as compatible due to how many memory corruption bugs it finds.
A small number of apps are unavailable due to checking for a Google certified device/OS via the Play Integrity API. These are mostly banking apps, but most banking apps do work on GrapheneOS. There are tap-to-pay implementations which can be used on GrapheneOS in the UK and European Economic Area. Several banking apps recently explicitly added support for GrapheneOS via hardware-based attestation as an alternative to the Play Integrity API. We're pushing for more apps to do this and for regulation disallowing Google from providing an API to app developers for enforcing that devices license Google Mobile Services. The Play Integrity API is often portrayed as a security feature, but Google chooses not to enforce a security patch level. They're permitting devices with years of missing important privacy and security patches but not a much more private and secure OS. Only their strong integrity level has a patch level check, but the check is only done for recent Android versions and only requires that they aren't more than 12 months behind on patches, which serves no real purpose.
> you can also root/firewall on normal android
This is different from our Network permission which not only blocks direct access but also indirect access via APIs requiring Android's low-level INTERNET permission. Our Network permission also pretends the network is down through many of the APIs. For example, scheduled jobs set to depend on internet access won't run.
In Sweden we typically use Swish, which again works great.
"Tap to pay" things are problematic though but it's not something I personally use (even before I migrated away from stock Android).
It's fairly pointless for apps to check for Mock Location being active without also verifying the OS via the Play Integrity API or hardware attestation API. Most apps checking for it are using or in the process of adopting the Play Integrity API. Apps enforcing the Play Integrity API basic/strong integrity level won't work on GrapheneOS unless they explicitly allow it. A growing number of apps doing this are explicitly allowing GrapheneOS. It would be counterproductive if our Location Scopes API didn't provide a way for apps to check for it, since those apps simply wouldn't permit GrapheneOS. However, it doesn't need to be the existing Mock Location API. It can be our own API which would only be used by apps explicitly choosing to permit GrapheneOS. This would allow apps like Pokemon Go and Ingress to permit GrapheneOS even if they insist on not allowing directly spoofing location.
SailfishOS is not open source itself. It's far less open source than Android which has the Android Open Source Project with the whole base OS.
It's basically only useful for debugging.
GrapheneOS supports having a Private Space in secondary users instead of only a single one in Owner. Supporting multiple Private Spaces per user is a planned feature at which point work profiles will be fully obsolete. The remaining use case for work profiles is to have both a Private Space and work profile in the Owner user.
The process adds a significant delay for updates but it does not actually protect users from developers in any meaningful way. This real world example with WireGuard demonstrates that.
The Precursor is the only pocket computer platform that is maximally open hardware, software, and firmware but you revert back to the 90s in terms of power as a consequence with alpha quality software today. If Bunnie is successful with his IRIS approach and making custom home-user-inspectable ASICS then maybe a middle ground path can be forged in the next few years.
For now the only modern computing experience with fully open hardware and software I am aware of are the ppc64le based devices by Raptor Engineering, but at a very high cost due to low demand, with huge form factor and no power management. I still own one anyway because we have to start somewhere.
For those that want this story to get better, please buy and promote the products of the few people trying to break us out of dependence on proprietary platforms.
Sadly this was, to your usual points, at the major expense of security making those devices purely research projects at best and not something anyone should ever actually use.
When you are stuck on a platform that requires closed firmware you are kind of stuck blindly accepting updates from the vendor to patch security bugs, stuck hoping they are not actually introducing new backdoors.
This is why I reject platforms that require closed firmware in the first place to the fullest extent I can.
That said, to your point, both are misrepresented as fully open frequently which is just not true, and obscures efforts by teams that are working on fully open hardware solutions the hard way.
- it’s a lie
- not even a white lie, they know perfectly well, that they can do way more
- most of the security “features” are completely useless
- they also know this
However, it’s very difficult to prove these, and laymen don’t and won’t understand the details.
If you are not running games (which you should not on a system you need to be able to trust) maximum clock speed from a modern GPU is not needed for most workstation applications.
I generally choose AMD GPUs for the best experience with open drivers these days on systems I need high GPU performance from.
> You need proprietary microcode blobs to fix those security vulnerabilities in your CPU.
Really? Which blobs do I need on RISC-V FPGA enclaves or my PPC64le Talos II workstation which has a fully open hardware motherboard and open CPU architecture?
I make different tradeoffs on different hardware to be sure depending on the threat model of the task I am working on. x86_64 is a bit of a shit show, but you still only have to trust your CPU vendor even there, as it is possible to have FOSS firmware/software for everything else.
Do you count binary firmware as 'open' or not? If not, AMD is not 'open' either. If you do, Nvidia now also has open kernel drivers. Mesa developers are exploring ways to get the new Mesa Nvidia Vulkan driver (NVK) to run on top of the open Nvidia kernel driver, which should eventually make Nvidia drivers as open as AMD.
Well at that point buying a GPU is definitely not worth your money. You're better off using a CPU's integrated graphics unit.
> I generally choose AMD GPUs for the best experience with open drivers these days on systems I need high GPU performance from.
Yeah I agree on that, I also purchase AMD cards exclusively now.
> Which blobs do I need on RISC-V FPGA enclaves or my PPC64le Talos II workstation
I assumed we were only talking about x86. But I also believe that POWER9 CPUs don't have SSE, prove me wrong. I guess you're running Linux? I'd be very interested in looking at the output of lscpu from one of these machines.
> x86_64 is a bit of a shit show
I fully agree there
It would be nice if the firmware itself was free software so that it could be shipped alongside the Linux kernel, maintained indefinitely and we could customize it however we want. The hardware is supposed to do what we want it to do, not what the manufacturer lets us do.
I don't like the fact every single device out there has entirely separate computers inside them running unknown proprietary software. It feels like our operating systems aren't operating the system anymore, it's like they're just some user app sandboxed away from the real system. This presentation explains what I mean:
It's an imperfect reality. Security by isolation of devices via IOMMU addresses real concerns such as devices being able to access RAM via DMA. It's great that GrapheneOS is doing this.
I generally only run gaming graphics cards on dedicated gaming machines, not on workstations I need to be able to trust. You can't use accelerated graphics in qubes anyway, specifically because graphics cards are hard to trust.
My requirements from a workstation are:
1. MUST have 100% open source code loaded in system memory
2. SHOULD have open source software in the boot trust path (coreboot/tpm2 secure boot, etc)
3. SHOULD have open hardware to the furthest extent possible that meets my use case
4. SHOULD be fully auditable and tamper evident using at-home tools and methods (like the Precursor)
Yeah I only use dead simple workstation cards or integrated graphics on my workstations, and AMD GPUs on my gaming systems which I don't trust at all (but still prefer to support companies that use open drivers)
> But I also believe that POWER9 CPUs don't have SSE, prove me wrong.
POWER9 has its own SIMD system (AltiVec/VMX/VSX) instead of SSE, which is entirely its own thing. I have no idea of the performance tradeoffs here for various use cases, though, as freedom is the biggest factor for me.
> I'd be very interested in looking at the output of lscpu from one of these machines.
Here is an lscpu from an 8 core Blackbird though it will probably render poorly on HN.
  Architecture:             ppc64le
  Byte Order:               Little Endian
  CPU(s):                   32
  On-line CPU(s) list:      0-31
  Model name:               POWER9, altivec supported
  Model:                    2.3 (pvr 004e 1203)
  Thread(s) per core:       4
  Core(s) per socket:       8
  Socket(s):                1
  Frequency boost:          enabled
  CPU(s) scaling MHz:       58%
  CPU max MHz:              3800.0000
  CPU min MHz:              2166.0000
  Caches (sum of all):
    L1d:                    256 KiB (8 instances)
    L1i:                    256 KiB (8 instances)
    L2:                     4 MiB (8 instances)
    L3:                     80 MiB (8 instances)
  NUMA:
    NUMA node(s):           1
    NUMA node0 CPU(s):      0-31
  Vulnerabilities:
    Gather data sampling:   Not affected
    Itlb multihit:          Not affected
    L1tf:                   Mitigation; RFI Flush, L1D private per thread
    Mds:                    Not affected
    Meltdown:               Mitigation; RFI Flush, L1D private per thread
    Mmio stale data:        Not affected
    Reg file data sampling: Not affected
    Retbleed:               Not affected
    Spec rstack overflow:   Not affected
    Spec store bypass:      Mitigation; Kernel entry/exit barrier (eieio)
    Spectre v1:             Mitigation; __user pointer sanitization, ori31 speculation barrier enabled
    Spectre v2:             Mitigation; Software count cache flush (hardware accelerated), Software link stack flush
    Srbds:                  Not affected
    Tsx async abort:        Not affected
https://old.reddit.com/r/StallmanWasRight/comments/1l8rhon/a...
MNT Reform has a regular closed source ARM SoC as the main component along with a bunch of other closed source components. The chassis, board and boot chain being open doesn't make a device mostly open hardware. Anything simply using an ARM or x86_64 SoC at the core is not truly mostly open. It's a closed source system (the SoC) with open source components between it and other closed source components like radios, a display controller, SSD, etc. The same applies to other ARM and x86_64 laptops. They're built around closed source components even if the board many components go in and the boot chain is open source.
Having an open source boot chain and not requiring loading proprietary firmware from there or from the OS doesn't mean the device has open firmware. It's conflating not needing to load firmware with the firmware not existing or being open, which isn't the case.
> The Precursor is the only pocket computer platform that is maximally open hardware, software, and firmware but you revert back to the 90s in terms of power as a consequence with alpha quality software today. If Bunnie is successful with his IRIS approach and making custom home-user-inspectable ASICS then maybe a middle ground path can be forged in the next few years.
This is far closer to being how you're describing other platforms. However, it does have closed source components including the FPGA and Wi-Fi. It's as close as it gets to being open hardware and that has a huge cost. Platforms simply using a closed source ARM SoC and many other closed source components are not anywhere close to being open. This is what it takes to get close, and it's not fully there.
> For now the only modern computing experience with fully open hardware and software I am aware of are the ppc64le based devices by Raptor Engineering, but at a very high cost due to low demand, with huge form factor and no power management. I still own one anyway because we have to start somewhere.
It's the motherboard that's open source. The IBM CPUs used with it are not open hardware.
> For those that want this story to get better, please buy and promote the products of the few people trying to break us out of dependence on proprietary platforms.
Laptops with a nearly completely closed source SoC / CPU are not a fully open platform, especially when it's an SoC providing most of the functionality. Talos II has a lot of functionality on their open motherboard vs. an ARM SoC with most of it on the SoC, but either way the CPU being closed source is still the most core component being closed source.
Following this, we posted multiple threads correcting inaccurate claims about what we had said about this and made it clear GrapheneOS was continuing. GrapheneOS was fully ported to Android 16 before the end of June, which took longer than usual due to the changes but was still completed.
Snapdragon uses a fork of the open source EDK2 as their bootloader prior to the OS and publishes the source code. It doesn't mean Snapdragon is open source.
Most of the firmware has nothing to do with the boot chain leading up to the OS on the SoC.
Typical Android devices have fully open source kernel drivers. There are usually dozens of closed source libraries in userspace such as the well known Mali GPU driver library. Closed source libraries can still be reviewed. Open source doesn't make something secure and trustworthy. It also isn't a hard requirement to review a library. Auditing a low-level C library doesn't imply finding all the vulnerabilities, particularly something hidden. Widely used open source code still has many vulnerabilities lasting for long periods of time after many people have reviewed it. It does not solve security or trust.
> That said, to your point, both are misrepresented as fully open frequently which is just not true, and obscures efforts by teams that are working on fully open hardware solutions the hard way.
A closed source SoC with open source hardware built around it and other closed source components including radios is not a fully open source computer either.
The ISA is open source, not the whole CPU architecture and design. There are older open core designs from IBM but that's a different thing from the more modern and powerful Power9 and Power10 CPUs.
> you still only have to trust your CPU vendor even there, as it is possible to have FOSS firmware/software for everything else
A device with assorted closed source components including as part of the motherboard itself is hardly open beyond the CPU. Open source also doesn't mean you aren't trusting those vendors. With a fully open hardware design CPU, you're still trusting that it matches the open source design and you're trusting the open source design. The manufacturing process is also generally going to be proprietary.
Except the default browser is Chromium with some changes
This reminds me of a recent HN comment I saw that suggested using Firefox was "kicking Google where it hurts" or something like that
Like Firefox, this project depends on Google. For the hardware, the web browser and who knows what else
It even offers a sandboxed Google Play Store
It tries to copy Google paternalism
It swaps a Google mothership for a Graphene mothership
What if the computer owner does not want a mothership
Can connections to Graphene servers be blocked, i.e., are these connections optional or mandatory
Even Netguard, which works on any hardware and does not require root, makes unnecessary connections to ipinfo.io servers, effectively giving them a list of almost every domain the user's phone is trying to access.
If the concern is apps that only require internet connection for ads, Netguard solves that problem without root
Most apps but not all will try to connect to the internet at some point, even if you never use them
The user-hostile design of Android is that apps keep running in the background after they are "closed"
(There are crude apps one can use to automate manually killing each process with "Force stop" but no one uses them. This doesn't prevent apps from trying to access the internet on some preset schedule)
Netguard will show when apps try to connect and block the connections. It provides DNS logs and PCAPs.
One does not even need Netguard to see this subversive activity
Try this at home
Enable IP forwarding on a computer you can control, i.e., one that is running an OS you can compile yourself such as Linux or BSD
Put the phone on the same network as this computer
Set the phone's gateway address to the address of the computer
Run tcpdump on the computer and filter for the phone's IP address
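If you prefer scripting it over reading raw tcpdump output, a rough equivalent with scapy looks like this (a sketch only; it needs root and scapy installed, and the phone IP is a placeholder you substitute with your own):

    from scapy.all import sniff, DNSQR, IP

    PHONE_IP = "192.168.1.50"  # placeholder: your phone's address on the LAN

    def log_packet(pkt):
        # DNS query names reveal which hosts apps are trying to reach
        if pkt.haslayer(DNSQR):
            print(pkt[IP].src, "->", pkt[DNSQR].qname.decode())
        else:
            print(pkt.summary())

    # The BPF filter limits the capture to traffic to/from the phone
    sniff(filter="host " + PHONE_IP, prn=log_packet, store=False)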
Is making a connection to our API a cause for concern? If that is the case, we welcome OSS projects to use our local IP databases, which include our free IPinfo Lite database that we primarily designed for firewall and privacy applications.
Looks like they are doing what a small company is able to do.
If Bunnie's ASIC efforts succeed, then we have auditable, reasonably fast chips in the next few years and a truly 100% open device. Tropic Square is another to keep an eye on.
Fully aware of everything in your descriptions here, but you always repeat this stuff as though I am not. Probably useful for others though.
Where we always seem to disagree is you usually try to dismiss mostly open solutions as no better than mostly closed as though the effort to pursue transparency is pointless. I feel every single component with open firmware and open hardware is a huge win, making accountability and community improvement possible. Likewise every blob is an eyesore that should be reverse engineered and replaced... or switch to more transparent alternatives when they exist.
Sure, auditing never catches all bugs, but it catches a -lot- of them. There are many severe security flaws I would never have had a chance in hell of having the time to find in closed binaries, let alone fixing them.
Sure, underhanded C and all sorts of sneaky bugs can exist, but an open C solution could be replaced with an open Rust solution structured for easy auditing, or another language that makes it harder to sneak many common types of bugs in.
If a vendor will not let me look at their code, I am extra suspicious of glaring backdoors or bugdoors until proven otherwise given countless examples in the wild.
I have always agreed open source alone does not mean code can be trusted. Most open source code is shit and should -not- be trusted (I review it for a living) but I am absolutely certain open source is a prerequisite to a community maintainable trustworthy solution existing where we get both freedom and security.
They did not replace firmware with open alternatives. Not updating firmware is not replacing it.
> Sadly this was, to your usual points, at the major expense of security making those devices purely research projects at best and not something anyone should ever actually use.
They steer people to devices with severe unpatched firmware vulnerabilities and an enormous number of severe unpatched software vulnerabilities in the case of Replicant. This is covered up and people are misled about it. These projects claiming to be focused on avoiding backdoors are in fact deliberately backdoored through not patching known vulnerabilities for ideological reasons.
> When you are stuck on a platform that requires closed firmware you are kind of stuck blindly accepting updates from the vendor to patch security bugs, stuck hoping they are not actually introducing new backdoors.
You still trust the developers of open source software and firmware. Open source doesn't result in all vulnerabilities being found, including intentional ones. It's not even close to providing it.
> This is why I reject platforms that require closed firmware in the first place to the fullest extent I can.
The platforms you're describing as having fully open firmware still have closed source firmware.
It's near completely closed source hardware. The SoC providing nearly the whole core system is fully closed source. An open source boot chain after the closed source early boot doesn't change this. Other components are closed source too. It's closed source with open source bits in between. Compared to the complexity of the SoC, radios, etc. the open source parts are insignificant. Open source between closed source components with most of the complexity is not mostly open source. It's simply not true.
> and the Precursor as -maximally- open
It's possible to use an open source RISC-V SoC instead of programming a CPU with a closed source FPGA. They don't use a closed source FPGA to be maximally open but rather to be closer to being able to inspect it.
> Fully aware of everything in your descriptions here, but you always repeat this stuff as though I am not. Probably useful for others though.
I don't think you're unaware of it. You must be aware the MNT Reform has a fully closed source ARM SoC with most of the core system's complexity but you still call it mostly open source.
> Where we always seem to disagree is you usually try to dismiss mostly open solutions as no better than mostly closed as though the effort to pursue transparency is pointless. I feel every single component with open firmware and open hardware is a huge win, making accountability and community improvement possible. Likewise every blob is an eyesore that should be reverse engineered and replaced... or switch to more transparent alternatives when they exist.
They are not mostly open solutions. It's false marketing. Open source does not have the properties you claim it does of heavily avoiding trust in the developers or providing much better security.
> Sure, underhanded C and all sorts of sneaky bugs can exist, but an open C solution could be replaced with an open Rust solution structured for easy auditing, or another language that makes it harder to sneak many common types of bugs in.
Memory corruption isn't required for serious subtle vulnerabilities and even safe Rust has plenty of room for memory corruption. Rust does not make auditing easy. It makes it easier than C, which is a low bar. Auditing C for deliberate vulnerabilities can easily be harder than auditing assembly code without anything that looks obfuscated.
> If a vendor will not let me look at their code, I am extra suspicious of glaring backdoors or bugdoors until proven otherwise given countless examples in the wild.

> I have always agreed open source alone does not mean code can be trusted. Most open source code is shit and should -not- be trusted (I review it for a living) but I am absolutely certain open source is a prerequisite to a community maintainable trustworthy solution existing where we get both freedom and security.
There are many glaring vulnerabilities in the most widely inspected open source projects including the Linux kernel. Many have persisted for not only years but decades. Open source does not inherently result in all these vulnerabilities being found and fixed, whether they were intentional or not. Open source can help with it but it provides no guarantee of better security.
The Linux kernel is a typical collaborative open source project where performance, scalability and features trample over security. It being such an expansive and collaborative project means there's massive attack surface for intentional vulnerabilities and it doesn't have serious protections against them. The lack of prioritizing correctness and security for nearly all of it is pretty much equivalent to intentional vulnerabilities. Deciding not to deploy very useful features for finding / fixing vulnerabilities due to the minor work it creates is typical, such as not marking intended overflows to have automatic overflow checks as an option. There's massive pushback against very basic things. The effort to introduce Rust for drivers has gone horribly despite lots of resources and it has faced far greater resistance in the core kernel.

Meanwhile, iOS has a kernel increasingly focused on security where they overhaul the whole thing for it. This is an example where one company controlling a project without collaborative development is a massive win for security. There are projects like SQLite which don't take on the collaborative and open development aspects of open source. AOSP is similar to an extent, but heavily uses collaborative open source projects like Linux as core parts of it, which largely don't have the same significant focus on security it grew over time. AOSP is about as security focused as iOS itself, but open source projects they use including Linux certainly aren't.
There is no cause for concern necessarily. These are design choices, nothing more.
Users have no idea what happens to the data that leaves their computers. To quote from another story currently on the HN front page: "It's incredibly easy to give information away. But once that data is out there, it's nearly impossible to take back." >>44689059
Promises made by developers are reassuring to some, but rarely if ever legally enforceable in the event something goes wrong, and the harm already caused may be beyond redress. As a proactive measure users can, among other things, seek to minimise the amount of data they send. For example, some users might want the _option_ to stop their phones from constantly trying to ping or connect to remote servers _without any explicit user intent to do so_. Maybe they do not want their phone to act like a beacon to someone else's remote server.
The point of the comment is that sometimes there are remote connections being made to servers chosen by developers that are assumed to be OK with all users, e.g., connections to Graphene servers, IPinfo servers, or myriad other examples. Meanwhile there is no option for the user to disable this behaviour. There may be some users who prefer _zero_ remote connections except the ones they themselves choose to initiate or enable. The possibility of such users often seems to be overlooked or deliberately ignored.
Like Firefox constantly sending HTTP requests to remote servers to check for "connectivity". Even when the user is not trying to connect to any server. The requests are sent in the clear. This is not optional behaviour.
By that time the amount of money that will have been made can justify and exceed whatever fine they might expect to get in court.
You only ever read parts of what I say and continue to argue as though I think open source code is inherently secure, and continually ignore every time I AGREE with you that most open source code is shit.
Please listen when people are largely agreeing with you instead of hitting back with walls of text as though they are not.
The MNT Reform is mostly open in terms of everything but the CPU. That is a -great- start as it means fewer parties you have to trust. Also they support two different CPU vendors as well as an FPGA option soon. The CPU is a swappable component in an otherwise open platform. How can you dismiss that level of flexibility and user power as anything but substantial progress over the status quo? Please give people working hard for more freedom-respecting hardware their due credit. Minimizing such very hard work will not win you allies.
Also yes, the Linux kernel is a security shit show to be sure, as are most desktop Linux distros, but -because- it is open I can heavily patch it and customize it to reduce attack surface.
Also, because it is open, projects like Asterinas can reference it to make an ABI-compatible modern replacement in Rust, which is making rapid progress!
Transparency is step 1 to any major progress in freedom and security and that is why I am a broken record demanding more of it from all projects widely used in high risk scenarios.
Chromium has vastly superior security compared to Firefox. https://madaidans-insecurities.github.io/firefox-chromium.ht...
> It tries to copy Google paternalism

> It swaps a Google mothership for a Graphene mothership
Nonsense claims. All network connections made by the OS are well documented on the official website: https://grapheneos.org/faq#default-connections
There are only a few services GrapheneOS devices connect to:
- a time server (securely, over HTTPS, not insecure NTP)
- the OS update server (obvious; it's just plain HTTP requests, no user identifiers other than the IP address, which can easily be masked by using Tor or a VPN)
- the GrapheneOS App repository, which provides updates for preinstalled apps like Auditor, as well as the Vanadium browser and WebView (it's critical to get security patches for your browser in a timely manner)
- network connectivity checks (required to sign in to public wifis that use captive portals; can be entirely disabled in the settings; a minimal sketch of how such a check works follows after this list)
- SUPL and PSDS through GrapheneOS proxies for A-GNSS because there is no network location service enabled by default
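Here is the sketch of the connectivity check mentioned above: the OS fetches a URL that should return 204 with an empty body, and anything else (for example a login page injected by the portal) means a captive portal is in the way. The endpoint below is an assumption based on the GrapheneOS documentation and may not be the exact URL the OS uses:

    import urllib.request

    CHECK_URL = "https://connectivitycheck.grapheneos.network/generate_204"  # assumed endpoint

    def check_connectivity(url=CHECK_URL, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # 204 with an empty body means the open internet is reachable;
                # a portal redirect (ending up as 200) suggests a captive portal.
                return "online" if resp.status == 204 else "captive portal"
        except OSError:
            return "offline"

    print(check_connectivity())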
> Can connections to Graphene servers be blocked, i.e., are these connections optional or mandatory
You can block all the connections. You don't even need to, since they can all be disabled in the settings. If you disable the System Updater app, you're gonna have to adb sideload your system updates https://grapheneos.org/usage#updates-sideloading.
> If the concern is apps that only require internet connection for ads, Netguard solves that problem without root
You don't need Netguard, GrapheneOS has a built in network permission toggle, which offers even better protection than a firewall, since it completely blocks access to the underlying network socket (https://grapheneos.org/features#network-permission-toggle)
> The user-hostile design of Android is that apps keep running in the background after they are "closed"
You can deny apps running in the background, even on stock Android. This isn't unique to Android btw, I'm sure you've come across the system tray in Windows before. Those are all apps running in the background. Android basically has the same thing, it's in the notification center, and you can also stop background apps from there.
GrapheneOS definitely doesn't use it. It doesn't contact any third-party APIs. Everything is well documented: https://grapheneos.org/faq#default-connections
In both cases, they could opt to download our database locally and use it through their own API system.
We sponsor the AlmaLinux Foundation through a data sponsorship for their mirroring system: https://almalinux.org/blog/2024-08-07-mirrors-1-to-400/
But since privacy is a major concern for them, they should just use our IP-to-country database and host an API themselves on top of it: https://ipinfo.io/lite
We are happy to support and be part of any software that wants to use our data.
I agree that it would be a more privacy-friendly solution for them to host their own API, but that got me thinking, wouldn't it be possible to just let users download the IPinfo data and use it locally? Does IPinfo offer database downloads? That's also how the Server-Status Firefox extension (https://github.com/tdulcet/Server-Status) works (but it doesn't use IPinfo). Also asking for potential personal use: How does the quality of IPinfo data compare to MaxMind, DB-IP, etc?
I wish GrapheneOS would support non-Pixel hardware, though, specifically my Fairphone 4. I get why that probably won't ever happen, but it feels like a massive regression in terms of repairable hardware to move away from that.
> wouldn't it be possible to just let users download the IPinfo data and use it locally? Does IPinfo offer database downloads?
Of course, you can download our free IP database right now: IPinfo Lite
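For example, assuming the Lite database is downloaded in MMDB format, a purely local lookup with the maxminddb Python package needs no network request at all (the filename is a placeholder):

    import maxminddb

    # Placeholder path to the downloaded IPinfo Lite database
    reader = maxminddb.open_database("ipinfo_lite.mmdb")
    record = reader.get("8.8.8.8")  # lookup happens entirely offline
    print(record)
    reader.close()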
> Also asking for potential personal use: How does the quality of IPinfo data compare to MaxMind, DB-IP, etc?
We are miles ahead of everyone in terms of accuracy. Currently, we have 1,100+ PoPs across the world running active measurements, while traditional IP geolocation services are not much more than ASN/ISP reported data aggregation and parsing services. Our priority above all is accuracy and at this moment we are likely the industry leader for that.
If you have the time, go through some of our posts in our community and you will be surprised how good our data is right now. I will share my recent favorite one:
https://community.ipinfo.io/t/the-north-korean-gamers-on-ste...
https://www.zdnet.com/article/3-ways-to-stop-android-apps-ru...
https://www.androidpolice.com/how-to-close-android-apps/
https://www.androidauthority.com/how-to-close-apps-on-androi...
An exception would be Windows applications that can be "run as a service". This is generally not default behaviour for user-installed Windows applications; it generally requires administrative privileges and manual configuration for each application.
https://en.wikipedia.org/wiki/Windows_service
https://stackoverflow.com/questions/3582108/create-windows-s...
https://www.windowscentral.com/how-start-and-stop-services-w...
Unlike Windows, Android does not present an option to close an application while the application is in view. Android applications can be "swiped off" the screen, but they are not closed. By default, _all_ Android applications continue to run in the background. Closing an application in Android requires using a separate app, for example opening the "Settings" app, finding the app to be closed in a list of apps, then stopping it by selecting "Force stop" as described in the articles above. If the Android user wants to close a number of applications running in the background simultaneously, she is out of luck. They can only be closed serially, one after the other: she must find each of them via the Settings app and Force stop each one individually. This is extremely tedious and slow and, as one would expect, results in almost all Android users allowing applications to run in the background. The tedium mandated by this design could be purely coincidental.
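If someone did want to script that serial Force-stop dance over USB, a rough sketch with adb would look like this (assuming USB debugging is enabled and adb is installed; the package names are placeholders):

    import subprocess

    PACKAGES = [
        "com.example.shopping",  # placeholder package names
        "com.example.travel",
    ]

    def force_stop(package):
        # "adb shell am force-stop <pkg>" does the same as Settings > Force stop
        subprocess.run(["adb", "shell", "am", "force-stop", package], check=True)

    for pkg in PACKAGES:
        force_stop(pkg)
        print("stopped", pkg)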
NFC payments are through Google pay / wallet, which is unsupported.
No thanks; I choose to forego Too Good To Go instead of that. They are the only truly broken app I have found.
- RCS chats not sending/receiving, which has caused me to not be able to receive or send messages from/to multiple group chats with friends/family (probably going to be an issue with GrapheneOS, but at least plenty of people have reported that to be possible to work; currently doing the "disable RCS and wait 10 days" dance, so we'll see how that shakes out)
- Every other reboot the speakers would fail to initialize, meaning I couldn't hear anything except through Bluetooth (massive problem for getting up in the morning if I'm relying on my phone's alarms!).
- Microphone quality was inconsistent; sometimes I'd sound fine, and other times I'd sound muddy. Also not an issue through Bluetooth; it was just the phone's built-in microphone(s).
These were probably fixable issues, but I'm lazy and I wanted to give GrapheneOS a go, anyway (and so far I'm pretty happy with it, minus RCS still being a work-in-progress).
However it does not remove Google's control, e.g., ability to pull the plug
Google controls the hardware and the source code for the default browser
Some users might want more control, less dependence on Google
"Paternalism" is a belief by developers that they "know better" than the computer owner what choices should be made for someone else's computer
For example, pre-installing software or enabling connections to remote servers, and making those choices the default.
Paternalism dismisses any idea of personal autonomy
Providing a computer user with choices rather than "defaults" could mean loss of control by the developer and any associated revenue