It's possible to learn something by aggregating a bunch of those individually-privatized statistics. Randomized response [1] is a canonical example. More generally, local differential privacy is a stronger privacy model where users privatize their own data before releasing it for (arbitrary) analysis. As you might expect, the stronger privacy guarantee means worse utility, sometimes much worse [2].
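To make the aggregation idea concrete, here is a minimal sketch of classic randomized response for a single sensitive bit. The function names, the choice of `p_truth = 0.75`, and the population numbers are illustrative assumptions, not anything prescribed by [1]; the point is just that each user's report is individually noisy, yet the debiased average still recovers the population-level fraction.

```python
import random

def randomized_response(true_bit: bool, p_truth: float = 0.75) -> bool:
    """Report the true bit with probability p_truth; otherwise report a fair coin flip.

    With p_truth = 0.75, each report reveals the true bit with probability 0.875,
    giving a local privacy guarantee of roughly eps = ln(0.875 / 0.125) ≈ 1.95.
    """
    if random.random() < p_truth:
        return true_bit
    return random.random() < 0.5

def estimate_true_fraction(reports: list[bool], p_truth: float = 0.75) -> float:
    """Debias the aggregate: E[reported fraction] = p_truth * f + (1 - p_truth) / 2."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) / 2) / p_truth

# Example: 100k users, 30% of whom truly hold the sensitive attribute.
true_bits = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(b) for b in true_bits]
print(estimate_true_fraction(reports))  # ≈ 0.30, recovered from purely local noise
```

Note how the utility cost shows up directly: the estimator's variance grows as `p_truth` shrinks (i.e., as the local guarantee gets stronger), which is the local-versus-central gap alluded to in [2].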