1. They should deliberately introduce noise into the raw data (see the sketch after this list). Nazis with the raw census data can spend all month trying to find the two 40-something Jews the data says live on this island of 8400 people, but they were just noise. Or were they? No way to know.
2. They should bucket everything and discard all raw data immediately. This hampers future analysis, so the buckets must be chosen carefully, but it is often enough for real statistical work, and you can often just collect the data again later if you realise you needed different buckets.
3. They shouldn't collect _anything_ personally identifiable. This is hard, because almost anything can be personally identifiable. If you're 180cm tall, your height doesn't seem personally identifiable, but ask Sun Mingming. If you own a Honda Civic, your car's model doesn't seem personally identifiable, but ask somebody in a Rolls Royce Wraith Luminary...
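One classic way to do option 1 is randomized response: record each sensitive boolean truthfully only with some probability, so any individual record is deniable while aggregates stay estimable. A minimal sketch in Python; the `p_truth` value and the boolean attribute are illustrative assumptions, not anything a real census necessarily uses:

```python
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Record the real answer with probability p_truth, otherwise a fair
    coin flip. Any single record might be noise, so it proves nothing."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_rate(noisy: list[bool], p_truth: float = 0.75) -> float:
    """Aggregates survive: observed = p_truth * true + (1 - p_truth) / 2,
    so solve that for the true rate."""
    observed = sum(noisy) / len(noisy)
    return (observed - (1 - p_truth) / 2) / p_truth
```

Under a scheme like this, those two 40-something records really might be noise, yet island-wide statistics can still be recovered to within sampling error.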
Why not just ensure that any personally identifiable data is properly bucketed, and discarded if it is too strongly identifying? If you are storing someone's height, age, and gender, you can just increase the bucket size for those fields until every combination of identifiable fields occurs several times in the dataset. If there are always a few different records with well-distributed values for every combination of identifiable fields, you can't infer anything about an individual based on which buckets they fall into. This is essentially k-anonymity: every record is indistinguishable from at least k-1 others on its identifiable fields.
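Here is a rough sketch of that bucket-widening loop, assuming numeric fields and one shared bucket size for simplicity; real code would widen each field separately, and a categorical field like gender would need a generalisation hierarchy instead:

```python
from collections import Counter

def bucket(value: float, size: float) -> str:
    """Map a value to a coarse range label, e.g. age 43 at size 10 -> '40-50'."""
    lo = int(value // size) * size
    return f"{lo}-{lo + size}"

def widen_until_anonymous(records, fields, k=5, size=5, step=5):
    """Grow the bucket size until every combination of bucketed field
    values occurs at least k times, then keep only the bucketed fields."""
    assert len(records) >= k, "fewer than k records can never be k-anonymous"
    while True:
        keys = [tuple(bucket(r[f], size) for f in fields) for r in records]
        if all(count >= k for count in Counter(keys).values()):
            return [dict(zip(fields, key)) for key in keys]
        size += step

# e.g. widen_until_anonymous(people, ["height", "age"], k=5)
```

Unfortunately, k-anonymity is not sufficient on its own: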
> Homogeneity Attack: This attack leverages the case where all the values for a sensitive value within a set of k records are identical. In such cases, even though the data has been k-anonymized, the sensitive value for the set of k records may be exactly predicted.
> Background Knowledge Attack: This attack leverages an association between one or more quasi-identifier attributes with the sensitive attribute to reduce the set of possible values for the sensitive attribute.
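The homogeneity attack, at least, is mechanical to test for. The usual countermeasure is to also demand diversity of the sensitive value within each group; here is a sketch of the simplest variant (distinct l-diversity) over already-bucketed records, with hypothetical field names:

```python
from collections import defaultdict

def distinct_l_diverse(records, quasi_fields, sensitive_field, l=2):
    """Check that every group of records sharing the same quasi-identifier
    buckets contains at least l distinct sensitive values, so no group
    betrays its members' sensitive attribute through sheer uniformity."""
    groups = defaultdict(set)
    for record in records:
        key = tuple(record[f] for f in quasi_fields)
        groups[key].add(record[sensitive_field])
    return all(len(values) >= l for values in groups.values())

# e.g. distinct_l_diverse(rows, ["height", "age"], "diagnosis", l=3)
```

Note that this only addresses homogeneity; the background knowledge attack depends on what the attacker already knows, which is much harder to rule out by looking at the dataset alone.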
Optimal k-anonymization is also computationally hard [2].