zlacker

[return to "Humanness in the Age of AI"]
1. frabcu+a7[view] [source] 2023-04-01 06:54:10
>>allanb+(OP)
ELI5, how does this work?

It claims to both identify humans and be zero knowledge.

What’s to stop me registering a bunch of times then letting my bot use my identities?

The answer is implied to be my iris scan. But then it isn't zero-knowledge for some entity, is it? Unless it's relying on the Orb never being hacked?

Any good third-party write-ups on it? The WorldCoin page is a bit long and doesn't quickly explain how it works at a basic level.

◧◩
2. macrol+df[view] [source] 2023-04-01 08:19:55
>>frabcu+a7
Seems it creates an "iris code" from the iris scan. Then they want to develop some new hash function to hash that iris code into an iris hash.

That iris hash would then be stored in some decentralised database, a blockchain or something similar.

https://worldcoin.org/blog/developers/privacy-deep-dive
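A minimal sketch of the flow described above, assuming SHA-256 as a stand-in for their yet-to-be-developed hash function and a plain Python set as a stand-in for the decentralised store (both are illustrative, not how Worldcoin actually implements it):

```python
import hashlib

registered_hashes = set()  # stand-in for the on-chain iris-hash store

def iris_hash(iris_code: bytes) -> str:
    # Only the hash is published; the raw iris code stays on the Orb.
    return hashlib.sha256(iris_code).hexdigest()

def register(iris_code: bytes) -> bool:
    # Deduplication: the same iris code yields the same hash, so a
    # repeat registration is rejected without storing the iris itself.
    h = iris_hash(iris_code)
    if h in registered_hashes:
        return False  # already registered
    registered_hashes.add(h)
    return True

print(register(b"alice-iris-code"))  # True: first registration
print(register(b"alice-iris-code"))  # False: duplicate rejected
print(register(b"bob-iris-code"))    # True: different iris
```

The catch this sketch glosses over is that real iris codes are noisy: two scans of the same eye differ in some bits and are normally matched by Hamming distance, not exact equality. A standard cryptographic hash breaks on a single flipped bit, which is presumably why they need a custom scheme rather than plain SHA-256.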

◧◩◪
3. moonch+1u[view] [source] 2023-04-01 11:15:14
>>macrol+df
So if you damage your iris, you're fucked?
◧◩◪◨
4. aramac+zB[view] [source] 2023-04-01 12:31:34
>>moonch+1u
Yes. That is how biometric data works; if my finger gets cut off, I can no longer unlock my iPhone with Touch ID.

Wonder if they have a backup ID method of some sort, or if it's solely iris data.

◧◩◪◨⬒
5. Phemis+OC[view] [source] 2023-04-01 12:41:12
>>aramac+zB
It is likely recording the whole time - e.g. also while the person is moving their iris into view of the camera.

So, I would expect them to have captured a seriously significant AND deduplicated (through the iris) database of faces as well.

Obviously, this is an extremely valuable database in its own right: it contains biometric data on a ton of people from "non-restricted" countries, and it may well address some of the systemic biases current biometric systems show due to the lack of a representative dataset to train on.

These systemic biases are currently one of the major arguments _against_ using biometric systems at an even bigger scale, so solving them would take away a big tool in the privacy-rights activists' toolbox.

Are biases still systemic if at one point a significant amount of people from every geographical location on the planet has been included in the training set?

I can imagine arguing _for_ this position at the very least becomes a lot more complex.
