It never ceases to amaze me how companies choose the worst software!
https://www.fastcompany.com/4041720/heres-why-it-took-gmail-...
Most people just don’t care though.
> 31.3 Data Processing Addendum. If you are a business, enterprise, or education account owner and your use of the Services requires Zoom to process an End User’s personal data under a data processing agreement, Zoom will process such personal data subject to Zoom’s Global Data Processing Addendum.
Though it limits the scope of the data collection: https://explore.zoom.us/docs/doc/Zoom_GLOBAL_DPA.pdf
I made a video-conferencing app for virtual events (https://flat.social). No audio or video is ever recorded; packets are only forwarded to subscribed clients. Frankly, while designing the service it didn't even cross my mind to use this data for anything other than the online event itself.
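For context, the core of that design is a selective-forwarding relay: the server only tracks who is subscribed to whom and hands each incoming packet straight to those subscribers, so there is never a reason to persist media. A minimal TypeScript sketch of the idea (illustrative only, not flat.social's actual code; SelectiveForwarder, MediaPacket, and the sink callbacks are assumed names):

    // Sketch of selective forwarding: the relay never stores media; each packet
    // is handed straight to the clients currently subscribed to its sender.
    type ClientId = string;

    interface MediaPacket {
      senderId: ClientId;
      payload: Uint8Array; // opaque, already-encoded audio/video data
    }

    class SelectiveForwarder {
      private subscriptions = new Map<ClientId, Set<ClientId>>(); // sender -> subscribers
      private sinks = new Map<ClientId, (p: MediaPacket) => void>(); // subscriber -> delivery callback

      addClient(id: ClientId, sink: (p: MediaPacket) => void): void {
        this.sinks.set(id, sink);
      }

      subscribe(subscriber: ClientId, sender: ClientId): void {
        if (!this.subscriptions.has(sender)) this.subscriptions.set(sender, new Set());
        this.subscriptions.get(sender)!.add(subscriber);
      }

      // Forward to current subscribers only; nothing is written to disk.
      relay(packet: MediaPacket): void {
        for (const subscriber of this.subscriptions.get(packet.senderId) ?? []) {
          this.sinks.get(subscriber)?.(packet);
        }
      }
    }

    // Usage: alice subscribes to bob, so bob's packets reach only alice.
    const sfu = new SelectiveForwarder();
    sfu.addClient("alice", (p) => console.log(`alice got ${p.payload.length} bytes from ${p.senderId}`));
    sfu.subscribe("alice", "bob");
    sfu.relay({ senderId: "bob", payload: new Uint8Array(1200) });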
https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...
Quibbles over the definition of phrases like “Customer Content” and “Service Generated Data” are designed to obfuscate meaning and confuse readers into thinking that the headline is wrong. It is not wrong. This company does what it wants to, obviously, given its complicity with a regime that is currently engaging in genocide.
https://www.bbc.com/news/world-asia-china-22278037.amp
Why do you trust them to generate an AI model of your appearance and voice that could be used to destroy your life? I don’t.
Also, the linked document is effectively a license for the intellectual property rights. The data protection side of things would be covered by the privacy policy[0]. This all seems pretty standard?
In my experience, that's not enough for many remote companies. So if we're going to have to have Zoom on our machines anyway (to handle an all-company meeting), why not just use it for the rest?
One part constantly fears it's missing a beat and jumps on new tech without thinking about it.
Another part believes that kids are able to construct all human knowledge by playing together in a field.
Education Technology seems to focus on selling education machines [1] (now with AI) to the first group, while the second group focuses on resenting any form of testing at all. Which leads to*, indeed, a huge legal minefield that will be shirked up to government for 'not providing leadership' years down the road.
* If you are in any way involved with a school, ask them how many, and importantly what %, of non-statutory requests for comment from government agencies they've responded to; you may be surprised how low the number is, or whether they even count them. Despite talking about 'leadership', not many walk the talk.
https://slate.com/technology/2022/08/4chan-ai-open-source-tr...
Adversarial machine learning explained: How attackers disrupt AI and ML systems
https://www.csoonline.com/article/573031/adversarial-machine...
How to attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors)
https://towardsdatascience.com/how-to-attack-machine-learnin...
AI Security and Adversarial Machine Learning 101
https://towardsdatascience.com/ai-and-ml-security-101-6af802...
The Road to Secure and Trusted AI
>>37022623 [a number of links regarding how to play with bots and bork training by "malforming" your inputs]
As far as I'm aware (not a lawyer), you must provide an easy opt-out from data collection and usage, and you must not force your employees into such agreements [1]. ChatGPT has already been blocked over GDPR issues, and with new regulations coming forward I really don't see how they think this can be the right call.
1. https://commission.europa.eu/law/law-topic/data-protection/r...
> Control over your data
> ...
> Google does not store video, audio, or chat data unless a meeting participant initiates a recording during the Meet session.
https://support.google.com/meet/answer/9852160
Though, I suppose this isn't exactly the same as a TOS.
https://en.wikipedia.org/wiki/There_Will_Come_Soft_Rains_(sh...
(Presuming of course that their closed source software really E2E encrypts without a backdoor)
Think how much more shareable and more complete digital medical records can be now. (And the breakthroughs that may come of it! Etc., etc.)
To wit, "As much as there's a law, HIPAA supposed to prevent people from revealing your medical data, suppose to be protected, which is completely false. 20% of the population works in health care. That 20% of pop can see the med data. the med data you are not ware of is being sent to insurance companies, forwarded to government health info exchanges...." - Rob Braxman Tech, "Live - Rant! Why I am able to give you this privacy information, + Q&A", https://youtube.com/watch?v=ba6wI1BHG9A
Not so much dystopian... as philosophical. Though, Uranus was both.
"Hereby grant" means the grant is (supposedly) immediately effective even for future-arising rights — and thus would take precedence (again, supposedly) over an agreement to grant the same rights in the future. [0]
(In the late oughts, this principle resulted in the biotech company Roche Molecular becoming a part-owner of a Stanford patent, because a Stanford researcher signed a "visitor NDA" with Roche that included present-assignment language, whereas the researcher's previous agreement with Stanford included only future-assignment language. The Stanford-Roche lawsuit on that subject went all the way to the U.S. Supreme Court.)
[0] https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...
* https://www.ftc.gov/news-events/news/press-releases/2023/07/...
* https://www.ftc.gov/business-guidance/blog/2023/07/ftc-hhs-j...
"The company has previously acknowledged that much of its technology development is conducted in China and security concerns from governments abound."
https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...
→ https://apps.apple.com/us/app/jitsi-meet/id1165103905
And Zoom:
→ https://apps.apple.com/us/app/id546505307
Looks like one company likes to gobble data more than the other even if both privacy policies are gobble-open.
I am definitely not a fan of Zoom either and had my own issues with the Linux client, but if the problems you describe are unique to the Flatpak and not in the official Linux distribution, you can't blame Zoom for that.
https://web.archive.org/web/20230401045359/https://explore.z...
They recently added SFU support, so it should scale similarly to Jitsi et al.
Where were you when Microsoft announced the exact same for Teams? https://www.microsoft.com/en-us/microsoft-365/blog/2022/06/1...
By allowing Microsoft to do as they please, we collectively gave up our right to privacy; we deserve what's happening.
I chose not to use their products at all; it starts there, if you care.
https://www.microsoft.com/en-us/microsoft-365/blog/2022/06/1...
https://blog.zoom.us/answering-questions-about-zoom-healthca...
Not necessarily — in some circumstances, the law might not recognize a present-day grant of an interest that doesn't exist now but might come into being in the future. (Cf. the Rule Against Perpetuities. [1])
The "hereby grants and agrees to grant" language is a fallback requirement — belt and suspenders, if you will.
Covered entities (including the EMR and hospital itself) can use protected health information for quality improvement without patient consent, and can use deidentified data freely.
Where this gets messy is that deidentification isn’t always perfect even if you think you’re doing it right (especially if done via software), and reidentification risk is a real problem.
To my understanding, business associates can train on deidentified transcripts all they want, as the contracts generally limit use to what a covered entity would be allowed to do (I haven’t seen Zoom’s). I know that most health AI companies, from chatbots to image analysis, do this. Now, if their model leaks data that’s subsequently reidentified, that’s a big problem.
Most institutions therefore have policies more stringent than HIPAA and treat software-deidentified data as PHI. Stanford, for example, won’t allow disclosure of models trained on deidentified patient data, including on credentialed-access sources like PhysioNet, unless each sample was manually verified, which isn’t feasible at the scale required for DL.
Edit: Zoom’s BAA: https://explore.zoom.us/docs/en-us/baa.html
“Limitations on Use and Disclosure. Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement or by application of 42 C.F.R. Part 2 with respect to Part 2 Patient Identifying Information, for the proper management and administration of Zoom…”
“Management, Administration, and Legal Responsibilities. Except as otherwise limited in this BAA, Zoom may Use and Disclose Protected Health Information for the proper management and administration of Zoom…”
Not sure if “proper management and administration” has a specific legal definition or would include product development.
Edit 2: My non-expert reading of this legal article suggests they can. https://www.morganlewis.com/-/media/files/publication/outsid...
“But how should a business associate interpret these rules when effective management of its business requires data mining? What if data mining of customer data is necessary in order to develop the next iteration of the business associate’s product or service? … These uses of big data are not strictly necessary in order for the business associate to provide the contracted service to a HIPAA-covered entity, but they may very well be critical to management and administration of the business associate’s enterprise and providing value to customers through improved products and services.
In the absence of interpretive guidance from the OCR on the meaning of ‘management and administration’, a business associate must rely almost entirely on the plain meaning of those terms, which are open to interpretation.”
(FERPA is to higher ed in the US what HIPAA is to healthcare.)
Their executives love money as much as Apple's, so they comply to crush wrongthink about how amazing the CCP is.
Not that I’m an expert on the nuance here, but I think it gives them permission to use PHI, especially if spun in the correct way, which then gives them permission to deidentify it and do whatever they want with it.
My experience has been that it’s pretty easy to spin something into QI.
> Are there examples of healthcare ai chatbots trained on de-id data btw? If you're familiar would love to see.
https://loyalhealth.com/ is one I’ve recently heard of that trains on de-id’d PHI from customers.
> What's your line of work out of curiosity?
Previously founded a health tech startup and now working primarily as a clinician and researcher (NLP) with some side work advising startups and VCs.
> WHY WON’T IT READ?!
HIPAA is the correct abbreviation of the Health Insurance Portability and Accountability Act, which, as an aside, doesn't necessarily preclude someone from training on patient data.
HIPPA is the unnecessarily capitalized spelling of a (quite adorable) crustacean found in the Indo-Pacific and consumed in an Indonesian delicacy known as yutuk.
Perhaps zero knowledge is a poor choice of terms on my part as it is used in ZKP (as you pointed out). What I meant is “privacy-preserving”.
There's Galene, <https://galene.org>. It's easy to deploy, uses minimal server resources, and the server is pretty solid. The client interface is still a little awkward, though. (Full disclosure, I'm the main author.)
You can see that now clearly stated in our blog: https://blog.zoom.us/zooms-term-service-ai/
The reason why I commented in the first place is because you explicitly mentioned the Flatpak of the Zoom client which stood out to me.
It is my understanding that Flatpak sandboxes apps [1], which could cause various issues if the app is not expecting to be run inside one or if the permissions are misconfigured.
But it certainly doesn't have to. Of course the app itself can be buggy. My point is that an official release should be checked before reporting bugs.
[1] https://docs.flatpak.org/en/latest/sandbox-permissions.html