It's baffling how many people in previous threads thought that a company that gets most of its money from enterprise/business clients would burn all its reputation by surreptitiously using client data to train its AI.
> You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content.
> (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof
> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.
> Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems [0]
> Mac update nukes dangerous webserver installed by Zoom [1]
> The 'S' in Zoom, Stands for Security - uncovering (local) security flaws in Zoom's macOS client [2]
[0] https://arstechnica.com/tech-policy/2021/08/zoom-to-pay-85m-...
[1] https://arstechnica.com/information-technology/2019/07/silen...
It’s worth mentioning that per this agreement they can still do almost anything else with that data. They could put your face up on a billboard if they wanted to.
I’m out. I was a paying user. Can’t run fast enough from ever doing business with them again.
Bold claim for a company that already lost a class action for deliberately lying to its users.
†but we'll prompt you with an overly long privacy policy that includes such consent, whose acceptance is just a checkbox you tick the first time you join a call, without paying attention (or having any choice)
They'll do inference all day long, but not train without consent. Only being slightly paranoid here, but they could still analyze all of the audio for nefarious reasons (insider trading, identifying monetizable medical information from doctors on Zoom, etc.). Think of the marketing data they could generate for B2B products because they get to "listen" and "watch" every single meeting at a huge swath of companies. They'll know whether people gripe more about Jira than Asana or Azure DevOps, and what they complain about.
Well, I consider that to be my data and actually it is since I canceled our company's Zoom account when they adjusted their TOS. I'll take my data elsewhere.
Occam's razor also applies here.
Enterprise may resonate with something offering Signal-level e2ee.
Has anyone tried Element IO, as an example, in a commercial setting?
Asking for a friend.
(Even if revenue was much higher. Revenue doesn't tell you anything about how well a company can take a financial hit)
How does this apply for court hearings, council meetings, etc…
This is very Technologic
I generally feel like the general slowdown of capital availability in our industry will lead/is leading to companies doing a lot more desperate things with data than they've ever done before. If a management team doesn't think they'll survive a bad couple of quarters (or that they won't hit performance cliffs that let them keep their jobs or bonuses), all of a sudden there's less weight placed on the long-term trust of customers and more on "what can we do that is permissible by our contract language, even if we lose some customers because of it." That's the moment when a slippery ethical slope comes into play for previously trustworthy companies. So any expansion of a TOS in today's age should be evaluated closely.
Hats off to zoom for the free contract drafting lesson!
[edit: thanks to HN commenter lolinder for the actual lesson].
> > Landlord will clean and maintain all common areas.
> In most basic contracts, I recommend using "will" to create obligations, as long as you are careful to be sure any given usage can't be read as merely describing future events. I'm generally against "shall" because it is harder to use correctly and it is archaic.
https://law.utexas.edu/faculty/wschiess/legalwriting/2005/05...
Yes, as with most terms of service. It's one of the things that makes terms of service statements unreliable.
One implication is that lawyers can no longer use Zoom for anything which is attorney-client privileged.
Agreed, and these kinds of short-term incentives are one of the problems with American companies. On the flip side...
Japanese companies think about products in decades -- the product line has to make money 10 years from now.
Some old European brands think about their brand in centuries -- this product made today has to be made with a process and materials that will make people in 100 years think that we made our products at the highest quality that was available to us at the time.
But, really, does it matter whether the bad thing is caused by incompetence or malice outside of a court of law? The bad thing happens either way.
Was Zoom careful to be sure any usage can't be read as merely describing future events? Will ambiguity exist until this agreement is tested?
Although there are a ton of alternatives out there, they are all "too hard" or something. So since Zoom mostly works OK most of the time and is dead simple to use, it will continue to win out over everything else.
My position on Zoom hasn't changed since 2020: Anyone using Zoom will continue to get exactly what they deserve.
Meanwhile not once do they use "Zoom shall". It's pretty clearly just a stylistic choice and not anything sneaky.
Edit: They even use "will" in the all-important phrase "you will pay Zoom". Surely you don't think they meant to be sneaky in that usage, and that is merely meant as a prediction of future events?
There is however research (that aligns with a lot of people's experience) to suggest psychopaths and sociopaths are very over represented in leadership:
They claim they can’t read anything passing through the server. Is there some other way they’ll get access?
https://support.zoom.us/hc/en-us/articles/360048660871-End-t....
Especially for video communication, I'm not going to let some 3rd party spy on me and my business if I don't have to.
In sec. 10.4, Zoom says "... Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent."
Customer Content is defined in 10.1 and is broadly worded. But the first sentence of sec. 10.2 clearly states that "Customer Content" does NOT include "Service Generated Data."
Therein lies the rub. "Service Generated Data" = "any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services ...." (sec. 10.2).
Zoom is allowed to use Service Generated Data for any purpose (sec. 10.2) because it is not "Customer Content."
This "clarification" does nothing meaningful to assuage the serious data privacy concerns posed by Zoom's use of captured user video content.
There’s already tosdr.org but I’m not sure they have that feature.
1) Until recently, Zoom's video/audio quality knocked everyone else's into a cocked hat. I don't think that's the case anymore. It looks like a lot of folks got off their butts and improved their quality, but I haven't seen this mentioned anywhere, by anyone.
2) Everyone else is using it.
#2 is a biggie. Monopoly inertia is pretty hard to overcome, for people not in the tech industry (we'll change on a whim).
Zoom is not easy to use. Its settings are a mess, but everyone is used to dealing with the Zoom pain and doesn't want to switch.
We can be remarkably cavalier in dismissing non-tech folks, but I learned to stop doing that, many years ago. We're not the only smart people in the world.
People (in general) don't like getting sidetracked by their tools. They want to get a job done, and how they get it done is not irrelevant, but not that important to them. They develop and refine a workflow, which is usually heavily informed by their choice of tools, and that "wears a groove." They don't want to switch grooves; even if they are not enjoying their tool.
Most tech folks, on the other hand, love tools. I had an employee who would stop his main project and design a massive subsystem just to make a simple command-line process a few seconds shorter. I had to keep on my toes. He was the best engineer I've ever worked with, but it was a chore to keep him focused.
Non-tech types are seldom like that, and we can sometimes miss it.
These are the folks that use our products, and we don't actually gain anything by disrespecting them, even when they really piss us off.
TL;DR: Want people to stop using Zoom? Produce something better, and make it something that non-tech folks will love.
That means easy to use, forget-about-it UX, and extremely high quality.
I guess it makes sense. Companies are people, after all
Users vote with their feet based on cost and UX. While inertia is certainly a thing, there's a reason Zoom got a foothold while others didn't. The ability to send out links and have people join the meeting without creating accounts or manually installing clients first is huge in most real-world scenarios. Could you do that with... Teams? Skype? Hangouts if they weren't Gmail users? Do those people know anyone with the knowledge and gumption to host something?
From the beginning of my involvement in FOSS, like 25 years ago, developers have griped about non-technical users being intimidated, or even just really annoyed, by UX friction that we consider trivial. That's the primary reason open source alternatives are alternatives rather than the standard in user-facing software.
This might be a loophole Zoom is trying to use: while they're technically not using customer data (the Zoom client isn't sending the video stream to train AI), the client can process data locally and send only embeddings (numeric vectors without ties to customer PII), and those vectors would still be derived from customer data.
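As a sketch of what that loophole could look like mechanically (everything here is hypothetical and invented for illustration; it does not describe Zoom's actual client): the client reduces raw content to a fixed-size numeric vector locally and ships only the vector. The server never receives the literal content, yet the vector is computed from it.

```python
import hashlib

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy 'embedding': hash each token into a signed bucket of a fixed-size vector.

    Real systems use learned models, but the privacy point is the same:
    the output is numbers derived from the content, not the content itself.
    """
    vec = [0.0] * dims
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        bucket = digest[0] % dims              # which dimension this token lands in
        sign = 1.0 if digest[1] % 2 == 0 else -1.0
        vec[bucket] += sign
    return vec

# The "server" only ever sees `payload`, never the transcript --
# but the payload still leaks statistical information about the transcript.
transcript = "we should move the Q3 launch to October"
payload = embed(transcript)
```

Whether such derived vectors count as "Customer Content" under the ToS definitions quoted elsewhere in the thread is exactly the open question.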
> [...] for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof [...]
Those two clauses, coupled with the current murky state of AI-from-copyrighted-material, should make everyone run screaming from Zoom as a product that can be entrusted with confidential information.
But what are the best alternatives at the moment?
Zoom is very popular…
What you do and what you say need to be consistent to preserve user trust; being inconsistent shows mismanagement by senior leadership, or even potentially intent to deceive or spin the situation while still implementing the policy. It's the PR classic: do one thing, say another.
Edit: Oh, and then this hits almost at the same time…
https://www.sfgate.com/tech/article/zoom-return-to-office-an...
Webex seems to be the "corporate" video conference service, when secrets are a concern, from my experience.
I now use Hanlon's Shaving Brush. It's a broad brush that I use to paint every sketchy move businesses make. "Is it malice? Or is it incompetence that merely looks like malice?" I don't care! I'll assume malice unless shown otherwise.
It's not my job to try and find out how evil shit was done accidentally. It doesn't matter if they "oopsied" into selling a firehose of my data to a "trusted partner" to analyze to death. Nobody actually gives a shit at these companies, so I need to treat them all as if they're malicious. If the underlying cause was a bit of incompetence a few years ago, that does nothing for me when I'm discovering the fuckery.
We’re not buying it
The days of the “corporate responsibility” letter are over. Nothing you say will be believed if it conflicts with your bottom line.
There's a saying in Texas… won't be fooled again.
I agree with this sentiment and it feels like a heuristic at this point.
I think it comes from a decade of watching corporate officers get caught red-handed and then try to denial-of-service the bad press with their jingoistic pablum.
Can you imagine the response to a telephone company saying they can use your voicemail messages for their own purposes?
This is how it used to be, until HTTPS and Cloudflare-like hosting solutions were guzzled down like electric Kool-Aid. All you really needed was an IP, and perhaps a port number if the endpoint was behind NAT.
Seems like it might be worth them including it; IANAL. Otherwise, can't they just change it in the website UI...? They don't promise any particular process for acquiring consent, but they sure declare you give it to them for many, many other things.
I was thinking of Bluejeans and Teams. GoToMeeting seems to have improved a lot, as well. WebEx is doing much better, but I have only used Bluejeans and Teams, in the last couple of years.
This has always struck me as a weird business/product choice, since I imagine most users simply don’t know about this, assume Meet is just bad, and use other products, rather than having the idea to upgrade for better audio quality.
You think a TOS that's biased towards the company, or the customer, has any legal effect on a Chinese domestic corporation that's subject to the laws and regulations of the Ministry of State Security? Really?
https://techcrunch.com/2019/07/10/apple-silent-update-zoom-a...
https://www.theverge.com/2019/7/10/20689644/apple-zoom-web-s...
https://www.macrumors.com/2019/07/10/apple-update-remove-zoo...
The few with smarter lawyers and IT departments, usually academic, do; but the majority of the new "AI" health tech products I've heard pitched to hospitals use customer PHI for product development.
> We will not use ... protected health information, to train our artificial intelligence models without your consent.
> We routinely enter into ... legally required business associate agreements (BAA) with our healthcare customers. Our practices and handling of ... protected healthcare data are controlled by these separate terms and applicable laws.
To my understanding there is nothing in the separate terms (BAA) or applicable laws (HIPAA) that actually guarantees this.
I don't want to assume malice but if in good faith I would have expected an updated BAA with an explicit declaration regarding data access and disclosure in a legally-binding fashion rather than a promissory blogpost vaguely referencing laws that don't themselves inherently restrict the use of PHI for training by Zoom.
It would really only require a single term.
But this is par for the course for Zoom.
Maybe it's both: malice to kick off the effort and incompetence because they got found out.
Don't we provide consent when we agree to the TOS? And we can't use the product without doing that?
If the specific misconduct they got caught for netted them $x, and they got fined $5x, who cares what percentage of their global revenue that is? That specific crime was still a net negative for them. I'm not sure why conglomerates should be punished more harshly just because they have more revenue overall.
Konami vs Kojima and any of the DieselGate companies come to mind.
> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.
The BAA still states: Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement ... for the proper management and administration of Zoom ... Zoom will only use the minimum necessary Protected Health information necessary for the proper management and administration of Zoom’s business specific purposes
As discussed in my comments on yesterday's post, "proper management and administration" is vague language copied from HHS and can be construed as including improving products, as described in a legal analysis I quoted. I would also hazard a guess that a provider signing this agreement could be construed to have given implied consent.
Nevertheless, it would not be hard to explicitly state that this does not include training models in the only truly legally binding agreement at play. An explicit declaration was also recommended in said legal analysis.
In addition Skype's ToS granted MS a licence to any and all IP you might discuss during a Skype call.[1] I wonder why no businesses were bothered by that...?
[1] ...decades ago, I don't know how it reads now, can't be arsed to check.
What is the secure way to video conference? Webex? FaceTime offers end-to-end encryption, but cannot easily share non-macOS screens.
Articles like this sure make me like Apple sometimes
https://9to5mac.com/2023/07/20/apple-imessage-facetime-remov...
This should work: https://web.archive.org/web/20230808072418/https://explore.z...
> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.
Personally I think that C levels should automatically be disbarred if the corporation is found guilty of criminality as that puts responsibility on the people with the power to prevent it.
they also got caught being malicious and/or dumb in the past (https://www.businessinsider.com/china-zoom-data-2020-4) so there's no reason to bother with them now.
Until the TOS clearly says otherwise, as far as I can see, the TOS at least implies this:
1. We will not use your data to train AI without your consent.
2. By accepting these TOS, you give your consent to everything in this long list (which includes training AI).
However, many companies reckon they'll get away with it, the enforcement is not universal and rapid, and I don't trust Zoom as far as I can throw it on this particular score.
If my employer is the "customer" what say, if any, do I have as an individual?
By participating in a call am I giving Zoom permission to do things like train deep fakes of me?
This is all too Blackmirrory for my liking.
The word adequately, and the fact it was made when presuming good faith was more reasonable.
These days it's better to assume everything is theft, fraud, or marketing.
I never really understood why people like Zoom's UX, I find it unintuitive and awkward.
So they can create a transcript of the conversation and train with it. Or train on any document you may have shared during a Zoom meeting.
I would have preferred the exception, if that was the intent, to enumerate the components of the Customer Content that they want to use for training.
10.1 Customer Content. You or your End Users may provide, upload, or originate data, content, files, documents, or other materials (collectively, “Customer Input”) in accessing or using the Services or Software, and Zoom may provide, create, or make available to you, in its sole discretion or as part of the Services, certain derivatives, transcripts, analytics, outputs, visual displays, or data sets resulting from the Customer Input (together with Customer Input, “Customer Content”);
It's much the same as the issue that was raised a few days ago, where your employer instructs or expects you to lie. The only way they have to "force" you is to threaten dismissal; this is insufficient to justify the terms "compel" or "force".
A police officer holding a Glock can compel you. Your boss cannot.
Which means that in the case where Zoom is provided to you by your employer, they claim that the employer's consent is all that matters. Once more: "Fuck GDPR".
I had the same issue when my EU-based employer was sold to a US company. My personal data suddenly went from EU-based to US-based HR systems without my consent. Resigning would not have fixed anything. My personal data will be in the US forever.
They basically claim that the customer (the one who signs the contract, not the Zoom user) who hosts the meeting is responsible for GDPR compliance by defining the right account settings. So if you are invited on a call you basically have no rights.
Read the TOS again. They are only speaking about customer consent. Not "user". If you are not the one signing the contract or are just invited to a call (not hosting) you basically have no rights to define settings such as any form of opt-out (assuming they exist).
As for "who cares about %": everyone who understands that fines that cost a company nothing do nothing. All they say is "it'll cost you a trivial amount more to do this", turning what should be an instrument to rein in companies into a simple monetary transaction that just goes on the books as an entirely expected and affordable expense.
It should be a crime, and they should have been found guilty in court over that, and the fine should be such that no matter your company's size, you can't risk running afoul of the law repeatedly. But it absolutely isn't.
They just made another edit and removed the line.
Here's the edit history going all the way back to March:
- 4/1 https://www.diffchecker.com/dCuVSMnp/
- 7/1 https://www.diffchecker.com/Zny4Rjqw/
Even worse, it tends to go hand-in-hand with astonishing overconfidence in their understanding of other fields. I've had two other primary careers-- designer, and chef-- and I can't count how many developers have "explained" parts of those fields to me despite knowing I'm a subject matter expert. Like their astonishing intellect and that related Metafilter thread they skimmed makes them authoritative. I get supernova-intensity cringe when I hear other developers shoot off Dunning-Kruger-esque oversimplifications of other fields' genuinely hard problems.
When I hear developers talk about the arrogance of designers, I can't help but laugh... then maybe cry. Many seem genuinely aggrieved that interface designers have more input on the interface design than they do.
Or you inherently can't make it "forget-about-it UX and extremely high quality" as most non-techies define it, because even if a company self-hosts a meeting tool, it likely can't get the backbone connections Zoom et al. can get. Users at least need a URL to get there. It can be made mostly simple, but then you're back to some company running it; that works for corporate use, maybe, but not for the home user. Even Signal lags compared to Zoom. And people really dislike Signal's phone number requirement, but it's what makes it somewhat possible to route connections for users.
What's a system that a home user could use that's not going to route them through one company's servers, but is actually simple enough to use?
The place where I do get somewhat exasperated as a techie is that the equivalent of asking for a phone number or address in any program that isn't an e-mail website is seen as "too hard". This makes pretty much any privacy respecting design impossible to scale beyond nerds.
> You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, … [rest already quoted several times in the thread]
So that promise not to do it without consent is meaningless: they have consent from anyone who has agreed to the ToS, which is anyone using the service/product.
The HN commenters tend to assume #1 when it comes to big companies, while more likely it's #2. The razors capture this situation well.
I think attributing everything to incompetence vastly underrepresents intent. Maybe not all bad acts are malice, but too many are attributed to incompetence. Maybe it is not malice, but it can still be intentional actions against or indifferent to your interests.
Well I might just take that heuristic and do some basic sentiment analysis to rank companies on their doublespeak.
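A minimal version of that idea, as a toy sketch (the word lists, the scoring, and the example statements are all invented for illustration; real sentiment analysis would use a trained model): score a company's public statement and a description of what its terms actually permit, and treat a large gap between the two as a crude doublespeak signal.

```python
# Crude lexicon-based sentiment: +1 per reassuring word, -1 per worrying word.
POSITIVE = {"trust", "privacy", "committed", "transparent", "secure", "protect"}
NEGATIVE = {"sell", "share", "train", "collect", "track", "monetize"}

def sentiment(text: str) -> int:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def doublespeak_gap(statement: str, actions: str) -> int:
    """Gap between what a company says and what its terms let it do."""
    return sentiment(statement) - sentiment(actions)

pr = "we are committed to your privacy and will protect your trust"
tos = "we may collect, share, and train on customer content"
gap = doublespeak_gap(pr, tos)  # bigger gap = more doublespeak
```

Rank companies by `gap` over their PR posts versus their ToS and you have the (very rough) scoreboard the comment imagines.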
While you thought you presented an argument against hefty fines, you actually gave the perfect reason for why they should be hefty. If illegal practices are affordable, they're not illegal. They're just the price of doing business. So make them hurt.
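The disagreement above largely comes down to detection probability. A fine that is a fixed multiple of the ill-gotten gain only deters if the odds of getting caught are high enough; a back-of-the-envelope expected-value check (all numbers invented) makes the point:

```python
def expected_profit(gain: float, fine: float, p_caught: float) -> float:
    """Expected value of breaking the rule: the gain minus the expected fine."""
    return gain - p_caught * fine

gain = 1_000_000

# With only a 10% chance of being caught, even a fine of 5x the gain
# leaves the violation profitable in expectation: 1M - 0.1 * 5M = +500k.
assert expected_profit(gain, 5 * gain, p_caught=0.10) > 0

# To make it a sure loss at a 10% detection rate, the fine must exceed
# 10x the gain: 1M - 0.1 * 11M = -100k.
assert expected_profit(gain, 11 * gain, p_caught=0.10) < 0
```

This is why revenue-scaled fines (as in the GDPR's percentage-of-turnover caps) exist: when enforcement is rare, a fine pegged only to the provable gain is just a cost of doing business.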
> American companies.
> Japanese companies
> Some old European brands
Unless the parent comment was edited, of course.