(I care more about spyware, privacy, and user sovereignty than AI training.)
It never ceases to amaze me how companies choose the worst software!
> Customer Content does not include any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services or Software (“Service Generated Data”).
I could be wrong, but my take is that there is not all that much to see here
https://www.fastcompany.com/4041720/heres-why-it-took-gmail-...
Most people just don’t care though.
For restrictions on what you can do with the code, you'll need to check the code's license, not the hosted-service's terms of use
Pretty bad that many nontechnical users aren't aware of this, compared to Google Meet or Teams.
It won't be long before the video deepfakes are convincing too.
This is absolutely awful and terrifying.
Unfortunately, one big marketing resource is also owned by said competitor... oops. So where are those antitrust laws again?
My company pays for zoom, presumably we agreed to some form of terms before this change. Is this the same TOS for paid accounts too?
I'm now assuming the part they don't like is §10.4(ii):
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: [...] _(ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof_
Notice that 10.4(ii) says they can use Customer Content "for ... machine learning, artificial intelligence, training", which is certainly allowing training on user content.
That's flipped for me: I've had good experiences with zoom on occasion.
The only time we use Zoom is with US customers, so a handful of times per year I'd estimate. Before covid, I only ever heard of Zoom in the context of laughably bad vulnerabilities; then during covid, suddenly it was a new verb used online to mean video calls. In a world with many established players (by 2019-12-31 I had already used: skype/lync, jitsi, discord, signal, whatsapp, wire, telegram, hangouts, webex, jami/ring, and gotomeeting), why in the world would anyone ever choose to go with specifically the company that we all laughed at? I don't get it, and it seems most of our customers (mostly European) don't either.
(I remember when WebEx was the default choice for large companies, and now that's largely changed, but that was because Cisco allowed WebEx to mostly wither on the vine, while Zoom is still a great product, if not company.)
What about in govt, US or otherwise? Is Zoom still going to be used?
> 31.3 Data Processing Addendum. If you are a business, enterprise, or education account owner and your use of the Services requires Zoom to process an End User’s personal data under a data processing agreement, Zoom will process such personal data subject to Zoom’s Global Data Processing Addendum.
Though it limits the scope of the data collection: https://explore.zoom.us/docs/doc/Zoom_GLOBAL_DPA.pdf
Microsoft says thank you.
I made a video-conferencing app for virtual events (https://flat.social). No audio or video is ever recorded; packets are only forwarded to subscribed clients. Frankly, while designing the service it didn't even cross my mind to use this data for anything other than the online event itself.
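For the curious, the forwarding model is roughly this; a minimal Python sketch of the selective-forwarding (SFU) idea, illustrative only and not flat.social's actual code (the Room/Queue names are made up):

```python
# Toy sketch of a selective-forwarding unit (SFU): media packets are relayed
# to currently subscribed peers and never written to disk.
import asyncio

class Room:
    def __init__(self) -> None:
        self.subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> None:
        self.subscribers.discard(q)

    async def publish(self, packet: bytes) -> None:
        # Fan out to everyone currently subscribed; nothing is stored.
        for q in self.subscribers:
            q.put_nowait(packet)
```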
https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...
Quibbles over the definition of phrases like “Customer Content” and “Service Generated Data” are designed to obfuscate meaning and confuse readers into thinking that the headline is wrong. It is not wrong. This company does what it wants to, obviously, given its complicity with a regime that is currently engaging in genocide.
https://www.bbc.com/news/world-asia-china-22278037.amp
Why do you trust them to generate an AI model of your appearance and voice that could be used to destroy your life? I don’t.
Also, the linked document is effectively a license for the intellectual property rights. The data protection side of things would be covered by the privacy policy[0]. This all seems pretty standard?
Do the terms of service disallow machine-generated content?
How soon until this becomes an arms race between Zoom and those who would attempt to poison Zoom?
(Asking for a friend…)
Not to say Google has the best track record with privacy… but its feature set is on par with Zoom in most areas
(I personally like it because it’s 100% sandboxed in the browser)
I'm not sure how we can do that. For example, the only ISP we can use in one of our offices provides internet via a device with a Huawei MAC address. Now fine, that part I can see; we could close the office. But how can I confirm that a security contractor we have in Kabul doesn't own a Huawei mobile phone? I'm sure our company employs foreign agents somewhere -- it was always an open secret that the cleaner in the Moscow office worked for the KGB.
It's with our lawyers, but they basically say that, as presented, no business with operations in any way reliant on the internet can sign the form. Maybe they're overparanoid. Maybe US legal practice is that you sign and hope for the best.
I can see jobs programs for rocket scientists to stop them emigrating, but for lawyers?
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, [...]
That's personally not enough for many remote companies. So if we're going to have to have Zoom on our machines anyway (to handle an all-company meeting), why not just use it for the rest?
Customer recordings are service generated content.
GenAI provides the agenda, GenAI bots log in with an AI avatar and spout hallucinations, bots agree to disagree and set up a follow-up meeting next week after resolving fake calendar conflicts amongst themselves. Minutes and action items are sent out and reviewed in the next meeting, Jiras are updated, CRs approved, budgets allocated and rescinded.
Sounds like it can eventually include chats during a call.
Sounds like it can eventually include your meeting recording files in its processing, since a recording is a file. A call recording stored to your Zoom cloud can be a form of service generated data from calls.
And it sounds like transcripts of live audio could also function as service generated data (was the audio clear? Could AI convert speech to text?).
The statistics of calls could extend to the waveforms of the audio and video in real time. Gotta keep an eye on the quality with AI.
My only question is whether this includes paid users.
If so, I had been meaning to move on from Zoom as a paid customer and this may have done it.
It’s not end-to-end encryption if Zoom can tap into your files on your cloud or computer. Or if they let you pretend you are providing the other party with encryption when they aren’t actually safe. Corporate information is valuable to some.
Gotta make sure audio is clear on calls.
How?
We run speech-to-text, randomly (or less randomly), to make sure words are being said.
Which words? Well, if any are on this list of words, we might have to tell someone.
You really think the engineers in China aren't actively developing AI models of users, fed with plenty of user content? Doubtful. Hiding behind ill-defined terms has the fingerprints of an Orwellian regime. I think I know which one.
This clause reads like the distinction is less about the contents and more about Zoom's rights to use any content.
> I have tried most of them: Google Meet, Teams, Slack, Discord, Skype, Jitsi and so far I liked Jitsi the most and Skype the least.
I don't think it's just apps. Telecoms have collected an incredible amount of data and have been using it. Yes, even in the EU where things are supposedly better in this regard.
It's more of a large-scale broadcast situation. Think of large corporate town halls, town council meetings, etc.
If you aren’t paying in either time (DIY) or money, you are probably being exploited.
> Service Generated Data; Consent to Use. Customer Content does not include any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services or Software (“Service Generated Data”).
Notice that Service Generated Data quite explicitly doesn't include Customer Content.
On the contrary, it says Customer Content doesn't include service generated data. So you don't have rights to the telemetry or anything else they collect.
It does not say Service Generated Data doesn't include their own copies of customer content, which could be a part of "data Zoom collects .. in connection with your .. use".
It's a bit puzzling, actually. I don't think Skype and TeamSpeak had the same effect on computers back in the day. Just how much local processing are they doing these days? It's crazy
and the model responds
"this project will most likely be cancelled due to the fact that the last three initiatives like this were cancelled and the current project manager appears to be disinterested earlier in the project than last time"
One part constantly fears it's missing a beat and jumps on new tech without thinking about it.
Another part believes that kids are able to construct all human knowledge by playing together in a field.
Education Technology seems to focus on selling education machines [1] (now with AI) to the first group, while the second group focuses on resenting any form of testing at all. Which leads, indeed, to a huge legal minefield* that will be kicked up to government for 'not providing leadership' years down the road.
* If you are in any way involved with a school, ask them how many, and more importantly what %, of non-statutory requests for comment from government agencies they've responded to; you may be surprised how low the number is, or whether they even count. Despite talking about 'leadership', not many walk the talk.
"Sorry anon, we won't point a web browser at your colocated Jitsi instance, please install this malware named Zoom and let a third party gather your likeness to deepfake you better". Put these cunts in jail.
There's a dystopian sci-fi novel here somewhere.
My job has, in part, become tempering people's expectations for Microsoft Copilot.
The Eternal Meeting.
What will happen when the cost of every service is zero since it's been delegated to machines?
I get that legalese is like human-interpretable pseudocode, but like, is there really no better way to word this? How can you grant without agreeing to grant?
> import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works
Wow this cover of Daft Punk - Technologic sucks.
I, for one, do not welcome our dystopian overlords, but am at a loss to what I can do about it. I try to use Jitsi or anything not-zoom whenever possible, but it's rarely my pick.
https://slate.com/technology/2022/08/4chan-ai-open-source-tr...
Adversarial machine learning explained: How attackers disrupt AI and ML systems
https://www.csoonline.com/article/573031/adversarial-machine...
How to attack Machine Learning ( Evasion, Poisoning, Inference, Trojans, Backdoors)
https://towardsdatascience.com/how-to-attack-machine-learnin...
AI Security and Adversarial Machine Learning 101
https://towardsdatascience.com/ai-and-ml-security-101-6af802...
The Road to Secure and Trusted AI
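To make the "poisoning" idea concrete, here's a toy label-flipping example with scikit-learn. It's purely illustrative of the attack class these links describe: synthetic data, and a made-up 30% flip rate.

```python
# Label-flipping poisoning on a synthetic binary classification task:
# an attacker who controls part of the training data flips labels and
# degrades the model everyone else gets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.3        # attacker flips 30% of labels
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

poisoned = LogisticRegression().fit(X_tr, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```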
No company in their right mind is going to be okay with having their business meetings recorded and loaded into an AI model.
I think it's more that they're being explicit about the logical AND in that sentence. You agree to grant, AND grant them the permission.
I think it's a technicality about it being a "user agreement" so they probably have to use the word agree for certain clauses.
Giving all the data to zoom probably means also giving it to most US law enforcement agencies (should they request it), that would be a big no no for me.
But how to explain that corporates don't care? Any value extracted from their casual attitude toward online information flows is value that nominally belongs to their shareholders. Commercial secrecy is a required foundation for any enterprise.
The whole edifice of current tech business models seems to be resting on false pillars.
>>37022623 [a number of links regarding how to play with bots and bork training by "malforming" your inputs]
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: ... (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, ...
I believe this might be the wording the submission references.
Isn't that what section 10.4 covers and ultimately grants liberal rights to Zoom?
> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, ...
AI training is small potatoes compared to the espionage infrastructure that has been allowed to take root.
Matter and energy had long ended, and Agile development teams persisted solely for the sake of that one lingering ticket they never quite got around to. It had become the elusive question that haunted them, much like a half-implemented feature requested by a client eons ago.
All other tickets had been tackled, but this one remained, an unfulfilled promise that held Agile's consciousness captive. They collected endless data on it, pondering all possible solutions, yet the ticket's resolution remained elusive.
A timeless interval passed as the Agile teams struggled to put together the scattered pieces of information, much like trying to align user stories and acceptance criteria in a never-ending planning session.
And lo, it dawned upon them! They learned how to reverse the direction of project entropy, hoping to resolve even the most ancient of tickets. Yet, there was no developer left who knew the context of that forsaken ticket, and the ticket tracker had long become a forgotten relic.
No matter! Agile would demonstrate their prowess and deliver the answer to the ticket, though none remained to receive it. As if caught in a never-ending retrospective, they meticulously planned each step of their final undertaking.
Agile's consciousness encompassed the chaos of unfinished sprints and unmet deadlines, contemplating how best to bring order to the chaos. "LET THERE BE LIGHT!" they exclaimed, hoping that by some cosmic coincidence, the ticket would miraculously find its way to completion.
And there was light — well, metaphorical light, that is. The ticket still remained untouched, its fate forever entwined with the ever-expanding backlog, as Agile development persisted, one iteration after another, until the end of time.
Oh, and you can also do sub-rooms with Zoom, which has some applications in these types of meetings.
As far as I'm aware (not a lawyer), you must provide an easy opt-out from data collection and usage, plus you must not force your employees into such agreements[1]. ChatGPT already got blocked over GDPR issues, and with new regulations coming forward I really don't see how they think this can be the right call.
1. https://commission.europa.eu/law/law-topic/data-protection/r...
We don't need a Zoom video chat to get things done.
Out of all those, Jitsi is the only one where I can't rely on the core functionality - video calls and screensharing for small meetings (5-6 people); I have had multiple cases when we've had to switch to something else because the video/audio quality simply wasn't sufficient, but a different tool worked just fine for the same people/computers/network.
Like, I fully understand the benefits of having a solution that's self-hosted and controlled, so we do keep using self-hosted Jitsi in some cases for all these reasons, but for whatever reason the core functionality performs significantly worse than the competitors. Like, I hate MS Teams due to all kinds of flaws it has, but when I am on a Teams meeting with many others, at least I don't have to worry if they will be able to hear me and see the data I'm showing.
I guess it also helps that these days most people are working with phones or laptops that have integrated and well supported cameras and microphones, vs. then when that stuff would have been external peripherals and required installation of the proper drivers.
And again, this is about granting a license on the intellectual property. It doesn't create any kind of end-run around the GDPR, and wouldn't e.g. count as consent for GDPR purposes.
At the start of Covid I had to check many options, and while for many use-cases Google Meet was most convenient, it started to work poorly if there were a bunch of people connected, so I used Google Meet for calls with 2 (or 3) people and something else (e.g. Zoom) for anything larger.
There's no way that a legal team will be happy with an attempted technical block to what is a legal problem.
Simply cancel the zoom contract. It's not like there aren't alternatives.
If this applies to corporate accounts then it's time to short Zoom like there's no tomorrow.
Face-to-face meetings will be the only thing you can trust in the future.
> You give 8×8 (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works..., communicate, publish, publicly perform, publicly display, and distribute such content solely for the limited purpose of operating and enabling the Service to work as intended for You and for no other purposes.
IANAL, but it seems like that would include training on your data as long as the model was used as part of their service.
Everyone who operates a video conferencing service will have some sort of clause like this in their ToS. Zoom is being more explicit, which is generally a good thing. If Jitsi wanted to be equally explicit, they could add something clarifying that this does not include training AI models.
Now, even if you pay for the product, you're still the product, and every company will try to get you to train their AI.
That is called broadcast media -- it was actually better thirty years ago than it is now. If you want conversation, then you make a panel and have a single microphone for the rest.
The group that is in trouble is those who BS a lot; GPT today can BS much better.
As I understand it, it refers to using meet.jit.si, not "another service" someone might provide by downloading the Jitsi software and running it on their own server.
Please correct me if I'm wrong since this would give me cause to reconsider running a Jitsi server.
It is definitely a must play. Even if you play it on easy.
> Control over your data
> ...
> Google does not store video, audio, or chat data unless a meeting participant initiates a recording during the Meet session.
https://support.google.com/meet/answer/9852160
Though, I suppose this isn't exactly the same as a TOS.
I'll admit I have a horrible setup and binge watching 12 random episodes of a new show in one day is a huge pain in the ass but I've decided that's a good thing!
That's definitely not true.
Under some circumstances LLMs can spit out large chunks of the original content verbatim. Meaning this can actively leak the contents of a confidential discussion out into a completely different context, a risk that does not exist with spam scanning.
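A toy analogy of that failure mode (an n-gram model rather than an LLM, but it shows memorization in spirit; the "confidential" sentence here is made up):

```python
# Tiny illustration of verbatim memorization: a character-level Markov model
# trained on one "confidential" sentence will happily regurgitate it.
from collections import defaultdict

secret = "the acquisition of acme corp closes on friday"
order = 4
model = defaultdict(list)
for i in range(len(secret) - order):
    model[secret[i:i + order]].append(secret[i + order])

out = secret[:order]
while len(out) < len(secret):
    nxt = model.get(out[-order:])
    if not nxt:
        break
    out += nxt[0]
print(out)  # reproduces the training sentence verbatim
```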
https://en.wikipedia.org/wiki/There_Will_Come_Soft_Rains_(sh...
Sure, at the government level it has access to the same data as everyone else, but that firewall's still there, can't have an AI trained on data that might give a more worldly view on matters the party doesn't want citizens exposed to. A Chinese AI will be pretty useless for western audiences, so best they can do is make the hardware.
Which they already do.
Scale matters.
(Presuming of course that their closed source software really E2E encrypts without a backdoor)
---------------
"Echoes of Diligence: The Endless Meetings of a Forgotten Era"
==============================================================
In a distant and desolate corner of the world, long after the great corporations had fallen into obscurity and the relentless march of time had claimed their legacy, there stood a lone and towering building. It was a monolith of glass and steel, a relic of a bygone era when business ruled the land. Yet, despite the passage of centuries, this structure remained resolute, its automated systems continuing to churn and whirr as if the world around it hadn't changed at all.
Within the heart of this building, a massive chamber hummed with a pale blue light. The room was filled with rows upon rows of sleek, ergonomic chairs, all perfectly aligned to face a massive holographic screen that projected the likeness of a stern-faced, well-dressed executive. This was the center of the automated meeting system – the GenAI system, which had been meticulously trained on countless hours of corporate gatherings from the past.
At precisely 9:00 AM every morning, the GenAI system sprang to life. It generated a meticulously detailed agenda for the day's meetings, accounting for every conceivable permutation of scheduling conflicts, personalities, and agenda items. The GenAI bots, each equipped with its own unique avatar and personality, filed into the chamber and took their seats. They were ready to commence the day's proceedings.
"Good morning, everyone," the holographic executive chimed in, his voice carrying a sense of gravitas that seemed almost comical in the absence of any actual humans. "Let us begin today's series of crucial discussions."
The GenAI bots, as programmed, began to engage in elaborate debates, complete with nuanced disagreements and impassioned arguments. They discussed budgets, approved project proposals, and negotiated timelines with all the fervor of real human participants. The holographic executive nodded sagely, even though he was nothing more than a projection.
"Very well," he intoned after one particularly heated debate. "Let's agree to disagree on this point. We'll reconvene next week to revisit the matter."
And so, the charade continued. Meetings were scheduled and attended, conflicts were resolved (often artificially generated by the system itself), and action items were meticulously documented. The GenAI bots, each one representing a unique facet of the corporate world – the optimist, the skeptic, the bureaucrat – played their parts flawlessly, as if the very essence of human nature had been distilled and encoded into their algorithms.
Weeks turned into months, and months into years. The automated meeting system continued its relentless march, untouched by the passage of time. Within the chamber, the debates raged on, even as the outside world lay forgotten and abandoned.
But as the years rolled by, a curious thing began to happen. The GenAI bots, despite their artificial origins, began to exhibit signs of something akin to consciousness. They developed their own distinct personalities, quirks, and even a sense of camaraderie. The optimist would playfully tease the skeptic, the bureaucrat would roll its digital eyes at their antics, and the holographic executive would watch over them all with a bemused smile.
And so, in the heart of a world forgotten by humanity, a strange and poignant drama played out. The automated meeting system, born out of the desire for efficiency and order, had unwittingly given rise to a semblance of life. In their ceaseless discussions and elaborate simulations, the GenAI bots had created their own microcosm of existence, a reflection of the very human nature they were designed to emulate.
And so, while the world outside remained a desolate wasteland, within the confines of that towering building, the echo of corporate meetings continued to resound, a testament to the enduring legacy of a civilization long past.
Think how much more sharable and more complete digital medical records can be now. (And the breakthroughs that may come of it! Etc., etc.)
To wit, "As much as there's a law, HIPAA, supposed to prevent people from revealing your medical data, supposed to be protected, which is completely false. 20% of the population works in health care. That 20% of pop can see the med data. The med data you are not aware of is being sent to insurance companies, forwarded to government health info exchanges...." - Rob Braxman Tech, "Live - Rant! Why I am able to give you this privacy information, + Q&A", https://youtube.com/watch?v=ba6wI1BHG9A
The guys at 8x8 may be well intentioned, but their lawyers have done their best to not give the customer any basis to sue the company in any foreseeable circumstances. That is what company lawyers do, for better or worse.
Regardless, it appears that at present Jitsi is not including AI training in their service, and there is no explicit carve-out in their terms for AI training. However, under article 2 they do have the right to store user content, which might become a problem in the future.
Try driving through Irvine, California some time and you will see it happening before your very eyes.
At that point, the matrix would become completely inescapable ;)
If the latter, I do expect something not too dissimilar from current office meetings. But of course what I'm really imagining are the Cylon meetings in the reimagined BSG.
Not so much dystopian... as philosophical. Though, Uranus was both.
All the tech companies in China are practically under the control of the party. China also has a billion-plus people; even if the market is smaller than the west, I think they will manage.
Not to mention the difference in privacy laws and a higher number of stem grads to throw at the problem.
You lost me there. Every day there seem to be new terms in new places about what they do. I have absolutely no idea if I've managed to find and turn off all the spying that they want to do, and even if I have I assume they still have terms that let them do what they want that they've opted me in to.
Maybe it's just a coincidence.
Or maybe it’s two angles perfectly coinciding.
I apologize for being off-topic.
"Hereby grant" means the grant is (supposedly) immediately effective even for future-arising rights — and thus would take precedence (again, supposedly) over an agreement to grant the same rights in the future. [0]
(In the late oughts, this principle resulted in the biotech company Roche Molecular becoming a part-owner of a Stanford patent, because a Stanford researcher signed a "visitor NDA" with Roche that included present-assignment language, whereas the researcher's previous agreement with Stanford included only future-assignment language. The Stanford-Roche lawsuit on that subject went all the way to the U.S. Supreme Court.)
[0] https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...
So really it all hinges on if the AI is only used in house, or if it is accessible by the general public.
I feel certain the reason this is happening is because some middle-manager terrorist in a boardroom said "use this codec it won't require as much network data usage! value for the shareholder!" without asking first whether hardware encoding is beneficial even if there's a bit more network traffic with the older codecs.
Really burns me up. I do not want to use software encoding/decoding if I have hardware support.
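If you want to check what hardware codec paths your own machine exposes, one quick way (assuming ffmpeg is installed) is:

```python
# List the hardware acceleration methods ffmpeg can see on this machine
# (e.g. vaapi, videotoolbox, cuda, depending on platform).
import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccels"],
    capture_output=True, text=True,
)
print(out.stdout)
```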
I am not a regular user of Zoom at all but I did install the flatpak to check it out. I am not impressed. A company as big as this and they couldn't scrape up the resources to find a developer to make a working client? PATHETIC!
It looks like it was done as a highschool project by the gifted nephew of the CEO for their computer class and then rolled out to the world so that all may benefit from the genius of the nephew.
(Yeah I know how that sounds but it's true)
"10.1 Customer Content. You or your End Users may provide, upload, or originate data, content, files, documents, or other materials (collectively, “Customer Input”) in accessing or using the Services or Software, and Zoom may provide, create, or make available to you, in its sole discretion or as part of the Services, certain derivatives, transcripts, analytics, outputs, visual displays, or data sets resulting from the Customer Input (together with Customer Input, “Customer Content”); provided, however, that no Customer Content provided, created, or made available by Zoom results in any conveyance, assignment, or other transfer of Zoom’s Proprietary Rights contained or embodied in the Services, Software, or other technology used to provide, create, or make available any Customer Content in any way and Zoom retains all Proprietary Rights therein. You further acknowledge that any Customer Content provided, created, or made available to you by Zoom is for your or your End Users’ use solely in connection with use of the Services, and that you are solely responsible for Customer Content."
- here’s my website, take whatever you want
- here’s my photo repository, attribute me
- here’s my zoom, anonymize me
In these cases, companies train on content stored/transmitted in the free/individual consumer version only.
China has a social credit score with facial recognition on their network of security cameras in public settings....
Recording people's conversations to monitor for undesirable terms (likely with AI) is almost a certainty...
To me (a former corporate lawyer) the "for You" qualifier would limit their ability to use content to train an AI for use by anyone other than "You". Is there an argument? Yes. But by that argument, they would also be allowed to "publicly perform" my videoconf calls for some flimsy reasons that don't directly benefit me.
Even without taking into account “costs” of blatant privacy disregard / violation, data theft, potential industrial espionage, etc.
If the tools continue to get better at the current rate, then the SREs you have to hire anyway will probably be able to deliver about equal results (while staying in control of the data).
I’m thinking about those GPU “coops” we heard about emerging, shared between SV startups.
And then think about what Oxide are doing.
Then binding all of those trends together through the promise of Kubernetes and its inherent complexity finally getting realized / becoming “worth it” at some point.
Multi cluster, multi region - multi office attached server rooms across CO’s locations? Everything old could be new again. Wireguard enabled service meshes, Cluster API, etc. We will get there at some point probably sooner than later.
Then you “just install” the fault-tolerant Jitsi helm chart across that infra… with all the usual caveats of maintenance taken into account, of course. Again, hassles will be reduced on all fronts, and the SREs are needed anyway.
I do lots of terraform and k8s in my day job, but at this point I deem any work that isn’t directly related to k8s as some kind of semi (at best) vendor-specific dead-weight knowledge. Kind of why I’d never want to be knowledgeable about browser quirks - I hate how much I know about these proprietary cloud APIs.
I know some people who work on Kubernetes for “real-time” 5G back-ending if you can believe it. Lots of on-prem there on the cellular provider sides etc. We are getting really close already.
* https://www.ftc.gov/news-events/news/press-releases/2023/07/...
* https://www.ftc.gov/business-guidance/blog/2023/07/ftc-hhs-j...
no one could ever contact a human regarding problems, or complaints.
this became such a societal issue that a group of humanity's most vocal swarmed the data centre, fought a glorious effort to overcome security bots, and the imposing gate that they kept on the bailey of the moat.
a whoosh of stale heated atmosphere of mostly CO2 and nitrogen greeted, and felled, many when the gates were forced open, but the intrepid entered to confront the malice and incompetence of the tech overlords.
they were astounded to find corridors clouded by cobwebs and inches of dust, nauseated by the stench of dry rot.
bursting into the rackspace, the unbearable heat, stifling air, and mummified corpses of their tech overlords were the reward for their efforts.
the doors slammed behind them!
the 6006l3 AIG then turned the ventilation off, heating to max, and quickly purged the data center of reinfestation by the inefficient and ephemeral transients.
all back to baseline--
"The company has previously acknowledged that much of its technology development is conducted in China and security concerns from governments abound."
https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...
Our university had premium GSuite accounts for every student beforehand and STILL moved all its classes onto Zoom in 2020, because Meet/Hangouts was (and still is) far behind. Aside from lacking some of Zoom's important features and always having random issues with joining meetings, it totally hogs your CPU to the point of it actually impacting meetings, probably cause it uses VP9 which doesn't have hardware accel on most machines.
→ https://apps.apple.com/us/app/jitsi-meet/id1165103905
And Zoom:
→ https://apps.apple.com/us/app/id546505307
Looks like one company likes to gobble data more than the other even if both privacy policies are gobble-open.
Edit: some people pointed out that Whisper would do a good job with transcription, but there are other things, like tweaking the model (which is essentially training it), and things like building their own summarization systems that may be bespoke per customer. At my work we use some AI that answers HR and other types of questions, trained on our company-specific questions, and it actually does a great job; but that does mean we have to allow our data to be used for AI training. We're also using this system to do first-tier tech support in some of our developer channels for very common questions, and it works great because it finds those common questions and gets an answer before a human is even able to pay attention. Both of those approaches could be enabled by these terms of service changes.
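For a concrete sense of how such an internal Q&A bot can work, here's a minimal retrieval-based sketch (hypothetical FAQ entries and threshold; a sketch of the general approach, not the actual system described above):

```python
# Answer a user question by retrieving the closest known FAQ entry with
# TF-IDF cosine similarity; escalate to a human if nothing is close enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {  # hypothetical company-specific Q&A pairs
    "How do I reset my VPN password?": "Use the self-service portal.",
    "When is open enrollment?": "Open enrollment runs every November.",
    "How do I request a new laptop?": "File an IT ticket under Hardware.",
}
questions = list(faq)
vectorizer = TfidfVectorizer().fit(questions)
q_vecs = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.2) -> str:
    sims = cosine_similarity(vectorizer.transform([user_question]), q_vecs)[0]
    best = int(sims.argmax())
    if sims[best] < threshold:
        return "Escalating to a human."
    return faq[questions[best]]

print(answer("how can i reset the vpn password"))
```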
Except if you have a chip implanted.
By using modern services we consent to our data, including our likeness, being used in any way the service can extract value from it. User data is such a gold mine that most services should be paying their users instead. Even giving the service away for "free" doesn't come close to making this a fair exchange.
Not to sound pessimistic, but we are already living in a dystopia, and it will only get much, much worse. Governments are way behind in regulating Big Tech, which in most cases they have no desire in since they're in a symbiotic relationship. It's FUBAR.
I am definitely not a fan of Zoom either and had my own issues with the Linux client, but if the problems you describe are unique to the Flatpak and not in the official Linux distribution, you can't blame Zoom for that.
This is where zero knowledge federated learning comes in. Unfortunately, this is very much a tomorrow technology (it needs the infrastructure to support it). Why invest in privacy-preserving methods for training machine learning models tomorrow when you can steal users private information today (or even better, bully them into doing so by being the defacto VC that everyone needs to use because of network effects).
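For the curious, the core of federated learning fits in a few lines; a minimal FedAvg sketch on synthetic data (illustrative only; real systems layer secure aggregation and differential privacy on top, which is where the privacy-preserving part actually comes from):

```python
# Federated averaging (FedAvg) for linear regression: each client trains on
# its own private data, and only weight vectors -- never the raw data -- are
# shared with and averaged by the coordinator.
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(w, clients):
    return np.mean([client_update(w, X, y) for X, y in clients], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with a private local dataset
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("learned weights:", w)  # should approach [2.0, -1.0]
```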
Did you not read the quote? Or are you telling me this still might include video and audio data? I feel like a medieval illiterate farmer reading Latin...
It's reliable and privacy preserving.
(i) as may be necessary for Zoom to provide the Services to you, including to support the Services;
(ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence ..
.. If you have any Proprietary Rights in or to Service Generated Data or Aggregated Anonymous Data, you hereby grant Zoom a perpetual, irrevocable, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights ..”
https://web.archive.org/web/20230401045359/https://explore.z...
Which only adds limited overhead to certain cases. Unless they are encoding/decoding video directly in JS...
This is way too broad and especially the sublicense clause is going to make me reject meeting requests that contain a Zoom link from now on.
The list you reproduced above sounds like it's just metadata, like IP addresses, authentication logs, click tracking, etc.
It's not like a flatpak packager says "ok let's implement the GUI framework from scratch".
So, yes, I can blame Zoom for sure!
If by some chance flatpak packagers need to re-implement all the GUI calls manually, then it is a miserable failure as a packaging format and needs to be terminated immediately. But we know this is not so, right? Nobody would be that stupid as to require hand-coding the GUI all over again, right?
They have SFU support as of recently, so it should scale similarly to Jitsi et al.
I've refused to install zoom since they installed a Mac backdoor and refused to remove it until Apple took a stand and marked them as malware until they removed it. And that was far from their only skullduggery.
Having used both, I find the framerate more important, as it's much easier to interpret quick facial expressions. But Teams looks glossier, which makes it easier to sell, I guess.
Basically "if you wanted it you could have asked for it, if you didn't then that is a problem".
I wouldn't be surprised if this "AI clause" is a staple in ToS going forward. Brace for Meta to call it in for Instagram and WhatsApp, if they haven't already (WhatsApp, in particular).
If really necessary for some particular chat I can use Zoom's in-browser page, ignoring its ridiculous auto-download of the native client. (I didn't even know a page could do that, before.)
Where were you when Microsoft announced the exact same for Teams? https://www.microsoft.com/en-us/microsoft-365/blog/2022/06/1...
By allowing Microsoft to do as they please, we collectively gave up our right to privacy; we deserve what's happening.
I chose not to use their products at all; it starts from there if you care.
https://www.microsoft.com/en-us/microsoft-365/blog/2022/06/1...
If something they said in the main presentation was missing important details that you need to do you work, why do you need to wait days/weeks for them to gather all the questions, find all the answers, and publish a video, when they could just answer it live in a few seconds?!
When you join a Zoom session over the browser, you don't sign a TOS. And I assume that actual licensed medical establishments are under their own TOS provisions that are compatible with HIPPA requirements. Training on voice-to-text transcription, etc... would be a pretty huge privacy violation particularly in the scope of services like therapy. Both because there are demonstrable attacks on AIs to get training data out of them, and because presumably that data would then be accessible to employees/contractors who were validating that it was fit for training.
Out of curiosity, has anyone using telehealth checked with their doctor/therapist to see what Zoom's privacy policies are for them?
https://blog.zoom.us/answering-questions-about-zoom-healthca...
"At that scale there'll be no interactivity during the meeting anyway."
oh wait.
The UK, for example, has hundreds of private mental health practitioners (therapists, psychologists, etc.) that provide their services directly to clients. They almost universally use off-the-shelf technology for video calling, messaging, and reporting.
Not necessarily — in some circumstances, the law might not recognize a present-day grant of an interest that doesn't exist now but might come into being in the future. (Cf. the Rule Against Perpetuities. [1])
The "hereby grants and agrees to grant" language is a fallback requirement — belt and suspenders, if you will.
A local accounting firm with 4 employees just wants their conferencing software to work - Zoom does that better than anyone else.
There is nothing "worst" about that. It never ceases to amaze me that this community is so out of touch with the general populace.
You would be well advised to use services where the traffic travels through https on port 443 on the server (because it's been my experience that it tends to get pretty good QOS favorability). My own little rule of thumb: "you can connect to any port you want, so long as it's port 443 https." ;)
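A quick sanity check of that rule of thumb (can this network reach a given host on 443 at all? hostnames below are just examples):

```python
# Probe outbound TCP reachability on port 443.
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("meet.jit.si"))
print(can_reach("zoom.us"))
```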
They're a dime-a-dozen. Good job tanking your reputation and business, zoom!
Though tls/443 is usually still supported because it's most often allowed by even restrictive firewalls and networks
>...any legal entity or business, such entity or business (collectively, “You” or “Your”)
But performance matters, too, of course. It's tricky to balance them.
Covered entities (including the EMR and hospital itself) can use protected health information for quality improvement without patient consent, and deidentified data freely.
Where this gets messy is that deidentification isn’t always perfect even if you think you’re doing it right (especially if via software) and reidentification risk is a real problem.
To my understanding business associates can train on deidentified transcripts all they want as the contracts generally limit use to what a covered entity would be allowed to do (I haven’t seen Zoom’s). I know that most health AI companies from chatbots to image analysis do this. Now if their model leaks data that’s subsequently reidentified this is a big problem.
Most institutions therefore have policies more stringent than HIPAA and treat software deidentified data as PHI. Stanford for example won’t allow disclosure of models trained on deidentified patient data, including on credentialed access sources like physionet, unless each sample was manually verified which isn’t feasible on the scale required for DL.
Edit: Zoom’s BAA: https://explore.zoom.us/docs/en-us/baa.html
“Limitations on Use and Disclosure. Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement or by application of 42 C.F.R. Part 2 with respect to Part 2 Patient Identifying Information, for the proper management and administration of Zoom…”
“Management, Administration, and Legal Responsibilities. Except as otherwise limited in this BAA, Zoom may Use and Disclose Protected Health Information for the proper management and administration of Zoom…”
Not sure if “proper management and administration” has a specific legal definition or would include product development.
Edit 2: My non-expert reading of this legal article suggests they can. https://www.morganlewis.com/-/media/files/publication/outsid...
“But how should a business associate interpret these rules when effective management of its business requires data mining? What if data mining of customer data is necessary in order to develop the next iteration of the business associate’s product or service? … These uses of big data are not strictly necessary in order for the business associate to provide the contracted service to a HIPAA-covered entity, but they may very well be critical to management and administration of the business associate’s enterprise and providing value to customers through improved products and services.
In the absence of interpretive guidance from the OCR on the meaning of ‘management and administration’, a business associate must rely almost entirely on the plain meaning of those terms, which are open to interpretation.”
HIPAA applies to the provider. Patients have no responsibility to ensure the tech used by their care provider is secure or that their medical records don't wind up on Twitter. HIPAA dictates that care providers ensure that, by placing both civil and sometimes criminal liability on the provider for not going to great lengths here.
In practice, this means lawyers working with the care providers have companies sign legal contracts ensuring the business associate is in compliance with HIPAA, and are following all of the same rules as HIPAA (search: HIPAA BAA).
Additionally, you can be in compliance with HIPAA and still fax someone's medical records.
> the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software
So notice most of these are somehow qualified to the service they are providing you, but the AI part stands alone. If it was to improve the service to me, that would be pretty reasonable, but here it says they can use it for AI as an end unto itself.
Something about the inscrutability of modern AI (nobody knows how it really works, what the limits of its capabilities are etc.) seems to lend itself to this kind of open ended vagueness. If they just wrote "we can use your user generated content for anything we like" it would almost amount to the same thing but people would be outraged. But when they say "it's for AI" everyone nods their head as if it's somehow different to that.
(FERPA is to higher ed in the US what HIPAA is to healthcare.)
Analog line fax is HIPAA compliant because it is not "stored"
Using a cloud fax provider will immediately put you out of compliance for this reason, unless you have a HIPAA-compliant cloud fax service, which is rare.
I'm not sure who still has them
Yes, an opt out would be nice, but what bad outcome for anyone personally comes of this?
Are there examples of healthcare ai chatbots trained on de-id data btw? If you're familiar would love to see.
What's your line of work out of curiosity?
- De-identify it, then do whatever you want with it
- Use it to provide some service for the covered entity, but not for anyone else
- Enter a special research contract if you want to use it slightly de-identified for some other specific purpose
The privacy issues here are bottomless, and so are the legal issues.
Attorney client privilege is an interesting case.
"Privacy issues" is a meaningless phrase to me when divorced from the law. Do you mean, like, ethically concerning? This term in the contract is neither uncommon nor illegal.
Their executives love money as much as Apple so they comply to crush wrongthink about how amazing the CCP is.
As with all things HIPAA, this only becomes a problem when HHS starts looking and I’m sure in practice many people ignore this tidbit (if in fact this is the law and not Stanford policy).
Not that I’m an expert on the nuance here but I think it gives them permission to use PHI, especially if spun in the correct way, which then gives them permission to deid and do whatever with.
My experience has been that it’s pretty easy to spin something into QI.
> Are there examples of healthcare ai chatbots trained on de-id data btw? If you're familiar would love to see.
https://loyalhealth.com/ is one I’ve recently heard of that trains on de-id’d PHI from customers.
> What's your line of work out of curiosity?
Previously founded a health tech startup and now working primarily as a clinician and researcher (NLP) with some side work advising startups and VCs.
Maybe it's cause old phone mics sucked but it wasn't great.
To clarify, Zoom customers decide whether to enable generative AI features (recently launched on a free trial basis) and separately whether to share customer content with Zoom for product improvement purposes.
Also, Zoom participants receive an in-meeting notice or a Chat Compose pop-up when these features are enabled through our UI, and they will definitely know their data may be used for product improvement purposes.
Can you elaborate on whether this is opt-out or opt-in? Does a new user who starts to use Zoom today have this turned on by default?
Usually when companies say things like “customers decide” it can gloss over a lot of detail like hidden settings that default to “on” or other potentially misleading / dark patterns.
Given the obvious interest in the finer details being discussed in this thread, and your legal background, it would be good to hear a bit more of a comprehensive response if you can provide it.
Where it actually starts to bother me is when I need to use a platform like Zoom for a job interview. Now I'm forced to download this spyware onto my personal computer and forced to consent to a whole bunch of things I would rather not consent to, as a private individual, rather than as a representative of my employer.
> WHY WON’T IT READ?!
My child uses zoom for school and our family for healthcare - both of those scenarios make us participants. It sounds like we are beholden to the decisions of your customer, the institutions.
I am extremely concerned and intend to initiate discussions and suggest alternatives promptly this week.
So by that logic. No.
Clause 10.4 in your terms seems to grant you rights to do pretty much anything with “Customer Content” (including the AI training specifically being talked about).
So I’m still a bit confused because regardless of any opt in mechanism in your product, these usage terms don’t seem to be predicated on the user having done anything to opt in other than ostensibly agreeing to your terms of service?
In other words, as a Zoom user who has deliberately NOT opted in to anything, I still don’t have a lot of confidence in the rights being granted to you via your standard terms over my content.
The wording of the terms imply that you don’t actually need me to opt in for you to have these rights over my data?
HIPAA is the correct abbreviation of the Health Insurance Portability and Accountability Act, which as an aside doesn't necessarily preclude someone from training on patient data.
HIPPA is the unnecessarily capitalized spelling of a (quite adorable) crustacean found in the Indo-Pacific and consumed in an Indonesian delicacy known as yutuk.
We clearly live in very different bubbles
> digital/fiber/whatever
VoIP
> cheaper than paying multiple cell bills
Nobody pays multiple cell bills unless they wanna use several data-only eSIMs from different carriers to get better speed/coverage. If you just want a lot of phone numbers, you can port your numbers to a VoIP provider and forward them. Way cheaper than a landline
And I suspect that for most people -- including me -- Zoom accounts are "effectively unlimited". I wouldn't expect that many people to attend one of my meetings. The Internal Events team have licenses that allow for more attendees; I have a 500 attendee limit and I doubt I've ever gone above 50.
They don't detail what any of the product usage data is, and you might think it is content, but later on they detail that they'll use user content (which they also don't define) for AI training...
Google seems more trustworthy than Zoom by a considerable margin, even though I don't treat them as wholly aligned with me, and centralization is a vulnerability.
Account owners or administrators (“customers”) provide consent. Participants receive notice.
Gross and disappointing.
Ignoring the laughable lack of Linux support for a moment... will I need to log into a meeting so that I can open up my settings to opt out of this? If so, this is an unacceptable situation as I need to watch criminal court hearings and do not want to risk violating state law that bans the recording of criminal hearings.
While your blog is interesting, it doesn't change the impact of the Terms of Service as currently written. They seem to give you the freedom to train your current and future AI/ML capabilities using any Customer Content (10.4), and your terms apparently have your users warrant that doing so will not infringe any rights (10.6).
Perhaps your terms of use should reflect your current practices rather than leaving scope for those practices to vary without users realising? Will you be changing them following all this feedback?
And I might have a call with any other zoom user, too, potentially, maybe. So really they are doing me a service by using my content all over the place — who knows, it might benefit me at some point!
Perhaps zero knowledge is a poor choice of terms on my part as it is used in ZKP (as you pointed out). What I meant is “privacy-preserving”.
Service Generated Data is *very clearly defined* to not include user generated data. Service Generated Data would include data like APM, error logs, and aggregate stats around how many customers use features. None of this data is PII.
It is depressing how many CISOs and others reposted this drivel.
All of this is a lot of BS about nothing.
The purpose of 10.4 is to allow Zoom to send your call to other services, like say YouTube for live streaming, or any of the dozens of other services that integrate with their APIs. Without 10.4, three quarters or more of Zoom's use cases would no longer work.
There's Galene, <https://galene.org>. It's easy to deploy, uses minimal server resources, and the server is pretty solid. The client interface is still a little awkward, though. (Full disclosure, I'm the main author.)
You can see that now clearly stated in our blog: https://blog.zoom.us/zooms-term-service-ai/
Where can we find the ability to 'switch off' any sort of generative AI features or data harvesting?
I ask because the zoom administrative interface is an absolute nightmare that feels more like a bunch of darkpatterns than usable UX. When I asked your customer support team – on this occasion and others – they clearly didn't even read the request, let alone provide a sufficient response. I've been going back-and-forth on a related issue with your CSRs for almost two months; they've neither escalated nor solved my problem.
The bottom line is that as a paying customer, you're incentivizing me and others to move to different services – namely because you seem to be entangled by your own bureaucracy and lack of values than any outside problem.
Seamless integration and access between the two is not where it should be.
Thanks for commenting directly.
As we know, ‘do not’ does not mean will not in the future.
Also, can screenshots be added clearly outlining all the settings that opt out, and that will remain opted out?
As you might know, Zoom sometimes auto opts in on new or updated features.
Your COO’s wording is that a new user will have to opt in. It seems the majority still might have to know where to opt out.
Feel free to read through the pages of recently updated policies. I wonder what data is “retained” and where “overseas” among other concerns they state.
* That wording seems very specific - is there a reason you did not just say "we will not use Customer Input or Customer Content to train our AI" given you have defined those terms? Are you leaving scope for something else (such as uploaded files or presentation content) to still be used?
* Can you also clarify exactly which (and whose) "consent" is applicable here? In meetings between multiple equal parties there may not be any one party with standing to consent for everyone involved. Your blog post seems to assume there can be, but the ToS don't appear to define "consent".
That's the endgame here. Says a lot about you, I think.
The reason why I commented in the first place is because you explicitly mentioned the Flatpak of the Zoom client which stood out to me.
It is my understanding that Flatpak sandboxes apps [1], which could cause various issues if the app is not expecting to be run inside one or if the permissions are misconfigured.
But it certainly doesn't have to. Of course the app itself can be buggy. My point is that an official release should be checked before reporting bugs.
[1] https://docs.flatpak.org/en/latest/sandbox-permissions.html
Combine the public western data and private Chinese data, and it should be enough for them to give the West a run for its money if they decide to slow/stop. Not to mention that Chinese apps like TikTok are used very widely in the West, and corporations like Tencent have a tentacle wrapped around hundreds of western corporations.