zlacker

How Zoom’s terms of service and practices apply to AI features
1. berbec+Vp 2023-08-07 18:44:42
>>chrono+(OP)
This is a nice statement, but the TOS is the important part, not what this marketing piece says.

> You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content.

> (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof

2. mplewi+Ct 2023-08-07 18:56:57
>>berbec+Vp
The TOS has been updated to state the following:

> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.

3. pseudo+MG 2023-08-07 19:40:17
>>mplewi+Ct
"...will not use...to *train*..." (emphasis mine)

They'll do inference all day long, but not train without consent. Only being slightly paranoid here, but they could still analyze all of the audio for nefarious reasons (insider trading, identifying monetizable medical information from doctors on Zoom, etc.). Think of the marketing data they could generate for B2B products, because they get to "listen" and "watch" every single meeting at a huge swath of companies. They'll know whether people gripe more about Jira than Asana or Azure DevOps, and what they complain about.

4. btown+ST 2023-08-07 20:41:25
>>pseudo+MG
This is really important, and I would further emphasize the word "our". Zoom doesn't need permission to "train" their own in-house artificial intelligence model when it can just transmit/sublicense that data to someone else who will train a model, or to an internal team who will use it (perhaps in few-shot prompts at scale, which is not technically training a model!) for "consulting services" in the broadest sense that team can imagine.

I feel like the general slowdown of capital availability in our industry is leading companies to do far more desperate things with data than they've ever done before. If a management team doesn't think they'll survive a bad couple of quarters (or that they won't hit performance cliffs that let them keep their jobs or bonuses), all of a sudden less weight is placed on the long-term trust of customers and more on "what can we do that is permitted by our contract language, even if we lose some customers because of it?" That's the moment when a slippery ethical slope comes into play for previously trustworthy companies. So any expansion of a TOS in today's climate should be evaluated closely.

5. bonest+811 2023-08-07 21:21:00
>>btown+ST
> If a management team doesn't think they'll survive a bad couple of quarters (or that they won't hit performance cliffs that let them keep their jobs or bonuses)

Agreed, and these kinds of short-term incentives are one of the problems with American companies. On the flip side...

Japanese companies think about products in decades -- the product line has to make money 10 years from now.

Some old European brands think about their brand in centuries -- this product made today has to be made with a process and materials that will make people in 100 years think that we made our products at the highest quality that was available to us at the time.

6. callal+Y61 2023-08-07 21:54:02
>>bonest+811
Got any data to back this up or are you just spouting racist tropes?