ARTICLE SERIES: Part 3

Collecting and Using Data: Is There a Gap Between Law and Ethics When It Comes to Consent?

By Dylan Monorchio
October 10, 2023

Backlash around Zoom’s use of potentially sensitive data to train its artificial intelligence raises questions about how tech companies establish consent. Are terms of service still effective as companies ramp up the use of your data?

Recent backlash over concerns that Zoom was training its artificial intelligence (AI) on potentially sensitive information compelled the company to be more transparent and upfront about the terms and conditions for its newly launched AI features. Part of Zoom’s initial response was an emphasis that it would only collect data from those who consented: “For AI, we do not use audio, video or chat content for training our models without customer consent.”

But in many cases, consent is established through a tacit agreement made simply by opting into some free features and perhaps clicking a checkbox. Zoom’s feature had been live for months before a sleuthy tech blogger unearthed the problematic clause and brought it to light.

What does this say about how well user agreements, “terms of service” and “terms and conditions” work? 

Is the way we establish consent broken?

Despite a reputation already damaged by past failures of transparency, Zoom came under fire once again when a clause buried in its March 2023 terms of service granted the company permission to use meeting recordings to train its own AI. Once the clause was discovered and revealed to the wider public, the backlash triggered an attempt to land on more acceptable terms, producing at least two reformulations of the policy within a week.

Originally, the policy granted Zoom “a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to record, reproduce, and share “Customer Content” including for the purpose of training and testing its AI. It was changed on August 8 to specify that “for AI, we do not use audio, video, or chat content for training our models without customer consent.”

That was not enough to assuage the rising voice of discontent, and the policy was updated again on August 11 to explicitly state that Zoom would not use “any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content” for training its AI. Zoom did, however, retain the right to use “Service Generated Data,” and it removed the verbiage around consent. (That removal is redundant in one sense, since agreeing to the terms and using the service already establishes consent, but in another sense it signals that Zoom doesn’t believe it should seek special, separate consent for that data.)

The distinction rests on the understanding that “Service Generated Data” does not include sensitive customer content, but that may be a dubious distinction where protecting sensitive data is concerned. It is also unclear how terms like this relate in practice to transcripts (telemetry data, which is part of Service Generated Data) that contain personally identifiable information (PII).

“[We] don’t have enough context or information around proper encryption of PII when it comes to transcription,” cautions Jon Kazarian, CEO of Accelevents. “In order to build the model, companies need to transcribe every conversation, and now you have PII in these transcriptions. To me, that’s inherently a problem.”
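To make the concern concrete: before transcripts are stored or fed into a training pipeline, PII would need to be reliably stripped out. The sketch below is a minimal, hypothetical illustration of that step using simple pattern matching; the patterns and function names are assumptions for the example, not a description of how Zoom or any other vendor actually handles transcripts (production systems typically rely on dedicated entity-recognition tools and cover far more categories).

```python
import re

# Illustrative patterns for a few common PII types (hypothetical example only).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact_transcript(text: str) -> str:
    """Replace recognizable PII in a transcript with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "My card is 4111 1111 1111 1111, or just email me at jo@example.com."
print(redact_transcript(sample))
# -> "My card is [CARD REDACTED], or just email me at [EMAIL REDACTED]."
```

Even a step like this only mitigates the problem Kazarian describes: anything the patterns miss still ends up in the training data.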

If the above sounds like a complicated technical issue to you, you’re not alone. And the obstacles to understanding the terms well enough to meaningfully consent to them are not merely a matter of their technological nuances.

Phi Lan Tinsley, a partner at K&L Gates LLP who specializes in intellectual property counseling and litigation, explained that there are two key components to a company’s rightful use of data that might be considered owned or protected: notice and consent.

  • Notice is established when a company presents the terms of use to you. 
  • Consent is established when you accept them by signing a contract, checking a box in an account creation form or, in some cases, simply using a service or viewing a website.

“Everything you have consented to is in the terms of service and the relevant links. While it may be confusing, it nevertheless confers a responsibility on the part of the users who agree to it because it puts them on notice,” says Tinsley. “As long as there are opportunities to opt out, this is fair legal practice.”

User Agreements Are Not User Friendly

For Tinsley, the challenge is that contracts need to be extremely explicit about every detail related to the collection, storage and use of data, which makes it difficult to make user agreements simple or succinct.

“Clients usually intend to be transparent,” says Tinsley, “but it’s complicated to find a balance between being transparent and presenting all the information that people care about in a way that makes it easy to agree to so they can get on with using a service.”

This is exacerbated by the fact that user agreements are often actually a compilation of multiple agreements, which might include a separate privacy policy, a definitions page covering key terms and services, references to relevant laws and a series of other sub-agreements found in various places. There may also be any number of regional variations that are often left to the user to sort through.

A case in point:

Zoom’s current terms of service agreement is a dry 14,000-word document that includes paragraph-length sentences. The document links to no fewer than 10 additional agreements, lists, laws, and policies (although not all of the links work). Each of these contextualizes the terms in the original agreement, so a user would theoretically need to familiarize themselves with all of them to fully comprehend everything they were signing up for.

However, Tinsley maintains that it is essentially impossible not to structure the terms this way because, if lawyers actually included all the required information in one place, the result would be an impossibly long document that clients would balk at.

Nobody Expects Normal Professionals to Read Terms of Service

What is the upshot of all of the above?

Terms and conditions are not written to be read so much as to meet a legal notice requirement. 

Companies knowingly (if not deliberately) write them in a way that essentially requires a considerable research project and a law degree to understand. The result is a burden so onerous that the norm for almost everyone is a blind, dismissive box-ticking ritual just to get the technology up and running so they can get on with their lives.

Does the total impracticality of wading through multiple lengthy documents to turn on a feature give users any recourse if they accidentally consent to something problematic?

Well, when it comes to interpreting anything you’ve signed, a combination of state and federal laws and the “reasonable man” standard is applied to determine whether an abstract person should have known better. 

While it’s not at all obvious that the average Zoom user would find it reasonable to fully understand the terms they’re tacitly agreeing to when they enable a new feature, Tinsley points out that the definition of “reasonable” depends on the parties to a given agreement. If the agreement is between corporate entities, for example, what is “reasonable” could extend to whatever corporate lawyers should be able to manage.

The rub is that corporate clients who are likely to have dedicated legal resources don’t actually get the same agreements as everyone else. “Large enterprises will likely have different contracts than standard users,” notes Panos Moutafis, CEO of Zenus. “Big companies with processes and resources to address risky situations are treated differently than smaller ones.”

For example, NoteAffect has begun collecting data to train AI that may in the future be used to analyze presentations and provide sentiment data. “Corporates have specific user agreements and settings that allow them to opt out of making their presentation materials or user-generated content available to train AI,” says NoteAffect founder and CEO Jay Tokosch. These permissions don’t exist for the education sector or events. “If a speaker or organizer doesn’t want to allow a presentation to be collected, they’re simply advised not to use the service.”

Corporate contracts usually default to friendlier, more agreeable terms in part because tech companies know someone qualified will actually read them, vet them and redline anything exploitative, overreaching, unacceptable or otherwise in conflict with their interests. For tech companies, it’s worth it to make these concessions to win more valuable enterprise-level business, and they will often remove problematic clauses from the beginning in order to expedite the process.

“All other Zoom users simply need to be conscientious about how they use tools and where they have certain conversations,” says Tinsley. This is because trade secrets divulged in compromising situations can simply cease to qualify as trade secrets based on how employees treated them. As such, employees who accidentally disclose anything proprietary or sensitive while using technology they don’t understand may find themselves in violation of the policies their companies enacted to protect those secrets.

This is true whether or not that confidentiality was violated by using AI trained on user inputs, as might have been the case with employees using ChatGPT before people became aware that OpenAI collects the information users input and may reproduce it elsewhere.

As such, the AI learning curve exposes companies large and small to risk. While large corporations have dedicated legal teams, lawyers are generally not AI experts, and it’s conceivable that those unfamiliar with the technology may not recognize a potential risk to their organizations even if they came across a problematic clause in a corporate contract.

The same concept applies to account- or user-level permissions. One of the solutions Zoom enacted was to reserve the ability to enable AI features for account administrators. 

For Kazarian, whether it’s an account-level permission or a user-level permission is not really relevant unless there’s transparency and clarity around the risk. Administrators for an average corporation are not necessarily going to be in a better position to assess risk than other staffers. “It’s just like ChatGPT. Anyone can go on it, and the way it works, and therefore the risks, are poorly understood.”

This lack of understanding is also reflected in the standards developers use to substantiate the integrity of their data protection when it comes to training AI models.

“As part of SOC2 compliance, processors are required to disclose who their subprocessors are, but they’re not obligated to disclose how their subprocessors may use data in models,” says Kazarian. “I don’t think consumers should have to go out of their way to investigate that information.” 

But Kazarian points out that there is a level of societal, normative knowledge that consumers will eventually need to adopt as a reasonable standard to compensate for potentially risky user agreements. “Everyone understands that they shouldn’t say their credit card number in a public space. We’ll learn to bear similar things in mind when dealing with AI. It’s just a cultural change that needs to happen.”

For Adam Parry, editor-in-chief of Event Industry News, part of the ethical responsibility also lies with organizations implementing new technologies to provide the correct training so employees don’t end up in hot water. “There won’t be enough companies with enough processes in place early on to safeguard them from that,” says Parry, who worries that some organizations may have to learn from publicized mishaps that become industry warnings.

Some generative AI applications raise questions about whether there are new classes of information in events that participants might have an interest in protecting – and whether or not these applications might ethically require some additional level of disclosure or consent.

While consent for the purposes of PII, GDPR, and CCPA is fairly straightforward, it gets more complicated at events when you’re using novel technology to collect different kinds of information for particular purposes, says Bob Vaez, CEO of EventMobi. “If attendees at an event are sharing their expertise and that is going into an AI model, you probably want to get consent and tell them where this data is going to go in your terms of service.” 

A positive use case for collecting participant data at an event could be using attendee feedback to generate ideas for the next event, but in the interest of establishing limits, clarity, and transparency, “you need to be clear about the usage and why you’re collecting the data,” says Vaez.

This is especially true if new types of data are being collected, or if data is being collected for new or undefined purposes that event participants might not be used to. However, in some cases, it’s not even clear to the company collecting the data where it will ultimately go or what it might be used for in the future.

Presumably that’s why clauses in terms of service agreements are often so all-encompassing and far-reaching. But as the recent Zoom controversy demonstrated, companies need to strike a careful balance between leaving the door open to new ideas for development on the one hand, and setting clear expectations that won’t leave consumers uneasy or wondering how risky their participation is on the other. 

“When you introduce a new product to a marketplace, you need to go slow. [Otherwise], people can’t comprehend it and they don’t want it,” says Tokosch. NoteAffect has developed an interactive notation tool and is currently training its AI to be able to understand user-generated notations and whether they’re positive or negative. “When we introduced our offering, we tried to make it easy to understand. That way, people can ask for more information as they get the hang of what we’re already offering, and we’ll already be collecting it.”

Tokosch hasn’t decided exactly how the data will be used and was initially cautious in using terms like “AI” to describe the features being developed, but doesn’t believe consent is an issue.

“We’re collecting a lot of data; some will be useful and some won’t,” says Tokosch. “I don’t feel like I’m collecting anything that is private. I’m not personally looking at their notes, and I don’t think anyone would be upset if I monitored the activity with the content itself and then produced information about whether the notes were positive or negative.”

Aggregated sentiment analysis can indeed be beneficial if it is anonymized, but one of the use cases Tokosch is considering is assessing the performance of particular observers of a presentation.
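As a rough illustration of the difference (the data model here is a hypothetical sketch, not NoteAffect’s actual design), aggregated sentiment can be reported per session without retaining any link between a note and its author, whereas per-observer scoring necessarily preserves exactly the attribution discussed below:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Note:
    session_id: str
    author_id: str   # known at collection time
    sentiment: str   # e.g. "positive", "negative", "neutral"

def aggregate_by_session(notes: list[Note]) -> dict[str, Counter]:
    """Summarize sentiment per session; the author of each note is never retained."""
    report: dict[str, Counter] = {}
    for note in notes:
        report.setdefault(note.session_id, Counter())[note.sentiment] += 1
    return report

# Keying the same report on author_id instead would score individual observers,
# which is the kind of attribution attendees may object to.
notes = [
    Note("keynote", "a1", "positive"),
    Note("keynote", "a2", "negative"),
    Note("keynote", "a3", "positive"),
]
print(aggregate_by_session(notes))
# -> {'keynote': Counter({'positive': 2, 'negative': 1})}
```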

Artificial intelligence changes the nature of both what can be recorded and what recordings can be used for. Attendees and other event participants might object to having their comments, notes, or questions traced back to them personally, especially if they express objections that could damage a professional relationship. 

This could also produce an “observer effect” wherein users who know they’re being tracked personally might not be disposed to leave honest feedback, which could undermine the sentiment analysis itself. 

“If attendees ask a question from the speaker, where is the data going to go and who is going to use it? I might not put a random comment on a public forum that could come back to me personally,” says Vaez.

The Zoom example can also be instructive in that AI that records event content presents a risk that potentially sensitive or proprietary material from talks, presentations, etc. might become part of the AI application’s general knowledge set. This could have implications if the AI is ever used to inform or generate content. For example, if you’re using the AI to evaluate presentations, it’s not hard to imagine a follow-up feature that uses the same AI to help people fine-tune or even wholly create presentations based on what performs well.

Could AI give event participants a new reason to worry about sharing expert insights?

“If I have worked hard to develop some research or expertise, I might not share my insights in a comment or a Q&A because I don’t want to feed it to an AI engine that might make it freely available in some form,” notes Vaez.

Vaez adds that consent around virtual event participation is somewhat easier to manage because individuals can be segmented out, but the same doesn’t apply onsite. “That’s why most events make you acknowledge and consent to the fact that you might be photographed. If you’re there, you’ll probably be captured.”

Indeed, the impracticality of establishing meaningful consent from everyone on a trade show floor to have their biometric data collected is in part why Zenus switched from offering facial recognition to offering anonymized facial analysis as part of its “ethical AI” rebrand. 

Even though facial analysis doesn’t require explicit consent under GDPR, Zenus recommends that event organizers be transparent about using it at their events to establish trust.

Companies have policies around sensitive data, and both managers and employees using new technology should be trained appropriately to protect proprietary information and trade secrets. Otherwise, the confidentiality of this information can be sacrificed easily and unknowingly by, for example, asking ChatGPT to summarize an internal document or talking about projects on a work call with the wrong meeting technology.

Companies with the resources to properly vet unreadable contracts can push back on terms that conflict with corporate interests, but the rest of the market isn’t powerless. Zoom’s response to backlash proves that the collective commercial power of smaller-scale users also creates leverage that can compel a company to act more ethically – even if it takes users six months to unearth the problem and start talking about it.

To support this process, use purchasing power to establish industry standards around clear, concise user agreements by simply refusing to work with providers who fail to make that process straightforward. At the very least, make it a requirement that the issuing company highlight and summarize any conditions that deal with the collection, storage, and use of data and anything that involves AI.

You should take the same approach when establishing transparency around your collection and use of attendee and other event participant data. Moreover, event professionals bear a responsibility in helping to establish norms and best practices that ensure everyone can participate safely and comfortably without being caught out by a new piece of technology.

Kazarian believes the onus is on tech companies to establish clear and explicit consent around AI. For example, the same opt-in mechanism used for call recordings should also apply to anything transcribed for the purpose of training AI. “Any time anyone is recorded or transcribed in any format, there needs to be disclosure to all participants.”

In Zoom’s defense, when logging into a meeting, participants are warned if AI features have been enabled. They can then either leave the meeting or acknowledge that they’ve received notice and proceed.
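Stripped down, that notice-and-proceed flow amounts to gating entry on an explicit acknowledgment. The sketch below is a generic illustration of the pattern with invented names; it is not Zoom’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Meeting:
    ai_features_enabled: bool
    acknowledgments: set[str] = field(default_factory=set)  # participant IDs

def request_join(meeting: Meeting, participant_id: str, prompt=input) -> bool:
    """Admit a participant only once the AI-feature disclosure has been acknowledged."""
    if not meeting.ai_features_enabled:
        return True
    answer = prompt(
        "AI features are enabled for this meeting and content may be processed "
        "to generate summaries. Type 'accept' to join, or anything else to leave: "
    )
    if answer.strip().lower() == "accept":
        meeting.acknowledgments.add(participant_id)
        return True
    return False  # participant declined and is not admitted
```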

Michael Dodd, CEO of PlanningHub, agrees that opting in rather than having to opt out would be the most ethical way to go forward. “At the very least, it should be something you need to actively agree to at the time of sign-up, and it should be very clear what you’re signing up for.”

To make that clear, an easy solution would be a separate tick box with a short line explaining what you’re opting into, noting that there are implications for your data privacy, and linking directly to the relevant terms.

“At the very least, it would make it easy to opt out of those terms separately,” says Dodd. “That way, people who don’t really care can just opt in. People who are curious can see the terms without having to go through the whole terms and conditions. And anyone who wants to exercise caution can just not check the box.”
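A rough sketch of what Dodd describes might look like the following, where consent to AI training is stored as its own explicit, unchecked-by-default flag rather than being inferred from general acceptance of the terms. The field names and link are placeholders, not any vendor’s actual sign-up flow.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

AI_TERMS_URL = "https://example.com/terms/ai-data-use"  # placeholder link shown next to the checkbox

@dataclass
class SignupConsent:
    accepted_general_terms: bool
    allow_ai_training: bool   # separate checkbox, unchecked by default
    recorded_at: datetime

def record_signup(accepted_terms: bool, ai_opt_in_checked: bool = False) -> SignupConsent:
    """Store AI-training consent as its own flag, never inferred from the general terms."""
    if not accepted_terms:
        raise ValueError("The general terms must be accepted to create an account.")
    return SignupConsent(
        accepted_general_terms=True,
        allow_ai_training=ai_opt_in_checked,
        recorded_at=datetime.now(timezone.utc),
    )

# A user who skips the extra box still gets an account, with AI training opted out.
print(record_signup(accepted_terms=True))
# -> SignupConsent(accepted_general_terms=True, allow_ai_training=False, recorded_at=...)
```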

Moutafis would also like to see more transparency around the ability to opt out after consenting, and around data storage more generally. “In many cases, even if you delete your account, you don’t know whether your data has really been deleted,” says Moutafis. “And in some applications, the mechanism for deleting it is difficult to find by design because some companies put a lot of stock in their numbers of users.”

While Zoom rolled back its problematic terms, the fact that they appeared in the first place signaled an increasing burden of vigilance on the part of professionals using common platforms. It also highlighted how flaws in the norms around establishing consent can have practical implications for your data security.

Opting into a service may amount to signing away your data protections in a legal sense, but when terms of service are knowingly written in a way that discourages reading them, establishing consent becomes more of a “gotcha” than an ethical practice.

This is a particular risk when it comes to new technology for which the norms and best practices are not yet well or widely established. Services like Zoom and ChatGPT are being used liberally in the course of our daily work, and adoption tends to outpace risk awareness at the peril of unwitting users. 

While corporate clients may have dedicated resources for reviewing user agreements and internal policies to control information sharing, users still need to be conscientious whenever a new technology is brought into the mix as they may be consenting to things they don’t understand.

Moreover, normal users in small- to medium-sized businesses have some leverage as well, and can compel companies to operate more ethically when it comes to consent. 

“Zoom acknowledged that they made a mistake and took steps to remedy the problematic terms and to establish transparency around it,” notes Tinsley. “The more transparent a company is, the more they protect themselves from backlash and ultimately from any legal liability.”
