ARTICLE SERIES: Part 2

AI Pitfalls: What Risks Come with Artificial Intelligence for Events?

By Dylan Monorchio
September 26, 2023

Artificial intelligence (AI) has the potential to revolutionize the way we work, but there’s a proportionate potential for risk – not just to our businesses, but to each other and to the quality of our events – that requires consideration, careful planning, and caution as we move forward.

In the last issue, El Gazzette offered a taste of the ethical considerations around jumping headlong into AI implementations. This week, we’ll delve deeper into some of the better-known issues as well as some reasonably foreseeable concerns.

The dangers of AI in its current state range from a decline in the quality, diversity, and authenticity of events to weightier consequences involving data privacy, intellectual property, and outright fraud.

Some believe that the event industry is in a strong position to preserve its integrity because the nature of our business is to create meaningful, productive connections between humans. Others see this as the locus of the risk, and believe a community slow to adopt technology may have to learn the hard way.

What should event professionals be aware of, and what can they use as a guiding light as they begin to experiment with AI applications in their own tech stacks?

Runaway AI: Rapid Development Outpacing Human Control

While not specifically relevant to AI applications in events, the doomsday headlines dominating the coverage outside of B2B media outlets do contextualize some of the more immediate concerns and temper the optimism of some who choose to focus solely on the benefits of generative AI.

The rapid development and deployment of increasingly capable AI from all major players has some experts reeling as they watch the rules of safe development being broken one at a time (e.g. don’t teach AI to write code, don’t open it up to the general public, don’t give it access to the internet).

While the launch of ChatGPT amazed and delighted us a few months ago, the race on the part of tech giants to get in the game and one-up each other is well underway. Many CEOs and high-ranking experts in the field are pushing on while themselves issuing warnings “that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics.” Others have simply quit in protest.

Producing an AI that is difficult to control is one version of the existential concern, but an AI with its own agency isn’t the only outcome that could have dire consequences. Many aspects of the technology are open to the public, although very expensive to develop, and bad actors using AI systems for malevolent purposes pose a significant risk. This is partly why the pace of development is so unhinged: the impracticality of halting development is compelling developers to outdo each other to protect their own security interests.

“Business-to-business (B2B) use cases should be fairly straightforward to regulate because, if someone falls out of compliance, you have fines and other penalties,” explains Panos Moutafis, CEO of Zenus, which brands itself as an ethical AI provider. “Where it gets tricky are governments training it without control or regulations. The weaponization of AI is a big unknown.”

Fake Information and Phantom Events

Even a less dystopian future could be problematic for the event industry as event professionals and marketers increasingly rely on ChatGPT and other forms of AI to conduct research and ideation for events.

“Hallucination” is the term used when generative AI is asked for real information and provides incorrect, inaccurate, or blatantly fabricated outputs. These hallucinations fundamentally undermine the efficiency promise of AI research at an operational and decision-making level; they often require at least as much legwork to validate as conducting original research in the first place. In the worst case scenario, they can even endanger the reputation of businesses (and individuals) who use AI as a stand-in for their own expertise.

Event professionals using generative AI for procurement have to be cautious when sourcing speakers, venues and other partners as results might not exist or might be a misleadingly plausible composite of things that do exist.
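
One practical safeguard is to treat every AI-sourced suggestion as unverified until it can be matched against a trusted record. The sketch below is purely illustrative – the directory, field names, and matching rule are placeholders rather than any particular tool’s API:

    def vet_ai_suggestions(suggestions, trusted_directory):
        """Split AI-proposed speakers or venues into verified and needs-review lists."""
        verified, needs_review = [], []
        for item in suggestions:
            record = trusted_directory.get(item["name"].strip().lower())
            # Accept a suggestion only when a trusted record exists and key
            # details agree; everything else goes to manual research first.
            if record and record.get("website") == item.get("website"):
                verified.append(item)
            else:
                needs_review.append(item)
        return verified, needs_review

Nothing in the needs-review list should reach outreach, contracts, or a published program until a human has confirmed it exists.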

A lack of authenticity and accuracy also has implications for fake events, which remain a significant problem. By eliminating most of the labor in executing event websites, writing content, and rapidly coming up with speaker listings, generative AI makes it that much easier for malicious actors to fabricate events. As such, speakers, sponsors, and other stakeholders also have to be more vigilant about verifying any requests.

Copyright Infringement and Content Ownership

False information is not the only problematic content an output can contain. Generative AI also frequently draws on material that may be protected by copyright or otherwise belong to an existing author, artist, or other creator.

“Large language models (LLMs) could pull from things that constitute intellectual property,” explains Phi Lan Tinsley, a partner at K&L Gates LLP who specializes in intellectual property counseling and litigation. “Once they produce an output derived from it, that establishes a question of ownership and responsibility. Have we violated a license or a copyright? Have we used IP that we have no right to use?”

Violations of this nature have led to a number of lawsuits from those who claim their work has been used in AI-generated written content, images, audio, or video without their permission. According to Tinsley, anyone using generative AI to produce an output should be very careful when it comes to attribution.

Zoom, Data Ownership and Proprietary Information

In terms of problematically publishing content that someone else owns, plagiarism is only the tip of the iceberg. A lack of clarity and control over what might end up in an AI’s output has fuelled recent controversy over Zoom’s potential use of meeting recordings in training its own AI.

While bad actors present definite ethical risks, Zoom’s practices raise questions about how cautious we need to be when giving potentially sensitive information to trusted actors as well.

A recent version of their terms of service read as follows:

“You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of … machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”

The primary fear is that companies building AI into their platforms will collect, store, and possibly even share proprietary information – a possibility with implications for any organization using a meeting platform for sensitive internal meetings.

“When you’re opting in at the account level for a meeting platform that’s going to record your information, transcribe it, store it who knows where, and feed it into a model that’s a little bit of a black box – or at least might produce hallucinations that unveil it to anyone – that’s a problem,” says Jon Kazarian, CEO of Accelevents.

The default safeguard is a combination of transparency and consent, but for Kazarian, the onus for establishing consent needs to go beyond an open-ended clause buried in a terms of service agreement. Kazarian believes that the same active opt-in consent mechanism typically used for call recordings should also be applied to any transcription that trains an LLM.
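
As a thought experiment, an opt-in gate along the lines Kazarian describes might look like the following sketch, in which a meeting’s transcript only becomes eligible for training if every participant has actively consented. The data structures are hypothetical, not drawn from any vendor’s implementation:

    def training_eligible_transcripts(meetings, consent_log):
        """Yield transcripts only from meetings where every attendee opted in."""
        for meeting in meetings:
            # Mirror call-recording etiquette: a single missing opt-in keeps
            # the whole meeting out of any model-training pipeline.
            if all(consent_log.get(person) is True for person in meeting["attendees"]):
                yield meeting["transcript"]

The point of the design is the default: absent an explicit “yes” from everyone in the room, the transcript never enters the training set.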

In Zoom’s defense, the company eventually backtracked and introduced more transparency around its terms. Customer protection now hinges on the distinction between “Customer Content” and “Service Generated Data,” the fact that only account admins can opt into AI features, and, now that the terms are clearer, consent to the data collection. Moreover, when individual meeting attendees log into a meeting, they are warned that AI features have been enabled, so there is at least theoretically an opportunity for them to opt out.

Trade secrets and business-related data aren’t the only (or even the primary) information this sort of application puts at risk. “We also don’t have enough context or information around proper encryption of personally identifiable information (PII) when it comes to transcription,” cautions Kazarian. “In order to build the model, companies need to transcribe every conversation, and now you have PII in these transcriptions. To me, that’s inherently a problem.”

Within the European Union (EU), GDPR regulations mandate that PII be collected only with consent and for explicit purposes, and that users can access and delete it at will. However, many platforms follow these rules in a strictly technical sense rather than in a way that preserves their spirit.

“Just training a model is not that problematic,” says Moutafis. “The real problems come from storing the data without explicit consent for unspecified durations and purposes.” 

Moutafis believes there also needs to be transparency around data storage, and more importantly, data removal: “In many cases, even if you delete your account, you don’t know whether your data has really been deleted. And in some applications, the mechanism for deleting it is difficult to find by design because some companies put a lot of stock in their numbers of users.”
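
Honoring the spirit of those rules in software means recording why data is held and for how long, and treating deletion as a first-class operation. The sketch below is a minimal illustration with assumed field names – not a compliance recipe:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class StoredTranscript:
        owner_id: str
        text: str
        purpose: str            # the explicit purpose consented to, e.g. "live captioning"
        consented_at: datetime
        retention_days: int     # no open-ended storage by default

        def expired(self, now: datetime) -> bool:
            return now > self.consented_at + timedelta(days=self.retention_days)

    def delete_user_data(store, owner_id):
        """Hard-delete everything tied to a user as soon as they ask."""
        return [t for t in store if t.owner_id != owner_id]

If a deletion path like this is hard to find or to execute, that is usually a product decision rather than a technical limitation – exactly the pattern Moutafis describes.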

Reinforcing Social Biases

Having an organization’s information dispersed into the wider public is not necessarily a bad thing, notes Bob Vaez, CEO of EventMobi: “If you’re an association and lobbying for issues that your members care about, feeding an AI algorithm all this content created by your members or at your conferences allows it to become part of the AI’s knowledge milieu and it can then reinforce your messaging through that kind of dissemination.”

But Vaez also cautions that the same mechanism of reinforcing certain kinds of inputs in the public sphere can have negative effects, especially when AI is relied upon too heavily to make decisions about your event.

Artificial intelligence has come under fire for generating outputs or assessments that reinforce societal stereotypes – around race, for example. This has led to the development of some diversity-oriented data sets, but “bias reinforcement is always an issue and requires constant vigilance and intentionality in how systems are designed,” says Moutafis.

However, counteracting biases is a complicated issue that many companies experimenting with AI have yet to master.

“We’re working with people to build some of that [diversity] data into the profiles on our site,” says Michael Dodd, CEO of PlanningHub, which is exploring recommendation engines for partner networks. “At a minimum, we’d like to be able to let clients know how diverse their partner networks are.”

I asked if this would entail collecting data about suppliers’ race, ethnicity, sexual orientation, and so on, but it’s not yet clear how this would work: “It’s not something we’ve solved but something we’re looking at carefully.”

An AI that incorporates race, ethnicity, age and other potentially bias-forming information into its selection criteria can be especially problematic when it comes to suggesting speakers, defining audiences, and networking at events.

“Artificial intelligence can be powerful in bringing people together, but can also create bubbles that undermine diversity, especially if we start giving it decision-making authority about who to invite,” warns Vaez:

“Imagine a scenario where LinkedIn creates an AI feature that lets you plug your most successful attendees or decision-makers into a tool in order to find more attendees like them. If the majority of the decision-makers and people with influence at your last event were old, rich white guys, and this is how you train the AI to find more attendees, then you might end up losing diversity.”

This could have a knock-on effect on the event content. If AI serves a homogeneous audience, it’s not difficult to imagine it also serving content designed to reinforce that audience’s existing perspectives. Vaez describes an echo chamber effect where a serendipitous flow of people and a diversity of thought are sacrificed in the service of marketing efficiencies.

Event professionals as well as tech companies will have to be vigilant as they employ AI to explore new markets and home in on target audiences, and policies for measuring diversity may need to be implemented.
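
What measuring diversity might look like in practice is still an open question, but even a crude check is conceivable: compare the makeup of an AI-recommended list against a threshold and flag lopsided results. The category field and threshold below are placeholders, and collecting such data raises its own consent questions, as Dodd notes:

    from collections import Counter

    def flag_homogeneous_recommendations(recommended, field="segment", threshold=0.7):
        """Warn when one self-reported category dominates an AI-generated list."""
        if not recommended:
            return {"dominant": None, "share": 0.0, "flagged": False}
        values = [person.get(field, "undisclosed") for person in recommended]
        dominant, count = Counter(values).most_common(1)[0]
        share = count / len(values)
        # One category above the threshold suggests the model is echoing the
        # audience it was trained on rather than broadening it.
        return {"dominant": dominant, "share": share, "flagged": share >= threshold}

A flag like this doesn’t fix bias on its own, but it gives organizers a prompt to intervene before invitations go out.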

How Susceptible Is the Event Industry to Harmful AI?

For Moutafis, the mission-driven purpose of the event industry in some sense immunizes it from problematic AI implementations.

“Events are one of the lowest risk industries because they are about bringing people together and facilitating human interactions,” says Moutafis. “The nature of the applications in general are less disposed to ethical issues.” Moreover, Moutafis points out that the industry tends to adopt technology reservedly and with a lot of supervision, which also reduces the potential for things to go wrong.

But for Vaez, marketers and event professionals still need to be conscientious precisely because the authenticity of these experiences is so important and central to the value of attending in person. 

“There’s a fine line between seeking efficiencies and improvements and allowing poorly (or problematically) trained AI to make event design decisions that defeat the purpose of attending,” says Vaez. “People go to events to see real human beings, to connect, and put themselves in a different environment and a different mindset. If the experience becomes too synthetic, it takes away from the event.”

Moreover, the sheer depth of the unknown and the burden of awareness on event professionals have many concerned, notes Adam Parry, editor in chief at Event Industry News and the force behind Event Tech Live. “Trying to ponder on all of the layers of risk is almost like asking a health and safety expert how to protect against any potential accident or eventuality in this event. It’s impossible,” says Parry. While you can rely on precedent (what to expect from this audience or that demographic), the reality is that anything could happen.

“The event industry is at a disadvantage because there’s no one organization as big as any of the major tech companies, and we have the added challenge of some attendees who will be scrutinizing our use of AI from every angle,” says Parry, adding that implementing AI-powered technology may eventually entail the creation of a compliance role (e.g. AI architect) to help organizations establish a framework for managing the risks.

Transparency Is the Key

Transparency emerged as the ethical lowest common denominator in every conversation I had for this series. 

While Moutafis believes the risk for the event industry is low, one problem he expects to become more significant as AI capabilities improve is authenticating “when we are dealing with other humans, and when products and content are generated by humans versus AI.” This may require sophisticated systems, which Moutafis adds may even require AI to monitor AI. (Talk about a conflict of interest.)

Consider booking meetings before the event. “We’re going to see a real human being, to speak with real people,” says Vaez. “Would you really want people to use AI to mass-message each other with personalized messages that are so persuasive that recipients don’t know whether they’re corresponding with a real person or a robot?”

Given recent developments in the licensing of AI clones, it’s even conceivable that in-demand speakers and other event stakeholders could allow an AI facsimile to replace themselves at events. How much of a departure would this be from pre-recorded event content being featured in virtual events as if it were playing in real time?

“The number one thing that needs to happen is to be transparent about the use of AI,” says Vaez. “If you’re going to use AI, you need to label your work. We’re human beings. We’re not computers. There’s no way we can figure out what is real or not. If a session title, slide, or topic was generated by AI – even if the content has been proofread by AI – we need to have labels that indicate where AI was used.”
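
One lightweight way to act on that advice is to attach a provenance note to every piece of event content, stating where AI was involved. The structure below is only a sketch of the idea – not an established standard or any vendor’s feature:

    from dataclasses import dataclass

    @dataclass
    class ContentProvenance:
        item: str                 # e.g. "session title", "slide 12"
        ai_assisted_steps: list   # e.g. ["drafting", "proofreading"]
        human_reviewed: bool = False

        def label(self) -> str:
            if not self.ai_assisted_steps:
                return f"{self.item}: human-created"
            steps = ", ".join(self.ai_assisted_steps)
            review = "human-reviewed" if self.human_reviewed else "not human-reviewed"
            return f"{self.item}: AI used for {steps}; {review}"

For example, ContentProvenance("session title", ["drafting"], human_reviewed=True).label() produces the kind of plain-language label Vaez is calling for.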
