ARTICLE SERIES: Part 4

Regulating AI: Who Will Create Ethical Standards for Events?

By Dylan Monorchio
October 25, 2023

A new class of risks introduced by artificial intelligence has moved the European Union (EU) to draft dedicated legislation, but the process is slow and limited to Europe. Who sets the ethical best practices in the meantime? Does the event industry need its own set of standards?

When data security became more of a priority 15 to 20 years ago, event professionals and their supplier partners had to take stock of a new set of rules and best practices. This involved becoming familiar with a range of regional laws (sometimes deferring to the EU’s General Data Protection Regulation or “GDPR” as the most comprehensive set) and a new language for discussing them.

These were supplemented by independently established certifications, like SOC 2 for data security and PCI DSS for payment security, that gave professionals in all industries an easy way to vet partners for compliance.

The combination of legislation and certification effectively established a set of standards that both limits tech companies from acting unethically and supports event professionals in meeting their duty of care to attendees and other stakeholders.

Artificial intelligence (AI) challenges these standards by allowing companies to collect and use data in new ways, for unfamiliar purposes, and at an unprecedented scale. The existing legislation and certifications were not designed to deal with the new risks it poses. These risks are often poorly understood by the companies rushing to develop new AI applications, let alone the people consenting to the collection of their data – and in many cases, this consent is given through something as simple and seemingly innocuous as accessing a new feature on Zoom.

This has spurred the EU to draft legislation to control the development of AI applications. How well those regulations will serve that purpose, and whether they will be sufficient to define comprehensive ethics around AI, remains to be seen. Regardless, they would only limit the development and use of AI in Europe.

In the absence of legislation, tech companies are left to define the ethics around the new technologies they’re developing with very limited oversight. In the case of AI, we have already seen a number of blunders, and according to the 2022 NewVantage Partners annual survey, only 21.6% of data executives surveyed believe the industry has “done enough to address data and AI ethics issues and standards.”

While intrepid tech companies end up saddled with this ethical responsibility and are likely trying their best, it’s clear from the discussions for this series that many of them are still figuring out what that means in practical terms.

Could event industry associations step up to supply a set of standards? What responsibility do event professionals themselves bear, and how can they act as checks and balances against misguided corporate ethics and opportunism?

This article looks at the standards coming down the line, where they might originate, and how they might impact events.

Current Legislation Protects Your Data… Somewhat

Formal regulations that govern the collection and use of data exist in many regions already. These are primarily concerned with personally identifiable information (PII) that can compromise an individual’s personal privacy. One of the best-known examples is the General Data Protection Regulation (GDPR), which governs the collection and processing of personal data within Europe. Others include the California Consumer Privacy Act (CCPA) in the United States and Canada’s Anti-Spam Legislation (CASL).

The mechanism each of these uses to protect your data is largely consent; to collect data that can be used to identify you, companies need to obtain permission from you first. They also have to be transparent about how and why they want to use it, though they often use broad or all-encompassing language designed to cover use cases they haven’t thought of yet.

The primary issue is that the way companies currently establish consent is broken. The status quo is a vague clause, buried in a document they know you won’t read, attached to the checkbox everyone ticks dismissively just to access a service. The vast majority of service users don’t know what they’re consenting to, and especially in a professional context where the tool is part of a workplace tech stack, the option to refuse consent is merely hypothetical.

According to Coretelligent, another issue with the existing regulations is a lack of consistency, particularly in the United States: “Currently, there are no comprehensive data security or privacy laws at the federal level. As a result, individual states are implementing laws to protect their residents. Unfortunately, this creates a complex maze of overlapping data privacy laws businesses must follow.”

Artificial intelligence presents an additional challenge in that data privacy is not the only kind of concern these new applications pose, and there simply isn’t yet legislation that covers the vast range of unforeseeable impacts.

An Overview of Europe’s AI Regulation Draft

In response, the EU has moved quickly to establish the European Artificial Intelligence Act, which would categorically prohibit AI applications deemed too harmful and establish transparency requirements for AI used in communications and content generation.

The new legislation would regulate specific use cases according to their assessed risk categories:

  • “Unacceptable risks” will be prohibited outright, though exceptions exist (e.g. use cases that benefit law enforcement – which is itself ethically controversial). 
  • “High risk applications” are those that “negatively affect safety or fundamental rights,” and how they are regulated will depend on whether they are already covered in the EU’s product safety legislation or, if not, whether they fit into one of eight high-risk categories that would require them to be registered in an EU database.
  • “Limited risk” applications carry transparency requirements that ensure users understand they are interacting with AI in order to make informed decisions about whether they want to continue using it.

In the case of generative AI, there are three special transparency requirements:

  • Disclosure that the content was generated by AI
  • Product design that inhibits generating illegal content
  • Publication of summaries of copyrighted data used for training

These would theoretically protect event professionals from applications that pose a significant risk of harm to attendees, staffers, or anyone else whose data might be involved, but they would also require that any content conceived or delivered with AI be disclosed to the audience.

The draft regulation has been approved, but it’s not clear when it will be enacted and come into effect; the expectation is to have it in place by the end of 2023.

“Regulation has to play a role in delimiting the capacity for AI to be developed around contentious issues, but we should be wary of too much regulation,” says Panos Moutafis, CEO of Zenus, a company that uses AI to provide sentiment data through facial analysis. “Regulations should evaluate the level of impact according to particular use cases. This is the best way to allow technologies to grow while minimizing the impact on society.”

Moutafis explains that a safe way to strike a balance is to keep a human in the loop, both giving the AI its prompts and reviewing its output, as is the case with content and image generators.

While this reflects the EU’s draft proposal, it contrasts with the cautious view of people like Fanny Hidvegi, a human rights lawyer and the policy and advocacy director at Access Now. Hidvegi points out that unregulated generative AI, which already has human oversight, is negatively impacting people now, and that the technology currently in development has the potential to be even more harmful. 

For Hidvegi, the fact that tech corporations are so intertwined in the discourse informing the legislative process creates a significant conflict of interest. This was evident at the Computers, Privacy and Data Protection conference in Brussels, where the tech companies with the biggest stake in generative AI were also the biggest sponsors. The conversations that shape foundational regulatory principles are often surrounded, if not funded, by corporate interests.

Lax Legislation Elsewhere Complicates Competition

Another reason companies are wary of regulations that would limit their product development is that less stringent legislation in other regions may put them at a disadvantage. Their ability to remain competitive on an international market in some sense depends on these regulations not being significantly more restrictive than the rules their competitors have to abide by.

However, Moutafis doesn’t believe this should restrict legislators. He believes that the EU regulations will catch on elsewhere, and points to the way GDPR inspired more stringent data privacy laws in other regions:

“When it comes to lax markets, leading by example is important. Even for deployments in the US, companies will ask if we are GDPR compliant because they recognize it as a good standard compared to whatever their local regulations are. Different regional versions of GDPR have emerged, so having done that work and setting a precedent is key.”

However, that process was not immediate, and the Covid-19 pandemic revealed that this principle does not always apply: during the pandemic, differences in regulations seemed to have little to do with differences in actual risk, and economic interests often trumped other rationales.

This position is also at odds with the sentiment of AI developers like OpenAI, whose CEO, Sam Altman, has said that he will simply move the company to a less restrictive market if the EU’s regulations become too limiting.

If that happens, the more likely outcome is that consumers in areas with stronger regulations will have access to fewer (or less powerful) AI applications. It would also shift the ethical responsibility in less regulated markets from corporations to consumers, who would have to wrap their heads around the ethical implications in order to make purchasing decisions that align with their values and protect their data. But as we saw last week, that is not a consumer’s strong suit.

All the more reason for major AI developers to push for a basic set of regulations in all markets, says Michael Dodd, CEO of PlanningHub, which is exploring recommendation engines for partner networks. “This would level the playing field so that, moving forward, consumers don’t have to be the ones choosing between ethical providers – that ethics are baked into the system. But as we saw with Web3 and Crypto, the wheels of government are often slow to act.”

In the absence of legislation, companies are able to operate with little oversight. The result is a tenuous balance between pursuing their own interests and maintaining the trust of their customers. In this regard, corporations on the whole don’t have a stellar track record.

Many leading AI developers have made an effort to take this responsibility seriously and have been vocal about the larger risks of the technology they’re developing. DeepMind cofounder Shane Legg leads an internal AI safety group, while fellow cofounder Demis Hassabis was one of many tech leaders to sign an open letter warning about the dangers of AI.

But what exactly ethical conduct looks like when it comes to a new, burgeoning technology is largely subjective. 

“There is currently no single, universal ethics around how to deploy AI—different countries and companies have varying perspectives on ethics, privacy, and data use,” said Rana el Kaliouby, an AI researcher and Deputy CEO of Smart Eye, in an interview with Peter Diamandis. “In many cases, it’s up to individual leaders to ensure ethical deployment.”

This is why Moutafis believes that, while the responsibility lies primarily with companies to ensure their AI is deployed ethically, the primary defense mechanism against harmful use cases and applications is still legislation.

The other side of the balancing act – meeting clients’ ethical standards – is increasingly important as markets everywhere become more familiar with AI and more values-oriented.

Event professionals can use their purchasing power to compel companies to align with their ethical mandates, but this only applies insofar as they are aware of the risks. Their ability to impose their own checks and balances depends on how well they educate themselves.

“There needs to be regulation,” says Dodd, “but that’s going to be so far down the road that the onus really should be first and foremost on AI companies creating the models, and secondarily the consumers to educate themselves in order to opt into systems that are operating more ethically.”

However, the sheer depth of the unknown and the burden of awareness on event professionals have many concerned, notes Adam Parry, Editor in Chief at Event Industry News and the force behind Event Tech Live.

Parry agrees that the onus for establishing ethical value systems to inform best practices in business and tech will be borne by society at large, and is wary of corporations coming up with those value systems in a silo, as their imperatives often conflict with society’s best interests. However, this may present special challenges for the event industry.

“The event industry is at a disadvantage because there’s no one organization as big as any of the major tech companies,” says Parry. “We have the added challenge of some attendees who will be scrutinizing our use of AI from every angle.”

Event organizers have to operate responsibly and transparently to mitigate their risk of liability. They have to show that they’ve taken steps as part of an established process to minimize negative outcomes, but AI-related risks are unfamiliar territory for many. It may not be reasonable for every event professional to become an AI expert as a requirement for procuring modern event tech.

“Trying to ponder on all of the layers of risk is almost like asking a health and safety expert how to protect against any potential accident or eventuality in this event,” says Parry. “While you can rely on precedent (what you can expect from this audience or that demographic), the reality is that anything could happen.”

As a result, Parry believes that some organizations without the internal wherewithal to parse these ethical concerns may simply avoid suppliers whose products include AI elements, and points out that some companies already have policies prohibiting the use of tools like ChatGPT.

Organizers that do move ahead with AI-powered technology may eventually have to add a specialized compliance role (e.g. AI architect) to help establish a framework for managing the risks and ethical concerns.

In the past, associations have developed and structured standards that are practical for average professionals to verify, and the event industry has adopted them. For example, SOC2 was developed by the American Institute of Certified Public Accountants (AICPA), and it became the industry standard for data security. However, where new sets of standards need to be established, event industry associations may have a difficult time collaborating on a set of guidelines for professionals within the industry.

“There isn’t an association now that spans the tech companies and the entire base of stakeholders,” says Parry, “and it’s not clear that this would be at the top of any one association’s member agenda.”

Parry also speculates that there may not be enough expertise within the industry to pull it together, and associations that try to step in could also encounter resistance from suppliers if their guidelines become too restrictive.

However, while Parry is wary of tech companies defining ethics independently, he does see industry standards potentially emerging from really large corporate stakeholders: “[Massive event producers] that have a lot of clout and the ability to take a macroscopic, event-led view of the situation will start setting terms according to their own compliance conditions, and this could eventually lead to a de facto industry standard.”

As an example of this, Parry recalls that Informa spearheaded a pilot called the Better Stands initiative as part of their sustainability commitment. “They decided to shift it from an advantageous branded initiative to an open source, unbranded set of standards being established by a larger working group of industry players. Now, organizations like GES as well as trade associations are coming in to support it as an industry standard.”

Another potential source could be the trade show market, says Parry. Globally, trade shows represent a big enough market to compel the creation of a set of standards that meets their needs, and the independent associations that serve this market also tend to work together.

AI is already influencing events and the capabilities of event professionals in profound ways. Regulations, or at least guidelines, will be crucial to guide the technology and the industry toward the best possible outcomes. As momentum builds, all stakeholders will have a role to play.
