AI Pitfalls and Predicaments: Ethics in the Age of Generative AI

By Dylan Monorchio
September 13, 2023

Artificial intelligence (AI) is moving faster than any technology before it. While the applications in events are a fairly shallow representation of its larger potential, the industry is still learning about these tools and their pitfalls. In part one of this article series, Dylan Monorchio provides an overview of the wider discussion.

Technology tends to leap forward in bursts that propel it beyond the public’s ability to appreciate the risks and pitfalls. The slow wheels of government practically guarantee that legislation will lag as well.

AI is no different. While legislation may govern the collection and use of our data, AI systems have access to enormous quantities of it, and their developers are constantly seeking more from their user bases.

Most recently, this manifested in controversy over Zoom’s sly inclusion of a terms-of-service provision allowing it to use meeting content to train the AI behind Zoom IQ.

Moreover, the same AI systems can be put to wildly different use cases, and the EU is introducing legislation to triage those use cases by risk and prohibit the most dangerous ones.

In the meantime, the onus of defining ethical best practices largely falls on the tech companies developing and pushing AI – and, to a lesser extent, on the customers who create the market for those solutions.

Corporations are taking charge in a number of ways. Still, the conflict of interest has not escaped critics of the narrative that preserving room for “innovation” should limit regulation.

On the other hand, customers’ ethical responsibility comes primarily from their role as a counterbalancing force in profit-motivated development. The general expectation that companies will behave ethically serves as a theoretical check on the corporate pursuit of profit.

In practice, the customers within a business-to-business environment are themselves profit-motivated companies, so the language they use to discuss ethical conduct reflects the impact on their business interests rather than a larger sense of moral right and wrong.

Data security, proprietary data, risk management, and profit-motivated practices all need to be addressed, as do responsibility and liability when it comes to events in particular.

But in a world where a new generation of stakeholders is increasingly value-conscious, we should also embrace the broader philosophical questions:

  • Are we creating a market for tools that may replace us?
  • Will using AI to help us create content lead to better content?
  • What is the difference between operating lawfully and ethically, and can we trust corporations to do both?
  • Where are the ethical standards our industry must follow likely to come from?
  • How much responsibility do event professionals have to educate and protect themselves and their attendees against new technologies?

I spoke with various experts and industry leaders to examine these questions. There were too many interesting insights for one post, so I’m taking this opportunity to introduce an Ethics in AI series.

I’ll be taking a deep dive into the risks and ethical considerations that inform how these technologies are developed and deployed, what guardrails are in place, whether they’re sufficient, and how event professionals can protect themselves.

Here’s a tidbit of what’s to come:

4 AI Pitfalls and Predicaments to Watch Out For

1) Data mischief.

Data protection is all-important when it comes to events that host anywhere from hundreds to thousands of attendees. Artificial intelligence features that promise to manage administrative tasks or personalize the experience seem harmless, but the devil is in the details. As these features are released, their potency will depend on the relevant data they can collect, and tech companies may try to dupe you into relinquishing rights to that data by burying permissions in user agreements they know most people don’t read. At the end of the day, it’s your responsibility to protect your organization’s and attendees’ data.

2) Social biases.

Artificial intelligence can be trained on vast bodies of unstructured data from everywhere. ChatGPT, for example, was trained on the contents of the Internet as of September 2021. As a result, it and tools like it are influenced by patterns in broader society. Unfortunately, this means that widespread social racism and other prejudices are also reflected in its training data. Developers can counteract these biases, but doing so requires constant monitoring and moderation. Event professionals need to be aware of these potential biases as they use generative AI tools to supply content, speaker recommendations, and guidance on event content.

3) Fake news.

Biased information is one challenge; wholly fabricated but plausible information is another. Generative AI developers have yet to figure out how to stop their chat-based tools from supplying made-up facts and sources stitched together from content they have seen on the Internet. These “hallucinations” can embarrass event professionals who use ChatGPT and similar services as a stand-in for their own expertise.

4) Bad content.

Apart from the possibility of AI generating false or socially problematic content, event professionals should be wary of publishing or producing overly derivative material. In saturated content spaces, relying too heavily on AI to produce ideas or key takeaways guarantees more of the same boring, cookie-cutter marketing and events. It’s lazy, and it’s a bad way to keep your brand differentiated.

To learn more about the impact of AI on the event industry, subscribe to El Gazzette for the rest of the articles in the series.
