Meet Skye, a trusted partner for your AI journey. This AI-generated and very well-dressed llama will help you make the most of AI innovations while staying true to your organization’s values. She’s bright-eyed, sure-footed, and wired up to assist you in solving your business challenges efficiently and ethically. This llama is your guide to realizing the potential and power of AI.
You: Llamas’ strength makes them great pack animals. What’s the maximum weight that llamas can carry?
Skye: Llamas can carry up to 30 percent of their body weight. But don’t try to overload us – we’re known to lie down or simply refuse to move if there’s too heavy a load strapped to our backs.
You: So, llamas are stubborn?
Skye: Let’s just say we know our limits. In general, llamas are smart and easy to train, like AI.
You: Those are some of the benefits of AI that our organization is eager to take advantage of, but what are the drawbacks that we should be aware of?
Skye: Like any other technology, AI can be used for good or for ill. Organizations need to ensure they use AI responsibly, weighing social, legal, environmental, and governance considerations.
In the constantly evolving world of technology, AI stands out for its potential to change the way we live and work. And much like innovations that have come before it, such as the internet, AI presents a myriad of ethical considerations. Organizations are already grappling with how to navigate the murky waters of AI and ethics so they can take advantage of the technology’s benefits without risking negative ramifications.
Every organization wants to leverage technological innovation to drive it ahead in the market and gain a leg up on the competition. But as we innovate, it’s important to keep in mind the impact we may have on the people and the world around us. We must take responsibility for understanding and preventing the negative outcomes our drive to innovate with AI could cause, while also asking how our work can contribute to the greater good.
“Something I like talking about is the fact that responsible AI is not a completely separate field of study. I’m not an ethical AI or responsible AI practitioner. Everyone is a responsible AI practitioner,” said Dr. Sasha Luccioni, research scientist and climate lead at Hugging Face, and one of MIT Technology Review’s “35 Innovators Under 35,” during a recent conversation. “Can you imagine if there were ‘cars’ and ‘safe cars’? All cars are supposed to be safe! So, all AI is supposed to be responsible.”
As more organizations adopt AI, however, they’re learning that its somewhat opaque nature makes it difficult to ensure ethical practices are adhered to. Here are three things to keep in mind as you embark on your AI journey:
- Does the AI model you’re using have a bias? Particularly if you’re using AI to perform a task previously done by humans, this question needs to be answered, and a plan should be put in place to rectify any bias you find. The question should also be revisited periodically as the model evolves.
- Is the AI model you’re using leaking information? When you converse with an AI, understand that what you share might leak elsewhere if you don’t take the necessary precautions. For example, if you type your social security number into an AI model while making a request, don’t assume the software will automatically delete it when you’re done – more likely, that information becomes available to others.
- Do you have rights to the data that the AI model you’re using was trained on? Organizations often import models that have already been trained, which opens up a number of issues, including legal ownership and copyright of information. For example, if an AI model reads a book as part of its training, who pays for it?
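On the information-leakage point above, one practical precaution is to scrub sensitive identifiers from text before it ever reaches an external model. The following is a minimal, illustrative sketch in Python – not a complete data-loss-prevention solution – that redacts anything resembling a U.S. social security number from a prompt:

```python
import re

# Matches common SSN formats such as 123-45-6789 or 123 45 6789.
SSN_PATTERN = re.compile(r"\b\d{3}[- ]\d{2}[- ]\d{4}\b")

def redact_ssns(prompt: str) -> str:
    """Replace anything that looks like an SSN with a placeholder
    before the text is sent to an external AI model."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", prompt)

# Example: the real SSN never leaves your environment.
safe = redact_ssns("My SSN is 123-45-6789, please update my record.")
print(safe)  # My SSN is [REDACTED-SSN], please update my record.
```

A real deployment would cover many more identifier types (credit card numbers, health records, names) and would typically sit in a gateway layer between users and the model, but the principle is the same: sanitize first, prompt second.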
“As we move forward, a consensus must be reached on how to deal with such situations,” says Lars Rossen, senior vice president and chief architect at OpenText. For example, the European Union is already working on regulation regarding AI. “At the organizational level, striving for legal and ethical correctness is key.”
Learning from ESG
Rossen likens developing responsible AI practices to the work that organizations started a handful of years ago with Environmental, Social and Governance (ESG) initiatives. If organizations are to fully harness the potential of AI in a responsible manner, he says, it is crucial to view AI through the lens of ESG principles. The responsibility we have to leverage AI ethically mirrors the responsibility we shoulder toward employees, customers, and society at large. In the past, ESG principles have successfully instilled structure where there was none, and the AI agenda warrants the same level of attention.