Principal Components: Abhishek Gupta on Actionable AI Ethics

Technology   |   Susan Currie Sivek   |   Oct 1, 2021

Abhishek Gupta, author of the forthcoming book Actionable AI Ethics, explains how to move AI ethics from abstract discussion into concrete practice.

Abhishek Gupta, founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft, joined us on the Data Science Mixer podcast to talk about why conversations about AI ethics can be difficult to shift into everyday organizational practices. There may be consensus on what matters, but few of the strategies for addressing those concerns have been concretely implemented.

Here are three “principal components” of what Abhishek shared, including practical advice on how to enact AI ethics within organizations and in data scientists’ daily work.

The time has come to make specific plans for implementing ethical AI approaches.

We’ve largely settled, at a high level, on what we should be doing.

When you’re on a business deadline or a project deadline, you don’t necessarily have the time to go out and search for different kinds of literature and see what the state of the art is. Unfortunately, at the moment, this work seems secondary to the primary business objectives. The ethical aspects aren’t necessarily included as a core value offering. That’s one of the things that’s been a problem. But also, there’s this overwhelming amount of information. It’s a little bit like trying to go on a diet. If someone throws 25 different diets at you, you’re confused, right? You don’t know where to start. The fewer and more carefully thought-out choices you provide, the higher the likelihood that you actually go out and do something. I think we’ve largely settled, at a high level, on what we should be doing, and now it’s really a matter of trying it out.

Machine learning security is critical in a “lifecycle” view of AI ethics.

All of these ideas fit as pieces of the puzzle in that lifecycle.

We don’t have enough of an emphasis on acknowledging that machine learning security is sort of the foundational tenet of AI ethics. If, with all good intentions, you’ve applied some bias mitigation techniques at the start of a lifecycle, you hope that the results won’t be as biased as if you had not applied the technique. But adversarial techniques such as data poisoning can trigger outcomes that are still biased even after you apply bias mitigation. So it almost renders that whole effort ineffectual, just because you didn’t think of machine learning security as something that you have to do. And so, again, if you take that lifecycle view, you can now see all of these various pieces. So we’re talking about interpretability, we’re talking about accountability mechanisms, technical or organizational bias mitigation, privacy, transparency — all of these ideas then fit as pieces of the puzzle in that lifecycle, which means that they become mutually reinforcing, comprehensive, and holistic, leaving behind few gaps.
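
To make that failure mode concrete, here is a minimal, self-contained sketch in Python. It is our illustration, not anything from the episode: the dataset is synthetic, the "mitigation" is a crude reweighing-style preprocessing stand-in, and the outcome-gap metric is a simple demographic disparity measure; all of these are assumptions chosen for brevity.

```python
# Minimal sketch (our illustration, nothing from the episode): bias
# mitigation applied up front, then label-flipping data poisoning that
# reintroduces a group disparity anyway. Dataset, mitigation technique,
# and metric are all toy assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # protected attribute, 0 or 1
x = rng.normal(size=(n, 2))                   # identically distributed across groups
y = (x.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)
features = np.column_stack([x, group])        # group is visible to the model

# Crude mitigation stand-in: reweight so each group/label cell contributes
# the same total weight (in the spirit of reweighing-style preprocessing).
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        weights[cell] = 0.25 / cell.mean()

def outcome_gap(model):
    """Absolute difference in predicted positive rates between the groups."""
    pred = model.predict(features)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

clean = LogisticRegression().fit(features, y, sample_weight=weights)

# Data poisoning: flip a slice of group 1's positive labels to negative.
y_poisoned = y.copy()
y_poisoned[np.flatnonzero((group == 1) & (y == 1))[:200]] = 0
poisoned = LogisticRegression().fit(features, y_poisoned, sample_weight=weights)

print(f"outcome gap, mitigated + clean labels:    {outcome_gap(clean):.3f}")
print(f"outcome gap, mitigated + poisoned labels: {outcome_gap(poisoned):.3f}")
```

Because the mitigation weights are computed once at the start of the lifecycle, exactly the situation described above, the flipped labels slip past them: the printed gap is near zero on clean data and clearly nonzero after poisoning.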

Data scientists need to try practical strategies for AI ethics and share their successes — and failures.

Why aren’t we being data-driven about some of these ethical AI practices?

What I would like to see more of is for people to try these ideas out in practice, because a lot of the time when we try something out, we realize it doesn’t work in its current form and we need to do something different or we need to iterate. Perhaps that’s also a mindset that I bring to this, in the sense that when we talk about AI being data-driven, why aren’t we being data-driven about some of these ethical AI practices also? Let’s, as an organization, talk about, “Hey, I tried out this principle set or set of guidelines and you know what? X, Y and Z worked, A, B and C didn’t. Here’s what we tried to do to get A, B and C to work, which led it to become D, E and F.” I would encourage folks to share if they’ve seen case studies which talk about people actually trying these principles out in practice and where things have worked, but more importantly, where things have not worked, because that’s where we’ll get our lessons from. That’s where we’ll get ideas from an operational perspective, in terms of how any of this is going to materialize in practice.
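
One lightweight way to act on that, sketched below in Python, is to record each ethics-practice pilot like any other experiment, with its metric before and after, so that "what worked and what didn't" becomes shareable data. The class name, practices, and numbers here are all hypothetical.

```python
# Minimal sketch (our illustration, not from the episode): treat each
# ethics-practice pilot as an experiment and record its outcome, so results
# can be compared and shared like any other data.
from dataclasses import dataclass

@dataclass
class EthicsPilot:
    practice: str      # e.g. "bias mitigation via reweighting"
    metric: str        # e.g. "group outcome gap"
    before: float
    after: float
    notes: str

pilots = [
    EthicsPilot("reweighting", "group outcome gap", 0.18, 0.06,
                "worked on clean data; retest after poisoning checks"),
    EthicsPilot("guidelines memo only", "group outcome gap", 0.18, 0.17,
                "documentation alone barely moved the metric"),
]

for p in pilots:
    verdict = "worked" if p.after < p.before else "did not work"
    print(f"{p.practice}: {p.before:.2f} -> {p.after:.2f} ({verdict}); {p.notes}")
```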

For more from Abhishek Gupta, check out the links below:

Montreal AI Ethics Institute

Actionable AI Ethics

The AI Ethics Brief

Responsible Use of Technology: The Microsoft Case Study

 

These interview responses have been lightly edited for length and clarity.


The podcast show notes and a full transcript are available on the Alteryx Community.
