
Whitepaper

The Essential Guide to Explainable AI (XAI)

Explainable AI (XAI) is artificial intelligence (AI) that helps anyone with a non-technical or non-data science background understand how machine learning (ML) models arrive at a decision.

For example, if you trained an ML model using a collection of financial data to help you approve or deny a loan applicant, XAI would give you the answer, plus tell you how and why it arrived at the answer it did.

This is different from black box models, which are more common in ML. Some models are more interpretable than others: decision trees produce results that are easy to follow, while neural networks are far more opaque. Because of this, there is often a tradeoff between accuracy and explainability.
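To make that tradeoff concrete, here is a minimal sketch using scikit-learn and synthetic data; the loan features are hypothetical stand-ins, not a real credit dataset. The decision tree’s rules double as an explanation, while the neural network only returns an answer.

```python
# A minimal sketch using scikit-learn; the loan features and data are synthetic
# stand-ins, not a real credit dataset.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Stand-in for a loan dataset: three hypothetical features, approve/deny label.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history"]

# Interpretable model: the learned rules can be printed as plain if/else logic.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Opaque model: the prediction comes from thousands of learned weights with no
# human-readable rules, so a separate explanation layer (XAI) is needed.
nn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print(nn.predict(X[:1]))  # returns an answer, but not a "why"
```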

Additionally, even the people who design and deploy black box models don’t know what information the model used, or how it used the information, to come up with a result.

In the previous example of the loan application, a black box model would only provide a score that you could use to inform your decision to approve or deny the loan.

For businesses and leaders that need to ensure transparency, accuracy, and unbiased decisions, black box results are dangerous. Many organizations resort to separate proxy models just to approximate how their black box models work.

Denying a loan to someone who otherwise had the creditworthiness and financial ability to be approved could lead to claims of discrimination. For any company, black box models and poor decisions made using them could also lead to lost customers through eroded trust or missed opportunities.

Other uses for XAI include:

  • Financial institutions using ML models to approve or deny loans
  • Insurance companies using ML models to set insurance rates
  • Claims adjusters using ML models to calculate payout amounts
  • Educational institutions using ML to accept or reject applicants
  • HR Departments using ML models to filter through applicants
  • Healthcare payers and providers using ML models to explain treatment options and claims decisions

The Importance of AI Explainability

XAI is important for two main reasons: external and internal transparency and trust.

External Transparency and Trust

The importance of AI explainability goes beyond preventing wrong decisions. As a whole, people still don’t trust AI, at least not when it replaces people.

The Harvard Business Review (HBR) ran blind experiments comparing descriptions of items such as coats, perfumes, homes, and cake written by AI with descriptions written by people.

People actually favored the AI-written descriptions, at least when they based their decisions on “utilitarian and functional qualities.”

More importantly, though, HBR “found that people embrace AI’s recommendations as long as AI works in partnership with humans” rather than replacing them.

XAI pairs well with augmented, or assisted, AI modeling and explanations. It helps people from any background explain why certain data is being used and how answers are derived.

Consider an example of someone looking for retirement investing advice. The investor may provide you with their income and goals, then look to you for advice. You may run their information through a machine learning model and provide them with recommendations.

Working in tandem with AI, you could explain why one option was picked over another and what information was used for the final results.
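As an illustration of that tandem explanation, here is a minimal sketch assuming a simple linear model and hypothetical investor features; a production workflow would more likely use SHAP or another attribution method.

```python
# A minimal sketch, assuming a simple linear model and hypothetical investor
# features; real workflows often use SHAP or similar attribution methods instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "years_to_retirement", "risk_tolerance"]
X_train = np.array([[60, 30, 0.8], [120, 10, 0.3], [45, 25, 0.6],
                    [90, 5, 0.2], [75, 20, 0.7], [150, 8, 0.4]])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = growth plan, 0 = conservative plan

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-client explanation: each feature's contribution to the score is its learned
# coefficient times the client's value, which an advisor can read back in plain terms.
client = np.array([80, 22, 0.75])
for name, contribution in zip(feature_names, model.coef_[0] * client):
    print(f"{name}: {contribution:+.2f}")
```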

There are many other uses, too.

That’s the customer-facing part of the equation. The other part is internal.

Internal Transparency and Trust

Even with current ML models, there’s always an opportunity for bias and degradation to seep in.

Data often comes with bias, whether intentional or not. Age, race, gender, health history, financial standing, income, location, and more can all introduce bias. In fact, any data can, and some of these are protected characteristics that cannot be used in models at all. And there’s always some risk that those inputs affect how AI learns.

Throw all of those into a black box, and an ML model could produce results that favor one group of people over another.

On top of this, training AI is never a clean process. And performance in training is never the same as performance in the real world.

A model trained on one data set may produce wonderful results with that data, but once real-world data is entered into the model, it may produce very different results.

Simply put, an analyst or a data scientist could build a black box ML model that performs perfectly and without bias during training, but then produces biased or poor results once deployed.
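One lightweight way to catch that kind of gap is to compare the data the model was trained on with the data it sees in production. The sketch below is illustrative only, using a two-sample test on a single hypothetical feature.

```python
# A minimal sketch of drift checking, assuming you keep a reference sample of the
# training data; the feature and threshold here are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.05):
    """Flag a feature whose live distribution differs from its training distribution."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train_income = rng.normal(70_000, 15_000, size=1_000)  # distribution seen during training
live_income = rng.normal(85_000, 20_000, size=1_000)   # distribution seen after deployment

print("income drifted:", has_drifted(train_income, live_income))
```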

And sometimes the models have to adapt to changes in data or regulations.

From November 18, 2021, to December 14, 2021, a Brookings tracking report showed 43 regulations affecting finance, manufacturing, healthcare, and more that were in effect, in rulemaking, or rescinded.

Ensuring and explaining how black box models have been updated to factor in new or revised regulations would be problematic, if not impossible.

XAI makes it easier for developers to update and improve models, plus measure their effectiveness — all while complying with new regulations. Of course, there’s the added benefit of an easily auditable trail of data.
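For example, an auditable trail can be as simple as logging every prediction together with its inputs and explanation. The sketch below is a generic illustration with hypothetical field names, not a description of any particular platform.

```python
# A minimal sketch of an auditable prediction trail, assuming you already have a
# model and an explanation per prediction; field names are illustrative only.
import json, datetime

def log_prediction(record_id, model_version, inputs, prediction, explanation,
                   path="prediction_audit_log.jsonl"):
    """Append one prediction, its inputs, and its explanation to an audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_id": record_id,
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "explanation": explanation,  # e.g., per-feature contributions
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_prediction("loan-1042", "credit-risk-v3", {"income": 72_000, "debt_ratio": 0.31},
               "approve", {"income": +0.42, "debt_ratio": -0.18})
```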

Interpretable AI vs Explainable AI

While analysts and data scientists build ML models, it’s often those in executive positions and other leadership roles that need to understand the results.

This is the main value of XAI, and one of the biggest differences between it and Interpretable AI (IA).

IA is a subfield of machine learning that focuses on the transparency and interpretability of models; XAI is a subset of IA focused on making ML models more understandable to people who don’t have a data science background.

IA methods guide the development and deployment of XAI through four key questions:

  1. Who might the results need to be explained to?
  2. Why would the results need to be explained and for what reason(s)?
  3. What are the different ways the results could be explained?
  4. What would need to be explained before, during, and after building the model?

In the example of the financial recommendation above, a business might set out to develop an XAI model and use the following questions to guide the work:

Who might the results need to be explained to? 

  • Customer
  • Manager/Executive
  • Regulatory agency

Why would the results need to be explained and for what reason(s)?

  • Help customers understand why the recommended investment options make the most sense for them
  • Help decision makers understand why you recommend certain investment plans over others
  • Maintain compliance and speed up auditing

What are the different ways the results could be explained?

  • Explain recommendations based on financial and/or emotional reasoning
  • Demonstrate the customer-centric and business-based value of recommendations
  • Demonstrate how decisions adhere to all regulations and how the model ensures that

What would need to be explained before, during, and after building the model?

  • Why certain personal information is collected, what it’s used for, and how it affects the recommendations
  • The importance of building the model and what needs to be changed or modified to help justify decisions to customers and executives
  • How the model addresses current regulations and how it can be modified to meet new and upcoming regulations

Explainable AI Examples

Arguably, one of the most common uses for XAI is in a regulatory context. Risk assessments, credit scores, and claims decisions often require deeper questioning.

But XAI can help in multiple industries and departments, even when regulations aren’t involved. The explainability aspect of XAI can augment decision making across many of them, including:

  • Ensuring sentiment analysis is correctly understanding the context and meaning of words used
  • Modifying demand forecasts when new data conflicts with training data and may affect model performance
  • Increasing accuracy of medical diagnosis in healthcare and helping to explain to patients how the diagnosis was made
  • Making real-time decisions about emergency medical treatment and triage
  • Explaining final hiring decisions made by HR departments
  • Providing marketing recommendations to customers and increasing relevancy of messaging and offerings
  • Recommending next actions for sales reps and calculating sales commission
  • Explaining the reasoning behind price optimization decisions
  • Adjusting customer service chatbots based on feedback in line with sentiment analysis

There are many more use cases for XAI, but the bottom line is that XAI helps you make sense of the complex results from machine learning models. Not everyone needs to understand how a machine learning model works, but everyone needs to trust the results. And XAI helps us do that.

 

Choosing the Right Platform for Model Explainability

The right ML platform for you depends on a multitude of factors, but the four main ones are these:

  • Who needs to use it to build and deploy models?
  • Who needs to use it to explain the results?
  • What kind of data do you use?
  • What kind of answers do you need?

Who needs to use it to build and deploy models?

You might not have a data science team. Or even a data scientist.

Nearly 80% of companies don’t have a single data scientist on staff, so the person using your platform is most likely going to be an analyst or knowledge worker.

You’ll need a platform that almost anyone can pick up and use to create models — and not just for your current team, but also future hires.

Cloud-based and on-premises platforms with drag-and-drop (no-code or low-code) capabilities offer the shortest learning curve for data science understanding and implementation.

Platforms with Automated ML (AutoML) further shorten the learning curve by providing guided paths and recommendations for model usage, eliminating the initial need to understand which models and mathematical processes should be used first.

If you have a dedicated team of data scientists, you can absolutely use AutoML and no-code, low-code platforms to speed up production, but your team will probably want to use Python, R, or another language along with it.
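For a sense of what that automation replaces, here is a minimal sketch of the model-selection loop an AutoML layer handles behind the scenes, written in generic scikit-learn rather than any particular vendor's platform.

```python
# A minimal sketch of what an AutoML layer automates: trying several model
# families and keeping the best. Generic scikit-learn code, not a vendor platform.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Score each candidate with cross-validation and pick the best performer.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```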

Who needs to use it to explain the results?

Similar to your analytics team, there is going to be a wide range of experience and knowledge amongst your leadership and executive teams, too.

Your platform will need to cater both to managers who were hired externally or promoted internally and to top executives who might not interact with XAI regularly.

ML models that provide clear data lineage trails, notes for explaining data points, and automated explanations will shorten the learning curve for you and your team.

Additionally, platforms that can schedule results and convert models into shareable analytic apps will help your organization share data and increase transparency.

What kind of data do you use?

Consult with your team. Ask everyone what types of datasets they use and how they incorporate them into analysis now.

One of the largest drawbacks to developing ML models is all the prep work that goes into the early stages.

AutoML platforms that also automate analytic tasks, such as preparing and cleaning data, can considerably speed up the deployment process.

Check to see how a platform handles different datasets and data sources. Can it handle all of them or does data need extra work before it can be handled? How easily can you repeat the process if you’re using it for forecasting and other models dependent on changing data?
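As a small illustration of what repeatable prep looks like, here is a sketch of a cleaning step that can be rerun every time the source data refreshes; the column names and rules are hypothetical.

```python
# A minimal, repeatable data-prep step of the kind an AutoML platform automates;
# column names are hypothetical and the cleaning rules are illustrative only.
import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw extract the same way every time it is refreshed."""
    df = df.drop_duplicates().copy()
    df["income"] = pd.to_numeric(df["income"], errors="coerce")  # coerce bad entries to NaN
    df["income"] = df["income"].fillna(df["income"].median())    # simple imputation
    df["region"] = df["region"].str.strip().str.lower()          # normalize categories
    return df

raw = pd.DataFrame({"income": ["72000", "not provided", "58000"],
                    "region": [" West ", "east", "West"]})
print(prepare(raw))
```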

What kind of answers do you need?

Some people can wait a week or a month or even a quarter for a report. Others need up-to-the-minute results.

The faster you need insights, the more you’ll need a platform that can quickly adapt to and incorporate new data. That includes the previously mentioned processes, such as preparing and cleaning data.

All XAI models should help you answer multiple questions for your customers, your colleagues, and yourself. As we mentioned before, the shorter the learning curve for you, the faster you can provide answers for others.

Look for an ML platform that you feel comfortable using, too.

 
