What’s the Difference Between Black Box and Explainable AI (XAI) Models?

Published by Nico Lassaux on April 29, 2024

More and more professionals rely on artificial intelligence to streamline their day-to-day work. These models are increasingly entrusted with critical decision-making processes, such as loan approvals, medical diagnoses, and even criminal sentencing.

However, the lack of transparency in these models can lead to biased outcomes, lack of accountability, and even security risks. This is where the difference between black box and explainable AI (XAI) models comes into play.

In this article, I’ll provide a high-level overview of the differences between the two.

What is a Black Box AI Model?

Black box AI models are systems whose internal workings are hidden, making it effectively impossible to understand how they arrive at their outputs. We put data in, we get results out, but the process in between remains a mystery.

Why the Mystery?

The complexity of certain AI models, especially deep neural networks, makes them difficult to interpret by design. Imagine a long succession of calculations: tracing the logic behind any single output becomes nearly impossible. It is also difficult to build a model that is both accurate on complex data and interpretable.
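
To make that concrete, here is a minimal sketch in Python (a toy NumPy network, not any production model) of why a deep model resists inspection: the prediction is nothing more than a stack of matrix multiplications and nonlinearities, and no individual weight maps to a human-readable reason.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-layer network: each layer is just a matrix multiply plus a nonlinearity.
# Real models stack many more layers with millions of weights.
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16)), rng.normal(size=(16, 1))]

def predict(x):
    for W in weights[:-1]:
        x = np.maximum(x @ W, 0.0)   # ReLU activation
    return x @ weights[-1]           # final score

applicant = rng.normal(size=(1, 8))  # e.g. 8 numeric features describing a loan applicant
print(predict(applicant))            # a single number, with no human-readable trail of reasoning
```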

Why is This a Concern?

This lack of transparency raises several concerns:

- Bias and Fairness: Hidden biases in the training data can lead to discriminatory outcomes.

- Accountability and Trust: Without understanding the decision-making process, it's hard to trust the model's results or hold anyone accountable for potential errors.

- Safety and Security: Opaque models are vulnerable to manipulation, leading to potentially harmful consequences.

Black Box vs. Explainable AI (XAI)

The concerns surrounding black box models have fueled the movement for XAI, which aims to develop techniques for making AI models more transparent and accountable.

In most cases, these methods keep the model's complexity and accuracy intact but add a layer of interpretability on top. For image classification, for example, XAI might highlight the regions of the image that most influenced the model's decision. For numerical predictions, it can show which features drive the output up or down the most.
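
As an illustration of the second case, here is a minimal sketch using scikit-learn and the open-source shap library (the rent data and feature names are invented for the example, not HelloData's actual model): the model itself stays a gradient-boosted black box, while the explanation layer reports how much each feature pushed a given prediction up or down.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy rent-prediction data: the model itself remains a black box.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "square_feet": rng.uniform(400, 1500, 500),
    "bedrooms": rng.integers(0, 4, 500),
    "year_built": rng.integers(1950, 2023, 500),
})
y = 1.2 * X["square_feet"] + 150 * X["bedrooms"] + rng.normal(0, 100, 500)

model = GradientBoostingRegressor().fit(X, y)

# The interpretability layer sits on top of the trained model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# Signed contribution of each feature to this one prediction (positive pushes the rent up).
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.1f}")
```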

Humans as Black Boxes

Interestingly, even humans operate as black boxes to a certain extent. We can explain our thought process at a high level, but the deeper mechanics of our decision-making remain hidden.

Gen AI to the Rescue?

Generative AI offers a promising path toward making AI models more explainable. By training models on both text generation and other tasks (such as image recognition), we can enable them to articulate their reasoning, much as humans do.
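
As a hypothetical sketch of what that can look like in practice (this assumes the OpenAI Python client; the model name, prompt, and image URL are placeholders, not a description of any specific XAI system), a multimodal model can be asked to return both a judgment and a plain-language rationale in a single call:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Ask a multimodal model for both a prediction and the reasoning behind it.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Rate the condition of this apartment unit (poor/fair/good) "
                     "and list the visual details that drove your rating."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/unit-photo.jpg"}},  # placeholder image
        ],
    }],
)
print(response.choices[0].message.content)  # a label plus a natural-language rationale
```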

Example in Real Estate

The real estate industry has long relied on black box price recommendation models for multifamily properties. Recent lawsuits allege that some of these models were designed to keep prices artificially high by pooling competitors' private data.

Where a human would have had to explain their reasoning for each estimate, these models have no such obligation, so issues like this can go unnoticed for years. At HelloData, we have worked on price recommendations for years, and designing these models to be transparent is challenging precisely because we are accountable for the results from day one. But we believe the benefits of transparency far outweigh the costs.

Conclusion

Training black box algorithms is the easy path for many companies. While that may be perfectly acceptable for some non-critical applications, other operations should not rely on such models.

End users are questioning methodologies more than ever. We believe companies should learn to develop and use explainable AI models to ensure transparency, accountability, and fairness in their decision-making processes. This will not only help build trust with customers but also ensure that the models are safe and secure.

Our mission is to provide meaningful, explainable rent recommendations that can be edited based on your own market knowledge, built on top of 100% public, daily-refreshed data!

Property managers, investors, brokers, and appraisers all use HelloData to analyze multifamily comps, optimize rents, and increase deal flow.

Nico Lassaux

Data Scientist Nicolas Lassaux, with expertise in real estate analytics, was pivotal at Enodo and Walker & Dunlop. Co-founder of HelloData, he's elevating real estate decisions through innovative data use. Passionate about running, cycling, and music.