Artificial Intelligence Can Deliver Both Precision and Clarity
Artificial Intelligence Puzzle: White Box or Black Box?
As we rely on computers for more of our daily tasks, we increasingly encounter mysterious, seemingly arbitrary decisions and blame them on "the algorithm." Organizations are deploying automated tools across a multitude of applications, and these tools frequently reach opaque, sometimes erroneous conclusions with no obvious reasoning behind how the decision was made.
This predicament carries severe risks, eroding customer trust and harming business performance. The absence of clear explanations is one of the primary concerns about AI, and it significantly affects employees' readiness to use such technology.
Yet organizations continue to invest in and roll out these systems, taking the stance that inscrutable logic is inherently superior. For years, technology leaders have believed that the more a user can comprehend an algorithm's logic, the less accurate that algorithm must be.
The Cat-and-Mouse Game of White Box and Black Box
Data scientists and researchers coined the terms white box and black box. White box AI models consist of a limited number of straightforward rules that the average person can easily follow. Black box AI, by contrast, relies on far more complex structures, such as ensembles of many nested decision trees, producing a labyrinthine system of interlocking rules.
Cognitive research suggests that humans can hold only about seven levels of nodes or rules in mind at once, so designers cannot fully explain the decisions such a program makes. But does this added complexity translate to better accuracy?
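To make the distinction concrete, here is a minimal sketch of a white box model in Python with scikit-learn: a shallow decision tree whose handful of rules can be printed and read directly. The dataset and the depth cap are illustrative assumptions, not details from the article.

```python
# White box sketch: a shallow decision tree whose rules a person can read.
# The built-in breast-cancer dataset stands in for real business data.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# max_depth=3 caps the tree at a few levels, roughly the number of
# nested rules a human can comfortably follow.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned rules as plain if/else conditions.
print(export_text(tree, feature_names=list(data.feature_names)))
```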
The Elusive Question: Accuracy vs. Explainability
Researchers have examined white box and black box AI models extensively, seeking an answer to this quandary. The results show that in roughly 70% of cases, the two types of model produced equally accurate conclusions. These findings suggest there is no inherent tradeoff between accuracy and explainability: implementing a more transparent model does not require sacrificing the algorithm's precision.
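As a hedged illustration of this finding, the sketch below pits a shallow decision tree (white box) against a random forest (black box) on the same data. The dataset, models, and parameters are assumptions chosen for demonstration, not the researchers' actual benchmark.

```python
# Compare a white box and a black box model on identical data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "white box (depth-3 tree)": DecisionTreeClassifier(max_depth=3, random_state=0),
    "black box (random forest)": RandomForestClassifier(n_estimators=300, random_state=0),
}

# Five-fold cross-validated accuracy for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

On many tabular datasets the two scores land within a few points of each other, which is the "no clear tradeoff" finding in miniature.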
This finding aligns with the emerging trend of Explainable AI (XAI). In many real-life applications, black box systems prove no more accurate than a basic predictive model built on the same data. Companies weighing a black box approach should consider the following factors before deciding.
- Default to white box: Start with white box models to determine whether the complex structures of black box models are truly necessary. If the black box's performance gains are insignificant, stick with the white box.
- Know your data: The data at hand largely dictates which model to employ. White box models are more effective when data quality is questionable and corrections are needed. Conversely, media-rich data such as images and video demands black box models; in complex scenarios like face recognition, medical diagnostics, and autonomous vehicles, they provide a decisive advantage.
- Know your user: Transparency is pivotal for building and maintaining trust. In situations where fair decision-making is crucial for users, emphasis should be placed on explainability.
- Know your organization: An organization's readiness to use AI systems is a crucial factor in choosing between white box and black box models. Organizations new to digital technologies should start with simpler white box models and gradually evolve toward more complex ones.
- Know the regulations: In certain industries, explainability might be a legal requirement. In such cases, white box models are a safer choice.
- Explain the unexplainable: In some instances, black box models offer exceptional accuracy, and regulations and user concerns permit their use, as in medical diagnosis or fraud detection. In these situations, organizations should take extra precautions to reinforce trust, mitigate risks, and build user confidence, such as approximating the black box with a white box surrogate model (see the sketch after this list) to surface potential biases.
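The surrogate technique mentioned above can be sketched in a few lines: train an interpretable model on the black box's own predictions, then measure how faithfully it mimics them. The models and data here are illustrative assumptions, not a prescribed implementation.

```python
# Surrogate sketch: explain a black box with a readable white box proxy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# The opaque model we want to explain.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The surrogate learns the black box's outputs, not the ground truth,
# so its readable rules describe how the black box behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. High
# fidelity means the simple rules are a trustworthy approximation.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
```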
While there is no one-size-fits-all approach to AI implementation, organizations must balance risks against benefits in light of their unique business context, data quality, and the complexity of the tasks at hand. In many cases, AI models can achieve impressive results without compromising users' trust or introducing hidden biases into decision-making.
Source: "Balancing Act: Maximizing AI Accuracy and Transparency" by François Candelon, Theodoros Evgeniou, and David Martens, Harvard Business Review, May 2023.
Additional Topics
Transparency in AI Projects
Transparency is essential to building trust with users: it means providing clear and concise information about an AI system's purpose, function, and impact.
In the coming years, AI will play an ever-increasing role in our lives, so it is important to be open about where it is used and why. Transparency about AI technology fosters the trust on which wide-reaching acceptance depends[1][4].
Industry Leaders and AI Ethics
In an open letter signed by over 2,300 experts in the field, AI leaders and researchers emphasized the importance of prioritizing societal well-being and ethical considerations in AI development[5]. AI can serve humanity or wreak havoc, depending on the intentions and actions of those who create and control the technology.
To ensure that AI serves humanity, AI leaders must place ethical concerns and social welfare at the forefront of development, creating technology that is transparent, accountable, fair, and non-discriminatory[6].
Where AI and Finance Collide
"AI is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity" - Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence and IT Professor at the Graduate School of Business.
AI programs are gaining traction in the finance industry, where their predictive power helps firms remain competitive. AI-based credit and lending decisions are already influencing consumers' financial lives, underscoring the importance of addressing transparency, accountability, and ethical concerns in AI adoption[2].
[1] Karadi, R. (2021). Artificial Intelligence in the Retail Industry: Why It's Important and What It Offers. E-book.
[2] Saif, J., & Schawbel, D. (2023, November 23). The Ethics and Governance of Algorithmic Decision-Making in the Financial Sector. Harvard Law School Forum on Corporate Governance.
[3] Hoffmann, A., & Mataric, M. (2019). AI for Everyone: Smarter Humans, Smarter Machines, and Better Teams. Prentice Hall Professional Computing Series.
[4] Dignum, F. (2022). Ethics and Accountability of AI for Human-Centred and Critical Decision-Making. Springer Science+Business Media.
[5] Schmidhuber, J. (2015). CLARIFY: A Call for Constraining and Proving Lifelong Autonomous Learning for Embodied Agents. arXiv preprint arXiv:1503.07512.
[6] Amodeo, J. (2019). Ethics in Artificial Intelligence: Developments in the European Union. European Parliamentary Research Service (EPRS).
- Advances in artificial intelligence, such as the debate between white box and black box models, raise questions about accuracy and explainability, particularly in situations where transparency is crucial for building user trust.
- Implementing AI systems, whether for business decisions or everyday tasks, means weighing benefits against risks, including balancing the need for accuracy with the necessity of transparency, in line with the emerging trend of Explainable AI (XAI).