Does Machine Learning Struggle with Explainability?
It is quite common to hear the phrase that "AI/ML models are black boxes". In this article, let's examine how true this is and whether the state of affairs can be improved.

Why seek explainability in ML models?

You might be tempted to ask: as long as ML/AI models work, does it really matter that they are difficult to explain? Let's start by answering this particular question. Why do we seek explainability in machine learning?

Satisfying natural human curiosity - These can be answers to questions such as: Why do ML models work where traditional methods fail? Why do classical ML algorithms like random forests or SVMs show superior performance over deep neural networks in certain areas? All such questions stem from our natural curiosity to understand things.

Adding to scientific knowledge - If something works but one is not able to explain why it has worked, then one might be adding nothing to the existing scientific knowledge. However, if the model works ...
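As a concrete illustration of this interpretability gap, here is a minimal sketch (using scikit-learn; the dataset and parameter choices are illustrative, not from the article): a random forest exposes per-feature importances directly, giving at least a coarse answer to "which inputs drove the predictions?", while a deep neural network offers no comparably direct summary.

```python
# Minimal sketch (assumes scikit-learn is installed): a random forest's
# built-in feature_importances_ gives a coarse global explanation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances are non-negative and sum to 1 across features.
for name, importance in zip(feature_names, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")
```

This is, of course, only one narrow form of explainability; it says which features mattered globally, not why a specific prediction was made.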