Explainable AI (XAI)

Explainable AI collectively refers to techniques and methods that help explain a given AI model's decision-making process. This relatively new branch of AI has shown enormous potential, with newer and more sophisticated techniques appearing every year. Some of the best-known XAI techniques include SHAP (SHapley Additive exPlanations), DeepSHAP, DeepLIFT, CXplain, and LIME. This article covers LIME in detail.

Explainable AI (XAI) Using LIME

The vast field of Artificial Intelligence (AI) has experienced enormous growth in recent years. With newer and more complex models appearing every year, AI models have started to surpass human intellect at a pace that no one could have predicted. But as results become more accurate and precise, it becomes harder to explain the reasoning behind the complex mathematical decisions these models make. This mathematical abstraction also doesn't help users maintain their trust in a particular model's decisions.

For example, say a deep learning model takes in an image and predicts with 70% confidence that a patient has lung cancer. Even if the model has given the correct diagnosis, a doctor can't confidently advise the patient, because the doctor doesn't know the reasoning behind the model's diagnosis.

Here's where Explainable AI (XAI) comes in.

LIME (Local Interpretable Model-agnostic Explanations)

The beauty of LIME is its accessibility and simplicity. The core idea behind LIME, though exhaustive, is really intuitive and simple! Let's dive in and see what the name itself represents:
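As the name suggests, LIME explains one prediction at a time (Local) by fitting a simple surrogate model around it (Interpretable), and it only needs access to the black-box model's prediction function (Model-agnostic). As a concrete illustration, here is a minimal sketch using the lime Python package on tabular data; the dataset, classifier, and parameter values below are illustrative choices, not ones prescribed by this article.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box classifier; LIME only needs its predict_proba function.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# The explainer is built from the training data so it knows how to perturb features.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: LIME perturbs this instance, queries the model on
# the perturbed samples, and fits a simple weighted surrogate model locally.
explanation = explainer.explain_instance(
    data_row=data.data[0],
    predict_fn=model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # top features and their local contributions

The output is a short list of (feature, weight) pairs that approximate the model's behaviour around this one instance, which is exactly the kind of per-prediction reasoning the lung-cancer example above is missing.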