This video explains why we need interpretability for our AI models and covers the two approaches typically used: global and local interpretability. While global interpretability can describe how a model behaves across the dataset as a whole, it cannot explain why the prediction for a specific data point came out the way it did. Local interpretability methods such as LIME tackle exactly this problem; a minimal LIME sketch is included below. Watch the video to get deeper insights!
Global vs Local Interpretability
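To make the global vs. local contrast concrete, here is a minimal sketch using the lime package together with scikit-learn. It assumes a RandomForestClassifier trained on the iris dataset as a stand-in for any black-box model; the dataset, model, and parameter choices are illustrative assumptions, not the example used in the video. The global view comes from the model's averaged feature importances, while the local view uses LIME's LimeTabularExplainer to explain one specific prediction.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative setup: a black-box model on a small tabular dataset.
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global interpretability (rough): feature importances averaged over the whole dataset.
print("Global feature importances:")
for name, importance in zip(data.feature_names, model.feature_importances_):
    print(f"  {name}: {importance:.3f}")

# Local interpretability: explain why the model predicted what it did for ONE data point.
explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
instance = X[0]  # the specific data point we want explained
explanation = explainer.explain_instance(instance, model.predict_proba, num_features=4)

print("Local explanation for one data point:")
for feature_rule, weight in explanation.as_list():
    print(f"  {feature_rule}: {weight:+.3f}")
```

The global importances tell you which features matter on average, but only the LIME output shows which feature values pushed this particular prediction up or down, which is the gap local interpretability is meant to fill.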