Global vs Local Interpretability

This video explains why we need interpretability for our AI models and the two approaches typically used, namely global and local interpretability. While global interpretability can give global insights about the data, it cannot explain why the prediction for a specific data point is the way it is. Local interpretability models such as LIME…
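As a quick illustration of local interpretability (my own sketch, not code from the video), here is a minimal example using the lime package to explain a single prediction of a scikit-learn classifier; the dataset and model choices are assumptions made for the example.

```python
# Minimal sketch: explaining one prediction with LIME
# (assumes the `lime` and scikit-learn packages are installed)
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain why the model predicted what it did for one specific data point
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local feature contributions for this point
```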

MAP at K : An Evaluation Metric for Ranking

This video talks about the Mean Average Precision at K (popularly called MAP@K) metric, which is commonly used for evaluating recommender systems and other ranking-related problems. For a more detailed post on measuring how well recommender systems are doing, check out this post on evaluation metrics for recommender systems.
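To make the metric concrete, here is a small self-contained sketch of AP@K and MAP@K; the function names and example data are mine, not from the video.

```python
def average_precision_at_k(recommended, relevant, k):
    """AP@K: average of precision@i over the positions i <= k that hold a relevant item."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision at this cut-off position
    return score / min(len(relevant), k)

def map_at_k(all_recommended, all_relevant, k):
    """MAP@K: mean of AP@K over all users (or queries)."""
    return sum(
        average_precision_at_k(rec, rel, k)
        for rec, rel in zip(all_recommended, all_relevant)
    ) / len(all_recommended)

# Example: two users, top-3 recommendations each
print(map_at_k([["a", "b", "c"], ["x", "y", "z"]],
               [{"a", "c"}, {"z"}], k=3))  # ~0.58
```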

Fairness in ML: How to deal with bias in ML pipelines?

In this 30-minute video, we talk about bias and fairness in ML workflows: why we need to handle fairness in ML models, how biases typically creep into the ML pipeline, how to measure these biases, how to rectify the biases in our pipeline, and a use case with word embeddings. Click here to get the latest…

Naive Bayes Classifier : Advantages and Disadvantages

Recap: Naive Bayes Classifier. The Naive Bayes Classifier is a popular model for classification based on Bayes' rule. The classifier is called naive because it makes the simplifying assumption that the features are conditionally independent given the class label. In other words, the naive assumption is: P(datapoint | class) = P(feature_1 | class) *…
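As a rough illustration of the naive assumption (my own toy sketch with made-up numbers, not code from the video), the class-conditional likelihood of a data point factorizes into a product of per-feature conditionals:

```python
# Toy sketch of the naive assumption on hypothetical spam/ham counts.
# P(x | class) is approximated as the product of per-feature conditionals.

# Per-feature conditional probabilities estimated from (illustrative) training counts
p_feature_given_spam = {"contains_offer": 0.7, "has_link": 0.8}
p_feature_given_ham = {"contains_offer": 0.1, "has_link": 0.3}
p_spam, p_ham = 0.4, 0.6  # class priors

def score(class_prior, p_feature_given_class, features):
    """Unnormalized P(class | x), proportional to P(class) * product of P(feature_i | class)."""
    s = class_prior
    for f in features:
        s *= p_feature_given_class[f]
    return s

x = ["contains_offer", "has_link"]  # features present in the email
spam_score = score(p_spam, p_feature_given_spam, x)  # 0.4 * 0.7 * 0.8 = 0.224
ham_score = score(p_ham, p_feature_given_ham, x)     # 0.6 * 0.1 * 0.3 = 0.018
print("predicted:", "spam" if spam_score > ham_score else "ham")
```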

Evaluation Metrics for Recommendation Systems

This video explores how one can evaluate recommender systems. Evaluating a recommender system involves checking (1) whether the right results are being recommended and (2) whether more relevant results are being recommended at the top compared to less relevant results. There are two popular types of recommender systems: explicit feedback recommender systems and implicit feedback recommender…
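As a small concrete example of criterion (1) above (a sketch of my own, not from the video), precision@k measures how many of the top-k recommendations are actually relevant; rank-aware metrics such as MAP@K or NDCG then cover criterion (2).

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are actually relevant."""
    top_k = recommended[:k]
    return sum(item in relevant for item in top_k) / k

# Example: 2 of the top 3 recommendations are relevant -> ~0.67
print(precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3))
```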

Target Encoding for Categorical Features

This video describes target encoding for categorical features, which is more efficient and, in several use cases, more effective than the popular one-hot encoding. Recap: Categorical Features and One-hot Encoding. Categorical features are variables that take one of a discrete set of values. For instance: color, which could take one of {red, blue, green}, or city, which can take…
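For illustration (a minimal pandas sketch of my own with made-up data, not code from the video), target encoding replaces each category with a statistic of the target computed on the training data, for example the per-category mean:

```python
import pandas as pd

# Toy training data (made up for illustration)
train = pd.DataFrame({
    "city": ["NY", "SF", "NY", "LA", "SF", "NY"],
    "target": [1, 0, 1, 0, 1, 0],
})

# Mean of the target per category, computed on the training split only
encoding = train.groupby("city")["target"].mean()

# Replace the categorical column with its encoded value
train["city_encoded"] = train["city"].map(encoding)
print(train)

# In practice you would smooth these means and/or use out-of-fold estimates
# to avoid leaking the target into the encoding.
```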

Bias in Machine Learning : Types of Data Biases

Bias in machine learning models can often lead to unexpected outcomes. In this brief video, we look at different ways we might end up building biased ML models, with particular emphasis on societal biases such as those related to gender, race, and age. Why do we care about societal bias in ML models? Consider an ML model…

What is AUC : Area Under the Curve?

What is AUC? AUC is the area under the ROC curve. It is a popularly used classification metric. Classifiers such as logistic regression and naive Bayes predict class probabilities as the outcome instead of predicting the labels themselves. A new data point is classified as positive if the predicted probability of the positive class…
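As a quick concrete example (not part of the original video; the synthetic dataset and model are assumptions for the sketch), scikit-learn's roc_auc_score computes AUC directly from the predicted positive-class probabilities:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification data for illustration
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]  # predicted probability of the positive class

print("AUC:", roc_auc_score(y_test, probs))  # area under the ROC curve
```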