This video shows the process of feature selection with Decision Trees and Random Forests. Why do we need Feature Selection? Often we end up with large datasets with redundant features that need to be cleaned up before making sense of the data. Check out this related article on Recursive Feature Elimination that describes the challenges…
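As a rough sketch of the idea (not the video's code), a random forest's feature importances can be used to rank features and drop the weak ones; the synthetic dataset and the mean-importance threshold below are assumptions made purely for the demo:

```python
# A minimal sketch: rank features with a random forest and keep the informative ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 10 features, only 3 of which carry real signal (assumed for the demo).
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=4, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = forest.feature_importances_
print("Importances:", np.round(importances, 3))

# Keep only the features whose importance is above the average importance.
keep = importances > importances.mean()
X_reduced = X[:, keep]
print("Kept", X_reduced.shape[1], "of", X.shape[1], "features")
```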
Recursive Feature Elimination for Feature Selection
This video explains the technique of Recursive Feature Elimination for feature selection when we have data with lots of features. Why do we need Feature Elimination? Often we end up with large datasets with redundant features that need to be cleaned up before making sense of the data. Some of the challenges with redundant features…
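For a flavour of how this looks in code (an assumed setup, not necessarily the one used in the video), scikit-learn's RFE wrapper repeatedly refits an estimator and discards the weakest features until the requested number remain:

```python
# A minimal sketch of Recursive Feature Elimination with a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

# RFE repeatedly fits the estimator and drops the weakest features
# (by importance) until only n_features_to_select remain.
rfe = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=5, step=2)
rfe.fit(X, y)

print("Selected feature indices:", [i for i, kept in enumerate(rfe.support_) if kept])
print("Feature ranking (1 = selected):", rfe.ranking_)
```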
Berkson’s Paradox
This video explains Berkson’s Paradox. Berkson’s Paradox typically arises from selection bias when we create our dataset, which can lead to unintended inferences from our data. Summary of contents: Berkson’s Paradox illustrated with a burger-and-fries example; Berkson’s Paradox in a dating scenario; a mathematical explanation of Berkson’s Paradox; a Berkson’s Paradox example in understanding correlation…
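A tiny simulation, sketched here with made-up numbers rather than taken from the video, shows how selection alone can make two independent traits look negatively correlated:

```python
# Berkson's Paradox in miniature: two independent traits become negatively
# correlated once we only look at a selected subset (e.g. people who are
# high on at least one trait).
import numpy as np

rng = np.random.default_rng(0)
looks = rng.normal(size=100_000)   # trait 1, independent of trait 2
humor = rng.normal(size=100_000)   # trait 2

# Selection: we only consider people who are high on at least one trait.
selected = (looks > 1.0) | (humor > 1.0)

print("Correlation in the full population:",
      round(np.corrcoef(looks, humor)[0, 1], 3))                      # close to 0
print("Correlation among the selected subset:",
      round(np.corrcoef(looks[selected], humor[selected])[0, 1], 3))  # negative
```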
Bayesian Neural Networks
Bayesian Neural Networks enable capturing uncertainty in the parameters of a neural network. This video contains: a brief recap of feedforward neural networks; the motivation behind a Bayesian Neural Network; what a Bayesian Neural Network is; inference in a Bayesian Neural Network; pros and cons of using a Bayesian Neural Network; references to code samples to…
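As a toy illustration of the prediction step only (not the video's implementation, and with an assumed Gaussian "posterior" rather than one obtained by real inference), a Bayesian neural network averages predictions over sampled weights, which exposes predictive uncertainty:

```python
# Toy sketch: keep a distribution over weights instead of a single weight
# vector, and average predictions over samples from it.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

x = np.array([[0.5, -1.0]])        # a single 2-feature input
preds = []
for _ in range(200):
    # Sample a full set of weights from the assumed posterior (mean 0, std 0.5).
    w1 = rng.normal(0.0, 0.5, size=(2, 8))
    b1 = rng.normal(0.0, 0.5, size=8)
    w2 = rng.normal(0.0, 0.5, size=(8, 1))
    b2 = rng.normal(0.0, 0.5, size=1)
    preds.append(forward(x, w1, b1, w2, b2).item())

print("Predictive mean:", round(np.mean(preds), 3))
print("Predictive std (uncertainty):", round(np.std(preds), 3))
```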
What is Bayesian Logistic Regression?
Bayesian Logistic Regression: In this video, we try to understand the motivation behind Bayesian Logistic Regression and how it can be implemented. Recap of Logistic Regression: Logistic Regression is one of the most popular ML models used for classification. It is a generalized linear model where the probability of success can be expressed as a…
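As a rough sketch of the Bayesian idea (a toy one-weight model on made-up data, not the video's implementation), we can place a prior on the weight and approximate the posterior on a grid instead of reporting a single point estimate:

```python
# Bayesian logistic regression in miniature: prior + likelihood -> posterior over w.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data generated with a "true" weight of 2.0 (an assumption for the demo).
x = rng.normal(size=50)
y = rng.random(50) < 1.0 / (1.0 + np.exp(-2.0 * x))

def log_posterior(w):
    logits = w * x
    # Bernoulli log-likelihood for a logistic model plus a standard normal prior on w.
    log_lik = np.sum(y * -np.log1p(np.exp(-logits)) +
                     (1 - y) * -np.log1p(np.exp(logits)))
    log_prior = -0.5 * w**2
    return log_lik + log_prior

grid = np.linspace(-1.0, 5.0, 601)
log_post = np.array([log_posterior(w) for w in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])   # normalize to a density

mean = np.sum(grid * post) * (grid[1] - grid[0])
std = np.sqrt(np.sum((grid - mean) ** 2 * post) * (grid[1] - grid[0]))
print(f"Posterior for w: {mean:.2f} +/- {std:.2f}")
```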
Do we need to learn Linear Algebra for Machine Learning?
A lot of what we do in the ML pipeline involves vectors and matrices. Linear Algebra helps us understand how these vectors interact with each other and how to perform vector and matrix operations. This video talks about whether we need Linear Algebra for Machine Learning.
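A few NumPy one-liners (with arbitrary example values) illustrate the kind of vector and matrix operations the video refers to:

```python
# Everyday linear algebra in the ML pipeline, using NumPy.
import numpy as np

x = np.array([1.0, 2.0, 3.0])            # a feature vector
W = np.array([[0.5, -1.0, 2.0],
              [1.5,  0.0, 0.3]])          # a 2x3 weight matrix (e.g. a linear layer)

print("Dot product x . x:", x @ x)               # scalar
print("Matrix-vector product W x:", W @ x)       # 2-vector
print("Norm of x:", np.linalg.norm(x))           # length of the vector
print("W transposed has shape:", W.T.shape)      # (3, 2)
```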
What is Stacking? Ensembling Multiple Dissimilar Models
Many of us have heard of bagging and boosting, two commonly used ensemble learning techniques. This video describes ways to combine multiple dissimilar ML models through voting, averaging, and stacking to improve predictive performance.
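As a minimal sketch on made-up data (not the video's code), scikit-learn provides both voting and stacking ensembles of dissimilar base models:

```python
# Combining dissimilar models: soft voting vs. stacking with a meta-model.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

base_models = [("lr", LogisticRegression(max_iter=1000)),
               ("tree", DecisionTreeClassifier(random_state=0)),
               ("svm", SVC(probability=True, random_state=0))]

# Voting: average the base models' predicted probabilities (soft voting).
voter = VotingClassifier(estimators=base_models, voting="soft")

# Stacking: a meta-model learns how to combine the base models' predictions.
stacker = StackingClassifier(estimators=base_models,
                             final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("voting", voter), ("stacking", stacker)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```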
What is Bayesian Modeling?
This video explains Bayesian Modeling: Why do we need Bayesian Modeling? What is Bayesian Modeling? What are some examples where we can practically use Bayesian Modeling? Check out https://www.tensorflow.org/probability for code examples. We will have more videos and articles explaining Bayesian extensions of popular models shortly…
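As a tiny self-contained sketch (a made-up coin-flip example rather than the TensorFlow Probability code linked above), Bayesian modeling means starting from a prior, observing data, and reading conclusions off the posterior:

```python
# Beta-Binomial coin-flip model: conjugate prior, so the posterior is Beta again.
from scipy import stats

heads, tails = 7, 3              # observed flips (assumed for the demo)
prior_a, prior_b = 1.0, 1.0      # Beta(1, 1), i.e. a uniform prior over the bias

posterior = stats.beta(prior_a + heads, prior_b + tails)

print("Posterior mean bias:", round(posterior.mean(), 3))
print("95% credible interval:", [round(v, 3) for v in posterior.interval(0.95)])
```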
What is the Maximum Likelihood Estimate (MLE)?
Probabilistic models help us capture the inherent uncertainty in real-life situations. Examples of probabilistic models are Logistic Regression, the Naive Bayes classifier, and so on. Typically we fit such probabilistic models to the training data to estimate their parameters. The learnt model can then be used on unseen data to make predictions…
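As a small worked sketch (with made-up coin-flip data), the MLE is the parameter value that maximizes the likelihood of the observed data; for a Bernoulli model it has a closed form that a numerical search should agree with:

```python
# Maximum likelihood for a Bernoulli model: closed form vs. a grid search.
import numpy as np

flips = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])   # 1 = heads (assumed data)

# Closed form: the MLE of the heads probability p is simply the sample mean.
print("Closed-form MLE:", flips.mean())

# Numerical check: evaluate the log-likelihood on a grid of p values and take the argmax.
grid = np.linspace(0.01, 0.99, 99)
log_lik = flips.sum() * np.log(grid) + (len(flips) - flips.sum()) * np.log(1 - grid)
print("Grid-search MLE:", grid[np.argmax(log_lik)])
```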
Bias in Machine Learning: How to measure Fairness based on the Confusion Matrix?
Machine Learning models often give us unexpected and biased outcomes if the underlying data is biased. Very often, our process of collecting data is incomplete or flawed, leading to data that is not representative of the real world. In this article, we see why we need to measure fairness for ML models and how we…
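As a minimal sketch with made-up labels (not the article's code), one such fairness check compares confusion-matrix rates such as the true positive rate across groups:

```python
# Group-wise confusion-matrix rates: equal opportunity compares TPR across groups.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"])

for g in ("a", "b"):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    tpr = tp / (tp + fn)          # true positive rate for this group
    fpr = fp / (fp + tn)          # false positive rate for this group
    print(f"group {g}: TPR={tpr:.2f}  FPR={fpr:.2f}")
```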