Ridge Regression


What is Ridge Regression?

Ridge regression is a specialized technique used to analyze multiple regression data that suffers from multicollinearity. It is a fundamental regularization technique, but it is not used very widely because of the complex science behind it. However, the science behind ridge regression is fairly easy to explore if you already have an overall idea of multiple regression. The regression itself stays the same; under regularization, what changes is the way the model coefficients are determined.

The main idea behind ridge regression is to fit a new line that does not fit the training data perfectly: by accepting a small amount of bias, the model gains a large reduction in variance and generalizes better to new data.

[Image: ridge regression. Source: Towards Data Science]

What is Multicollinearity?

Multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy.

Multicollinearity essentially occurs when there are high correlations between two or more predictor variables.

You could say that multicollinearity refers to the existence of correlation between the independent variables in the modeled data. It can make the regression coefficient estimates inaccurate.

It can also inflate the standard errors of the regression coefficients and reduce the power of any t-tests.

Multicollinearity can produce misleading results and misleading p-values, making the model more redundant and reducing the efficiency and reliability of its predictions.

Multicollinearity can enter the data from various sources: during data collection, from population or model constraints, from an over-defined model, from outliers, or from the choice of model specification.

During data collection, multicollinearity can be introduced if the data is gathered using an inappropriate sampling procedure, or if the sample is smaller than expected.

Multicollinearity could also be caused by population or model constraints because of physical, legal, or political constraints.

If the model is over-defined, multicollinearity arises because there are more variables than observations. You can avoid this when specifying the model.

You can also reduce multicollinearity by eliminating outliers (extreme variable values that can induce multicollinearity) before applying ridge regression.
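As a quick, hedged illustration of how you might detect multicollinearity before reaching for ridge regression, the sketch below uses a small synthetic dataset and plain NumPy to compute the pairwise correlations between predictors and each predictor's variance inflation factor (VIF); VIF values well above 5–10 are a common warning sign. The data and thresholds here are illustrative assumptions, not part of the original article.

```python
import numpy as np

# Synthetic data: x2 is almost a copy of x1, so the predictors are collinear.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # highly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Pairwise correlations between the predictors.
print(np.corrcoef(X, rowvar=False).round(2))

# Variance inflation factor: regress each predictor on the others
# and compute VIF_j = 1 / (1 - R_j^2).
for j in range(X.shape[1]):
    others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    r2 = 1 - resid.var() / X[:, j].var()
    print(f"VIF of predictor {j}: {1 / (1 - r2):.1f}")
```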

What is the formula of Ridge Regression? 

The ridge coefficients minimize a penalized residual sum of squares:

β̂_ridge = argmin_β [ Σᵢ (yᵢ − xᵢᵀβ)² + λ Σⱼ βⱼ² ]

The first term is the ordinary least-squares objective; the second term is the L2 penalty on the coefficients, weighted by the tuning parameter λ ≥ 0.

How does Ridge Regression work?

Ridge regression carries out L2 regularization: a penalty equivalent to the square of the magnitude of the coefficients is added to the least-squares loss. The minimization objective is defined below.

With a response vector y ∈ ℝ^n and a predictor matrix X ∈ ℝ^(n×p), the ridge coefficients are defined as:

β̂_ridge = argmin_β ‖y − Xβ‖² + λ‖β‖²,

which has the closed-form solution β̂_ridge = (XᵀX + λI)⁻¹Xᵀy. In this objective:

  • λ is the tuning parameter that controls the strength of the penalty term.
  • When λ = 0, the objective reduces to ordinary least squares, and you get the same coefficients as simple linear regression.
  • When λ = ∞, the coefficients will be zero, because the infinite weight on the square of the coefficients means that any coefficient other than zero makes the objective infinite.
  • When 0 < λ < ∞, the magnitude of λ decides the weightage that is allotted to the various parts of the objective.
  • The minimization objective = LS Obj + λ (the sum of the squares of the coefficients)
    Here, LS Obj is the Least Square Objective. This is the linear regression objective without regularization.

When ridge regression shrinks the coefficients towards zero, it introduces some bias. However, it can also reduce the variance to a great extent, which gives you a better mean squared error. λ multiplies the ridge penalty and controls the amount of shrinkage: a large λ means a greater level of shrinkage, and different values of λ yield different coefficient estimates.
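To make the role of λ concrete, here is a minimal sketch of the closed-form ridge solution (XᵀX + λI)⁻¹Xᵀy on a made-up toy dataset; the data, the values of λ, and the absence of an intercept are simplifying assumptions for illustration. With λ = 0 it reproduces the ordinary least-squares coefficients, and larger values of λ shrink the coefficients towards zero.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge solution: (X'X + lam * I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy data (treated as centred for simplicity, so no intercept is fitted).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([3.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=50)

for lam in (0.0, 1.0, 10.0, 1000.0):
    print(lam, ridge_coefficients(X, y, lam).round(3))
# lam = 0 matches ordinary least squares; a very large lam drives
# every coefficient towards zero, as described above.
```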


Where is Ridge Regression used?

It is used to create parsimonious models when the number of predictor variables in a set exceeds the number of observations, or when the dataset suffers from multicollinearity. In other words, it is essentially used for the analysis of multicollinear, multiple regression data.
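As a brief sketch of the p > n case (with purely synthetic data, and scikit-learn assumed to be available), ridge regression still produces a well-defined fit even though ordinary least squares has no unique solution:

```python
import numpy as np
from sklearn.linear_model import Ridge

# p = 100 predictors but only n = 30 observations: ordinary least squares
# has no unique solution here, while ridge regression is still well defined.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 100))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=30)

model = Ridge(alpha=1.0).fit(X, y)   # alpha plays the role of lambda
print(model.coef_[:5])               # first few shrunken coefficients
print(model.score(X, y))             # in-sample R^2
```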


What is Lasso Regression?

Lasso regression, or Least Absolute Shrinkage and Selection Operator regression, is conceptually very similar to ridge regression. Like ridge regression, it adds a penalty for non-zero coefficients. But while ridge regression imposes an L2 penalty (penalizing the sum of squared coefficients), lasso regression imposes an L1 penalty (penalizing the sum of their absolute values). Because of this, for high values of λ, lasso regression shrinks many coefficients all the way to zero.
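The sketch below contrasts the two penalties on a synthetic dataset (the data and the regularization strengths are illustrative assumptions): at a comparable level of regularization, scikit-learn's Lasso sets many coefficients exactly to zero, while Ridge only shrinks them towards zero.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
# Only the first three predictors actually matter.
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=100)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print("ridge coefficients set to zero:", np.sum(ridge.coef_ == 0))  # typically 0
print("lasso coefficients set to zero:", np.sum(lasso.coef_ == 0))  # most of the 20
```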


How does Ridge Regression deal with Multicollinearity?

When there is multicollinearity, the least squares estimates are still unbiased, but their variances are very large, so they may be quite far off from the true value. Ridge regression reduces the standard errors by introducing a degree of bias into the regression estimates. It essentially aims to produce estimates that are more reliable.
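A small simulation can illustrate this bias-variance trade-off; the collinear data below is synthetic and the choice of λ is an arbitrary assumption. Across repeated samples, the ordinary least-squares coefficients swing wildly, while the ridge coefficients are slightly biased but far more stable.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(4)
ols_estimates, ridge_estimates = [], []

for _ in range(500):
    # Two nearly identical predictors -> severe multicollinearity.
    x1 = rng.normal(size=40)
    x2 = x1 + rng.normal(scale=0.01, size=40)
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.normal(scale=0.5, size=40)   # true coefficients: 1 and 1

    ols_estimates.append(LinearRegression().fit(X, y).coef_)
    ridge_estimates.append(Ridge(alpha=1.0).fit(X, y).coef_)

# The OLS standard deviations are huge; the ridge ones are much smaller,
# at the cost of a small bias in the estimates.
print("OLS   coefficient std dev:", np.std(ols_estimates, axis=0).round(2))
print("ridge coefficient std dev:", np.std(ridge_estimates, axis=0).round(2))
```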

What are the advantages of Ridge Regression?

  • It protects the model from overfitting.
  • It does not need unbiased estimators.
  • There is only enough bias to make the estimates reasonably reliable approximations to the true population values.
  • It performs well on large multivariate data where the number of predictors (p) exceeds the number of observations (n).
  • The ridge estimator is very effective when it comes to improving the least-squares estimate in situations where there is multicollinearity.
  • It reduces model complexity.

What are the disadvantages of Ridge Regression?

  • It includes all the predictors in the final model.
  • It is not capable of performing feature selection.
  • It shrinks coefficients towards zero.
  • It trades variance for bias.

What is the difference between Ridge Regression & Least Squares? 

Linear regression is one of the most commonly used regression modeling techniques. In linear regression, the dependent variable is continuous, while the independent variables can be continuous or discrete depending on the equation. Linear regression establishes a relationship between the dependent variable (Y) and one or more independent variables (X) using a best-fit straight line (also known as the regression line).

Ridge regression is a technique used to avoid the data complexities that arise from multicollinearity (highly correlated independent variables) by using the shrinkage parameter λ (lambda).

Under multicollinearity, even though the least squares (OLS) estimates are unbiased, their variances are large, which can push the estimated values far from the true values. By adding a degree of bias to the regression estimates, ridge regression reduces these standard errors.

What is Shrinkage & Regularization in the Ridge Regression Model? 

Shrinkage methods are a more modern alternative in which we do not select variables explicitly; rather, we fit a model containing all of the predictors using a technique that constrains or regularizes the coefficient estimates, or equivalently, shrinks the coefficient estimates towards zero relative to the least-squares estimates.

This shrinkage, also known as regularization, can have the effect of reducing variance and, depending on the penalty used, can also perform variable selection.
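A short sketch of this shrinkage effect (again on a synthetic dataset, with an arbitrary grid of λ values): fitting scikit-learn's Ridge over increasing regularization strengths shows every coefficient moving steadily towards zero relative to the near least-squares fit, without ever being set exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 5))
y = X @ np.array([4.0, -3.0, 2.0, 0.0, 0.0]) + rng.normal(size=80)

# alpha corresponds to the lambda in the text; alpha = 0.01 is close to OLS.
for alpha in (0.01, 1.0, 10.0, 100.0, 1000.0):
    coef = Ridge(alpha=alpha).fit(X, y).coef_
    print(f"lambda={alpha:>7}: {coef.round(2)}")
```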

What are the alternative interpretations of Ridge Regression? 

Ridge regression can also be given a Bayesian interpretation. If we assume that each regression parameter has a prior with expectation zero and a common variance, then the ridge regression estimate can be shown to be the Bayesian solution. Another viewpoint, often raised by detractors, is the “phony data” interpretation.

It can be shown that the ridge regression solution is obtained by adding extra rows of data to the original data matrix. Each of these rows is constructed with 0 for the dependent variable, and with the square root of k (the ridge constant) for one independent variable and zero for the others.

One extra row is added for every predictor variable. The idea that manufactured data yields the ridge regression results has caused a lot of concern and has increased the controversy around its use and interpretation.
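The “phony data” construction is easy to verify numerically. In the sketch below (plain NumPy on a toy dataset, with an arbitrary choice of the ridge constant), appending one extra row per predictor, containing the square root of the ridge constant for that predictor, zero for the others, and 0 for the response, makes ordinary least squares on the augmented data reproduce the ridge solution exactly.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=30)
lam = 5.0   # the ridge constant "k" from the text

# Direct ridge solution: (X'X + lam * I)^-1 X'y.
ridge = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# "Phony data" construction: one extra row per predictor, holding
# sqrt(lam) for that predictor and zero elsewhere, with 0 as the response.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(4)])
y_aug = np.concatenate([y, np.zeros(4)])
ols_on_augmented, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print(np.allclose(ridge, ols_on_augmented))   # True: the two solutions coincide
```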
