
Regression Evaluation Metrics

Once we build our regression model, how can we measure the goodness of fit?

We have various regression evaluation metrics to measure how well our model fits the data.

In this article, we will see some of the most commonly used metrics to assess a regression model.

MEAN SQUARED ERROR:

The first metric we are going to see is the mean squared error.

It calculates the average of the square of the errors between the actual and the predicted values.

The lower the value, the better the regression model.

MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

Here yi denotes the true value for the ith data point, ŷi indicates the predicted value, and n is the number of data points.
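
As a quick illustration, here is a minimal NumPy sketch of this formula; the y_true and y_pred arrays are made-up values, and scikit-learn's mean_squared_error would return the same number:

```python
import numpy as np

# Made-up actual and predicted values, purely for illustration
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# MSE: average of the squared differences between actual and predicted values
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.375
```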

However, the problem with MSE is that, since the errors are squared, the unit of measurement is changed.

To overcome this problem, we use the root mean squared error.

ROOT MEAN SQUARED ERROR:

RMSE is the most popular metric to measure the error of a regression model.

This metric is calculated as the square root of the average squared distance between the actual and the predicted values.

Taking the square root of the mean squared error will give you RMSE.

Since we are taking the square root, it reverts the unit of measurement to its original scale.

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}

It can be used to compare models only when their errors are measured in the same units.
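
A minimal sketch, again on made-up values, showing that RMSE is simply the square root of the MSE:

```python
import numpy as np

# Same made-up values as in the MSE sketch
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# RMSE: square root of the MSE, back in the original units of y
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)  # ~0.612
```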

MEAN ABSOLUTE ERROR:

It is calculated as the mean of the absolute difference between the actual and the predicted values.

MAE = \frac{1}{N} \sum_{i=1}^{N} |y_i - \hat{y}_i|

Where ŷi is the predicted value of the ith sample, yi is the corresponding actual value, and N is the number of samples.
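
A minimal NumPy sketch of the MAE formula on made-up values (scikit-learn's mean_absolute_error computes the same quantity):

```python
import numpy as np

# Made-up values for illustration
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# MAE: average of the absolute differences
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # 0.5
```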

Both RMSE and MAE are scale dependent and can be used to compare models only if they are measured in the same units.

To compare models with different units, we can use metrics like MAPE or RAE.

MEAN ABSOLUTE PERCENTAGE ERROR (MAPE):

MAPE measures the error in percentage terms.

MAPE is calculated by averaging, over every observation, the absolute difference between the actual and predicted values divided by the actual value.

MAPE = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{y_t - \hat{y}_t}{y_t} \right|

It is multiplied by 100 to make it a percentage error.

Where n is the size of the sample, ŷt is the value predicted by the model, and yt is the actual value.

However, the problem here is that it produces infinite or undefined values when the actual values are zero or close to zero.
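
A minimal NumPy sketch of MAPE on made-up values; note that the actual values must stay away from zero, for the reason mentioned above:

```python
import numpy as np

# Made-up values; none of the actual values is zero
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# MAPE: mean of |error / actual|, expressed as a percentage
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(mape)  # ~32.74
```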

RELATIVE ABSOLUTE ERROR:

RAE is defined as the ratio between the sum of absolute errors and the sum of absolute deviations.

RAE = \frac{\sum_{i=1}^{n} |p_i - a_i|}{\sum_{i=1}^{n} |a_i - \bar{a}|}

where pi is the predicted value, ai is the actual value, and ā is the mean of the actual values.
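
A minimal NumPy sketch of RAE on made-up values, following the formula above:

```python
import numpy as np

# Made-up actual (a) and predicted (p) values
a = np.array([3.0, -0.5, 2.0, 7.0])
p = np.array([2.5,  0.0, 2.0, 8.0])

# RAE: sum of absolute errors divided by the sum of absolute deviations from the mean
rae = np.sum(np.abs(p - a)) / np.sum(np.abs(a - a.mean()))
print(rae)  # ~0.235
```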

R-SQUARE:

R-square, also known as the coefficient of determination, is one of the commonly used regression evaluation metrics.

It measures the proportion of the variance of the dependent variable that is explained by the independent variables.

If the R-squared value is 0.90, then we can say that the independent variables have explained 90% of the variance in the dependent variable.

It ranges from 0 to 1, where 0 indicates that the fit is poor.

It is computed from the ratio of the sum of squared errors (SSE) to the total sum of squares (SST),

R^2 = 1 - \frac{SSE}{SST}

where SSE is the sum of squared errors and computed as,

SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

and SST(total sum of squares) is given as,

SST = \sum_{i=1}^{n} (y_i - \bar{y})^2
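
Here is a minimal NumPy sketch that puts SSE, SST, and r-square together on made-up values (scikit-learn's r2_score returns the same number):

```python
import numpy as np

# Made-up values for illustration
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

sse = np.sum((y_true - y_pred) ** 2)         # sum of squared errors
sst = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2 = 1 - sse / sst
print(r2)  # ~0.949
```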

However, the problem with r-square is that its value spuriously increases as more independent variables are added.

Even if the variables are irrelevant, the value of r-square will still increase.

Assume we are comparing two models A and B with the same dependent variable and A having more independent variables than model B.

Then there is a chance that the r-square value of model A is greater than or equal to that of model B just because model A has more independent variables.

So r-squared cannot be used for a meaningful comparison between models with different numbers of independent variables.

ADJUSTED R-SQUARE:

To counter this problem, adjusted r-square penalizes the addition of independent variables that do not increase the explanatory power of the regression model.

The value of adjusted r-square is always less than or equal to the value of r-square.

It ranges from 0 to 1, the closer the value is to 1, the better it is.

\text{Adjusted } R^2 = 1 - \frac{(1 - R^2)(n - 1)}{n - k - 1}

where n is the sample size and k is the number of independent variables.
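
A minimal sketch of the adjusted r-square formula; the r2, n, and k values below are made up purely for illustration, since this metric is usually computed by hand from r-square, the sample size, and the number of predictors:

```python
# Made-up inputs, purely for illustration
r2 = 0.9486  # r-square of the fitted model (e.g., from the sketch above)
n = 4        # sample size
k = 2        # number of independent variables

# Adjusted r-square penalizes extra predictors that add little explanatory power
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(adj_r2)  # ~0.846
```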

MEDIAN ABSOLUTE ERROR:

It is calculated as the median of all absolute differences between the actual and predicted values.

\text{MedAE} = \text{median}(|y_1 - \hat{y}_1|, \ldots, |y_n - \hat{y}_n|)

where yi is the actual value and ŷi is the corresponding predicted value.

Unlike the mean-based metrics, the median absolute error is robust to outliers.
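
A minimal NumPy sketch illustrating this robustness: the last prediction is a made-up outlier, which barely moves the median absolute error while inflating the mean absolute error (scikit-learn's median_absolute_error computes the same quantity):

```python
import numpy as np

# Made-up values; the last point is badly mispredicted (an outlier)
y_true = np.array([3.0, -0.5, 2.0, 7.0, 50.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0, 10.0])

abs_errors = np.abs(y_true - y_pred)
print(np.median(abs_errors))  # 0.5 -- barely affected by the outlier
print(np.mean(abs_errors))    # 8.4 -- the mean absolute error is inflated by it
```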

 

SUMMARY:

Performance metrics are vital for any machine learning model.

In this article, we discussed several important regression evaluation metrics.

We first discussed Mean Squared Error (MSE), which measures the average squared error of our predictions.

The problem with MSE is that, since the errors are squared, the unit of measurement is changed.

To fill this deficiency, we looked at another metric called RMSE, which reverts the value to its original unit of measurement by taking a square root.

Then we discussed MAPE and RAE, which can be used to compare two models of different scales.

We then discussed r-squared and learned why it cannot be used for a sensible comparison between two models.

To counter the problem faced by r-squared, we discussed the adjusted r-squared.

Finally, we discussed Median Absolute Error, which is resilient to outliers.
