Sum of Squared Errors (SSE) Calculation in Linear Regression Analysis

Introduction

In linear regression analysis, the Sum of Squared Errors (SSE) measures the total unexplained variation in a model's predictions: the sum of the squared differences between the observed values and the values predicted by the model,

SSE = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2

Equivalently, SSE is the difference between the total variability in the observed data and the variability explained by the regression model. When SSE is zero, the model perfectly explains all the variability in the data.

Calculation Steps

To compute SSE, follow these steps:

Step 1: Calculate SS_{XX} (Sum of Squares for X)

SS_{XX} represents the total variability in the predictor variable X. It is calculated using the formula:

SS_{XX} = \sum_{i=1}^{n} X_i^2 - \frac{1}{n} \left(\sum_{i=1}^{n} X_i\right)^2

where X_i are the values of the predictor variable, and n is the number of observations.

Step 2: Calculate SS_{YY} (Sum of Squares for Y)

SS_{YY} represents the total variability in the response variable Y. It is calculated using:

SS_{YY} = \sum_{i=1}^{n} Y_i^2 - \frac{1}{n} \left(\sum_{i=1}^{n} Y_i\right)^2

where Y_i are the values of the response variable, and n is the number of observations.

Step 3: Calculate SS_{XY} (Sum of Squares for X and Y)

SS_{XY} measures how X and Y vary together; it is proportional to the sample covariance of X and Y. It is calculated using:

SS_{XY} = \sum_{i=1}^{n} X_i Y_i - \frac{1}{n} \left(\sum_{i=1}^{n} X_i\right) \left(\sum_{i=1}^{n} Y_i\right)
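
These three quantities can be computed directly from the raw data. Below is a minimal Python sketch (the function name sums_of_squares is illustrative, not from the source; it assumes x and y are equal-length numeric lists):

```python
def sums_of_squares(x, y):
    """Compute SS_XX, SS_YY, and SS_XY from paired observations."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    ss_xx = sum(xi ** 2 for xi in x) - sum_x ** 2 / n
    ss_yy = sum(yi ** 2 for yi in y) - sum_y ** 2 / n
    ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    return ss_xx, ss_yy, ss_xy
```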

Step 4: Calculate Regression Coefficients

  • Slope (\hat{\beta_1}):

\hat{\beta_1} = \frac{SS_{XY}}{SS_{XX}}

  • Intercept (\hat{\beta_0}):

\hat{\beta_0} = \frac{\sum_{i=1}^{n} Y_i}{n} - \hat{\beta_1} \times \frac{\sum_{i=1}^{n} X_i}{n}
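
As a sketch, both estimates can be computed together in Python (fit_line is an illustrative name, not from the source):

```python
def fit_line(x, y):
    """Ordinary least squares slope and intercept for one predictor."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    ss_xx = sum(xi ** 2 for xi in x) - sum_x ** 2 / n
    ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    beta1 = ss_xy / ss_xx                  # slope = SS_XY / SS_XX
    beta0 = sum_y / n - beta1 * sum_x / n  # intercept = mean(Y) - slope * mean(X)
    return beta0, beta1
```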

Step 5: Calculate SS_R (Sum of Squares for Regression)

SS_R represents the portion of the total variability in Y that is explained by the regression model. It is given by:

SS_R = \hat{\beta_1} \times SS_{XY}

Step 6: Calculate SSE (Sum of Squared Errors)

SSE measures the portion of the total variability in Y that is not explained by the regression model. It is given by:

SSE = SS_{YY} - SS_R
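
Putting the steps together, here is a minimal end-to-end Python sketch (sse is an illustrative name; it assumes x and y are equal-length numeric lists):

```python
def sse(x, y):
    """Sum of squared errors via the decomposition SSE = SS_YY - SS_R."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    ss_xx = sum(xi ** 2 for xi in x) - sum_x ** 2 / n
    ss_yy = sum(yi ** 2 for yi in y) - sum_y ** 2 / n
    ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    beta1 = ss_xy / ss_xx  # slope
    ss_r = beta1 * ss_xy   # explained variability
    return ss_yy - ss_r    # unexplained variability (SSE)
```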

Example Calculation

Given the following data:

  • X values: [1, 2, 3, 4, 5]
  • Y values: [2, 4, 6, 8, 10]

We calculate:

SS_{XX}:

SS_{XX} = \sum X_i^2 - \frac{1}{n} \left(\sum X_i\right)^2 = 55 - \frac{1}{5}(15)^2 = 10

SS_{YY}:

SS_{YY} = \sum Y_i^2 - \frac{1}{n} \left(\sum Y_i\right)^2 = 220 - \frac{1}{5}(30)^2 = 40

SS_{XY}:

SS_{XY} = \sum X_i Y_i - \frac{1}{n} \left(\sum X_i\right) \left(\sum Y_i\right) = 110 - \frac{1}{5}(15 \times 30) = 20

Slope (\hat{\beta_1}):

\hat{\beta_1} = \frac{SS_{XY}}{SS_{XX}} = \frac{20}{10} = 2

Intercept (\hat{\beta_0}):

\hat{\beta_0} = \frac{\sum Y_i}{n} - \hat{\beta_1} \times \frac{\sum X_i}{n} = 6 - 2 \times 3 = 0

SS_R:

SS_R = \hat{\beta_1} \times SS_{XY} = 2 \times 20 = 40

SSE:

SSE = SS_{YY} - SS_R = 40 - 40 = 0
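
Running the sse sketch from Step 6 on this data reproduces the hand calculation:

```python
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
print(sse(x, y))  # 0.0 -- the points lie exactly on the line Y = 2X
```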

Conclusion

SSE quantifies the discrepancy between the observed data and the predictions made by the regression model. An SSE of 0 indicates that the model perfectly explains the variability in the response variable; in the example above this is expected, since the Y values lie exactly on the line Y = 2X.

Alternatives to SSE in Statistics

Mean Absolute Error (MAE)

Definition: MAE measures the average magnitude of errors in a set of predictions, without considering their direction. It is the mean of the absolute differences between predicted values and actual values.

Formula:

\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|

Advantages:

  • MAE is less sensitive to outliers than squared-error metrics such as SSE, MSE, and RMSE, because it does not square the errors.
  • Provides an intuitive measure of average error magnitude, in the same units as the data.

Disadvantages:

  • Does not penalize large errors more heavily than small ones, since every error contributes in proportion to its magnitude.
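
A minimal Python sketch of MAE (mae is an illustrative name; inputs are equal-length numeric sequences):

```python
def mae(y_true, y_pred):
    """Mean absolute error: average of |y_i - y_hat_i|."""
    return sum(abs(a - p) for a, p in zip(y_true, y_pred)) / len(y_true)
```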

Root Mean Squared Error (RMSE)

Definition: RMSE is the square root of the average of the squared differences between predicted and actual values. It penalizes large errors more severely than MAE due to the squaring of errors.

Formula:

\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}

Advantages:

  • RMSE provides a measure of error in the same units as the dependent variable, making it easier to interpret.
  • Emphasizes larger errors more than MAE, which can be useful if large errors are particularly undesirable.

Disadvantages:

  • Sensitive to outliers due to the squaring of errors.
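
A matching Python sketch for RMSE (same illustrative conventions as the MAE example):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: square root of the mean squared error."""
    mse = sum((a - p) ** 2 for a, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse)
```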

Mean Absolute Percentage Error (MAPE)

Definition: MAPE measures the size of the error in percentage terms. It is the mean of the absolute percentage errors between predicted and actual values.

Formula:

\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%

Advantages:

  • Provides error metrics in percentage terms, which can be easier to interpret in some contexts.
  • Useful for comparing forecast performance across different scales.

Disadvantages:

  • Is undefined when any actual value is zero, and can produce very large percentage errors when actual values are close to zero.
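
A minimal Python sketch of MAPE (mape is an illustrative name; it rejects zero actual values, where the metric is undefined):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    if any(a == 0 for a in y_true):
        raise ValueError("MAPE is undefined when an actual value is zero")
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(y_true, y_pred)) / len(y_true)
```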