- Published on
Notes on Linear Regression – A Deeper Exploration
- Authors
- Name
- Matthew Chang
These are notes on linear regression concepts I created for reference in stock price and energy cost prediction research.
Source: An Introduction to Statistical Learning with Applications in R Found Here.
Abstract
Chapter 3 focused on the use cases and interpretation of linear regression, a simple technique with high interpretability but low flexibility in model fitting (pg. 25 ISL). The chapter covers simple regression, multiple linear regression, key considerations when fitting a model, and a comparison to KNN regression.
The basic model is just a line like the ones from Algebra I, but written as

$$Y = \beta_0 + \beta_1 X + \epsilon$$

We are trying to estimate a good $\hat{\beta}_0$ and $\hat{\beta}_1$. This is done using linear algebra, which was not provided in the book. $\epsilon$ exists as the error that we get from the model, since no model can fit the data 100% perfectly in practice.
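As a quick illustration of fitting those coefficients, here is a minimal least-squares sketch in NumPy on made-up data (the true coefficients 2 and 3 below are arbitrary choices for the example, not anything from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2 + 3x + noise (true beta_0 = 2, beta_1 = 3 are made up)
x = rng.uniform(0, 10, size=100)
y = 2 + 3 * x + rng.normal(scale=2.0, size=100)

# Ordinary least squares via a design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # [beta_0_hat, beta_1_hat]

print("beta_0_hat:", beta_hat[0], "beta_1_hat:", beta_hat[1])
```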
Assessing Accuracy of Coefficients
We can see how far off the data is from our prediction line, which tells us how well our model is fitting. One way to test the accuracy is with the variance $\sigma^2$ and the standard error, e.g. $\mathrm{SE}(\hat{\mu})^2 = \sigma^2 / n$ for the sample mean. A good thing to note here is that as $n$ increases, $\mathrm{SE}(\hat{\mu})$ gets smaller, making the model more accurate. This is because more data points give the model a better idea of the overall trend.
If we want to find out how close $\hat{\beta}_0$ and $\hat{\beta}_1$ are to the actual values, we can use the following equations:

$$\mathrm{SE}(\hat{\beta}_0)^2 = \sigma^2 \left[ \frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \right], \qquad \mathrm{SE}(\hat{\beta}_1)^2 = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}$$

$(x_i - \bar{x})$ is the actual value of the point minus the mean.
- If actual and mean differ more, then the dataset is more spread out.
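To make the standard-error formulas concrete, here is a small sketch that estimates $\sigma^2$ with the squared RSE and plugs it into the equations above (synthetic data, arbitrary coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 + 3 * x + rng.normal(scale=2.0, size=100)          # made-up data

n = len(x)
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - (b0 + b1 * x)

# sigma^2 is unknown, so estimate it with RSE^2 = RSS / (n - 2)
sigma2_hat = np.sum(resid**2) / (n - 2)
sxx = np.sum((x - x.mean())**2)                          # spread of x around its mean

se_b1 = np.sqrt(sigma2_hat / sxx)
se_b0 = np.sqrt(sigma2_hat * (1 / n + x.mean()**2 / sxx))
print("SE(beta_0_hat):", se_b0, "SE(beta_1_hat):", se_b1)
```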
Is There Even a Relationship?
To determine if there is a relationship between our predictor variable $X$ and output variable $Y$, we can use a t-test. This test is similar to looking at a normal distribution but is used when dealing with samples rather than entire populations. For example, if we wanted to measure income, we might get a sample of 1,000 people, not the entire world.
We will get two results from testing:
- A t-statistic, which indicates how many standard deviations away we are from "normal".
- A p-value, which helps us determine if there is any relationship between the variables. We can set our null hypothesis as $H_0: \beta_1 = 0$ (no relationship) and our alternative hypothesis as $H_a: \beta_1 \neq 0$ (some relationship).
We can calculate our t-statistic using the equation:

$$t = \frac{\hat{\beta}_1 - 0}{\mathrm{SE}(\hat{\beta}_1)}$$

Notice that we have a $-\,0$, which would usually be $\beta_1$ (the hypothesized value), but our null hypothesis says there is no relationship, so we set it to 0 and see what happens.
- If $t$ is large, maybe greater than 3, we can reject $H_0$.
- Similarly, we can reject $H_0$ if our p-value is less than 0.05, our $\alpha$ level.
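A sketch of the t-test on synthetic data; the data and variable names are made up, and the p-value comes from the t distribution with $n - 2$ degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 + 3 * x + rng.normal(scale=2.0, size=100)          # made-up data

n = len(x)
# Closed-form simple regression estimates
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
se_b1 = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))

t_stat = (b1 - 0) / se_b1                                # "- 0" because H0 says beta_1 = 0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)          # two-sided p-value
print("t:", t_stat, "p-value:", p_value)
```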
Assessing Accuracy of Model
Another important metric is the Residual Sum of Squares (RSS), which measures the distance between the line and the actual points, squared and summed up:

$$\mathrm{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$
While RSS can tell us how much error we have, it is not easily interpretable. We have two other tools to help with this, starting with the Residual Standard Error (RSE):

$$\mathrm{RSE} = \sqrt{\frac{\mathrm{RSS}}{n - 2}}$$
RSE normalizes RSS, making it more interpretable. However, it is still hard to know what a good RSE is. That's why we use $R^2$, which is the proportion of variance explained (a number between 0 (no fit) and 1 (perfect model)):

$$R^2 = \frac{\mathrm{TSS} - \mathrm{RSS}}{\mathrm{TSS}} = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}$$
Total Sum of Squares (TSS) is the sum of all the differences between actual points and the mean, squared: $\mathrm{TSS} = \sum_{i=1}^{n} (y_i - \bar{y})^2$.
In this model, TSS is how much $Y$ varies, while RSS is the amount of variability the model is not able to explain. The difference, $\mathrm{TSS} - \mathrm{RSS}$, is the amount of variability in the response that the model does explain.
- We can also use the correlation $r = \mathrm{Cor}(X, Y)$, given by another equation (pg. 70). In simple linear regression, $R^2 = r^2$, so the correlation squared is the proportion of variance explained.
- Even with easier interpretation, a good $R^2$ still depends on the context. 0.2 might seem really bad, but in some contexts it can be good.
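Here is a small sketch computing RSS, RSE, and $R^2$ directly from the definitions above, again on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 5 - 2 * x + rng.normal(scale=3.0, size=50)    # made-up data

n = len(x)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

rss = np.sum((y - y_hat)**2)        # unexplained variability
tss = np.sum((y - y.mean())**2)     # total variability in y
rse = np.sqrt(rss / (n - 2))        # typical residual size, in units of y
r2 = 1 - rss / tss                  # proportion of variance explained
print("RSS:", rss, "RSE:", rse, "R^2:", r2)
```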
Multiple Linear Regression
We can now add more variables. Instead of just $X$, we can have many:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \epsilon$$
We interpret $\beta_j$ as the average effect on $Y$ of a 1-unit increase in $X_j$, with all other predictors held fixed.
Similarly, RSS is given by:

$$\mathrm{RSS} = \sum_{i=1}^{n} \left( y_i - \hat{\beta}_0 - \hat{\beta}_1 x_{i1} - \cdots - \hat{\beta}_p x_{ip} \right)^2$$
With simple regression, two different factors (like radio and news ads) might each show an increase in sales the more spending is put into that form of advertisement.
- However, in the example, multiple regression shows non-rejection of $H_0$ for news ads (indicating there isn't actually an increase in sales from increasing news ads).
- When looking at a correlation matrix, you can see that radio and news ads are correlated around 0.35.
- This is like saying that an increase in ice cream sales increases the number of shark attacks. Here, news ads don't increase sales; it's just that more radio ad spending tends to mean more news ad spending.
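The radio/news-ad situation can be mimicked with synthetic data: below, newspaper spending is generated to track radio spending but has no direct effect on sales, so multiple regression should give it a large p-value (the numbers and variable names are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200

# Made-up advertising-style data: radio drives sales; newspaper only tracks radio
radio = rng.uniform(0, 50, size=n)
newspaper = 0.35 * radio + rng.uniform(0, 30, size=n)
sales = 3 + 0.2 * radio + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([radio, newspaper]))
fit = sm.OLS(sales, X).fit()
print(fit.params)    # coefficients: intercept, radio, newspaper
print(fit.pvalues)   # newspaper's p-value should be large (fail to reject H0)
```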
Questions to Consider
Relationship Between Response and Predictors?
Similar to our simple linear regression model, we can set up a hypothesis test to see if there is any sort of relationship between our variables and the outcome. We use an F-test here, as there are multiple variables. First, we define the null and alternative hypotheses as:

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_p = 0 \qquad \text{vs.} \qquad H_a: \text{at least one } \beta_j \neq 0$$
Next, we find our F-statistic:

$$F = \frac{(\mathrm{TSS} - \mathrm{RSS})/p}{\mathrm{RSS}/(n - p - 1)}$$
- The F-stat adjusts for the number of predictors $p$, so with many predictors (where some individual p-values will look small just by chance), it is good to look at the F-stat, not just the p-values.
- Normal F-stat values are around 1 when $H_0$ is true. If the value is large, then reject $H_0$, meaning there is a relationship.
- This also means that $E\{(\mathrm{TSS} - \mathrm{RSS})/p\} > \sigma^2$, i.e. the expected value of the average amount of variability explained per predictor is greater than the noise/error.
- This is good because it suggests that the model is explaining the variability in our data.
- Remember: You can't fit a linear regression model if the number of parameters $p$ is greater than the number of samples $n$.
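A minimal sketch of the F-test computed from TSS and RSS on made-up data with $p = 3$ predictors (the coefficients are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 100, 3

# Made-up data with p = 3 predictors, two of which actually matter
X = rng.normal(size=(n, p))
y = 1 + 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

Xd = np.column_stack([np.ones(n), X])               # add intercept column
beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
rss = np.sum((y - Xd @ beta)**2)
tss = np.sum((y - y.mean())**2)

f_stat = ((tss - rss) / p) / (rss / (n - p - 1))
p_value = stats.f.sf(f_stat, p, n - p - 1)          # tail probability of the F distribution
print("F:", f_stat, "p-value:", p_value)
```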
Deciding Important Variables
After rejecting $H_0$, we need to figure out which variables are affecting the outcome to make a new model with the important variables (called variable selection). Since there are $2^p$ possible models (way too many to try all), there are three possible approaches (a code sketch of forward selection appears below this list):
- Forward Selection: Begin with no predictors. Add a variable that gives the lowest RSS. Continue until some stopping rule.
- Backward Selection: Similar, but start with all variables and delete those with the largest p-values one at a time.
- Mixed: Combine the first two methods - Do a forward selection and monitor the results. If needed, apply backward selection.
We will learn more about model fit and selection in Chapter 5.
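Here is a rough sketch of forward selection by RSS, as described in the list above; the `forward_selection` helper and its stopping rule (a fixed number of variables) are my own simplification, not the book's procedure:

```python
import numpy as np

def forward_selection(X, y, max_vars=None):
    """Greedy forward selection: start with no predictors and at each step
    add the column that lowers RSS the most, up to max_vars predictors."""
    n, p = X.shape
    max_vars = max_vars or p
    selected, remaining = [], list(range(p))

    while remaining and len(selected) < max_vars:
        best_rss, best_j = None, None
        for j in remaining:
            cols = selected + [j]
            Xd = np.column_stack([np.ones(n), X[:, cols]])
            beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
            rss = np.sum((y - Xd @ beta) ** 2)
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Made-up example: only columns 0 and 2 actually influence y
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(size=200)
print(forward_selection(X, y, max_vars=2))   # expect [0, 2] in some order
```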
Model Fit
$R^2$ will always increase with more added variables. If we see that adding a parameter only increases $R^2$ slightly, we can drop it from the model. Keep in mind that overfitting will lead to bad results on independent test samples.
Predictions
There are three types of uncertainty:
- The least squares plane is only an estimate of the true population regression plane $f(X)$. We can quantify this uncertainty by computing a confidence interval (CI).
- This is a linear approximation, so model bias exists.
- Random error will exist as irreducible error. We can use prediction intervals (rather than CI) which give us a wider range of uncertainty.
Qualitative Predictors
For non-numeric predictors (like marital status, employment status, etc.) with two possible values, we can create an indicator/dummy variable:

$$x_i = \begin{cases} 1 & \text{if yes} \\ 0 & \text{if no} \end{cases}$$
*Personal note: Recall from linear algebra that a design matrix for least squares looks like:

$$X = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}$$
- You can also use -1 instead of 0. It means that the coefficient will be interpreted differently, but the model will still work the same.
- Think of it as adding a gray area instead of black and white (yes or no). Here, the value of $\beta_0$ would sit between the values for yes and no.
- For more than two levels/possible values, you need more dummy variables. Here is an example:
We are looking at four sections of the US - North, South, West, East. We want to first define our default (baseline) level, which will be East here; its effect is absorbed into the intercept $\beta_0$. This means that we will be comparing all the other areas to East:
- Example: South will have \$18 less debt than East if $\hat{\beta}_{\text{South}} = -18$.
- We use an F-test to test relationships and importance. By using different codings (0/1 vs. -1/1), we can measure different contrasts.
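A small sketch of dummy coding for a multi-level predictor, using made-up region/debt data; with East as the baseline, each fitted coefficient is a contrast against East (the numbers are invented):

```python
import pandas as pd
import statsmodels.api as sm

# Made-up data: region as a 4-level qualitative predictor of debt
df = pd.DataFrame({
    "region": ["East", "South", "West", "North", "East", "South", "West", "North"],
    "debt":   [500,    482,     510,    495,     505,    480,     515,    498],
})

# drop_first=True makes "East" (alphabetically first) the baseline level,
# so each dummy coefficient is a difference relative to East
X = pd.get_dummies(df["region"], drop_first=True).astype(float)
X = sm.add_constant(X)
fit = sm.OLS(df["debt"], X).fit()
print(fit.params)   # intercept = East average; other terms = difference from East
```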
Extending The Linear Model
So far, we have assumed that the relationship between variables is additive and linear - that is, the effect of a predictor on the response doesn't depend on the values of other predictors, and the change in $Y$ from a 1-unit change in $X_j$ is constant.
However, this often isn't the case. For example, splitting a marketing budget across different channels can be better than putting 100% of the budget into just YouTube ads. This is known as synergy/interaction.
We can extend the model by adding an interaction term:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2 + \epsilon$$

Rewritten:

$$Y = \beta_0 + (\beta_1 + \beta_3 X_2) X_1 + \beta_2 X_2 + \epsilon = \beta_0 + \tilde{\beta}_1 X_1 + \beta_2 X_2 + \epsilon$$

Now, notice that there is an $X_2$ term as part of $\tilde{\beta}_1 = \beta_1 + \beta_3 X_2$, meaning that a change in $X_2$ will affect the slope on $X_1$.
- $\beta_3$ is the increase in the effectiveness of $X_1$ associated with a 1-unit increase in $X_2$.
- If $H_0: \beta_3 = 0$ was rejected, then we have evidence that the true relationship is not additive (the effect of a predictor on the response does depend on the other predictors).
Hierarchical principle: If we include the interaction $X_1 X_2$, we should also include the main effects $X_1$ and $X_2$ in our model, even if their p-values are large. The interaction is often denoted $X_1 \times X_2$.
Another thing you can try is polynomial regression, which is calculated just like linear regression but with powers of a predictor added as extra dimensions:

$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \cdots + \beta_d X^d + \epsilon$$
Be careful though, adding more powers may result in overfitting. More on this in Chapter 7.
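A sketch of fitting an interaction term and a polynomial term with ordinary least squares on synthetic data (the coefficients 0.3, 1.0, and 0.5 are arbitrary choices for the example):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300

# Made-up data with a synergy effect: X1's slope grows with X2
x1 = rng.uniform(0, 10, size=n)
x2 = rng.uniform(0, 10, size=n)
y = 2 + 1.0 * x1 + 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(size=n)

# Interaction model: include x1 and x2 (hierarchical principle) plus the x1*x2 term
X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
print(sm.OLS(y, X).fit().params)        # last coefficient estimates beta_3

# Polynomial regression is the same machinery with powers of x as extra columns
Xpoly = sm.add_constant(np.column_stack([x1, x1**2]))
print(sm.OLS(y, Xpoly).fit().params)
```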
Potential Issues with Regression Model
Non-Linearity
To see if the relationship is non-linear, we can plot the residuals $e_i$ vs. the predictor $x_i$. Ideally, there is no discernible pattern. However, if we see something like a U-shape, then the relationship might be non-linear. A simple way to fix this is to add transformed predictors (like polynomial regression), e.g., $\log X$, $\sqrt{X}$, or $X^2$.
In the case of multiple regression, plot residuals vs. predicted (fitted) values $\hat{y}_i$.
Correlation of Error Terms
We assume that error terms are uncorrelated (if $\epsilon_i$ is positive, that doesn't mean $\epsilon_{i+1}$ is positive as well).
- If there is a correlation, the model will underestimate true standard errors (the CI will be too narrow).
- Plot the residuals as a function of time. If they are positively correlated, we may see tracking, where adjacent residuals have similar values.
Non-Constant Variance of Error Terms
We are also assuming that $\mathrm{Var}(\epsilon_i) = \sigma^2$ is constant. However, sometimes the variances of the error terms increase with the value of the response (giving a funnel shape in the residual plot). This is called heteroscedasticity.
- We can reduce this error by using a concave function of our response $Y$, like $\log Y$ or $\sqrt{Y}$.
- This will result in a greater amount of shrinkage of larger responses.
- Since this is changing the data, we have to interpret it differently. If we use $\log Y$, the model now explains changes in the logarithm of the response, rather than the response itself. This gives more reliable interpretation, as the model parameters will be optimized for the transformed relationship without heteroscedasticity. To then estimate values on the original scale, we would run the input through the model and convert the prediction back, in this case with $e^{\hat{y}}$.
- If each response $y_i$ is an average of $n_i$ raw observations, each with variance $\sigma^2$, then its variance changes to $\sigma_i^2 = \sigma^2 / n_i$, and we can fit the model by weighted least squares with weights $w_i = n_i$.
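Both remedies can be sketched on synthetic heteroscedastic data; note that the weights used for weighted least squares below (proportional to $1/x_i^2$) are an assumption I made for this example, not a rule from the book:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(1, 10, size=n)

# Made-up heteroscedastic data: noise grows with the predictor/response level
y = 5 + 2 * x + rng.normal(scale=0.5 * x, size=n)

X = sm.add_constant(x)

# Option 1: shrink large responses with a concave transform and model log(y)
log_fit = sm.OLS(np.log(y), X).fit()
y_pred = np.exp(log_fit.predict(X))     # convert predictions back to the original scale

# Option 2: weighted least squares, assuming Var(eps_i) grows with x_i^2
wls_fit = sm.WLS(y, X, weights=1.0 / x**2).fit()
print(log_fit.params, wls_fit.params)
```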
Outliers
To find outliers, plot the studentized residuals (each residual $e_i$ divided by its estimated standard error).
- If a studentized residual is greater than 3 in absolute value, it is a possible outlier.
- Usually, just remove that data point. Make sure to be careful, as it might indicate a deficiency with the model (like a missing predictor).
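A quick sketch of flagging outliers via studentized residuals, with one artificially planted outlier in made-up data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, size=100)
y = 1 + 2 * x + rng.normal(size=100)
y[10] += 15                                          # plant an artificial outlier

fit = sm.OLS(y, sm.add_constant(x)).fit()
influence = fit.get_influence()
studentized = influence.resid_studentized_external   # residual / its estimated std. error

# Flag observations whose studentized residual exceeds 3 in absolute value
print(np.where(np.abs(studentized) > 3)[0])
```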
High Leverage Points
These are unusual values for $x_i$ (our data might be modeling family size against income, and we have one family size of 20). It becomes hard to spot some of these points in multiple linear regression, where a point can sit away from the rest of the data without having an extreme value in any single predictor. To find these points, we use the leverage statistic, where high values are suspicious.
For simple linear regression,

$$h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{i'=1}^{n}(x_{i'} - \bar{x})^2}$$
- $h_i$ increases with the distance between $x_i$ and $\bar{x}$.
- $h_i$ is always between $1/n$ and 1.
- Average leverage for all observations is always $(p + 1)/n$. (If an observation's leverage stat is much higher, it's suspicious.)
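Here is the leverage formula computed directly on made-up data with one unusual $x$ value; the cutoff of three times the average leverage is a common rule of thumb, not something from the chapter:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, size=50)
x[0] = 40                                      # plant an unusual predictor value

n = len(x)
# Leverage for simple linear regression: h_i = 1/n + (x_i - xbar)^2 / sum((x_j - xbar)^2)
h = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)

avg_leverage = 2 / n                           # (p + 1)/n with p = 1 predictor
print("average leverage:", avg_leverage)
print("suspicious points:", np.where(h > 3 * avg_leverage)[0])
```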
Collinearity
When we have two or more predictor variables that are closely related to each other, it's hard to separate their individual effects. This causes $\mathrm{SE}(\hat{\beta}_j)$ to increase, causing the t-statistic to decrease. This means the power of the hypothesis test - the probability of correctly detecting a non-zero coefficient - is reduced. A simple way to test for this is to look at a correlation matrix and look for pairs of predictors with high correlation.
However, it is still possible to have collinearity between three or more variables even if no single pair is highly correlated. To detect this, compute the Variance Inflation Factor (VIF), the ratio of the variance of $\hat{\beta}_j$ in the full model to its variance in a model with that predictor alone:

$$\mathrm{VIF}(\hat{\beta}_j) = \frac{1}{1 - R^2_{X_j \mid X_{-j}}}$$
The smallest number possible is 1 (no collinearity at all), and a VIF greater than 5 or 10 indicates issues.
- $\hat{\beta}_j$: the estimated coefficient for the jth predictor variable.
- $R^2_{X_j \mid X_{-j}}$: the $R^2$ value when the jth predictor is regressed on all other predictors (all predictors except $X_j$).
- Example: with collinearity between $X_1$ and $X_2$, we will have a high $R^2_{X_1 \mid X_{-1}}$, so $1 - R^2_{X_1 \mid X_{-1}}$ is a small number and the VIF is large.
To deal with it,
- Drop one of the variables that is redundant.
- Combine variables (could make a new variable by taking the average of both).
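A sketch of computing VIFs on synthetic data where two predictors are nearly collinear (the variable names and noise levels are arbitrary):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(9)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)       # nearly collinear with x1
x3 = rng.normal(size=n)                       # independent predictor

X = sm.add_constant(np.column_stack([x1, x2, x3]))
# VIF_j = 1 / (1 - R^2 of X_j regressed on the other predictors)
for j in range(1, X.shape[1]):                # skip the constant column
    print("VIF for predictor", j, "=", variance_inflation_factor(X, j))
```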
Linear Regression vs KNN
Linear regression is parametric - it assumes a linear form. Meanwhile, K-Nearest Neighbors (KNN) regression is a simple non-parametric method (it doesn't assume a particular form). How it works:
- Pick a $K$ (small for a rough fit, high for a smooth fit).
- Pick a prediction point $x_0$.
- Given a set of data points, KNN looks for the $K$ points closest to $x_0$ (denoted $\mathcal{N}_0$). Then, it estimates $f(x_0)$ with the average of their responses, $\hat{f}(x_0) = \frac{1}{K} \sum_{x_i \in \mathcal{N}_0} y_i$. The optimal value of $K$ depends on the bias-variance trade-off.
- Small $K$ = most flexible - low bias, high variance.
- Large $K$ = changing just one observation won't mess with the prediction too much, but the smoothing may cause bias by masking some of the structure in $f(X)$.
Some other points:
- If the data actually matches a parametric form (linear, for example), a parametric model will perform better.
- In higher dimensions, KNN is worse than linear regression.
- It is effectively a reduction in sample size, as the same observations must now cover extra dimensions.
- This is known as the curse of dimensionality (where an observation has no nearby neighbors).
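To close, a small sketch comparing linear regression and KNN regression on data whose true form really is linear, where the parametric model should win on test error (all data below is synthetic):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, size=(200, 1))
y = 2 + 3 * x[:, 0] + rng.normal(size=200)      # made-up data with a truly linear form

x_test = rng.uniform(0, 10, size=(100, 1))
y_test = 2 + 3 * x_test[:, 0] + rng.normal(size=100)

linear = LinearRegression().fit(x, y)
knn = KNeighborsRegressor(n_neighbors=5).fit(x, y)   # f_hat(x0) = average y of 5 nearest points

# When the truth really is linear, the parametric model should have lower test MSE
mse = lambda model: np.mean((model.predict(x_test) - y_test) ** 2)
print("linear MSE:", mse(linear), "KNN MSE:", mse(knn))
```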