Mathematics/Statistics/Regression Lines
A regression line is effectively an attempt to fit some data to a straight line.
Basically, given some set of data points, you create a line that fits the data as closely as possible. This is accomplished by trying to minimize the total distance between all points and the line itself.
Since a regression line is a standard straight line, it can be represented by a standard y = mx + b equation.
Residuals
Given a single data point, a residual is the vertical distance between that single point and the regression line.
A positive residual value indicates that the data point is somewhere above the regression line. A negative residual value indicates that the data point is somewhere below the regression line. Larger values indicate the point is farther away.
To calculate the residual r for some point at (x, y), we have the following equation for some m and b from our regression line:

r = y - (mx + b)
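To make this concrete, here is a minimal Python sketch of the residual calculation; the function name and the sample slope and intercept values are illustrative placeholders, not from any particular library.

def residual(x, y, m, b):
    """Residual of an observed point (x, y) against the line y = m*x + b."""
    predicted_y = m * x + b
    return y - predicted_y

# A point sitting above the line y = 2x + 1 yields a positive residual:
print(residual(3, 10, m=2, b=1))  # 10 - (2*3 + 1) = 3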
We can then take this a step further and combine all residuals in our dataset to get an overall view of "how closely the regression line matches our dataset":

\sum_{i=1}^{n} (r_i)^2

where r_i is the residual of the i-th of our n data points. Squaring means that negative residual values don't cancel out positive values. It also means points that are farther away from the line end up with more weight than points closer to the line.
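Continuing the sketch above, the sum of squared residuals takes only a few lines of Python; the dataset here is made up purely for illustration.

def sum_of_squared_residuals(points, m, b):
    """Sum of (y_i - (m*x_i + b))**2 over every (x_i, y_i) in the dataset."""
    return sum((y - (m * x + b)) ** 2 for x, y in points)

data = [(1, 3.1), (2, 4.9), (3, 7.2)]
print(sum_of_squared_residuals(data, m=2, b=1))  # ~0.06; smaller means a closer fit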
Least Squares Regression
Least Squares Regression is one of the more popular ways to minimize the sum of all squared residuals in our dataset. We can calculate this regression line with a few steps:
First, we note that the equation we want to calculate is ultimately some form of y = mx + b.

Our regression line will always go through the point (x̄, ȳ), which denotes the means of our x and y. So it's safe to use that as the x and y in our equation. From there, we can move to calculating our m and b.
Calculating our M and B
Next, we can calculate the line slope with the equation

m = r \frac{S_y}{S_x}

where

- r is the correlation coefficient for our dataset.
- S_y is the standard deviation of our y.
- S_x is the standard deviation of our x.
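As a sketch, assuming r, S_x, and S_y have already been computed for the dataset, the slope calculation is a direct translation of the formula:

def slope(r, s_x, s_y):
    """Slope of the least squares regression line: m = r * (S_y / S_x)."""
    return r * (s_y / s_x)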
We can then proceed to get our b with simple algebra:

b = \bar{y} - m\bar{x}

where

- x̄ is the mean of our x.
- ȳ is the mean of our y.
- m is the m we just calculated, above.
Putting all of this together, we can replace the m and b in our y = mx + b, which will give us a line equation for our regression.
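Putting the entire recipe into runnable form, here is a short Python sketch using the standard library's statistics module (statistics.correlation requires Python 3.10 or newer); the dataset is invented for illustration.

from statistics import mean, stdev, correlation  # correlation: Python 3.10+

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

r = correlation(xs, ys)          # correlation coefficient of the dataset
m = r * (stdev(ys) / stdev(xs))  # slope: m = r * (S_y / S_x)
b = mean(ys) - m * mean(xs)      # intercept: b = y-bar - m * x-bar

print(f"y = {m:.3f}x + {b:.3f}")

# Sanity check: the regression line always passes through (x-bar, y-bar).
assert abs((m * mean(xs) + b) - mean(ys)) < 1e-9

Note how the intercept step directly encodes the earlier observation that the line passes through the point of means.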
For further explanation, see this Khan Academy video: https://www.khanacademy.org/math/statistics-probability/describing-relationships-quantitative-data/regression-library/v/calculating-the-equation-of-a-regression-line