10.2. OLS Linear Regression#

We begin with the familiar topic of ordinary least squares (OLS) linear regression. The table below shows an excerpt of Chicago Public School data for 2011–2012 from the Chicago Data Portal. One expects a higher average ACT score to be associated with a higher percentage of college eligibility.

| School Zip Code | Average ACT Score | College Eligibility (%) |
|-----------------|-------------------|-------------------------|
| 60605           | 25.1              | 80.7                    |
| 60607           | 27                | 91.6                    |
| 60660           | 16.5              | 14.2                    |

Source: Chicago Data Portal, https://data.cityofchicago.org/Education/Chicago-Public-Schools-Progress-Report-Cards-2011-/9xs2-f89t

The figure below shows a scatterplot of the $n = 83$ data points, a few of which are reproduced in the table above, together with the OLS regression line.

[Figure: ../../../_images/fig110.png — scatterplot of average ACT score versus college eligibility for $n = 83$ Chicago public schools, with the OLS regression line]
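For readers who want to reproduce a plot of this kind, here is a minimal sketch in Python (assuming NumPy and Matplotlib are available) that uses only the three schools excerpted in the table above; the published figure is based on all $n = 83$ schools.

```python
import numpy as np
import matplotlib.pyplot as plt

# Excerpt of the 2011-2012 Chicago Public Schools data shown in the table above.
act = np.array([25.1, 27.0, 16.5])          # average ACT score
eligibility = np.array([80.7, 91.6, 14.2])  # college eligibility (%)

# Fit the OLS line y = c0 + c1 * x (np.polyfit returns the highest-degree coefficient first).
c1, c0 = np.polyfit(act, eligibility, deg=1)

xs = np.linspace(act.min(), act.max(), 100)
plt.scatter(act, eligibility, label="schools")
plt.plot(xs, c0 + c1 * xs, label=f"OLS fit: y = {c0:.1f} + {c1:.1f}x")
plt.xlabel("Average ACT score")
plt.ylabel("College eligibility (%)")
plt.legend()
plt.show()
```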

10.2.1. Least-Squares Solutions#

Consider a collection of $n$ data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ in $\mathbb{R}^2$. We seek the line $y = c_0 + c_1 x$ that best fits these data, where $c_0$ is the $y$-intercept of the line and $c_1$ is its slope. As shown in the figure below, the OLS loss function $J(c_0, c_1)$ sums the squared vertical separations between the data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ and the line $y = c_0 + c_1 x$.

[Figure: ../../../_images/fig22.png — vertical separations between the data points and the line $y = c_0 + c_1 x$]

If the $n$ data points are collinear, then there exists an exact solution $(c_0, c_1)$ to the system

$$
\begin{aligned}
c_0 + c_1 x_1 &= y_1,\\
c_0 + c_1 x_2 &= y_2,\\
&\;\;\vdots\\
c_0 + c_1 x_n &= y_n.
\end{aligned}
$$

In matrix form, this system of equations becomes

$$
A\mathbf{c} = \mathbf{y},
$$

where

$$
A = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \qquad
\mathbf{c} = \begin{pmatrix} c_0 \\ c_1 \end{pmatrix}, \qquad \text{and} \qquad
\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.
$$

In general, this linear system of $n > 2$ equations in the two unknowns $(c_0, c_1)$ will not have an exact solution. Instead, we fit a line to the data using a least-squares solution, one that solves the following optimization problem:

10.2.2. OLS Linear Regression Optimization Problem#


Find $(c_0, c_1)$ which minimizes the loss function $J(c_0, c_1)$ defined as

$$
J(c_0, c_1) = \frac{1}{2}\sum_{i=1}^{n}\bigl(y_i - (c_0 + c_1 x_i)\bigr)^2 = \frac{1}{2}\,\lVert \mathbf{y} - A\mathbf{c} \rVert^2.
$$

(The factor of $1/2$ multiplying the sum is introduced to simplify the theoretical analysis of the loss function.)

The loss function $J$ is zero when the points are collinear and lie on the line $y = c_0 + c_1 x$; otherwise, $J$ is positive, since it is half the sum of the squared vertical separations between the data points and the line $y = c_0 + c_1 x$, as shown in the previous figure.
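As a concrete illustration (a sketch; the function names below are our own, not from any library), the loss function can be written directly in NumPy in either of its two equivalent forms:

```python
import numpy as np

def ols_loss(c0, c1, x, y):
    """J(c0, c1) = one-half the sum of squared vertical separations."""
    residuals = y - (c0 + c1 * x)
    return 0.5 * np.sum(residuals ** 2)

def ols_loss_matrix(c, A, y):
    """Equivalent matrix form: J(c) = (1/2) ||y - A c||^2."""
    return 0.5 * np.linalg.norm(y - A @ c) ** 2

# The two forms agree (up to floating-point roundoff) for any data and any candidate line:
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
A = np.column_stack([np.ones_like(x), x])
print(ols_loss(0.5, 1.0, x, y), ols_loss_matrix(np.array([0.5, 1.0]), A, y))
```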

10.2.3. Minimizing the OLS Loss Function via Normal Equations#

Let $A$ be an $n \times 2$ matrix of real numbers, and let the linear transformation $L: \mathbb{R}^2 \to \mathbb{R}^n$ be defined by $L(\mathbf{c}) = A\mathbf{c}$, where $\mathbf{c}$ is a $2 \times 1$ column vector. The range of $A$ is the set of all $n \times 1$ vectors defined by

$$
\operatorname{range}(A) = \{\, A\mathbf{c} \mid \mathbf{c} \in \mathbb{R}^2 \,\}.
$$

By choosing $\mathbf{c} = (1, 0)^T$, we see that the first column of $A$ (denoted $\mathbf{a}_1$) is in the range of $A$. By choosing $\mathbf{c} = (0, 1)^T$, we see that the second column of $A$ (denoted $\mathbf{a}_2$) is also in the range of $A$.

Minimizing the loss function $J(\mathbf{c}) = \frac{1}{2}\lVert \mathbf{y} - A\mathbf{c} \rVert^2$ is equivalent to minimizing $\lVert \mathbf{y} - A\mathbf{c} \rVert$, the latter being the distance between $\mathbf{y}$ and an arbitrary vector $A\mathbf{c}$ in the range of $A$ (see the figure below). If there is no exact solution (i.e., $\mathbf{y}$ is not in the range of $A$), this minimization is accomplished by choosing $\hat{\mathbf{c}}$ such that the vector $\mathbf{y} - A\hat{\mathbf{c}}$ is orthogonal to the range of $A$. That is, for $\hat{\mathbf{c}}$ to be a least-squares solution to $A\mathbf{c} = \mathbf{y}$, the vector $\mathbf{y} - A\hat{\mathbf{c}}$ must be orthogonal (perpendicular) to every vector in the range of $A$. Since both columns of $A$ (namely, $\mathbf{a}_1$ and $\mathbf{a}_2$) are in the range of $A$, it follows that $\mathbf{a}_i \cdot (\mathbf{y} - A\hat{\mathbf{c}}) = \mathbf{a}_i^T(\mathbf{y} - A\hat{\mathbf{c}}) = 0$ for each column vector $\mathbf{a}_i$ of the matrix $A$.

[Figure: ../../../_images/fig32.png — the least-squares solution chooses $A\hat{\mathbf{c}}$ so that $\mathbf{y} - A\hat{\mathbf{c}}$ is orthogonal to the range of $A$]

Hence,

$$
\begin{aligned}
A^T(\mathbf{y} - A\hat{\mathbf{c}}) &= \mathbf{0}\\
A^T\mathbf{y} - A^T A\hat{\mathbf{c}} &= \mathbf{0}\\
A^T\mathbf{y} &= A^T A\hat{\mathbf{c}}.
\end{aligned}
$$

The last equation is called the normal equation for the system $A\mathbf{c} = \mathbf{y}$. Solving the normal equations, one obtains the optimal values of $c_0$ and $c_1$.
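In code, the normal equations amount to forming $A^T A$ and $A^T\mathbf{y}$ and solving the resulting $2 \times 2$ linear system. The sketch below (a helper of our own, not part of any library) does exactly that with NumPy; see the later discussion for why this is not always the best numerical choice.

```python
import numpy as np

def ols_via_normal_equations(x, y):
    """Fit y = c0 + c1 * x by solving the normal equations A^T A c = A^T y."""
    A = np.column_stack([np.ones_like(x), x])   # design matrix: a column of ones and the x-values
    return np.linalg.solve(A.T @ A, A.T @ y)    # returns the array [c0, c1]

# Example usage with arbitrary data:
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])
print(ols_via_normal_equations(x, y))  # approximately [1.1 1.1]
```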

Example 2.1.#


Consider the data $(-1, 1)$, $(0, 0)$, $(1, 2)$, $(2, 3)$. Use the normal equations to find the OLS regression line for the data.


Solution.

The system is

$$
c_0 - c_1 = 1, \qquad c_0 = 0, \qquad c_0 + c_1 = 2, \qquad c_0 + 2c_1 = 3;
$$

or, in matrix form,

$$
\begin{pmatrix} 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}
=
\begin{pmatrix} 1 \\ 0 \\ 2 \\ 3 \end{pmatrix}.
$$

The normal equations are

$$
\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & 0 & 1 & 2 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \\ 2 \\ 3 \end{pmatrix}
=
\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & 0 & 1 & 2 \end{pmatrix}
\begin{pmatrix} 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \end{pmatrix},
\tag{10.1}
$$

or

$$
\begin{pmatrix} 6 \\ 7 \end{pmatrix}
=
\begin{pmatrix} 4 & 2 \\ 2 & 6 \end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}.
$$

The least-squares solution is therefore $c_0 = 11/10$ and $c_1 = 4/5$, so the OLS regression line is $y = \frac{11}{10} + \frac{4}{5}x$.
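As a quick numerical check of this result (a sketch using NumPy):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0],
              [1.0,  2.0]])
y = np.array([1.0, 0.0, 2.0, 3.0])

# Solve the normal equations A^T A c = A^T y from equation (10.1).
c_hat = np.linalg.solve(A.T @ A, A.T @ y)
print(c_hat)  # approximately [1.1 0.8], i.e. c0 = 11/10 and c1 = 4/5
```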

10.2.4. Equivalence of Gradient-Based Optimization#

Note that the normal equations are equivalent to gradient-based minimization of the OLS linear regression loss function $J(c_0, c_1) = \frac{1}{2}\sum_{i=1}^{n}\bigl(y_i - (c_0 + c_1 x_i)\bigr)^2$:

$$
A^T(\mathbf{y} - A\hat{\mathbf{c}})
=
\begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \end{pmatrix}
\begin{pmatrix} y_1 - (c_0 + c_1 x_1) \\ y_2 - (c_0 + c_1 x_2) \\ \vdots \\ y_n - (c_0 + c_1 x_n) \end{pmatrix}
=
\begin{pmatrix} \sum_{i=1}^{n}\bigl(y_i - (c_0 + c_1 x_i)\bigr) \\ \sum_{i=1}^{n} x_i\bigl(y_i - (c_0 + c_1 x_i)\bigr) \end{pmatrix}
=
\begin{pmatrix} -\dfrac{\partial J}{\partial c_0} \\[1ex] -\dfrac{\partial J}{\partial c_1} \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

The normal equations are equivalent to setting both partial derivatives of $J$ equal to zero, as is required to minimize the loss function $J(c_0, c_1)$.

Example 2.2.#


For the data points in Example 2.1, show how gradient-based optimization of the loss function $J$ gives the same values for $c_0$ and $c_1$.


Solution:

The loss function $J(c_0, c_1)$ is

$$
J(c_0, c_1) = \frac{1}{2}\Bigl[\bigl(1 - (c_0 - c_1)\bigr)^2 + (0 - c_0)^2 + \bigl(2 - (c_0 + c_1)\bigr)^2 + \bigl(3 - (c_0 + 2c_1)\bigr)^2\Bigr].
$$

To minimize $J$, we solve the linear system $\nabla J(c_0, c_1) = \mathbf{0}$:

$$
\frac{\partial J}{\partial c_0} = 4c_0 + 2c_1 - 6 = 0, \qquad
\frac{\partial J}{\partial c_1} = 2c_0 + 6c_1 - 7 = 0.
$$

Solving this system, we obtain $c_0 = 11/10$ and $c_1 = 4/5$.

The system used to find these critical values is equivalent to:

$$
\begin{pmatrix} -\dfrac{\partial J}{\partial c_0} \\[1ex] -\dfrac{\partial J}{\partial c_1} \end{pmatrix}
=
\begin{pmatrix} \bigl(1 - (c_0 - c_1)\bigr) + (0 - c_0) + \bigl(2 - (c_0 + c_1)\bigr) + \bigl(3 - (c_0 + 2c_1)\bigr) \\ -\bigl(1 - (c_0 - c_1)\bigr) + 0\,(0 - c_0) + \bigl(2 - (c_0 + c_1)\bigr) + 2\bigl(3 - (c_0 + 2c_1)\bigr) \end{pmatrix}
=
\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & 0 & 1 & 2 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \\ 2 \\ 3 \end{pmatrix}
-
\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & 0 & 1 & 2 \end{pmatrix}
\begin{pmatrix} 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\tag{10.2}
$$

Gradient-based optimization, as expressed in equation (10.2), is indeed equivalent to the normal equations in (10.1).
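To see this equivalence numerically, the sketch below runs a plain gradient-descent iteration (a method discussed briefly in the next paragraph), with an ad hoc step size and iteration count, on the Example 2.1 data; it converges to the same minimizer obtained from the normal equations.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0],
              [1.0,  2.0]])
y = np.array([1.0, 0.0, 2.0, 3.0])

c = np.zeros(2)                      # start at (c0, c1) = (0, 0)
learning_rate = 0.1
for _ in range(2000):
    grad = -A.T @ (y - A @ c)        # gradient of J(c) = (1/2) ||y - A c||^2
    c -= learning_rate * grad
print(c)  # approaches [1.1 0.8], matching the normal-equation solution
```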

To say that solving the normal equations is mathematically equivalent to gradient-based optimization of the OLS loss function does not imply that the normal equations offer the best numerical method for minimizing the loss function [Epperly 2022]. An assessment of different numerical approaches, such as gradient descent and its variants including the Adam method [Sun et al. 2019], matrix factorizations such as the SVD or QR factorization [Aggarwal 2020], and mean-centering and standardizing the data [Toth 2020], is beyond the scope of this Module.
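As one practical aside (our own example, not drawn from the references above), NumPy's `np.linalg.lstsq` solves the same least-squares problem using an SVD-based LAPACK routine, which avoids explicitly forming $A^T A$ and is generally better behaved numerically:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0],
              [1.0,  2.0]])
y = np.array([1.0, 0.0, 2.0, 3.0])

# SVD-based least squares: no explicit A^T A is formed.
c_hat, residuals, rank, sing_vals = np.linalg.lstsq(A, y, rcond=None)
print(c_hat)  # approximately [1.1 0.8], the same least-squares solution
```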

In addition to facilitating numerical analysis, mean-centering simplifies the linear algebra approach. Mean-centered data is obtained from a collection of data points by replacing the original dataset $(x_i, y_i)$ with the data points $(x_i - \bar{x},\, y_i - \bar{y})$, $i = 1, \ldots, n$, where $\bar{x}$ and $\bar{y}$ are the respective means of the $x$- and $y$-coordinates of the original data. One can show that for mean-centered data, $\hat{c}_0 = 0$, and the regression line is $y = \hat{c}_1 x$. Let $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$. For mean-centered data, the regression vector $\hat{\mathbf{y}} = \hat{c}_1\mathbf{x}$ is obtained by projecting the vector $\mathbf{y}$ directly onto the vector $\mathbf{x}$, as shown in the figure below.

[Figure: ../../../_images/fig42.png — for mean-centered data, the regression vector $\hat{c}_1\mathbf{x}$ is the orthogonal projection of $\mathbf{y}$ onto $\mathbf{x}$]
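A short sketch of this computation, reusing the Example 2.1 data: since projecting $\mathbf{y}$ onto $\mathbf{x}$ gives the coefficient $(\mathbf{x}\cdot\mathbf{y})/(\mathbf{x}\cdot\mathbf{x})$, the slope follows from one dot-product ratio, and refitting with an intercept confirms that $\hat{c}_0 = 0$ after centering.

```python
import numpy as np

# Data from Example 2.1.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 2.0, 3.0])

# Mean-center both coordinates.
xc = x - x.mean()
yc = y - y.mean()

# Projecting yc onto xc gives the slope of the regression line y = c1 * x.
c1_hat = (xc @ yc) / (xc @ xc)
print(c1_hat)  # 0.8, the same slope as in Example 2.1

# Refitting with an intercept confirms c0_hat = 0 for mean-centered data.
A = np.column_stack([np.ones_like(xc), xc])
print(np.linalg.solve(A.T @ A, A.T @ yc))  # approximately [0.  0.8]
```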

The variation equation states that the total variation in the $y$-values is the sum of the explained variation (the variation in the corresponding $y$-values on the regression line) and the unexplained variation (the sum of squared residuals, or twice the value of the loss function $J$). For mean-centered data, the total variation is $\lVert \mathbf{y} \rVert^2$, the explained variation is $\lVert \hat{c}_1\mathbf{x} \rVert^2$, and the unexplained variation is $\lVert \mathbf{y} - \hat{c}_1\mathbf{x} \rVert^2$. In other words, the variation equation for mean-centered data is simply the Pythagorean theorem.
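A numerical check of the variation equation (a sketch, using the mean-centered Example 2.1 data from the previous code block):

```python
import numpy as np

# Mean-centered Example 2.1 data (x and y minus their respective means).
xc = np.array([-1.5, -0.5, 0.5, 1.5])
yc = np.array([-0.5, -1.5, 0.5, 1.5])
c1_hat = (xc @ yc) / (xc @ xc)                        # slope of the regression line

total = np.linalg.norm(yc) ** 2                       # total variation ||y||^2
explained = np.linalg.norm(c1_hat * xc) ** 2          # explained variation ||c1 x||^2
unexplained = np.linalg.norm(yc - c1_hat * xc) ** 2   # unexplained variation, i.e. 2J
print(total, explained + unexplained)                 # both approximately 5.0
```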

10.2.5. Exercises#


  1. Consider the data $(1, 0)$, $(4, 5)$, $(7, 8)$. Use the normal equations to find the least-squares line $y = a + bx$ that best fits the data.

  2. Consider the data $(-1, 1)$, $(0, 0)$, $(1, 2)$, $(2, 3)$. Use the normal equations to find the least-squares solution for the parabola $y = a + bx + cx^2$ that best fits the data.