**Multiple Regression**

Chapter Goals

**After completing this chapter, you should be able to:**

§ Apply multiple regression analysis to business decision-making situations

§ Analyze and interpret the computer output for a multiple regression model

§ Perform a hypothesis test for all regression coefficients or for a subset of coefficients

§ Fit and interpret nonlinear regression models

§ Incorporate qualitative variables into the regression model by using dummy variables

§ Discuss model specification and analyze residuals

The Multiple Regression Model

Multiple Regression Equation

§ The population model with K independent variables:

Y = β_{0} + β_{1}X_{1} + β_{2}X_{2} + . . . + β_{K}X_{K} + ε

Standard Multiple Regression Assumptions

§ The values x_{i} and the error terms ε_{i} are independent

§ The error terms are random variables with mean 0 and a constant variance, σ^{2}.

(The constant variance property is called homoscedasticity)

Standard Multiple Regression Assumptions

§ The random error terms, ε_{i}, are not correlated with one another, so that E[ε_{i}ε_{j}] = 0 for all i ≠ j

§ It is not possible to find a set of numbers, c_{0}, c_{1}, . . . , c_{K}, such that c_{0} + c_{1}x_{1i} + . . . + c_{K}x_{Ki} = 0 for every observation

(This is the property of no linear relation for the X_{j}’s)

Example:

2 Independent Variables

§ A distributor of frozen dessert pies wants to evaluate factors thought to influence demand

§ Dependent variable: Pie sales (units per week)

§ Independent variables: Price (in $) and Advertising (in $100’s)

§ Data are collected for 15 weeks

Pie Sales Example

Sales = b_{0} + b_{1} (Price) + b_{2} (Advertising)
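As a sketch of how the slide deck's Excel output arrives at these coefficients, the following estimates b_{0}, b_{1}, b_{2} by least squares with NumPy. The weekly figures below are illustrative stand-ins, not necessarily the slide's actual data:

```python
import numpy as np

# Hypothetical 15 weeks of data: sales (units), price ($), advertising ($100s)
sales = np.array([350, 460, 350, 430, 350, 380, 430, 470,
                  450, 490, 340, 300, 440, 450, 300], dtype=float)
price = np.array([5.5, 7.5, 8.0, 8.0, 6.8, 7.5, 4.5, 6.4,
                  7.0, 5.0, 7.2, 7.9, 5.9, 5.0, 7.0])
advertising = np.array([3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7,
                        3.5, 4.0, 3.5, 3.2, 4.0, 3.5, 2.7])

# Design matrix: a column of 1s for the intercept, then the two predictors
X = np.column_stack([np.ones_like(price), price, advertising])

# Least-squares estimates [b0, b1 (price), b2 (advertising)]
b, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(np.round(b, 3))
```

The same numbers Excel's Tools / Data Analysis / Regression would report as the coefficient column.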

Estimating a Multiple Linear

Regression Equation

§ Excel will be used to generate the coefficients and measures of goodness of fit for multiple regression

§ Excel:

§ Tools / Data Analysis… / Regression

§ PHStat:

§ PHStat / Regression / Multiple Regression…

Multiple Regression Output

The Multiple Regression Equation

Coefficient of Determination, R^{2}

§ Reports the proportion of total variation in y explained by all x variables taken together

§ This is the ratio of the explained variability to total sample variability

Coefficient of Determination, R^{2}
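In symbols, R^{2} = SSR / SST = 1 – SSE / SST. A minimal NumPy sketch on made-up data:

```python
import numpy as np

# Illustrative data: y with two predictors (made-up numbers)
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y  = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9])

X = np.column_stack([np.ones_like(x1), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b

sst = np.sum((y - y.mean()) ** 2)   # total sample variability
sse = np.sum((y - y_hat) ** 2)      # unexplained variability
r_squared = 1.0 - sse / sst         # proportion of variation in y explained
print(round(r_squared, 4))
```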

Estimation of Error Variance

§ Consider the population regression model

§ The unbiased estimate of the variance of the errors is

where

§ The square root of the variance, s_{e} , is called the standard error of the estimate
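A sketch of the computation on made-up data, using the unbiased estimate s_{e}^{2} = SSE / (n – K – 1):

```python
import numpy as np

# Same style of illustrative data: n = 6 observations, K = 2 predictors
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y  = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9])

X = np.column_stack([np.ones_like(x1), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

n, K = len(y), 2
sse = np.sum((y - X @ b) ** 2)
s_e2 = sse / (n - K - 1)   # unbiased estimate of the error variance
s_e = np.sqrt(s_e2)        # standard error of the estimate
print(round(s_e, 4))
```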

Standard Error, s_{e}

Adjusted Coefficient of Determination

§ R^{2} never decreases when a new X variable is added to the model, even if the new variable is not an important predictor variable

§ This can be a disadvantage when comparing models

§ What is the net effect of adding a new variable?

§ We lose a degree of freedom when a new X variable is added

§ Did the new X variable add enough explanatory power to offset the loss of one degree of freedom?

Adjusted Coefficient of Determination

§ Used to correct for the fact that adding non-relevant independent variables will still reduce the error sum of squares

Adjusted R^{2} = 1 – [SSE / (n – K – 1)] / [SST / (n – 1)]

(where n = sample size, K = number of independent variables)

§ Adjusted R^{2} provides a better comparison between multiple regression models with different numbers of independent variables
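The adjustment can equivalently be written as adjusted R^{2} = 1 – (1 – R^{2})(n – 1)/(n – K – 1); a small sketch:

```python
def adjusted_r2(r2, n, K):
    """Adjusted R^2: penalizes extra predictors via the lost degrees of freedom."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - K - 1)

# Example: R^2 = 0.90 with n = 15 weeks and K = 2 independent variables
print(round(adjusted_r2(0.90, 15, 2), 4))  # → 0.8833, smaller than 0.90
```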

§ Penalizes excessive use of unimportant independent variables

§ Smaller than R^{2}

Coefficient of Multiple Correlation

§ The coefficient of multiple correlation is the correlation between the predicted value and the observed value of the dependent variable

§ Is the square root of the multiple coefficient of determination

§ Used as another measure of the strength of the linear relationship between the dependent variable and the independent variables

§ Comparable to the correlation between Y and X in simple regression
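A quick numerical check of the identity, on made-up data: the correlation between y and the fitted values equals the square root of R^{2}.

```python
import numpy as np

# Illustrative data with two predictors
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y  = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9])

X = np.column_stack([np.ones_like(x1), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
R_multiple = np.corrcoef(y, y_hat)[0, 1]  # corr(observed, predicted)

# The two agree: the coefficient of multiple correlation is sqrt(R^2)
print(round(R_multiple, 6), round(np.sqrt(r2), 6))
```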

Evaluating Individual Regression Coefficients

§ Use t-tests for individual coefficients

§ Shows if a specific independent variable is conditionally important

§ Hypotheses:

§ H_{0}: β_{j} = 0 (no linear relationship)

§ H_{1}: β_{j} ≠ 0 (linear relationship does exist between x_{j} and y)

Evaluating Individual Regression Coefficients

H_{0}: β_{j} = 0 (no linear relationship)

H_{1}: β_{j} ≠ 0 (linear relationship does exist between x_{j} and y)

Test Statistic:

t = b_{j} / s_{b_{j}} (df = n – K – 1)
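A sketch of computing the t statistics by hand, on made-up data; the coefficient standard errors come from the estimated covariance matrix s_{e}^{2}(X′X)^{–1}:

```python
import numpy as np

# Illustrative data (not the slide's): y with two predictors
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y  = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9])

X = np.column_stack([np.ones_like(x1), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

n, K = len(y), 2
sse = np.sum((y - X @ b) ** 2)
s_e2 = sse / (n - K - 1)                 # estimated error variance

# Estimated covariance matrix of the coefficients: s_e^2 (X'X)^-1
cov_b = s_e2 * np.linalg.inv(X.T @ X)
s_b = np.sqrt(np.diag(cov_b))            # standard errors of b0, b1, b2

t_stats = b / s_b                        # t_j = b_j / s_bj, df = n - K - 1
print(np.round(t_stats, 3))
```

These match the t Stat column of Excel's regression output.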

Evaluating Individual Regression Coefficients

H_{0}: β_{j} = 0

H_{1}: β_{j} ≠ 0

Confidence Interval Estimate for the Slope

b_{j} ± t_{n–K–1, α/2} s_{b_{j}}


Test on All Coefficients

§ F-Test for Overall Significance of the Model

§ Shows if there is a linear relationship between all of the X variables considered together and Y

§ Use F test statistic

§ Hypotheses:

H_{0}: β_{1} = β_{2} = … = β_{K} = 0 (no linear relationship)

H_{1}: at least one β_{i} ≠ 0 (at least one independent variable affects Y)

F-Test for Overall Significance

§ Test statistic:

F = MSR / MSE = (SSR / K) / (SSE / (n – K – 1))

where F has K (numerator) and (n – K – 1) (denominator) degrees of freedom

§ The decision rule is to reject H_{0} if F exceeds the critical value F_{K, n–K–1, α}
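A sketch of the overall F statistic on made-up data (the same illustrative numbers used above):

```python
import numpy as np

# Illustrative data with two predictors
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y  = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.9])

X = np.column_stack([np.ones_like(x1), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

n, K = len(y), 2
sst = np.sum((y - y.mean()) ** 2)
sse = np.sum((y - X @ b) ** 2)
ssr = sst - sse                        # explained sum of squares

F = (ssr / K) / (sse / (n - K - 1))    # MSR / MSE
print(round(F, 2))
# Reject H0 at level alpha if F exceeds the critical value F_{K, n-K-1, alpha}
```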

F-Test for Overall Significance


H_{0}: β_{1} = β_{2} = 0

H_{1}: β_{1} and β_{2} not both zero

α = .05

df_{1} = 2 df_{2} = 12

Tests on a Subset of Regression Coefficients

§ Consider a multiple regression model involving variables x_{j} and z_{j}, and the null hypothesis that the z variable coefficients are all zero:

H_{0}: all of the coefficients on the z_{j} variables are 0

Tests on a Subset of Regression Coefficients

§ Goal: compare the error sum of squares for the complete model with the error sum of squares for the restricted model

§ First run a regression for the complete model and obtain SSE

§ Next run a restricted regression that excludes the z variables (the number of variables excluded is r) and obtain the restricted error sum of squares SSE(r)

§ Compute the F statistic and apply the decision rule for a significance level α
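The steps above can be sketched in NumPy on made-up data; the partial F statistic is F = [(SSE(r) – SSE) / r] / [SSE / (n – K – 1)]:

```python
import numpy as np

# Illustrative: the full model has x and z; test H0: z coefficient is zero
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
z = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0, 8.0, 7.0])
y = np.array([2.9, 4.2, 6.8, 8.1, 11.2, 12.1, 15.0, 15.9])

def sse_of(X, y):
    """Error sum of squares of the least-squares fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ b) ** 2)

ones = np.ones_like(x)
sse_full  = sse_of(np.column_stack([ones, x, z]), y)  # complete model: SSE
sse_restr = sse_of(np.column_stack([ones, x]), y)     # z excluded: SSE(r)

n, K, r = len(y), 2, 1   # K predictors in the full model, r variables excluded
F = ((sse_restr - sse_full) / r) / (sse_full / (n - K - 1))
print(round(F, 3))
```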

Prediction

§ Given a population regression model

§ Then, given a new observation of a data point (x_{1,n+1}, x_{2,n+1}, . . . , x_{K,n+1}), the best linear unbiased forecast of y_{n+1} is

ŷ_{n+1} = b_{0} + b_{1}x_{1,n+1} + b_{2}x_{2,n+1} + . . . + b_{K}x_{K,n+1}

§ It is risky to forecast for new X values outside the range of the data used to estimate the model coefficients, because we do not have data to support that the linear model extends beyond the observed range.
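A sketch of the forecast step, using illustrative coefficients (not the slide's fitted values) in the pie-sales form of the equation:

```python
# Illustrative fitted coefficients, not the slide's actual estimates
b0, b1, b2 = 300.0, -25.0, 75.0

def predict_sales(price, advertising):
    """Best linear unbiased forecast: plug the new x values into the equation."""
    return b0 + b1 * price + b2 * advertising

# Forecast for price = $5.50 and advertising = $350 (i.e., 3.5 in $100s)
print(predict_sales(5.50, 3.5))  # → 425.0
```

Note that $5.50 and 3.5 should lie inside the range of the data used to fit the model.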

Using The Equation to Make Predictions

Predictions in PHStat

§ PHStat | Regression | Multiple Regression …

Predictions in PHStat

Input values

Residuals in Multiple Regression

Nonlinear Regression Models

§ The relationship between the dependent variable and an independent variable may not be linear

§ Can review the scatter diagram to check for non-linear relationships

§ Example: Quadratic model

§ The second independent variable is the square of the first variable

Quadratic Regression Model

§ Model: Y_{i} = β_{0} + β_{1}X_{i} + β_{2}X_{i}^{2} + ε_{i}

§ where:

β_{0} = Y intercept

β_{1} = regression coefficient for linear effect of X on Y

β_{2} = regression coefficient for quadratic effect of X on Y

ε_{i} = random error in Y for observation i
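Fitting a quadratic model is ordinary multiple regression with X and X^{2} as the two predictors. A sketch with made-up purity-vs-time data showing clear curvature:

```python
import numpy as np

# Illustrative purity-vs-filter-time data (made up, roughly quadratic)
time   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
purity = np.array([2.0, 5.1, 10.2, 16.9, 26.1, 37.0, 50.2])

# Quadratic model: the second predictor is the square of the first
X = np.column_stack([np.ones_like(time), time, time ** 2])
b, *_ = np.linalg.lstsq(X, purity, rcond=None)
print(np.round(b, 3))  # [b0, b1 (linear), b2 (quadratic)]
```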

Linear vs. Nonlinear Fit

Quadratic Regression Model

Testing for Significance: Quadratic Effect

§ Testing the Quadratic Effect

§ Compare the linear regression estimate, ŷ = b_{0} + b_{1}x, with the quadratic regression estimate, ŷ = b_{0} + b_{1}x + b_{2}x^{2}

§ Hypotheses

§ H_{0}: β_{2} = 0 (The quadratic term does not improve the model)

§ H_{1}: β_{2} ≠ 0 (The quadratic term improves the model)

Testing for Significance: Quadratic Effect

§ Testing the Quadratic Effect

Hypotheses

§ H_{0}: β_{2} = 0 (The quadratic term does not improve the model)

§ H_{1}: β_{2} ≠ 0 (The quadratic term improves the model)

§ The test statistic is t = b_{2} / s_{b_{2}}, with df = n – 3

Testing for Significance: Quadratic Effect

§ Testing the Quadratic Effect

Compare R^{2} from simple regression to

R^{2} from the quadratic model

§ If R^{2} from the quadratic model is larger than R^{2} from the simple model, then the quadratic model is a better model

Example: Quadratic Model

§ Purity increases as filter time increases:

Example: Quadratic Model

§ Simple regression results:

y = -11.283 + 5.985 Time

Example: Quadratic Model

The Log Transformation

§ Original multiplicative model: Y = β_{0}X_{1}^{β_{1}}X_{2}^{β_{2}}ε

§ Transformed multiplicative model: log(Y) = log(β_{0}) + β_{1}log(X_{1}) + β_{2}log(X_{2}) + log(ε)

Interpretation of coefficients

For the multiplicative model:

§ When both dependent and independent variables are logged:

§ The coefficient of the independent variable X_{k} can be interpreted as follows: a 1 percent change in X_{k} leads to an estimated b_{k} percent change in the average value of Y

§ b_{k} is the elasticity of Y with respect to a change in X_{k}
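A sketch of recovering an elasticity from logged data. The data are simulated with a known true elasticity of 1.5 (a made-up value), so the log-log regression should estimate approximately that:

```python
import numpy as np

# Multiplicative model y = a * x^beta * error becomes linear after logging:
# log y = log a + beta * log x, and beta is the elasticity of y w.r.t. x.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x ** 1.5 * np.exp(rng.normal(0.0, 0.01, size=x.size))

# Regress log(y) on log(x) with an intercept
X = np.column_stack([np.ones_like(x), np.log(x)])
b, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print(round(b[1], 2))  # estimated elasticity, close to the true 1.5
```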

Dummy Variables

§ A dummy variable is a categorical independent variable with two levels:

§ yes or no, on or off, male or female

§ recorded as 0 or 1

§ Regression intercepts are different if the variable is significant

§ Assumes equal slopes for other variables

§ If more than two levels, the number of dummy variables needed is (number of levels – 1)
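A sketch of including a 0/1 dummy in the design matrix, with made-up data where a hypothetical "holiday week" dummy shifts the intercept upward:

```python
import numpy as np

# Illustrative data: a two-level variable (holiday week: 1 = yes, 0 = no)
holiday = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0])  # dummy variable
price   = np.array([5.0, 5.0, 6.0, 6.5, 5.5, 7.0, 6.0, 7.5])
sales   = np.array([400, 460, 380, 370, 450, 350, 440, 340], dtype=float)

# The dummy enters like any other predictor; its coefficient shifts the
# intercept for holiday weeks, while the slope on price is assumed equal
X = np.column_stack([np.ones_like(price), price, holiday])
b, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(np.round(b, 2))  # [b0, b1 (price), b2 (holiday shift)]
```

For a variable with m levels, the same idea uses m – 1 dummy columns.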

Dummy Variable Example

Dummy Variable Example

Interpreting the

Dummy Variable Coefficient

Interaction Between Explanatory Variables

§ Hypothesizes interaction between pairs of x variables

§ Response to one x variable may vary at different levels of another x variable

§ Contains two-way cross product terms, for example

y = β_{0} + β_{1}x_{1} + β_{2}x_{2} + β_{3}x_{1}x_{2} + ε

Effect of Interaction

§ Given: y = β_{0} + β_{1}x_{1} + β_{2}x_{2} + β_{3}x_{1}x_{2} + ε

§ Without the interaction term, the effect of X_{1} on Y is measured by β_{1}

§ With the interaction term, the effect of X_{1} on Y is measured by β_{1} + β_{3}X_{2}

§ The effect changes as X_{2} changes
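A small numerical illustration with arbitrary coefficients (b_{0} = 1, b_{1} = 2, b_{2} = 3, b_{3} = 4, chosen only for demonstration): the slope in x_{1} is b_{1} when x_{2} = 0 and b_{1} + b_{3} when x_{2} = 1.

```python
# Illustrative fitted coefficients (arbitrary values for demonstration)
b0, b1, b2, b3 = 1.0, 2.0, 3.0, 4.0

def y_hat(x1, x2):
    """Fitted value with an interaction (cross product) term."""
    return b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2

# Slope of y in x1 at two levels of x2:
slope_at_x2_0 = y_hat(1.0, 0.0) - y_hat(0.0, 0.0)  # = b1 = 2
slope_at_x2_1 = y_hat(1.0, 1.0) - y_hat(0.0, 1.0)  # = b1 + b3 = 6
print(slope_at_x2_0, slope_at_x2_1)  # → 2.0 6.0
```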

Interaction Example

Significance of Interaction Term

§ The coefficient b_{3 }is an estimate of the difference in the coefficient of x_{1} when x_{2} = 1 compared to when x_{2} = 0

§ The t statistic for b_{3} can be used to test the hypothesis H_{0}: β_{3} = 0 against H_{1}: β_{3} ≠ 0

§ If we reject the null hypothesis we conclude that there is a difference in the slope coefficient for the two subgroups

Multiple Regression Assumptions

**Assumptions**:

§ The errors are normally distributed

§ Errors have a constant variance

§ The model errors are independent

Analysis of Residuals in Multiple Regression

§ These residual plots are used in multiple regression:

§ Residuals vs. y_{i}

§ Residuals vs. x_{1i}

§ Residuals vs. x_{2i}

§ Residuals vs. time (if time series data)

Chapter Summary

§ Developed the multiple regression model

§ Tested the significance of the multiple regression model

§ Discussed the adjusted coefficient of determination, adjusted R^{2}

§ Tested individual regression coefficients

§ Tested portions of the regression model

§ Used quadratic terms and log transformations in regression models

§ Used dummy variables

§ Evaluated interaction effects

§ Discussed using residual plots to check model assumptions