Nonlinear Regression - Simon Fraser University


Page 1: Nonlinear Regression - Simon Fraser University

Nonlinear Regression
Chapter 2 of Bates and Watts

© Dave Campbell 2009

Friday, June 12, 2009

Page 2: Nonlinear Regression - Simon Fraser University

So far we've considered linear models.

Here the expectation surface is a plane spanning a subspace of the observation space.

Our expectation surface is flat, and our model is linear.

When things are flat, projections are easy to do and understand.


Page 3: Nonlinear Regression - Simon Fraser University

Nonlinear least squares

Gauss-Newton

Geometry

Matlab Functions


Page 4: Nonlinear Regression - Simon Fraser University

The N observations are modeled by

\[ Y_n = f(X_n, \theta) + Z_n \]

where f(X_n, θ) might be, for example:

\[ Y_0 e^{-\alpha X} + \beta/\alpha, \qquad \frac{\theta_1 X}{\theta_2 + X}, \qquad \frac{1}{1 + e^{-\beta_0 - \beta_1 X}} \]


Page 5: Nonlinear Regression - Simon Fraser University

We will define the expectation function

\[ \eta_n(\theta) = f(X_n, \theta) \]

and the observation process

\[ Y_n = \eta_n(\theta) + Z_n \]

and again assume

\[ E(Z) = 0, \qquad \operatorname{var}(Z) = E(ZZ') = \sigma^2 I \]


Page 6: Nonlinear Regression - Simon Fraser University

Lipoprotein problem from Bates and Watts

% of the original tracer, observed over time (days)


Page 7: Nonlinear Regression - Simon Fraser University

Consider the one-compartment model:

\[ \eta(\theta) = f(X_n, \theta) = \theta_1 e^{-\theta_2 X} \]

Although we could transform the data to fit the linear model

\[ \log(Y_n) = \log(\theta_1) - \theta_2 X_n + \mathrm{error}_n \]

we will ignore this linearization and use the nonlinear model as a simple example.


Page 8: Nonlinear Regression - Simon Fraser University

The response surface is traced out by changing the value of the single parameter in

\[ \eta(\theta) = 100 e^{-\theta_2 X} \]

Our expectation surface is then a one-dimensional manifold (a curve) in the observation space.
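As an illustration (not from the slides), here is a minimal Matlab sketch of that curve using just two observation times, so the expectation surface can be drawn in the plane; the times in x are placeholder values:

x = [1; 4];                      % two observation times (days), placeholder values
theta2 = linspace(0, 2, 200);    % grid of values for the single parameter
eta = 100*exp(-x*theta2);        % 2-by-200: each column is eta(theta2), a point in R^2
plot(eta(1,:), eta(2,:))         % the expectation surface: a one-dimensional curve
xlabel('\eta_1'), ylabel('\eta_2')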


Page 9: Nonlinear Regression - Simon Fraser University

Lipoprotein observation and response surface


Page 10: Nonlinear Regression - Simon Fraser University

Again we wish to minimize the length of the residual vector.

But there may be more than one location where the residual vector is orthogonal to the tangent of the expectation surface.


Page 11: Nonlinear Regression - Simon Fraser University

The problem is even worse in the two-parameter model, where the initial condition is also unknown:

\[ \eta(\theta) = f(X_n, \theta) = \theta_1 e^{-\theta_2 X} \]


Page 12: Nonlinear Regression - Simon Fraser University

Lipoprotein observations and the two-parameter response surface.

Lines show the response surface with θ1 held fixed while θ2 varies; the dots connecting the spreading lines (forming the kinks) show changes in θ1 with θ2 held fixed.


Page 13: Nonlinear Regression - Simon Fraser University

Steps:

1. Find the point η̂ = Ŷ on the expectation surface closest to Y.

2. Find the θ̂ corresponding to this point.


Page 14: Nonlinear Regression - Simon Fraser University

Gauss-Newton Method

Use a linear approximation to the expectation surface to iteratively improve an initial guess θ^(0) for θ.

We will use a linear Taylor approximation to the expectation surface, and then use linear regression methods.

We will need to keep updating the Taylor expansion and keep updating our estimate.


Page 15: Nonlinear Regression - Simon Fraser University

The expectation function is

\[ \eta(\theta) = \theta_1 e^{-\theta_2 X} \]

For a single observation, the Taylor expansion in the p-dimensional parameter vector is

\[ \eta_n(\theta) \approx \eta_n(\theta^{(0)}) + \sum_{k=1}^{p} \left. \frac{\partial \eta_n(\theta)}{\partial \theta_k} \right|_{\theta=\theta^{(0)}} \left[ \theta_k - \theta_k^{(0)} \right] \]

Including all observations we get

\[ \eta(\theta) \approx \eta(\theta^{(0)}) + V^{(0)} (\theta - \theta^{(0)}) \]

where

\[ V^{(0)} = \left. \begin{bmatrix} \dfrac{\partial \eta(X_1,\theta)}{\partial \theta_1} & \cdots & \dfrac{\partial \eta(X_1,\theta)}{\partial \theta_p} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \eta(X_n,\theta)}{\partial \theta_1} & \cdots & \dfrac{\partial \eta(X_n,\theta)}{\partial \theta_p} \end{bmatrix} \right|_{\theta=\theta^{(0)}} \]


Page 16: Nonlinear Regression - Simon Fraser University

For the model

\[ \eta(\theta) = \theta_1 e^{-\theta_2 X} \]

the rows of the derivative matrix are

\[ V^{(0)} = \left. \begin{bmatrix} e^{-\theta_2 X} & -\theta_1 X e^{-\theta_2 X} \end{bmatrix} \right|_{\theta=\theta^{(0)}} \]
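With V^(0) in hand, the Gauss-Newton increment is just the linear least-squares step for the approximating plane (the standard Gauss-Newton update, stated here for completeness):

\[ \delta^{(0)} = \left( V^{(0)\prime} V^{(0)} \right)^{-1} V^{(0)\prime} \left( Y - \eta(\theta^{(0)}) \right), \qquad \theta^{(1)} = \theta^{(0)} + \delta^{(0)} \]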


Page 17: Nonlinear Regression - Simon Fraser University

The fit to the observations on the response surface and the Gauss Newton path


Page 18: Nonlinear Regression - Simon Fraser University

The first step jumps really far away from the region where the linear Taylor approximation is valid.

We can improve the Gauss-Newton algorithm by enforcing the condition

\[ SSE(\theta^{(i+1)}) < SSE(\theta^{(i)}) \]

This means that at each step we may have to adjust the step size δ so that it doesn't take us to a worse location on the response surface.


Page 19: Nonlinear Regression - Simon Fraser University

We adjust the algorithm by halving δ if the SSE condition is not met, and trying again.


Page 20: Nonlinear Regression - Simon Fraser University


Page 21: Nonlinear Regression - Simon Fraser University

Geometry

1. Approximate the expectation surface by an expectation plane at the current value η(θ^(0)).

2. Generate a residual vector z = y − η(θ^(0)).

3. Project the residual onto the tangent plane to get the new value η(θ^(1)) on the approximating expectation plane.

4. Map the move to η(θ^(1)) back through the linear approximation to get a step δ.

5. Move to the point η(θ^(0) + δ) on the actual expectation surface.
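A minimal Matlab sketch (not from the slides) of these steps with the SSE-based step halving, for the two-parameter model η(θ) = θ1 e^(−θ2 X); the data X, Y and the starting values are placeholders:

X = (0:0.5:10)';                        % observation times (placeholder)
Y = 100*exp(-0.6*X) + randn(size(X));   % synthetic observations for illustration
theta = [90; 0.4];                      % starting guess theta^(0)
eta = @(th) th(1)*exp(-th(2)*X);        % expectation function
SSE = @(th) sum((Y - eta(th)).^2);
for i = 1:100
    V = [exp(-theta(2)*X), -theta(1)*X.*exp(-theta(2)*X)];  % derivative matrix V
    z = Y - eta(theta);                 % residual vector
    delta = V \ z;                      % Gauss-Newton increment (linear LS step)
    step = 1;
    while SSE(theta + step*delta) >= SSE(theta) && step > 1e-10
        step = step/2;                  % halve the step until the SSE decreases
    end
    theta = theta + step*delta;
    if norm(step*delta) < 1e-8, break, end
end
theta                                   % nonlinear least squares estimate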

Page 22: Nonlinear Regression - Simon Fraser University

Gauss Newton Convergence


Page 23: Nonlinear Regression - Simon Fraser University

Gauss Newton Convergence

In the Matlab code I told it when to stop.


Page 24: Nonlinear Regression - Simon Fraser University

Gauss Newton Convergence

In the Matlab code I told it when to stop.

Ideal convergence criteria are instead based on the angle between the residual vector and the tangent plane to the expectation surface.


Page 25: Nonlinear Regression - Simon Fraser University

Inference

Inference is based on a linear approximation to the expectation surface:

we project a disk onto the approximating expectation plane,

then transform the values back to the parameter space to get an inference region.


Page 26: Nonlinear Regression - Simon Fraser University

Other Intervals

Marginal intervals: use the same linear approximation and apply the linear marginal inference regions.

Inference bands for the expected response: replace x0′β from the linear case with f(x0, θ̂), replace the matrix X by V, and replace the derivative vector x0 with the corresponding derivative matrix entry v0.
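Written out, a sketch of that substitution under the linear approximation (the usual pointwise band from linear regression with the nonlinear quantities dropped in):

\[ f(x_0, \hat\theta) \;\pm\; s \sqrt{ v_0' \left( \hat V' \hat V \right)^{-1} v_0 } \; t(N - P;\, \alpha/2), \qquad v_0 = \left. \frac{\partial f(x_0, \theta)}{\partial \theta} \right|_{\theta=\hat\theta} \]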


Page 27: Nonlinear Regression - Simon Fraser University

1. Why is it so wide near time 0?

2. Why is it so narrow near time 10?

3. Why is it generally so wide?

4. Why is it wavy?

Page 28: Nonlinear Regression - Simon Fraser University

Using 3 or 12 observations

Page 29: Nonlinear Regression - Simon Fraser University

Using 3 or 12 observations

Page 30: Nonlinear Regression - Simon Fraser University

Matlab NLS function

[beta,r,J,COVB,mse] = nlinfit(X,y,fun,beta0)

X is the same as our X

y is a vector of observations

fun is a function handle for the nonlinear function; it takes inputs (beta,X) and returns Yhat

beta0 is the starting point for the iterative estimates


Page 31: Nonlinear Regression - Simon Fraser University

Matlab NLS function

[beta,r,J,COVB,mse] = nlinfit(X,y,fun,beta0)

beta is the NLS point estimate

r is the residual vector

J is the Jacobian of the function fun evaluated at each observation: this is our V

COVB is the estimated covariance matrix for the parameters

mse is an estimate of the error variance: our s^2
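A hypothetical usage sketch for the one-compartment lipoprotein model (the data here are synthetic placeholders, not the Bates and Watts data):

X = (0:0.5:10)';                               % times (days), placeholder
Y = 100*exp(-0.6*X) + randn(size(X));          % synthetic tracer percentages
lipomodel = @(beta,X) beta(1)*exp(-beta(2)*X); % f(X, theta)
beta0 = [100, 0.5];                            % starting values theta^(0)
[beta,r,J,COVB,mse] = nlinfit(X,Y,lipomodel,beta0);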


Page 32: Nonlinear Regression - Simon Fraser University

Marginal Confidence intervals for parameters

CI_NLS = nlparci(beta,resids,'jacobian',J)

beta is the parameter estimate output from nlinfit

resids is the residual vector output from nlinfit

J is the Jacobian output from nlinfit


Page 33: Nonlinear Regression - Simon Fraser University

Confidence interval for a new value of x

[ypred,delta] = nlpredci(fun,x,beta,resid,'jacobian',J)

beta is the parameter estimate output from nlinfit

resids is the residual vector output from nlinfit

J is the Jacobian output from nlinfit

ypred ± delta is the CI


Page 34: Nonlinear Regression - Simon Fraser University

Or do it all using a GUI

nlintool(X,Y,@lipomodel_Xlast,theta)

Give it your data, the model, and a starting point, and it will give you point estimates for the parameters,

confidence bounds for single points, the response function, and future observations.

Bounds: simultaneous bounds are for all points at once; non-simultaneous bounds are for individual points.


Page 35: Nonlinear Regression - Simon Fraser University

The downside to the Matlab built-in functions:

They do not produce joint interval estimate ellipses for parameters.

We must use the QR decomposition for this.
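A minimal sketch (my own, not from the slides) of one way to draw the joint ellipse from the linear approximation, using the QR decomposition of the Jacobian returned by nlinfit; it assumes beta, J, and mse come from nlinfit for a two-parameter model such as the lipoprotein example:

[n,p] = size(J);
[Q,R] = qr(J,0);                          % economy QR, so V'V = R'R
radius = sqrt(p*mse*finv(0.95,p,n-p));    % disk radius for a 95% joint region
phi = linspace(0,2*pi,200);
circle = radius*[cos(phi); sin(phi)];     % disk boundary in the rotated coordinates u = R*(theta - beta)
ellipse = R\circle + repmat(beta(:),1,numel(phi));  % map back to the parameter plane
plot(ellipse(1,:), ellipse(2,:))
xlabel('\theta_1'), ylabel('\theta_2')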


Page 36: Nonlinear Regression - Simon Fraser University

Nonlinear Least Squares for ODEs

Simplest method: combine the NLS and ODE solvers by calling an ODE solver inside fun:

[beta,r,J,COVB,mse] = nlinfit(X,y,fun,beta0)


Page 37: Nonlinear Regression - Simon Fraser University

The FitzHugh-Nagumo equations:

\[ \frac{dV}{dt} = \gamma \left( V - \frac{V^3}{3} + R \right) \]

\[ \frac{dR}{dt} = -\frac{1}{\gamma} \left( V - \alpha + \beta R \right) \]

Page 38: Nonlinear Regression - Simon Fraser University

The FitzHugh-Nagumo equations:

\[ \frac{dV}{dt} = \gamma \left( V - \frac{V^3}{3} + R \right) \]

\[ \frac{dR}{dt} = -\frac{1}{\gamma} \left( V - \alpha + \beta R \right) \]

Page 39: Nonlinear Regression - Simon Fraser University

The inputs to fun must be β and X, but in our case X is time. X must be a vector with the same length as the data Y.

The output of fun must be the fit to the data (the expectation surface at the current parameter point); we have to make it a single vector.

function [yfit] = NLS_FhN(pars,time)
odefn = @fhnfunode;
odeopts = odeset('RelTol',1e-13);
[junk,path] = ode45(odefn,time(1:401),pars(4:5),odeopts,pars(1:3));
yfit = [path(:,1); path(:,2)];
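The slides call @fhnfunode but never show it; a minimal sketch consistent with the FitzHugh-Nagumo equations above and with the ode45 call (which passes pars = [α, β, γ] through as an extra argument) might look like:

function dy = fhnfunode(t,y,pars)
% FitzHugh-Nagumo right-hand side; y = [V; R], pars = [alpha, beta, gamma]
alpha = pars(1); beta = pars(2); gamma = pars(3);
V = y(1); R = y(2);
dy = [gamma*(V - V^3/3 + R);
      -(1/gamma)*(V - alpha + beta*R)];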


Page 40: Nonlinear Regression - Simon Fraser University

To run this program:

clear
load(strcat('/Volumes/iamdavecampbell/MCMC many times/',...
    '1000 random data sets/FhN MCMC data set_38.mat'))
time = [time; time];               % stack time
Y = [Ydata(:,1); Ydata(:,2)];      % stack data
beta0 = [.5,.5,1,-.5,.5];
fun = @NLS_FhN;
[beta,r,J,COVB,mse] = nlinfit(time,Y,fun,beta0);


Page 41: Nonlinear Regression - Simon Fraser University

To run this program:

[Ypred,delta] = nlpredci(fun,time,beta,r,'jacobian',J);
n = length(Ydata(:,1));
CI_Y = repmat(Ypred,1,3) + [-delta, zeros(2*n,1), delta];
plot(time(1:n),CI_Y(1:n,:),'b',time(1:n),Ydata(:,1),'.b',...
    time(1:n),CI_Y(n+1:2*n,:),'k',time(1:n),Ydata(:,2),'.k')


Page 42: Nonlinear Regression - Simon Fraser University

CI for the expectation function

Page 43: Nonlinear Regression - Simon Fraser University

To run this program:

CI_NLS = nlparci(beta,r,'jacobian',J);
[CI_NLS(:,1), beta', CI_NLS(:,2)]

>> [CI_NLS(:,1),beta',CI_NLS(:,2)]

ans =

    0.1718    0.1958    0.2199
    0.0599    0.1682    0.2764
    2.9338    2.9744    3.0150
   -1.1795   -1.0844   -0.9893
    0.9081    0.9896    1.0712


Page 44: Nonlinear Regression - Simon Fraser University

nlinfit uses a numerical Jacobian to produce the Taylor approximation to the response surface.

There are other functions that let you include the formula for the Jacobian: lsqnonlin
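A hypothetical sketch of how lsqnonlin accepts an analytic Jacobian (lipo_resid and the data X, Y are placeholders; when the Jacobian option is on, the residual function must return the Jacobian of the residuals as its second output):

% in lipo_resid.m: residuals and analytic Jacobian for eta(theta) = theta1*exp(-theta2*X)
function [res, Jac] = lipo_resid(theta, X, Y)
res = Y - theta(1)*exp(-theta(2)*X);
Jac = [-exp(-theta(2)*X), theta(1)*X.*exp(-theta(2)*X)];   % d(res)/d(theta)

% in the calling script:
opts = optimset('Jacobian','on');
theta_hat = lsqnonlin(@(th) lipo_resid(th,X,Y), [100, 0.5], [], [], opts);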
