
ACTIVE SUBSPACES AND SUFFICIENT DIMENSION REDUCTION IN 3D RESISTIVE MAGNETOHYDRODYNAMICS

ANDREW GLAWS¹, PAUL CONSTANTINE¹, TIM WILDEY², AND JOHN SHADID²

¹Colorado School of Mines (contact: [email protected]), ²Sandia National Laboratories

ACTIVE SUBSPACES

Assumptions:
• Deterministic model with m scalar inputs: f(x) ∈ R, x ∈ R^m
• Input parameters drawn according to a density function ρ(x)
• Model output is differentiable, with ∇f(x) ∈ R^m

Goal:
• Determine directions in parameter space along which f(x) changes the most, on average

Method (a Monte Carlo sketch follows below):

    C = ∫ (∇f)(∇f)^T ρ dx

Interpretation:
• Importance of each basis vector relates directly to its eigenvalue:

    λ_i = ∫ ((∇f)^T w_i)^2 ρ dx

• Dimension of the reduced subspace is not explicitly defined
• Obtain a low-dimensional approximation to the model:

    f(x) ≈ g(W_1^T x),   W_1^T x ∈ R^n,   n < m
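
As a concrete illustration of the method above, here is a minimal Monte Carlo sketch: draw inputs from ρ, average the outer products of the gradients to estimate C, and keep the dominant eigenvectors as the active directions. The callables grad_f and sample_rho are hypothetical stand-ins for a model's gradient and input sampler, not part of the poster's code.

```python
import numpy as np

def active_subspace(grad_f, sample_rho, M=1000, n=2):
    """Estimate C = E[(grad f)(grad f)^T] by Monte Carlo and return
    its eigenvalues plus the dominant-n eigenvector basis.

    grad_f     : callable x -> grad f(x), a length-m array (assumed)
    sample_rho : callable () -> one draw x ~ rho (assumed)
    M          : number of Monte Carlo samples
    n          : number of active directions to keep
    """
    grads = np.array([grad_f(sample_rho()) for _ in range(M)])  # (M, m)
    C = grads.T @ grads / M                                     # (m, m) estimate
    eigvals, eigvecs = np.linalg.eigh(C)                        # ascending order
    idx = np.argsort(eigvals)[::-1]                             # sort descending
    eigvals, W = eigvals[idx], eigvecs[:, idx]
    W1 = W[:, :n]          # active subspace basis: f(x) ~ g(W1.T @ x)
    return eigvals, W1
```

Gaps in the returned eigenvalue spectrum then suggest a choice of n, consistent with the note above that the reduced dimension is not fixed in advance.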

SUFFICIENT DIMENSION REDUCTION

Assumptions:
• m scalar predictors and a scalar response:

    y = f(x) + ε,   x ∈ R^m,   y ∈ R

Goal:
• Determine a subspace of the predictor space on which the conditional distribution of the response is unchanged

Method (a sketch of SIR follows below):
• Sliced inverse regression (SIR):

    C = ∫ E(x|y) E(x|y)^T dy

• Sliced average variance estimate (SAVE):

    C = ∫ (I − cov(x|y))^2 dy

• Principal Hessian directions (PHD):

    C = E(∇² E(y|x))

Interpretation:
• Reduction is defined by the subspace and not by any specific basis
• The subspace maintains the conditional distribution of the response:

    y|x ∼ y | η_1^T x, …, η_n^T x
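
For SIR specifically, a minimal sketch of the standard sample version (standardize the predictors, slice by the response, average the standardized predictors within each slice, and eigendecompose the weighted outer products) might look as follows; the slice count n_slices and the data arrays X, y are illustrative assumptions.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n=2):
    """Sliced inverse regression on standardized predictors.

    X : (M, m) array of predictor samples (assumed)
    y : (M,) array of responses (assumed)
    Returns the top-n SDR directions as columns, in original coordinates.
    """
    M, m = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(Sigma)               # Sigma = L L^T
    Z = np.linalg.solve(L, (X - mu).T).T        # standardized: cov(Z) = I
    # Slice by sorted response and average z within each slice
    order = np.argsort(y)
    C = np.zeros((m, m))
    for s in np.array_split(order, n_slices):
        zbar = Z[s].mean(axis=0)
        C += (len(s) / M) * np.outer(zbar, zbar)
    eigvals, eigvecs = np.linalg.eigh(C)
    V = eigvecs[:, np.argsort(eigvals)[::-1][:n]]
    # Back-transform directions to the original predictor scale
    etas = np.linalg.solve(L.T, V)
    return etas
```

SAVE and PHD follow the same pattern; only the within-slice statistic that builds C changes.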

HARTMANN PROBLEM

Parameter   Description                        Range
µ           viscosity                          [0.05, 0.2]
ρ           density                            [1, 5]
∇P0         volumetric applied pressure drop   [0.5, 3]
η           resistivity                        [0.5, 3]
B0          applied magnetic field             [0.1, 1]

Active subspace analysis

Sufficient dimension reduction (using SIR)

Subspace convergence

Subspace error = ||Ŵ_1^T W_2||_2
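
Here Ŵ_1 is the estimated basis and W_2 spans the orthogonal complement of a reference subspace, so the norm is 0 when the two subspaces coincide and 1 when they are orthogonal. A minimal sketch of the computation (the orthonormal input bases are assumptions):

```python
import numpy as np

def subspace_error(W1_hat, W1_ref):
    """||W1_hat^T W2||_2, where W2 spans the orthogonal complement
    of the reference subspace spanned by W1_ref (both orthonormal)."""
    m, n = W1_ref.shape
    U, _, _ = np.linalg.svd(W1_ref, full_matrices=True)
    W2 = U[:, n:]                              # (m, m-n) complement basis
    return np.linalg.norm(W1_hat.T @ W2, 2)    # spectral norm, in [0, 1]
```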

MHD GENERATOR

Parameter   Description                        Range
µ           viscosity                          [0.001, 0.01]
∇P0         volumetric applied pressure drop   [0.1, 0.5]
η           resistivity                        [0.1, 10]
ρ           density                            [0.1, 10]

Active subspace analysis

Sliced inverse regression

Sliced average variance estimate

Principal Hessian directions

CONCLUSIONS

For the simulations examined, we observed similar dimension reduction between active subspaces and the SDR methods. However, in studying the subspace convergence of the various techniques, we saw that active subspaces best approximates its true dimension reduction space when only a small number of samples is available (M < 100). Furthermore, active subspaces showed a much weaker dependence on the sampling than the SDR methods, as shown through the bootstrap errors.
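
The poster does not spell out how its bootstrap errors were computed; one plausible sketch resamples the gradient samples with replacement, re-estimates the active subspace for each replicate, and measures each replicate's subspace error against the full-sample estimate (reusing subspace_error from the sketch above).

```python
import numpy as np

def bootstrap_subspace_errors(grads, n=2, n_boot=100, seed=0):
    """Bootstrap spread of the estimated active subspace.

    grads : (M, m) array of gradient samples grad f(x_i) (assumed given).
    Returns n_boot replicates of the subspace error against the
    full-sample estimate; uses subspace_error() defined above.
    """
    rng = np.random.default_rng(seed)
    M = grads.shape[0]

    def top_eigvecs(G):
        # Dominant-n eigenvectors of the sample matrix C = G^T G / M
        vals, vecs = np.linalg.eigh(G.T @ G / G.shape[0])
        return vecs[:, np.argsort(vals)[::-1][:n]]

    W1_ref = top_eigvecs(grads)
    errs = [subspace_error(top_eigvecs(grads[rng.integers(0, M, M)]), W1_ref)
            for _ in range(n_boot)]
    return np.array(errs)
```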

REFERENCES

1. Constantine, P. G. Active Subspaces: Emerging Ideas in Dimension Reduction for Parameter Studies. SIAM, Philadelphia (2015).
2. Cook, R. D. Regression Graphics: Ideas for Studying Regressions through Graphics. Wiley, New York (1998).
3. Cook, R. D. and Weisberg, S. “Sliced Inverse Regression for Dimension Reduction: Comment.” Journal of the American Statistical Association 86: 328–332 (1991).
4. Li, K. C. “On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein’s Lemma.” Journal of the American Statistical Association 87: 1025–1039 (1992).
5. Li, K. C. “Sliced Inverse Regression for Dimension Reduction.” Journal of the American Statistical Association 86: 316–342 (1991).