Solving PDE-related problems using deep learning
Adar Kahana, Tel Aviv University
Under the supervision of
Eli Turkel (TAU), Dan Givoli (Technion), and Shai Dekel (TAU)
Waves seminar, UC Merced
February 4th, 2021
Agenda
Motivation
Data driven problems
Obstacle identification and deep-learning
Dealing with CFL instability using deep-learning
Underwater acoustics
Sonar imaging
The wave problem
\[
\begin{aligned}
u_{tt}(\vec{x}, t) &= \operatorname{div}\!\left(c(\vec{x})^2 \, \nabla u(\vec{x}, t)\right), && \vec{x} \in \Omega,\ t \in (0, T] \\
u(\vec{x}, 0) &= u_0(\vec{x}), && \vec{x} \in \Omega \\
u_t(\vec{x}, 0) &= \dot{u}_0(\vec{x}), && \vec{x} \in \Omega \\
u(\vec{x}, t) &= f(\vec{x}, t), && \vec{x} \in \partial\Omega_1,\ t \in [0, T] \\
\nabla u(\vec{x}, t) &= g(\vec{x}, t), && \vec{x} \in \partial\Omega_2,\ t \in [0, T] \\
\partial\Omega_1 \cup \partial\Omega_2 &= \partial\Omega
\end{aligned}
\]
where \(\dot{u}_0(\vec{x}) = 0\) and \(f(\vec{x}, t) = g(\vec{x}, t) = 0\).
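As an illustration of the forward problem above, here is a minimal explicit finite-difference time step for the 2D constant-speed case with homogeneous Dirichlet boundaries (a simplification of the variable-coefficient equation; the function name and grid setup are this sketch's own):

```python
import numpy as np

def wave_step(u_prev, u_curr, c, dx, dt):
    """One explicit second-order time step for u_tt = c^2 * laplacian(u)
    on a uniform 2D grid with homogeneous Dirichlet boundaries (f = 0,
    as in the problem setup). Interior update only; boundaries stay zero."""
    lap = np.zeros_like(u_curr)
    lap[1:-1, 1:-1] = (
        u_curr[2:, 1:-1] + u_curr[:-2, 1:-1]
        + u_curr[1:-1, 2:] + u_curr[1:-1, :-2]
        - 4.0 * u_curr[1:-1, 1:-1]
    ) / dx**2
    u_next = 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next
```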
Ill-posed problems
In an experiment, we store the pressure at a small number of sensors for all time steps
We wish to find the properties of the source or obstacle from the data stored at these sensors, where the number of sensors is far smaller than the number of mesh points
This is an inverse problem which is highly ill-posed
Hence, one cannot usually reconstruct the initial conditions perfectly
Can we solve these types of ill-posed problems with learning?
Partial information
"Recording" the solution at a small set of sensors placed in the domain, \(\{\vec{x}_{s_i}\}_{i=1}^{K} \subset \Omega\)
\[
\text{Data} = \begin{bmatrix} u(\vec{x}_{s_1}, t) \\ u(\vec{x}_{s_2}, t) \\ \vdots \\ u(\vec{x}_{s_K}, t) \end{bmatrix} + \mathcal{N}(\mu, \sigma^2)
\]
The ill-posedness raises sensitivity to noise at the sensors
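A minimal sketch of how such partial, noisy data can be extracted from a simulated solution (the function name and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def record_sensors(u, sensor_idx, sigma=0.01):
    """Extract the solution time series at a few sensor locations and add
    Gaussian measurement noise N(0, sigma^2). `u` has shape
    (n_grid, n_timesteps); `sensor_idx` indexes a handful of grid points."""
    clean = u[sensor_idx, :]                  # (n_sensors, n_timesteps)
    return clean + rng.normal(0.0, sigma, clean.shape)
```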
Agenda
Motivation
Data driven problems
Obstacle identification and deep-learning
Dealing with CFL instability using deep-learning
7
Data driven problems
Supervised learning
Input data
Output labels
Training
Prediction (testing)
Drawback: sensitivity
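The workflow above (input data, output labels, training, prediction) can be sketched with the simplest possible model, a linear map fitted by gradient descent on a mean-squared error; this is a generic illustration, not the network used in this work:

```python
import numpy as np

def train_linear(X, y, lr=0.1, epochs=500):
    """Minimal supervised-learning loop: fit weights w by gradient
    descent on the MSE between predictions X @ w and labels y."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE
        w -= lr * grad
    return w
```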
Deep-learning
Training "weights" to learn connections in the data
Hidden multi-dimensional embeddings
Convolutions, fully connected layers
"Deep" and non-linear
Loss
[Figure: a binary input grid convolved with a 3×3 kernel of weights \(w_1, \dots, w_9\)]
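The figure's convolution of a binary grid with a 3×3 weight kernel can be written out directly (a naive loop version for clarity; deep-learning frameworks implement the same cross-correlation far more efficiently):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (strictly, cross-correlation, as in
    deep-learning frameworks): slide the kernel over the input and
    sum the elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```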
Physically-informed NN
Input: set of points from the initial and boundary
conditions
Output: solution in the domain
Loss: the problem itself (the PDE residual plus the initial and boundary conditions)
M. Raissi, P. Perdikaris, G.E. Karniadakis, Journal of Computational Physics, 2018
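A rough sketch of the PINN idea for the 1D wave equation: penalize the PDE residual at collocation points. Real PINNs differentiate the network with automatic differentiation; here, for illustration only, the derivatives of a callable surrogate are approximated by central finite differences:

```python
import numpy as np

def pinn_style_loss(u_net, x, t, c, h=1e-3):
    """Illustrative PINN-style loss for u_tt = c^2 u_xx: mean squared
    PDE residual at collocation points (x, t). `u_net(x, t)` stands in
    for the network; finite differences replace autograd in this sketch."""
    u_tt = (u_net(x, t + h) - 2 * u_net(x, t) + u_net(x, t - h)) / h**2
    u_xx = (u_net(x + h, t) - 2 * u_net(x, t) + u_net(x - h, t)) / h**2
    residual = u_tt - c**2 * u_xx
    return np.mean(residual**2)
```

An exact solution of the equation should drive this loss to (numerically) zero, which is a useful sanity check when wiring up the real network loss.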
Results
Problem definition
Given the position of the source(s), and data at a few sensors but many time slices, find the location, size and shape of the unknown scatterers
Input: sensor recordings (\(n_\text{sensors} \times n_\text{timesteps} \times n_\text{samples}\))
Output: obstacle?
Prior work
Location: \(\vec{x}\)
Shape and size:
Circles: radius
Rectangles: height and width
Complex shapes: need to be parametrized
"Soft" obstacles:
Semi-penetrable
Multiphysics
Labels solution: segments
Labels are \(n \times n\) binary matrices
Predictions will be \(n \times n\) probability matrices
Loss: NLL (negative log-likelihood)
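The NLL loss on an \(n \times n\) probability map against the binary label can be sketched as pixel-wise binary cross-entropy (the clipping constant is an implementation detail of this sketch, added to avoid log of zero):

```python
import numpy as np

def pixelwise_nll(prob_map, label_map, eps=1e-9):
    """Negative log-likelihood (binary cross-entropy) between an n x n
    probability map and the n x n binary obstacle label, averaged over
    pixels."""
    p = np.clip(prob_map, eps, 1.0 - eps)
    return -np.mean(label_map * np.log(p) + (1 - label_map) * np.log(1 - p))
```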
Spatio-temporal architecture
Loss diagram
Probability map
Physically informed loss
Using the segmentation network output \(\tilde{O}\), define a loss component based on:
Solve \(u_{tt} = \left(1 - \tilde{O}(\vec{x})\right) c^2(\vec{x}) \, \Delta u\)
Get the sensor data \(\{\tilde{u}(\vec{x}_i, t)\}_{i=1}^{\#\text{sensors}}\) for each sample
Calculate the MSE between the ground truth \(\{u(\vec{x}_i, t)\}_{i=1}^{\#\text{sensors}}\) and the prediction, as component \(L_2\)
Define the loss function for our network as \(\alpha \cdot L_1 + (1 - \alpha) \cdot L_2\), such that \(L_1\) is the NLL loss described earlier
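A 1D sketch of this combined loss, re-simulating the wave equation with the predicted obstacle map masking the medium; the function name, the time-stepping details, and the sensor setup are all illustrative assumptions:

```python
import numpy as np

def physically_informed_loss(l1_nll, o_hat, u0, c, dx, dt, sensor_idx,
                             u_true_sensors, n_steps, alpha=0.5):
    """Sketch of alpha*L1 + (1-alpha)*L2. L1 is the NLL segmentation loss
    (computed elsewhere). L2 re-simulates u_tt = (1 - o_hat) * c^2 * u_xx
    in 1D with the predicted obstacle map `o_hat`, then takes the MSE
    between simulated and ground-truth sensor traces."""
    u_prev, u_curr = u0.copy(), u0.copy()        # zero initial velocity
    sim = []
    for _ in range(n_steps):
        lap = np.zeros_like(u_curr)
        lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
        u_next = 2 * u_curr - u_prev + dt**2 * (1.0 - o_hat) * c**2 * lap
        u_next[0] = u_next[-1] = 0.0             # Dirichlet boundaries
        u_prev, u_curr = u_curr, u_next
        sim.append(u_curr[sensor_idx])
    l2 = np.mean((np.array(sim) - u_true_sensors) ** 2)
    return alpha * l1_nll + (1.0 - alpha) * l2
```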
Numerical experiments
Dirichlet BC
Compact Gaussian initial condition
Arbitrary polygonal obstacles:
Generate the number of edges
Generate edge lengths and angles
Generate the location \((x_0, y_0, z_0)\)
This gives an enormous sample space; only 25,000 samples were generated
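The generation steps above might look like the following sketch; the distributions, parameter ranges, and names are assumptions, and the actual pipeline additionally rasterizes each polygon onto the simulation grid:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_polygon(n_edges_range=(3, 8), radius_range=(0.05, 0.2),
                   center_box=(0.2, 0.8)):
    """Generate one random polygonal obstacle: draw a number of edges,
    per-vertex angles and radii (edge lengths/angles), and a random
    center location, following the recipe on the slide."""
    n = rng.integers(*n_edges_range)
    angles = np.sort(rng.uniform(0.0, 2 * np.pi, n))
    radii = rng.uniform(*radius_range, n)
    cx, cy = rng.uniform(*center_box, 2)
    xs = cx + radii * np.cos(angles)
    ys = cy + radii * np.sin(angles)
    return np.column_stack([xs, ys])             # (n_vertices, 2)
```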
Probability images
Neural network results
Intersection over union: \(0 \le \mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|} \le 1\)
Up to 66% IoU score
A. Kahana, E. Turkel, D. Givoli, S. Dekel, Journal of Computational Physics, 2020
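The IoU score used above, computed for two binary masks:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two binary masks: |A ∩ B| / |A ∪ B|.
    Returns 1.0 for two empty masks by convention."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 1.0
```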
Explicit schemes and CFL
One-dimensional wave equation
CFL condition for stability: \(\alpha = \dfrac{c \, \Delta t}{\Delta x} \le 1\)
FDCD scheme: \(u_i^{n+1} = 2u_i^n - u_i^{n-1} + \alpha^2 \left( u_{i+1}^n - 2u_i^n + u_{i-1}^n \right)\)
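The FDCD scheme above, written as one vectorized time step (with homogeneous Dirichlet boundaries, as in the experiments):

```python
import numpy as np

def fdcd_step(u_prev, u_curr, alpha):
    """One step of the explicit central-difference scheme
    u_i^{n+1} = 2 u_i^n - u_i^{n-1} + alpha^2 (u_{i+1}^n - 2 u_i^n + u_{i-1}^n),
    stable only when the CFL number alpha = c*dt/dx <= 1."""
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + alpha**2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0                 # Dirichlet BC
    return u_next
```

A useful property for testing: at exactly \(\alpha = 1\) (constant speed), the scheme reproduces the d'Alembert solution on the grid to machine precision.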
Network architecture
Input: \(u^{n-1}, u^n\)
Output: \(u^{n+1}\)
Spatio-temporal architecture
Non-linear activation (PReLU)
Loss: MSE between \(u^{n+1}\) and \(\tilde{u}^{n+1}\)
Network diagram
Physics-informed loss
Use \(u^{n-1}, u^n\) to predict \(\tilde{u}^{n+1}\)
Inside the loss:
Use \(u^n, u^{n+1}\) to calculate \(u^{n+k}\)
Use \(u^n, \tilde{u}^{n+1}\) to predict \(\tilde{u}^{n+k}\)
Calculate the MSE between \(u^{n+k}\) and \(\tilde{u}^{n+k}\)
The network loss is a linear combination of the two MSE losses
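A sketch of this two-part loss where, as an assumption of this sketch, the extra advance beyond \(n+1\) is taken with the explicit scheme itself, so that inconsistencies in the prediction are penalized where they would propagate; the weighting `beta` and all names are illustrative:

```python
import numpy as np

def stepper_loss(u_next_pred, u_next_true, u_curr, alpha, beta=0.5):
    """Two-part physics-informed loss for the learned time stepper:
    MSE on the predicted state u^{n+1}, plus MSE after advancing both
    the prediction and the truth one further explicit FDCD step."""
    def fd_step(prev, curr):
        nxt = np.empty_like(curr)
        nxt[1:-1] = (2 * curr[1:-1] - prev[1:-1]
                     + alpha**2 * (curr[2:] - 2 * curr[1:-1] + curr[:-2]))
        nxt[0] = nxt[-1] = 0.0
        return nxt
    mse1 = np.mean((u_next_pred - u_next_true) ** 2)
    adv_true = fd_step(u_curr, u_next_true)      # advance the truth
    adv_pred = fd_step(u_curr, u_next_pred)      # advance the prediction
    mse2 = np.mean((adv_pred - adv_true) ** 2)
    return beta * mse1 + (1.0 - beta) * mse2
```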
Numerical experiments
Dirichlet BC
Data:
Linear combinations with random coefficients, created from the basis \(\{\sin(k\pi x)\}_{k=1}^{20}\)
1250 different initial conditions and 397 time steps each, for a total of 496,250 samples
Samples were created with CFL = 0.875, and only every 10th sample was taken to get CFL = 8.75
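Generating such initial conditions from the sine basis might look like the following; the coefficient distribution is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_initial_condition(x, n_modes=20):
    """Draw one initial condition as a random linear combination of the
    basis {sin(k*pi*x)}_{k=1..n_modes}; it vanishes at x = 0 and x = 1,
    consistent with the Dirichlet BC."""
    coeffs = rng.normal(0.0, 1.0, n_modes)
    k = np.arange(1, n_modes + 1)
    return np.sin(np.pi * np.outer(x, k)) @ coeffs
```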
Results
O. Ovadia, A. Kahana, E. Turkel, S. Dekel, Journal of Computational Physics, submitted
Summary and future work
Obstacle location and identification
Investigating source location
High measurement noise
Stability
Extending to 2 and 3 dimensions
Dispersion relation problem: optimized kernels
Experimental data
Thanks!