Transcript of Reinforcement Learning via Policy Optimization (hanxiaol/slides/rl-po.pdf)

Page 1:

Reinforcement Learning via Policy Optimization

Hanxiao Liu

November 22, 2017

Page 2:

Reinforcement Learning

Policy a ∼ π(s)

Page 3:

Example - Mario

Page 4:

Example - ChatBot

Page 5:

Applications - Video Games

Page 6:

Applications - Robotics

Page 7:

Applications - Combinatorial Problems

Page 8:

Applications - Bioinformatics

Page 9:

Supervised Learning vs RL

Supervised setup

- s_t ∼ P(·)
- a_t = π(s_t)
- immediate reward

Reinforcement setup

- s_t ∼ P(·|s_{t-1}, a_{t-1})
- a_t ∼ π(s_t)
- delayed reward

Both aim to maximize the total reward.

Page 10:

Markov Decision Process

What is the underlying assumption?

Page 11:

Landscape

Page 12:

Monte Carlo

- Return R(τ) = ∑_{t=1}^{T} R(s_t, a_t)
- Trajectory τ = s_0, a_0, . . . , s_T, a_T

Goal: find a good π such that R(τ) is maximized.

Policy evaluation via MC

1. Sample a trajectory τ given π.

2. Record R(τ).

Repeat the above many times and take the average.
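A minimal sketch of this Monte Carlo evaluation loop (not from the slides), assuming a hypothetical Gym-style environment with reset()/step() and an arbitrary policy function:

```python
import numpy as np

def mc_policy_evaluation(env, policy, num_trajectories=1000):
    """Estimate E[R(tau)] under `policy` by averaging returns of sampled trajectories."""
    returns = []
    for _ in range(num_trajectories):
        s, done, total = env.reset(), False, 0.0
        while not done:
            a = policy(s)                    # a ~ pi(s)
            s, r, done, _ = env.step(a)      # sample next state and reward
            total += r                       # accumulate R(tau) = sum_t R(s_t, a_t)
        returns.append(total)
    return np.mean(returns)
```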

Page 13:

Policy Gradient

Let’s parametrize π using a neural network.

- a ∼ π_θ(s)

Expected return

max_θ U(θ) = max_θ ∑_τ P(τ; π_θ) R(τ)                          (1)

Heuristic: Raise the probability of good trajectories.

1. Sample a trajectory τ under π_θ.
2. Update θ using the gradient ∇_θ log P(τ; π_θ) R(τ).

Page 14:

Policy Gradient

The log-derivative trick

∇_θ U(θ) = ∑_τ ∇_θ P(τ; π_θ) R(τ)                              (2)

         = ∑_τ (P(τ; π_θ) / P(τ; π_θ)) ∇_θ P(τ; π_θ) R(τ)      (3)

         = ∑_τ P(τ; π_θ) ∇_θ log P(τ; π_θ) R(τ)                (4)

         = E_{τ∼P(·; π_θ)} [∇_θ log P(τ; π_θ) R(τ)]            (5)

         ≈ ∇_θ log P(τ; π_θ) R(τ),   τ ∼ P(·; π_θ)             (6)

Page 15:

Policy Gradient

∇_θ U(θ) ≈ ∇_θ log P(τ; π_θ) R(τ),   τ ∼ P(·; π_θ)             (7)

- Analogous to SGD (so variance reduction is important), but the data distribution here is a moving target (so we may want a trust region).
- Note that ∇_θ log P(τ; π_θ) = ∑_t ∇_θ log π_θ(a_t|s_t): the (unknown) dynamics terms do not depend on θ and drop out of the gradient.
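As a small numerical illustration of the estimator in (7) (not from the slides), the following NumPy sketch treats a single Bernoulli draw as the "trajectory", so the Monte Carlo gradient can be checked against the exact one:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))          # policy: x ~ Bernoulli(sigmoid(theta))
r1, r0 = 3.0, 1.0                          # toy returns R(1), R(0)

# Exact gradient of U(theta) = p*R(1) + (1-p)*R(0)
exact = (r1 - r0) * p * (1 - p)

# Score-function estimate: mean of grad_theta log P(x; theta) * R(x),
# where grad_theta log P(x; theta) = x - p for this parameterization.
x = rng.binomial(1, p, size=200_000)
estimate = np.mean((x - p) * np.where(x == 1, r1, r0))

print(f"exact gradient {exact:.4f}  vs  score-function estimate {estimate:.4f}")
```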

Page 16:

Policy Gradient

We can subtract any constant from the reward

∇_θ ∑_τ P(τ; π_θ) (R(τ) − b)                                   (8)

= ∇_θ ∑_τ P(τ; π_θ) R(τ) − ∇_θ ∑_τ P(τ; π_θ) b                 (9)

= ∇_θ ∑_τ P(τ; π_θ) R(τ) − ∇_θ b                               (10)

= ∇_θ ∑_τ P(τ; π_θ) R(τ)                                       (11)

(Step (10) uses ∑_τ P(τ; π_θ) = 1, and step (11) uses that b is a constant, so ∇_θ b = 0.)

Page 17:

Policy Gradient

The variance is magnified by R(τ)

∇_θ U(θ) ≈ ∇_θ log P(τ; π_θ) R(τ)                              (12)

More fine-grained baseline?

∑_{t=0} ∇_θ log π_θ(a_t|s_t) ∑_{t'=0} R(s_{t'}, a_{t'})        (13)

→ ∑_{t=0} ∇_θ log π_θ(a_t|s_t) ∑_{t'=t} R(s_{t'}, a_{t'})      (14)

→ ∑_{t=0} ∇_θ log π_θ(a_t|s_t) (R_t − b_t)                     (15)

where R_t = ∑_{t'=t} R(s_{t'}, a_{t'}) is the reward-to-go in (14).
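A small NumPy sketch of the reward-to-go R_t from (14) (not from the slides); the optional discount factor is an assumption here, anticipating the advantage-estimation slide:

```python
import numpy as np

def reward_to_go(rewards, gamma=1.0):
    """R_t = sum_{t'>=t} gamma^(t'-t) * r_{t'}, computed by a single backward pass."""
    rtg = np.zeros_like(rewards, dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

print(reward_to_go(np.array([1.0, 0.0, 2.0])))   # -> [3. 2. 2.]
```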

Page 18:

Policy Gradient

∑_{t=0} ∇_θ log π_θ(a_t|s_t) (R_t − b_t)                       (16)

Good choice of b_t?

b*_t = argmin_b E[(R_t − b)²]                                  (17)
     ≈ V_φ(s_t)                                                (18)

∑_{t=0} ∇_θ log π_θ(a_t|s_t) (R_t − V_φ(s_t))                  (19)

where A_t = R_t − V_φ(s_t) is the advantage.

- Promote actions that lead to positive advantage.

Page 19:

Policy Gradient

∑_{t=0} ∇_θ log π_θ(a_t|s_t) A_t                               (20)

- Sample a trajectory τ.
- For each step in τ:
  - R_t = ∑_{t'=t} r_{t'}
  - A_t = R_t − V_φ(s_t)
- Take a gradient step along ∇_φ ∑_{t=0} ‖V_φ(s_t) − R_t‖²
- Take a gradient step along ∇_θ ∑_{t=0} log π_θ(a_t|s_t) A_t
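A minimal PyTorch sketch of this loop (not from the slides); the network sizes, Adam optimizers, and discrete two-action setup are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Illustrative networks; the slide only assumes some parametric pi_theta and V_phi.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))   # logits over 2 actions
value_fn = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt_theta = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_phi = torch.optim.Adam(value_fn.parameters(), lr=1e-3)

def pg_step(states, actions, rewards):
    """One vanilla policy-gradient step on a single trajectory.

    states: [T, 4] float tensor, actions: [T] long tensor, rewards: [T] float tensor.
    """
    # R_t = sum_{t'>=t} r_{t'}  (undiscounted reward-to-go, as on this slide)
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])

    values = value_fn(states).squeeze(-1)
    advantages = (returns - values).detach()          # A_t = R_t - V_phi(s_t)

    # Gradient step for phi: minimize sum_t ||V_phi(s_t) - R_t||^2
    value_loss = ((values - returns) ** 2).sum()
    opt_phi.zero_grad()
    value_loss.backward()
    opt_phi.step()

    # Gradient (ascent) step for theta on sum_t log pi_theta(a_t|s_t) * A_t
    log_probs = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)
    policy_loss = -(log_probs * advantages).sum()
    opt_theta.zero_grad()
    policy_loss.backward()
    opt_theta.step()
```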

Page 20:

TRPO

In PG, our optimization objective is a moving target, defined from trajectory samples drawn under the current policy:

- each sample is only used once

An optimization objective that reuses all historical data?

Page 21:

TRPO

Policy gradient

∑_{t=0} ∇_θ log π_θ(a_t|s_t) A_t                               (21)

Objective (on-policy)

E_π [A(s, a)]                                                  (22)

Objective (off-policy), via importance sampling

E_{π_old} [ (π(a|s) / π_old(a|s)) A_old(s, a) ]                (23)

Page 22:

TRPO

max_π   E_{π_old} [ (π(a|s) / π_old(a|s)) A_old(s, a) ]  ≈  ∑_{n=1}^{N} (π(a_n|s_n) / π_old(a_n|s_n)) A_n      (24)

s.t.    KL(π_old, π) ≤ δ                                                                                       (25)

- A quadratic approximation is used for the KL constraint.
- Use conjugate gradient to compute the natural gradient direction F⁻¹g.
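A minimal NumPy sketch of the conjugate-gradient solve for the direction x = F⁻¹g (not from the slides). In TRPO the product Fv is a Fisher-vector product obtained with automatic differentiation; the explicit matrix below is only a stand-in:

```python
import numpy as np

def conjugate_gradient(fisher_vec_prod, g, iters=10, tol=1e-10):
    """Approximately solve F x = g given only the map v -> F v (F symmetric PSD)."""
    x = np.zeros_like(g)
    r = g.copy()              # residual g - F x (x = 0 initially)
    p = g.copy()
    r_dot = r @ r
    for _ in range(iters):
        Fp = fisher_vec_prod(p)
        alpha = r_dot / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        new_r_dot = r @ r
        if new_r_dot < tol:
            break
        p = r + (new_r_dot / r_dot) * p
        r_dot = new_r_dot
    return x

# Toy usage with an explicit SPD matrix standing in for the Fisher-vector product.
F = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([1.0, -1.0])
print(conjugate_gradient(lambda v: F @ v, g), np.linalg.solve(F, g))
```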

Page 23:

PPO

max_π  ∑_{n=1}^{N} (π(a_n|s_n) / π_old(a_n|s_n)) A_n  −  β · KL(π_old, π)      (26)

A fixed β does not work well:

- shrink β when KL(π_old, π) is small
- increase β otherwise
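A sketch of one such adaptive rule (the target KL and the factors 1.5 and 2 below are illustrative assumptions, not from the slides):

```python
def update_beta(beta, measured_kl, target_kl=0.01):
    """Adapt the KL-penalty coefficient after each policy update."""
    if measured_kl < target_kl / 1.5:
        beta /= 2.0          # KL too small: penalty is too strong, relax it
    elif measured_kl > target_kl * 1.5:
        beta *= 2.0          # KL too large: penalty is too weak, strengthen it
    return beta
```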

Page 24:

PPO

Alternative ways to penalize large changes in π? Modify

(π(a_n|s_n) / π_old(a_n|s_n)) A_n = r_n(θ) A_n                 (27)

as

min( r_n(θ) A_n,  clip(r_n(θ), 1 − ε, 1 + ε) A_n )             (28)

- Be pessimistic whenever a large change in π leads to a better objective.
- Same as the original objective otherwise.
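A minimal PyTorch sketch of the clipped surrogate in (28) (not from the slides); working with log-probabilities and ε = 0.2 are implementation assumptions:

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    """Negative clipped surrogate objective, averaged over sampled (s_n, a_n)."""
    ratios = torch.exp(log_probs - old_log_probs)              # r_n(theta)
    unclipped = ratios * advantages
    clipped = torch.clamp(ratios, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()               # minimize the negation
```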

Page 25:

Advantage Estimation

Currently

A_t = r_t + r_{t+1} + r_{t+2} + · · · − V(s_t)                 (29)

More bias, less variance (ignoring long-term effects)

A_t = r_t + γ r_{t+1} + γ² r_{t+2} + · · · − V(s_t)            (30)

Bootstrapping

A_t = r_t + γ r_{t+1} + γ² r_{t+2} + · · · − V(s_t)            (31)
    = r_t + γ (r_{t+1} + γ r_{t+2} + . . .) − V(s_t)           (32)
    ≈ r_t + γ V(s_{t+1}) − V(s_t)                              (33)

We may unroll for more than one step.
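A small NumPy sketch of the one-step bootstrapped advantage in (33), together with the multi-step unrolling mentioned on this slide (not from the slides themselves; the default γ is an assumption):

```python
import numpy as np

def one_step_advantage(rewards, values, gamma=0.99):
    """A_t = r_t + gamma * V(s_{t+1}) - V(s_t); `values` has length T+1 (includes V(s_T))."""
    rewards, values = np.asarray(rewards, float), np.asarray(values, float)
    return rewards + gamma * values[1:] - values[:-1]

def n_step_advantage(rewards, values, n, gamma=0.99):
    """Unroll n steps of real reward before bootstrapping with V."""
    T = len(rewards)
    adv = np.empty(T)
    for t in range(T):
        k = min(n, T - t)                       # truncate at the end of the trajectory
        ret = sum(gamma**i * rewards[t + i] for i in range(k))
        ret += gamma**k * values[t + k]         # bootstrap with V(s_{t+k})
        adv[t] = ret - values[t]
    return adv
```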

Page 26:

A2C

Advantage Actor-Critic

- Sample a trajectory τ.
- For each step in τ:
  - R_t = r_t + γ V(s_{t+1})
  - A_t = R_t − V(s_t)
- Take a gradient step using

  ∇_{θ,φ} ∑_{t=0} [ − log π_θ(a_t|s_t) A_t + ‖V_φ(s_t) − R_t‖² ]

May use TRPO or PPO for policy optimization.
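A minimal PyTorch sketch of the combined loss above (not from the slides); detaching A_t so the policy term does not back-propagate into V_φ is a standard implementation choice, and the tensor shapes are assumptions:

```python
import torch

def a2c_loss(logits, values, actions, rewards, next_values, gamma=0.99):
    """Joint actor-critic loss: sum_t [ -log pi(a_t|s_t) A_t + (V(s_t) - R_t)^2 ].

    logits: [T, num_actions], values/rewards/next_values: [T] floats, actions: [T] long.
    """
    returns = rewards + gamma * next_values.detach()        # R_t = r_t + gamma V(s_{t+1})
    advantages = (returns - values).detach()                # A_t = R_t - V(s_t), no gradient
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    actor_loss = -(log_probs * advantages).sum()
    critic_loss = ((values - returns) ** 2).sum()
    return actor_loss + critic_loss
```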

Page 27:

A3C

Variance reduction via multiple actors

Figure: Asynchronous Advantage Actor-Critic (diagram not included in this transcript)
