Monte-Carlo Planning Look Ahead Trees


Page 1: Monte-Carlo  Planning Look Ahead Trees

Monte-Carlo Planning: Look Ahead Trees

Alan Fern

Page 2: Monte-Carlo  Planning Look Ahead Trees

Monte-Carlo Planning Outline

- Single State Case (multi-armed bandits)
  - A basic tool for other algorithms
- Monte-Carlo Policy Improvement
  - Policy rollout
  - Policy Switching
- Monte-Carlo Look-Ahead Trees
  - Sparse Sampling
  - Sparse Sampling via Recursive Bandits
  - UCT and variants

Page 3: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling

- Rollout and policy switching do not guarantee optimality or near-optimality
  - They only guarantee performance relative to the base policies
- Can we develop Monte-Carlo methods that give us near-optimal policies?
  - With computation that does NOT depend on the number of states!
  - This was an open problem until the late 90's.
- In deterministic games and search problems it is common to build a look-ahead tree at a state to select the best action
  - Can we generalize this to general stochastic MDPs?

Page 4: Monte-Carlo  Planning Look Ahead Trees

Online Planning with Look-Ahead Trees

- At each state we encounter in the environment, we build a look-ahead tree of depth h and use it to estimate the optimal Q-value of each action (a Python sketch follows below)
  - Select the action with the highest Q-value estimate

  s = current state of environment
  Repeat
    T = BuildLookAheadTree(s)   ;; sparse sampling or UCT
                                ;; tree provides Q-value estimates for the root actions
    a = BestRootAction(T)       ;; action with best Q-value
    Execute action a in environment
    s = the resulting state
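A minimal Python sketch of this outer loop, assuming hypothetical interfaces: env, build_lookahead_tree, and best_root_action are placeholder names for illustration, not functions from the slides.

```python
# Minimal sketch of the online planning loop; `env`, `build_lookahead_tree`,
# and `best_root_action` are hypothetical placeholder interfaces.

def online_planning(env, build_lookahead_tree, best_root_action,
                    num_steps, depth, width):
    """Repeatedly plan from the current state and execute the chosen action."""
    s = env.current_state()
    for _ in range(num_steps):
        # Build a depth-limited look-ahead tree rooted at s
        # (e.g. via sparse sampling or UCT); it provides root Q-value estimates.
        tree = build_lookahead_tree(s, depth, width)
        # Pick the root action with the highest estimated Q-value.
        a = best_root_action(tree)
        # Execute the action in the real environment; s becomes the resulting state.
        s, reward = env.step(a)
```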

Page 5: Monte-Carlo  Planning Look Ahead Trees

Planning with Look-Ahead Trees

[Figure: a real-world state/action sequence; at each encountered state s a look-ahead tree is built over actions a1 and a2, sampled next states s11 ... s1w and s21 ... s2w, and leaf rewards R(sij, ak).]

Page 6: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling

- Again focus on finite horizons
  - Arbitrarily good approximation for large enough horizon h
- h-horizon optimal Q-function (denoted Q*)
  - Value of taking a in s and following π* for h-1 steps
  - Q*(s,a,h) = E[R(s,a) + β V*(T(s,a), h-1)]
- Key identity (Bellman's equations):
  - V*(s,h) = max_a Q*(s,a,h)
  - π*(s,h) = argmax_a Q*(s,a,h)
- Sparse sampling estimates Q-values by building a sparse expectimax tree

Page 7: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling

- Will present two views of the algorithm
  - The first is perhaps easier to digest and doesn't appeal to bandit algorithms
  - The second is more generalizable and can leverage advances in bandit algorithms

1. Approximation to the full expectimax tree
2. Recursive bandit algorithm

Page 8: Monte-Carlo  Planning Look Ahead Trees

Expectimax Tree

- Key definitions:
  - V*(s,h) = max_a Q*(s,a,h)
  - Q*(s,a,h) = E[R(s,a) + β V*(T(s,a), h-1)]
- Expand the definitions recursively to compute V*(s,h):

  V*(s,h) = max_{a1} Q*(s,a1,h)
          = max_{a1} E[R(s,a1) + β V*(T(s,a1), h-1)]
          = max_{a1} E[R(s,a1) + β max_{a2} Q*(T(s,a1), a2, h-1)]
          = max_{a1} E[R(s,a1) + β max_{a2} E[R(T(s,a1), a2) + β V*(T(T(s,a1), a2), h-2)]] = ...

- Can view this expansion as an expectimax tree
  - Each expectation is a weighted sum over states

Page 9: Monte-Carlo  Planning Look Ahead Trees

Exact Expectimax Tree for V*(s,H)

[Figure: the exact expectimax tree for V*(s,H). Levels alternate max nodes (branching over the k actions) and expectation nodes (branching over the n possible next states) down to horizon H, giving (kn)^H leaves. The root V*(s,H) and Q*(s,a,H) are computed by a recursive procedure. The size depends on the size of the state space: bad!]

Page 10: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling Tree

[Figure: the same expectimax tree, but each expectation node is replaced by an average over w sampled next states (sampling width w), so the tree has (kw)^H leaves instead of (kn)^H. w will typically be much smaller than n.]
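To make the size difference concrete, here is a small illustrative calculation; the particular numbers k=5, n=10^6, w=5, H=4 are made up for this example and are not from the slides.

```python
# Illustrative leaf counts; all numbers here are made-up examples.
k = 5        # number of actions
n = 10**6    # number of next states an exact expectation branches over
w = 5        # sampling width used by sparse sampling
H = 4        # horizon (tree depth)

exact_leaves = (k * n) ** H    # (kn)^H = 625 * 10**24: depends on the state space
sparse_leaves = (k * w) ** H   # (kw)^H = 390625: independent of the state space
```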

Page 11: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling Tree

[Figure: a sparse sampling tree rooted at s with actions a1 and a2 and sampled next states s11 ... s1w and s21 ... s2w; max nodes give the V*(s,H) estimate and sampled averages give the Q*(s,a1,H) estimate.]

We could create an entire tree at each decision step and return the action with the highest Q* value at the root.

High memory cost!

Page 12: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling [Kearns et al., 2002]

SparseSampleTree(s, h, w)
  If h = 0 Return [0, null]
  For each action a in s
    Q*(s,a,h) = 0
    For i = 1 to w
      Simulate taking a in s, resulting in state si and reward ri
      [V*(si,h-1), a*] = SparseSampleTree(si, h-1, w)
      Q*(s,a,h) = Q*(s,a,h) + ri + β V*(si,h-1)
    Q*(s,a,h) = Q*(s,a,h) / w        ;; estimate of Q*(s,a,h)
  V*(s,h) = max_a Q*(s,a,h)          ;; estimate of V*(s,h)
  a* = argmax_a Q*(s,a,h)
  Return [V*(s,h), a*]

The Sparse Sampling algorithm computes root value via depth first expansion

Return value estimate V*(s,h) of state s and estimated optimal action a*
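A runnable Python sketch of this procedure; the generative-model interface (actions(s) returning the available actions and simulate(s, a) returning a sampled next state and reward) and the discount name beta are assumptions for illustration, not part of the slides.

```python
def sparse_sample_tree(s, h, w, actions, simulate, beta=1.0):
    """Depth-first sparse sampling (a sketch, not the authors' code).

    Returns (estimate of V*(s,h), estimated optimal action). The generative
    model is assumed to be given by `actions(s)` (available actions) and
    `simulate(s, a)` (a sampled (next_state, reward) pair).
    """
    if h == 0:
        return 0.0, None

    q = {}
    for a in actions(s):
        total = 0.0
        for _ in range(w):
            s_next, r = simulate(s, a)                  # sampled next state and reward
            v_next, _ = sparse_sample_tree(s_next, h - 1, w, actions, simulate, beta)
            total += r + beta * v_next
        q[a] = total / w                                # estimate of Q*(s,a,h)

    best_a = max(q, key=q.get)                          # argmax_a Q*(s,a,h)
    return q[best_a], best_a                            # [V*(s,h) estimate, a*]
```

At each real decision step one would call sparse_sample_tree(s, H, w, actions, simulate) and execute the returned action, as in the online planning loop sketched earlier.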

Page 13: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling (Cont’d)h For a given desired accuracy, how large

should sampling width and depth be?5 Answered: Kearns, Mansour, and Ng (1999)

h Good news: gives values for w and H to achieve PAC guarantee on optimality5 Values are independent of state-space size!5 First near-optimal general MDP planning algorithm

whose runtime didn’t depend on size of state-space

h Bad news: the theoretical values are typically still intractably large---also exponential in H

5 Exponential in H is the best we can do in general5 In practice: use small H & heuristic value at leaves

Page 14: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling w/ Leaf Heuristic

SparseSampleTree(s, h, w)
  If h = 0 Return [V̂(s), null]       ;; heuristic value at leaves
  For each action a in s
    Q*(s,a,h) = 0
    For i = 1 to w
      Simulate taking a in s, resulting in state si and reward ri
      [V*(si,h-1), a*] = SparseSampleTree(si, h-1, w)
      Q*(s,a,h) = Q*(s,a,h) + ri + β V*(si,h-1)
    Q*(s,a,h) = Q*(s,a,h) / w        ;; estimate of Q*(s,a,h)
  V*(s,h) = max_a Q*(s,a,h)          ;; estimate of V*(s,h)
  a* = argmax_a Q*(s,a,h)
  Return [V*(s,h), a*]

Let V̂ be a heuristic value function estimator. Generally this is a very fast function, since it is evaluated at all leaves.
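Under the same assumed interfaces as the earlier sketch, this variant changes only the base case; heuristic_value is a placeholder name for whatever fast evaluator is available, not a function from the slides.

```python
def sparse_sample_tree_with_heuristic(s, h, w, actions, simulate,
                                      heuristic_value, beta=1.0):
    """Sparse sampling with a heuristic at the leaves (sketch).

    Identical to sparse_sample_tree above, except that leaves return
    `heuristic_value(s)`, an assumed fast user-supplied evaluator,
    instead of 0.
    """
    if h == 0:
        return heuristic_value(s), None                 # heuristic value at leaves

    q = {}
    for a in actions(s):
        total = 0.0
        for _ in range(w):
            s_next, r = simulate(s, a)
            v_next, _ = sparse_sample_tree_with_heuristic(
                s_next, h - 1, w, actions, simulate, heuristic_value, beta)
            total += r + beta * v_next
        q[a] = total / w

    best_a = max(q, key=q.get)
    return q[best_a], best_a
```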

Page 15: Monte-Carlo  Planning Look Ahead Trees

Shallow Horizon w/ Leaf Heuristics

[Figure: a shallow sparse sampling tree (root s, actions a1 and a2, sampled states s11 ... s1w and s21 ... s2w) with heuristic values at the leaves.]

Often a shallow sparse sampling search with a simple V̂ at the leaves can be very effective.

Page 16: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling

- Will present two views of the algorithm
  - The first is perhaps easier to digest
  - The second is more generalizable and can leverage advances in bandit algorithms

1. Approximation to the full expectimax tree
2. Recursive bandit algorithm
   - Consider the horizon H=2 case first
   - Then show for general H

Page 17: Monte-Carlo  Planning Look Ahead Trees

Sparse Sampling Tree

[Figure: the sparse sampling tree again, with the max node giving the V*(s,H) estimate and the sampled averages giving the Q*(s,a1,H) estimate.]

Each max node in the tree is just a bandit problem, i.e. we must choose the action with the highest Q*(s,a,h)---approximate this via a bandit algorithm.

Page 18: Monte-Carlo  Planning Look Ahead Trees

Bandit View of Sparse Sampling (H=2)

Consider a 2-horizon problem.

[Figure: the H=2 tree rooted at s; at each sampled state (e.g. s11) the h=1 level is a traditional bandit problem with stochastic arm rewards R(s11, ai), whose solution gives a V*(s11,1) estimate.]

Implement a bandit algorithm to return the estimated expected reward of the best arm.

Page 19: Monte-Carlo  Planning Look Ahead Trees

Bandit View of Sparse Sampling (H=2)

[Figure: the same H=2 tree; the root is a higher-level bandit problem over actions a1 and a2.]

h=2: a higher-level bandit problem (finds the arm with the best Q* value for h=2).

Pulling an arm returns a Q-value estimate by: 1) sampling a next state s', 2) running an h=1 bandit at s', and 3) returning the immediate reward + the estimated value of s'.

Page 20: Monte-Carlo  Planning Look Ahead Trees

Bandit View of Sparse Sampling (H=2)

Consider UniformBandit using w pulls per arm.

[Figure: the H=2 tree annotated with estimates: each leaf-level bandit uses its sampled rewards r11, ..., r2w to produce a V*(s11,1), ..., V*(s2w,1) estimate, and the root arms produce Q*(s,a1,2) and Q*(s,a2,2) estimates.]

Page 21: Monte-Carlo  Planning Look Ahead Trees

Bandit View: General Horizon H

[Figure: root state s with k arms a1, a2, ..., ak; pulling arm ai returns SimQ*(s,ai,h).]

- SimQ*(s,a,h): we want this to return a random sample of the immediate reward plus the (h-1)-horizon value of the resulting state when executing action a in s.
- If this is (approximately) satisfied, then the bandit algorithm will select a near-optimal arm.

Page 22: Monte-Carlo  Planning Look Ahead Trees

Bandit View: General Horizon H

[Figure: root state s with k arms a1, ..., ak; pulling arm ai returns SimQ*(s,ai,h).]

Definition:

SimQ*(s, a, h)
  r = R(s,a)
  If h = 1 then Return r
  s' = T(s,a)
  Return r + β BanditValue( SimQ*(s',a1,h-1), ..., SimQ*(s',ak,h-1) )
       ;; k-arm bandit problem at state s'

BanditValue(...) returns the estimated expected value of the best arm (e.g. via UniformBandit).
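A Python sketch of this recursive-bandit view under the same assumed generative-model interface (actions(s), simulate(s, a)); UniformBandit with w pulls per arm is used here, which matches sparse sampling, but another bandit algorithm could be plugged in instead.

```python
def sim_q(s, a, h, actions, simulate, w, beta=1.0):
    """Random sample of the immediate reward plus the discounted (h-1)-horizon
    value of the resulting state when executing a in s (a sketch)."""
    s_next, r = simulate(s, a)
    if h == 1:
        return r
    # k-arm bandit problem at s_next: estimated expected value of the best arm.
    return r + beta * uniform_bandit_value(s_next, h - 1, actions, simulate, w, beta)

def uniform_bandit_value(s, h, actions, simulate, w, beta=1.0):
    """UniformBandit: pull each arm w times and return the best average."""
    best = float("-inf")
    for a in actions(s):
        avg = sum(sim_q(s, a, h, actions, simulate, w, beta) for _ in range(w)) / w
        best = max(best, avg)
    return best

def select_root_action(s, H, actions, simulate, w, beta=1.0):
    """Root decision: pull each root arm w times and return the best action."""
    averages = {a: sum(sim_q(s, a, H, actions, simulate, w, beta)
                       for _ in range(w)) / w
                for a in actions(s)}
    return max(averages, key=averages.get)
```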

Page 23: Monte-Carlo  Planning Look Ahead Trees

Recursive UniformBandit: General H

Consider UniformBandit.

[Figure: the recursion expands a tree rooted at s over actions a1 and a2 and sampled states s11, ..., s1w, and so on down the horizon.]

Clearly this replicates Sparse Sampling.

Page 24: Monte-Carlo  Planning Look Ahead Trees

Recursive Bandit: General Horizon H

SimQ*(s, a, h)
  r = R(s,a)
  If h = 1 then Return r
  s' = T(s,a)
  Return r + β BanditValue( SimQ*(s',a1,h-1), ..., SimQ*(s',ak,h-1) )

SelectRootAction(s, H)
  Return the arm chosen by the bandit algorithm over SimQ*(s,a1,H), ..., SimQ*(s,ak,H)

- When the bandit is UniformBandit, this is the same as Sparse Sampling.
- Can plug in more advanced bandit algorithms for possible improvement!

Page 25: Monte-Carlo  Planning Look Ahead Trees

Uniform vs. Non-Uniform Bandits

- Sparse sampling wastes time on bad parts of the tree
  - Devotes equal resources to each state encountered in the tree
  - Would like to focus on the most promising parts of the tree
- But how do we control exploration of new parts of the tree vs. exploiting promising parts?
- Use non-uniform bandits

Page 26: Monte-Carlo  Planning Look Ahead Trees

Non-Uniform Recursive Bandits

UCB-Based Sparse Sampling
- Use UCB as the bandit algorithm
- There is an analysis of this algorithm's bias (it goes to zero)

H.S. Chang, M. Fu, J. Hu, and S.I. Marcus. An adaptive sampling algorithm for solving Markov decision processes. Operations Research, 53(1):126--139, 2005.

Page 27: Monte-Carlo  Planning Look Ahead Trees

Recursive UCB: General H

[Figure: the same recursive tree expansion, but with UCB used as the bandit at each node (UCB in the recursive bandit).]

Page 28: Monte-Carlo  Planning Look Ahead Trees

Non-Uniform Recursive Bandits

- UCB-Based Sparse Sampling
  - Is UCB the right choice?
  - We don't really care about cumulative regret.
  - My guess: part of the reason UCB was tried was for purposes of leveraging its analysis
- Sparse Sampling with a different bandit
  - Use it as the bandit algorithm in place of UCB
  - I haven't seen this in the literature
  - Might be better in practice since it is more geared to simple regret
  - This would raise issues in the analysis (beyond the scope of this class).

Page 29: Monte-Carlo  Planning Look Ahead Trees

Non-Uniform Recursive Bandits

- Good news: we might expect to improve over pure Sparse Sampling by changing the bandit algorithm
- Bad news: this recursive bandit approach has poor "anytime behavior", which is often important in practice
- Anytime behavior: good anytime behavior roughly means that an algorithm should be able to use small amounts of additional time to get small improvements
  - What about these recursive bandits?

Page 30: Monte-Carlo  Planning Look Ahead Trees

Recursive UCB: General H

[Figure: the recursive UCB tree expansion again.]

- After pulling a single arm at the root we wait for an (H-1)-deep recursive tree expansion until getting the result.

Page 31: Monte-Carlo  Planning Look Ahead Trees

Non-Uniform Recursive Bandits

- Information at the root only increases after each of the expensive root arm pulls
  - Much time passes between these pulls
- Thus, small amounts of additional time do not result in any additional information at the root!
  - Thus, poor anytime behavior
  - Running for 10 sec could be essentially the same as running for 10 min (for large enough H)
- Can we improve the anytime behavior?

Page 32: Monte-Carlo  Planning Look Ahead Trees

Monte-Carlo Planning Outline

- Single State Case (multi-armed bandits)
  - A basic tool for other algorithms
- Monte-Carlo Policy Improvement
  - Policy rollout
  - Policy Switching
- Monte-Carlo Look-Ahead Trees
  - Sparse Sampling
  - Sparse Sampling via Recursive Bandits
  - Monte Carlo Tree Search: UCT and variants

Page 33: Monte-Carlo  Planning Look Ahead Trees

UCT Algorithm

- UCT is an instance of Monte-Carlo Tree Search
  - Applies bandit principles in this framework
  - Similar theoretical properties to sparse sampling
  - Much better anytime behavior than sparse sampling
- Famous for yielding a major advance in computer Go
- A growing number of success stories
  - Practical successes still not understood so well

Kocsis, L. & Szepesvari, C. (2006). Bandit Based Monte-Carlo Planning. European Conference on Machine Learning.

Page 34: Monte-Carlo  Planning Look Ahead Trees

Monte-Carlo Tree Search

- Builds a sparse look-ahead tree rooted at the current state by repeated Monte-Carlo simulation of a "rollout policy"
- During construction each tree node s stores:
  - state-visitation count n(s)
  - action counts n(s,a)
  - action values Q(s,a)
- Repeat until time is up (see the sketch below):
  1. Execute the rollout policy starting from the root until the horizon (generates a state-action-reward trajectory)
  2. Add the first node not in the current tree to the tree
  3. Update the statistics of each tree node s on the trajectory
     - Increment n(s) and n(s,a) for the selected action a
     - Update Q(s,a) by the total reward observed after the node
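A minimal Python sketch of this construction loop; the Node class, the rollout_policy and simulate interfaces, and the time-budget handling are illustrative assumptions, not code from the slides.

```python
import time

# Minimal sketch of the MCTS construction loop described above; the Node class,
# the rollout_policy and simulate interfaces, and the time-budget handling are
# illustrative assumptions.

class Node:
    def __init__(self):
        self.n = 0       # state-visitation count n(s)
        self.n_a = {}    # action counts n(s,a)
        self.q = {}      # action values Q(s,a)

def mcts(root_state, rollout_policy, simulate, horizon, time_budget):
    tree = {root_state: Node()}
    deadline = time.time() + time_budget
    while time.time() < deadline:
        # 1. Execute the rollout policy from the root until the horizon,
        #    recording the state-action-reward trajectory.
        trajectory, s = [], root_state
        for _ in range(horizon):
            a = rollout_policy(tree, s)
            s_next, r = simulate(s, a)
            trajectory.append((s, a, r))
            s = s_next
        # 2. Add the first state on the trajectory not already in the tree.
        for (s_t, _, _) in trajectory:
            if s_t not in tree:
                tree[s_t] = Node()
                break
        # 3. Update the statistics of each tree node on the trajectory using
        #    the total reward observed from that node onward.
        rewards = [r for (_, _, r) in trajectory]
        for i, (s_t, a_t, _) in enumerate(trajectory):
            if s_t in tree:
                node, ret = tree[s_t], sum(rewards[i:])
                node.n += 1
                node.n_a[a_t] = node.n_a.get(a_t, 0) + 1
                # Running average of observed returns for (s_t, a_t).
                node.q[a_t] = (node.q.get(a_t, 0.0)
                               + (ret - node.q.get(a_t, 0.0)) / node.n_a[a_t])
    return tree
```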


What is the rollout policy?

Page 35: Monte-Carlo  Planning Look Ahead Trees
Page 36: Monte-Carlo  Planning Look Ahead Trees

Rollout Policies

- Monte-Carlo Tree Search algorithms mainly differ in their choice of rollout policy
- Rollout policies have two distinct phases
  - Tree policy: selects actions at nodes already in the tree (each action must be selected at least once)
  - Default policy: selects actions after leaving the tree
- Key idea: the tree policy can use statistics collected from previous trajectories to intelligently expand the tree in the most promising direction
  - Rather than uniformly exploring actions at each node

Page 37: Monte-Carlo  Planning Look Ahead Trees

Iteration 1

[Figure: from the current world state, the default policy rollout reaches a terminal state with reward 1; a new tree node is added and the values along the path are updated to 1.]

- Initially the tree is a single leaf
- At a leaf node the tree policy selects a random action, then executes the default policy
- Assume all non-zero reward occurs at terminal nodes.

Page 38: Monte-Carlo  Planning Look Ahead Trees

Iteration 2

[Figure: the second root action is tried; the default policy reaches a terminal state with reward 0, and a new tree node with value 0 is added alongside the earlier value-1 node.]

- Must select each action at a node at least once

Page 39: Monte-Carlo  Planning Look Ahead Trees

Iteration 3

[Figure: the root value is now 1/2 (averaging the 1 and 0 outcomes), with child values 1 and 0.]

- Must select each action at a node at least once

Page 40: Monte-Carlo  Planning Look Ahead Trees

Iteration 3

[Figure: both root actions have been tried once, so the tree policy now chooses which branch to follow from the root (values 1/2 at the root, 1 and 0 at its children).]

- When all of a node's actions have been tried once, select the action according to the tree policy

Page 41: Monte-Carlo  Planning Look Ahead Trees

Iteration 3

[Figure: the tree policy descends to a leaf, a random action is taken there, the default policy reaches a terminal state with reward 0, and a new tree node with value 0 is added.]

- When all of a node's actions have been tried once, select the action according to the tree policy

Page 42: Monte-Carlo  Planning Look Ahead Trees

Iteration 4

[Figure: statistics are updated along the followed path (root value 1/3, followed child 1/2, other nodes 0); the tree policy again selects a branch from the root.]

- When all of a node's actions have been tried once, select the action according to the tree policy

Page 43: Monte-Carlo  Planning Look Ahead Trees

Iteration 4

[Figure: the tree policy descends through the tree to select where to expand next (node values 1, 1/2, 1/3, 0, 0).]

- When all of a node's actions have been tried once, select the action according to the tree policy

Page 44: Monte-Carlo  Planning Look Ahead Trees

[Figure: statistics along the followed path have been updated to 2/3 after a trajectory with terminal reward 1; the other branches remain at 0.]

- When all of a node's actions have been tried once, select the action according to the tree policy

What is an appropriate tree policy? Default policy?

Page 45: Monte-Carlo  Planning Look Ahead Trees

UCT Algorithm [Kocsis & Szepesvari, 2006]

- Basic UCT uses a random default policy
  - In practice often use a hand-coded or learned policy
- Tree policy is based on UCB:
  - Q(s,a): average reward received in current trajectories after taking action a in state s
  - n(s,a): number of times action a taken in s
  - n(s): number of times state s encountered

  π_UCT(s) = argmax_a  Q(s,a) + c √( ln n(s) / n(s,a) )

  where c is a theoretical constant that is empirically selected in practice (theoretical results are based on c equal to the horizon H)
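A small Python sketch of this rule, written against the Node statistics from the earlier MCTS sketch; the exploration constant c and the handling of untried actions are illustrative choices, not prescribed by the slides.

```python
import math
import random

def uct_tree_policy(node, actions, c):
    """UCB-based tree policy for a node with statistics n, n_a, q (as in the
    MCTS sketch earlier): argmax_a Q(s,a) + c * sqrt(ln n(s) / n(s,a))."""
    untried = [a for a in actions if node.n_a.get(a, 0) == 0]
    if untried:
        return random.choice(untried)   # each action must be tried at least once
    return max(actions,
               key=lambda a: node.q[a] + c * math.sqrt(math.log(node.n) / node.n_a[a]))

def final_root_action(node):
    """Final action choice at the root: highest Q estimate, no exploration term."""
    return max(node.q, key=node.q.get)
```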

Page 46: Monte-Carlo  Planning Look Ahead Trees

[Figure: UCT example (node values 1, 1/2, 1/3, 0, 0); when all of a node's actions have been tried once, the tree policy chooses between a1 and a2 at the root using π_UCT(s) = argmax_a Q(s,a) + c √( ln n(s) / n(s,a) ).]

Page 47: Monte-Carlo  Planning Look Ahead Trees

[Figure: the tree policy applies the same UCB rule at the next node down (node values 1, 1/2, 1/3, 0, 0).]

Page 48: Monte-Carlo  Planning Look Ahead Trees

[Figure: the descent continues under the UCB rule (node values 1, 1/2, 1/3, 0, 0, 0).]

Page 49: Monte-Carlo  Planning Look Ahead Trees

[Figure: statistics after the update (node values 1, 1/3, 1/4, 0, 0/1, 0).]

Page 50: Monte-Carlo  Planning Look Ahead Trees

[Figure: another trajectory is generated under the UCB tree policy (node values 1, 1/3, 1/4, 0, 0/1, 0, 1).]

Page 51: Monte-Carlo  Planning Look Ahead Trees

[Figure: statistics after the update (node values 1, 1/3, 2/5, 1/2, 0/1, 0, 1).]

Page 52: Monte-Carlo  Planning Look Ahead Trees

UCT Recap

- To select an action at a state s:
  - Build a tree using N iterations of Monte-Carlo tree search
    - Default policy is uniform random
    - Tree policy is based on the UCB rule
  - Select the action that maximizes Q(s,a) (note that this final action selection does not take the exploration term into account, just the Q-value estimate)
- The more simulations, the more accurate

Page 53: Monte-Carlo  Planning Look Ahead Trees

Computer Go

“Task Par Excellence for AI” (Hans Berliner)

“New Drosophila of AI” (John McCarthy)

“Grand Challenge Task” (David Mechner)

9x9 (smallest board) 19x19 (largest board)

Page 54: Monte-Carlo  Planning Look Ahead Trees

A Brief History of Computer Go

2005: Computer Go is impossible!

2006: UCT invented and applied to 9x9 Go (Kocsis, Szepesvari; Gelly et al.)

2007: Human master level achieved at 9x9 Go (Gelly, Silver; Coulom)

2008: Human grandmaster level achieved at 9x9 Go (Teytaud et al.)

2012: Zen program beats former international champion Takemiya Masaki with only a 4-stone handicap on 19x19

Computer Go Server rating over this period: 1800 ELO to 2600 ELO

Page 55: Monte-Carlo  Planning Look Ahead Trees

Other Successes

Klondike Solitaire (wins 40% of games)

General Game Playing Competition

Probabilistic Planning Competition

Real-Time Strategy Games

Combinatorial Optimization

List is growing

These usually extend UCT in some way

Page 56: Monte-Carlo  Planning Look Ahead Trees

Some Improvements

- Use domain knowledge to handcraft a more intelligent default policy than random
  - E.g. don't choose obviously stupid actions
  - In Go a fast hand-coded default policy is used
- Learn a heuristic function to evaluate positions
  - Use the heuristic function to initialize leaf nodes (otherwise initialized to zero)

Page 57: Monte-Carlo  Planning Look Ahead Trees

Other Bandits?

- UCT was partly motivated by the question of how to use a UCB-like rule for tree search
  - It is questionable whether UCB is the best choice for a tree policy
- Root node:
  - We only care about selecting the best action.
  - Suggests trying a bandit geared to simple regret at the root
- Non-root nodes:
  - The cumulative reward at these nodes is used as the value estimate by parent nodes
  - Suggests we would like a small cumulative regret
  - Suggests UCB might be more appropriate

Page 58: Monte-Carlo  Planning Look Ahead Trees

Varying the Root Bandit

- Recent work has considered such a UCT variant
  - Use 0.5-Greedy as the tree policy at the root and UCB at non-root nodes
  - They also consider some other alternatives at the root that are more tuned to simple regret
  - The results in that paper show that this simple change can improve performance significantly
    - The generality of this result remains to be seen

Tolpin, D. & Shimony, S. E. (2012). MCTS Based on Simple Regret. AAAI Conference on Artificial Intelligence.

Page 59: Monte-Carlo  Planning Look Ahead Trees

Results in the Sailing Domain

[Figure: results from Tolpin & Shimony (2012) in the sailing domain, comparing UCB at the root vs. 0.5-Greedy at the root.]

Page 60: Monte-Carlo  Planning Look Ahead Trees

Summary

- When you have a tough planning problem and a simulator, try Monte-Carlo planning
- Basic principles derive from the multi-armed bandit
- Policy rollout and switching are a great way to exploit existing policies and make them better
- If a good heuristic exists, then shallow sparse sampling can give good results
- UCT is often quite effective, especially when combined with domain knowledge