
COMP3411/9414/9814: Artificial Intelligence

Week 5: Games

Russell & Norvig, Chapter 5.

UNSW © Alan Blair, 2013-18


Outline

• origins
• motivation
• minimax search
• resource limits and heuristic evaluation
• α-β pruning
• stochastic games
• partially observable games
• continuous, embodied games


Origins

• 1769 Wolfgang von Kempelen (Mechanical Turk)
• 1846 Charles Babbage & Ada Lovelace (tic-tac-toe)
• 1952 Alan Turing (Chess algorithm)
• 1959 Arthur Samuel (Checkers)
• 1961 Donald Michie (MENACE machine learner)


Mechanical Turk



Charles Babbage Difference Engine


Funding Problems

“What shall we do to get rid of Mr. Babbage and his calculating machine?”

(Prime Minister Robert Peel, 1842)


Ada Lovelace

“For the machine is not a thinking being, but simply an automaton which acts according to the laws imposed upon it.”

(Ada Lovelace, 1843)


Babbage & Lovelace tic-tac-toe machine


Types of Games

• Discrete Games
  ▸ fully observable, deterministic (chess, checkers, go, othello)
  ▸ fully observable, stochastic (backgammon, monopoly)
  ▸ partially observable (bridge, poker, scrabble)
• Continuous, embodied games
  ▸ robocup soccer, pool (snooker)


Key Ideas

• Computer considers possible lines of play (Babbage, 1846)
• Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
• Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
• First chess program (Turing, 1951)
• Machine learning to improve evaluation accuracy (Samuel, 1952-57)
• Pruning to allow deeper search (McCarthy, 1956)


Why Games?

• “Unpredictable” opponent ⇒ solution is a strategy
  ▸ must respond to every possible opponent reply
• Time limits ⇒ must rely on approximation
  ▸ tradeoff between speed and accuracy
• Games have been a key driver of new techniques in CS and AI


Samuel’s Checkers Program

“Elaborate table-lookup procedures, fast sorting and searching procedures, and a variety of new programming tricks were developed...”

Samuel’s 1959 paper contains groundbreaking ideas in these areas:

• hash tables
• data compression
• parameter tuning via machine learning


Game Tree (2-player, deterministic)

[Figure: partial game tree for tic-tac-toe. MAX (X) and MIN (O) move alternately; the leaves are TERMINAL positions with Utility −1, 0 or +1 from MAX’s point of view.]


Minimax

Perfect play for deterministic, perfect-information games

Idea: choose move to position with highest minimax value = best achievable payoff against best play

[Figure: two-ply minimax example. MAX chooses among moves A1, A2, A3; the MIN nodes beneath them see leaf values 3, 12, 8 / 2, 4, 6 / 14, 5, 2 and take values 3, 2, 2, so the minimax value of the root is 3 and MAX plays A1.]


Minimax algorithm

function minimax( node, depth )
    if node is a terminal node or depth = 0
        return heuristic value of node
    if we are to play at node
        let α = −∞
        foreach child of node
            let α = max( α, minimax( child, depth-1 ))
        return α
    else  // opponent is to play at node
        let β = +∞
        foreach child of node
            let β = min( β, minimax( child, depth-1 ))
        return β
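The same procedure as a minimal Python sketch (not from the slides): is_terminal(), children() and evaluate() are hypothetical helpers for whatever game is being searched, with evaluate() scoring positions from MAX's point of view.

    import math

    def minimax(node, depth, max_to_move):
        # Depth-limited minimax over a game tree supplied by the hypothetical
        # helpers is_terminal(), children() and evaluate().
        if is_terminal(node) or depth == 0:
            return evaluate(node)          # heuristic value, from MAX's point of view
        if max_to_move:
            value = -math.inf
            for child in children(node):
                value = max(value, minimax(child, depth - 1, False))
        else:
            value = math.inf
            for child in children(node):
                value = min(value, minimax(child, depth - 1, True))
        return value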


Minimax and Negamax

The above formulation of Minimax assumes that all nodes are evaluated with respect to a fixed player (e.g. White in Chess).

If we instead assume that each node is evaluated with respect to the player whose turn it is to move, we get a simpler formulation known as Negamax.


Negamax formulation of Minimax

function negamax( node, depth )
    if node is terminal or depth = 0
        return heuristic value of node
        // from perspective of player whose turn it is to move
    let α = −∞
    foreach child of node
        let α = max( α, −negamax( child, depth-1 ))
    return α
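The same idea in Python (again a sketch, using the hypothetical helpers introduced above, except that evaluate() now scores the position for the player whose turn it is to move):

    import math

    def negamax(node, depth):
        # One body serves both players: each level negates the child's value.
        if is_terminal(node) or depth == 0:
            return evaluate(node)          # from the perspective of the player to move
        value = -math.inf
        for child in children(node):
            value = max(value, -negamax(child, depth - 1))
        return value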


Properties of Minimax

• Complete?
• Optimal?
• Time complexity?
• Space complexity?


Reducing the Search Effort

For chess, b ≈ 35, m ≈ 100 for “reasonable” games ⇒ exact solution completely infeasible

Two ways to make the search feasible:

• don’t search to final position; use heuristic evaluation at the leaves
• α-β pruning
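As a rough back-of-the-envelope check (my arithmetic, not from the slides) of just how infeasible exact search is, and how much pruning can help:

    import math

    b, m = 35, 100   # branching factor and game length for chess
    print(f"b^m     is roughly 10^{m * math.log10(b):.0f}")      # full-depth minimax: ~10^154
    print(f"b^(m/2) is roughly 10^{m / 2 * math.log10(b):.0f}")  # alpha-beta with perfect ordering: ~10^77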


Heuristic Evaluation for Chess

• material
  ▸ Queen = 9, Rook = 5, Knight = Bishop = 3, Pawn = 1
• position
  ▸ some (fractional) score for a particular piece on a particular square
• interaction
  ▸ some (fractional) score for one piece attacking another piece, etc.
• KnightCap used 2000 different features, but evaluation is rapid because very few features are non-zero for any particular board state (e.g. the Queen can only be on one of the 64 squares at a time)
• the value of individual features can be determined by reinforcement learning
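A minimal sketch of this style of evaluation, assuming a hypothetical board encoding (a dict mapping squares to (colour, piece) pairs); the material weights are the ones above, and the positional/interaction terms are only indicated:

    # Material values from the slide; everything else here is illustrative.
    PIECE_VALUE = {"Q": 9, "R": 5, "N": 3, "B": 3, "P": 1}

    def evaluate_material(board):
        """board: hypothetical dict of square -> (colour, piece), e.g. {"g5": ("black", "Q")}."""
        score = 0
        for colour, piece in board.values():
            if piece == "K":
                continue                      # kings carry no material score
            score += PIECE_VALUE[piece] if colour == "white" else -PIECE_VALUE[piece]
        return score

    # A full evaluation would add (fractional) positional and interaction features,
    # with weights that could be tuned by reinforcement learning, as in KnightCap.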


Pruning – Motivation

Q1: Why would “Queen to G5” be a bad move for Black?
Q2: How many White “replies” did you need to consider in answering?

Once we have seen one reply scary enough to convince us the move is really bad, we can abandon this move and continue searching elsewhere.


α-β pruning example

[Figure sequence: α-β pruning applied to the earlier minimax example. The first MIN node is fully evaluated (3, 12, 8 → 3). At the second MIN node, the first leaf value 2 is already ≤ 3, so its remaining children are pruned (marked X). The third MIN node evaluates 14, 5, 2 and returns 2. The root value is 3.]


α-β search algorithm

function alphabeta( node, depth, α, β )
    if node is terminal or depth = 0 { return heuristic value of node }
    if we are to play at node
        foreach child of node
            let α = max( α, alphabeta( child, depth-1, α, β ))
            if α ≥ β { return α }
        return α
    else  // opponent is to play at node
        foreach child of node
            let β = min( β, alphabeta( child, depth-1, α, β ))
            if β ≤ α { return β }
        return β
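The same algorithm as a Python sketch, with the hypothetical helpers used earlier (evaluate() scoring from MAX's point of view):

    def alphabeta(node, depth, alpha, beta, max_to_move):
        # alpha: best value MAX can already guarantee; beta: best value MIN can.
        if is_terminal(node) or depth == 0:
            return evaluate(node)
        if max_to_move:
            for child in children(node):
                alpha = max(alpha, alphabeta(child, depth - 1, alpha, beta, False))
                if alpha >= beta:
                    break                      # MIN would never allow this line: prune
            return alpha
        else:
            for child in children(node):
                beta = min(beta, alphabeta(child, depth - 1, alpha, beta, True))
                if beta <= alpha:
                    break                      # MAX would never allow this line: prune
            return beta

    # Typical top-level call (hypothetical root position and depth):
    # best = alphabeta(root, 6, -float("inf"), float("inf"), True)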


Negamax formulation of α-β search

function minimax( node, depth )
    return alphabeta( node, depth, −∞, +∞ )

function alphabeta( node, depth, α, β )
    if node is terminal or depth = 0
        return heuristic value of node
        // from perspective of player whose turn it is to move
    foreach child of node
        let α = max( α, −alphabeta( child, depth-1, −β, −α ))
        if α ≥ β
            return α
    return α
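And the same negamax form in Python (a sketch; evaluate() scores for the player to move, and the (α, β) window is negated and swapped at each level):

    def alphabeta_nm(node, depth, alpha, beta):
        if is_terminal(node) or depth == 0:
            return evaluate(node)              # for the player whose turn it is to move
        for child in children(node):
            alpha = max(alpha, -alphabeta_nm(child, depth - 1, -beta, -alpha))
            if alpha >= beta:
                return alpha                   # cutoff
        return alpha

    def search(node, depth):
        return alphabeta_nm(node, depth, -float("inf"), float("inf"))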


Why is it called α-β?

[Figure: a MAX/MIN/MAX/MIN path down the game tree, with the node currently being evaluated having value V.]

α is the best value for us found so far, off the current path.
β is the best value for the opponent found so far, off the current path.

If we find a move whose value exceeds α, pass this new value up the tree.

If the current node value exceeds β, it is “too good to be true”, so we “prune off” the remaining children.


Properties of α-β

α-β pruning is guaranteed to give the same result as minimax, but speeds up the computation substantially

Good move ordering improves effectiveness of pruning

With “perfect ordering,” time complexity = O(b^(m/2))

To prove that a “bad” move is bad, we only need to consider one (good) reply.
But to prove that a “good” move is good, we need to consider all replies.

This means α-β can search twice as deep as plain minimax.

An increase in search depth from 6 to 12 could change a very weak player into a quite strong one.
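One common way of approaching “perfect ordering” in practice (an illustrative sketch, not from the slides) is to sort each node's children by a cheap static evaluation before the α-β loop, so the most promising moves are searched first and produce cutoffs early:

    def ordered_children(node, max_to_move):
        # Good ordering makes alpha-beta cutoffs happen sooner,
        # approaching the O(b^(m/2)) best case.
        kids = list(children(node))
        kids.sort(key=evaluate, reverse=max_to_move)   # best-looking moves first
        return kids

    # In alphabeta(), iterate over ordered_children(node, max_to_move)
    # instead of children(node).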


Chess

Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997.

Traditionally, computers played well in the opening (using a database) and in the endgame (by deep search), but humans could beat them in the middle game by “opening up” the board to increase the branching factor. Kasparov tried this, but because of its speed Deep Blue remained strong.

Some experts believe Kasparov should have been able to defeat Deep Blue in 1997 if he hadn’t “lost his nerve”. However, chess programs stronger than Deep Blue are now running on standard PCs and could definitely defeat the strongest humans.

Modern chess programs rely on quiescent search, transposition tables and pruning heuristics.


Checkers

Chinook failed to defeat human world champion Marion Tinsley prior to his death in 1994, but has beaten all subsequent human champions.

Chinook used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board – a total of 443,748,401,247 positions. This database has since been expanded to include all positions with 10 or fewer pieces (38 trillion positions).

In 2007, Jonathan Schaeffer released a new version of Chinook and published a proof that it will never lose. His proof method fills out the game tree incrementally, ignoring branches which are likely to be pruned. After many months of computation, it eventually converges to a skeleton of the real (pruned) tree which is comprehensive enough to complete the proof.


Go


The branching factor for Go is greater than 300, and static board evaluation is difficult. Traditional Go programs broke the board into regions and used pattern knowledge to explore each region.

Since 2006, new “Monte Carlo” players have been developed using UCB search. A tree is built up stochastically. After a small number of moves, the rest of the game is played out randomly, using fast pattern matching to give preference to “urgent” moves.

In March 2016, AlphaGo defeated the human Go champion Lee Sedol in a 4-1 match. AlphaGo uses MCTS, with deep learning neural networks for move selection and board evaluation. The networks are trained initially on a database of thousands of human championship Go games, and then refined with millions of games of self-play.
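The UCB rule mentioned above trades off trying moves that have scored well against trying moves that are still under-explored; a generic UCB1 sketch (not AlphaGo's exact selection formula):

    import math

    def ucb1(wins, visits, parent_visits, c=1.4):
        # Average payoff of the move plus an exploration bonus that shrinks
        # as the move is sampled more often.
        if visits == 0:
            return float("inf")                # always try unvisited moves first
        return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)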


Stochastic games: backgammon

[Figure: backgammon board, points numbered 1–24 plus positions 0 and 25.]


Stochastic games in general

In stochastic games, chance is introduced by dice, card-shuffling, etc.

Expectimax is an adaptation of Minimax which also handles chance nodes:

    ...
    if node is a chance node
        return average of values of successor nodes
    ...

Adaptations of α-β pruning are possible, provided the evaluation is bounded.
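A Python sketch of expectimax, using the hypothetical helpers from the minimax sketch plus is_chance(node) and probability(child) (the children of a chance node must have probabilities summing to 1):

    def expectimax(node, depth, max_to_move):
        if is_terminal(node) or depth == 0:
            return evaluate(node)
        if is_chance(node):
            # probability-weighted average over the random outcomes (e.g. dice rolls)
            return sum(probability(child) * expectimax(child, depth - 1, max_to_move)
                       for child in children(node))
        values = [expectimax(child, depth - 1, not max_to_move) for child in children(node)]
        return max(values) if max_to_move else min(values)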


Expectimax algorithm

[Figure: expectimax example. Leaf values 2, 4, 7, 4, 6, 0, 5, −2 give MIN values 2, 4, 0, −2; the CHANCE nodes average these with probabilities 0.5/0.5 to get 3 and −1, and MAX chooses the branch worth 3.]


For Minimax, Exact values don’t matter

[Figure: two minimax trees whose leaf values are related by a monotonic transformation; the MIN-node values change, but MAX’s choice of move is the same in both trees.]

Move choice is preserved under any monotonic transformation of EVAL.

Only the order matters: payoff in deterministic games acts as an ordinal utility function.


For Expectimax, Exact values DO matter

[Figure: the same comparison with DICE (chance) nodes and probabilities .9/.1. With MIN values 2, 3, 1, 4 the chance nodes evaluate to 2.1 and 1.3, so MAX prefers the left move; after an order-preserving transformation to 20, 30, 1, 400 they evaluate to 21 and 40.9, and MAX’s preferred move switches.]

Move choice only preserved by positive linear transformation of EVAL

Hence EVAL should be proportional to the expected payoff.


Partially Observable games

Card games are partially observable, because (some of) the opponents’

cards are unknown.

This makes the problem very difficult, because some information is known

to one player but not to another.

Typically we can calculate a probability for each possible deal.

Idea: compute the minimax value of each action in each deal,

then choose the action with highest expected value over all deals.

GIB, a strong and well-known bridge program, approximates this idea by

1) generating 100 deals consistent with bidding information

2) picking the action that wins most tricks on average
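A sketch of that averaging scheme (illustrative only; sample_deal(), legal_actions() and minimax_value() are hypothetical helpers, and GIB's actual machinery is considerably more sophisticated):

    def choose_action(observation, n_deals=100):
        # Sample deals consistent with what has been observed (bidding, cards seen),
        # score each candidate action by minimax in every sampled deal,
        # and pick the action with the best average.
        totals = {action: 0.0 for action in legal_actions(observation)}
        for _ in range(n_deals):
            deal = sample_deal(observation)
            for action in totals:
                totals[action] += minimax_value(deal, action)
        return max(totals, key=totals.get)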


Infinite Mario

The best current solution uses A* search, after reverse engineering the world model.


Pacman

Combines path planning, low-level control, reasoning under uncertainty

and (for ghosts) multi-agent coordination.


Robocup Soccer


Deep Green pool playing robot


Low-level technical issues

• undistortion of overhead camera image
• ball appears “egg-shaped”, need to find centre accurately

High-level strategy

• easy to sink the current ball
• more complicated to “set up” for the next ball
• competition using physical simulator


MENACE

Machine Educable Noughts And Crosses Engine

Donald Michie, 1961


Game Tree (2-player, deterministic)

[Figure: the partial tic-tac-toe game tree from the start of the lecture, shown again.]


Summary

• games are fun to work on!
• games continue to be a driver of new technology
• tradeoff between speed and accuracy
• probabilistic reasoning
• force us to build “whole systems” – a chain is as strong as its weakest link


References

Tom Standage, 2002. The Mechanical Turk, Penguin Books.

Arthur Samuel, 1959. Some studies in machine learning using the game of checkers, IBM Journal of Research and Development, pages 210-229.

Chinook: www.cs.ualberta.ca/~chinook

Robocup: www.robocup.org

[look for Infinite Mario and Deep Green on YouTube]
