Transcript: Systematic vs non-systematic algorithms for constraint optimization
Rina Dechter, University of California, Irvine

Page 1: Rina Dechter University of California Irvine

Systematic vs non-systematic algorithms
for constraint optimization

Rina Dechter, University of California, Irvine
Collaborators: Kalev Kask, Radu Marinescu, Robert Mateescu

Page 2: Rina Dechter University of California Irvine

Overview

! Introduction and background:
    ! Combinatorial optimization tasks: CSP, Max-CSP, belief updating, MPE
! Bounded inference: mini-bucket and mini-clustering
! Heuristic generation for Branch and Bound: BBMB(i), BBBT(i)
! Empirical evaluation on Max-CSPs, MPE, CSP
! Conclusions

Page 3: Rina Dechter University of California Irvine

Probabilistic Networks

[Figure: Bayesian network over Smoking (S), lung Cancer (C), Bronchitis (B), X-ray (X), Dyspnoea (D), with CPTs P(S), P(C|S), P(B|S), P(X|C,S), P(D|C,B)]

P(S, C, B, X, D) = P(S) P(C|S) P(B|S) P(X|C,S) P(D|C,B)

Belief updating: P(S|d) = ?
MPE: argmax P(S) P(C|S) P(B|S) P(X|C,S) P(D|C,B)

BN = (G, Θ)

Example CPD P(D|C,B):
C  B  D=0  D=1
0  0  0.1  0.9
0  1  0.7  0.3
1  0  0.8  0.2
1  1  0.9  0.1
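To make the factorization concrete, here is a minimal Python sketch (mine, not from the talk); only the P(D|C,B) table above comes from the slide, and the other numbers are made up for illustration. It evaluates the joint probability and finds the MPE assignment by brute force.

```python
from itertools import product

# Minimal sketch with illustrative numbers; only the P(D|C,B) table matches the slide.
P_S = {0: 0.7, 1: 0.3}
P_C = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.6, (1, 1): 0.4}          # key: (s, c)
P_B = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.4, (1, 1): 0.6}          # key: (s, b)
P_X = {(c, s, 1): (0.95 if (c or s) else 0.05) for c in (0, 1) for s in (0, 1)}
P_X.update({(c, s, 0): 1 - P_X[(c, s, 1)] for c in (0, 1) for s in (0, 1)})  # key: (c, s, x)
P_D = {(0, 0, 0): 0.1, (0, 0, 1): 0.9, (0, 1, 0): 0.7, (0, 1, 1): 0.3,
       (1, 0, 0): 0.8, (1, 0, 1): 0.2, (1, 1, 0): 0.9, (1, 1, 1): 0.1}       # key: (c, b, d), from the slide

def joint(s, c, b, x, d):
    """P(S,C,B,X,D) = P(S) P(C|S) P(B|S) P(X|C,S) P(D|C,B)."""
    return P_S[s] * P_C[(s, c)] * P_B[(s, b)] * P_X[(c, s, x)] * P_D[(c, b, d)]

# Brute-force MPE: enumerate all complete assignments and keep the most probable one.
mpe = max(product((0, 1), repeat=5), key=lambda a: joint(*a))
print("MPE assignment (s,c,b,x,d):", mpe, "probability:", joint(*mpe))
```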

Page 4: Rina Dechter University of California Irvine

! The belief updating problem is the task of computing the posterior probability P(X|e) of a query node X given evidence e.

Belief networks

A belief network is a quadruple BN = <X, D, G, P>, where:
X = {X_1, ..., X_n} is a set of random variables,
D = {D_1, ..., D_n} is the set of their domains,
G is a DAG (directed acyclic graph) over the variables, and
P = {p_1, ..., p_n}, with p_i = P(X_i | pa_i), are the CPTs (conditional probability tables).

[Figure: belief network over A, B, C, D, F, G with CPTs P(A), P(B|A), P(C|A), P(D|A,B), P(F|B,C), P(G|D,F)]

P(G|D,F)

Page 5: Rina Dechter University of California Irvine

Constraint Satisfaction
Example: map coloring

Variables (X) - countries (A, B, C, etc.)
Values (D) - colors (e.g., red, green, yellow)
Constraints (C): A ≠ B, A ≠ D, D ≠ E, etc.

Allowed pairs for A ≠ B:
A       B
red     green
red     yellow
green   red
green   yellow
yellow  green
yellow  red

[Figure: constraint graph over countries A, B, C, D, E, F, G]

Semantics: set of all solutions
Primary task: find a solution
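As a small illustration (not part of the slides), such a binary CSP can be stored as variables, domains, and not-equal constraints; the neighbor list below is hypothetical.

```python
# Hypothetical sketch of the map-coloring CSP <X, D, C>.
variables = ["A", "B", "C", "D", "E", "F", "G"]
domains = {v: ["red", "green", "yellow"] for v in variables}
# Binary not-equal constraints between (assumed) neighboring countries.
constraints = [("A", "B"), ("A", "D"), ("D", "E"), ("B", "C"), ("E", "F"), ("F", "G")]

def consistent(assignment):
    """True iff every constraint whose variables are both assigned is satisfied."""
    return all(assignment[x] != assignment[y]
               for x, y in constraints
               if x in assignment and y in assignment)

print(consistent({"A": "red", "B": "green", "D": "yellow"}))   # True
print(consistent({"A": "red", "B": "red"}))                    # False: violates A != B
```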

Page 6: Rina Dechter University of California Irvine

Belief and constraint networks

[Figure: (a) Belief network over A, B, C, D, F, G with CPTs P(A), P(B|A), P(C|A), P(D|A,B), P(F|B,C), P(G|D,F); (b) Constraint network with relations R(A), R(A,B), R(A,C), R(A,B,D), R(B,C,F), R(D,F,G); (c) Dual graph with clusters A, AB, AC, ABD, BCF, DFG and separator labels]

Page 7: Rina Dechter University of California Irvine

Belief and constraint networks (continued)

[Same figure as on the previous slide: (a) belief network, (b) constraint network, (c) dual graph]

! Belief updating: Σ_{X−Y} Π_j P_j
! MPE: max_X Π_j P_j
! CSP: Π_X ×_j C_j
! Max-CSP: min_X Σ_j F_j

All are NP-hard, and also hard to approximate.
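All four queries follow the same pattern: combine the local functions with one operator, then eliminate all variables with a second one. A tiny illustrative sketch (my own, with made-up factors over two binary variables):

```python
from itertools import product

# Tiny hypothetical factors over binary variables X1, X2, given as functions.
P = [lambda x1, x2: 0.8 if x1 == x2 else 0.2,      # probabilistic factors
     lambda x1, x2: 0.6 if x1 == 1 else 0.4]
C = [lambda x1, x2: int(x1 != x2)]                  # a hard constraint (1 = allowed)
F = [lambda x1, x2: 0 if x1 != x2 else 1]           # a Max-CSP cost (0 = satisfied)

def eliminate(factors, combine, reduce_op):
    """Combine all factors pointwise, then eliminate X1, X2 with reduce_op."""
    values = []
    for x1, x2 in product((0, 1), repeat=2):
        combined = factors[0](x1, x2)
        for f in factors[1:]:
            combined = combine(combined, f(x1, x2))
        values.append(combined)
    result = values[0]
    for v in values[1:]:
        result = reduce_op(result, v)
    return result

print("belief updating (partition fn):", eliminate(P, lambda a, b: a * b, lambda a, b: a + b))
print("MPE:", eliminate(P, lambda a, b: a * b, max))
print("CSP has a solution:", eliminate(C, lambda a, b: a * b, max) == 1)
print("Max-CSP min cost:", eliminate(F, lambda a, b: a + b, min))
```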

Page 8: Rina Dechter University of California Irvine

Solution Techniques

Search (Conditioning):
    Complete: Branch-and-Bound, Depth-First (Backtracking), Breadth-First, Iterative Deepening, A*
    Incomplete: Simulated Annealing, Gradient Descent, SLS
    Time: exp(n), Space: linear

Inference (Elimination):
    Complete: Adaptive Consistency, Tree Clustering, Dynamic Programming, Resolution
    Incomplete: Local Consistency, Unit Resolution, mini-bucket(i)
    Time: exp(w*), Space: exp(w*)

Hybrids

Page 9: Rina Dechter University of California Irvine

Overview

! Introduction and background
! Bounded inference:
    ! mini-bucket and mini-clustering
    ! Belief propagation: IJGP(i)
! Heuristic generation for Branch and Bound: BBMB(i), BBBT(i)
! Empirical evaluation on Max-CSPs, MPE, CSP
! Conclusions

Page 10: Rina Dechter University of California Irvine

Tree decomposition

! Each function is placed in some cluster
! Satisfies the running intersection property

[Figure: graph over A, B, C, D, E, F, G and its tree decomposition]

Cluster 1: {A, B, C}     functions: p(a), p(b|a), p(c|a,b)
Cluster 2: {B, C, D, F}  functions: p(d|b), p(f|c,d)
Cluster 3: {B, E, F}     functions: p(e|b,f)
Cluster 4: {E, F, G}     functions: p(g|e,f)
Separators: BC (1-2), BF (2-3), EF (3-4)
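As a side illustration (mine, not the slides'), the running intersection property of this decomposition can be checked mechanically: for every variable, the clusters containing it must induce a connected subtree.

```python
# Sketch: verify the running intersection property for the example tree decomposition.
clusters = {1: {"A", "B", "C"}, 2: {"B", "C", "D", "F"}, 3: {"B", "E", "F"}, 4: {"E", "F", "G"}}
tree_edges = [(1, 2), (2, 3), (3, 4)]   # the chain shown on the slide

def connected(nodes, edges):
    """True iff `nodes` induce a connected subgraph of the tree."""
    if not nodes:
        return True
    nodes, seen, stack = set(nodes), set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        seen.add(u)
        stack += [v for a, b in edges for u2, v in ((a, b), (b, a))
                  if u2 == u and v in nodes and v not in seen]
    return seen == nodes

variables = set().union(*clusters.values())
for x in sorted(variables):
    holding = [c for c, scope in clusters.items() if x in scope]
    assert connected(holding, tree_edges), f"running intersection violated for {x}"
print("running intersection property holds")
```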

Page 11: Rina Dechter University of California Irvine

Belief Propagation

[Figure: cluster u with neighbors x_1, x_2, ..., x_n and v; message h(u,v) sent from u to v]

Compute the message:

    h(u,v) = Σ_{elim(u,v)} Π_{f ∈ cluster(u) \ {h(v,u)}} f

where cluster(u) = ψ(u) ∪ {h(x_1,u), h(x_2,u), ..., h(x_n,u), h(v,u)}
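A direct, illustrative implementation of this message rule (not the authors' code): represent each function as a table keyed by assignments over its scope, multiply everything in cluster(u) except the message coming back from v, and sum out elim(u,v).

```python
from itertools import product

def message(functions, scopes, elim_vars, domains):
    """h(u,v): combine the functions of cluster(u) (minus h(v,u), which the caller simply
    omits from `functions`) and sum out elim(u,v). Functions are dicts keyed by value
    tuples over their scopes."""
    all_vars = sorted(set().union(*scopes))
    keep = [x for x in all_vars if x not in elim_vars]
    h = {}
    for values in product(*(domains[x] for x in all_vars)):
        assignment = dict(zip(all_vars, values))
        prod = 1.0
        for f, scope in zip(functions, scopes):
            prod *= f[tuple(assignment[x] for x in scope)]
        key = tuple(assignment[x] for x in keep)
        h[key] = h.get(key, 0.0) + prod
    return keep, h

# Usage with made-up tables: h(u,v)(b) = sum_a p(a) * p(b|a)
domains = {"A": (0, 1), "B": (0, 1)}
p_a = {(0,): 0.6, (1,): 0.4}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}
print(message([p_a, p_b_given_a], [("A",), ("A", "B")], {"A"}, domains))
```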

Page 12: Rina Dechter University of California Irvine

CTE: Cluster Tree Elimination

[Figure: cluster tree with clusters 1:ABC, 2:BCDF, 3:BEF, 4:EFG and separators BC, BF, EF]

    h_(1,2)(b,c) = Σ_a p(a) · p(b|a) · p(c|a,b)
    h_(2,1)(b,c) = Σ_{d,f} p(d|b) · p(f|c,d) · h_(3,2)(b,f)
    h_(2,3)(b,f) = Σ_{c,d} p(d|b) · p(f|c,d) · h_(1,2)(b,c)
    h_(3,2)(b,f) = Σ_e p(e|b,f) · h_(4,3)(e,f)
    h_(3,4)(e,f) = Σ_b p(e|b,f) · h_(2,3)(b,f)
    h_(4,3)(e,f) = P(G = g_e | e,f)

Time: O(exp(w*+1)), Space: O(exp(sep))

For each cluster, P(X|e) is computed.

Page 13: Rina Dechter University of California Irvine

Bucket Elimination: Algorithm Elim-MPE (Dechter 1996)

Elimination operator: max_b Π

bucket B:  P(b|a), P(d|b,a), P(e|b,c)
bucket C:  P(c|a), h^B(a,d,c,e)
bucket D:  h^C(a,d,e)
bucket E:  e=0, h^D(a,e)
bucket A:  P(a), h^E(a)

MPE = max_a P(a) · h^E(a)

w* = 4: "induced width" (max clique size)

[Figure: the variables B, C, D, E, A, processed in that order]
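A simplified sketch of bucket elimination for MPE (my own; it returns only the MPE value, without decoding the optimizing assignment):

```python
from itertools import product

def eval_product(funcs, asg):
    """Multiply all function tables under assignment `asg`."""
    out = 1.0
    for scope, table in funcs:
        out *= table[tuple(asg[v] for v in scope)]
    return out

def elim_mpe(ordering, domains, functions):
    """Bucket elimination for MPE (value only). functions: list of (scope, table dict)."""
    pos = {x: i for i, x in enumerate(ordering)}
    buckets = {x: [] for x in ordering}
    for scope, table in functions:
        buckets[max(scope, key=pos.get)].append((scope, table))   # bucket of highest variable
    constant = 1.0
    for x in reversed(ordering):                                  # process last bucket first
        funcs = buckets[x]
        if not funcs:
            continue
        scope = sorted({v for s, _ in funcs for v in s} - {x}, key=pos.get)
        msg = {}
        for vals in product(*(domains[v] for v in scope)):
            asg = dict(zip(scope, vals))
            msg[vals] = max(eval_product(funcs, {**asg, x: xv}) for xv in domains[x])
        if scope:
            buckets[max(scope, key=pos.get)].append((tuple(scope), msg))
        else:
            constant *= msg[()]                                   # fully eliminated component
    return constant

# Usage with made-up tables: MPE of P(A) * P(B|A) along ordering A, B.
domains = {"A": (0, 1), "B": (0, 1)}
p_a = (("A",), {(0,): 0.6, (1,): 0.4})
p_b = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7})
print(elim_mpe(["A", "B"], domains, [p_a, p_b]))                  # 0.54 = 0.6 * 0.9
```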

Page 14: Rina Dechter University of California Irvine

Two Principles for Bounded Inference

! Bounded partitioning
    ! mini-bucket(i), MC(i)
    ! Computes a bound: g_X ≤ h_X
    ! Exp(i) time and space

Page 15: Rina Dechter University of California Irvine

Approx-mpe(i): Algorithm Approx-MPE (Dechter & Rish 1997)

! Input: i - max number of variables allowed in a mini-bucket
! Output: [lower bound (P of a sub-optimal solution), upper bound]

Example: approx-mpe(3) versus elim-mpe

    approx-mpe(3): w* = 2        elim-mpe: w* = 4
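The heart of approx-mpe(i) is the partitioning step: when a bucket's functions mention more than i variables, they are split into mini-buckets whose combined scopes respect the bound, and each mini-bucket is processed separately (yielding an upper bound for maximization). A rough greedy sketch of that partitioning (my simplification; the paper's actual partitioning heuristics may differ):

```python
def partition_bucket(functions, i_bound):
    """Greedily split (scope, table) pairs into mini-buckets of at most i_bound variables."""
    mini_buckets = []          # each entry: (set_of_variables, list_of_functions)
    for scope, table in sorted(functions, key=lambda f: -len(f[0])):   # larger scopes first
        for vars_, funcs in mini_buckets:
            if len(vars_ | set(scope)) <= i_bound:                      # fits within the bound
                vars_ |= set(scope)
                funcs.append((scope, table))
                break
        else:
            mini_buckets.append((set(scope), [(scope, table)]))          # open a new mini-bucket
    return mini_buckets

# Hypothetical bucket of B containing P(b|a), P(d|b,a), P(e|b,c), with i = 3:
bucket_B = [(("A", "B"), {}), (("A", "B", "D"), {}), (("B", "C", "E"), {})]
for vars_, funcs in partition_bucket(bucket_B, i_bound=3):
    print(sorted(vars_), [s for s, _ in funcs])
```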

Page 16: Rina Dechter University of California Irvine

Mini-Clustering idea

    h^1_(1,2)(b,c) = Σ_a p(a) · p(b|a) · p(c|a,b)

sep(2,3) = {B,F},  elim(2,3) = {C,D}

Instead of one exact message over sep(2,3), cluster 2 sends mini-cluster messages:

    h^1_(2,3)(b) = Σ_{c,d} p(d|b) · h^1_(1,2)(b,c)
    h^2_(2,3)(f) = max_{c,d} p(f|c,d)

Cluster 1: {A, B, C}     p(a), p(b|a), p(c|a,b)
Cluster 2: {B, C, D, F}  p(d|b), p(f|c,d), h_(1,2)(b,c)
Cluster 3: {B, E, F}     p(e|b,f), h^1_(2,3)(b), h^2_(2,3)(f)
Cluster 4: {E, F, G}     p(g|e,f)
Separators: BC (1-2), BF (2-3), EF (3-4)

Page 17: Rina Dechter University of California Irvine

Mini-Clustering - MCTE(i)

[Figure: cluster tree 1:ABC, 2:BCDF, 3:BEF, 4:EFG with separators BC, BF, EF; each edge carries a set of mini-cluster messages]

    H_(1,2):  h^1_(1,2)(b,c) := Σ_a p(a) · p(b|a) · p(c|a,b)
    H_(2,1):  h^1_(2,1)(b) := Σ_{d,f} p(d|b) · h^1_(3,2)(b,f)
              h^2_(2,1)(c) := Σ_{d,f} p(f|c,d)
    H_(2,3):  h^1_(2,3)(b) := Σ_{c,d} p(d|b) · h^1_(1,2)(b,c)
              h^2_(2,3)(f) := Σ_{c,d} p(f|c,d)
    H_(3,2):  h^1_(3,2)(b,f) := Σ_e p(e|b,f) · h^1_(4,3)(e,f)
    H_(3,4):  h^1_(3,4)(e,f) := Σ_b p(e|b,f) · h^1_(2,3)(b) · h^2_(2,3)(f)
    H_(4,3):  h^1_(4,3)(e,f) := P(G = g_e | e,f)

Page 18: Rina Dechter University of California Irvine

Properties of MC(i)

! MCTE(i) computes a bound on the exact value: ⊗_{f ∈ M(u,v)} f is an approximation of the exact message m(u,v).

! Time & space complexity: O(n × hw* × d^i)

! The approximation improves with i, but takes more time.

Page 19: Rina Dechter University of California Irvine

Two Principles for Bounded Inference

! Bounded partitioning
    ! mini-bucket(i), MC(i)
    ! Computes a bound: g_X ≤ h_X
    ! Exp(i) time and space
! Belief propagation on join-graphs
    ! IBP, IJGP(i)
    ! No guarantees
    ! Each iteration is exp(i)

[Figure: polytree fragment with parents U_1, U_2, U_3 and children X_1, X_2, showing λ and π messages such as λ_{X_1}(u_1), π_{X_1}(u_2), λ_{X_1}(u_2), π_{X_1}(u_3)]

Page 20: Rina Dechter University of California Irvine

Iterative Join-Graph Propagation

[Figure: (a) Belief network over A, B, C, D, E, F, G, H, I, J; (b) the join-graph that IBP works on, with clusters such as A, ABD, ABC, BCDE, ACDEF, FGI, FGH, GHIJ and single-variable separators]

Page 21: Rina Dechter University of California Irvine

Join-graphs

[Figure: a sequence of join-graphs over the same network, from the most decomposed (clusters such as A, ABDE, ABC, BCE, CDEF, FGI, FGH, GHIJ with small separators) up to the join-tree (clusters ABCDE, CDEF, FGHI, GHIJ), illustrating the trade-off:
less complexity  <---->  more accuracy]

Page 22: Rina Dechter University of California Irvine

Empirical results showed:

! Mini-bucket(i) and MC(i)
    ! Accuracy and time increase with the i-bound
    ! Compute bounds
    ! Demonstrate impressive performance for many problem classes, for both optimization and belief updating
! IJGP(i) is generally superior to MC(i) for belief updating, but provides no bound

Page 23: Rina Dechter University of California Irvine

BnB for Constraint Optimization

! Max-CSP
! MPE
! CSP

Page 24: Rina Dechter University of California Irvine

Overview

! Introduction and background:
    ! Combinatorial optimization tasks: CSP, Max-CSP, belief updating, MPE
! Bounded inference: mini-bucket and mini-clustering
! Heuristic generation for Branch and Bound: BBMB(i), BBBT(i)
! Empirical evaluation on Max-CSPs, MPE, CSP
! Conclusions

Page 25: Rina Dechter University of California Irvine

BnB with inferred heuristics

Guiding heuristic evaluation function: an upper bound h(x).
If h < L, the search below the node is pruned.

[Figure: search tree along the ordering A, E, D, C, B; the node reached by the partial assignment (a, e) is evaluated with h and compared against the current bound L]

Page 26: Rina Dechter University of California Irvine

Two BnB schemes

[Figure: search tree along the ordering A, E, D, C, B over the network A, B, C, D, E, with current lower bound L and partial assignment (a)]

BBMB(i): h(x) computed by MB(i) before search; static ordering.
BBBT(i): h(x) computed via MCTE(i) at every node, for every un-instantiated variable.

Page 27: Rina Dechter University of California Irvine

Optimization Task

! The most probable scenario problem is to find a most probable complete assignment that is consistent with the evidence e.

[Figure: network over A, B, C, D, E with evidence on E]

! Systematic Search (BnB)
! Non-Systematic Search (SLS, BP)

Page 28: Rina Dechter University of California Irvine

The main idea

! BnB with an exact heuristic:
    ! If h(x1,...,xi) equals the max-cost extension of the partial solution (x1,...,xi), BnB yields a backtrack-free search.
! Idea:
    ! mini-bucket(i) compiles an upper bound h(x1,...,xi) on the max-cost extension of (x1,...,xi), for every partial solution along the fixed ordering.
! => Run MB(i), then run BnB in reverse order using the mini-bucket heuristics.

Page 29: Rina Dechter University of California Irvine

BBMB

BBMB(i): h(x) computed by MB(i) before search; static ordering.

Buckets processed by MB(i):
    B:  P(E|B,C), P(D|A,B), P(B|A)
    C:  P(C|A), h^B(E,C)
    D:  h^B(D,A)
    E:  h^C(E,A)
    A:  P(A), h^E(A), h^D(A)

Evaluation of a partial assignment during search, e.g.:

    f(a,e,D) = P(a) · h^B(D,a) · h^C(e,a)

[Figure: network over A, B, C, D, E; search proceeds along the ordering A, E, D, C, B with current assignment (a, e)]

Page 30: Rina Dechter University of California Irvine

Heuristic Function

    max_B  P(e|b,c) P(d|a,b) P(b|a)
    max_C  P(c|a) h^B(e,c)
    max_D  h^B(d,a)
    max_E  h^C(e,a)
    max_A  P(a) h^E(a) h^D(a)

The evaluation function f(x^p) can be computed using the functions recorded by the Mini-Bucket scheme, and can be used to estimate the probability of the best extension of the partial assignment x^p = {x1, ..., xp}:

    f(x^p) = g(x^p) · H(x^p)

For example,
    g(a,e,d) = P(a)
    H(a,e,d) = h^B(d,a) · h^C(e,a)
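A skeletal depth-first Branch-and-Bound in the BBMB style (a hypothetical sketch, not the authors' implementation): g and h are assumed to be supplied, with g the exact cost of the instantiated part and h the precompiled mini-bucket bound on its best extension; a subtree is pruned when f = g · h cannot beat the best solution found so far.

```python
def bbmb(ordering, domains, g, h):
    """Schematic BBMB-style depth-first Branch-and-Bound (sketch only).
    g(partial) is the exact cost of the instantiated part; h(partial) is the
    precompiled mini-bucket upper bound on the best extension; both are assumed given."""
    best_value, best_assignment = 0.0, None

    def expand(depth, partial):
        nonlocal best_value, best_assignment
        if depth == len(ordering):                      # complete assignment
            if g(partial) > best_value:
                best_value, best_assignment = g(partial), dict(partial)
            return
        x = ordering[depth]
        for value in domains[x]:
            partial[x] = value
            if g(partial) * h(partial) > best_value:    # f = g * H; prune if f <= L
                expand(depth + 1, partial)
            del partial[x]

    expand(0, {})
    return best_value, best_assignment

# Tiny usage with made-up tables and a trivial (always-1) bound:
tables = {("A",): {(0,): 0.6, (1,): 0.4},
          ("A", "B"): {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}}
def g(partial):
    out = 1.0
    for scope, t in tables.items():
        if all(v in partial for v in scope):
            out *= t[tuple(partial[v] for v in scope)]
    return out

print(bbmb(["A", "B"], {"A": (0, 1), "B": (0, 1)}, g, lambda partial: 1.0))
```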

Page 31: Rina Dechter University of California Irvine

Properties

! The heuristic is monotone and admissible
! The heuristic is computed in linear time
! IMPORTANT:
    ! Mini-buckets generate heuristics of varying strength, controlled by i.
    ! Higher i-bound ⇒ more pre-processing ⇒ stronger heuristic ⇒ less search.
    ! Allows a controlled trade-off between preprocessing and search.

Page 32: Rina Dechter University of California Irvine

Experimental Methodology

Test networks:
! Random Coding (Bayesian)
! CPCS (Bayesian)
! Random (CSP)

Measures of performance:
! Compare accuracy given a fixed amount of time - how close is the probability/cost of the assignment found to the probability/cost of the optimal solution
! Compare trade-off performance as a function of time

Page 33: Rina Dechter University of California Irvine

Empirical Evaluation of mini-bucket heuristics

[Plots: % Solved Exactly (0.0-1.0) vs. Time [sec] (0-30), comparing BBMB(i) and BFMB(i).
Left: Random Coding, K=100, noise = 0.28 (i = 2, 6, 10, 14).
Right: Random Coding, K=100, noise = 0.32 (i = 6, 10, 14).]

Page 34: Rina Dechter University of California Irvine


Max-CSP experiments (Kask and Dechter, 2000)

Page 35: Rina Dechter University of California Irvine

BBBT(i) - Search Space

! Branch-and-Bound search where MCTE(i) is executed at each visited node
! Domain pruning
! Dynamic variable ordering
! Dynamic value ordering

[Figure: search space over the ordering A, E, D, C, B with lower bound L]

MCTE(i) computes h(x) at every node, for every uninstantiated variable.

Page 36: Rina Dechter University of California Irvine

BBBT and the singleton optimality task

! Computes, for each variable and each value, the max-cost extension over the remaining free variables, given the instantiated ones.

Page 37: Rina Dechter University of California Irvine

Cluster Tree Elimination vs. Mini-Clustering

[Figure: the cluster tree 1:ABC, 2:BCDF, 3:BEF, 4:EFG with separators BC, BF, EF, shown twice.
CTE sends single exact messages: h_(1,2)(b,c), h_(2,1)(b,c), h_(2,3)(b,f), h_(3,2)(b,f), h_(3,4)(e,f), h_(4,3)(e,f).
MC(i) sends sets of mini-cluster messages: H_(1,2) = {h^1_(1,2)(b,c)}, H_(2,1) = {h^1_(2,1)(b), h^2_(2,1)(c)}, H_(2,3) = {h^1_(2,3)(b), h^2_(2,3)(f)}, H_(3,2) = {h^1_(3,2)(b,f)}, H_(3,4) = {h^1_(3,4)(e,f)}, H_(4,3) = {h^1_(4,3)(e,f)}.]

Compute bounds on the singleton optimality task for every variable-value pair.

Page 38: Rina Dechter University of California Irvine

BBBT - Search Space

[Figure: search space along the ordering A, E, D, C, B with lower bound L. At each level the remaining problem is the one conditioned on the instantiated variables; the conditioned clusters {P(E|B,C), P(D|A,B), P(B|A)}, {P(C|A)}, {P(A)} over scopes BCDEA, CDEA, DEA, EA, A are shown.]

BBBT(i): h(x) is computed via MC(i) at every node, for every uninstantiated variable; dynamic variable and value ordering.

Page 39: Rina Dechter University of California Irvine

BBBT: BnB Search with MBTE(i) Heuristics

! Main Idea:
    ! During search, maintain a lower bound L (the best MPE cost so far).
    ! When processing variable Xp:
        ! Compute heuristic estimates m_Zi for all un-instantiated variables Zi (using MBTE(i)).
        ! Use these costs to prune the domains of the un-instantiated variables.
        ! Backtrack when an empty domain occurs; otherwise expand the current assignment, choosing the variable with the smallest domain (see the sketch below).
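A rough sketch (mine) of the per-node step just described: given upper-bound estimates for every value of every un-instantiated variable (as MBTE(i)/MCTE(i) would supply), prune values that cannot beat L, backtrack on an empty domain, and pick the variable with the smallest remaining domain next.

```python
def bbbt_node_step(free_vars, domains, estimates, L):
    """One BBBT node: estimates[x][v] is an upper bound on the best complete extension
    with X=v (assumed to come from MBTE(i)/MCTE(i)); L is the best cost found so far.
    Returns (pruned_domains, next_variable), or None to signal a backtrack."""
    pruned = {}
    for x in free_vars:
        pruned[x] = [v for v in domains[x] if estimates[x][v] > L]   # keep only promising values
        if not pruned[x]:
            return None                                              # empty domain -> backtrack
    next_var = min(free_vars, key=lambda x: len(pruned[x]))          # dynamic variable ordering
    return pruned, next_var

# Hypothetical example: with L = 0.5, value B=1 is pruned and B (smallest domain) is chosen next.
print(bbbt_node_step(["B", "C"],
                     {"B": [0, 1], "C": [0, 1]},
                     {"B": {0: 0.9, 1: 0.2}, "C": {0: 0.7, 1: 0.6}},
                     L=0.5))
```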

Page 40: Rina Dechter University of California Irvine

BnB with Lower Bound Heuristics

! BBMB(i), the earlier algorithm:
    ! The heuristic, computed by MB(i), is static; the variable ordering is fixed.
! BBBT(i), the new algorithm:
    ! The lower bound is computed at each node of the search by MCTE(i).
    ! Used for dynamic variable and value ordering.
    ! Domain pruning.

Page 41: Rina Dechter University of California Irvine

Non-Systematic Algorithms

! Stochastic Local Search [Park, 2002]
    ! Guided Local Search (GLS) [Mills and Tsang, 2000]
    ! Discrete Lagrange Multipliers (DLM) [Wah and Shang, 1997]
    ! Stochastic Local Search (SLS) [Kask and Dechter, 1999]
! Iterative Belief Propagation
    ! Iterative Join Graph Propagation (max-IJGP)

Page 42: Rina Dechter University of California Irvine

Stochastic Local Search (I)

! Discrete Lagrange Multipliers (DLM) [Wah and Shang, 1997]
    ! For each clause C: weight w_c, Lagrange multiplier λ_c
    ! Cost function: sum(w_c + λ_c).
    ! At local maxima, increase the λs of all unsatisfied clauses.
! Guided Local Search (GLS) [Mills and Tsang, 2000]
    ! For each clause C: weight w_c, Lagrange multiplier λ_c
    ! Cost function: sum(λ_c).
    ! At local maxima, increase the λs of the unsatisfied clauses with maximum utility.
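The penalty-update idea the two schemes share can be sketched as follows (a simplified, hypothetical illustration; the utility computation and the choice of which clauses get bumped differ between DLM and GLS):

```python
def augmented_cost(assignment, clauses, weights, lambdas):
    """GLS/DLM-style cost: sum of weights and penalties over unsatisfied clauses."""
    return sum(weights[i] + lambdas[i]
               for i, clause in enumerate(clauses) if not clause(assignment))

def escape_local_maximum(assignment, clauses, lambdas):
    """At a local optimum, bump the multipliers of currently unsatisfied clauses."""
    for i, clause in enumerate(clauses):
        if not clause(assignment):
            lambdas[i] += 1        # DLM: all unsatisfied clauses; GLS: only the max-utility ones
    return lambdas

# Tiny usage: clause 1 (not x) is unsatisfied, so its multiplier is increased.
clauses = [lambda a: a["x"] or a["y"], lambda a: not a["x"]]
print(escape_local_maximum({"x": True, "y": False}, clauses, [0, 0]))   # -> [0, 1]
```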

Page 43: Rina Dechter University of California Irvine

Stochastic Local Search (II)

! Stochastic Local Search (SLS) [Kask and Dechter, 1999]
    ! At each step, performs either a hill-climbing or a stochastic variable change.
    ! Periodically, the search is restarted in order to escape local maxima.
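A generic sketch of such a local-search loop (my illustration, not the authors' SLS code): single-variable hill-climbing or stochastic changes, with periodic restarts.

```python
import random

def sls(variables, domains, cost, steps=1000, restart_every=100, p_random=0.2, seed=0):
    """Minimize cost(assignment) by single-variable changes with periodic restarts."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    assignment = {x: rng.choice(domains[x]) for x in variables}
    for t in range(steps):
        if t % restart_every == 0:                                   # periodic restart
            assignment = {x: rng.choice(domains[x]) for x in variables}
        x = rng.choice(variables)
        if rng.random() < p_random:                                  # stochastic change
            assignment[x] = rng.choice(domains[x])
        else:                                                        # hill-climbing change
            assignment[x] = min(domains[x], key=lambda v: cost({**assignment, x: v}))
        if cost(assignment) < best_cost:
            best, best_cost = dict(assignment), cost(assignment)
    return best, best_cost

# Tiny Max-CSP usage: minimize the number of violated not-equal constraints.
constraints = [("A", "B"), ("B", "C"), ("A", "C")]
cost = lambda a: sum(a[x] == a[y] for x, y in constraints)
print(sls(["A", "B", "C"], {v: [0, 1, 2] for v in ["A", "B", "C"]}, cost))
```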

Page 44: Rina Dechter University of California Irvine

Experimental Results

! Algorithms:
    ! Complete: BBBT(i), BBMB(i)
    ! Incomplete, competing methods: DLM, GLS, SLS, IJGP, IBP (coding)
! Benchmarks:
    ! Coding networks
    ! Bayesian Network Repository
    ! Grid networks (N-by-N)
    ! Random noisy-OR networks
    ! Random networks
! Measures:
    ! Time
    ! Accuracy (% exact)
    ! #Backtracks
    ! Bit Error Rate (coding)

Page 45: Rina Dechter University of California Irvine

Random Networks - Average Accuracy

[Plots: % Solved Exactly (0-100) vs. Time [sec].
(a) BBBT-2, BBBT-4, BBBT-10, GLS (0-30 sec).  (b) BBBT-2, GLS, BBMB-2, BBMB-6 (0-60 sec).]

Average Accuracy. Random Bayesian (N=100, C=90, P=2), w* = 17, 100 samples, 10 observations.
(a) K = 2, (b) K = 3.

We see: GLS is good for small domains, but poor for large domains; BBMB is best when the heuristic is strong; BBBT exploits weak heuristics better.

Page 46: Rina Dechter University of California Irvine

Grid Networks - Accuracy and Time

Average Accuracy and Time. Random Grid (N=100), w* = 15, 100 samples, 10 observations.

Page 47: Rina Dechter University of California Irvine

Random Coding Networks - Bit Error Rate

Average BER. Random Coding (N=200, P=4), w* = 22, 100 samples, 60 seconds.

Page 48: Rina Dechter University of California Irvine

Real World Benchmarks

Average Accuracy and Time. 30 samples, 10 observations, 30 seconds

Page 49: Rina Dechter University of California Irvine

Empirical Results: Max-CSP

! Random Binary Problems: <N, K, C, T>
    ! N: number of variables
    ! K: domain size
    ! C: number of constraints
    ! T: tightness
! Task: Max-CSP

Page 50: Rina Dechter University of California Irvine

BBBT(i) vs. BBMB(i)

[Plots: BBBT(i) vs BBMB(i), N=50, for i = 2, 3, 4, 5, 6]

Page 51: Rina Dechter University of California Irvine

BBBT(i) vs. BBMB(i).

[Plots: BBBT(i) vs BBMB(i), N=100, for i = 2, 3, 4, 5, 6, 7]

Page 52: Rina Dechter University of California Irvine

A new BnB search algorithm for solving CSPs

! Method
    ! A solution-counting heuristic is computed by Iterative Join Graph Propagation (IJGP)
! Hypothesis
    ! Better scalability than competing BnB for CSP
! Results
    ! Competitive with CSP, best algorithm in practice for random CSPs; SLS is incomplete, our new algorithm is complete.
! Strength: quickly finding a solution if one exists

Page 53: Rina Dechter University of California Irvine

Min-Conflict Heuristic

! Each constraint C(X_i) is represented by a cost function:
    ! f_C(X_i) = 0 iff C(X_i) is not violated, 1 otherwise
    ! A solution is an assignment with Σ f_j = 0
! Basic function of interest used to guide BnB:
    ! MC = min (Σ f_j | E, X_i) = lowest cost over assignments that agree with evidence E and assignment X_i
! BnB Search: compute the mc heuristic, a lower bound on MC
    ! Prune nodes S_i whose mc(S_i) > 0
    ! Allows dynamic variable ordering
    ! Does not allow value ordering (all legal nodes have mc = 0)

Page 54: Rina Dechter University of California Irvine

Solution-Count Heuristic

! Each constraint C(X_i) is represented by a solution-count function:
    ! f_C(X_i) = 1 iff C(X_i) is not violated, 0 otherwise
    ! A solution is an assignment with Π f_j = 1
! Basic function of interest used to guide BnB:
    ! SC = sum (Π f_j | E, X_i) = number of solutions that agree with evidence E and assignment X_i
! BnB Search: compute the sc heuristic, a lower bound on SC
    ! Prune nodes S_i whose sc(S_i) = 0
    ! Allows dynamic variable and value ordering

Page 55: Rina Dechter University of California Irvine

Example

A=1, B=2; remaining variables C, D, E, F. Computation bound is 2.

[Figure: constraint graph over A, B, C, D, E, F with A and B instantiated]

Exact values for E:
        E=1   E=2   E=3
MC:      0     1     0
SC:      1     0     2

Approximations for E:
        E=1   E=2   E=3
mc:      0     0     0
sc:     .4    .2    .4

Page 56: Rina Dechter University of California Irvine

Heuristics & Algorithms

! Need lower bounds for BnB
! Compute:
    ! min (Σ f_j | E, X_i) - min conflicts
    ! sum (Π f_j | E, X_i) - solution count
! Using:
    ! MC(i)
    ! IJGP(i)

Page 57: Rina Dechter University of California Irvine

BnB Search

! IJGP [MC, SC] + MBTE [MC, SC]
    ! IJGP-SC
    ! IJGP-MC
    ! MBTE-SC
    ! MBTE-MC
    ! MBTE-MC + IJGP-SC
    ! etc.

Page 58: Rina Dechter University of California Irvine

N=200, K=4, C=760, T=4, 5 min

Page 59: Rina Dechter University of California Irvine

N=200, K=4, C=765, T=4, 5 min

[Plot: comparing IJGP-SC, MBTE-MC, MBTE-SC]

Page 60: Rina Dechter University of California Irvine

N=300, K=4, C=1125, T=4, 10 min

              w*    time   solved   nodes
IJGP-SC 2     112   24     4
IJGP-SC 3     112   30     3
SLS           112   28     4        450K

Page 61: Rina Dechter University of California Irvine

Summary

! A new algorithm for solving CSPs, based on a solution-counting heuristic computed by IJGP
! Competitive with CSP, best algorithm in practice for random CSPs; SLS is incomplete, our new algorithm is complete
! Strength: good for consistent problems
! Weakness: not as good for inconsistent problems

Page 62: Rina Dechter University of California Irvine

Conclusions

! We introduced two general BnB schemes that generate bounding heuristics automatically.
! BBBT and BBMB do not dominate each other.
    ! When large i-bounds are effective, BBMB is more powerful.
    ! However, when space is at issue, BBBT with a small i-bound is often more powerful.
! BBBT/BBMB together are often superior to stochastic local search, except when the domain size is small, in which case they are competitive.
! BBBT/BBMB, as complete algorithms, can prove optimality if given enough time, unlike SLS.
! BBBT can be extended to CSPs: heuristics based on approximate counting are very promising.