
Evolutionary Computing 2014

Evolutionary Computing, Chapter 7

Chapter 7: Parameters and Parameter Tuning
- History
- Taxonomy
- Parameter tuning vs. parameter control
- EA calibration
- Parameter tuning
- Testing
- Effort
- Recommendation

Brief historical account
- 1970s/80s: the GA is considered a robust method
- 1970s onward: ESs self-adapt the mutation step size σ
- 1986: meta-GA for optimizing GA parameters
- 1990s: EP adopts self-adaptation of σ as standard
- 1990s: some papers on changing parameters on the fly
- 1999: the Eiben-Michalewicz-Hinterding paper proposes a clear taxonomy & terminology

Taxonomy
[Figure: taxonomy of parameter setting in EAs: before the run = parameter tuning; during the run = parameter control (deterministic, adaptive, self-adaptive)]

Parameter tuning
Parameter tuning: testing and comparing different values before the "real" run.

Problems:
- user mistakes in settings can be sources of errors or sub-optimal performance
- it costs much time
- parameters interact: exhaustive search is not practicable
- good values may become bad during the run

Parameter control
Parameter control: setting values on-line, during the actual run, e.g. by
- a predetermined time-varying schedule p = p(t)
- using (heuristic) feedback from the search process
- encoding parameters in the chromosomes and relying on natural selection
(a minimal sketch of a time-varying schedule follows below)
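The schedule p = p(t) is the simplest, deterministic form of parameter control. A minimal sketch in Python, with a hypothetical exponentially decaying mutation rate; the functional form and all constants are illustrative, not prescribed by the slides:

```python
def mutation_rate(t, max_gens, p_start=0.2, p_end=0.01):
    """Deterministic schedule p = p(t): the mutation rate decays
    exponentially from p_start at t=0 to p_end at t=max_gens.
    All constants here are illustrative assumptions."""
    frac = t / max_gens                              # progress through the run in [0, 1]
    return p_start * (p_end / p_start) ** frac

# Query the schedule at the start, middle, and end of a 100-generation run.
for t in (0, 50, 100):
    print(t, round(mutation_rate(t, 100), 4))
```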

Problems with parameter control:
- finding an optimal p is hard, finding an optimal p(t) is harder
- the feedback mechanism is still user-defined: how to optimize it?
- when would natural selection work for algorithm parameters?

Notes on parameter control
- Parameter control offers the possibility to use appropriate values in various stages of the search
- Adaptive and self-adaptive control can liberate users from tuning → reduces the need for EA expertise for a new application
- Assumption: the control heuristic is less parameter-sensitive than the EA itself
BUT
- the state of the art is a mess: the literature is a potpourri, there is no generic knowledge, no principled approach to developing control heuristics (deterministic or adaptive), and no solid testing methodology

WHAT ABOUT AUTOMATED TUNING?

Historical account (cont'd)
Last decade: more and more work on parameter control
- traditional parameters: mutation and crossover
- non-traditional parameters: selection and population size
- all parameters → "parameterless" EAs (name!?)

Not much work on parameter tuning, i.e.,
- nobody reports on the tuning efforts behind their published EAs
- only a handful of papers on tuning methods / algorithms

Control flow of EA calibration / design
[Figure: three layers, each optimizing the one below it: a design layer (e.g., the user, or a meta-GA), an algorithm layer (e.g., a GA, or GP), and an application layer (e.g., One-max, or symbolic regression)]

Information flow of EA calibration / design
[Figure: quality flows upward through the same layers: the application layer reports solution quality to the algorithm layer, and the algorithm layer reports algorithm quality to the design layer]

Lower level of EA calibration / design
The EA searches the space of solution vectors (decision variables / problem parameters / candidate solutions); the application evaluates them. The whole field of EC is about this level.

Upper level of EA calibration / design
A design method searches the space of parameter vectors (design variables / algorithm parameters / strategy parameters); the EA evaluates them.

Parameter-performance landscape
- All parameters together span a (search) space
- One point ↔ one EA instance
- Height of a point = performance of that EA instance on a given problem
- This gives a parameter-performance landscape, or utility landscape, for each {EA + problem instance + performance measure}
- This landscape is unlikely to be trivial, e.g., unimodal or separable
- If there is some structure in the utility landscape, then we can do better than random or exhaustive search

Ontology / terminology

                LOWER PART          UPPER PART
Method          EA                  Tuner
Search space    Solution vectors    Parameter vectors
Quality         Fitness             Utility
Assessment      Evaluation          Test

Fitness = objective function value.
Utility = ? Possibilities: mean best fitness (MBF), average number of evaluations to solution (AES), success rate (SR), robustness, or a combination of some of these.

Off-line vs. on-line calibration / design
Design / calibration methods split into:
- off-line parameter tuning
- on-line parameter control
(a sketch of the two-level structure follows below)
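To make the lower/upper distinction concrete, here is a minimal sketch of the upper level's view: the tuner sees the EA only through a utility function mapping a parameter vector to a performance statistic. The EA stub, the parameter names, and the choice of mean best fitness as utility are assumptions for illustration:

```python
import random

def run_ea(params, problem, seed):
    """Hypothetical stand-in for one full EA run on `problem` with the
    given parameter vector; returns the best fitness at termination."""
    random.seed(seed)
    return random.random()  # placeholder for a real EA run

def utility(params, problem, repetitions=10):
    """Upper-level quality of a parameter vector: here mean best
    fitness (MBF) over repeated runs, since EAs are stochastic."""
    best = [run_ea(params, problem, seed=r) for r in range(repetitions)]
    return sum(best) / len(best)

# The tuner searches parameter vectors; the EA searches solution vectors.
print(utility({"pop_size": 100, "mutation_rate": 0.01}, problem="one-max"))
```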

Advantages of tuning (over control):
- it is easier
- it matches the most immediate need of users
- control strategies have parameters too, and need tuning themselves
- knowledge about tuning (utility landscapes) can help the design of good control strategies
- there are indications that good tuning works better than control

Tuning by generate-and-test
- EA tuning is a search problem itself
- Straightforward approach: generate-and-test
[Figure: generate-and-test loop: generate parameter vectors, test parameter vectors, terminate]

Testing parameter vectors
- Run the EA with these parameters on the given problem(s)
- Record EA performance in that run, e.g., by
  - solution quality = best fitness at termination
  - speed = time used to reach the required solution quality
- EAs are stochastic → repetitions are needed for reliable evaluation → we get statistics, e.g.,
  - average performance by solution quality or speed (MBF, AES, AEB)
  - success rate = percentage of runs ending with success
  - robustness = variance of those averages over different problems
- Big issue: how many repetitions per test? (a sketch of computing these statistics follows below)
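A minimal sketch of aggregating repeated runs into the statistics named above; the per-run record format is an assumption for illustration:

```python
def summarize(runs):
    """Aggregate repeated EA runs into tuning statistics.
    Each run is a (best_fitness, evals_to_solution) pair, where
    evals_to_solution is None if the run never reached the required
    solution quality. The record format is illustrative."""
    evals = [ev for _, ev in runs if ev is not None]
    return {
        "MBF": sum(bf for bf, _ in runs) / len(runs),  # mean best fitness
        "SR": len(evals) / len(runs),                  # success rate
        "AES": (sum(evals) / len(evals)                # avg. evals to solution
                if evals else float("inf")),
    }

# Three hypothetical runs; two reached the required quality.
print(summarize([(0.98, 12000), (0.95, None), (1.00, 8000)]))
```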

Numeric parameters
- E.g., population size, crossover rate, tournament size
- Domain is a subset of R, Z, or N (finite or infinite)
- Sensible distance metric → searchable
[Figure: EA performance vs. parameter value, for a relevant parameter (pronounced optimum) and an irrelevant parameter (flat curve)]

Symbolic parameters
- E.g., xover_operator, elitism, selection_method
- Finite domain, e.g., {1-point, uniform, averaging} or {Y, N}
- No sensible distance metric → non-searchable, must be sampled
[Figure: EA performance over symbolic parameter values A-H, once in a non-searchable ordering and once in a searchable ordering]

Notes on parameters
- A value of a symbolic parameter can introduce a numeric parameter, e.g.,
  - selection = tournament → tournament size
  - population_type = overlapping → generation gap
  (see the sketch below)
- Parameters can have a hierarchical, nested structure
- The number of EA parameters is not defined in general
- Hence we cannot simply denote the design space / tuning search space by S = Q1 × ... × Qm × R1 × ... × Rn, with Qi / Rj the domains of the symbolic/numeric parameters
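The nesting can be made concrete in code: choosing a value for a symbolic parameter activates or removes numeric parameters, so the search space is not a fixed Cartesian product. A sketch with hypothetical parameter names:

```python
def active_numeric_parameters(config):
    """Return the numeric parameters implied by the symbolic choices
    in `config`. Names are hypothetical; the point is that the numeric
    part of the design space depends on the symbolic part."""
    numeric = {"population_size", "mutation_rate", "crossover_rate"}
    if config.get("selection") == "tournament":
        numeric.add("tournament_size")     # exists only for tournament selection
    if config.get("population_type") == "overlapping":
        numeric.add("generation_gap")      # exists only for overlapping pops
    return numeric

print(active_numeric_parameters({"selection": "tournament",
                                 "population_type": "overlapping"}))
```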

What is an EA? (1/2)

SYMBOLIC PARAMETERS   ALG-1           ALG-2            ALG-3             ALG-4
Representation        Bit-string      Bit-string       Real-valued       Real-valued
Overlapping pops      N               Y                Y                 Y
Survivor selection    -               Tournament       Replace worst     Replace worst
Parent selection      Roulette wheel  Uniform determ.  Tournament        Tournament
Mutation              Bit-flip        Bit-flip         N(0,σ)            N(0,σ)
Recombination         Uniform xover   Uniform xover    Discrete recomb.  Discrete recomb.

NUMERIC PARAMETERS
Generation gap        -               0.5              0.9               0.9
Population size       100             500              100               300
Tournament size       -               2                3                 30
Mutation rate         0.01            0.1              -                 -
Mutation stepsize     -               -                0.01              0.05
Crossover rate        0.8             0.7              1                 0.8

What is an EA? (2/2)
Make a principal distinction between EAs and EA instances, and place the border between them by one of the following options:

Option 1
- There is only one EA, the generic EA scheme
- The previous table then contains 1 EA and 4 EA-instances

Option 2
- An EA = a particular configuration of the symbolic parameters
- The previous table then contains 3 EAs, one of them with 2 instances

Option 3
- An EA = a particular configuration of all parameters
- The notions of EA and EA-instance coincide
- The previous table then contains 4 EAs / 4 EA-instances

Generate-and-test under the hood
[Figure: flowchart: generate initial parameter vectors, then loop over test / select / generate further vectors; non-iterative methods stop after the first test, multi-stage methods after a few stages, iterative methods keep looping]
(a skeleton of the iterative variant follows below)
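A minimal skeleton of the iterative variant, assuming a utility function like the sketch earlier. The random generation and truncation selection are illustrative placeholders (a real iterative tuner would generate new vectors from the selected ones):

```python
import random

def iterative_tuner(utility, sample_vector, budget_A, keep=5):
    """Iterative generate-and-test: generate parameter vectors, test
    them, select the best, and repeat until A vectors have been tested.
    `sample_vector` and truncation selection are placeholders."""
    pool = []
    for _ in range(budget_A):                    # A = number of vectors tested
        vec = sample_vector()                    # generate a parameter vector
        pool.append((utility(vec), vec))         # test it
        pool = sorted(pool, key=lambda t: t[0], reverse=True)[:keep]  # select
    return pool[0]                               # best (utility, vector) found

best = iterative_tuner(
    utility=lambda v: -abs(v["mutation_rate"] - 0.05),   # toy utility function
    sample_vector=lambda: {"mutation_rate": random.uniform(0.0, 1.0)},
    budget_A=50,
)
print(best)
```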

Tuning effort
The total amount of computational work is determined by:
- A = number of parameter vectors tested
- B = number of tests (runs) per vector
- C = number of fitness evaluations per test
Tuning methods can be positioned by their rationale:
- to optimize A (iterative search)
- to optimize B (multi-stage search)
- to optimize A and B (combination)
- to optimize C (non-existent)

Optimize A = optimally use A
- Applicable only to numeric parameters
- The number of tested vectors is not fixed; A is the maximum (stop condition)
- Population-based search: initialize with N
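For a feel of the magnitudes involved (hypothetical numbers, not from the slides): a tuner that tests A = 100 parameter vectors, with B = 30 runs per vector and C = 10,000 fitness evaluations per run, consumes A × B × C = 100 × 30 × 10,000 = 3 × 10^7 fitness evaluations in total, which is why tuning methods try hard to economize on A and/or B.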