# 2 Non-stationary and Noisy Function Optimisation

• What is a non-stationary problem?
• Effect of uncertainty
• Algorithmic approaches
• Example: Time-varying knapsack problem

# 3 What is a non-stationary problem?

• Real-world environments contain sources of uncertainty
• Measuring the fitness of a solution more than once will not always yield the same value
• The goal is not to find a single optimum but to track a sequence of optima over time
• Focus on measuring the quality of a solution x, f(x), under different sources of uncertainty:
• The genotype-to-phenotype mapping is not exact and one-to-one: f_observed(x) = f(x + δx)
• The act of measurement itself is prone to error or uncertainty: f_observed(x) = f_mean(x) + f_noise(x), where f_noise(x) ~ _N(0, σ)_
• The environment changes over time: f_observed(x) = f(x, t)
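The three uncertainty models above can be sketched in Python. The underlying function f and all noise parameters here are illustrative choices, not taken from the slides:

```python
import random

def f(x):
    """Underlying 'true' fitness (illustrative choice)."""
    return 1.0 / (0.1 + x * x)

def f_genotype_noise(x, delta=0.05):
    """Inexact genotype-to-phenotype mapping: f_observed(x) = f(x + dx)."""
    return f(x + random.uniform(-delta, delta))

def f_measurement_noise(x, sigma=0.1):
    """Measurement error: f_observed(x) = f_mean(x) + noise, noise ~ N(0, sigma)."""
    return f(x) + random.gauss(0.0, sigma)

def f_time_varying(x, t, drift=0.01):
    """Changing environment: f_observed(x) = f(x, t); here the optimum
    drifts linearly with t (hypothetical dynamics)."""
    return f(x - drift * t)
```

Repeatedly calling the first two functions on the same x returns different values, which is exactly why resampling and averaging (slide 5) become necessary.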

# 4 Effect of uncertainty (1/2)

• Measuring the performance of algorithms dealing with uncertainty is done by:
• Running them for a fixed period of time
• Calculating two time-averaged metrics:
• On-line measure: the average of every fitness evaluation made during the run
• Off-line measure: the average of the best fitness in the population at each time step
• where f(x, t) is the time-dependent fitness function and x*(t) is the best individual in the population at time t
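The two metrics can be sketched as follows (function names are mine; the definitions follow the standard on-line/off-line performance measures):

```python
def online_measure(fitness_history):
    """On-line measure: average over *every* fitness evaluation made
    during the run, so it also counts poor exploratory samples."""
    return sum(fitness_history) / len(fitness_history)

def offline_measure(best_per_step):
    """Off-line measure: average of the best fitness in the population
    at each time step, ignoring the cost of exploration."""
    return sum(best_per_step) / len(best_per_step)
```

For example, a run whose evaluations were [1.0, 3.0, 2.0, 4.0] has an on-line measure of 2.5, while the per-step bests [3.0, 4.0] give an off-line measure of 3.5.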

This figure shows f(x) = 1/(0.1 + x²) and the values estimated after 5 samples under two different sorts of uncertainty (uniform noise between ±0.4)

# 5 Algorithmic Approaches (1/2)

• Increasing robustness/reducing noise by repeatedly re-evaluating solutions and taking an average
• Implicitly via population management
• Explicitly:
• Resample when degree of variation present is greater than the range of estimated fitnesses in population
• Law of diminishing returns
• Resampling decisions independently for each solution
• Dynamic environments
• Make sure that there is enough diversity in the population
• Memory based approaches for switching or cyclic environments
• Expanding memory of EA
• Example: GA with diploid representation, structured GA
• Explicitly increasing diversity in dynamic environment
• Examples: GA with a hypermutation operator, random immigrants GA
• Preserving diversity and resampling: modifying selection and replacement policies
• Steady-state GAs with “delete-oldest” replacement strategy
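The triggered-hypermutation idea above can be sketched as a mutation-rate schedule: the rate is boosted when the running average of the best-of-generation fitness drops, signalling an environment change. All window sizes and thresholds below are hypothetical:

```python
def hypermutation_rate(best_history, base_rate=0.01, hyper_rate=0.25,
                       window=10, drop=0.1):
    """Return the mutation rate for the next generation.

    Triggers the high 'hypermutation' rate when the running average of
    best-of-generation fitness over the last `window` generations falls
    more than `drop` (relative) below the previous window's average.
    Parameter values are illustrative only.
    """
    if len(best_history) < 2 * window:
        return base_rate  # not enough history to detect a change
    recent = sum(best_history[-window:]) / window
    earlier = sum(best_history[-2 * window:-window]) / window
    if earlier > 0 and (earlier - recent) / earlier > drop:
        return hyper_rate  # fitness dropped: assume environment changed
    return base_rate
```

Random immigrants, by contrast, would inject freshly randomised individuals every generation instead of changing the mutation rate.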

# 6 Example: Time-varying knapsack problem

• A number of items, where item i has value v_i(t) and weight or cost c_i(t) at time t
• Select a subset maximising total value while meeting the time-varying capacity constraint C(t)
• Smith and Vavak did multiple experiments
• Binary-coded SSGA 100 members
• Parent selection by binary tournament
• Uniform crossover
• Hypermutation operator (triggered if running average drops)
• Parameter values are decided after initial experiments
• Best performance was obtained when combining conservative (binary) tournament selection with delete-oldest replacement. Using this policy with hypermutation results in finding the global optimum in both the switching environment and the continuously moving one
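A minimal sketch of a time-varying knapsack fitness function; the linear penalty for infeasible solutions is one common choice, not necessarily the one used by Smith and Vavak:

```python
def knapsack_fitness(bits, values, costs, capacity_t):
    """Fitness of a binary string for the knapsack at one time step.

    `capacity_t` is the capacity C(t) at the current time; infeasible
    solutions are penalised by the amount of constraint violation
    (an illustrative penalty scheme).
    """
    total_value = sum(v for b, v in zip(bits, values) if b)
    total_cost = sum(c for b, c in zip(bits, costs) if b)
    if total_cost <= capacity_t:
        return total_value
    return max(0.0, total_value - (total_cost - capacity_t))
```

Because C(t) changes over the run, a solution that was optimal can become infeasible, which is what triggers the fitness drop detected by the hypermutation operator.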