Reflections

It has been a day since the GECCO conference (and summer school) ended, and wow, what a ride it has been. Personally, it has been an amazing journey of learning, experimentation and, of course, fun!

This is the first time I have experienced a conference-cum-summer-school (the 100 euros was more than worth it!), though it is not my first time at either a conference or a summer school. Upon reflection, these are what I think were the key factors behind the success of #2018S3:

  • Teamwork: Working as a team forced us to hang out with each other, discuss the problem and spend lots of time together debating what we wanted to work on and solve. Although I would say that, given slightly more time, we could have come up with a better plan!
  • Mentorship: Beyond ensuring that we got some work done, I am just glad the mentors gave us some life advice (from wiser and more experienced people) as well as tips on presentation skills and structure. I am also really glad that Anna (my group’s mentor) was friendly, approachable and always found time to meet us despite her busy schedule.
  • Team: Teammates, for their openness and willingness to meet during coffee breaks and lunch, and to discuss and work hard together on this project, even though we had many doubts about its feasibility and how realistic the problem was. I am just glad that we managed to stitch everything together and deliver a (kickass) presentation 😉

    Last but not least, thanks to Telegram for communication and to JJ for organising the summer school 🙂

    It has been a blast! Hope to meet you guys somewhere, somehow! :p


My first conference, GECCO, and my first presentation

This was the first time I wrote a paper, and I got my first presentation session. My article is about using a GA to optimise the hyperparameters of a CNN for machine learning. I was very anxious because my English skills were insufficient, so I made mistakes when presenting in front of the audience and the talk did not go as well as I hoped. However, it went better than I expected when I explained the work to people in front of the poster, so it finished well. I hope I can do better next time with improved English skills. Thank you very much for visiting, and thanks to Tushar Semwal, who advised me.


Tutorial on Evolutionary Multiobjective Optimisation

  • Multiobjective Optimisation: We have multiple objectives which have to be optimised simultaneously. In general there is no single optimal solution, but some solutions are better than others.
  • Pareto set: the set of all non-dominated solutions in the decision space.
    Pareto front: its image in the objective space. (A small dominance sketch follows this list.)
  • The ideal point holds the best value of each objective and the nadir point the worst value, both taken over the Pareto-optimal points.
  • Basically, multiobjective optimisation is a combination of optimisation (searching for solutions) and decision making (selecting one solution).
  • Possible approaches for selecting a solution are to rank the objectives and to define constraints.
  • When to make a decision about the solution:
    Before optimisation:  Rank objectives, define constraints
    After optimisation:  Search for a set of good solutions and then select one solution considering the constraints.
  • The speaker also covered the history of multiobjective optimisation.
  • The principal fitness assignment approaches are aggregation-based, criterion-based and dominance-based.
  • It is also possible to transform multiple objectives into a single objective.
  • Selection of solutions can be hypervolume-based or indicator-based.
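
To make the dominance relation above concrete, here is a minimal sketch (assuming minimisation of all objectives; the brute-force filter and the names are my own illustration, not taken from the tutorial):

```python
# Minimal sketch: Pareto dominance and non-dominated filtering (minimisation).
def dominates(a, b):
    """True if objective vector a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example with two objectives to minimise: (3, 4) is dominated by (2, 3).
print(non_dominated([(1, 5), (2, 3), (3, 4), (4, 1)]))   # [(1, 5), (2, 3), (4, 1)]
```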

Report on GECCO (2nd day)

Theory of Estimation-of-Distribution-Algorithms

This lecture was about the theory of genetic drift in EDAs on discrete domains. EDAs can be separated into univariate EDAs (for example cGA and UMDA) and multivariate EDAs (for example FDA and MIMIC). Although most theoretical analyses concern univariate EDAs, I got interested in the multivariate case as well.
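
As a concrete illustration of a univariate EDA, here is a minimal compact GA (cGA) on OneMax; the sketch and its parameters (string length n, virtual population size K) are my own choices, and the drift comment only reflects the lecture topic, not its actual analysis:

```python
import random

# Minimal sketch of the compact GA (cGA), a univariate EDA, on OneMax.
# With K too small, frequencies can drift to the wrong boundary by chance --
# the genetic-drift effect that the theory lecture was concerned with.
def onemax(x):
    return sum(x)

def cga(n=20, K=50, max_iters=5000):
    p = [0.5] * n                                      # one frequency per bit position
    for _ in range(max_iters):
        a = [int(random.random() < pi) for pi in p]    # sample two individuals
        b = [int(random.random() < pi) for pi in p]
        winner, loser = (a, b) if onemax(a) >= onemax(b) else (b, a)
        for i in range(n):                             # shift frequencies towards the winner
            if winner[i] != loser[i]:
                p[i] += 1.0 / K if winner[i] == 1 else -1.0 / K
                p[i] = min(max(p[i], 0.0), 1.0)
        if all(pi < 0.01 or pi > 0.99 for pi in p):    # model has (almost) converged
            break
    return [int(pi > 0.5) for pi in p]

print(cga())
```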

Evolutionary Multiobjective Optimization

This lecture explained the ideas the latest approaches are based on: the hypervolume indicator and the hypervolume contribution. However, I couldn’t understand several terms like “refinement” or “μ-distribution”, so I will look them up later.
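
For reference, here is a minimal sketch of how the hypervolume indicator can be computed for a two-objective front under minimisation; the front and the reference point are made-up examples, not material from the lecture:

```python
# Minimal sketch: hypervolume of a 2-D non-dominated front (both objectives minimised)
# with respect to a reference point that is worse than every front member.
def hypervolume_2d(front, ref):
    pts = sorted(front)                        # ascending in f1, hence descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # add one rectangular slice per point
        prev_f2 = f2
    return hv

front = [(1, 4), (2, 2), (4, 1)]
print(hypervolume_2d(front, ref=(5, 5)))       # 4*1 + 3*2 + 1*1 = 11
```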

Dynamic Parameter Choice in Evolutionary Computation

I couldn’t attend this lecture from the beginning because I was attending Shota’s presentation at the Student Workshop. I did hear about success-based parameter selection, such as adaptive parameter control, and about learning-inspired selection. I now understand why dynamic parameter adaptation is needed, and I want to use it if I get the chance.
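
As a hedged illustration of success-based parameter control, here is a sketch of the classic one-fifth success rule adapting the mutation step size in a (1+1)-ES on the sphere function; this is a textbook example of mine, not material taken from the lecture:

```python
import random

# Sketch of the one-fifth success rule: grow the step size after a successful
# mutation, shrink it after a failure, so that roughly 1/5 of mutations succeed.
# All constants (factors 1.5 and 0.9 ~ 1.5**-0.25, bounds, budget) are illustrative.
def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, iters=2000, sigma=1.0):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(iters):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = sphere(y)
        success = fy < fx
        if success:
            x, fx = y, fy
        sigma *= 1.5 if success else 0.9
    return fx, sigma

print(one_plus_one_es())
```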

Solving Complex Problems with Coevolution Algorithms

Although I didn’t understand the lecture well, I could understand what CoEC can do. Considering the definition of CoEC, stochastic algorithms with mixture probabilistic models may be one instance of it; if that is true, it is the easiest way for me to picture it. Anyway, I noted several terms from this lecture so that I can check them later.

On MOEA/D: Decomposition methods of multi-objective evolutionary algorithms

Sizzz. Decomposition-based methods in MOEA have been among the hottest topics of the last two years, evident from the fact that about 50% of the papers in the EMO track are on MOEA/D improvements. The talk by Qingfu Zhang and Ke Li started with an introduction to MOEA/D, its initial set of problems, and how methods have been developed over the years to resolve them. I like how they organised the methods into three broad categories (a small decomposition sketch follows this list).

  • Setting of weight vectors
  • Search methods
  • Collaboration method between neighbourhoods
    • Mating Selection
    • Replacement
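
To make the decomposition idea concrete, here is a minimal sketch (my own illustration, not the speakers’ code) of how evenly spread weight vectors and the Tchebycheff scalarisation used in MOEA/D turn a bi-objective minimisation problem into single-objective subproblems:

```python
# Minimal sketch of MOEA/D-style decomposition with the Tchebycheff approach.
# The weight vectors, ideal point and candidate objective vector are illustrative.
def tchebycheff(f, w, z_star):
    """Scalarise objective vector f with weight vector w and ideal point z_star."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

H = 4
weight_vectors = [(i / H, 1 - i / H) for i in range(H + 1)]   # (0,1), (0.25,0.75), ...
z_star = (0.0, 0.0)       # (estimated) ideal point
candidate = (0.4, 0.7)    # objective values of some candidate solution

for w in weight_vectors:  # each weight vector defines one single-objective subproblem
    print(w, tchebycheff(candidate, w, z_star))
```

Each subproblem is then optimised while exchanging solutions with the subproblems of neighbouring weight vectors, which is where the mating-selection and replacement choices listed above come into play.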

I also loved how they concluded with future opportunities in the area of MOEA/D and, more generally, EMO.

Here’s a sneak preview of the future opportunities in EMO :p

[Photo of the concluding slide on future opportunities in EMO]

Overview of tutorial ‘Promoting Diversity in Evolutionary Optimization’ GECCO Day 2

Promoting Diversity in Evolutionary Optimization: Why and How by Professor Giovanni Squillero and Dr. Alberto Tonda

Premature convergence is one of the most common issues in evolutionary algorithms: the population converges too early to a sub-optimal solution. Therefore, diversity in the population is necessary for an algorithm to be able to reach the optimal solution. “The basic point of the principle of divergence is simplicity itself: the more the coinhabitants of an area differ from each other in their ecological requirements, the less they will compete with each other; therefore natural selection will tend to favor any variation toward greater divergence.”

Diversity in the population of an evolutionary algorithm can be measured at three levels: phenotype, genotype and fitness, but it is usually done at the genotype level. Typical metrics of diversity at the genotype level are the distance between different genotypes and the amount of entropy or free energy in the population.

Even though the end goal of optimization is to find the best solution in the minimum amount of time, and diversity-promoting mechanisms slow down the procedure, they are necessary because they can help us reach a better solution. In essence, a method for promoting diversity alters the selection probability of individuals. Many methods exist that promote diversity based on lineage, genotype or phenotype. Genotype-based methods can be employed when a relation between distance in the genotype and distance in the phenotype can be defined; they are used to avoid over-exploitation of peaks in the fitness landscape and to maintain a certain amount of variability in the gene pool. Phenotype-based methods are almost impractical to use, even though diversity in nature works at that level.

The tutorial presented a very detailed overview of existing methods for promoting diversity, among which:

-Island model: the population is partitioned into sub-populations called islands and only local interactions are allowed. In this way, different populations can explore different parts of the search space. Periodically, champions of one island can ‘migrate’ to another island so that they may converge to a better solution.

-Segregation: the population is partitioned again into N sub-populations and upon stagnation they are merged into N-1 sub-populations.

-Deterministic crowding: offspring compete with their parents for survival, and there is no need for a similarity evaluation.

-Allopatric selection: the whole offspring set competes for survival; the champion offspring can be selected and placed back in the population.

-Fitness sharing: scaling down the fitness so that the attractiveness of a densely populated area is reduced (a small sketch follows below).
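
As a rough sketch of the fitness-sharing idea (using the standard triangular sharing function and illustrative parameters, not necessarily the exact formulation shown in the tutorial):

```python
# Minimal sketch of fitness sharing (maximisation): each individual's fitness is
# divided by its niche count, making densely populated regions less attractive.
# sigma_share, alpha, the toy fitness and the distance measure are all illustrative.
def sharing(dist, sigma_share=1.0, alpha=1.0):
    return max(0.0, 1.0 - (dist / sigma_share) ** alpha)

def shared_fitness(population, fitness, distance):
    """Return fitness values scaled down by each individual's niche count."""
    result = []
    for ind in population:
        niche_count = sum(sharing(distance(ind, other)) for other in population)
        result.append(fitness(ind) / niche_count)   # niche_count >= 1 (distance to self is 0)
    return result

# Example: 1-D individuals, a fitness peak at x = 5, Euclidean distance.
pop = [0.0, 0.1, 0.2, 5.0]
print(shared_fitness(pop,
                     fitness=lambda x: 10 - abs(x - 5),
                     distance=lambda a, b: abs(a - b)))
```

In the example, the three clustered individuals near 0 see their fitness divided by a niche count close to 3, while the isolated individual at 5 keeps its full fitness.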

GECCO 2018 – 2nd day overview

Black box Discrete Optimization Benchmarking Workshop

– Bridging the gap between theory and practice

Compiling a Benchmarking Test-Suite for Combinatorial Black-Box Optimization (Mr. Ofer M. Shir)

  • Considering combinatorial optimization (CO) problems.
  • How does the algorithm perform on different classes of problems?
  • Current focus: formulating a set of benchmark problems and/or a test-suite for CO problems.
  • Instance-based problems:
    • Should they be incorporated within the test-suite, or should they be avoided in favour of instance-free problems?
  • Function evaluations as the main performance measure – as in COCO.
  • The human factor plays a crucial role in such processes.

Discrete Real-world Problems in a Black-box Optimization Benchmark (Mr. Sebastian Raggl)

  • Transport-lot building problem
    • Steel industry
    • Lots need to be transported for further processing.
    • Combined grouping and ordering problem
  • Real problems have complex constraints.
  • Problem instances must be scaled properly; knowing the type of problem is very valuable.
  • Interfaces using specialized encodings, or interfaces based on operators.

A Benchmarking and Profiling Tool for Iterative Optimization Heuristics (Mrs. Carola Doerr)

  • Fine-grained performance analysis
  • Tracking the evolution of adaptive parameters

Evolutionary Multiobjective Optimization

  • Multiple objectives have to be optimized at the same time.
  • Multiple non-dominated solutions can exist, since dominance only partially orders the solutions.
  • Generate solutions in decision space.
  • The objective space differs between the multiobjective case (e.g. 2D) and the single-objective case (1D).
  • EMO: set-based algorithms that make it possible to approximate the Pareto front.
  • EMO is used because:
    • Some problems are easier to solve in a multiobjective scenario, because there is more detail available in EMO.
    • Add new “helper objectives”
  • Fitness:
    • Aggregation-based (solution-oriented, scaling-dependent)
      • Apply a transformation on multiple objectives and get a single objective
        • Weighted sum approach: Not all Pareto-optimal solutions can be found if the front is not convex
        • Weighted Tchebycheff
    • Criterion-based
    • Dominance-based (set-oriented, scaling-independent)
  • Dominance:
    • Rank: NPGA
    • Count: SPEA2
    • Depth: NSGA-II (most used)
  • Diversity:
    • Crowding distance: sort solutions with regard to each objective (see the sketch after this list).
  • Selection:
    • Hypervolume to guide the search: refines dominance.

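Here is a minimal sketch of the NSGA-II-style crowding-distance computation mentioned above (the example front and all names are my own illustration):

```python
# Minimal sketch of the NSGA-II crowding distance for one non-dominated front.
# For each objective the front is sorted, boundary solutions get infinite distance,
# and interior solutions accumulate the normalised gap between their neighbours.
def crowding_distance(front):
    n, m = len(front), len(front[0])
    distance = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        f_min, f_max = front[order[0]][obj], front[order[-1]][obj]
        distance[order[0]] = distance[order[-1]] = float('inf')   # keep boundary points
        if f_max == f_min:
            continue
        for rank in range(1, n - 1):
            i = order[rank]
            distance[i] += (front[order[rank + 1]][obj]
                            - front[order[rank - 1]][obj]) / (f_max - f_min)
    return distance

# Example: four solutions on a bi-objective front.
print(crowding_distance([(1, 5), (2, 3), (3, 2), (4, 1)]))
```
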
Constraint-Handling Techniques used with Evolutionary Algorithms

  • Penalty Functions:
    • Penalize the fitness function when a constraint is violated (a minimal sketch follows this list).
    • With exterior methods we start from an infeasible solution and move towards a feasible one.
    • What should the penalty factor be? It is practically impossible to know in advance.
  • Death Penalty:
    • Not a good approach.
    • Search may stagnate in the presence of very small feasible regions.
    • No information is extracted from infeasible solutions.
  • Static Penalty
  • Dynamic Penalty
  • Self-Adaptive Fitness Formulation:
    • Sum of constraint violation is computed for each individual.
    • The best and worst solutions are identified.
  • Stochastic Ranking:
    • The population is sorted using an algorithm like bubble sort.
  • Repair Algorithms:
    • Tend to be problem-specific (can only be used for one problem).
  • Recommendations:
    • Study traditional mathematical programming techniques.
    • Pay attention to diversity.
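
As a rough sketch of the static-penalty idea referred to above (the objective, the constraint and the penalty factor R are arbitrary illustrations; choosing R well is exactly the difficulty noted in these notes):

```python
# Minimal sketch of a static penalty function for constrained minimisation.
# Constraints are written as g(x) <= 0; violations are squared and scaled by R.
def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def constraint_violations(x):
    g = [x[0] + x[1] - 2]                   # single illustrative constraint: x0 + x1 <= 2
    return [max(0.0, gi) for gi in g]

def penalised_fitness(x, R=1000.0):
    return objective(x) + R * sum(v ** 2 for v in constraint_violations(x))

print(penalised_fitness([2.0, 1.0]))   # infeasible point: heavily penalised
print(penalised_fitness([1.0, 1.0]))   # feasible point: penalty term is zero
```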

Notes by Bruno Taborda