Schedule of tutorials - 19th July, 2020

11:30 - 13:30
14:00 - 16:00
16:30 - 18:30
19:00 - 21:00
11:30 - 13:30
Tutorial Title Presenter Conference Contact Email
RANKING GAME: How to combine human and computational intelligence? Peter Erdi WCCI
Adversarial Machine Learning: On The Deeper Secrets of Deep Learning Danilo V. Vargas IJCNN
From brains to deep neural networks Saeid Sanei, Clive Cheong Took IJCNN
Deep randomized neural networks Claudio Gallicchio, Simone Scardapane IJCNN
Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Evolution Nikola Kasabov IJCNN
Fundamentals of Fuzzy Networks Alexander Gegov, Farzad Arabikhan FUZZ
Pareto Optimization for Subset Selection: Theories and Practical Algorithms Chao Qian, Yang Yu CEC
Selection Exploration and Exploitation Stephen Chen, James Montgomery CEC
Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler Carola Doerr, Thomas Bäck, Ofer Shir, Hao Wang CEC
Computational Complexity Analysis of Genetic Programming Pietro S. Oliveto, Andrei Lissovoi CEC
Self-Organizing Migrating Algorithm - Recent Advances and Progress in Swarm Intelligence Algorithms Roman Senkerik CEC
Visualising the search process of EC algorithms Su Nguyen, Yi Mei, and Mengjie Zhang CEC
14:00 - 16:00
Tutorial Title Presenter Conference Contact Email
Instance Space Analysis for Rigorous and Insightful Algorithm Testing Kate Smith-Miles, Mario Andrés Muñoz Acosta WCCI
Advances in Deep Reinforcement Learning Thanh Thi Nguyen, Vijay Janapa Reddi, Ngoc Duy Nguyen IJCNN
Machine learning for data streams in Python with scikit-multiflow Jacob Montiel, Heitor Gomes, Jesse Read, Albert Bifet IJCNN
Deep Learning for Graphs Davide Bacciu IJCNN
Explainable-by-Design Deep Learning: Fast, Highly Accurate, Weakly Supervised, Self-evolving Plamen Angelov IJCNN
Cartesian Genetic Programming and its Applications Lukas Sekanina, Julian Miller CEC
Large-Scale Global Optimization Mohammad Nabi Omidvar, Daniel Molina CEC 
Niching Methods for Multimodal Optimization Mike Preuss, Michael G. Epitropakis, Xiaodong Li CEC
A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms Pietro S. Oliveto CEC
Theoretical Foundations of Evolutionary Computation for Beginners and Veterans Darrell Whitley CEC
Evolutionary Computation for Dynamic Multi-objective Optimization Problems Shengxiang Yang CEC
16:30 - 18:30
Tutorial Title Presenter Conference Contact Email
New and Conventional Ensemble Methods José Antonio Iglesias, María Paz Sesmero Lorente, Araceli Sanchis de Miguel WCCI
Evolution of Neural Networks Risto Miikkulainen IJCNN
Mechanisms of Universal Turing Machines in Developmental Networks for Vision, Audition, and Natural Language Understanding Juyang Weng IJCNN
Generalized constraints for knowledge-driven-and-data-driven approaches Baogang Hu IJCNN
Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting P N Suganthan IJCNN
Using intervals to capture and handle uncertainty Christian Wagner, Vladik Kreinovich, Josie McCulloch, Zack Ellerby FUZZ
Fuzzy Systems for Neuroscience and Neuro-engineering Applications Javier Andreu, CT Lin FUZZ
Evolutionary Algorithms and Hyper-Heuristics Nelishia Pillay CEC
Recent Advances in Particle Swarm Optimization Analysis and Understanding Andries Engelbrecht, Christopher Cleghorn CEC
Recent Advances in Landscape Analysis for Optimisation Katherine Malan, Gabriela Ochoa CEC
Evolutionary Machine Learning  Masaya Nakata, Shinichi Shirakawa, Will Browne CEC
Evolutionary Many-Objective Optimization Hisao Ishibuchi, Hiroyuki Sato CEC
19:00 - 21:00
Tutorial Title Presenter Conference Contact Email
Multi-modality Helps in Solving Biomedical Problems: Theory and Applications Sriparna Saha, Pratik
Probabilistic Tools for Analysis of Network Performance Věra Kůrková IJCNN
Experience Replay for Deep Reinforcement Learning Abdulrahman Altahhan, Vasile Palade IJCNN
Deep Stochastic Learning and Understanding Jen-Tzung Chien IJCNN
Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems José M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena FUZZ
Evolving neuro-fuzzy systems in clustering and regression Igor Škrjanc, Fernando Gomide, Daniel Leite, Sašo Blažič FUZZ
Differential Evolution Rammohan Mallipeddi,  Guohua Wu CEC
Bilevel optimization Ankur Sinha, Kalyanmoy Deb CEC
Nature-Inspired Techniques for Combinatorial Problems Malek Mouhoub

Dynamic Parameter Choices in Evolutionary Computation Carola Doerr, Gregor Papa  CEC
Evolutionary computation for games: learning, planning, and designing Julian Togelius, Jialin Liu

Evolutionary Algorithms and Hyper-Heuristics


Hyper-heuristics is a rapidly developing domain that has proven effective at providing generalized solutions within and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing, educational timetabling, vehicle routing, permutation flowshop scheduling and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be given, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type. A case study will be presented for each type of hyper-heuristic to give researchers a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided, and challenges in their implementation will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques; the tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area.
The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined as well as the application of evolutionary algorithm hyper-heuristics for the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
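To make the selection perturbative idea concrete, here is a minimal, hypothetical sketch (not the EvoHyp library or the presenter's code): the hyper-heuristic learns utility scores for two low-level bit-flip heuristics while solving a toy OneMax problem. All names, the scoring scheme and the acceptance rule are illustrative assumptions.

```python
import random

def one_max(bits):
    """Toy objective: the number of ones in the bit string."""
    return sum(bits)

def flip_one(bits):
    """Low-level heuristic: flip one randomly chosen bit."""
    b = bits[:]
    b[random.randrange(len(b))] ^= 1
    return b

def flip_two(bits):
    """Low-level heuristic: flip two randomly chosen bits."""
    return flip_one(flip_one(bits))

def selection_hyper_heuristic(n=30, iters=2000, seed=1):
    """Choose among low-level heuristics via learned utility scores."""
    random.seed(seed)
    llh = [flip_one, flip_two]           # pool of low-level heuristics
    scores = [1.0, 1.0]                  # learned utility of each heuristic
    sol = [random.randint(0, 1) for _ in range(n)]
    best = one_max(sol)
    for _ in range(iters):
        # roulette-wheel choice of a heuristic, biased by its score
        i = random.choices(range(len(llh)), weights=scores)[0]
        cand = llh[i](sol)
        f = one_max(cand)
        if f >= best:                    # accept non-worsening moves
            scores[i] += f - best + 0.1  # reward the chosen heuristic
            sol, best = cand, f
        else:
            scores[i] = max(scores[i] * 0.9, 0.1)  # mild penalty
    return best

print(selection_hyper_heuristic())       # should approach the optimum of 30
```

A generation hyper-heuristic would instead evolve new heuristics, for example with genetic programming, rather than choosing among a fixed pool.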

Tutorial Presenters’ Bios:

Nelishia Pillay is a professor and head of the Department of Computer Science, University of Pretoria. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She holds the Multichoice Joint Chair in Machine Learning. She is chair of the IEEE Task Force on Hyper-Heuristics, chair of the IEEE Task Force on Automated Algorithm Design, Configuration and Selection, vice-chair of the IEEE CIS Technical Committee on Intelligent Systems and Applications, and a member of the IEEE Technical Committee on Evolutionary Computation. She has served on program committees for numerous national and international conferences, reviews for various international journals, and is an associate editor for IEEE Computational Intelligence Magazine and the Journal of Scheduling. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to combinatorial optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing and Optimization Group) research group which she established.


Recent Advances in Particle Swarm Optimization Analysis and Understanding


The main objective of this tutorial is to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that actively hinder the successful use of PSO on challenging optimization problems. While the behaviour of PSO's particles has been studied both theoretically and empirically since the algorithm's inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false and how they negatively impact results. The tutorial will also present recent theoretical results about PSO particle behaviour, from which practitioners can make better informed decisions about PSO, and in particular better parameter selections.
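For reference, here is a minimal global-best PSO sketch (an illustrative example, not the presenters' material) using the widely analysed constriction-derived parameter values w = 0.7298 and c1 = c2 = 1.49618, which are known to lie inside the theoretically stable region of the particle dynamics:

```python
import random

def sphere(x):
    """Benchmark objective to minimize."""
    return sum(v * v for v in x)

def pso(dim=5, swarm=20, iters=200, seed=0):
    """gbest PSO with theoretically stable parameter values."""
    random.seed(seed)
    w, c1, c2 = 0.7298, 1.49618, 1.49618
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social velocity components
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f

print(pso())   # a small value near 0 on the sphere function
```

Parameter choices outside the stable region (for example a large inertia weight with large acceleration coefficients) can cause particle divergence, which is one of the misconceptions the tutorial addresses.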


Andries Engelbrecht (Stellenbosch University, South Africa)

Christopher Cleghorn (University of Pretoria, South Africa)


Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He holds the Voigt Chair in Data Science in the Department of Industrial Engineering, with a joint appointment as Professor in the Computer Science Division, Stellenbosch University. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these computational intelligence paradigms to data analytics, games, bioinformatics, finance, and difficult optimization problems. He is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn received his Masters and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is a senior lecturer in Computer Science at the University of Pretoria, and a member of the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.


Differential Evolution with Ensembles, Adaptations and Topologies


Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, DE variants perturb the current-generation population members with the scaled differences of distinct population members, so no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE – a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE and adaptive DE variants that improve the performance of DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose challenges to state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
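The scaled-difference mutation described above can be sketched in a few lines. The following is an illustrative DE/rand/1/bin implementation under common default settings (F = 0.5, CR = 0.9); it is a sketch by this editor, not the presenters' code:

```python
import random

def de(f, dim=5, pop_size=20, max_gen=200, F=0.5, CR=0.9, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    distinct members added to a third, then apply binomial crossover
    and greedy one-to-one survivor selection."""
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_gen):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)   # guarantee one mutant gene survives
            trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                     if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)

print(de(lambda x: sum(v * v for v in x)))   # near zero on the sphere function
```

Ensemble, adaptive and topology-based variants covered in the talk replace the fixed F, CR and random parent choice with learned or neighbourhood-restricted alternatives.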

Duration: 1.5 hours

Intended Audience: This presentation will include basic as well as advanced topics of DE. Hence, researchers commencing their research in DE as well as experienced researchers can attend. Practitioners will also benefit from the presentation.

Expected Enrollment: In the past, 40-50 attendees registered for the DE tutorials at CEC. We expect similar interest this year.

Name: Dr. Rammohan Mallipeddi and Dr. Guohua Wu


Affiliation: Kyungpook National University, South Korea; Central South University, China.

Goal: Differential evolution (DE) is one of the most successful numerical optimization paradigms; hence, practitioners and junior researchers will be interested in learning this optimization algorithm. The DE field is also growing rapidly, so a tutorial on DE will be timely and beneficial to many of the CEC 2020 conference attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems using DE.

Format: The tutorial will be based primarily on presentation slides, with frequent interaction with the audience.

Pertinent Qualification: The speakers have co-authored several original articles on DE. In addition, they have published a survey paper on ensemble strategies in population-based algorithms, including DE. The speakers have also been organizing numerical optimization competitions at the CEC conferences (EA Benchmarks / CEC Competitions), in which DE has been one of the top performers. As the organizers, the speakers will also be able to share their experiences.

Key Papers:

  • Guohua Wu, Rammohan Mallipeddi and P. N. Suganthan, "Ensemble Strategies for Population-based Optimization Algorithms – A Survey," Swarm and Evolutionary Computation, Vol. 44, pp. 695-711, 2019.
  • S. Das, S. S. Mullick and P. N. Suganthan, "Recent Advances in Differential Evolution – An Updated Survey," Swarm and Evolutionary Computation, Vol. 27, pp. 1-30, 2016.
  • S. Das and P. N. Suganthan, "Differential Evolution: A Survey of the State-of-the-Art," IEEE Trans. on Evolutionary Computation, 15(1):4-31, Feb. 2011.

General Bio-sketch:

Name: Dr. Rammohan Mallipeddi

Affiliation: Kyungpook National University, South Korea.

Rammohan Mallipeddi is an Associate Professor in the School of Electronics Engineering, Kyungpook National University (Daegu, South Korea). He received his Master's and PhD degrees in computer control and automation from the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 2007 and 2010, respectively. His research interests include evolutionary computing, artificial intelligence, image processing, digital signal processing, robotics, and control engineering. He has co-authored papers published in IEEE TEVC, among other venues. Currently, he serves as an Associate Editor for Swarm and Evolutionary Computation, an international journal from Elsevier, and as a regular reviewer for journals including IEEE TEVC and IEEE TCYB.

Name: Dr. Guohua Wu

Affiliation: Central South University, China.

Guohua Wu received the B.S. degree in Information Systems and the Ph.D. degree in Operations Research from the National University of Defense Technology, China, in 2008 and 2014, respectively. Between 2012 and 2014, he was a visiting Ph.D. student at the University of Alberta, Edmonton, Canada. He is now a Professor at the School of Traffic and Transportation Engineering, Central South University, Changsha, China. His current research interests include planning and scheduling, evolutionary computation and machine learning. He has authored more than 50 refereed papers, including those published in IEEE TCYB, IEEE TSMCA, INS and COR. He serves as an Associate Editor of Swarm and Evolutionary Computation, an editorial board member of the International Journal of Bio-Inspired Computation, and a Guest Editor of Information Sciences and Memetic Computing. He is a regular reviewer for more than 20 journals, including IEEE TEVC, IEEE TCYB and IEEE TFS.

Pareto Optimization for Subset Selection: Theories and Practical Algorithms


Pareto optimization is a general framework for solving single-objective optimization problems by means of multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective problem from the produced non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas including machine learning, data mining, natural language processing, computer vision and information retrieval. The theoretical understanding of Pareto optimization has recently developed significantly, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection. We assume only that the audience has basic knowledge of probability theory.
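The transformation described above can be made concrete with a toy example. The following POSS-style sketch (illustrative toy data and naming, not the presenters' code) solves maximum coverage under a size constraint: the constrained problem is recast as the bi-objective problem (maximize coverage, minimize subset size), a non-dominated archive is evolved by bit-flip mutation, and the best feasible solution is returned at the end.

```python
import random

def coverage(subsets, mask):
    """Number of universe elements covered by the chosen subsets."""
    covered = set()
    for i, bit in enumerate(mask):
        if bit:
            covered |= subsets[i]
    return len(covered)

def poss(subsets, k, iters=5000, seed=0):
    """Pareto-optimization sketch for 'maximize coverage s.t. |S| <= k'."""
    random.seed(seed)
    n = len(subsets)
    archive = [([0] * n, 0, 0)]               # (mask, coverage, size)
    for _ in range(iters):
        parent, _, _ = random.choice(archive)
        # standard bit-flip mutation with rate 1/n
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        c, s = coverage(subsets, child), sum(child)
        # discard the child if some archived solution strictly dominates it
        if any(ac >= c and asz <= s and (ac > c or asz < s)
               for _, ac, asz in archive):
            continue
        # otherwise insert it and drop everything it weakly dominates
        archive = [(m, ac, asz) for m, ac, asz in archive
                   if not (c >= ac and s <= asz)]
        archive.append((child, c, s))
    # best solution satisfying the original constraint |S| <= k
    return max(c for _, c, s in archive if s <= k)

sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {0, 6}]
print(poss(sets, k=2))   # best coverage achievable with at most 2 subsets
```

The archive simultaneously maintains good solutions of every size, which is what gives the approach its theoretical advantage over greedy algorithms on some instances.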

Tutorial Presenters (names with affiliations):

Chao Qian, Nanjing University, China

Yang Yu, Nanjing University, China

Tutorial Presenters’ Bios:

Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received the BSc and PhD degrees in the Department of Computer Science and Technology from Nanjing University in 2009 and 2015, respectively. From 2015 to 2019, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China. His research interests are mainly the theoretical analysis of evolutionary algorithms and their application in machine learning. He has published the book "Evolutionary Learning: Advances in Theories and Algorithms" and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica) and conferences (e.g., NIPS, IJCAI, AAAI). He has won the ACM GECCO 2011 Best Theory Paper Award and the IDEAL 2016 Best Paper Award. He is chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

Yang Yu is a Professor in the School of Artificial Intelligence, Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research areas are machine learning and reinforcement learning. He was named one of "AI's 10 to Watch" by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI'18, and received the Early Career Award of PAKDD in 2018.

Selection, Exploration, and Exploitation


The goal of exploration is to seek out new areas of the search space. The effect of selection is to concentrate search around the best-known areas of the search space. The power of selection can overwhelm exploration, turning any exploratory method into a hill climber. Balancing exploration and exploitation therefore requires more than the consideration of what solutions are created; it requires an analysis of the interplay between exploration and selection.

This tutorial reviews a broad range of selection methods used in metaheuristics. Novel tools to analyze the effects of selection on exploration in the continuous domain are introduced and demonstrated on Particle Swarm Optimization and Differential Evolution. The difference between convergence (no exploratory search solutions are created) and stall (all exploratory search solutions are rejected) is highlighted. Remedies and alternate methods of selection are presented, and the ramifications for the future design of metaheuristics are discussed.

Tutorial Presenters (names with affiliations):

Stephen Chen, Associate Professor, School of Information Technology, York University, Toronto, Canada

James Montgomery, Senior Lecturer, School of Technology, Environments and Design, University of Tasmania, Hobart, Australia

Tutorial Presenters’ Bios:

Stephen Chen is an Associate Professor in the School of Information Technology at York University in Toronto, Canada. His research focuses on analyzing the mechanisms for exploration and exploitation in techniques designed for multi-modal optimization problems. He is particularly interested in the development and analysis of non-metaphor-based heuristic search techniques. He has conducted extensive research on genetic algorithms and swarm-based optimization systems, and his 60+ peer-reviewed publications include 20+ that have been presented at previous CEC events.

James Montgomery is a Senior Lecturer in the School of Technology, Environments and Design at the University of Tasmania in Hobart, Australia. His research focuses on search space analysis and the design of solution representations for complex, real-world problems. He has conducted extensive research on ant colony optimization and differential evolution, and his 50+ peer-reviewed publications include 10+ that have been presented at previous CEC events.


Dynamic Parameter Choices in Evolutionary Computation


One of the most challenging problems in solving optimization problems with evolutionary algorithms and other optimization heuristics is the selection of the control parameters that determine their behavior. In state-of-the-art heuristics, several control parameters need to be set, and their setting typically has an important impact on the performance of the algorithm. For example, in evolutionary algorithms, we typically need to choose the population size, the mutation strength, the crossover rate, the selective pressure, etc.
Two principal approaches to the parameter selection problem exist:
(1) parameter tuning, which asks to find parameters that are most suitable for the problem instances at hand, and
(2) parameter control, which aims to identify good parameter settings “on the fly”, i.e., during the optimization itself.
Parameter control has the advantage that no prior training is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
While parameter control is indispensable in continuous optimization, it is far from being well-established in discrete optimization heuristics. The ambition of this tutorial is therefore to change this situation, by informing participants about different parameter control techniques, and by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
Our tutorial addresses experimentally oriented as well as theory-oriented researchers, and requires only basic knowledge of optimization heuristics.
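As a concrete illustration of parameter control "on the fly", here is a minimal sketch (this editor's example, not tutorial material) of a (1+1)-ES whose mutation step size is adapted online by the classic 1/5th success rule: enlarge the step after an improving move, shrink it otherwise, so the step size settles where roughly one in five moves succeeds.

```python
import random

def one_fifth_ea(dim=5, iters=500, seed=0):
    """(1+1)-ES minimizing the sphere function with 1/5th-rule control."""
    random.seed(seed)
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = sum(v * v for v in x)
    sigma = 1.0                        # mutation step size, adapted online
    for _ in range(iters):
        y = [v + sigma * random.gauss(0, 1) for v in x]
        fy = sum(v * v for v in y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.5               # success: take larger steps
        else:
            sigma *= 1.5 ** -0.25      # failure: focus the search
    return fx

print(one_fifth_ea())                  # converges toward 0 on the sphere
```

The asymmetric factors (one increase per four decreases cancels out) make the step size stationary exactly at a 1/5 success rate, which is the mechanism behind the rule's name.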

Tutorial Presenters (names with affiliations):

– Carola Doerr, Sorbonne University, Paris, France

– Gregor Papa, Jožef Stefan Institute, Ljubljana, Slovenia

Tutorial Presenters’ Bios:

  • Carola Doerr (Doerr@lip6.fr) is a permanent CNRS researcher at Sorbonne University in Paris, France. She studied Mathematics at Kiel University (Germany, 2003-2007, Diplom) and Computer Science at the Max Planck Institute for Informatics and Saarland University (Germany, 2010-2011, PhD). Before joining the CNRS she was a post-doc at Paris Diderot University (Paris 7) and the Max Planck Institute for Informatics. From 2007 to 2009, Carola Doerr worked as a business consultant for McKinsey & Company, where her interest in evolutionary algorithms originated.
    Carola Doerr’s main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community.
    Carola Doerr has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She chaired the program committee of FOGA 2019 and previously chaired the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues of Algorithmica. She is also vice chair of the EU-funded COST Action 15140 on "Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)".
  • Gregor Papa (papa@ijs.si) is a Senior Researcher and Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received the PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.
    Gregor Papa’s research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on the dynamic setting of algorithms’ control parameters. His work is published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004, and has led and participated in several national and European projects.
    Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.



Cartesian Genetic Programming and its Applications


The goal of this tutorial is to acquaint the WCCI (CEC) community with the principles and state-of-the-art results of Cartesian Genetic Programming (CGP), a well-known form of Genetic Programming developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer address-based genetic representation of a program in the form of a directed graph. In a number of studies, CGP has been shown to be competitive with other GP techniques. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent CGP), the encoding of artificial neural networks, and automatically defined functions (modular CGP). CGP is capable of creating programs and circuits not only with the requested functionality, but also in a very compact form, which is interesting, for example, in low-power computing. This tutorial also presents various methods (such as formal verification techniques) that have been developed to address the so-called scalability problem of evolutionary design. We will demonstrate that CGP can provide human-competitive results in the areas of automated design of logic circuits, approximate circuits, image operator design, and neural networks.
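To illustrate the integer address-based representation, here is a hypothetical hand-written genome in a classic single-row CGP encoding (the primitive set, genome and decoder are this editor's assumptions, not tutorial material): each node gene is a (function, input1, input2) triple whose inputs address either program inputs or earlier nodes, and a final gene selects which node drives the output.

```python
# Node functions over single-bit ints (an assumed primitive set).
FUNCS = [lambda a, b: a & b,        # 0: AND
         lambda a, b: a | b,        # 1: OR
         lambda a, b: a ^ b,        # 2: XOR
         lambda a, b: ~a & 1]       # 3: NOT (ignores b)

def cgp_eval(genome, inputs):
    """Decode and run a single-row CGP genome: node genes are
    (function, input1, input2) triples, the last gene is the output."""
    *nodes, out = genome
    values = list(inputs)           # addresses 0..len(inputs)-1
    for f, a, b in nodes:           # feed-forward over earlier values
        values.append(FUNCS[f](values[a], values[b]))
    return values[out]

# Hypothetical genome computing XOR from AND/OR/NOT primitives:
# node2 = in0 OR in1, node3 = in0 AND in1, node4 = NOT node3,
# node5 = node2 AND node4; output = node5
genome = [(1, 0, 1), (0, 0, 1), (3, 3, 3), (0, 2, 4), 5]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cgp_eval(genome, (a, b)))   # XOR truth table
```

In actual CGP, such integer genomes are evolved, typically with point mutation and a (1+4) evolutionary strategy; nodes not reachable from the output are simply inactive, which is part of what makes the representation compact and flexible.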

Detailed Bios:

Prof. Lukas Sekanina received all his degrees from Brno University of Technology, Czech Republic (Ing. in 1999, Ph.D. in 2002), where he is currently a full professor and Head of the Department of Computer Systems. He was awarded the Fulbright scholarship and worked on evolutionary circuit design with the NASA Jet Propulsion Laboratory in Pasadena in 2004. He was a visiting lecturer at Pennsylvania State University (2001) and Universidad Politécnica de Madrid (2012), and a visiting researcher at the University of Oslo in 2001. Awards: Gold (2015), Silver (2011, 2008) and Bronze (2018) medals from the Human-Competitive Awards in genetic and evolutionary computation at GECCO; Siemens Award for outstanding PhD thesis in 2003; Siemens Award for outstanding research monograph in 2005; best paper/poster awards (e.g. DATE 2017, NASA/ESA AHS 2013, EvoHOT 2005, DDECS 2002); keynote conference speaker (e.g. IEEE SSCI-ICES 2015, DCIS 2014, ARCS 2013, UC 2009). He has served as a program committee member of many conferences (e.g. DATE, FPL, ReConFig, DDECS, GECCO, IEEE CEC, ICES, AHS, EuroGP), Associate Editor of IEEE Transactions on Evolutionary Computation (2011-2014), and editorial board member of the Genetic Programming and Evolvable Machines journal and the International Journal of Innovative Computing and Applications. He served as General Chair of the 16th IEEE Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS 2013), Program Co-Chair of EuroGP 2018-2019, DTIS 2016 and ICES 2008, and Topic Chair of DATE 2020 (D10 – Approximate Computing). Prof. Sekanina is the author of Evolvable Components (a monograph published by Springer Verlag in 2004). He has co-authored over 180 papers, mainly on evolvable hardware, approximate circuits and bio-inspired computing. He is a Senior Member of the IEEE.

Dr. Julian Miller received his BSc in Physics at the University of London in 1980. He obtained his PhD on nonlinear wave interaction in 1988 at City University, UK. He is currently an Honorary Fellow of the Department of Electronic Engineering at the University of York. His main research interests are genetic programming (GP) and computational development. He is a highly cited and well-known researcher with over 12000 citations and an h-index of 42. He has published over 240 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics. He has been chair or co-chair of eighteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques. Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was Genetic Programming track chair in 2008. He was co-chair of the Generative and Developmental Systems (GDS) track in 2007, 2009 and 2010. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing. He received the EvoStar Award in 2011 for outstanding contribution to evolutionary computation.

Recent Advances in Landscape Analysis for Optimisation


The goal of this tutorial is to provide an overview of recent advances in landscape analysis for optimisation. The subject matter will be relevant to delegates who are interested in applying landscape analysis for the first time, but also to those involved in landscape analysis research to obtain a broader view of recent developments in the field. Fitness landscapes were first introduced to aid in the understanding of genetic evolution, but techniques were later developed for analysing fitness landscapes in the context of evolutionary computation. In the last decade, the field has experienced a large upswing in research, evident in the increased number of published papers on the topic as well as the appearance of tutorials, workshops and special sessions at all the major evolutionary computation conferences.

One of the changes that has emerged over the last decade is that the notion of fitness landscapes has been extended to include other views such as exploratory landscapes, landscape models (such as local optima networks), violation landscapes, error landscapes and loss landscapes. This tutorial will provide an overview of these different views on search spaces and how they relate. A number of new techniques for analysing landscapes have been developed and these will also be covered. In addition, an overview of recent applications of landscape analysis will be provided for understanding complex problems and algorithm behaviour, predicting algorithm performance and for automated algorithm configuration and selection. Case studies of the use of landscape  analysis in both discrete and continuous domains will be presented. Finally, the tutorial will highlight some opportunities for future research in landscape analysis.
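Among the classic techniques covered is estimating landscape ruggedness from the autocorrelation of fitness values along a random walk. The sketch below is purely illustrative (the function names and the OneMax toy landscape are our own choices, not the tutorial's materials):

```python
import random

def random_walk_fitnesses(fitness, start, neighbor, steps):
    """Record fitness values along a random walk through the search space."""
    x = start
    values = [fitness(x)]
    for _ in range(steps):
        x = neighbor(x)
        values.append(fitness(x))
    return values

def autocorrelation(values, lag=1):
    """Estimate the lag-k autocorrelation of a fitness time series.
    Values near 1 suggest a smooth landscape, values near 0 a rugged one."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0:
        return 1.0
    cov = sum((values[i] - mean) * (values[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Toy example: OneMax under a single-bit-flip neighbourhood.
random.seed(0)
n = 50
onemax = lambda bits: sum(bits)

def flip_one(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

walk = random_walk_fitnesses(onemax,
                             [random.randint(0, 1) for _ in range(n)],
                             flip_one, steps=2000)
r1 = autocorrelation(walk, lag=1)
print(round(r1, 2))  # close to 1: OneMax is a smooth landscape
```

The same estimator applied to a rugged landscape (e.g. a random NK instance with large K) would give a value much closer to 0.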

Tutorial Presenters (names with affiliations):

Katherine Malan, University of South Africa

Gabriela Ochoa, University of Stirling, Scotland

Tutorial Presenters’ Bios:

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria and her MSc & BSc degrees from the University of Cape Town. She has over 20 years’ lecturing experience, mostly in Computer Science, at three different South African universities and has co-authored two programming textbooks. Her research interests include fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is particularly interested in the link between fitness landscape features and evolutionary algorithm behaviour with the aim of developing intelligent landscape-aware search. She is a senior member of the IEEE and has served as a volunteer in a number of roles including chair of the South African chapter of CIS and finance chair of SSCI 2015.

Gabriela Ochoa is a Professor in Computing Science at the University of Stirling, Scotland, UK. She received a PhD in Computer Science from the University of Sussex, UK. She worked in industry for five years before joining academia and has held faculty and research positions at the University Simon Bolivar, Venezuela and the University of Nottingham, UK before joining the University of Stirling. Her research interests lie in the foundations and application of evolutionary algorithms and heuristic search methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis and visualisation. She was associate editor of the IEEE Trans. on Evolutionary Computation, is associate editor of the Evolutionary Computation Journal, as well as of the newly established ACM Trans. on Evolutionary Learning and Optimization (TELO), and is a member of the editorial board for Genetic Programming and Evolvable Machines. She has served as organiser for various international events including IEEE CEC, PPSN, FOGA, EvoSTAR and GECCO, and served as the Editor-in-Chief for GECCO 2017. She is a member of the executive boards of both ACM SIGEVO (the ACM special interest group on evolutionary computation, whose quarterly newsletter she edits) and EvoSTAR, the leading European event on bio-inspired computing.

Evolutionary Bilevel Optimization


Many practical optimization problems should better be posed as bilevel optimization problems in which there are two levels of optimization tasks. A solution at the upper level is feasible if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper level optimization problem of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the “nestedness” of one optimization task into another.

Bilevel Optimization, Bilevel Multi-objective Optimization, Evolutionary Algorithms, Multi-Criteria Decision Making, Theory on Bilevel Programming, Hierarchical Decision Making, Bilevel Applications, Hybrid Algorithms
Tutorial Description
What is Bilevel Programming?


To begin with, an introduction is provided to bilevel optimization problems. A generic formulation for bilevel problems is presented and its differences from ordinary single level optimization problems are discussed.
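One widely used general form of the bilevel problem, sketched here for reference (notation is illustrative and not necessarily the one used in the tutorial), makes the nesting explicit: the lower-level variables must be optimal for the lower-level problem parameterised by the upper-level choice.

```latex
\begin{align*}
\min_{x_u,\, x_l}\quad & F(x_u, x_l) && \text{(upper-level objective)}\\
\text{s.t.}\quad & G_k(x_u, x_l) \le 0, \quad k = 1,\ldots,K && \text{(upper-level constraints)}\\
& x_l \in \operatorname*{arg\,min}_{y} \left\{\, f(x_u, y) \;:\; g_j(x_u, y) \le 0,\ j = 1,\ldots,J \,\right\} && \text{(lower-level problem)}
\end{align*}
```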
Bilevel Problems: A Genesis


The origin of bilevel problems can be traced to two roots: game theory and mathematical programming. The genesis of these problems is illustrated through simple practical examples.
Properties of Bilevel Problems


The properties of bilevel optimization problems are discussed. Difficulties encountered in solving these problems are highlighted.


Practical Applications


A number of practical applications from the areas of process optimization, game-playing strategy development, transportation problems, optimal control, environmental economics and coordination of multi-divisional firms are described to highlight the practical relevance of bilevel programming.
Solution Methodologies


Existing solution methodologies for bilevel optimization and their weaknesses are discussed. Lack of efficient methodologies is underlined and the need for better solution approaches is emphasized.
EAs Niche


Evolutionary algorithms provide a convenient framework for handling complex bilevel problems. Co-evolutionary ideas and flexibility in operator design can help in efficiently tackling the problem.
Multi-objective Extensions


Recent algorithms and results on multi-objective bilevel optimization using evolutionary algorithms are discussed and some application problems are highlighted.
Future Research Ideas


A number of immediate and future research ideas on bilevel optimization are highlighted related to decision making difficulties and robustness.


Concluding remarks for the tutorial are provided.


A list of references on bilevel optimization is provided.
Target Audience
Bilevel optimization belongs to a difficult class of optimization problems. Most classical optimization methods are unable to solve even simple instances of bilevel problems. This offers a niche to researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and certainly requires the attention of researchers working on evolutionary computation. The target audience for this tutorial will be researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts of bilevel optimization and the recent results easily accessible to the audience.
Short Biography
Ankur Sinha is an Associate Professor at the Indian Institute of Management, Ahmedabad, India. He completed his PhD at the Helsinki School of Economics (now Aalto University School of Business), where his PhD thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor’s degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his PhD, he has held visiting positions at Michigan State University and Aalto University. His research interests include bilevel optimization, multi-criteria decision making and evolutionary algorithms. He has offered tutorials on evolutionary bilevel optimization at GECCO 2013, PPSN 2014, and CEC 2015, 2017, 2018 and 2019. His research has been published in some of the leading computer science, business and statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page:
Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in computer science in India during the preceding ten years. He is a fellow of the IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 500 international journal and conference research papers. He has pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found from

Theoretical Foundations of Evolutionary Computation for Beginners and Veterans


Goals: There exist more than 40 years of theoretical research in evolutionary computation. However, given the focus on runtime analysis in the last 10 years, much of this theory is not well understood in the EC community. For example, it is not widely known that the behavior of an evolutionary algorithm can be influenced by attractors that exist outside the space of the feasible population. The tutorial will mainly focus on the application of evolutionary algorithms to combinatorial problems. The tutorial will also use easy-to-understand examples.

Plan and Outline: This talk will cover some of the classic theoretical results from the field of evolutionary algorithms,  as well as more general theory from operations research and mathematical methods for optimization.   The tutorial will review pseudo-Boolean optimization, and the representation of functions as both multi-linear polynomials and Fourier polynomials.   It will also explain how every pseudo-Boolean optimization problem can be converted into a k-bounded form.  And for every k-bounded pseudo-Boolean optimization problem, the location of improving moves (i.e., improving bit flips) can be computed in constant time, making simple mutation operators unnecessary.   It is also not well known that many classic parameter optimization problems (Rosenbrock, Rastrigin, and the entire DeJong Test suite) can be expressed as low order k-bounded pseudo-Boolean functions,  even at “double precision” for arbitrary numbers of variables.
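The locality argument behind constant-time improving moves can be made concrete with a small sketch (a toy instance of our own, not the tutorial's code): a k-bounded function is a sum of subfunctions over at most k bits each, so the gain of flipping a bit depends only on the subfunctions containing that bit.

```python
import itertools

def make_incidence(subfunctions, n):
    """For each bit, list the indices of subfunctions that depend on it."""
    incidence = [[] for _ in range(n)]
    for idx, (variables, _table) in enumerate(subfunctions):
        for v in variables:
            incidence[v].append(idx)
    return incidence

def evaluate(subfunctions, x):
    """Full evaluation: sum every subfunction over its own variables."""
    return sum(table[tuple(x[v] for v in variables)]
               for variables, table in subfunctions)

def flip_gain(subfunctions, incidence, x, i):
    """Fitness change of flipping bit i, touching only local subfunctions."""
    gain = 0
    for idx in incidence[i]:
        variables, table = subfunctions[idx]
        before = table[tuple(x[v] for v in variables)]
        x[i] ^= 1
        after = table[tuple(x[v] for v in variables)]
        x[i] ^= 1  # restore
        gain += after - before
    return gain

# Toy 2-bounded instance on 4 bits: f(x) = (x0 AND x1) + (x2 XOR x3).
subfunctions = [
    ((0, 1), {t: t[0] & t[1] for t in itertools.product((0, 1), repeat=2)}),
    ((2, 3), {t: t[0] ^ t[1] for t in itertools.product((0, 1), repeat=2)}),
]
incidence = make_incidence(subfunctions, 4)
x = [1, 0, 0, 0]
print(evaluate(subfunctions, x))                  # → 0
print(flip_gain(subfunctions, incidence, x, 1))   # → 1: flipping x1 helps
```

When each bit appears in a bounded number of subfunctions, each `flip_gain` call does constant work, which is the property the tutorial's argument about cheap improving moves relies on.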

The tutorial will cover 1) No Free Lunch (NFL),  2) Sharpened No Free Lunch and 3) Focused No Free Lunch, and how the different theoretical proofs can lead to seemingly different and even contradictory conclusions.   (What many researchers know about NFL might actually be wrong.)

The tutorial will also cover the relationship between functions and representations, the space of all possible representations and the duality between search algorithm and landscapes.    This perspective is critical to understanding landscape analysis, landscape visualization, variable neighborhood search methods, memetic algorithms, and self-adaptive search methods.

Other topics include both infinite and finite models of population trajectories.  The tutorial will explain both Elementary Landscapes and eigenvectors of search neighborhoods in simple terms and explain how the two are related.    Example domains include classic NP-Hard problems such as Graph Coloring and MAX-kSAT.

Justification:  The tutorial will explore what theory can contribute to application-focused researchers.  Theory can be used not only to look at convergence behavior, but also to understand problem structure.  It can also provide new tools to researchers. Every researcher in the field of Evolutionary Computation needs to be a wiser consumer of both theoretical and empirical results.   The tutorial will be for 1.5 hours.   Prof. Whitley has regularly had audiences of 50 to 75 people (and up to 175 people) at his tutorials and has presented tutorials at CEC, GECCO, AAAI and IJCAI.   The tutorial will provide new insights to both beginners and veterans in the field of Evolutionary Computation.


Prof. Darrell Whitley has been active in Evolutionary Computation  since 1986, and has published more than 250 papers.    These papers have garnered more than 24,000 citations.    Dr. Whitley’s H-index is 65.   He introduced the first “steady state genetic algorithm” with rank based selection,  published the first papers on neuroevolution,  and has worked on dozens of real world applications  of evolutionary algorithms.     He has served as Editor-in-Chief of the journal Evolutionary Computation,  and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011.    Prof. Whitley was recently made an ACM Fellow for his many contributions to the field of Evolutionary Computation.  He is also the Co-Editor-in-Chief of the new ACM Transactions on Evolutionary Learning and Optimization.

Demos and Code: Code illustrating these techniques for the Traveling Salesman Problem, MAXSAT and NK-Landscapes will be made available online. Two tutorials covering much of the material from this talk will also be made available online.

A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms


Great advances have been made in recent years towards the runtime complexity analysis of evolutionary algorithms for combinatorial optimisation problems. Much of this progress has been due to the application of techniques from the study of randomised algorithms. The first pieces of work, started in the 90s, were directed towards analysing simple toy problems with significant structures. This work had two main goals:

1. to understand on which kind of landscapes EAs are efficient, and when they are not
2. to develop the first basis of general mathematical techniques needed to perform the analysis.

Thanks to this preliminary work, nowadays, it is possible to analyse the runtime of evolutionary algorithms on different combinatorial optimisation problems. In this beginners’ tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge about time complexity analysis.
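To give a flavour of the objects such analyses describe, the sketch below (illustrative only; the function names and parameters are our own) runs the (1+1) EA on OneMax, for which the expected runtime is known to be Theta(n log n):

```python
import random

def one_plus_one_ea(n, fitness, max_evals=10**6, rng=random):
    """Minimal (1+1) EA: flip each bit independently with probability 1/n,
    keep the offspring if it is at least as good.  This is the standard
    object of runtime analysis; on OneMax (fitness = number of ones) its
    expected runtime is Theta(n log n)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for evals in range(1, max_evals + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:           # optimum reached (OneMax-specific check)
            return evals
    return max_evals

random.seed(1)
n = 100
runs = [one_plus_one_ea(n, sum) for _ in range(10)]
avg = sum(runs) / len(runs)
print(avg)  # empirically on the order of e * n * ln(n), roughly 1250 for n = 100
```

A run of this kind does not replace a proof, but it lets one check that the asymptotic bounds discussed in the tutorial describe what the algorithm actually does.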

Tutorial Presenters (names with affiliations): 

Pietro S. Oliveto, University of Sheffield

Tutorial Presenter Bio:

Pietro S. Oliveto is a Senior Lecturer and EPSRC-funded Early Career Fellow at the University of Sheffield, UK.
He received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree from the University of Birmingham, UK, in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener’s research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and EPSRC Postdoctoral Fellow in Theoretical Computer Science for three years at the University of Birmingham. From 2013 to 2016 he was a Vice-Chancellor’s Fellow at the University of Sheffield.
His main research interest is the rigorous performance analysis of randomised search heuristics for combinatorial optimisation. He has published several runtime analysis papers on evolutionary algorithms, artificial immune systems, hyper-heuristics and genetic programming. He has won best paper awards at GECCO’08, ICARIS’11 and GECCO’14.
Dr. Oliveto has given several tutorials at GECCO, CEC and PPSN on the runtime analysis of evolutionary algorithms and a recent one on the analysis of genetic programming. He is an associate editor of IEEE Transactions on Evolutionary Computation.

Evolutionary computation for games: learning, planning, and designing


This tutorial introduces several techniques and application areas for evolutionary computation in games, such as board games and video games. We will give a broad overview of the use cases and popular methods for evolutionary computation in games, and in particular cover the use of evolutionary computation for learning policies (evolutionary reinforcement learning using neuroevolution), planning (rolling horizon and online planning), and designing (search-based procedural content generation). The basic principles will be explained and illustrated by examples from our own research as well as others’ research.

Tentative outline:

  • Introduction: who are we, what are we talking about?
  • Why do research on evolutionary computation and games?
  • Part 1: Playing games
    • Reasons for building game-playing AI
    • Characteristics of games (and how they affect game-playing algorithms)
    • Reinforcement learning through evolution
    • Neuroevolution
    • Planning with evolution
    • Single-agent games (rolling horizon evolution)
    • Multi-agent games (online evolution)
  • Part 2: Designing and developing games
    • The need for AI in game design and development
    • Procedural content generation
    • Search-based procedural content generation
    • Procedural content generation via machine learning (PCGML)
    • Game balancing
    • Game testing
    • Game adaptation
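One of the planning approaches listed above, rolling horizon evolution, can be sketched in a few lines (a hedged toy example of our own; the function names and the one-dimensional game are not from the tutorial): evolve a short action sequence against a forward model, execute only its first action, then re-plan.

```python
import random

def rolling_horizon_action(state, forward_model, heuristic, actions,
                           horizon=10, pop_size=20, generations=30,
                           rng=random):
    """Return the first action of the best evolved action sequence."""
    def rollout(plan):
        s = state
        for a in plan:                 # simulate the plan with the forward model
            s = forward_model(s, a)
        return heuristic(s)            # score the reached state

    population = [[rng.choice(actions) for _ in range(horizon)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=rollout, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        for p in parents:              # one random-gene mutation per child
            child = p[:]
            child[rng.randrange(horizon)] = rng.choice(actions)
            children.append(child)
        population = parents + children
    best = max(population, key=rollout)
    return best[0]

# Toy game: move along a line, trying to end near a goal at +5.
rng = random.Random(0)
forward = lambda s, a: s + a
score = lambda s: -abs(5 - s)
a0 = rolling_horizon_action(0, forward, score, actions=(-1, 0, 1), rng=rng)
print(a0)  # the chosen first action, drawn from (-1, 0, 1)
```

In a real game the forward model is the game simulator and the heuristic a state evaluation; the loop above is repeated every game tick with the newly observed state.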

Tutorial Presenters:

Julian Togelius

Associate Professor

Department of Computer Science and Engineering

Tandon School of Engineering

New York University

2 MetroTech Center, Brooklyn, NY 11201, USA

Co-director of the NYU Game Innovation Lab

Editor-in-Chief, IEEE Transactions on Games

Jialin Liu

Research Assistant Professor

Optimization and Learning Laboratory (OPAL Lab)

Department of Computer Science and Engineering (CSE)

Southern University of Science and Technology (SUSTech)

Shenzhen, China

Associate Editor, IEEE Transactions on Games

Tutorial Presenters’ Bios:

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on artificial intelligence for games and games for artificial intelligence. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, generating games based on open data, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Games, and has been chair or program chair of several of the main conferences on AI and Games. Julian holds a BA from Lund University, an MSc from the University of Sussex, and a Ph.D. from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

Jialin Liu is currently a Research Assistant Professor in the Department of Computer Science and Engineering of Southern University of Science and Technology (SUSTech), China. Before joining SUSTech, she was a Postdoctoral Research Associate at Queen Mary University of London (QMUL), UK, and one of the founding members of the Game AI research group at QMUL. Her research interests include AI and games, noisy optimisation, portfolios of algorithms and meta-heuristics. Jialin is an Associate Editor of IEEE Transactions on Games. She has served as Program Co-Chair of the 2018 IEEE Conference on Computational Intelligence and Games (IEEE CIG 2018) and as Competition Chair of several main conferences on evolutionary computation and AI and games. She is also chairing the IEEE CIS Games Technical Committee.

Niching Methods for Multimodal Optimization


Population or single solution search-based optimization algorithms (i.e. {meta,hyper}-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include among others evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multimodal” by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose one that is most proper in his/her problem domain. Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as “niching” methods. A niching method can be incorporated into a standard search-based optimization algorithm, in a sequential or parallel way, with an aim to locate multiple globally optimal or suboptimal solutions. Sequential approaches locate optimal solutions progressively over time, while parallel approaches promote and maintain formation of multiple stable subpopulations within a single population. Many niching methods have been developed in the past, including crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. In more recent times, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization, Differential Evolution and Evolution Strategies.

In this tutorial we will aim to provide an introduction to niching methods, including their historical background and the motivation for employing niching in EAs. We will present in detail a few classic niching methods, such as the fitness sharing and crowding methods. We will also provide a review of several new niching methods that have been developed for meta-heuristics such as Particle Swarm Optimization and Differential Evolution. Employing niching methods in real-world situations still faces significant challenges, and this tutorial will discuss several such difficulties. In particular, niching in static and dynamic environments will be specifically addressed. Following this, we will present a suite of new niching benchmark functions specifically designed to reflect the characteristics of these challenges. Performance metrics for comparing niching methods will also be presented and their merits and shortcomings will be discussed. Experimental results across both classic and more recently developed niching methods will be analyzed based on selected performance metrics. Apart from benchmark test functions, several examples of applying niching methods to solving real-world optimization problems will be provided. This tutorial will use several demos to show the workings of niching methods.
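As a concrete taste of one classic method mentioned above, the sketch below implements fitness sharing (the helper names and toy population are our own, not the tutorial's code): each raw fitness is divided by a niche count, so individuals crowding the same peak look less attractive to selection.

```python
def sharing_function(distance, sigma_share, alpha=1.0):
    """Classic sharing kernel: contributes to the niche count only
    for neighbours within radius sigma_share."""
    if distance < sigma_share:
        return 1.0 - (distance / sigma_share) ** alpha
    return 0.0

def shared_fitness(population, raw_fitness, distance, sigma_share):
    """Divide each raw fitness by its niche count, so crowded peaks
    become less attractive and the population spreads across optima."""
    shared = []
    for xi in population:
        niche_count = sum(sharing_function(distance(xi, xj), sigma_share)
                          for xj in population)
        shared.append(raw_fitness(xi) / niche_count)
    return shared

# Toy example on the real line: two individuals crowd near x = 0,
# one sits alone near x = 5; all have equal raw fitness.
pop = [0.0, 0.1, 5.0]
f = lambda x: 1.0
d = lambda a, b: abs(a - b)
sf = shared_fitness(pop, f, d, sigma_share=1.0)
print(sf)  # the lone individual keeps fitness 1.0; the crowded pair is penalised
```

The choice of the niche radius sigma_share is the method's main difficulty in practice, which is one reason the tutorial also discusses newer, parameter-lighter niching techniques.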

This tutorial is supported by the IEEE CIS Task Force on Multi-modal Optimization.

Targeted audience

This tutorial should be of interest to both beginners and more experienced niching researchers, since a tutorial on niching is probably the first of its kind in many years of CEC history. The tutorial will provide a unique opportunity to update the research community on this classic EC subfield, which has been attracting increasing attention lately. We expect the tutorial to last 2 hours.

Course material

The tutorial material will be made available prior to WCCI/CEC’2020 via a web-site that will be associated with the WCCI/CEC’2020 special session on niching methods for multimodal optimization.


The tutorial presenters will include Dr. Mike Preuss (WWU Muenster, Germany), Dr. Michael Epitropakis (The Signal Group, Greece), and Professor Xiaodong Li (RMIT University, Australia). All of them have significant research experience in designing and developing niching methods. They have successfully organized various events (special sessions, workshops and competitions) in the area of multimodal optimization, and serve on the chairing board of the IEEE CIS Task Force on Multi-modal Optimization. See URL:


Asst. Prof. Mike Preuss

Leiden Institute of Advanced Computer Science

Niels Bohrweg 1

2333 CA Leiden

The Netherlands


Dr. Michael G. Epitropakis

The Signal Group,

Athens, Greece


Professor Xiaodong Li

School of Science (Computer Science and Software Engineering)

RMIT University

Melbourne, VIC 3001, Australia


Organizer Bios:

Mike Preuss is Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Muenster, Germany), and before that with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests lie in the field of evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and in computational intelligence and machine learning methods for computer games, especially procedural content generation (PCG) and real-time strategy (RTS) games.

Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Senior Research Scientist and a Product Manager at The Signal Group, Athens, Greece. From 2015 to 2018 he was a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher on multi-modal optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society.

Xiaodong Li received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization (niching), and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL’08, a Program Co-Chair of AI’09, a Program Co-Chair for IEEE CEC’2012, and a General Chair for ACALCI’2017 and AI’17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”.

Computational Complexity Analysis of Genetic Programming


Genetic Programming (GP) is an evolutionary computation paradigm that aims to evolve computer programs. Compared to the great number of successful applications of GP that have been reported, the theoretical understanding of its underlying working principles lags far behind. In particular, the identification of which classes of computer programs can be provably evolved efficiently via GP has progressed slowly compared to the understanding of the performance of traditional evolutionary algorithms (EAs) for function optimisation.

The main reason for the slow progress is that the analysis of GP systems is considerably more involved. Firstly, the analysis is complicated by the variable length of programs compared to the fixed solution representation used in EAs. Secondly, understanding the quality of a candidate program is challenging because it is not possible to evaluate its accuracy over all possible inputs.

Nevertheless, significant advances have been made in recent years towards the computational complexity analysis of GP. Rather than tackling complete GP applications, the first pieces of work isolated particular aspects and challenges occurring in the GP evolutionary process. Nowadays it is possible to analyse the time and space complexity of GP algorithms for evolving proper programs with input/output relationships where the fitness of candidate solutions is evaluated by comparing their accuracy on input/output samples of a polynomially-sized training set (e.g., Boolean functions). In this tutorial, we give an overview of the field, outlining the techniques used and the challenges involved.
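The evaluation scheme described above can be sketched in a few lines (a hedged toy example; the Python callables below merely stand in for evolved program trees): fitness is the number of sampled input/output cases a candidate program matches, rather than its accuracy over all 2^n inputs.

```python
import itertools
import random

def fitness_on_sample(program, target, sample):
    """Score a candidate program by its accuracy on a sampled training
    set of input/output cases, as in runtime analyses of GP on Boolean
    functions."""
    return sum(program(*case) == target(*case) for case in sample)

# Target Boolean function and some hypothetical candidate programs.
target = lambda a, b, c: (a and b) or c
candidates = {
    "correct": lambda a, b, c: (a and b) or c,
    "close":   lambda a, b, c: a or c,
    "wrong":   lambda a, b, c: not c,
}

rng = random.Random(0)
all_inputs = list(itertools.product((False, True), repeat=3))
sample = rng.sample(all_inputs, 6)   # polynomial-size training set
scores = {name: fitness_on_sample(p, target, sample)
          for name, p in candidates.items()}
print(scores["correct"])  # → 6: agrees on every training case
```

The analytical difficulty mentioned above shows up directly here: a candidate can score perfectly on the sample while still disagreeing with the target on unsampled inputs, so generalisation has to be argued separately.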

Tutorial Presenters (names with affiliations):

Andrei Lissovoi, University of Sheffield, UK

Pietro S. Oliveto, University of Sheffield, UK

Tutorial Presenters’ Bios:

Andrei Lissovoi is a Research Associate in the Rigorous Research team at the University  of Sheffield, UK. He received the MSc and PhD degrees in computer science from the Technical  University of Denmark in 2012 and 2016 respectively. His main research interest is the time complexity analysis of nature-inspired algorithms. He has published several runtime analysis papers on Evolutionary Algorithms, Ant Colony Optimisation algorithms, and parallel evolutionary algorithms for dynamic optimisation problems. His recent work on GP includes runtime analyses of GP systems for evolving Boolean functions (AAAI-18, GECCO’19) and a survey book chapter in the recent Springer book on theory of evolutionary computation.

Pietro S. Oliveto is a Senior Lecturer and EPSRC-funded Early Career Fellow at the University of Sheffield, UK.
He received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree from the University of Birmingham, UK, in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener’s research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and EPSRC Postdoctoral Fellow in Theoretical Computer Science for three years at the University of Birmingham. From 2013 to 2016 he was a Vice-Chancellor’s Fellow at the University of Sheffield.
His main research interest is the rigorous performance analysis of randomised search heuristics for combinatorial optimisation. He has published several runtime analysis papers on evolutionary algorithms, artificial immune systems, hyper-heuristics and genetic programming. He has won best paper awards at GECCO’08, ICARIS’11 and GECCO’14.
Dr. Oliveto has given several tutorials at GECCO, CEC and PPSN on the runtime analysis of evolutionary algorithms and a recent one on the analysis of genetic programming. He is an associate editor of IEEE Transactions on Evolutionary Computation.

Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler


IOHprofiler is a new benchmarking environment that has been developed for a highly versatile analysis of iterative optimization heuristics (IOHs) such as evolutionary algorithms, local search algorithms, model-based heuristics, etc. A key design principle of IOHprofiler is its highly modular setup, which makes it easy for its users to add algorithms, problems, and performance criteria of their choice. IOHprofiler is also useful for the in-depth analysis of the evolution of adaptive parameters, which can be plotted against fixed-targets or fixed-budgets. The analysis of robustness is also supported.

IOHprofiler supports all types of optimization problems and is not restricted to a particular search domain. A web-based interface for the analysis procedure is available online; the tool itself is distributed through GitHub and as a CRAN package.

The tutorial addresses all CEC participants interested in analyzing and comparing heuristic solvers. By the end of the tutorial, participants will know how to benchmark different solvers with IOHprofiler, which performance statistics it supports, and how to contribute to its design.
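The fixed-target perspective mentioned above can be illustrated with a small self-contained sketch (this does not use IOHprofiler's actual API; the solver, problem, and logging code here are illustrative): for each target value we record the first evaluation at which the best-so-far fitness reaches it.

```python
import random

def onemax(bits):
    """Toy maximization problem: number of ones in the bit string."""
    return sum(bits)

def rls_fixed_target(n, budget, targets):
    """Random Local Search on OneMax, logging the first hitting time of
    each target value -- the fixed-target data that benchmarking tools
    such as IOHprofiler collect (illustrative sketch, not the real API)."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = onemax(x)
    hits = {}  # target value -> first evaluation reaching it
    for evals in range(1, budget + 1):
        i = random.randrange(n)          # flip one uniformly chosen bit
        y = x[:]
        y[i] = 1 - y[i]
        fy = onemax(y)
        if fy >= best:                   # accept if not worse
            x, best = y, fy
        for t in targets:
            if best >= t and t not in hits:
                hits[t] = evals
    return hits

random.seed(0)
hits = rls_fixed_target(n=20, budget=5000, targets=[10, 15, 20])
```

Aggregating such first-hitting times over many runs and problems yields the fixed-target running-time profiles that IOHprofiler visualizes.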

Tutorial Presenters (names with affiliations): 

Thomas Bäck, Leiden University, The Netherlands,

Carola Doerr, CNRS and Sorbonne University, France,

Ofer M. Shir, Tel-Hai College and Migal Institute, Israel,

Hao Wang, Sorbonne University, France.

Tutorial Presenters’ Bios:

  • Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has headed the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer’s Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
  • Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Her main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is/was program chair of PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017. Carola serves on the editorial boards of ACM Transactions on Evolutionary Learning and Optimization and of Evolutionary Computation and was editor of two special issues in Algorithmica. Carola is vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Ofer M. Shir is the Head of the Computer Science Department of Tel-Hai College, and a Principal Investigator at the Migal-Galilee Research Institute – both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
  • Hao Wang obtained his PhD (cum laude, promotor: Prof. Thomas Bäck) from Leiden University in 2018. He is currently a postdoc at Sorbonne University (supervised by Carola Doerr) and has accepted a position as an Assistant Professor at the Leiden Institute of Advanced Computer Science starting in September 2020. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are proposing, improving and analyzing stochastic optimization algorithms, especially Evolution Strategies and Bayesian Optimization. In addition, he works on developing statistical machine learning algorithms for big and complex industrial data, and aims at combining state-of-the-art optimization algorithms with data mining and machine learning techniques to make real-world optimization tasks more efficient and robust.


External website with more information on Tutorial (if applicable): None

Evolutionary Machine Learning


A fusion of Evolutionary Computation and Machine Learning, Evolutionary Machine Learning (EML) has been recognized as a rapidly growing research area that combines these powerful search and learning mechanisms. Many branches of EML, with different learning schemes and different ML problem domains, have been proposed. These branches seek to address common challenges:

  • How evolutionary search can discover optimal ML configurations and parameter settings,
  • How the deterministic models of ML can influence evolutionary mechanisms,
  • How EC and ML can be integrated into one learning model.

Consequently, various insights address principal issues of the EML paradigm that are worth “transferring” across these different specific challenges.

The goal of our tutorial is to present advanced techniques from specific EML branches and then distill them into common insights about the EML paradigm. First, we introduce the common challenges in the EML paradigm and discuss how various EML branches address them. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e., Learning Classifier Systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is organized for beginners as well as experts in the EML field. For beginners, it offers a gentle introduction to EML, from the basics to recent challenges. For experts, our two specific talks present the most recent advances in evolutionary rule-based learning and evolutionary neural networks. Additionally, we will discuss how the insights behind these techniques can be reused in other EML branches, shaping new directions for EML techniques.

Tutorial Presenters (names with affiliations):

  • Masaya Nakata, Associate Professor, Yokohama National University, Japan
  • Shinichi Shirakawa, Lecturer, Yokohama National University, Japan
  • Will Browne, Associate Professor, Victoria University of Wellington, NZ

Tutorial Presenters’ Bios:

Dr. Nakata is an associate professor in the Faculty of Engineering, Yokohama National University, Japan. He received his Ph.D. degree in informatics from the University of Electro-Communications, Japan, in 2016. He works on evolutionary rule-based machine learning, reinforcement learning, and data mining, more specifically on Learning Classifier Systems (LCSs). He was a visiting researcher at Politecnico di Milano, the University of Bristol and Victoria University of Wellington. His contributions have been published in more than 10 journal papers and more than 20 conference papers, including at CEC, GECCO and PPSN. He is an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning (2015-2016, 2018-2019) at the GECCO conference, elected by the international LCS research community. He received the IEEE CIS Japan Chapter Young Researcher Award.

Dr. Shirakawa is a lecturer in the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009. He has worked at Fujitsu Laboratories Ltd., Aoyama Gakuin University, and the University of Tsukuba. His research interests include evolutionary computation, machine learning, and computer vision. He is currently working on evolutionary deep neural networks. His contributions have been published in high-quality journals and conferences in EC and AI, e.g., CEC, GECCO, PPSN, and AAAI. He received the IEEE CIS Japan Chapter Young Researcher Award in 2009 and won the best paper award in the evolutionary machine learning track at GECCO 2017.

Associate Prof Will Browne’s research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and is currently the co-chair for the Evolutionary Machine Learning track at Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs ‘Introduction to Learning Classifier Systems, Springer 2017’. Currently, he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Self-Organizing Migrating Algorithm – Recent Advances and Progress in Swarm Intelligence Algorithms


Self-Organizing Migrating Algorithm (SOMA) belongs to the class of swarm intelligence techniques. SOMA is inspired by competitive-cooperative behavior and uses inherent self-adaptation of movement over the search space, as well as discrete perturbation mimicking the mutation process. SOMA performs well in both continuous and discrete domains. The tutorial consists of several parts.

Firstly, the state of the art in swarm intelligence algorithms will be discussed, along with the similarities and differences between various algorithms and SOMA.

The main part of the tutorial presents a collection of principal findings from original research papers on current trends in parameter control, discrete perturbation, and novel improvement approaches based on and applied to SOMA from the latest scientific events. New and very efficient strategies such as SOMA-T3A (4th place in the 100-digit competition), the recently published SASOMA, and SOMA-Pareto (6th place in the 100-digit competition) will be discussed in detail with demonstrations.

Also, we will describe our original concept for transforming the internal dynamics of swarm algorithms (including SOMA) into a social-like network of interactions among individuals. Analysis of such a network can then be fed directly back into the algorithm to improve its performance.

Finally, experience from more than ten years of work with SOMA, demonstrated on various applications such as control engineering, cybersecurity, combinatorial optimization, and computer games, concludes the tutorial.
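For readers unfamiliar with SOMA, the migration principle described above can be sketched in a few lines. The following is a minimal AllToOne strategy with commonly cited default parameter values; it is an illustrative sketch, not the presenter's reference implementation:

```python
import random

def soma_all_to_one(f, bounds, pop_size=10, migrations=30,
                    path_length=3.0, step=0.11, prt=0.3):
    """Minimal SOMA (AllToOne strategy) for minimization.
    path_length, step and prt are commonly recommended defaults."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(migrations):
        leader = min(pop, key=f)
        for i, x in enumerate(pop):
            if x is leader:
                continue
            # PRT vector: only dimensions with a 1 move towards the leader
            prt_vec = [1 if random.random() < prt else 0
                       for _ in range(dim)]
            best = x
            t = step
            while t <= path_length:   # sample positions along the path
                cand = [xj + (lj - xj) * t * pj
                        for xj, lj, pj in zip(x, leader, prt_vec)]
                cand = [min(max(c, lo), hi)            # keep in bounds
                        for c, (lo, hi) in zip(cand, bounds)]
                if f(cand) < f(best):
                    best = cand
                t += step
            pop[i] = best
    return min(pop, key=f)

random.seed(1)
sphere = lambda v: sum(c * c for c in v)
best = soma_all_to_one(sphere, [(-5.0, 5.0)] * 3)
```

Each individual samples positions along a path towards the leader (overshooting it, since path_length > 1), while the PRT vector restricts movement to a random subset of dimensions — the discrete perturbation mimicking mutation.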

Tutorial Presenters (names with affiliations):

Name: Roman Senkerik
Affiliation: Tomas Bata University in Zlin, Department of Informatics and Artificial Intelligence

Tutorial Presenters’ Bios:

Roman Senkerik was born in Zlin, Czech Republic, in 1981. He received an MSc degree in technical cybernetics from Tomas Bata University in Zlin, Faculty of Applied Informatics, in 2004, a Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Assoc. Prof. degree in Informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at Tomas Bata University in Zlin, Faculty of Applied Informatics. Since 2014 he has been an Associate Professor and Head of the A.I. Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters as well as editorial notes. His research interests are soft computing methods and their interdisciplinary applications in optimization and cyber-security, the development of evolutionary algorithms, machine learning, data science, the theory of chaos, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions/symposiums or IPC/TPC at IEEE WCCI, CEC, SSCI, GECCO, SEMCCO and MENDEL (and more) events. He was a guest editor of several journal special issues and editor of proceedings for several conferences.

Evolutionary Many-Objective Optimization


The goal of this tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades, and its hottest topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may seem that there is no significant difference between three-objective and four-objective problems; however, the increase in the number of objectives makes a multi-objective problem significantly more difficult.

In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deteriorating usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms.

In the second part (Part II: Approaches and Future Directions), we explain how to handle each difficulty, for example how to prevent the Pareto dominance relation from losing its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We categorize approaches to many-objective optimization problems and explain some state-of-the-art many-objective algorithms in each category.

The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, they will also learn that there still exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
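The weakening selection pressure mentioned above can be demonstrated directly: as the number of objectives grows, randomly sampled solutions become mutually non-dominated, so Pareto dominance alone can hardly rank them. The sketch below is illustrative and not code from the tutorial:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective, strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_fraction(n_points, n_objectives, rng):
    """Fraction of uniformly random objective vectors that no other
    sampled vector dominates."""
    pts = [[rng.random() for _ in range(n_objectives)]
           for _ in range(n_points)]
    nd = [p for p in pts
          if not any(dominates(q, p) for q in pts if q is not p)]
    return len(nd) / n_points

rng = random.Random(0)
frac_2obj = nondominated_fraction(200, 2, rng)
frac_10obj = nondominated_fraction(200, 10, rng)
# With 10 objectives, most random points are mutually non-dominated,
# so dominance-based selection can barely distinguish solutions.
```

This is exactly why many-objective algorithms augment or replace Pareto dominance with decomposition, indicators, or modified dominance relations.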

Tutorial Presenters (names with affiliations):

Name: Hisao Ishibuchi

Affiliation: Southern University of Science and Technology

Name: Hiroyuki Sato

Affiliation: The University of Electro-Communications

Tutorial Presenters’ Bios:

Hisao Ishibuchi received the B.S. and M.S. degrees from Kyoto University in 1985 and 1987, respectively, and the Ph.D. degree from Osaka Prefecture University in 1992. He was with Osaka Prefecture University from 1987 to 2017. Since April 2017, he has been a Chair Professor at the Southern University of Science and Technology, China. He received a JSPS Prize from the Japan Society for the Promotion of Science in 2007, best paper awards from GECCO 2004, 2017 and 2018 and from FUZZ-IEEE 2009 and 2011, and the IEEE CIS Fuzzy Systems Pioneer Award in 2019. Dr. Ishibuchi was an IEEE CIS Vice President in 2010-2013. Currently he is an AdCom member of the IEEE CIS (2014-2019) and the Editor-in-Chief of the IEEE Computational Intelligence Magazine (2014-2019). He is an IEEE Fellow.

Hiroyuki Sato received B.E. and M.E. degrees from Shinshu University, Japan, in 2003 and 2005, respectively, and his Ph.D. degree from Shinshu University in 2009. He has worked at The University of Electro-Communications since 2009, where he is currently an associate professor. He received best paper awards in the EMO track at GECCO 2011 and 2014, and from the Transactions of the Japanese Society for Evolutionary Computation in 2012 and 2015. His research interests include evolutionary multi- and many-objective optimization and its applications. He is a member of IEEE and ACM/SIGEVO.

External website with more information on Tutorial (if applicable):

Nature-Inspired Techniques for Combinatorial Problems


Combinatorial problems refer to those applications where we either look for a consistent scenario satisfying a set of constraints (a decision problem), or for one or more good/best solutions meeting a set of requirements while optimizing some objectives (an optimization problem). These objectives include users’ preferences, reflecting desires and choices that should be satisfied as much as possible. Moreover, constraints and objectives (in the case of an optimization problem) often come with uncertainty due to lack of knowledge, missing information, or variability caused by events under nature’s control. Finally, in some applications such as timetabling, urban planning and robot motion planning, these constraints and objectives can be temporal, spatial or both; in this latter case, we are dealing with entities occupying given positions in time and space.

Because of the importance of these problems in so many fields, a wide variety of techniques and programming languages from artificial intelligence, computational logic, operations research and discrete mathematics are being developed to tackle problems of this kind. While these tools have provided very promising results at both the representation and the reasoning levels, they are still impractical for many real-world applications, especially given the challenges listed above.

In this tutorial, we will show how to apply nature-inspired techniques in order to overcome these limitations. This requires dealing with different aspects of uncertainty, change, preferences and spatio-temporal information. The approach that we will adopt is based on the Constraint Satisfaction Problem (CSP) paradigm and its variants.
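As a minimal illustration of the CSP paradigm the tutorial builds on, a classic chronological backtracking solver for a toy graph-colouring decision problem might look as follows (an illustrative sketch, not code from the tutorial):

```python
def consistent(var, value, assignment, constraints):
    """Check the tentative value against all binary constraints that
    involve var and an already-assigned variable."""
    for x, y, pred in constraints:
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Chronological backtracking search; returns a complete consistent
    assignment, or None if the instance is unsatisfiable."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None

# Toy decision problem: colour a triangle graph with three colours
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
ne = lambda a, b: a != b
constraints = [("A", "B", ne), ("B", "C", ne), ("A", "C", ne)]
solution = backtrack(variables, domains, constraints)
```

Nature-inspired techniques come into play when such systematic search becomes impractical, e.g. for large instances or when preferences and uncertainty turn the decision problem into an optimization problem.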

Biography of the Speaker

Dr. Malek Mouhoub obtained his MSc and PhD in Computer Science from the University of Nancy in France. He is currently a Professor, and was formerly the Head, of the Department of Computer Science at the University of Regina, Canada. Dr. Mouhoub’s research interests include Constraint Solving, Metaheuristics and Nature-Inspired Techniques, Spatial and Temporal Reasoning, Preference Reasoning, Constraint and Preference Learning, with applications to Scheduling and Planning, E-commerce, Online Auctions, Vehicle Routing and Geographic Information Systems (GIS). Dr. Mouhoub’s research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), and the Mathematics of Information Technology and Complex Systems (MITACS) federal grants, in addition to several other funds and awards.

Dr. Mouhoub is the past treasurer and a member of the executive of the Canadian Artificial Intelligence Association / Association pour l’intelligence artificielle au Canada (CAIAC). CAIAC is the oldest national Artificial Intelligence association in the world and the official arm of the Association for the Advancement of Artificial Intelligence (AAAI) in Canada.


Dr. Mouhoub was the program co-chair for the 30th Canadian Conference on Artificial Intelligence (AI 2017), the 31st International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2018) and the IFIP International Conference on Computational Intelligence and Its Applications (IFIP CIIA 2018).

Evolutionary Computation for Dynamic Multi-objective Optimization Problems


Many real-world optimization problems involve multiple conflicting objectives and are subject to dynamic environments, where changes may occur over time in the optimization objectives, decision variables, and/or constraint conditions. Such dynamic multi-objective optimization problems (DMOPs) are challenging due to their inherent difficulty, yet they are important problems that researchers and practitioners in decision-making across many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DMOPs because of their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DMOPs has attracted substantial research effort over the last two decades, with some promising results; however, this research area is still quite young and far from well understood.

This tutorial provides an introduction to the research area of EC for DMOPs and carries out an in-depth description of the state of the art in the field. The purpose is to (i) provide a detailed description and classification of DMOP benchmark problems and performance measures; (ii) review current EC approaches and explain in detail how they work for DMOPs; (iii) present current applications in the area of EC for DMOPs; (iv) analyse current gaps and challenges in EC for DMOPs; and (v) point out future research directions in EC for DMOPs.
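As one small concrete example of the machinery covered in such approaches, a common way to detect an environment change in dynamic EC is to re-evaluate a few stored "sentinel" solutions each generation. The sketch below is illustrative; the names and the toy objective are not taken from the tutorial:

```python
def detect_change(sentinels, f, cached):
    """Re-evaluate sentinel solutions and compare with their cached
    objective values; any difference signals an environment change.
    (A common change-detection scheme in dynamic EC; illustrative only.)"""
    return any(f(s) != old for s, old in zip(sentinels, cached))

# Toy dynamic objective: minimize (x - c)^2 where the optimum c drifts
c = 0.0
f = lambda x: (x - c) ** 2
sentinels = [0.0, 1.0, 2.0]
cached = [f(s) for s in sentinels]

changed_before = detect_change(sentinels, f, cached)  # environment unchanged
c = 0.5                                               # the optimum moves
changed_after = detect_change(sentinels, f, cached)
```

On detection, typical responses in the literature include injecting diversity, re-initializing part of the population, or reusing memory from previously seen environments.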

Tutorial Presenters (names with affiliations):

Prof. Shengxiang Yang, School of Computer Science and Informatics, De Montfort University, UK

Tutorial Presenters’ Bios:

Shengxiang Yang got his PhD degree in Systems Engineering in 1999 from Northeastern University, China. He is now a Professor of Computational Intelligence (CI) and Director of the Centre for Computational Intelligence at De Montfort University (DMU), UK. He has worked extensively for 20 years on CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 280 publications in these domains, with over 9800 citations and an H-index of 53 according to Google Scholar. His work has been supported by UK research councils, EU FP7 and Horizon 2020, the Chinese Ministry of Education, and industry partners, with total funding of over £2M, of which two EPSRC standard research projects have focused on EC for DMOPs.

He serves as an Associate Editor or Editorial Board Member of several international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, Information Sciences, Enterprise Information Systems, and Soft Computing. He was the founding chair of the Task Force on Intelligent Network Systems (TF-INS, 2012-2017) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs, 2011-2017) of the IEEE CI Society (CIS). He has organised/chaired over 60 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues. He has been invited to give over 10 keynote speeches at international conferences and workshops.


External website with more information on Tutorial (if applicable): None.

Venue: With WCCI 2020 being held as a virtual conference, there will be a virtual experience of Glasgow, Scotland, accessible through the virtual platform. We hope that everyone will have a chance to visit one of Europe’s most dynamic cultural capitals and the “World’s Friendliest City” soon!