Schedule of tutorials - 19th July, 2020

11:30 - 13:30
14:00 - 16:00
16:30 - 18:30
19:00 - 21:00
11:30 - 13:30
Tutorial Title Presenter Conference Email
Adversarial Machine Learning: On The Deeper Secrets of Deep Learning Danilo Vargas IJCNN vargas@inf.kyushu-u.ac.jp
Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Evolution Nikola Kasabov IJCNN nkasabov@aut.ac.nz
Fundamentals of Fuzzy Networks Alexander Gegov, Farzad Arabikhan FUZZ alexander.gegov@port.ac.uk
Instance Space Analysis for Rigorous and Insightful Algorithm Testing Kate Smith-Miles, Mario Andrés Muñoz Acosta WCCI munoz.m@unimelb.edu.au
Advances in Deep Reinforcement Learning Thanh Thi Nguyen, Vijay Janapa Reddi, Ngoc Duy Nguyen IJCNN thanh.nguyen@deakin.edu.au
Selection, Exploration, and Exploitation Stephen Chen, James Montgomery CEC sychen@yorku.ca
Visualising the search process of EC algorithms Su Nguyen, Yi Mei, and Mengjie Zhang CEC P.Nguyen4@latrobe.edu.au
Evolutionary Machine Learning  Masaya Nakata, Shinichi Shirakawa, Will Browne CEC nakata-masaya-tb@ynu.ac.jp
Evolutionary Many-Objective Optimization Hisao Ishibuchi, Hiroyuki Sato CEC h.sato@uec.ac.jp
14:00 - 16:00
Tutorial Title Presenter Conference Email
Deep Learning for Graphs Davide Bacciu IJCNN bacciu@di.unipi.it
Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting P N Suganthan IJCNN EPNSugan@ntu.edu.sg
Multi-modality Helps in Solving Biomedical Problems: Theory and Applications Sriparna Saha, Pratik Dutta WCCI pratik.pcs16@iitp.ac.in
Deep Stochastic Learning and Understanding Jen-Tzung Chien IJCNN jtchien@nctu.edu.tw
Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems José M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena FUZZ josemaria.alonso.moral@usc.es
Pareto Optimization for Subset Selection: Theories and Practical Algorithms Chao Qian, Yang Yu CEC qianc@lamda.nju.edu.cn
Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler Carola Doerr, Thomas Bäck, Ofer Shir, Hao Wang CEC h.wang@liacs.leidenuniv.nl
Differential Evolution Rammohan Mallipeddi, Guohua Wu CEC mallipeddi.ram@gmail.com
Evolutionary computation for games: learning, planning, and designing Julian Togelius, Jialin Liu CEC liujl@sustech.edu.cn
16:30 - 18:30
Tutorial Title Presenter Conference Email
From brains to deep neural networks Saeid Sanei, Clive Cheong Took IJCNN Clive.CheongTook@rhul.ac.uk
Evolution of Neural Networks Risto Miikkulainen IJCNN risto@cs.utexas.edu
Experience Replay for Deep Reinforcement Learning Abdul Rahman Al Tahhan, Vasile Palade IJCNN A.Altahhan@leedsbeckett.ac.uk
Fuzzy Systems for Neuroscience and Neuro-engineering Applications Javier Andreu, CT Lin FUZZ javier.andreu@essex.ac.uk
Dynamic Parameter Choices in Evolutionary Computation Carola Doerr, Gregor Papa  CEC gregor.papa@ijs.si
Evolutionary Computation for Dynamic Multi-objective Optimization Problems Shengxiang Yang CEC syang@dmu.ac.uk
Evolutionary Algorithms and Hyper-Heuristics Nelishia Pillay CEC npillay@cs.up.ac.za
Large-Scale Global Optimization Mohammad Nabi Omidvar, Antonio LaTorre CEC M.N.Omidvar@leeds.ac.uk 
Bilevel optimization Ankur Sinha, Kalyanmoy Deb CEC asinha@iima.ac.in
19:00 - 21:00
Tutorial Title Presenter Conference Email
How to combine human and computational intelligence? Peter Erdi WCCI Peter.Erdi@kzoo.edu
Machine learning for data streams in Python with scikit-multiflow Jacob Montiel, Heitor Gomes, Jesse Read, Albert Bifet IJCNN heitor.gomes@waikato.ac.nz
Deep randomized neural networks Claudio Gallicchio, Simone Scardapane IJCNN gallicch@di.unipi.it
Mechanisms of Universal Turing Machines: Vision, Audition, Natural Language, APFGP and Consciousness Juyang Weng IJCNN juyang.weng@gmail.com
Patch Learning: A New Method of Machine Learning, Implemented by Means of Fuzzy Sets Jerry Mendel FUZZ jmmprof@me.com
Self-Organizing Migrating Algorithm - Recent Advances and Progress in Swarm Intelligence Algorithms Roman Senkerik CEC senkerik@utb.cz
Large-Scale Global Optimization - PART 2 Mohammad Nabi Omidvar, Antonio LaTorre CEC M.N.Omidvar@leeds.ac.uk 
Recent Advances in Particle Swarm Optimization Analysis and Understanding Andries Engelbrecht, Christopher Cleghorn CEC engel@sun.ac.za
Nature-Inspired Techniques for Combinatorial Problems Malek Mouhoub CEC Malek.Mouhoub@uregina.ca
Niching Methods for Multimodal Optimization Xiaodong Li, Mike Preuss, Michael G. Epitropakis CEC xiaodong.li@rmit.edu.au

Selection, Exploration, and Exploitation

Abstract

The goal of exploration is to seek out new areas of the search space. The effect of selection is to concentrate search around the best-known areas of the search space. The power of selection can overwhelm exploration, turning any exploratory method into a hill climber. Balancing exploration and exploitation requires more than considering which solutions are created: it requires an analysis of the interplay between exploration and selection.

This tutorial reviews a broad range of selection methods used in metaheuristics. Novel tools to analyze the effects of selection on exploration in the continuous domain are introduced and demonstrated on Particle Swarm Optimization and Differential Evolution. The difference between convergence (no exploratory search solutions are created) and stall (all exploratory search solutions are rejected) is highlighted. Remedies and alternate methods of selection are presented, and the ramifications for the future design of metaheuristics are discussed.
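The convergence/stall distinction can be made concrete with a toy experiment (an illustrative sketch, not the presenters' material; function and variable names are assumptions): a greedy (1+1) search on the sphere function keeps creating exploratory candidates throughout the run, but selection rejects almost all of them once the search settles, so the search stalls long before it stops exploring.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def greedy_search(f, dim=5, steps=2000, step_size=1.0, seed=0):
    """Greedy (1+1) search: exploratory candidates survive only if better.

    Tracks how many exploratory moves are *created* vs *accepted*, to
    separate convergence (no exploratory solutions created) from stall
    (exploratory solutions created but all rejected by selection).
    """
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    created = accepted = 0
    for _ in range(steps):
        cand = [xi + rng.gauss(0, step_size) for xi in x]  # exploratory move
        created += 1
        fc = f(cand)
        if fc < fx:  # greedy selection: only improvements survive
            x, fx = cand, fc
            accepted += 1
    return fx, created, accepted

best, created, accepted = greedy_search(sphere)
print(f"best f = {best:.4f}, acceptance rate = {accepted/created:.2%}")
```

With a fixed step size the acceptance rate collapses as the search nears the optimum, even though exploratory solutions are still being generated every step.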

Tutorial Presenters (names with affiliations):

Stephen Chen, Associate Professor, School of Information Technology, York University, Toronto, Canada

James Montgomery, Senior Lecturer, School of Technology, Environments and Design, University of Tasmania, Hobart, Australia

Tutorial Presenters’ Bios:

Stephen Chen is an Associate Professor in the School of Information Technology at York University in Toronto, Canada. His research focuses on analyzing the mechanisms for exploration and exploitation in techniques designed for multi-modal optimization problems. He is particularly interested in the development and analysis of non-metaphor-based heuristic search techniques. He has conducted extensive research on genetic algorithms and swarm-based optimization systems, and his 60+ peer-reviewed publications include 20+ that have been presented at previous CEC events.

James Montgomery is a Senior Lecturer in the School of Technology, Environments and Design at the University of Tasmania in Hobart, Australia. His research focuses on search space analysis and the design of solution representations for complex, real-world problems. He has conducted extensive research on ant colony optimization and differential evolution, and his 50+ peer-reviewed publications include 10+ that have been presented at previous CEC events.

External website with more information on Tutorial (if applicable):

https://www.yorku.ca/sychen/research/tutorials/CEC2020_Selection_Exploration_Exploitation_Tutorial.html

Visualising the search process of EC algorithms

Abstract

Evolutionary computation (EC) algorithms have been successfully applied to a wide range of artificial intelligence (AI) problems, ranging from function optimisation and production scheduling to evolutionary deep learning. EC researchers have continuously developed new techniques to enhance the performance of EC algorithms. However, it is still very challenging to fully understand the behaviours of these algorithms due to the complexity of solution representations and search operators. As a result, researchers mainly rely on experimental performance results to suggest which algorithms perform better and to understand how novel features affect final performance. In these studies, the questions usually left unanswered are how the better results are obtained and whether the proposed algorithms behave as conceptually designed. Thus, it is critical to have an analysis tool that can help researchers gain insights into how the algorithms work and capture useful emerging patterns.

This tutorial aims at demonstrating how visualisation can be used to help researchers gain insights about EC algorithms. In this tutorial, we will review the applications of visualisation in EC such as visualising performance and generated solutions and highlight a new visualisation framework to capture high-level evolutionary patterns of EC algorithms. The following main topics will be covered in this 1.5-hour tutorial:

  • Quick recap of EC algorithms and applications
  • A review of visualisation techniques for EC algorithms
  • AI-based visualisation (AIV) framework to reveal evolutionary patterns of EC algorithms
    • Dimensionality reduction
    • Topological data analysis
    • Visual analytics
  • Case studies:
    • Evolving classifiers using genetic programming and particle swarm optimisation
    • Automated design of production scheduling heuristics with genetic programming
    • Evolving artificial neural networks
  • Using Python to implement the AIV framework
  • From AIV framework to people-centric evolutionary systems
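The dimensionality-reduction step in the outline above can be sketched in a few lines (an illustrative example, not the tutorial's AIV code; the function name and the drifting toy population are assumptions): populations from successive generations are projected onto shared principal components, so a high-dimensional search trajectory can be plotted in 2-D.

```python
import numpy as np

def project_populations(populations):
    """Project populations from successive generations onto the first two
    principal components of all sampled points, so the trajectory of the
    search can be visualised in a common 2-D space.

    `populations` is a list of (pop_size, dim) arrays, one per generation.
    Returns a list of (pop_size, 2) arrays in shared PCA coordinates.
    """
    all_points = np.vstack(populations)
    mean = all_points.mean(axis=0)
    # PCA via SVD: the right singular vectors are the principal axes.
    _, _, vt = np.linalg.svd(all_points - mean, full_matrices=False)
    axes = vt[:2].T  # (dim, 2) projection matrix
    return [(pop - mean) @ axes for pop in populations]

# Toy example: a "population" drifting through a 10-D space over 5 generations.
rng = np.random.default_rng(0)
pops = [rng.normal(loc=5.0 - g, scale=0.5, size=(20, 10)) for g in range(5)]
proj = project_populations(pops)
print(proj[0].shape)  # (20, 2)
```

Fitting the projection on all generations at once keeps the coordinates comparable across generations, which is what makes the evolutionary trajectory readable.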

Tutorial Presenters (names with affiliations):

Su Nguyen (La Trobe University), Yi Mei and Mengjie Zhang (Victoria University of Wellington)

Tutorial Presenters’ Bios:

Su Nguyen is a Senior Research Fellow and Algorithm Lead at the Centre for Data Analytics and Cognition (CDAC), La Trobe University, Australia. He received his Ph.D. degree in Artificial Intelligence and Data Analytics from Victoria University of Wellington (VUW), Wellington, New Zealand, in 2013. His expertise includes computational intelligence, optimization, data analytics, large-scale simulation, and their applications in energy, operations management, and social networks. His current research focuses on novel people-centric artificial intelligence to enhance explainability and human-AI interaction by combining the power of evolutionary computation techniques and advanced machine learning algorithms such as deep learning and incremental learning. His work has been published in top peer-reviewed journals in evolutionary computation and operations research such as IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal, and Computers and Operations Research. He serves as a member of the IEEE CIS Technical Committee on Data Mining. He is the guest editor of a special issue on “Automated Design and Adaption of Heuristics for Scheduling and Combinatorial Optimization” in the Genetic Programming and Evolvable Machines journal. He also serves as a reviewer for high-quality journals and top conferences in evolutionary computation, operations research, data mining, and artificial intelligence.

Yi Mei is a Senior Lecturer at the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He received his BSc and PhD degrees from the University of Science and Technology of China in 2005 and 2010, respectively. His research interests include evolutionary computation in scheduling, routing and combinatorial optimisation, as well as evolutionary machine learning, genetic programming, feature selection and dimensionality reduction. He has more than 70 fully refereed publications, including in the top journals in EC and Operations Research (OR) such as IEEE TEVC, IEEE Transactions on Cybernetics, the European Journal of Operational Research, and ACM Transactions on Mathematical Software. He is an Editorial Board Member of the International Journal of Bio-Inspired Computation and an Associate Editor of the International Journal of Applied Evolutionary Computation. He currently serves as a member of two IEEE CIS Technical Committees and a member of three IEEE CIS Task Forces. He is a guest editor of a special issue of the Genetic Programming and Evolvable Machines journal. He serves as a reviewer for over 25 international journals, including the top journals in EC and OR.

Mengjie Zhang is currently a Professor of computer science, the Head of the Evolutionary Computation Research Group, and the Associate Dean (Research and Innovation) with the Faculty of Engineering, Victoria University of Wellington, Wellington, New Zealand. He has published over 500 research papers in refereed international journals and conferences. His current research interests include evolutionary computation with application areas of image analysis, multiobjective optimization, feature selection and reduction, job shop scheduling, and transfer learning.

Dr. Zhang is a Fellow of the Royal Society of New Zealand and an IEEE Fellow. He was the Chair of the IEEE CIS Intelligent Systems and Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the Evolutionary Computation Technical Committee, and a member of the IEEE CIS Award Committee. He is the Vice-Chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction and the Task Force on Evolutionary Computer Vision and Image Processing, and the Founding Chair of the IEEE Computational Intelligence Chapter in New Zealand. He is also a Committee Member of the IEEE NZ Central Section. He has been a Panel Member of the Marsden Fund (New Zealand Government Funding).

External website with more information on Tutorial (if applicable):

https://nguyensu.github.io/visevo/


Evolutionary Machine Learning

Abstract

A fusion of Evolutionary Computation and Machine Learning, namely Evolutionary Machine Learning (EML), has been recognized as a rapidly growing research area as these powerful search and learning mechanisms are combined. Many specific branches of EML with different learning schemes and different ML problem domains have been proposed. These branches seek to address common challenges –

  • How evolutionary search can discover optimal ML configurations and parameter settings,
  • How the deterministic models of ML can influence evolutionary mechanisms,
  • How EC and ML can be integrated into one learning model.

Consequently, various insights address principal issues of the EML paradigm that are worthwhile to “transfer” to these different specific challenges.

The goal of our tutorial is to present advanced techniques from specific EML branches and to share them as common insights into the EML paradigm. First, we introduce the common challenges in the EML paradigm and discuss how various EML branches address them. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e. Learning Classifier Systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is organized not only for beginners but also for experts in the EML field. For beginners, it will be a gentle introduction to EML, from basics to recent challenges. For experts, our two specific talks present the most recent advances in evolutionary rule-based learning and in evolutionary neural networks. Additionally, we will discuss how the insights behind these techniques can be reused in other EML branches, shaping new directions for EML techniques.
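The first common challenge listed above, using evolutionary search to discover ML configurations, can be sketched in a few lines (an illustrative example, not material from the tutorial; `evolve_config` and the `toy_error` stand-in for validation error are assumptions): a (mu + lambda) loop mutates real-valued configurations within bounds and keeps the best under truncation selection.

```python
import random

def evolve_config(fitness, bounds, pop_size=10, generations=30, seed=1):
    """(mu + lambda)-style evolutionary search over an ML configuration,
    encoded as a real vector with per-dimension bounds (minimisation)."""
    rng = random.Random(seed)

    def sample():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def mutate(cfg):
        # Gaussian perturbation scaled to each parameter's range, clamped.
        return [min(hi, max(lo, c + rng.gauss(0, 0.1 * (hi - lo))))
                for c, (lo, hi) in zip(cfg, bounds)]

    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(rng.choice(pop)) for _ in range(pop_size)]
        pop = sorted(pop + children, key=fitness)[:pop_size]  # truncation
    return pop[0]

# Toy stand-in for the validation error of a model as a function of two
# hyperparameters (e.g. learning rate and regularisation strength).
toy_error = lambda cfg: (cfg[0] - 0.3) ** 2 + (cfg[1] - 0.01) ** 2
best = evolve_config(toy_error, bounds=[(0.0, 1.0), (0.0, 0.1)])
print(best)
```

In a real EML setting the fitness would be a cross-validated model score rather than an analytic function, but the evolutionary loop is unchanged.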

Tutorial Presenters (names with affiliations):

  • Masaya Nakata, Associate Professor, Yokohama National University, Japan
  • Shinichi Shirakawa, Lecturer, Yokohama National University, Japan
  • Will Browne, Associate Professor, Victoria University of Wellington, NZ

Tutorial Presenters’ Bios:

Dr. Nakata is an associate professor at the Faculty of Engineering, Yokohama National University, Japan. He received his Ph.D. degree in informatics from the University of Electro-Communications, Japan, in 2016. He has been working on evolutionary rule-based machine learning, reinforcement learning, and data mining, more specifically on Learning Classifier Systems (LCS). He was a visiting researcher at Politecnico di Milano, the University of Bristol, and Victoria University of Wellington. His contributions have been published in more than 10 journal papers and more than 20 conference papers, including at CEC, GECCO, and PPSN. He is an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning (2015-2016, 2018-2019) at the GECCO conference, elected by the international LCS research community. He received the IEEE CIS Japan Chapter Young Research Award.

Dr. Shirakawa is a lecturer at the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009. He has worked at Fujitsu Laboratories Ltd., Aoyama Gakuin University, and the University of Tsukuba. His research interests include evolutionary computation, machine learning, and computer vision. He is currently working on evolutionary deep neural networks. His contributions have been published in high-quality journals and conferences in EC and AI, e.g., CEC, GECCO, PPSN, and AAAI. He received the IEEE CIS Japan Chapter Young Research Award in 2009 and won the best paper award in the evolutionary machine learning track at GECCO 2017.

Associate Prof Will Browne’s research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and is currently the co-chair for the Evolutionary Machine Learning track at Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs ‘Introduction to Learning Classifier Systems, Springer 2017’. Currently, he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Evolutionary Many-Objective Optimization

Abstract

The goal of this tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation in the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there were no significant difference between three-objective and four-objective problems. However, the increase in the number of objectives makes a multi-objective problem significantly more difficult.

In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms.

After these explanations, in the second part (Part II: Approaches and Future Directions) we explain how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from losing its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We categorize approaches to tackling many-objective optimization problems and explain some state-of-the-art many-objective algorithms in each category. The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, they will also learn that there still exists a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
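The weakening selection pressure of Pareto dominance can be demonstrated numerically (an illustrative sketch, not the presenters' material): among uniformly random objective vectors, the fraction that is non-dominated rises quickly towards one as the number of objectives grows, leaving dominance-based selection with little to discriminate.

```python
import random

def nondominated_fraction(n_points=200, n_objectives=2, seed=0):
    """Fraction of uniformly random points (minimisation) that no other
    point Pareto-dominates.  As the number of objectives grows, almost
    every point becomes non-dominated."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(n_objectives)] for _ in range(n_points)]

    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    nd = sum(1 for p in pts if not any(dominates(q, p) for q in pts))
    return nd / n_points

for m in (2, 5, 10):
    print(m, nondominated_fraction(n_objectives=m))
```

For two objectives only a few percent of the points are non-dominated; for ten objectives nearly all of them are, which is exactly the difficulty Part I discusses.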

Tutorial Presenters (names with affiliations):

Name: Hisao Ishibuchi

Affiliation: Southern University of Science and Technology

Name: Hiroyuki Sato

Affiliation: The University of Electro-Communications

Tutorial Presenters’ Bios:

Hisao Ishibuchi received the B.S. and M.S. degrees from Kyoto University in 1985 and 1987, respectively, and the Ph.D. degree from Osaka Prefecture University in 1992. He was with Osaka Prefecture University from 1987 to 2017. Since April 2017, he has been a Chair Professor at Southern University of Science and Technology, China. He received a JSPS Prize from the Japan Society for the Promotion of Science in 2007, best paper awards from GECCO 2004, 2017 and 2018 and from FUZZ-IEEE 2009 and 2011, and the IEEE CIS Fuzzy Systems Pioneer Award in 2019. Dr. Ishibuchi was an IEEE CIS Vice President in 2010-2013. Currently he is an AdCom member of the IEEE CIS (2014-2019) and the Editor-in-Chief of the IEEE Computational Intelligence Magazine (2014-2019). He is an IEEE Fellow.

Hiroyuki Sato received the B.E. and M.E. degrees from Shinshu University, Japan, in 2003 and 2005, respectively, and the Ph.D. degree from Shinshu University in 2009. He has worked at The University of Electro-Communications since 2009, where he is currently an associate professor. He received best paper awards in the EMO track at GECCO 2011 and 2014, and from the Transactions of the Japanese Society for Evolutionary Computation in 2012 and 2015. His research interests include evolutionary multi- and many-objective optimization and its applications. He is a member of IEEE and ACM/SIGEVO.

External website with more information on Tutorial (if applicable):

https://bit.ly/329qlRK

Pareto Optimization for Subset Selection: Theories and Practical Algorithms

Abstract

Pareto optimization is a general optimization framework for solving single-objective optimization problems based on multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, then employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective optimization problem from the produced non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, information retrieval, etc. The theoretical understanding of Pareto optimization has recently been significantly developed, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection. We assume that the audience has basic knowledge of probability theory.
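The transformation described above can be sketched in a minimal form (an illustration under simplifying assumptions, not the presenters' implementation; `poss` and the toy value function are made up for the example): a subset is a bit string, the constrained problem "maximise the objective with at most k items" is recast as maximising value while minimising size, and an archive of non-dominated subsets is evolved by bit-flip mutation.

```python
import random

def poss(objective, n, k, budget=3000, seed=0):
    """Minimal sketch of Pareto optimization for subset selection.

    Evolves an archive of mutually non-dominated subsets of n items under
    the bi-objective view (maximise objective, minimise subset size), then
    returns the best subset of size at most k found."""
    rng = random.Random(seed)

    def weakly_dominates(a, b):  # at least as good on both objectives
        return objective(a) >= objective(b) and sum(a) <= sum(b)

    archive = [[0] * n]  # start from the empty subset
    for _ in range(budget):
        parent = rng.choice(archive)
        # Uniform bit-flip mutation: each bit flips with probability 1/n.
        child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
        if not any(weakly_dominates(a, child) for a in archive):
            archive = [a for a in archive if not weakly_dominates(child, a)]
            archive.append(child)
    feasible = [s for s in archive if sum(s) <= k]
    return max(feasible, key=objective)

# Toy example: pick at most k = 3 of 5 items maximising a modular value sum.
values = [5, 1, 3, 2, 4]
f = lambda s: sum(v for v, bit in zip(values, s) if bit)
print(poss(f, n=5, k=3))  # the optimum is [1, 0, 1, 0, 1] (values 5 + 3 + 4)
```

The archive plays the role of the non-dominated solution set mentioned in the abstract: it simultaneously maintains good subsets of every size, which is what lets Pareto optimization escape the local optima that greedy subset selection can get stuck in.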

Tutorial Presenters (names with affiliations):

Chao Qian, Nanjing University, China http://www.lamda.nju.edu.cn/qianc

Yang Yu, Nanjing University, China http://www.lamda.nju.edu.cn/yuy

Tutorial Presenters’ Bios:

Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received the BSc and PhD degrees in the Department of Computer Science and Technology from Nanjing University in 2009 and 2015, respectively. From 2015 to 2019, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China. His research interests are mainly the theoretical analysis of evolutionary algorithms and their application in machine learning. He has published one book, “Evolutionary Learning: Advances in Theories and Algorithms”, and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica) and conferences (e.g., NIPS, IJCAI, AAAI). He has won the ACM GECCO 2011 Best Theory Paper Award and the IDEAL 2016 Best Paper Award. He is chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

Yang Yu is a Professor in the School of Artificial Intelligence, Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research area is machine learning and reinforcement learning. He was named one of AI's 10 to Watch by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI'18, and received the Early Career Award of PAKDD in 2018.

Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler

Abstract

IOHprofiler is a new benchmarking environment that has been developed for a highly versatile analysis of iterative optimization heuristics (IOHs) such as evolutionary algorithms, local search algorithms, model-based heuristics, etc. A key design principle of IOHprofiler is its highly modular setup, which makes it easy for its users to add algorithms, problems, and performance criteria of their choice. IOHprofiler is also useful for the in-depth analysis of the evolution of adaptive parameters, which can be plotted against fixed-targets or fixed-budgets. The analysis of robustness is also supported.

IOHprofiler supports all types of optimization problems and is not restricted to a particular search domain. A web-based interface for its analysis procedure is available at http://iohprofiler.liacs.nl; the tool itself is available on GitHub (https://github.com/IOHprofiler/IOHanalyzer) and as a CRAN package (https://cran.rstudio.com/web/packages/IOHanalyzer/index.html).

The tutorial addresses all CEC participants interested in analyzing and comparing heuristic solvers. By the end of the tutorial, the participants will know how to benchmark different solvers with IOHprofiler, which performance statistics it supports, and how to contribute to its design.
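Fixed-target analysis of the kind IOHprofiler supports is often summarised by the expected running time (ERT). The sketch below computes ERT from raw run data in plain Python; it is a generic illustration of the statistic, not IOHprofiler's API, and the data format is an assumption for the example.

```python
def expected_running_time(runs, target):
    """Fixed-target ERT: total evaluations spent across all runs divided
    by the number of runs that reached the target.

    `runs` is a list of (evaluations_used, best_value_reached) pairs for a
    minimisation problem; unsuccessful runs contribute their full budget
    to the numerator but not to the success count."""
    successes = sum(1 for _, best in runs if best <= target)
    if successes == 0:
        return float("inf")
    total_evals = sum(evals for evals, _ in runs)
    return total_evals / successes

runs = [(500, 1e-9), (800, 1e-9), (1000, 0.3)]  # third run missed the target
print(expected_running_time(runs, target=1e-8))  # (500 + 800 + 1000) / 2 = 1150.0
```

Because unsuccessful runs still count in the numerator, ERT penalises unreliable solvers, which is why it is the standard fixed-target statistic for comparing heuristics.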

Tutorial Presenters (names with affiliations): 

Thomas Bäck, Leiden University, The Netherlands,

Carola Doerr, CNRS and Sorbonne University, France,

Ofer M. Shir, Tel-Hai College and Migal Institute, Israel,

Hao Wang, Sorbonne University, France.

Tutorial Presenters’ Bios:

  • Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer’s Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
  • Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Her main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is/was program chair of PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017. Carola serves on the editorial boards of ACM Transactions on Evolutionary Learning and Optimization and of Evolutionary Computation and was editor of two special issues in Algorithmica. Carola is vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Ofer M. Shir is the Head of the Computer Science Department of Tel-Hai College, and a Principal Investigator at the Migal-Galilee Research Institute, both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
  • Hao Wang obtained his PhD (cum laude, promotor: Prof. Thomas Bäck) from Leiden University in 2018. He is currently a postdoc at Sorbonne University (supervised by Carola Doerr) and has accepted a position as an Assistant Professor at the Leiden Institute of Advanced Computer Science from Sep. 2020. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are proposing, improving and analyzing stochastic optimization algorithms, especially Evolution Strategies and Bayesian Optimization. In addition, he works on developing statistical machine learning algorithms for big and complex industrial data. He also aims at combining state-of-the-art optimization algorithms with data mining and machine learning techniques to make real-world optimization tasks more efficient and robust.


External website with more information on Tutorial (if applicable): None

Differential Evolution with Ensembles, Adaptations and Topologies

Abstract

Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps as employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members, so no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in a large number of variants of the basic algorithm with improved performance.

This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE and adaptive DE variants that improve the performance of DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose challenges to the state-of-the-art DE algorithms and demand a strong research effort from the DE community in the future.
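For readers new to DE, the classic DE/rand/1/bin scheme described above can be written in a few lines of plain Python (an illustrative sketch, not the presenters' code; the function names are assumptions): each target vector is perturbed by the scaled difference of two distinct population members added to a third, binomially crossed with the target, and kept only if it is no worse.

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, generations=100, seed=0):
    """Classic DE/rand/1/bin with greedy one-to-one survivor selection
    for bound-constrained minimisation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: base vector plus scaled difference of two others.
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            jrand = rng.randrange(dim)  # guarantee one mutant component survives
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, t)) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(xi * xi for xi in x)
x, fx = de_rand_1_bin(sphere, bounds=[(-5, 5)] * 5)
print(fx)
```

The ensemble, adaptation and topology ideas covered in the tutorial all build on this core loop, for example by adapting F and CR online or by restricting r1, r2, r3 to a neighborhood of the target vector.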

Duration: 1.5 hours

Intended Audience:  This presentation will include basics as well as advanced topics of DE. Hence, researchers commencing their research in DE as well as experienced researchers can attend. Practitioners will also benefit from the presentation.

Expected Enrollment: In the past, 40-50 attendees registered for the DE tutorials at CEC. We expect similar interest this year.

Name: Dr. Rammohan Mallipeddi and Dr. Guohua Wu

 

Affiliation: Kyungpook National University, South Korea; Central South University, China.

 

Email: mallipeddi.ram@gmail.com, guohuawu@csu.edu.cn

 

Website: http://ecis.knu.ac.kr/, http://faculty.csu.edu.cn/guohuawu/en/index.htm

 

 

Goal: Differential evolution (DE) is one of the most successful numerical optimization paradigms, so practitioners and junior researchers would be interested in learning this optimization algorithm. The field is also growing rapidly; hence, a tutorial on DE will be timely and beneficial to many of the CEC 2020 conference attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems with DE.

Format: The tutorial will be primarily slide-based, with frequent interaction with the audience.

Pertinent Qualification: The speakers have co-authored several original articles on DE. In addition, they have published a survey paper on ensemble strategies in population-based algorithms, including DE. The speakers have also been organizing numerical optimization competitions at the CEC conferences (EA Benchmarks / CEC Competitions), in which DE has been one of the top performers. As the organizers, the speakers will also be able to share their experiences.

Key Papers:

  • G. Wu, R. Mallipeddi and P. N. Suganthan, "Ensemble Strategies for Population-based Optimization Algorithms – A Survey," Swarm and Evolutionary Computation, Vol. 44, pp. 695-711, 2019.
  • S. Das, S. S. Mullick and P. N. Suganthan, "Recent Advances in Differential Evolution – An Updated Survey," Swarm and Evolutionary Computation, Vol. 27, pp. 1-30, 2016.
  • S. Das and P. N. Suganthan, "Differential Evolution: A Survey of the State-of-the-Art," IEEE Trans. on Evolutionary Computation, 15(1):4-31, Feb. 2011.

General Bio-sketch:

Name: Dr. Rammohan Mallipeddi

Affiliation: Kyungpook National University, South Korea.

Rammohan Mallipeddi is an Associate Professor in the School of Electronics Engineering, Kyungpook National University (Daegu, South Korea). He received the Master's and PhD degrees in computer control and automation from the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 2007 and 2010, respectively. His research interests include evolutionary computing, artificial intelligence, image processing, digital signal processing, robotics, and control engineering. He has co-authored papers published in IEEE TEVC, etc. Currently, he serves as an Associate Editor for Swarm and Evolutionary Computation, an international journal from Elsevier, and as a regular reviewer for journals including IEEE TEVC and IEEE TCYB.

Name: Dr. Guohua Wu

Affiliation: Central South University, China.

Guohua Wu received the B.S. degree in Information Systems and the Ph.D. degree in Operations Research from the National University of Defense Technology, China, in 2008 and 2014, respectively. Between 2012 and 2014, he was a visiting Ph.D. student at the University of Alberta, Edmonton, Canada. He is now a Professor at the School of Traffic and Transportation Engineering, Central South University, Changsha, China. His current research interests include planning and scheduling, evolutionary computation and machine learning. He has authored more than 50 refereed papers, including those published in IEEE TCYB, IEEE TSMCA, INS and COR. He serves as an Associate Editor of Swarm and Evolutionary Computation, an editorial board member of the International Journal of Bio-Inspired Computation, and a Guest Editor of Information Sciences and Memetic Computing. He is a regular reviewer for more than 20 journals, including IEEE TEVC, IEEE TCYB and IEEE TFS.

Evolutionary computation for games: learning, planning, and designing

Abstract

This tutorial introduces several techniques and application areas for evolutionary computation in games, such as board games and video games. We will give a broad overview of the use cases and popular methods for evolutionary computation in games, and in particular cover the use of evolutionary computation for learning policies (evolutionary reinforcement learning using neuroevolution), planning (rolling horizon and online planning), and designing (search-based procedural content generation). The basic principles will be explained and illustrated by examples from our own research as well as others’ research.
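Rolling horizon evolution, mentioned above, can be sketched in a few lines: evolve a fixed-length action sequence against a forward model, execute only its first action, and re-plan at the next step. The interface below (a `forward_model(state, action) -> (next_state, reward)` function) is a hypothetical toy, not any specific framework's API, and the hill-climbing inner loop stands in for a full evolutionary optimizer.

```python
import random

def rolling_horizon_evolution(state, forward_model, actions, horizon=8,
                              evals=200, seed=0):
    """Plan one decision step: evolve an action sequence against a
    forward model, then return only its first action (re-plan each step)."""
    rng = random.Random(seed)

    def rollout_value(seq):
        # simulate the sequence from the current state, sum the rewards
        s, total = state, 0.0
        for a in seq:
            s, r = forward_model(s, a)
            total += r
        return total

    best = [rng.choice(actions) for _ in range(horizon)]
    best_val = rollout_value(best)
    for _ in range(evals):
        cand = best[:]
        cand[rng.randrange(horizon)] = rng.choice(actions)  # point mutation
        v = rollout_value(cand)
        if v >= best_val:
            best, best_val = cand, v
    return best[0]
```

In an actual game loop this function would be called once per frame, with the real game's forward model and action set plugged in.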

Tentative outline:

  • Introduction: who are we, what are we talking about?
  • Why do research on evolutionary computation and games?
  • Part 1: Playing games
    • Reasons for building game-playing AI
    • Characteristics of games (and how they affect game-playing algorithms)
    • Reinforcement learning through evolution
    • Neuroevolution
    • Planning with evolution
    • Single-agent games (rolling horizon evolution)
    • Multi-agent games (online evolution)
  • Part 2: Designing and developing games
    • The need for AI in game design and development
    • Procedural content generation
    • Search-based procedural content generation
    • Procedural content generation via machine learning (PCGML)
    • Game balancing
    • Game testing
    • Game adaptation

Tutorial Presenters:

Julian Togelius

Associate Professor

Department of Computer Science and Engineering

Tandon School of Engineering

New York University

2 MetroTech Center, Brooklyn, NY 11201, USA

Co-director of the NYU Game Innovation Lab

Editor-in-Chief, IEEE Transactions on Games

julian@togelius.com

Jialin Liu

Research Assistant Professor

Optimization and Learning Laboratory (OPAL Lab)

Department of Computer Science and Engineering (CSE)

Southern University of Science and Technology (SUSTech)

Shenzhen, China

Associate Editor, IEEE Transactions on Games

liujl@sustech.edu.cn

Tutorial Presenters’ Bios:

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on artificial intelligence for games and games for artificial intelligence. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, generating games based on open data, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Games, and has been chair or program chair of several of the main conferences on AI and Games. Julian holds a BA from Lund University, an MSc from the University of Sussex, and a Ph.D. from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

Jialin Liu is currently a Research Assistant Professor in the Department of Computer Science and Engineering of Southern University of Science and Technology (SUSTech), China. Before joining SUSTech, she was a Postdoctoral Research Associate at Queen Mary University of London (QMUL, UK) and one of the founding members of the Game AI research group at QMUL. Her research interests include AI and games, noisy optimisation, portfolios of algorithms and meta-heuristics. Jialin is an Associate Editor of IEEE Transactions on Games. She has served as Program Co-Chair of the 2018 IEEE Conference on Computational Intelligence and Games (IEEE CIG 2018) and as Competition Chair of several main conferences on Evolutionary Computation and on AI and Games. She is also chairing the IEEE CIS Games Technical Committee.

Dynamic Parameter Choices in Evolutionary Computation

Abstract

One of the most challenging problems in solving optimization problems with evolutionary algorithms and other optimization heuristics is the selection of the control parameters that determine their behavior. In state-of-the-art heuristics, several control parameters need to be set, and their setting typically has an important impact on the performance of the algorithm. For example, in evolutionary algorithms, we typically need to choose the population size, the mutation strength, the crossover rate, the selective pressure, etc.
Two principal approaches to the parameter selection problem exist:
(1) parameter tuning, which asks to find parameters that are most suitable for the problem instances at hand, and
(2) parameter control, which aims to identify good parameter settings “on the fly”, i.e., during the optimization itself.
Parameter control has the advantage that no prior training is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
While parameter control is indispensable in continuous optimization, it is far from being well-established in discrete optimization heuristics. The ambition of this tutorial is therefore to change this situation, by informing participants about different parameter control techniques, and by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
Our tutorial addresses experimentally as well as theory-oriented researchers alike, and requires only basic knowledge of optimization heuristics.
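As a simple illustration of parameter control in a discrete setting, the following sketch runs a (1+1) EA on OneMax with a success-based, one-fifth-rule-style adaptation of the mutation rate. The update factors are illustrative choices, not the tuned values analyzed in the literature.

```python
import random

def one_plus_one_ea_fifth_rule(n=100, budget=20000, seed=1):
    """(1+1) EA on OneMax with an online-controlled mutation rate:
    increase the rate after a success, decrease it otherwise."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    p = 1.0 / n                    # mutation rate, adapted during the run
    # A * b**4 == 1, so p is stationary at a one-in-five success rate
    A, b = 1.2, 1.2 ** -0.25
    for _ in range(budget):
        if fx == n:                # optimum found
            break
        y = [bit ^ (rng.random() < p) for bit in x]
        fy = sum(y)
        if fy > fx:                # strict improvement counts as a success
            x, fx, p = y, fy, min(0.5, p * A)
        else:
            p = max(1.0 / n, p * b)
    return fx, p
```

Note how no prior tuning phase is needed: the rate rises while large jumps still pay off and falls back toward 1/n as the search converges.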

Tutorial Presenters (names with affiliations):

– Carola Doerr, Sorbonne University, Paris, France

– Gregor Papa, Jožef Stefan Institute, Ljubljana, Slovenia

Tutorial Presenters’ Bios:

  • Carola Doerr (Doerr@lip6.fr, http://www-ia.lip6.fr/~doerr/) is a permanent CNRS researcher at Sorbonne University in Paris, France. She studied Mathematics at Kiel University (Germany, 2003-2007, Diplom) and Computer Science at the Max Planck Institute for Informatics and Saarland University (Germany, 2010-2011, PhD). Before joining the CNRS she was a post-doc at Paris Diderot University (Paris 7) and the Max Planck Institute for Informatics. From 2007 to 2009, Carola Doerr worked as a business consultant for McKinsey & Company, where her interest in evolutionary algorithms originates from.
    Carola Doerr’s main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community.
    Carola Doerr has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is chairing the program committee of FOGA 2019 and previously chaired the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues in Algorithmica. She is also vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Gregor Papa (papa@ijs.si, http://cs.ijs.si/papa/) is a Senior Researcher and Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received the PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.
    Gregor Papa's research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on the dynamic setting of algorithms' control parameters. His work is published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004. He has led and participated in several national and European projects.
    Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.

External website with more information on Tutorial (if applicable):

TBA

Evolutionary Computation for Dynamic Multi-objective Optimization Problems

Abstract

Many real-world optimization problems involve multiple conflicting objectives to be optimized and are subject to dynamic environments, where changes may occur over time regarding optimization objectives, decision variables, and/or constraint conditions. Such dynamic multi-objective optimization problems (DMOPs) are challenging due to their inherent difficulty. Yet, they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools to address DMOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DMOPs has attracted a lot of research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well-understood. This tutorial provides an introduction to the research area of EC for DMOPs and carries out an in-depth description of the state of the art of research in the field. The purpose is to (i) provide a detailed description and classification of DMOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations of how they work for DMOPs; (iii) present current applications in the area of EC for DMOPs; (iv) analyse current gaps and challenges in EC for DMOPs; and (v) point out future research directions in EC for DMOPs.

Tutorial Presenters (names with affiliations):

Prof. Shengxiang Yang, School of Computer Science and Informatics, De Montfort University, UK

Tutorial Presenters’ Bios:

Shengxiang Yang (http://www.tech.dmu.ac.uk/~syang/) got his PhD degree in Systems Engineering in 1999 from Northeastern University, China. He is now a Professor of Computational Intelligence (CI) and Director of the Centre for Computational Intelligence (http://www.cci.dmu.ac.uk/), De Montfort University (DMU), UK. He has worked extensively for 20 years in the areas of CI methods, including EC and artificial neural networks, and their applications for real-world problems. He has over 280 publications in these domains, with over 9800 citations and H-index of 53 according to Google Scholar. His work has been supported by UK research councils, EU FP7 and Horizon 2020, Chinese Ministry of Education, and industry partners, with a total funding of over £2M, of which two EPSRC standard research projects have been focused on EC for DMOPs.

He serves as an Associate Editor or Editorial Board Member of several international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, Information Sciences, Enterprise Information Systems, and Soft Computing, etc. He was the founding chair of the Task Force on Intelligent Network Systems (TF-INS, 2012-2017) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs, 2011-2017) of the IEEE CI Society (CIS). He has organised/chaired over 60 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues. He has been invited to give over 10 keynote speeches at international conferences and workshops.

 

External website with more information on Tutorial (if applicable): None.

Evolutionary Algorithms and Hyper-Heuristics

Abstract

Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions to problems and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided, and challenges in their implementation will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
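To illustrate the selection perturbative category mentioned above, here is a minimal toy sketch (our own construction, not the EvoHyp library): low-level perturbation heuristics are chosen by accumulated score, and the heuristic that produced an improvement is rewarded.

```python
import random

def flip_one(x, rng):
    """Low-level heuristic: flip one random bit."""
    y = x[:]
    y[rng.randrange(len(y))] ^= 1
    return y

def flip_two(x, rng):
    """Low-level heuristic: flip two random bits."""
    return flip_one(flip_one(x, rng), rng)

def selection_hyper_heuristic(f, init, low_level, iters=2000, seed=0):
    """Choose among low-level heuristics by accumulated score
    (roulette wheel); accept non-worsening moves and reward the
    heuristic that produced a strict improvement."""
    rng = random.Random(seed)
    x, fx = init[:], f(init)
    scores = [1.0] * len(low_level)
    for _ in range(iters):
        h = rng.choices(range(len(low_level)), weights=scores)[0]
        y = low_level[h](x, rng)
        fy = f(y)
        if fy >= fx:
            if fy > fx:
                scores[h] += 1.0   # credit assignment for the chosen heuristic
            x, fx = y, fy
    return x, fx, scores
```

On a toy maximization problem such as OneMax, `selection_hyper_heuristic(sum, [0] * 50, [flip_one, flip_two])` drives the bit string toward all ones while learning which operator pays off; real selection hyper-heuristics use far richer choice and acceptance strategies.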

Tutorial Presenters’ Bios:

Nelishia Pillay is a professor and head of the Department of Computer Science, University of Pretoria. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She holds the Multichoice Joint Chair in Machine Learning. She is chair of the IEEE Task Force on Hyper-Heuristics, chair of the IEEE Task Force on Automated Algorithm Design, Configuration and Selection, vice-chair of the IEEE CIS Technical Committee on Intelligent Systems and Applications, and a member of the IEEE Technical Committee on Evolutionary Computation. She has served on program committees for numerous national and international conferences, is a reviewer for various international journals, and is an associate editor for IEEE Computational Intelligence Magazine and the Journal of Scheduling. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to combinatorial optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group which she has established.

External website with more information on Tutorial (if applicable):

https://sites.google.com/site/easandhyperheuristics/home

Evolutionary Large-Scale Global Optimization

Abstract

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [10], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 11]. Recent advances in machine learning have also witnessed very large-scale optimization problems encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [3, 7]. It is this "curse of dimensionality" that has made large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems. Currently, there are two different approaches to tackle this complex search. The first is to apply decomposition methods, which divide the variables into groups and allow researchers to optimize each group separately, reducing the curse of dimensionality; their main drawback is that choosing a proper decomposition can be very difficult and computationally expensive. The other approach is to design algorithms specifically for large-scale global optimization, whose features are well-suited to that type of search. The tutorial is divided into two parts, each dedicated to exploring the advances in one of the approaches stated above, presented by experts in the respective field.

Part I: Introduction and Decomposition Methods

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we provide an overview of the recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods for dealing with large-scale problems.

This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of state-of-the-art decomposition algorithms, including the differential grouping algorithm [8] and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks [5]. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms [9]. Overall, this tutorial takes the form of a critical survey of existing methods, with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
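To make the cooperative co-evolution idea concrete, here is a minimal sketch with a fixed, hand-chosen variable grouping. The differential grouping methods discussed in the tutorial learn such groupings automatically; this toy omits that step, and the inner greedy Gaussian search stands in for a full subcomponent optimizer.

```python
import numpy as np

def cooperative_coevolution(f, dim, groups, cycles=50, steps=40, seed=0):
    """Minimal cooperative co-evolution with a static grouping:
    each variable group is optimized in turn by a simple greedy Gaussian
    search, always evaluated inside the full 'context' solution."""
    rng = np.random.default_rng(seed)
    context = rng.uniform(-5, 5, dim)        # best-known complete solution
    f_best = f(context)
    for _ in range(cycles):
        for g in groups:                     # round-robin over the groups
            for _ in range(steps):
                trial = context.copy()
                # perturb only this group's variables
                trial[g] += rng.normal(0.0, 0.3, size=len(g))
                f_trial = f(trial)
                if f_trial < f_best:         # greedy replacement
                    context, f_best = trial, f_trial
    return context, f_best
```

On a separable function the decomposition is exact and each subproblem is solved almost independently; on non-separable functions a poor grouping is exactly the failure mode that differential grouping and contribution-based resource allocation address.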

Part II: Algorithms and Design Considerations

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we introduce, in further detail, various algorithms specially designed for large-scale global optimization. These algorithms do not rely on any decomposition mechanism and tackle the optimization of all variables as a whole. Given the sheer size of the search space to explore, many approaches have been proposed, among which memetic algorithms that incorporate local search have shown the best performance. Since these methods need to strike a good trade-off between exploration of the search space and exploitation of the current best solutions, we introduce ways of using local search and other techniques, such as restart mechanisms, to improve optimization performance on problems with a large number of decision variables. In recent years, the number of proposed algorithms for large-scale global optimization has increased significantly, as shown in Figure 1. We describe, in chronological order, the relevant algorithms in the topic, with more emphasis on state-of-the-art algorithms such as MOS [4], considered the state of the art for several years until more modern algorithms such as [2, 6] improved on its results. We also give a critical view of these algorithms and of the existing challenges in applying them to real-world problems. Overall, this part of the tutorial takes the form of a critical survey of existing and emerging algorithms, with an emphasis on the techniques used and on future challenges, not only to obtain better proposals but also to incorporate existing ones in real-world problems.

Targeted Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. The tutorial is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and wish to identify its most important open questions and challenges to bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers, as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community. The expected duration of each part is approximately 110 minutes.

Organizers’ Bio

Mohammad Nabi Omidvar (M'09) received the first bachelor's degree (First Class Hons.) in computer science, the second bachelor's degree in applied mathematics, and the Ph.D. degree in computer science from RMIT University, Melbourne, VIC, Australia, in 2010, 2014, and 2016, respectively. He is currently an Academic Fellow (Asst. Professor) with the School of Computing, University of Leeds, and Leeds University Business School, working on applications of artificial intelligence in financial services. Prior to joining the University of Leeds, he was a research fellow with the School of Computer Science, University of Birmingham, U.K. His current research interests include large-scale global optimization, decomposition methods for optimization, multiobjective optimization, and AI in finance. Dr. Omidvar was a recipient of the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his research on large-scale global optimization in 2017, the Australian Postgraduate Award in 2010, and the Best Computer Science Honours Thesis Award from the School of Computer Science and IT, RMIT University. In 2019 he jointly received the CEC 2019 Competition on Large-Scale Global Optimization award. He is also the chair of the IEEE Taskforce on Large-Scale Global Optimization.
Xiaodong Li (M'03-SM'07-Fellow'20) received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include machine learning, evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award". He was elevated to IEEE Fellow in 2020.
Daniel Molina Cabrera (PhD) is an assistant professor at the University of Granada. His research interests focus on numeric optimization, large-scale optimization, machine learning, and neuroevolution. He has more than 15 years of research experience, publishing more than 20 papers in international journals and more than 30 peer-reviewed contributions at national and international conferences. He was, until 2019, the Chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Antonio LaTorre (PhD) is an assistant professor at Universidad Politécnica de Madrid and subdirector of the Center for Computational Simulation. His research interests focus on high-performance data analysis, modeling and optimization. He has active research in applied problems in the domains of logistics, neuroscience and health. He has more than 14 years of research experience, backed up by his participation in 14 national and international projects, both with public and private funding, leading 3 of them. He has published more than 50 peer-reviewed contributions in international journals and conferences. He currently serves as vice-chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Yuan Sun is a research fellow in the School of Computer Science and Software Engineering, RMIT University. Prior to that, he obtained his Ph.D. degree from the University of Melbourne and a Bachelor's degree from Peking University. His research interests include large-scale optimization, machine learning, and operations research. He has published actively in large-scale optimization and won the CEC 2019 Competition on Large-Scale Global Optimization.

Evolutionary Bilevel Optimization

Abstract

Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper level optimization problem of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the "nestedness" of one optimization task in another.

Keywords

Bilevel Optimization, Bilevel Multi-objective Optimization, Evolutionary Algorithms, Multi-Criteria Decision Making, Theory on Bilevel Programming, Hierarchical Decision Making, Bilevel Applications, Hybrid Algorithms

Tutorial Description

What is Bilevel Programming?

To begin with, an introduction is provided to bilevel optimization problems. A generic formulation for bilevel problems is presented and its differences from ordinary single level optimization problems are discussed.
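A generic single-objective bilevel formulation, in notation commonly used in the bilevel literature (upper-level objective F and constraints G_k; lower-level objective f and constraints g_j; exact symbols vary by author), reads:

```latex
\begin{aligned}
\min_{x_u \in X_U,\, x_l \in X_L} \quad & F(x_u, x_l) \\
\text{s.t.} \quad & G_k(x_u, x_l) \le 0, \quad k = 1, \dots, K, \\
& x_l \in \operatorname*{arg\,min}_{z \in X_L}
    \bigl\{\, f(x_u, z) \;:\; g_j(x_u, z) \le 0,\ j = 1, \dots, J \,\bigr\}
\end{aligned}
```

The last constraint is what distinguishes bilevel problems from ordinary single-level ones: the lower-level vector is not free, but must itself solve an optimization problem parameterized by the upper-level choice.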

Bilevel Problems: A Genesis
The origin of bilevel problems can be traced to two roots: game theory and mathematical programming. A genesis of these problems is provided through simple practical examples.

Properties of Bilevel Problems
The properties of bilevel optimization problems are discussed. Difficulties encountered in solving these problems are highlighted.

Applications
A number of practical applications from the areas of process optimization, game-playing strategy development, transportation problems, optimal control, environmental economics and coordination of multi-divisional firms are described to highlight the practical relevance of bilevel programming.

Solution Methodologies
Existing solution methodologies for bilevel optimization and their weaknesses are discussed. The lack of efficient methodologies is underlined, and the need for better solution approaches is emphasized.

EAs Niche
Evolutionary algorithms provide a convenient framework for handling complex bilevel problems. Co-evolutionary ideas and flexibility in operator design can help in efficiently tackling the problem.
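
As a toy illustration of this nested structure (a deliberately brute-force sketch with made-up parameters; practical bilevel EAs use smarter co-evolutionary machinery than re-solving the lower level from scratch for every upper-level candidate):

```python
import random

def lower_level_best(x_u, f, n_gen=30, pop_size=20):
    """Lower-level search: evolve x_l to minimize f(x_u, x_l) for a fixed x_u."""
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(n_gen):
        # Gaussian mutation plus truncation selection as a stand-in for a full EA
        children = [xl + random.gauss(0, 0.5) for xl in pop]
        pop = sorted(pop + children, key=lambda xl: f(x_u, xl))[:pop_size]
    return pop[0]

def bilevel_ea(F, f, n_gen=30, pop_size=10):
    """Upper-level EA; each candidate x_u is evaluated at its lower-level optimum."""
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(n_gen):
        children = [xu + random.gauss(0, 0.5) for xu in pop]
        evaluated = [(F(xu, lower_level_best(xu, f)), xu) for xu in pop + children]
        pop = [xu for _, xu in sorted(evaluated)][:pop_size]
    return pop[0]

# Toy problem: the lower level forces x_l toward x_u, so the upper objective
# (x_u - 1)^2 + x_l^2 is minimized near x_u = 0.5.
F = lambda xu, xl: (xu - 1) ** 2 + xl ** 2
f = lambda xu, xl: (xl - xu) ** 2
print(bilevel_ea(F, f))  # approaches the bilevel optimum near x_u = 0.5
```

The nested loop makes the cost of the “nestedness” explicit: every upper-level evaluation triggers a full lower-level run, which is why co-evolutionary ideas and approximations of the lower-level reaction are so valuable.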

Multi-objective Extensions
Recent algorithms and results on multi-objective bilevel optimization using evolutionary algorithms are discussed and some application problems are highlighted.

Future Research Ideas
A number of immediate and future research ideas on bilevel optimization, related to decision-making difficulties and robustness, are highlighted.

Conclusions
Concluding remarks for the tutorial are provided.

References
A list of references on bilevel optimization is provided.

Target Audience
Bilevel optimization belongs to a difficult class of optimization problems. Most classical optimization methods are unable to solve even simple instances of bilevel problems. This offers researchers in the field of evolutionary computation a niche in which to develop efficient bilevel procedures. However, many researchers working in evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and certainly deserves the attention of researchers working on evolutionary computation. The target audience for this tutorial is researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts of bilevel optimization and recent results easily accessible to the audience.

Short Biography
Ankur Sinha is an Associate Professor at the Indian Institute of Management, Ahmedabad, India. He completed his PhD at the Helsinki School of Economics (now Aalto University School of Business), where his thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his PhD, he held visiting positions at Michigan State University and Aalto University. His research interests include Bilevel Optimization, Multi-Criteria Decision Making and Evolutionary Algorithms. He has offered tutorials on Evolutionary Bilevel Optimization at GECCO 2013, PPSN 2014, and CEC 2015, 2017, 2018 and 2019. His research has been published in some of the leading Computer Science, Business and Statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page: http://www.iima.ac.in/~asinha/.

Kalyanmoy Deb is a Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 500 international journal and conference research papers. He has pioneered, and is a leader in, the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found at http://www.egr.msu.edu/people/profile/kdeb.

Self-Organizing Migrating Algorithm – Recent Advances and Progress in Swarm Intelligence Algorithms

Abstract

The Self-Organizing Migrating Algorithm (SOMA) belongs to the class of swarm intelligence techniques. SOMA is inspired by competitive-cooperative behavior, uses inherent self-adaptation of movement over the search space, and employs discrete perturbation mimicking the mutation process. SOMA performs well in both continuous and discrete domains. The tutorial will cover several parts.

First, the state of the art in the field of swarm intelligence algorithms will be discussed, along with the similarities and differences between various algorithms and SOMA.

The main part of the tutorial will present a collection of principal findings from original research papers on current trends in parameter control, discrete perturbation, and novel improvement approaches based on SOMA from the latest scientific events. New and very efficient strategies such as SOMA-T3A (4th place in the 100-digit competition), the recently published SASOMA, and SOMA-Pareto (6th place in the 100-digit competition) will be discussed in detail with demonstrations.

The tutorial will also describe our original concept for transforming the internal dynamics of swarm algorithms (including SOMA) into a social-like network capturing the social interactions among individuals. Analysis of such a network can then be fed directly back into the algorithm to improve its performance.

Finally, experiences from more than 10 years of work with SOMA, demonstrated on various applications such as control engineering, cybersecurity, combinatorial optimization, and computer games, conclude the tutorial.

Tutorial Presenters (names with affiliations):

Name: Roman Senkerik
Affiliation: Tomas Bata University in Zlin, Department of Informatics and Artificial Intelligence
Email: senkerik@utb.cz


Tutorial Presenters’ Bios:

Roman Senkerik was born in Zlin, Czech Republic, in 1981. He received an MSc degree in technical cybernetics from the Faculty of Applied Informatics, Tomas Bata University in Zlin, in 2004, a Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Associate Professor degree in informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at the Faculty of Applied Informatics, Tomas Bata University in Zlin. Since 2014 he has been an Associate Professor and Head of the A.I. Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters and editorial notes. His research interests include soft computing methods and their interdisciplinary applications in optimization and cybersecurity, the development of evolutionary algorithms, machine learning, data science, chaos theory, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions, symposia, and IPC/TPC duties at IEEE WCCI, CEC, SSCI, GECCO, SEMCCO and MENDEL (among other) events. He has been a guest editor of several special issues of journals and an editor of proceedings for several conferences.

Evolutionary Large-Scale Global Optimization – PART 2

Abstract

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [10], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 11]. Recent advances in machine learning have likewise produced very large-scale optimization problems in the training of deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [3, 7]. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped to deal with such problems. It is this research gap, in both theory and practice, that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems. Currently, there are two different approaches to tackling this complex search. The first is to apply decomposition methods, which divide the variables into groups that can each be optimized separately, reducing the curse of dimensionality; their main drawback is that choosing a proper decomposition can be computationally difficult and expensive. The other approach is to design algorithms specifically for large-scale global optimization, with features well suited to that type of search. The tutorial is divided into two parts, each dedicated to exploring advances in one of the approaches stated above, presented by experts in the respective field.

Part I: Introduction and Decomposition Methods

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we provide an overview of the recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods for dealing with large-scale problems.

This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of state-of-the-art decomposition algorithms, including the differential grouping algorithm [8] and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks [5]. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms, such as the contribution-based cooperative co-evolution family of algorithms [9]. Overall, this tutorial takes the form of a critical survey of existing methods, with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
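
The interaction check at the heart of differential grouping can be sketched roughly as follows (a simplified, hypothetical illustration; the published algorithm's threshold handling and grouping logic are more involved):

```python
def interact(f, x, i, j, delta=1.0, eps=1e-6):
    """Detect interaction between variables i and j of f via a
    finite-difference test: if perturbing x_i changes f by a different
    amount depending on the value of x_j, the variables are non-separable."""
    x1 = list(x)
    x2 = list(x); x2[i] += delta          # perturb x_i
    d1 = f(x2) - f(x1)                    # effect of x_i at the original x_j
    x3 = list(x1); x3[j] += delta
    x4 = list(x2); x4[j] += delta
    d2 = f(x4) - f(x3)                    # effect of x_i after shifting x_j
    return abs(d1 - d2) > eps             # differing effects => interaction

# x0 is separable from x1; the x1 * x2 term couples x1 and x2.
f = lambda x: x[0] ** 2 + x[1] ** 2 + x[1] * x[2]
base = [0.0, 0.0, 0.0]
print(interact(f, base, 0, 1))  # False: x0 and x1 are separable
print(interact(f, base, 1, 2))  # True: x1 and x2 interact
```

A decomposition method repeats this test across variable pairs to form groups of interacting variables, each of which can then be optimized by its own subpopulation in a cooperative co-evolution framework.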

Part II: Algorithms and Design Considerations

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we introduce in further detail various algorithms specially designed for large-scale global optimization. These algorithms do not rely on any decomposition mechanism and tackle the optimization of all variables as a whole. Given the sheer size of the search space to be explored, many approaches have been proposed, among which memetic algorithms incorporating local search have shown better performance. Since these methods need a good trade-off between exploration of the search space and exploitation of the current best solutions, we introduce ways of using local search and other techniques, such as restart mechanisms, to improve optimization performance on problems with a large number of decision variables. In recent years, proposals of algorithms for large-scale global optimization have increased significantly, as shown in Figure 1. We describe, in chronological order, the relevant algorithms in the topic, with particular emphasis on state-of-the-art algorithms such as MOS [4], considered the state of the art for several years until more modern algorithms such as [2, 6] improved on its results. We also give a critical view of these algorithms and of the challenges in applying them to real-world problems. Overall, this part of the tutorial takes the form of a critical survey of existing and emerging algorithms, with an emphasis on the techniques used and on future challenges, not only to obtain better proposals but also to apply existing ones to real-world problems.
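
The memetic pattern described above, combining evolution, local search, and restarts, might be sketched like this (an illustrative toy with invented parameters, not MOS or any specific published algorithm):

```python
import random

def local_search(f, x, step=0.1, iters=50):
    """Greedy coordinate-wise refinement of the incumbent solution."""
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for cand in (x[i] - step, x[i] + step):
                trial = list(x); trial[i] = cand
                if f(trial) < f(x):
                    x = trial
    return x

def memetic_restart(f, dim, restarts=5, gens=30, pop_size=20):
    """Evolution for exploration, local search for exploitation,
    and full restarts to escape stagnation."""
    best = None
    for _ in range(restarts):
        pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(gens):
            children = [[xi + random.gauss(0, 0.3) for xi in p] for p in pop]
            pop = sorted(pop + children, key=f)[:pop_size]
        refined = local_search(f, pop[0])      # exploit around the best found
        if best is None or f(refined) < f(best):
            best = refined
    return best

sphere = lambda x: sum(xi ** 2 for xi in x)
print(sphere(memetic_restart(sphere, dim=5)))  # close to 0
```

In large-scale settings the key design questions are exactly the ones the tutorial raises: how much budget to spend on local search versus evolution, and when a restart is worth its cost.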

Targeted Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. It is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and identify its most important open questions and challenges to bootstrap their research in large-scale optimization. The tutorial may also interest more experienced researchers, as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our primary goal of informing and educating, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community. The expected duration of each part is approximately 110 minutes.

Organizers’ Bio

Mohammad Nabi Omidvar (M’09) received his first bachelor's degree (First Class Hons.) in computer science, a second bachelor's degree in applied mathematics, and a Ph.D. degree in computer science from RMIT University, Melbourne, VIC, Australia, in 2010, 2014, and 2016, respectively. He is currently an Academic Fellow (Asst. Professor) with the School of Computing, University of Leeds, and Leeds University Business School, working on applications of artificial intelligence in financial services. Prior to joining the University of Leeds, he was a research fellow with the School of Computer Science, University of Birmingham, U.K. His current research interests include large-scale global optimization, decomposition methods for optimization, multiobjective optimization, and AI in finance. Dr. Omidvar was a recipient of the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his research on large-scale global optimization in 2017, the Australian Postgraduate Award in 2010, and the Best Computer Science Honours Thesis Award from the School of Computer Science and IT, RMIT University. In 2019 he jointly received the CEC 2019 Competition on Large-Scale Global Optimization award. He is also the chair of the IEEE Taskforce on Large-Scale Global Optimization.
Xiaodong Li (M’03-SM’07-Fellow’20) received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include machine learning, evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”. He was elevated to IEEE Fellow in 2020.
Daniel Molina Cabrera (PhD) is an assistant professor at the University of Granada. His research interests focus on numerical optimization, large-scale optimization, machine learning, and neuroevolution. He has more than 15 years of research experience, having published more than 20 papers in international journals and more than 30 peer-reviewed contributions to national and international conferences. Until 2019 he was the Chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Antonio LaTorre (PhD) is an assistant professor at Universidad Politécnica de Madrid and deputy director of the Center for Computational Simulation. His research interests focus on high-performance data analysis, modeling, and optimization. He is actively involved in applied research in the domains of logistics, neuroscience, and health. He has more than 14 years of research experience, backed by his participation in 14 national and international projects with both public and private funding, three of which he has led. He has published more than 50 peer-reviewed contributions in international journals and conferences. He currently serves as vice-chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Yuan Sun is a research fellow in the School of Computer Science and Software Engineering, RMIT University. He obtained his Ph.D. degree from the University of Melbourne and a Bachelor's degree from Peking University. His research interests include large-scale optimization, machine learning, and operations research. He has published actively in large-scale optimization and has won the CEC 2019 Competition on Large-Scale Global Optimization.

Recent Advances in Particle Swarm Optimization Analysis and Understanding

Abstract

The main objective of this tutorial is to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that actively hinder successful use of PSO in solving challenging optimization problems. While the behaviour of PSO's particles has been studied both theoretically and empirically since the algorithm's inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial provides a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false and how they negatively impact results. The tutorial also presents recent theoretical results on PSO particle behaviour, from which practitioners can make better and more informed decisions about PSO, and in particular better PSO parameter selections.
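
As one example of the kind of theory-informed parameter selection such studies enable, the order-2 stability criterion derived by Poli for the canonical PSO, c1 + c2 < 24(1 − w²)/(7 − 5w) with w ∈ (−1, 1), can be checked programmatically (a sketch assuming that published criterion; other analyses, including Cleghorn and Engelbrecht's, give related stability regions under different assumptions):

```python
def pso_order2_stable(w, c1, c2):
    """Check (w, c1, c2) against the order-2 stability criterion
    c1 + c2 < 24(1 - w^2) / (7 - 5w), valid for -1 < w < 1
    (Poli's criterion for the canonical PSO, assumed here as stated
    in the theoretical literature the tutorial surveys)."""
    if not -1.0 < w < 1.0:
        return False
    return c1 + c2 < 24.0 * (1.0 - w * w) / (7.0 - 5.0 * w)

# The popular constriction-derived setting lies inside the stable region...
print(pso_order2_stable(0.7298, 1.4962, 1.4962))  # True
# ...while naive choices such as w = 1, c1 = c2 = 2 do not.
print(pso_order2_stable(1.0, 2.0, 2.0))           # False
```

A check like this is exactly the kind of informed decision the tutorial argues practitioners should make before blaming PSO itself for divergent or rotely tuned runs.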

Presenters:

Andries Engelbrecht (Stellenbosch University, South Africa)

Christopher Cleghorn (University of Pretoria, South Africa)

Bios:

Andries Engelbrecht received the Master's and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999, respectively. He is the Voigt Chair in Data Science in the Department of Industrial Engineering, with a joint appointment as Professor in the Computer Science Division, Stellenbosch University. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these computational intelligence paradigms to data analytics, games, bioinformatics, finance, and difficult optimization problems. He is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn received his Master's and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017, respectively. He is a senior lecturer in Computer Science at the University of Pretoria and a member of the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

URL: https://cirg.cs.up.ac.za/CEC/index.html

Nature-Inspired Techniques for Combinatorial Problems

Abstract

Combinatorial problems refer to those applications where we either look for the existence of a consistent scenario satisfying a set of constraints (decision problem), or for one or more good/best solutions meeting a set of requirements while optimizing some objectives (optimization problem). These latter objectives include user’s preferences that reflect desires and choices that need to be satisfied as much as possible. Moreover, constraints and objectives (in the case of an optimization problem) often come with uncertainty due to lack of knowledge, missing information, or variability caused by events, which are under nature’s control. Finally, in some applications such as timetabling, urban planning and robot motion planning, these constraints and objectives can be temporal, spatial or both. In this latter case, we are dealing with entities occupying a given position in time and space.

Because of the importance of these problems in so many fields, a wide variety of techniques and programming languages from artificial intelligence, computational logic, operations research, and discrete mathematics are being developed to tackle problems of this kind. While these tools have provided very promising results at both the representation and reasoning levels, they remain impractical for many real-world applications, especially given the challenges listed above.

In this tutorial, we will show how to apply nature-inspired techniques in order to overcome these limitations. This requires dealing with different aspects of uncertainty, change, preferences and spatio-temporal information. The approach that we will adopt is based on the Constraint Satisfaction Problem (CSP) paradigm and its variants.
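
For a concrete flavour of the CSP paradigm, a minimal min-conflicts local search on a toy graph-colouring CSP might look like this (an illustrative sketch only; the nature-inspired techniques covered in the tutorial are considerably richer):

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=1000):
    """Local search for a CSP: repeatedly pick a conflicted variable and
    move it to the value in its domain that minimizes its conflicts."""
    assign = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, assign[v], assign) > 0]
        if not conflicted:
            return assign                  # all constraints satisfied
        v = random.choice(conflicted)
        assign[v] = min(domains[v], key=lambda val: conflicts(v, val, assign))
    return None                            # no solution found within the budget

# Toy graph-colouring CSP: adjacent nodes must take different colours.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
neighbours = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
domains = {v: ["red", "green", "blue"] for v in neighbours}

def conflicts(v, val, assign):
    return sum(1 for n in neighbours[v] if assign.get(n) == val)

solution = min_conflicts(list(neighbours), domains, conflicts)
print(solution)  # e.g. a colouring where no edge joins same-coloured nodes
```

Extending such a solver to soft constraints, preferences, uncertainty, and spatio-temporal reasoning is precisely where the CSP variants discussed in the tutorial come in.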

Biography of the Speaker

Dr. Malek Mouhoub obtained his MSc and PhD in Computer Science from the University of Nancy in France. He is currently a Professor at the University of Regina, Canada, where he previously served as Head of the Department of Computer Science. Dr. Mouhoub’s research interests include Constraint Solving, Metaheuristics and Nature-Inspired Techniques, Spatial and Temporal Reasoning, Preference Reasoning, and Constraint and Preference Learning, with applications to Scheduling and Planning, E-commerce, Online Auctions, Vehicle Routing and Geographic Information Systems (GIS). Dr. Mouhoub’s research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), and the Mathematics of Information Technology and Complex Systems (MITACS) federal grants, in addition to several other funds and awards.

Dr. Mouhoub is the past treasurer and a member of the executive of the Canadian Artificial Intelligence Association / Association pour l’intelligence artificielle au Canada (CAIAC). CAIAC is the oldest national Artificial Intelligence association in the world. It is the official arm of the Association for the Advancement of Artificial Intelligence (AAAI) in Canada.

Dr. Mouhoub was the program co-chair for the 30th Canadian Conference on Artificial Intelligence (AI 2017), the 31st International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2018) and the IFIP International Conference on Computational Intelligence and Its Applications (IFIP CIIA 2018).

Niching Methods for Multimodal Optimization

Xiaodong Li, RMIT University, Melbourne, Australia
Mike Preuss, Universiteit Leiden, the Netherlands
Michael G. Epitropakis, The Signal Group, Athens, Greece

Date/Time:
Sunday, July 19th from 7:00 – 9:00pm UK Time

Abstract

Population or single-solution search-based optimization algorithms (i.e., meta-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include, among others, evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multi-modal” by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose the one that best suits the problem context. Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as “niching” methods, e.g., crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. More recently, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization (PSO) and Differential Evolution (DE). In this talk we will introduce niching methods, including their historical background, the motivation for employing niching in EAs, and the challenges in applying them to solving real-world problems. We will describe a few classic niching methods, such as fitness sharing and crowding, as well as niching methods developed using newer meta-heuristics such as PSO and DE. We will also describe a niching competition series run annually by the IEEE CIS Taskforce on Multimodal Optimization, which we hope will attract more researchers to participate. Niching methods can be applied to effectively handle a wide range of problems, including static and dynamic optimization, multiobjective optimization, clustering, feature selection, and machine learning. We will provide several such examples of solving real-world multimodal optimization problems.
This tutorial will use several demos to show the workings of niching methods. The tutorial is supported by the IEEE CIS Task Force on Multi-modal Optimization (http://www.epitropakis.co.uk/ieee-mmo/).
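
As an illustration of one classic niching method mentioned above, fitness sharing derates an individual's raw fitness by its niche count, so that crowded peaks stop dominating selection. A minimal sketch (with hypothetical parameter values; the niche radius sigma_share and the exponent alpha are problem-dependent):

```python
def shared_fitness(population, fitness, sigma_share=0.5, alpha=1.0):
    """Divide each individual's raw fitness (maximization assumed) by its
    niche count: the sum of a triangular sharing function over all
    individuals within distance sigma_share."""
    def sh(d):
        return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0
    shared = []
    for x in population:
        niche_count = sum(sh(abs(x - y)) for y in population)
        shared.append(fitness(x) / niche_count)
    return shared

# Two-peak landscape with peaks at +1 and -1: three individuals crowd
# one peak, so the lone individual on the other peak is favoured.
f = lambda x: 1.0 - min(abs(x - 1.0), abs(x + 1.0))
pop = [1.0, 1.01, 0.99, -1.0]
print(shared_fitness(pop, f))
```

Running this shows the lone individual at -1 receiving a higher shared fitness than any of the three crowded at +1, which is exactly the pressure that lets a population maintain multiple optima simultaneously.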

Duration: 1.5 hours.

Intended audience: This tutorial should be of interest to both beginners and more experienced researchers. It provides a unique opportunity to get an update on the latest developments in this classic evolutionary computing topic, which has attracted increasing attention in recent years.

Tutorial Presenters (names with affiliations):

Xiaodong Li, RMIT University, Melbourne, Australia
Mike Preuss, Universiteit Leiden, the Netherlands
Michael G. Epitropakis, The Signal Group, Athens, Greece

Tutorial Presenters’ Bios:

Xiaodong Li received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization (niching), and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL’08, a Program Co-Chair of AI’09, a Program Co-Chair of IEEE CEC’2012, and a General Chair of ACALCI’2017 and AI’17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”.

Mike Preuss is an Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Muenster, Germany), and before that with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests lie in the field of evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and in computational intelligence and machine learning methods for computer games, especially procedural content generation (PCG) and real-time strategy (RTS) games.

Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Senior Research Scientist and Product Manager at The Signal Group, Athens, Greece. From 2015 to 2018 he was a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning, and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher in multi-modal optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society.


Venue: With WCCI 2020 being held as a virtual conference, there will be a virtual experience of Glasgow, Scotland, accessible through the virtual platform. We hope that everyone will have a chance to visit one of Europe’s most dynamic cultural capitals and the “World’s Friendliest City” in the near future!