Schedule of tutorials - 19th July, 2020

11:30 - 13:30
14:00 - 16:00
16:30 - 18:30
19:00 - 21:00
11:30 - 13:30
Tutorial Title Presenter Conference Contact Email
RANKING GAME: How to combine human and computational intelligence? Peter Erdi WCCI
Adversarial Machine Learning: On The Deeper Secrets of Deep Learning Danilo V. Vargas IJCNN
From brains to deep neural networks Saeid Sanei, Clive Cheong Took IJCNN
Deep randomized neural networks Claudio Gallicchio, Simone Scardapane IJCNN
Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Evolution Nikola Kasabov IJCNN
Fundamentals of Fuzzy Networks Alexander Gegov, Farzad Arabikhan FUZZ
Pareto Optimization for Subset Selection: Theories and Practical Algorithms Chao Qian, Yang Yu CEC
Selection Exploration and Exploitation Stephen Chen, James Montgomery CEC
Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler Carola Doerr, Thomas Bäck, Ofer Shir, Hao Wang CEC
Computational Complexity Analysis of Genetic Programming Pietro S. Oliveto, Andrei Lissovoi CEC
Self-Organizing Migrating Algorithm - Recent Advances and Progress in Swarm Intelligence Algorithms Roman Senkerik CEC
Visualising the search process of EC algorithms Su Nguyen, Yi Mei, and Mengjie Zhang CEC
14:00 - 16:00
Tutorial Title Presenter Conference Contact Email
Instance Space Analysis for Rigorous and Insightful Algorithm Testing Kate Smith-Miles, Mario Andrés Muñoz Acosta WCCI
Advances in Deep Reinforcement Learning Thanh Thi Nguyen, Vijay Janapa Reddi, Ngoc Duy Nguyen IJCNN
Machine learning for data streams in Python with scikit-multiflow Jacob Montiel, Heitor Gomes, Jesse Read, Albert Bifet IJCNN
Deep Learning for Graphs Davide Bacciu IJCNN
Explainable-by-Design Deep Learning: Fast, Highly Accurate, Weakly Supervised, Self-evolving Plamen Angelov IJCNN
Cartesian Genetic Programming and its Applications Lukas Sekanina, Julian Miller CEC
Large-Scale Global Optimization Mohammad Nabi Omidvar, Daniel Molina CEC 
Niching Methods for Multimodal Optimization Mike Preuss, Michael G. Epitropakis, Xiaodong Li CEC
A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms Pietro S. Oliveto CEC
Theoretical Foundations of Evolutionary Computation for Beginners and Veterans Darrell Whitley CEC
Evolutionary Computation for Dynamic Multi-objective Optimization Problems Shengxiang Yang CEC
16:30 - 18:30
Tutorial Title Presenter Conference Contact Email
New and Conventional Ensemble Methods José Antonio Iglesias, María Paz Sesmero Lorente, Araceli Sanchis de Miguel WCCI
Evolution of Neural Networks Risto Miikkulainen IJCNN
Mechanisms of Universal Turing Machines in Developmental Networks for Vision, Audition, and Natural Language Understanding Juyang Weng IJCNN
Generalized constraints for knowledge-driven-and-data-driven approaches Baogang Hu IJCNN
Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting P N Suganthan IJCNN
Using intervals to capture and handle uncertainty Christian Wagner, Vladik Kreinovich, Josie McCulloch, Zack Ellerby FUZZ
Fuzzy Systems for Neuroscience and Neuro-engineering Applications Javier Andreu, CT Lin FUZZ
Evolutionary Algorithms and Hyper-Heuristics Nelishia Pillay CEC
Recent Advances in Particle Swarm Optimization Analysis and Understanding Andries Engelbrecht, Christopher Cleghorn CEC
Recent Advances in Landscape Analysis for Optimisation Katherine Malan, Gabriela Ochoa CEC
Evolutionary Machine Learning Masaya Nakata, Shinichi Shirakawa, Will Browne CEC
Evolutionary Many-Objective Optimization Hisao Ishibuchi, Hiroyuki Sato CEC
19:00 - 21:00
Tutorial Title Presenter Conference Contact Email
Multi-modality Helps in Solving Biomedical Problems: Theory and Applications Sriparna Saha, Pratik
Probabilistic Tools for Analysis of Network Performance Věra Kůrková IJCNN
Experience Replay for Deep Reinforcement Learning Abdulrahman Altahhan, Vasile Palade IJCNN
Deep Stochastic Learning and Understanding Jen-Tzung Chien IJCNN
Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems José M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena FUZZ
Evolving neuro-fuzzy systems in clustering and regression Igor Škrjanc, Fernando Gomide, Daniel Leite, Sašo Blažič FUZZ
Differential Evolution Rammohan Mallipeddi, Guohua Wu CEC
Bilevel optimization Ankur Sinha, Kalyanmoy Deb CEC
Nature-Inspired Techniques for Combinatorial Problems Malek Mouhoub

Dynamic Parameter Choices in Evolutionary Computation Carola Doerr, Gregor Papa  CEC
Evolutionary computation for games: learning, planning, and designing Julian Togelius, Jialin Liu


Evolutionary Algorithms and Hyper-Heuristics


Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions within and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of this tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to give researchers a foundation from which to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided, and challenges in their implementation will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area.
The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined as well as the application of evolutionary algorithm hyper-heuristics for the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
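As a concrete illustration of the selection perturbative category discussed above, the sketch below scores a set of low-level heuristics on a toy bit-string problem and chooses among them by roulette wheel. The heuristics, scoring scheme and problem are illustrative assumptions, not components of the EvoHyp library.

```python
import random

# Two illustrative low-level perturbative heuristics for a toy bit-string
# problem (hypothetical examples, not heuristics from the EvoHyp library).
def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def flip_two(s):
    return flip_one(flip_one(s))

def onemax(s):
    return sum(s)  # objective: number of ones (maximise)

def selection_hyper_heuristic(heuristics, fitness, n_bits=20, iters=200, seed=0):
    """Score-based selection perturbative hyper-heuristic: heuristics that
    yield improvements are rewarded and chosen more often thereafter."""
    random.seed(seed)
    sol = [random.randint(0, 1) for _ in range(n_bits)]
    scores = {h: 1.0 for h in heuristics}
    for _ in range(iters):
        # roulette-wheel choice of a low-level heuristic based on its score
        h = random.choices(heuristics, weights=[scores[x] for x in heuristics])[0]
        cand = h(sol)
        if fitness(cand) >= fitness(sol):   # accept non-worsening moves
            scores[h] += 1.0                # reward the chosen heuristic
            sol = cand
        else:
            scores[h] = max(0.1, scores[h] - 0.1)
    return sol

best = selection_hyper_heuristic([flip_one, flip_two], onemax)
```

A generation hyper-heuristic would instead evolve new heuristics, typically with genetic programming, rather than choosing among existing ones.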

Tutorial Presenters’ Bios:

Nelishia Pillay is a professor and head of the Department of Computer Science, University of Pretoria. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She holds the Multichoice Joint Chair in Machine Learning. She is chair of the IEEE Task Force on Hyper-Heuristics, chair of the IEEE Task Force on Automated Algorithm Design, Configuration and Selection, vice-chair of the IEEE CIS Technical Committee on Intelligent Systems and Applications, and a member of the IEEE Technical Committee on Evolutionary Computation. She has served on program committees for numerous national and international conferences, is a reviewer for various international journals, and is an associate editor for IEEE Computational Intelligence Magazine and the Journal of Scheduling. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to combinatorial optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she established.

External website with more information on Tutorial (if applicable):

Recent Advances in Particle Swarm Optimization Analysis and Understanding


The main objective of this tutorial will be to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that are actively hindering a practitioner’s successful use of PSO in solving challenging optimization problems. While the behaviour of PSO’s particles has been studied both theoretically and empirically since its inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour from which the PSO practitioner can now make better and more informed decisions about PSO and in particular make better PSO parameter selections.
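For reference, the particle update at the heart of these parameter-selection questions can be sketched as follows. The inertia-weight form and the coefficient defaults below are one commonly cited stable setting, used here purely as an illustration; the bounds and test function are assumptions.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7298, c1=1.4962, c2=1.4962, seed=1):
    """Canonical inertia-weight PSO minimising f on [-5, 5]^dim.
    The default coefficients are a commonly cited stable setting; the
    tutorial's point is that such choices should be theory-guided."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=5)
```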


Andries Engelbrecht (Stellenbosch University, South Africa)

Christopher Cleghorn (University of Pretoria, South Africa)


Andries Engelbrecht received his Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999, respectively. He is the Voigt Chair in Data Science in the Department of Industrial Engineering, with a joint appointment as Professor in the Computer Science Division, Stellenbosch University. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these computational intelligence paradigms to data analytics, games, bioinformatics, finance, and difficult optimization problems. He is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn received his Masters and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is a senior lecturer in Computer Science at the University of Pretoria, and a member of the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.


Differential Evolution with Ensembles, Adaptations and Topologies


Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, DE variants perturb the current-generation population members with the scaled differences of distinct population members, so no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. It will also cover neighborhood-topology-based DE and adaptive DE variants that improve performance, along with theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters. The talk will finally highlight a few problems that pose a challenge to state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
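The difference-vector perturbation described above can be sketched with the classic DE/rand/1/bin variant; the test function, bounds and parameter values below are illustrative assumptions.

```python
import random

def de_rand_1_bin(f, dim, pop_size=30, F=0.5, CR=0.9, gens=150, seed=2):
    """Classic DE/rand/1/bin for minimisation on [-5, 5]^dim: the mutant is
    a base vector plus the scaled difference of two distinct population
    members, as described above; F and CR are the key control parameters."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct members, all different from the target i
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantees at least one crossed gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial.append(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                # greedy one-to-one survivor selection
                pop[i], fit[i] = trial, ft
    b = min(range(pop_size), key=lambda i: fit[i])
    return pop[b], fit[b]

sphere = lambda x: sum(v * v for v in x)
best, best_f = de_rand_1_bin(sphere, dim=5)
```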

Duration: 1.5 hours

Intended Audience: This presentation will include basic as well as advanced topics of DE. Hence, researchers commencing their research in DE, as well as experienced researchers, can attend. Practitioners will also benefit from the presentation.

Expected Enrollment: In the past, 40-50 attendees have registered for the DE tutorials at CEC. We expect similar interest this year.

Name: Dr. Rammohan Mallipeddi and Dr. Guohua Wu


Affiliations: Kyungpook National University, South Korea; Central South University, China.





Goal: Differential evolution (DE) is one of the most successful numerical optimization paradigms; hence, practitioners and junior researchers will be interested in learning this optimization algorithm. Research on DE is also growing rapidly, so a tutorial on DE will be timely and beneficial to many CEC 2020 attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems with DE.

Format: The tutorial will be primarily slide-based, with frequent interaction with the audience.

Pertinent Qualification: The speakers have co-authored several original articles on DE. In addition, they have published a survey paper on ensemble strategies in population-based algorithms, including DE. The speakers have also been organizing numerical optimization competitions at the CEC conferences (EA Benchmarks / CEC Competitions), in which DE has been one of the top performers. As the organizers, the speakers will also be able to share their experiences.

Key Papers:

  • Guohua Wu, Rammohan Mallipeddi and P. N. Suganthan, "Ensemble Strategies for Population-based Optimization Algorithms – A Survey," Swarm and Evolutionary Computation, Vol. 44, pp. 695-711, 2019.
  • S. Das, S. S. Mullick and P. N. Suganthan, "Recent Advances in Differential Evolution – An Updated Survey," Swarm and Evolutionary Computation, Vol. 27, pp. 1-30, 2016.
  • S. Das and P. N. Suganthan, "Differential Evolution: A Survey of the State-of-the-Art," IEEE Trans. on Evolutionary Computation, 15(1):4-31, Feb. 2011.

General Bio-sketch:

Name: Dr. Rammohan Mallipeddi

Affiliation: Kyungpook National University, South Korea.

Rammohan Mallipeddi is an Associate Professor in the School of Electronics Engineering, Kyungpook National University (Daegu, South Korea). He received his Master's and PhD degrees in computer control and automation from the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 2007 and 2010, respectively. His research interests include evolutionary computing, artificial intelligence, image processing, digital signal processing, robotics, and control engineering. He has co-authored papers published in journals such as IEEE TEVC. Currently, he serves as an Associate Editor for Swarm and Evolutionary Computation, an international journal from Elsevier, and as a regular reviewer for journals including IEEE TEVC and IEEE TCYB.

Name: Dr. Guohua Wu

Affiliation: Central South University, China.

Guohua Wu received the B.S. degree in Information Systems and the Ph.D. degree in Operations Research from the National University of Defense Technology, China, in 2008 and 2014, respectively. Between 2012 and 2014, he was a visiting Ph.D. student at the University of Alberta, Edmonton, Canada. He is now a Professor at the School of Traffic and Transportation Engineering, Central South University, Changsha, China. His current research interests include planning and scheduling, evolutionary computation and machine learning. He has authored more than 50 refereed papers, including those published in IEEE TCYB, IEEE TSMCA, INS and COR. He serves as an Associate Editor of Swarm and Evolutionary Computation, an editorial board member of the International Journal of Bio-Inspired Computation, and a Guest Editor of Information Sciences and Memetic Computing. He is a regular reviewer for more than 20 journals, including IEEE TEVC, IEEE TCYB and IEEE TFS.

Pareto Optimization for Subset Selection: Theories and Practical Algorithms


Pareto optimization is a general framework for solving single-objective optimization problems by means of multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, then employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective problem from the produced non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, information retrieval, etc. The theoretical understanding of Pareto optimization has recently developed significantly, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection. We assume that the audience has a basic knowledge of probability theory.
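A minimal sketch of the bi-objective transformation described above, in the style of the POSS algorithm for subset selection (maximise f(S) subject to |S| ≤ k): the archive keeps the non-dominated (fitness, size) trade-offs, and solutions are mutated by bit flips. The toy objective and parameter choices are assumptions for illustration.

```python
import random

def poss(f, n, k, iters=3000, seed=3):
    """POSS-style Pareto optimization for subset selection: maximise f(S)
    s.t. |S| <= k via a non-dominated archive of (fitness, size) trade-offs."""
    rng = random.Random(seed)
    def dominates(a, b):        # maximise fitness a[0], minimise size a[1]
        return a[0] >= b[0] and a[1] <= b[1] and a != b
    empty = (0,) * n
    archive = {empty: (f(empty), 0)}
    for _ in range(iters):
        s = rng.choice(list(archive))                       # pick archived parent
        child = tuple(bit ^ (rng.random() < 1.0 / n) for bit in s)
        size = sum(child)
        if size >= 2 * k:                                   # standard size cap
            continue
        obj = (f(child), size)
        if any(dominates(o, obj) for o in archive.values()):
            continue                                        # child is dominated
        archive = {t: o for t, o in archive.items() if not dominates(obj, o)}
        archive[child] = obj
    feasible = [(o[0], s) for s, o in archive.items() if o[1] <= k]
    return max(feasible)[1]

# Toy example: pick at most k=3 items maximising total weight (hypothetical data)
weights = [5, 1, 9, 3, 7, 2, 8, 4]
f = lambda s: sum(w for w, b in zip(weights, s) if b)
best = poss(f, n=8, k=3)
```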

Tutorial Presenters (names with affiliations):

Chao Qian, Nanjing University, China

Yang Yu, Nanjing University, China

Tutorial Presenters’ Bios:

Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received his BSc and PhD degrees in the Department of Computer Science and Technology from Nanjing University in 2009 and 2015, respectively. From 2015 to 2019, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China. His research interests are mainly the theoretical analysis of evolutionary algorithms and their application in machine learning. He has published one book, "Evolutionary Learning: Advances in Theories and Algorithms," and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica) and conferences (e.g., NIPS, IJCAI, AAAI). He has won the ACM GECCO 2011 Best Theory Paper Award and the IDEAL 2016 Best Paper Award. He is chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

Yang Yu is a Professor in the School of Artificial Intelligence, Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research area is machine learning and reinforcement learning. He was named one of AI's 10 to Watch by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI'18, and received the Early Career Award of PAKDD in 2018.

Selection, Exploration, and Exploitation


The goal of exploration is to seek out new areas of the search space. The effect of selection is to concentrate search around the best-known areas of the search space. The power of selection can overwhelm exploration, allowing it to turn any exploratory method into a hill climber. The balancing of exploration and exploitation requires more than the consideration of what solutions are created — it requires an analysis of the interplay between exploration and selection.

This tutorial reviews a broad range of selection methods used in metaheuristics. Novel tools to analyze the effects of selection on exploration in the continuous domain are introduced and demonstrated on Particle Swarm Optimization and Differential Evolution. The difference between convergence (no exploratory search solutions are created) and stall (all exploratory search solutions are rejected) is highlighted. Remedies and alternate methods of selection are presented, and the ramifications for the future design of metaheuristics are discussed.
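The convergence-versus-stall distinction above can be made concrete with a small experiment: greedy selection applied to a purely exploratory Gaussian sampler behaves as a hill climber, and the fraction of rejected exploratory candidates serves as a simple stall indicator. The sampler, objective and step size are illustrative assumptions.

```python
import random

def explore_with_greedy_selection(f, x0, step=1.0, iters=500, seed=7):
    """Greedy selection applied to a purely exploratory Gaussian sampler:
    selection alone turns it into a hill climber, and when every exploratory
    candidate is rejected the search has stalled rather than converged."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    rejected = 0
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in x]   # exploration
        fc = f(cand)
        if fc < fx:                                  # greedy (elitist) selection
            x, fx = cand, fc
        else:
            rejected += 1                            # rejected exploratory move
    return x, fx, rejected / iters

sphere = lambda x: sum(v * v for v in x)
x, fx, stall_rate = explore_with_greedy_selection(sphere, [3.0, -4.0])
```

With a fixed step size, the rejection rate climbs as the search nears the optimum: exploration continues to create solutions, but selection discards nearly all of them.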

Tutorial Presenters (names with affiliations):

Stephen Chen, Associate Professor, School of Information Technology, York University, Toronto, Canada

James Montgomery, Senior Lecturer, School of Technology, Environments and Design, University of Tasmania, Hobart, Australia

Tutorial Presenters’ Bios:

Stephen Chen is an Associate Professor in the School of Information Technology at York University in Toronto, Canada. His research focuses on analyzing the mechanisms for exploration and exploitation in techniques designed for multi-modal optimization problems. He is particularly interested in the development and analysis of non-metaphor-based heuristic search techniques. He has conducted extensive research on genetic algorithms and swarm-based optimization systems, and his 60+ peer-reviewed publications include 20+ that have been presented at previous CEC events.

James Montgomery is a Senior Lecturer in the School of Technology, Environments and Design at the University of Tasmania in Hobart, Australia. His research focuses on search space analysis and the design of solution representations for complex, real-world problems. He has conducted extensive research on ant colony optimization and differential evolution, and his 50+ peer-reviewed publications include 10+ that have been presented at previous CEC events.

External website with more information on Tutorial (if applicable):

Dynamic Parameter Choices in Evolutionary Computation


One of the most challenging problems in solving optimization problems with evolutionary algorithms and other optimization heuristics is the selection of the control parameters that determine their behavior. In state-of-the-art heuristics, several control parameters need to be set, and their setting typically has an important impact on the performance of the algorithm. For example, in evolutionary algorithms, we typically need to choose the population size, the mutation strength, the crossover rate, the selective pressure, etc.
Two principal approaches to the parameter selection problem exist:
(1) parameter tuning, which asks to find parameters that are most suitable for the problem instances at hand, and
(2) parameter control, which aims to identify good parameter settings “on the fly”, i.e., during the optimization itself.
Parameter control has the advantage that no prior training is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
While parameter control is indispensable in continuous optimization, it is far from being well-established in discrete optimization heuristics. The ambition of this tutorial is therefore to change this situation, by informing participants about different parameter control techniques, and by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
Our tutorial addresses experimentally as well as theory-oriented researchers alike, and requires only basic knowledge of optimization heuristics.
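A minimal sketch of parameter control on a discrete problem: a (1+1) EA on OneMax whose mutation rate is adjusted on the fly in the spirit of the 1/5-th success rule. The update factors and bounds below are illustrative assumptions, not settings prescribed by the tutorial.

```python
import random

def one_plus_one_ea(n=50, iters=3000, dynamic=True, seed=4):
    """(1+1) EA on OneMax with an optional success-based mutation-rate
    control: grow the rate after a success, shrink it after a failure.
    The factors (1.2 and 1.05) and bounds are illustrative choices."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    rate = 1.0 / n                          # static default: the standard 1/n
    for _ in range(iters):
        y = [b ^ (rng.random() < rate) for b in x]
        fy = sum(y)
        success = fy > fx
        if fy >= fx:                        # elitist acceptance
            x, fx = y, fy
        if dynamic:
            rate = min(0.25, rate * 1.2) if success else max(1.0 / n, rate / 1.05)
    return fx

final_static = one_plus_one_ea(dynamic=False)
final_dynamic = one_plus_one_ea(dynamic=True)
```

The dynamic variant lets the rate rise while successes are frequent early on and decay back toward 1/n as the search approaches the optimum, mirroring the exploration-then-convergence pattern described above.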

Tutorial Presenters (names with affiliations):

– Carola Doerr, Sorbonne University, Paris, France

– Gregor Papa, Jožef Stefan Institute, Ljubljana, Slovenia

Tutorial Presenters’ Bios:

  • Carola Doerr (Doerr@lip6.fr) is a permanent CNRS researcher at Sorbonne University in Paris, France. She studied Mathematics at Kiel University (Germany, 2003-2007, Diplom) and Computer Science at the Max Planck Institute for Informatics and Saarland University (Germany, 2010-2011, PhD). Before joining the CNRS she was a post-doc at Paris Diderot University (Paris 7) and the Max Planck Institute for Informatics. From 2007 to 2009, Carola Doerr worked as a business consultant for McKinsey & Company, which is where her interest in evolutionary algorithms originated.
    Carola Doerr’s main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community.
    Carola Doerr has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is chairing the program committee of FOGA 2019 and previously chaired the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues in Algorithmica. She is also vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Gregor Papa (papa@ijs.si) is a senior researcher and Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received his PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.
    Gregor Papa’s research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on the dynamic setting of algorithms’ control parameters. His work is published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004. He has led and participated in several national and European projects.
    Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.

External website with more information on Tutorial (if applicable):


Cartesian Genetic Programming and its Applications


The goal of this tutorial is to acquaint the WCCI (CEC) community with the principles and state-of-the-art results of Cartesian Genetic Programming (CGP), a well-known form of Genetic Programming developed by Julian Miller in 1999-2000. In its classic form, it uses a very simple integer, address-based genetic representation of a program in the form of a directed graph. In a number of studies, CGP has been shown to be competitive with other GP techniques. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent CGP), the encoding of artificial neural networks, and automatically defined functions (modular CGP). CGP is capable of creating programs and circuits not only with the requested functionality, but also in a very compact form, which is interesting, for example, in low-power computing. This tutorial also presents various methods (such as formal verification techniques) that have been developed to address the so-called scalability problem of evolutionary design. We will demonstrate that CGP can provide human-competitive results in the areas of automated design of logic circuits, approximate circuits, image operator design, and neural networks.
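The integer, address-based representation described above can be sketched as follows: each node is a (function, input, input) triple addressing earlier nodes or primary inputs, searched here with a simple (1+4)-style loop on two-bit XOR. The function set, sizes and mutation rate are illustrative assumptions.

```python
import random

# Minimal single-row CGP sketch: a genotype is a list of (function, in1, in2)
# triples; node inputs may only address primary inputs or earlier nodes.
FUNCS = {0: lambda a, b: a & b, 1: lambda a, b: a | b,
         2: lambda a, b: a ^ b, 3: lambda a, b: 1 - a}   # NOT ignores b

def evaluate(genotype, outputs, inputs):
    """Decode a CGP genotype over the given primary inputs."""
    values = list(inputs)                   # addresses 0 .. n_inputs-1
    for fn, a, b in genotype:               # nodes in feed-forward order
        values.append(FUNCS[fn](values[a], values[b]))
    return [values[o] for o in outputs]

def random_genotype(n_inputs, n_nodes, rng):
    return [(rng.randrange(len(FUNCS)),
             rng.randrange(n_inputs + i),   # only earlier addresses allowed
             rng.randrange(n_inputs + i)) for i in range(n_nodes)]

def evolve_xor(n_nodes=10, gens=2000, seed=5):
    """(1+4)-style search loop evolving a 2-input XOR; fitness counts the
    truth-table rows reproduced correctly (maximum 4)."""
    rng = random.Random(seed)
    cases = [(a, b) for a in (0, 1) for b in (0, 1)]
    out = [2 + n_nodes - 1]                 # output gene reads the last node
    def fitness(g):
        return sum(evaluate(g, out, c)[0] == (c[0] ^ c[1]) for c in cases)
    parent = random_genotype(2, n_nodes, rng)
    best_f = fitness(parent)
    for _ in range(gens):
        for _ in range(4):
            child = [(rng.randrange(len(FUNCS)), rng.randrange(2 + i),
                      rng.randrange(2 + i)) if rng.random() < 0.2 else gene
                     for i, gene in enumerate(parent)]
            f = fitness(child)
            if f >= best_f:                 # neutral drift: accept equal fitness
                parent, best_f = child, f
    return best_f

best_fitness = evolve_xor()
```

Inactive nodes (those not reachable from the output) are simply never used, which is what gives CGP its characteristic neutral genetic drift.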

Detailed Bios:

Prof. Lukas Sekanina received all his degrees from Brno University of Technology, Czech Republic (Ing. in 1999, Ph.D. in 2002), where he is currently a full professor and Head of the Department of Computer Systems. He was awarded the Fulbright scholarship and worked on evolutionary circuit design at the NASA Jet Propulsion Laboratory in Pasadena in 2004. He was a visiting lecturer at Pennsylvania State University (2001) and Universidad Politécnica de Madrid (2012), and a visiting researcher at the University of Oslo in 2001. Awards: Gold (2015), Silver (2011, 2008) and Bronze (2018) medals from the Human-Competitive Awards in genetic and evolutionary computation at GECCO; Siemens Award for outstanding PhD thesis in 2003; Siemens Award for outstanding research monograph in 2005; best paper/poster awards (e.g. DATE 2017, NASA/ESA AHS 2013, EvoHOT 2005, DDECS 2002); keynote conference speaker (e.g. IEEE SSCI-ICES 2015, DCIS 2014, ARCS 2013, UC 2009). He has served as a program committee member of many conferences (e.g. DATE, FPL, ReConFig, DDECS, GECCO, IEEE CEC, ICES, AHS, EuroGP), Associate Editor of IEEE Transactions on Evolutionary Computation (2011-2014), and editorial board member of the Genetic Programming and Evolvable Machines journal and the International Journal of Innovative Computing and Applications. He served as General Chair of the 16th IEEE Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS 2013), Program Co-Chair of EuroGP 2018-2019, DTIS 2016 and ICES 2008, and Topic Chair of DATE 2020 (D10 – Approximate Computing). Prof. Sekanina is the author of Evolvable Components (a monograph published by Springer Verlag in 2004). He has co-authored over 180 papers, mainly on evolvable hardware, approximate circuits and bio-inspired computing. He is a Senior Member of the IEEE.

Dr. Julian Miller received his BSc in Physics at the University of London in 1980. He obtained his PhD on nonlinear wave interaction in 1988 at City University, UK. He is currently an Honorary Fellow of the Department of Electronic Engineering at the University of York. His main research interests are genetic programming (GP) and computational development. He is a highly cited and well-known researcher with over 12000 citations and an h-index of 42. He has published over 240 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics. He has been chair or co-chair of eighteen conferences or workshops in genetic programming, computational development, evolvable hardware and evolutionary techniques. Dr. Miller chaired the Evolvable Hardware tracks at the Genetic and Evolutionary Computation Conference in 2002-2003 and was Genetic Programming track chair in 2008. He was co-chair of the Generative and Developmental Systems (GDS) track in 2007, 2009 and 2010. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing. He received the EvoStar award in 2011 for outstanding contribution to evolutionary computation.

Recent Advances in Landscape Analysis for Optimisation


The goal of this tutorial is to provide an overview of recent advances in landscape analysis for optimisation. The subject matter will be relevant to delegates who are interested in applying landscape analysis for the first time, but also to those involved in landscape analysis research to obtain a broader view of recent developments in the field. Fitness landscapes were first introduced to aid in the understanding of genetic evolution, but techniques were later developed for analysing fitness landscapes in the context of evolutionary computation. In the last decade, the field has experienced a large upswing in research, evident in the increased number of published papers on the topic as well as the appearance of tutorials, workshops and special sessions at all the major evolutionary computation conferences.

One of the changes that has emerged over the last decade is that the notion of fitness landscapes has been extended to include other views such as exploratory landscapes, landscape models (such as local optima networks), violation landscapes, error landscapes and loss landscapes. This tutorial will provide an overview of these different views on search spaces and how they relate. A number of new techniques for analysing landscapes have been developed and these will also be covered. In addition, an overview of recent applications of landscape analysis will be provided for understanding complex problems and algorithm behaviour, predicting algorithm performance and for automated algorithm configuration and selection. Case studies of the use of landscape  analysis in both discrete and continuous domains will be presented. Finally, the tutorial will highlight some opportunities for future research in landscape analysis.
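As a small concrete illustration of one of the oldest landscape-analysis techniques (a sketch written for this summary, not taken from the tutorial materials), the code below estimates the autocorrelation of fitness values along a random walk, a classic ruggedness measure due to Weinberger; the OneMax fitness function and single-bit-flip neighbour shown are illustrative choices.

```python
import random

def random_walk_autocorrelation(fitness, neighbor, x0, steps, lag=1):
    """Estimate the lag-k autocorrelation of fitness along a random walk.
    Values near 1 indicate a smooth landscape; values near 0, a rugged one."""
    f = []
    x = x0
    for _ in range(steps):
        f.append(fitness(x))
        x = neighbor(x)
    mean = sum(f) / len(f)
    var = sum((v - mean) ** 2 for v in f) / len(f)
    if var == 0:
        return 1.0  # constant fitness along the walk
    cov = sum((f[t] - mean) * (f[t + lag] - mean)
              for t in range(len(f) - lag)) / (len(f) - lag)
    return cov / var

# Illustrative landscape: OneMax under single-bit-flip neighbours is smooth.
def onemax(x):
    return sum(x)

def flip_one(x):
    i = random.randrange(len(x))
    y = list(x)
    y[i] ^= 1
    return y
```

On OneMax, consecutive fitness values differ by at most one, so the estimated autocorrelation is high; on a rugged landscape it drops towards zero.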

Tutorial Presenters (names with affiliations):

Katherine Malan, University of South Africa

Gabriela Ochoa, University of Stirling, Scotland

Tutorial Presenters’ Bios:

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria and her MSc and BSc degrees from the University of Cape Town. She has over 20 years' lecturing experience, mostly in computer science, at three different South African universities and has co-authored two programming textbooks. Her research interests include fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is particularly interested in the link between fitness landscape features and evolutionary algorithm behaviour, with the aim of developing intelligent landscape-aware search. She is a senior member of the IEEE and has served as a volunteer in a number of roles, including chair of the South African chapter of CIS and finance chair of SSCI 2015.

Gabriela Ochoa is a Professor in Computing Science at the University of Stirling, Scotland, UK. She received a PhD in Computer Science from the University of Sussex, UK. She worked in industry for five years before joining academia, and held faculty and research positions at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK, before joining the University of Stirling. Her research interests lie in the foundations and application of evolutionary algorithms and heuristic search methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis and visualisation. She was an associate editor of IEEE Transactions on Evolutionary Computation, is an associate editor of the Evolutionary Computation Journal and of the newly established ACM Transactions on Evolutionary Learning and Optimization (TELO), and is a member of the editorial board of Genetic Programming and Evolvable Machines. She has served as organiser for various international events including IEEE CEC, PPSN, FOGA, EvoSTAR and GECCO, and served as the Editor-in-Chief for GECCO 2017. She is a member of the executive boards of both ACM SIGEVO, the ACM Special Interest Group on Evolutionary Computation (where she also edits the quarterly newsletter), and EvoSTAR, the leading European event on bio-inspired computing.

External website with more information on Tutorial (if applicable):

Evolutionary Bilevel Optimization


Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem in which the motion of the platform constitutes the upper level optimization task of performing the balancing in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods, simply due to the "nestedness" of one optimization task within another.

Keywords: Bilevel Optimization, Bilevel Multi-objective Optimization, Evolutionary Algorithms, Multi-Criteria Decision Making, Theory on Bilevel Programming, Hierarchical Decision Making, Bilevel Applications, Hybrid Algorithms
Tutorial Description
What is Bilevel Programming?


To begin with, an introduction is provided to bilevel optimization problems. A generic formulation for bilevel problems is presented and its differences from ordinary single level optimization problems are discussed.
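For reference, the generic bilevel formulation commonly used in this literature (reconstructed here in standard notation, not quoted from the tutorial slides) can be written as:

```latex
\begin{aligned}
\min_{x_u \in X_U,\; x_l \in X_L} \quad & F(x_u, x_l) \\
\text{s.t.} \quad & x_l \in \operatorname*{argmin}_{z \in X_L}
  \bigl\{\, f(x_u, z) \;:\; g_j(x_u, z) \le 0,\; j = 1, \dots, J \,\bigr\}, \\
& G_k(x_u, x_l) \le 0, \quad k = 1, \dots, K,
\end{aligned}
```

where F and f are the upper and lower level objectives and G_k and g_j the upper and lower level constraints. The key difference from single level optimization is that the lower level problem, parameterized by the upper level vector x_u, acts as a constraint on the upper level problem.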
Bilevel Problems: A Genesis


The origin of bilevel problems can be traced to two roots; namely, game theory and mathematical programming. A genesis of these problems is provided through simple practical examples.
Properties of Bilevel Problems


The properties of bilevel optimization problems are discussed. Difficulties encountered in solving these problems are highlighted.


A number of practical applications from the areas of process optimization, game-playing strategy development, transportation problems, optimal control, environmental economics and coordination of multi-divisional firms are described to highlight the practical relevance of bilevel programming.
Solution Methodologies


Existing solution methodologies for bilevel optimization and their weaknesses are discussed. Lack of efficient methodologies is underlined and the need for better solution approaches is emphasized.
EAs Niche


Evolutionary algorithms provide a convenient framework for handling complex bilevel problems. Co-evolutionary ideas and flexibility in operator design can help in efficiently tackling the problem.
Multi-objective Extensions


Recent algorithms and results on multi-objective bilevel optimization using evolutionary algorithms are discussed and some application problems are highlighted.
Future Research Ideas


A number of immediate and longer-term research ideas on bilevel optimization, related to decision-making difficulties and robustness, are highlighted.


Concluding remarks for the tutorial are provided.


A list of references on bilevel optimization is provided.
Target Audience
Bilevel optimization belongs to a difficult class of optimization problems. Most classical optimization methods are unable to solve even the simpler instances of bilevel problems. This offers a niche to researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and certainly deserves the attention of researchers working on evolutionary computation. The target audience for this tutorial is researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts of bilevel optimization and recent results easily accessible to the audience.
Short Biography
Ankur Sinha is an Associate Professor at the Indian Institute of Management, Ahmedabad, India. He completed his PhD at the Helsinki School of Economics (now Aalto University School of Business), where his thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his PhD, he held visiting positions at Michigan State University and Aalto University. His research interests include bilevel optimization, multi-criteria decision making and evolutionary algorithms. He has offered tutorials on evolutionary bilevel optimization at GECCO 2013, PPSN 2014, and CEC 2015, 2017, 2018 and 2019. His research has been published in some of the leading computer science, business and statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page:
Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the 'Thomson Citation Laureate Award' from Thomson Scientific for having the highest number of citations in computer science in India during the preceding ten years. He is a fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 500 international journal and conference research papers. He has pioneered, and is a leader in, the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found from

Theoretical Foundations of Evolutionary Computation for Beginners and Veterans


Goals: There exist more than 40 years of theoretical research in evolutionary computation. However, given the focus on runtime analysis in the last 10 years, much of this theory is not well understood in the EC community. For example, it is not widely known that the behavior of an evolutionary algorithm can be influenced by attractors that exist outside the space of the feasible population. The tutorial will mainly focus on the application of evolutionary algorithms to combinatorial problems, and will use easy-to-understand examples.

Plan and Outline: This talk will cover some of the classic theoretical results from the field of evolutionary algorithms,  as well as more general theory from operations research and mathematical methods for optimization.   The tutorial will review pseudo-Boolean optimization, and the representation of functions as both multi-linear polynomials and Fourier polynomials.   It will also explain how every pseudo-Boolean optimization problem can be converted into a k-bounded form.  And for every k-bounded pseudo-Boolean optimization problem, the location of improving moves (i.e., improving bit flips) can be computed in constant time, making simple mutation operators unnecessary.   It is also not well known that many classic parameter optimization problems (Rosenbrock, Rastrigin, and the entire DeJong Test suite) can be expressed as low order k-bounded pseudo-Boolean functions,  even at “double precision” for arbitrary numbers of variables.
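As a small illustration of the k-bounded representation mentioned above (a sketch written for this abstract, not code from the tutorial), the following shows why the fitness change of a single bit flip involves only the subfunctions containing that bit rather than the whole function, which is the property that makes cheap identification of improving moves possible:

```python
import itertools
import random

def make_kbounded(n, m, k, seed=0):
    """Random k-bounded pseudo-Boolean function: a sum of m subfunctions,
    each depending on only k of the n bits (stored as lookup tables)."""
    rng = random.Random(seed)
    subs = []
    for _ in range(m):
        vs = tuple(rng.sample(range(n), k))
        tbl = {bits: rng.random() for bits in itertools.product((0, 1), repeat=k)}
        subs.append((vs, tbl))
    # Index: for each bit, the subfunctions that depend on it.
    subs_of = [[] for _ in range(n)]
    for vs, tbl in subs:
        for v in vs:
            subs_of[v].append((vs, tbl))
    return subs, subs_of

def evaluate(subs, x):
    """Full evaluation: sum over all m subfunctions."""
    return sum(tbl[tuple(x[v] for v in vs)] for vs, tbl in subs)

def flip_delta(subs_of, x, i):
    """Fitness change from flipping bit i, touching only the subfunctions
    that contain bit i -- on average m*k/n of them, not all m."""
    before = sum(tbl[tuple(x[v] for v in vs)] for vs, tbl in subs_of[i])
    x[i] ^= 1
    after = sum(tbl[tuple(x[v] for v in vs)] for vs, tbl in subs_of[i])
    x[i] ^= 1
    return after - before
```

Maintaining these deltas incrementally after each accepted flip is what yields constant-time access to improving moves when k and the number of subfunctions per bit are bounded.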

The tutorial will cover 1) No Free Lunch (NFL),  2) Sharpened No Free Lunch and 3) Focused No Free Lunch, and how the different theoretical proofs can lead to seemingly different and even contradictory conclusions.   (What many researchers know about NFL might actually be wrong.)

The tutorial will also cover the relationship between functions and representations, the space of all possible representations and the duality between search algorithm and landscapes.    This perspective is critical to understanding landscape analysis, landscape visualization, variable neighborhood search methods, memetic algorithms, and self-adaptive search methods.

Other topics include both infinite and finite models of population trajectories. The tutorial will explain both elementary landscapes and eigenvectors of search neighborhoods in simple terms, and show how the two are related. Example domains include classic NP-hard problems such as Graph Coloring and MAX-kSAT.

Justification:  The tutorial will explore what theory can contribute to application focused researchers.  Theory can be used not only to look at convergence behavior, but also to understand problem structure.  It can also provide new tools to researchers. Every researcher in the field of Evolutionary Computation needs to be a wiser consumer of both theoretical and empirical results.   The tutorial will be for 1.5 hours.   Prof. Whitley has regularly had audiences of 50 to 75 people (and up to 175 people) at his tutorials and has presented tutorials at CEC, GECCO, AAAI and IJCAI.   The tutorial will provide new insights to both beginners and veterans in the field of Evolutionary Computation.


Prof. Darrell Whitley has been active in Evolutionary Computation  since 1986, and has published more than 250 papers.    These papers have garnered more than 24,000 citations.    Dr. Whitley’s H-index is 65.   He introduced the first “steady state genetic algorithm” with rank based selection,  published the first papers on neuroevolution,  and has worked on dozens of real world applications  of evolutionary algorithms.     He has served as Editor-in-Chief of the journal Evolutionary Computation,  and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011.    Prof. Whitley was recently made an ACM Fellow for his many contributions to the field of Evolutionary Computation.  He is also the Co-Editor-in-Chief of the new ACM Transactions on Evolutionary Learning and Optimization.

Demos and Code: Code will be made available online illustrating these techniques for the Traveling Salesman Problem, MAXSAT and NK-Landscapes. Two tutorials covering much of the material from this talk will also be made available online.

A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms


Great advances have been made in recent years towards the runtime complexity analysis of evolutionary algorithms for combinatorial optimisation problems. Much of this progress has been due to the application of techniques from the study of randomised algorithms. The first pieces of work, starting in the 90s, were directed towards analysing simple toy problems with significant structure. This work had two main goals:

1. to understand on which kind of landscapes EAs are efficient, and when they are not
2. to develop the first basis of general mathematical techniques needed to perform the analysis.

Thanks to this preliminary work, nowadays, it is possible to analyse the runtime of evolutionary algorithms on different combinatorial optimisation problems. In this beginners’ tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge about time complexity analysis.
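A standard first example in such introductions (reconstructed here, not quoted from the tutorial) is the fitness-level bound for the (1+1) EA on OneMax. With standard bit mutation at rate 1/n, from a solution with i one-bits an improving step need only flip one of the n - i zero-bits and leave the rest unchanged, so the probability p_i of leaving level i satisfies:

```latex
p_i \;\ge\; (n-i)\cdot\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{e\,n},
\qquad
\mathbb{E}[T] \;\le\; \sum_{i=0}^{n-1} \frac{1}{p_i}
\;\le\; \sum_{i=0}^{n-1} \frac{e\,n}{n-i}
\;=\; e\,n\,H_n \;=\; O(n \log n).
```

Summing the expected waiting times over all levels (the fitness-level, or artificial fitness levels, argument) gives the well-known O(n log n) bound on the expected runtime.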

Tutorial Presenters (names with affiliations): 

Pietro S. Oliveto, University of Sheffield

Tutorial Presenter Bio:

Pietro S. Oliveto is a Senior Lecturer and EPSRC funded Early Career Fellow  at the University of Sheffield, UK.
He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. From October 2007 to April 2008, he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund where he collaborated with Prof. Ingo Wegener’s research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and of EPSRC Postdoctoral Fellow in Theoretical Computer Science for 3 years at the University of Birmingham. From 2013 to 2016 he was a Vice-chancellor’s Fellow at the University of Sheffield.
His main research interest is the rigorous performance analysis of randomised search heuristics for combinatorial optimisation. He has published several runtime analysis papers on evolutionary algorithms, artificial immune systems, hyper-heuristics and genetic programming. He has won best paper awards at the GECCO’08, ICARIS’11 and GECCO’14.
Dr. Oliveto has given several tutorials at GECCO, CEC and PPSN on the runtime analysis of evolutionary algorithms and a recent one on the analysis of genetic programming.  He is an associate editor of IEEE Transactions on Evolutionary Computation.

Evolutionary computation for games: learning, planning, and designing


This tutorial introduces several techniques and application areas for evolutionary computation in games, such as board games and video games. We will give a broad overview of the use cases and popular methods for evolutionary computation in games, and in particular cover the use of evolutionary computation for learning policies (evolutionary reinforcement learning using neuroevolution), for planning (rolling horizon and online planning), and for design (search-based procedural content generation). The basic principles will be explained and illustrated by examples from our own research as well as others'.
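To make the planning part concrete, here is a minimal, game-agnostic sketch of rolling-horizon evolution (written for this outline; the function names and parameters are our own, not from any game framework): action sequences of fixed length are evolved against a forward model, and only the first action of the best sequence is returned.

```python
import random

def rolling_horizon_step(state, actions, forward_model, evaluate,
                         horizon=10, pop=20, gens=30, mut=0.2, rng=random):
    """One decision step of rolling-horizon evolution: evolve fixed-length
    action sequences against a forward model and return the first action
    of the best sequence found."""
    def rollout(seq):
        s = state
        for a in seq:
            s = forward_model(s, a)
        return evaluate(s)

    population = [[rng.choice(actions) for _ in range(horizon)]
                  for _ in range(pop)]
    for _ in range(gens):
        # Truncation selection: keep the better half, refill by mutation.
        elite = sorted(population, key=rollout, reverse=True)[: pop // 2]
        children = [[a if rng.random() > mut else rng.choice(actions)
                     for a in parent] for parent in elite]
        population = elite + children
    return max(population, key=rollout)[0]
```

In a real game loop, the returned action is applied to the live game, the true next state is observed, and the procedure repeats from there, hence the "rolling" horizon.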

Tentative outline:

  • Introduction: who are we, what are we talking about?
  • Why do research on evolutionary computation and games?
  • Part 1: Playing games
    • Reasons for building game-playing AI
    • Characteristics of games (and how they affect game-playing algorithms)
    • Reinforcement learning through evolution
    • Neuroevolution
    • Planning with evolution
    • Single-agent games (rolling horizon evolution)
    • Multi-agent games (online evolution)
  • Part 2: Designing and developing games
    • The need for AI in game design and development
    • Procedural content generation
    • Search-based procedural content generation
    • Procedural content generation via machine learning (PCGML)
    • Game balancing
    • Game testing
    • Game adaptation

Tutorial Presenters:

Julian Togelius

Associate Professor

Department of Computer Science and Engineering

Tandon School of Engineering

New York University

2 MetroTech Center, Brooklyn, NY 11201, USA

Co-director of the NYU Game Innovation Lab

Editor-in-Chief, IEEE Transactions on Games

Jialin Liu

Research Assistant Professor

Optimization and Learning Laboratory (OPAL Lab)

Department of Computer Science and Engineering (CSE)

Southern University of Science and Technology (SUSTech)

Shenzhen, China

Associate Editor, IEEE Transactions on Games

Tutorial Presenters’ Bios:

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on artificial intelligence for games and games for artificial intelligence. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, generating games based on open data, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Games, and has been chair or program chair of several of the main conferences on AI and Games. Julian holds a BA from Lund University, an MSc from the University of Sussex, and a Ph.D. from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

Jialin Liu is currently a Research Assistant Professor in the Department of Computer Science and Engineering of the Southern University of Science and Technology (SUSTech), China. Before joining SUSTech, she was a Postdoctoral Research Associate at Queen Mary University of London (QMUL), UK, and one of the founding members of the Game AI research group at QMUL. Her research interests include AI and games, noisy optimisation, portfolios of algorithms and meta-heuristics. Jialin is an Associate Editor of IEEE Transactions on Games. She has served as Program Co-Chair of the 2018 IEEE Conference on Computational Intelligence and Games (CIG 2018) and as Competition Chair of several main conferences on evolutionary computation and on AI and games. She also chairs the IEEE CIS Games Technical Committee.

Niching Methods for Multimodal Optimization


Population-based or single-solution search-based optimization algorithms (i.e., {meta,hyper}-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include evolutionary and swarm intelligence algorithms, among others. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are "multimodal" by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose the one that is most appropriate in his/her problem domain. Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as "niching" methods. A niching method can be incorporated into a standard search-based optimization algorithm, in a sequential or parallel way, with the aim of locating multiple globally optimal or suboptimal solutions. Sequential approaches locate optimal solutions progressively over time, while parallel approaches promote and maintain the formation of multiple stable subpopulations within a single population. Many niching methods have been developed in the past, including crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. More recently, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization, Differential Evolution and Evolution Strategies.

In this tutorial we aim to provide an introduction to niching methods, including their historical background and the motivation for employing niching in EAs. We will present in detail a few classic niching methods, such as fitness sharing and crowding. We will also review several new niching methods that have been developed for meta-heuristics such as Particle Swarm Optimization and Differential Evolution. Employing niching methods in real-world situations still faces significant challenges, and this tutorial will discuss several such difficulties. In particular, niching in static and dynamic environments will be specifically addressed. Following this, we will present a suite of new niching benchmark functions specifically designed to reflect the characteristics of these challenges. Performance metrics for comparing niching methods will also be presented, and their merits and shortcomings will be discussed. Experimental results across both classic and more recently developed niching methods will be analyzed based on selected performance metrics. Apart from benchmark niching test functions, several examples of applying niching methods to solve real-world optimization problems will be provided. The tutorial will use several demos to show the workings of niching methods.

This tutorial is supported by the IEEE CIS Task Force on Multi-modal Optimization ( )

Targeted audience

This tutorial should be of interest both to beginners and to more experienced niching researchers, since a tutorial on niching of this kind is probably the first in many years of CEC history. The tutorial will provide a unique opportunity to update the research community on this classic EC subfield, which has been attracting increasing attention lately. We expect the tutorial to last 2 hours.

Course material

The tutorial material will be made available prior to WCCI/CEC’2020 via a web-site that will be associated with the WCCI/CEC’2020 special session on niching methods for multimodal optimization.


The tutorial presenters will include Dr. Mike Preuss (Universiteit Leiden, The Netherlands), Dr. Michael Epitropakis (The Signal Group, Greece), and Professor Xiaodong Li (RMIT University, Australia). All of them have significant research experience in designing and developing niching methods. They have successfully organized various events (special sessions, workshops and competitions) in the area of multimodal optimization, and serve on the chairing board of the IEEE CIS Task Force on Multi-modal Optimization. See URL:


Asst. Prof. Mike Preuss

Leiden Institute of Advanced Computer Science

Niels Bohrweg 1

2333 CA Leiden

The Netherlands


Dr. Michael G. Epitropakis

The Signal Group,

Athens, Greece


Professor Xiaodong Li

School of Science (Computer Science and Software Engineering)

RMIT University

Melbourne, VIC 3001, Australia


Organizer Bios:

Mike Preuss is Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Muenster, Germany), and before that with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests rest on the field of evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and on computational intelligence and machine learning methods for computer games, especially procedural content generation (PCG) and real-time strategy games (RTS).

Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Senior Research Scientist and a Product Manager at The Signal Group, Athens, Greece. From 2015 to 2018 he was a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher on multi-modal optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society.

Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization (niching), and swarm intelligence. He serves as an Associate Editor of IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-Chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL'08, a Program Co-Chair of AI'09, a Program Co-Chair for IEEE CEC'2012, and a General Chair for ACALCI'2017 and AI'17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".

Computational Complexity Analysis of Genetic Programming


Genetic Programming (GP) is an evolutionary computation paradigm that aims to evolve computer programs. Compared to the great number of successful applications of GP that have been reported, the theoretical understanding of its underlying working principles lags far behind. In particular, the identification of which classes of computer programs can be provably evolved efficiently via GP has progressed slowly compared to the understanding of the performance of traditional evolutionary algorithms (EAs) for function optimisation.

The main reason for the slow progress is that the analysis of GP systems is considerably more involved. Firstly, the analysis is complicated by the variable length of programs compared to the fixed solution representation used in EAs. Secondly, understanding the quality of a candidate program is challenging because it is not possible to evaluate its accuracy over all possible inputs.

Nevertheless, significant advances have been made in recent years towards the computational complexity analysis of GP. Rather than tackling complete GP applications, the first pieces of work isolated particular aspects and challenges occurring in the GP evolutionary process. Nowadays it is possible to analyse the time and space complexity of GP algorithms for evolving proper programs with input/output relationships where the fitness of candidate solutions is evaluated by comparing their accuracy on input/output samples of a polynomially-sized training set (e.g., Boolean functions). In this tutorial, we give an overview of the field, outlining the techniques used and the challenges involved.

Tutorial Presenters (names with affiliations):

Andrei Lissovoi, University of Sheffield, UK

Pietro S. Oliveto, University of Sheffield, UK

Tutorial Presenters’ Bios:

Andrei Lissovoi is a Research Associate in the Rigorous Research team at the University  of Sheffield, UK. He received the MSc and PhD degrees in computer science from the Technical  University of Denmark in 2012 and 2016 respectively. His main research interest is the time complexity analysis of nature-inspired algorithms. He has published several runtime analysis papers on Evolutionary Algorithms, Ant Colony Optimisation algorithms, and parallel evolutionary algorithms for dynamic optimisation problems. His recent work on GP includes runtime analyses of GP systems for evolving Boolean functions (AAAI-18, GECCO’19) and a survey book chapter in the recent Springer book on theory of evolutionary computation.

Pietro S. Oliveto is a Senior Lecturer and EPSRC funded Early Career Fellow  at the University of Sheffield, UK.
He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. From October 2007 to April 2008, he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund where he collaborated with Prof. Ingo Wegener’s research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and of EPSRC Postdoctoral Fellow in Theoretical Computer Science for 3 years at the University of Birmingham. From 2013 to 2016 he was a Vice-chancellor’s Fellow at the University of Sheffield.
His main research interest is the rigorous performance analysis of randomised search heuristics for combinatorial optimisation. He has published several runtime analysis papers on evolutionary algorithms, artificial immune systems, hyper-heuristics and genetic programming. He has won best paper awards at the GECCO’08, ICARIS’11 and GECCO’14.
Dr. Oliveto has given several tutorials at GECCO, CEC and PPSN on the runtime analysis of evolutionary algorithms and a recent one on the analysis of genetic programming.  He is an associate editor of IEEE Transactions on Evolutionary Computation.

Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler


IOHprofiler is a new benchmarking environment that has been developed for a highly versatile analysis of iterative optimization heuristics (IOHs) such as evolutionary algorithms, local search algorithms, model-based heuristics, etc. A key design principle of IOHprofiler is its highly modular setup, which makes it easy for its users to add algorithms, problems, and performance criteria of their choice. IOHprofiler is also useful for the in-depth analysis of the evolution of adaptive parameters, which can be plotted against fixed-targets or fixed-budgets. The analysis of robustness is also supported.

IOHprofiler supports all types of optimization problems and is not restricted to a particular search domain. A web-based interface for its analysis procedure is available online; the tool itself is available on GitHub and as a CRAN package.

The tutorial addresses all CEC participants interested in analyzing and comparing heuristic solvers. By the end of the tutorial, participants will know how to benchmark different solvers with IOHprofiler, which performance statistics it supports, and how to contribute to its design.
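To illustrate the fixed-target and fixed-budget perspectives mentioned above, here is a plain-Python sketch (this is not IOHprofiler’s actual API; the (1+1) EA on OneMax is an assumed example solver and problem):

```python
import random

def one_max(x):
    return sum(x)

def run_one_plus_one_ea(n=50, budget=5000, seed=0):
    """Run a (1+1) EA on OneMax; return the best-so-far trajectory
    as a list of (evaluations, best_fitness) pairs."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = one_max(x)
    trace = [(1, best)]
    for evals in range(2, budget + 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = one_max(y)
        if fy >= best:
            x, best = y, fy
        trace.append((evals, best))
        if best == n:
            break
    return trace

def fixed_target(trace, target):
    """Fixed-target view: evaluations needed to first reach `target`."""
    for evals, f in trace:
        if f >= target:
            return evals
    return None  # target not reached within the budget

def fixed_budget(trace, budget):
    """Fixed-budget view: best fitness found within `budget` evaluations."""
    best = None
    for evals, f in trace:
        if evals <= budget:
            best = f
    return best
```

These two statistics are the complementary views a benchmarking environment such as IOHprofiler reports for each solver/problem pair.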

Tutorial Presenters (names with affiliations): 

Thomas Bäck, Leiden University, The Netherlands,

Carola Doerr, CNRS and Sorbonne University, France,

Ofer M. Shir, Tel-Hai College and Migal Institute, Israel,

Hao Wang, Sorbonne University, France.

Tutorial Presenters’ Bios:

  • Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer’s Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
  • Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Her main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is/was program chair of PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017. Carola serves on the editorial boards of ACM Transactions on Evolutionary Learning and Optimization and of Evolutionary Computation and was editor of two special issues in Algorithmica. Carola is vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Ofer M. Shir is the Head of the Computer Science Department of Tel-Hai College, and a Principal Investigator at the Migal-Galilee Research Institute – both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004 and 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry – where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
  • Hao Wang obtained his PhD (cum laude, promotor: Prof. Thomas Bäck) from Leiden University in 2018. He is currently a postdoc at Sorbonne University (supervised by Carola Doerr) and has accepted a position as an Assistant Professor at the Leiden Institute of Advanced Computer Science from September 2020. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are proposing, improving and analyzing stochastic optimization algorithms, especially Evolution Strategies and Bayesian Optimization. In addition, he works on developing statistical machine learning algorithms for big and complex industrial data, and aims at combining state-of-the-art optimization algorithms with data mining/machine learning techniques to make real-world optimization tasks more efficient and robust.


External website with more information on Tutorial (if applicable): None

Evolutionary Machine Learning


Evolutionary Machine Learning (EML), the fusion of Evolutionary Computation (EC) and Machine Learning (ML), has been recognized as a rapidly growing research area in which these powerful search and learning mechanisms are combined. Many specific branches of EML with different learning schemes and different ML problem domains have been proposed. These branches seek to address common challenges:

  • How evolutionary search can discover optimal ML configurations and parameter settings,
  • How the deterministic models of ML can influence evolutionary mechanisms,
  • How EC and ML can be integrated into one learning model.
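The first of these challenges can be sketched in a few lines: an evolutionary loop searching over an ML configuration. Here `validation_loss` and its optimum at 0.3 are hypothetical stand-ins for a real model’s validation error over, say, a regularisation strength:

```python
import random

def validation_loss(c):
    """Toy 'validation loss' of a model configuration c; a hypothetical
    quadratic with its optimum at c = 0.3 (an assumption for illustration)."""
    return (c - 0.3) ** 2 + 0.05

def evolve_configuration(generations=200, sigma=0.1, seed=1):
    """(1+1) evolution strategy over a single model configuration value:
    mutate with Gaussian noise, keep the child if it is no worse."""
    rng = random.Random(seed)
    c = rng.uniform(0.0, 1.0)
    loss = validation_loss(c)
    for _ in range(generations):
        child = c + rng.gauss(0.0, sigma)
        child_loss = validation_loss(child)
        if child_loss <= loss:  # survival selection
            c, loss = child, child_loss
    return c, loss
```

The same loop structure carries over when `validation_loss` is replaced by training and evaluating an actual ML model.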

Consequently, various insights address principal issues of the EML paradigm that are worth “transferring” across these different specific challenges.

The goal of our tutorial is to present advanced techniques from specific EML branches and to distill them into common insights about the EML paradigm. First, we introduce the common challenges in the EML paradigm and discuss how various EML branches address them. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e., Learning Classifier Systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is aimed at both beginners and experts in the EML field. For beginners, it offers a gentle introduction to EML, from the basics to recent challenges. For experts, our two specific talks present the most recent advances in evolutionary rule-based learning and evolutionary neural networks. Additionally, we will discuss how the insights behind these techniques can be transferred to other EML branches, shaping new directions for EML techniques.

Tutorial Presenters (names with affiliations):

  • Masaya Nakata, Associate Professor, Yokohama National University, Japan
  • Shinichi Shirakawa, Lecturer, Yokohama National University, Japan
  • Will Browne, Associate Professor, Victoria University of Wellington, NZ

Tutorial Presenters’ Bios:

Dr. Nakata is an associate professor at the Faculty of Engineering, Yokohama National University, Japan. He received his Ph.D. degree in informatics from the University of Electro-Communications, Japan, in 2016. He has been working on evolutionary rule-based machine learning, reinforcement learning and data mining, more specifically on Learning Classifier Systems (LCSs). He was a visiting researcher at Politecnico di Milano, the University of Bristol and Victoria University of Wellington. His contributions have been published in more than 10 journal papers and more than 20 conference papers, including at CEC, GECCO and PPSN. He is an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning (2015-2016, 2018-2019) at the GECCO conference, elected by the international LCS research community. He received the IEEE CIS Japan Chapter Young Researcher Award.

Dr. Shirakawa is a lecturer at the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009. He has worked at Fujitsu Laboratories Ltd., Aoyama Gakuin University, and the University of Tsukuba. His research interests include evolutionary computation, machine learning, and computer vision. He is currently working on evolutionary deep neural networks. His contributions have been published in high-quality journals and conferences in EC and AI, e.g., CEC, GECCO, PPSN, and AAAI. He received the IEEE CIS Japan Chapter Young Researcher Award in 2009 and won the best paper award in the evolutionary machine learning track of GECCO 2017.

Associate Prof Will Browne’s research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and is currently the co-chair for the Evolutionary Machine Learning track at Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs ‘Introduction to Learning Classifier Systems, Springer 2017’. Currently, he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Self-Organizing Migrating Algorithm – Recent Advances and Progress in Swarm Intelligence Algorithms


The Self-Organizing Migrating Algorithm (SOMA) belongs to the class of swarm intelligence techniques. SOMA is inspired by competitive-cooperative behavior and uses inherent self-adaptation of movement over the search space, as well as discrete perturbation mimicking the mutation process. SOMA performs well in both continuous and discrete domains. The tutorial will cover several parts.

Firstly, the state of the art in the field of swarm intelligence algorithms will be discussed, along with similarities and differences between various algorithms and SOMA.

The main part of the tutorial will present a collection of principal findings from original research papers discussing current research trends in parameter control, discrete perturbation, and novel improvement approaches based on and applied with SOMA, drawn from the latest scientific events. New and very efficient strategies such as SOMA-T3A (4th place in the 100-digit competition), the recently published SASOMA, and SOMA-Pareto (6th place in the 100-digit competition) will be discussed in detail with demonstrations.

Also, we will describe our original concept for transforming the internal dynamics of swarm algorithms (including SOMA) into a social-like network (social interactions among individuals). The analysis of such a network can then be used directly as feedback into the algorithm to improve its performance.

Finally, the tutorial concludes with experiences from more than ten years of work with SOMA, demonstrated on various applications such as control engineering, cybersecurity, combinatorial optimization, and computer games.
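The core migration move can be sketched as follows. This is a minimal sketch of one AllToOne migration jump, assuming a sphere objective as a stand-in; the control-parameter values (PathLength, Step, PRT) are illustrative only:

```python
import random

def soma_migrate(individual, leader, path_length=3.0, step=0.11, prt=0.1, rng=None):
    """One AllToOne migration: the individual jumps along the line towards
    the leader, sampling positions at t = step, 2*step, ..., path_length.
    A fresh PRT vector at each jump restricts which coordinates move,
    acting as the discrete perturbation (mutation-like) mechanism."""
    rng = rng or random.Random(0)

    def f(x):  # sphere function as a stand-in objective (assumption)
        return sum(v * v for v in x)

    best, best_f = list(individual), f(individual)
    t = step
    while t <= path_length:
        prt_vec = [1 if rng.random() < prt else 0 for _ in individual]
        cand = [x + (l - x) * t * p
                for x, l, p in zip(individual, leader, prt_vec)]
        cf = f(cand)
        if cf < best_f:
            best, best_f = cand, cf
        t += step
    return best
```

Because path_length exceeds 1, the individual can overshoot the leader, which gives SOMA its characteristic exploration beyond the current best.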

Tutorial Presenters (names with affiliations):

Name: Roman Senkerik
Affiliation: Tomas Bata University in Zlin, Department of Informatics and Artificial Intelligence

Tutorial Presenters’ Bios:

Roman Senkerik was born in Zlin, Czech Republic, in 1981. He received an MSc degree in technical cybernetics from Tomas Bata University in Zlin, Faculty of Applied Informatics, in 2004, a Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Assoc. Prof. degree in informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at Tomas Bata University in Zlin, Faculty of Applied Informatics. Since 2014 he has been an Associate Professor and Head of the A.I. Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters and editorial notes. His research interests are soft computing methods and their interdisciplinary applications in optimization and cyber-security, the development of evolutionary algorithms, machine learning, data science, the theory of chaos, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions/symposiums or IPC/TPC at IEEE WCCI, CEC, SSCI, GECCO, SEMCCO and MENDEL (and more) events. He was a guest editor of several special issues in journals and an editor of proceedings for several conferences.

Evolutionary Many-Objective Optimization


The goal of the tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation in the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there were no significant difference between three-objective and four-objective problems. However, the increase in the number of objectives makes multi-objective problems significantly more difficult. In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms. After these explanations of the difficulties of many-objective optimization, we explain in the second part (Part II: Approaches and Future Directions) how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from weakening its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We categorize approaches to tackling many-objective optimization problems and explain some state-of-the-art many-objective algorithms in each category.
The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, the attendees will also learn that there still exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
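The weakening of Pareto dominance with more objectives can be seen in a few lines: among uniformly random points, almost every point becomes non-dominated as the number of objectives grows (a self-contained illustration, not material from the tutorial itself):

```python
import random

def dominates(a, b):
    """Pareto dominance for minimisation: a dominates b if it is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(m, n=200, seed=0):
    """Fraction of n random points in [0,1]^m dominated by no other point."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(m)] for _ in range(n)]
    nd = sum(1 for p in pts
             if not any(dominates(q, p) for q in pts if q is not p))
    return nd / n
```

With 2 objectives only a small fraction of random points is non-dominated, while with 10 objectives nearly all of them are, so dominance-based selection loses almost all of its discriminating power.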

Tutorial Presenters (names with affiliations):

Name: Hisao Ishibuchi

Affiliation: Southern University of Science and Technology

Name: Hiroyuki Sato

Affiliation: The University of Electro-Communications

Tutorial Presenters’ Bios:

Hisao Ishibuchi received the B.S. and M.S. degrees from Kyoto University in 1985 and 1987, respectively, and the Ph.D. degree from Osaka Prefecture University in 1992. He was with Osaka Prefecture University from 1987 to 2017. Since April 2017, he has been a Chair Professor at Southern University of Science and Technology, China. He received a JSPS Prize from the Japan Society for the Promotion of Science in 2007, best paper awards from GECCO (2004, 2017, 2018) and FUZZ-IEEE (2009, 2011), and the IEEE CIS Fuzzy Systems Pioneer Award in 2019. Dr. Ishibuchi was an IEEE CIS Vice President in 2010-2013. Currently he is an AdCom member of the IEEE CIS (2014-2019) and the Editor-in-Chief of the IEEE Computational Intelligence Magazine (2014-2019). He is an IEEE Fellow.

Hiroyuki Sato received B.E. and M.E. degrees from Shinshu University, Japan, in 2003 and 2005, respectively, and a Ph.D. degree from Shinshu University in 2009. He has worked at The University of Electro-Communications since 2009, where he is currently an associate professor. He received best paper awards in the EMO track at GECCO 2011 and 2014, and from the Transactions of the Japanese Society for Evolutionary Computation in 2012 and 2015. His research interests include evolutionary multi- and many-objective optimization and its applications. He is a member of IEEE and ACM/SIGEVO.

External website with more information on Tutorial (if applicable):

Nature-Inspired Techniques for Combinatorial Problems


Combinatorial problems refer to those applications where we either look for the existence of a consistent scenario satisfying a set of constraints (decision problem), or for one or more good/best solutions meeting a set of requirements while optimizing some objectives (optimization problem). These latter objectives include user’s preferences that reflect desires and choices that need to be satisfied as much as possible. Moreover, constraints and objectives (in the case of an optimization problem) often come with uncertainty due to lack of knowledge, missing information, or variability caused by events, which are under nature’s control. Finally, in some applications such as timetabling, urban planning and robot motion planning, these constraints and objectives can be temporal, spatial or both. In this latter case, we are dealing with entities occupying a given position in time and space.

Because of the importance of these problems in so many fields, a wide variety of techniques and programming languages from artificial intelligence, computational logic, operations research and discrete mathematics are being developed to tackle problems of this kind. While these tools have provided very promising results at both the representation and the reasoning levels, they are still impractical for many real-world applications, especially given the challenges listed above.

In this tutorial, we will show how to apply nature-inspired techniques in order to overcome these limitations. This requires dealing with different aspects of uncertainty, change, preferences and spatio-temporal information. The approach that we will adopt is based on the Constraint Satisfaction Problem (CSP) paradigm and its variants.
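As a concrete instance of the CSP paradigm mentioned above, here is a sketch of min-conflicts local search, a simple heuristic in the spirit of the techniques the tutorial covers, applied to graph colouring (the instance and parameter choices are hypothetical examples, not the tutorial’s own material):

```python
import random

def min_conflicts_coloring(neighbors, colors, steps=2000, seed=0):
    """Min-conflicts local search for graph colouring, a classic CSP:
    repeatedly pick a variable that violates a constraint and reassign it
    the colour that minimises its number of conflicts."""
    rng = random.Random(seed)
    assign = {v: rng.randrange(colors) for v in neighbors}

    def conflicts(v, c):
        return sum(1 for u in neighbors[v] if assign[u] == c)

    for _ in range(steps):
        bad = [v for v in neighbors if conflicts(v, assign[v]) > 0]
        if not bad:
            return assign  # all constraints satisfied
        v = rng.choice(bad)
        assign[v] = min(range(colors), key=lambda c: conflicts(v, c))
    return None  # no solution found within the step budget
```

The same repair-based scheme extends naturally to soft constraints and preferences by replacing the conflict count with a weighted penalty.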

Biography of the Speaker

Dr. Malek Mouhoub obtained his MSc and PhD in Computer Science from the University of Nancy in France. He is currently a Professor, and a former Head, of the Department of Computer Science at the University of Regina, Canada. Dr. Mouhoub’s research interests include Constraint Solving, Metaheuristics and Nature-Inspired Techniques, Spatial and Temporal Reasoning, Preference Reasoning, Constraint and Preference Learning, with applications to Scheduling and Planning, E-commerce, Online Auctions, Vehicle Routing and Geographic Information Systems (GIS). Dr. Mouhoub’s research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), and the Mathematics of Information Technology and Complex Systems (MITACS) federal grants, in addition to several other funds and awards.

Dr. Mouhoub is the past treasurer and member of the executive of the Canadian Artificial Intelligence Association / Association pour l’intelligence artificielle au Canada (CAIAC). CAIAC is the oldest national Artificial Intelligence association in the world. It is the official arm of the Association for the Advancement of Artificial Intelligence (AAAI) in Canada.


Dr. Mouhoub was the program co-chair for the 30th Canadian Conference on Artificial Intelligence (AI 2017), the 31st International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2018) and the IFIP International Conference on Computational Intelligence and Its Applications (IFIP CIIA 2018).

Evolutionary Computation for Dynamic Multi-objective Optimization Problems


Many real-world optimization problems involve multiple conflicting objectives and are subject to dynamic environments, where changes may occur over time in the optimization objectives, decision variables, and/or constraint conditions. Such dynamic multi-objective optimization problems (DMOPs) are inherently challenging, yet they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DMOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DMOPs has attracted considerable research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well understood. This tutorial provides an introduction to the research area of EC for DMOPs and carries out an in-depth description of the state of the art of research in the field. The purpose is to (i) provide a detailed description and classification of DMOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations of how they work for DMOPs; (iii) present current applications in the area of EC for DMOPs; (iv) analyse current gaps and challenges in EC for DMOPs; and (v) point out future research directions in EC for DMOPs.
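A minimal sketch of one building block used by EC methods for dynamic problems is change detection via re-evaluation of a stored “sentinel” solution. Both the toy time-varying bi-objective function and the detection scheme below are illustrative assumptions, not benchmarks from the tutorial:

```python
def make_dmop(t):
    """A toy dynamic bi-objective problem: an environment parameter g
    flips with the time step t, shifting both objectives (a hypothetical
    problem for illustration, not a standard DMOP benchmark)."""
    g = 0.5 + 0.4 * (-1) ** t
    def f(x):
        return (x - g) ** 2, (x + g) ** 2
    return f

def detect_change(f, sentinel, cached):
    """Re-evaluate a stored sentinel solution; a changed objective vector
    signals that the environment has moved and the population should be
    re-evaluated or diversified."""
    return f(sentinel) != cached
```

Once a change is detected, typical EC responses include re-evaluating the archive, injecting random immigrants, or increasing mutation strength.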

Tutorial Presenters (names with affiliations):

Prof. Shengxiang Yang, School of Computer Science and Informatics, De Montfort University, UK

Tutorial Presenters’ Bios:

Shengxiang Yang received his PhD degree in Systems Engineering in 1999 from Northeastern University, China. He is now a Professor of Computational Intelligence (CI) and Director of the Centre for Computational Intelligence at De Montfort University (DMU), UK. He has worked extensively for 20 years in the areas of CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 280 publications in these domains, with over 9800 citations and an H-index of 53 according to Google Scholar. His work has been supported by UK research councils, EU FP7 and Horizon 2020, the Chinese Ministry of Education, and industry partners, with total funding of over £2M, of which two EPSRC standard research projects have focused on EC for DMOPs.

He serves as an Associate Editor or Editorial Board Member of several international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, Information Sciences, Enterprise Information Systems, and Soft Computing. He was the founding chair of the Task Force on Intelligent Network Systems (TF-INS, 2012-2017) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs, 2011-2017) of the IEEE CI Society (CIS). He has organised/chaired over 60 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues, and has been invited to give over 10 keynote speeches at international conferences and workshops.


External website with more information on Tutorial (if applicable): None.

Fundamentals of Fuzzy Networks

Alexander Gegov
University of Portsmouth, UK


The tutorial focuses on the theoretical foundations of fuzzy networks. The nodes of these networks are fuzzy systems represented by rule bases and the connections between the nodes are outputs from and inputs to these rule bases [1]-[6].

Fuzzy networks have an underlying two-dimensional grid structure with horizontal levels and vertical layers. The levels represent spatial hierarchy in terms of network breadth and the layers represent temporal hierarchy in terms of network depth.

The nodes of fuzzy networks are modelled by Boolean matrices or binary relations. The connections between the nodes are modelled by block schemes or topological expressions. Each network node is located in a cell within the underlying grid structure.

Nodes in fuzzy networks are manipulated by merging and splitting operations. The merging operations are for network analysis and the splitting operations are for network design. These operations are used for converting a fuzzy network into a fuzzy system and vice versa.

The operations are illustrated on feedforward and feedback fuzzy networks. Feedforward networks include combinations of narrow/broad and shallow/deep network structures. Feedback networks include combinations of single/multiple and local/global feedback loops.

Fuzzy networks are applied to case studies from engineering, computing, transport and finance. They have been validated successfully against standard and hierarchical fuzzy systems. The validation uses performance evaluation indicators for feasibility, accuracy, efficiency and transparency.
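The merging of two sequential nodes, each modelled as a Boolean (or fuzzy) relation matrix as described above, can be sketched with max-min composition. This is a minimal sketch; the full merging and splitting operations of fuzzy networks as in [1] cover further cases:

```python
def max_min_compose(r1, r2):
    """Merge two sequential rule-base nodes into one equivalent node:
    max-min composition of their relation matrices. r1 is rows x inner,
    r2 is inner x cols; entries may be Boolean 0/1 or fuzzy values in [0,1]."""
    rows, inner, cols = len(r1), len(r2), len(r2[0])
    return [[max(min(r1[i][k], r2[k][j]) for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]
```

Composing a node with the identity relation leaves it unchanged, which is the sanity check one expects of a merging operation.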

[1] A. Gegov, Fuzzy Networks for Complex Systems: A Modular Rule Base Approach, Studies in Fuzziness and Soft Computing (Springer, Berlin, 2011)

[2] F. Arabikhan, Telecommuting Choice Modelling using Fuzzy Rule Based Networks, PhD Thesis (University of Portsmouth, UK, 2017)

[3] A. Gegov, F. Arabikhan and N. Petrov, Linguistic composition based modelling by fuzzy networks with modular rule bases, Fuzzy Sets and Systems 269 (2015) 1-29

[4] X. Wang, A. Gegov, F. Arabikhan, Y. Chen and Q. Hu, Fuzzy network based framework for software maintainability prediction, Uncertainty, Fuzziness and Knowledge Based Systems 27/5 (2019) 841-862

[5] A. Yaakob, A. Serguieva and A. Gegov, FN-TOPSIS: Fuzzy networks for ranking traded equities, IEEE Transactions on Fuzzy Systems 25/2 (2016) 315-332

[6] A. Yaakob, A. Gegov and S. Rahman, Fuzzy networks with rule base aggregation for selection of alternatives, Fuzzy Sets and Systems 341 (2018) 123-144

Presenters’ Names and Affiliations:

Alexander Gegov, University of Portsmouth, UK,

Farzad Arabikhan, University of Portsmouth, UK,

Presenters’ Bios:

Alexander Gegov is Reader in Computational Intelligence in the School of Computing, University of Portsmouth, UK. He holds a PhD in Control Systems and a DSc in Intelligent Systems – both from the Bulgarian Academy of Sciences. He has been a recipient of a National Annual Award for Best Young Researcher from the Bulgarian Union of Scientists. He has been Humboldt Guest Researcher at the University of Duisburg in Germany. He has also been EU Visiting Researcher at the University of Wuppertal in Germany and the Delft University of Technology in the Netherlands.

Alexander Gegov’s research interests are in the development of computational intelligence methods and their application to modelling and simulation of complex systems and networks. He has edited 6 books, authored 5 research monographs and over 30 book chapters – most of these published by Springer. He has authored over 50 articles and 100 papers in international journals and conferences – many of these published and organised by IEEE. He has also presented over 20 invited lectures and tutorials – most of these at IEEE Conferences on Fuzzy Systems, Intelligent Systems, Computational Intelligence and Cybernetics.

Alexander Gegov is Associate Editor for ‘IEEE Transactions on Fuzzy Systems’, ‘Fuzzy Sets and Systems’, ‘Intelligent and Fuzzy Systems’ and ‘Computational Intelligence Systems’. He is also Reviewer for several IEEE journals and Assessor for several National Research Councils. He is Member of the IEEE Computational Intelligence Society and the Soft Computing Technical Committee of the IEEE Society of Systems, Man and Cybernetics. He is also Guest Editor for the forthcoming Special Issue on Deep Fuzzy Models of the IEEE Transactions on Fuzzy Systems.

Farzad Arabikhan joined the University of Portsmouth as a lecturer in 2017. He completed his PhD in 2017 at the University of Portsmouth; his thesis focused on modelling telecommuting using fuzzy networks. In his research, he optimised fuzzy networks using genetic algorithms and data mining approaches. Having published his research results in several journal and conference papers, he has also secured funding from the European Cooperation in Science and Technology (COST) to collaborate with European scholars at University Paris 1 Pantheon-Sorbonne, France, and the Mediterranean University of Reggio Calabria to pursue his research activities. He holds BSc and MSc degrees in Civil and Transportation Engineering from the Sharif University of Technology, Tehran, Iran.

Patch Learning: A New Method of Machine Learning, Implemented by Means of Fuzzy Sets


There have been different strategies to improve the performance of a machine learning model, e.g., increasing the depth, width, and/or nonlinearity of the model, or using ensemble learning to aggregate multiple base/weak learners in parallel or in series. The goal of this tutorial is to describe a novel strategy for this problem called patch learning (PL). PL consists of three steps: 1) train an initial global model using all training data; 2) identify from the initial global model the patches that contribute the most to the learning error, and train a (local) patch model for each such patch; and 3) update the global model using the training data that do not fall into any patch. To use a PL model, one first determines whether the input falls into any patch. If yes, the corresponding patch model is used to compute the output; otherwise, the global model is used. To date, PL can only be implemented using fuzzy systems, and how this is accomplished will be explained. Regression problems on 1D/2D/3D curve fitting, nonlinear system identification, and chaotic time-series prediction will be used to demonstrate the effectiveness of PL. PL opens up a promising new line of research in machine learning, and opportunities for future research will be explained.
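The three steps can be sketched on a toy 1D regression problem. The mean-predictor global model and interval-shaped patches below are simplifying assumptions for illustration; the tutorial’s actual implementation uses fuzzy systems:

```python
import statistics

def fit_global(xs, ys):
    """Step 1 helper: a crude global model that predicts the mean of y
    (assumption: any regression model could stand in here)."""
    m = statistics.mean(ys)
    return lambda x: m

def patch_learning(xs, ys, n_patches=1, width=0.5):
    # Step 1: train an initial global model on all data.
    g = fit_global(xs, ys)
    # Step 2: find the points with the largest error under the global
    # model and fit a local patch model (here: the local mean) around each.
    worst = sorted(range(len(xs)),
                   key=lambda i: abs(ys[i] - g(xs[i])), reverse=True)
    patches = []
    for i in worst[:n_patches]:
        lo, hi = xs[i] - width, xs[i] + width
        inside = [j for j in range(len(xs)) if lo <= xs[j] <= hi]
        local = statistics.mean(ys[j] for j in inside)
        patches.append((lo, hi, local))
    # Step 3: refit the global model on the data outside every patch.
    outside = [j for j in range(len(xs))
               if not any(lo <= xs[j] <= hi for lo, hi, _ in patches)]
    g = fit_global([xs[j] for j in outside], [ys[j] for j in outside])

    def predict(x):
        # Use the patch model if x falls in a patch, else the global model.
        for lo, hi, local in patches:
            if lo <= x <= hi:
                return local
        return g(x)
    return predict
```

On data that are flat except for one spike, the patch captures the spike while the refitted global model handles the rest.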


This tutorial is based on materials in the following journal articles:
• D. Wu and J. M. Mendel, “Patch Learning,” IEEE Trans. on Fuzzy Systems, Early Access, July 2019.
• J. M. Mendel, “Explaining the performance potential of rule-based fuzzy systems as a greater sculpting of the state space,” IEEE Trans. on Fuzzy Systems, vol. 26, no. 4, pp. 2362–2373, Aug. 2018.
• J. M. Mendel, “Adaptive variable structure basis function expansions: candidates for machine learning,” Information Sciences, vol. 496, pp. 124–149, 2019.

Outline of Covered Material
• Introduction
o Machine learning
o Present approaches to improving machine learning performance
• Three questions, as the basis for the rest of the tutorial:
o What is the general idea of Patch Learning (PL)?
o How can a patch and PL be implemented?
o How well does PL perform?
• What is the general idea of PL?
o What is a patch?
o Steps of PL
o Analogy to a sculptor who is sculpting a human figure
o Determining optimal number of patch models
o PL illustrated by a simple regression example
o Logic for determining which model to use in PL
• How can a patch and PL be implemented?
o Patches
o Partitions of the state space
o Implementing a patch using fuzzy sets
o Locating a measured value in a specific patch
o Implementation of PL
• How well does PL perform?
o Example 1: 1D curve fitting
o Example 2: 2D surface fitting
o Other examples, time permitting
• Future research topics
• Conclusions

Tutorial Presenter:

Jerry M. Mendel (Life Fellow IEEE, Fuzzy Systems Pioneer of the IEEE Computational Intelligence Society), Emeritus Professor of Electrical Engineering, University of Southern California, Los Angeles, CA.

Tutorial Presenter’s Biography:

Jerry M. Mendel received the Ph.D. degree in electrical engineering from the Polytechnic Institute of Brooklyn, Brooklyn, NY. Currently, he is Emeritus Professor of Electrical Engineering at the University of Southern
California in Los Angeles, where he has been since 1974. He is also a Tianjin 1000-Talents Foreign Experts Plan Endowed Professor, and Honorary Dean of the College of Artificial Intelligence, Tianjin Normal University, Tianjin, China. He has published over 580 technical papers and is author and/or co-author of 13 books, including Uncertain Rule-based Fuzzy
Systems: Introduction and New Directions, 2nd ed. (Springer 2017), Perceptual Computing:
Aiding People in Making Subjective Judgments (Wiley & IEEE Press, 2010), and Introduction to Type-2 Fuzzy Logic Control: Theory and Application (Wiley & IEEE Press, 2014). He is a Life Fellow of the IEEE, a Distinguished Member of the IEEE Control Systems Society, and a Fellow of the International Fuzzy Systems Association. He was President of the IEEE Control Systems Society in 1986, a member of the Administrative Committee of the IEEE Computational Intelligence Society for nine years, and Chairman of its Fuzzy Systems Technical Committee and the Computing With Words Task Force of that TC. Among his awards are the 1983 Best Transactions Paper Award of the IEEE Geoscience and Remote Sensing Society, the 1992 Signal
Processing Society Paper Award, the 2002 and 2014 Transactions on Fuzzy Systems Outstanding Paper Awards, a 1984 IEEE Centennial Medal, an IEEE Third Millennium Medal, a Fuzzy Systems Pioneer Award (2008) from the IEEE Computational Intelligence Society for “fundamental theoretical contributions and seminal results in fuzzy systems,” and the 2015 USC
Viterbi School of Engineering Senior Research Award. His present research interests (yes, he is still performing research with many colleagues around the Globe) include: type-2 fuzzy logic systems and computing with words.

Fuzzy Systems for Neuroscience and Neuro-Engineering

Javier Andreu-Perez
University of Essex

Chin-Teng Lin
University of Technology Sydney

Abstract: This tutorial introduces new researchers to the field of Neuroscience and Neuro-engineering from a fuzzy perspective. Attendees do not require prior knowledge of fuzzy systems or neuroscience. We will focus on brain research/decoding methods that use non-invasive neuroimaging modalities, and present the latest and most outstanding works that have applied fuzzy systems to brain signals to date. Given the important challenges associated with processing brain signals obtained from neuroimaging modalities, fuzzy sets and systems have been proposed as a useful and effective framework for analysing brain activity, as well as for enabling a direct communication pathway between the brain and external devices (brain-computer/machine interfaces). While there has been increasing interest in these questions, the contribution of fuzzy logic, sets, and systems has varied with the area of application in neuroscience. With regard to decoding brain activity, fuzzy sets and systems represent an excellent tool for overcoming the challenge of processing extremely noisy signals that are affected by high uncertainty. The tutorial will also provide an introduction to the foundations of fuzzy sets, logic and systems for the analysis of brain signals and neuroimaging data, including related disciplines such as computational neuroscience, brain-computer/machine interfaces, neuroengineering, neuroinformatics, neuroergonomics, affective neuroscience and neurotechnology. After the tutorial, we will conduct an interactive survey and a panel discussion among the attendees.

Tutorial Presenters (names with affiliations): Javier Andreu-Perez (University of Essex, United Kingdom), Chin-Teng Lin (University of Technology Sydney, Australia)


Javier Andreu-Perez (SMIEEE) is Senior Lecturer in the School of Computer Science and Electronic Engineering (CSEE), University of Essex, United Kingdom (UK). He holds a PhD in Intelligent Systems from Lancaster University, UK. His research expertise lies in the development of new methods in artificial intelligence and machine learning for the healthcare domain, particularly application-driven advances of AI and machine learning for the analysis of biomedical and neuroimaging data. He has expertise in the use of Big Data, machine learning models based on deep learning, and methodologies of uncertainty modelling for highly noisy, non-stationary signals. Javier has published highly-cited papers in several prestigious IEEE Transactions and other top Q1 journals in Artificial Intelligence and Neuroscience; in total his work in artificial intelligence and biomedical engineering has attracted more than 1,400 citations. Javier has participated in awarded projects from UK research councils and funders such as Innovate UK, the NIHR Biomedical Research Centre, the Wellcome Trust Centre for Global Health Research, and private corporations. He is a member of the EPSRC peer review college, and Associate/Area Editor for Neurocomputing (Elsevier) and the International Journal of Computational Intelligence (the EUSFLAT official journal). Javier is also Chair of the IEEE CIS Task Force on Extensions to Type-1 Fuzzy Sets and co-chair of an international working group on Uncertainty Modelling for Neuro-Engineering. He is a frequent organiser of special sessions and competitions at FUZZ-IEEE and WCCI on the use of fuzzy systems in brain research and interfaces.

Chin-Teng Lin (FIEEE) received the B.S. degree from National Chiao-Tung University (NCTU), Taiwan in 1986, and the Master's and Ph.D. degrees in electrical engineering from Purdue University, USA in 1989 and 1992, respectively. He is currently Distinguished Professor in the Faculty of Engineering and Information Technology, and Co-Director of the Center for Artificial Intelligence, University of Technology Sydney, Australia. He is also invited as Honorary Chair Professor of Electrical and Computer Engineering, NCTU, International Faculty of the University of California at San Diego (UCSD), and holds an Honorary Professorship at the University of Nottingham. Dr. Lin was elevated to IEEE Fellow for his contributions to biologically inspired information systems in 2005, and to International Fuzzy Systems Association (IFSA) Fellow in 2012. He received the IEEE Fuzzy Systems Pioneer Award in 2017. He served as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems from 2011 to 2016. He also served on the Board of Governors of the IEEE Circuits and Systems (CAS) Society in 2005-2008, the IEEE Systems, Man, and Cybernetics (SMC) Society in 2003-2005, and the IEEE Computational Intelligence Society in 2008-2010, and as Chair of the IEEE Taipei Section in 2009-2010. Dr. Lin was a Distinguished Lecturer of the IEEE CAS Society from 2003 to 2005 and of the CIS from 2015 to 2017. He served as Chair of the IEEE CIS Distinguished Lecturer Program Committee in 2018-2019, and as Deputy Editor-in-Chief of the IEEE Transactions on Circuits and Systems-II in 2006-2008. Dr. Lin was Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics in 2005 and General Chair of the 2011 IEEE International Conference on Fuzzy Systems. He is co-author of Neural Fuzzy Systems (Prentice-Hall) and author of Neural Fuzzy Control Systems with Structure and Parameter Learning (World Scientific).
He has published more than 300 journal papers (Total Citation: 20,163, H-index: 65, i10-index: 254) in the areas of neural networks, fuzzy systems, brain computer interface, multimedia information processing, and cognitive neuro-engineering, including about 120 IEEE journal papers.

Paving the way from Interpretable Fuzzy Systems to Explainable Artificial Intelligence Systems

José M. Alonso
Research Centre in Intelligent Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Campus Vida, E-15782, Santiago de Compostela, Spain


Ciro Castiello
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy


Corrado Mencar
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy


Luis Magdalena
Department of Applied Mathematics, School of Informatics
Universidad Politécnica de Madrid (UPM), Spain



In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate and pre-process the data; then they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it. AI has been identified as “the most strategic technology of the 21st century” and is already part of our everyday life. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI in short) in order to develop an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the last challenge stated by the USA Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans and provide comprehensible explanations of the decisions made automatically.

XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human could understand in a given context, and explicitly explaining such decisions. This way, it is possible to verify if automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.

Even though XAI systems are likely to make their impact felt in the near future, there is a lack of experts to develop the fundamentals of XAI, i.e., ready to develop and to maintain the new generation of AI systems that are expected to surround us soon. This is mainly due to the inherent multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous research fields. Moreover, it is hard to find XAI experts with a holistic view as well as wide and solid background regarding all the related topics.

Consequently, the main goal of this tutorial is to provide attendees with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and how to enhance human-machine interaction.

The tutorial will cover the main theoretical concepts of the topic, as well as examples and real applications of XAI techniques. In addition, ethical and legal aspects concerning XAI will also be considered.

Tutorial Presenters (names with affiliations):

José M. Alonso
Research Centre in Intelligent Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Campus Vida, E-15782, Santiago de Compostela, Spain


Ciro Castiello
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy


Corrado Mencar
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy


Luis Magdalena
Department of Applied Mathematics, School of Informatics
Universidad Politécnica de Madrid (UPM), Spain


Tutorial Presenters’ Bios:

Jose M. Alonso received his M.S. and Ph.D. degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain, in 2003 and 2007, respectively. Since June 2016, he has been a postdoctoral researcher at the University of Santiago de Compostela, in the Research Centre in Intelligent Technologies (CiTIUS). He is currently Chair of the Task Force on “Fuzzy Systems Software” in the Fuzzy Systems Technical Committee of the IEEE Computational Intelligence Society, Associate Editor of the IEEE Computational Intelligence Magazine (ISSN 1556-603X), secretary of the ACL Special Interest Group on Natural Language Generation, and chair of the Doctoral Consortium at the 2020 European Conference on Artificial Intelligence. He is currently coordinating the H2020-MSCA-ITN-2019 project “Interactive Natural Language Technology for Explainable Artificial Intelligence” (NL4XAI). He has published more than 130 papers in international journals, book chapters and peer-reviewed conferences. According to Google Scholar (accessed February 15, 2020) he has an h-index of 21 and an i10-index of 42. His research interests include computational intelligence, explainable artificial intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools.

Ciro Castiello graduated in Informatics in 2001 and received his Ph.D. in Informatics in 2005. Currently he is an Assistant Professor at the Department of Informatics of the University of Bari Aldo Moro, Italy. His research interests include soft computing techniques, inductive learning mechanisms, interpretability of fuzzy systems, and eXplainable Artificial Intelligence. He has participated in several research projects and published more than seventy peer-reviewed papers. He is regularly involved in the teaching activities of his department, and is a member of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the INdAM research group GNCS (Italian National Group of Scientific Computing).

Corrado Mencar is Associate Professor in Computer Science at the Department of Computer Science of the University of Bari “A. Moro”, Italy. He graduated in Computer Science in 2000 and obtained his PhD in Computer Science in 2005. In 2001 he was an analyst and software designer for some Italian companies. Since 2005 he has worked on research topics concerning Computational Intelligence and Granular Computing. As part of his research activity he has participated in several research projects and published over one hundred peer-reviewed international scientific publications. He is Associate Editor of several international scientific journals, as well as Featured Reviewer for ACM Computing Reviews, and regularly organizes scientific events related to his research topics with international colleagues. His current research topics include fuzzy logic systems with a focus on interpretability and Explainable Artificial Intelligence, Granular Computing, Computational Intelligence applied to the Semantic Web, and Intelligent Data Analysis. As part of his teaching activity, he is, or has been, the holder of numerous classes and PhD courses on various topics, including Computer Architectures, Programming Fundamentals, Computational Intelligence and Information Theory.

Luis Magdalena is with the Dept. of Applied Mathematics for ICT of the Universidad Politécnica de Madrid. From 2006 to 2016 he was Director General of the European Centre for Soft Computing in Asturias (Spain); under his direction, the Centre was recognized with the IEEE-CIS Outstanding Organization Award in 2012. Prof. Magdalena has been actively involved in more than forty research projects. He has co-authored or co-edited ten books, including “Genetic Fuzzy Systems”, “Accuracy Improvements in Linguistic Fuzzy Modelling”, and “Interpretability Issues in Fuzzy Modeling”. He has also authored over one hundred and fifty papers in books, journals and conferences, receiving more than 6000 citations. Prof. Magdalena has been President of the European Society for Fuzzy Logic and Technology, Vice-president of the International Fuzzy Systems Association, and is Vice-President for Technical Activities of the IEEE Computational Intelligence Society for the period 2020-21.



Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting

Dr. P. N. Suganthan
Nanyang Technological University, Singapore.


This tutorial will first introduce the main randomization-based learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks, and kernel ridge regression. The popular feedforward instantiation called the random vector functional link neural network (RVFL) originated in the early 1990s. Other feedforward methods are random weight neural networks (RWNN), extreme learning machines (ELM), etc. Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another paradigm is based on the kernel trick, such as kernel ridge regression, which includes randomization for scaling to large training data. The tutorial will also consider computational complexity with increasing scale of the classification/forecasting problems. A further randomization-based paradigm is the random forest, which exhibits highly competitive performance. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
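To illustrate the closed-form flavour of these methods, a minimal RVFL-style sketch might look as follows: random, untrained hidden weights; direct input-to-output links; and a ridge-regression readout. This is a rough sketch under assumed parameter choices, not the presenters' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=50, lam=1e-3):
    # Random hidden-layer weights and biases are drawn once and never trained.
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W + b)            # random nonlinear features
    D = np.hstack([X, H])             # direct links + hidden features (RVFL)
    # Closed-form ridge solution: beta = (D^T D + lam I)^{-1} D^T y
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta

# Toy regression: learn y = x0 + x1^2 from 200 samples.
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + X[:, 1] ** 2
W, b, beta = rvfl_fit(X, y)
y_hat = rvfl_predict(X, W, b, beta)
```

Only the output weights `beta` are solved for, which is why training reduces to a single linear system rather than iterative gradient descent.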

 Key Papers:

 General Bio-sketch:

Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK in 1990, 1992 and 1994, respectively. He received an honorary doctorate (Doctor Honoris Causa) in 2020 from the University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney in 1995-96 and as a lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland in 1996-99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013-2018). He is/was an associate editor of Applied Soft Computing (Elsevier, 2018-), Neurocomputing (Elsevier, 2018-), IEEE Trans. on Cybernetics (2012-2018), IEEE Trans. on Evolutionary Computation (2005-), Information Sciences (Elsevier) (2009-), Pattern Recognition (Elsevier) (2001-) and the Int. J. of Swarm Intelligence Research (2009-). He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010-), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the IEEE Trans. on Evolutionary Computation outstanding paper award in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD dissertation award in 2014. His research interests include swarm and evolutionary algorithms, pattern recognition, big data, deep learning and applications of swarm, evolutionary and machine learning algorithms. His publications have been well cited; his SCI-indexed publications have attracted over 1000 SCI citations in each calendar year since 2013. He was selected as one of the highly cited researchers in computer science by Thomson Reuters yearly from 2015 to 2019. He served as the General Chair of the IEEE SSCI 2013.
He has been a member of the IEEE (S’90, M’92, SM’00, F’15) since 1991 and an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014-2016. He is an IEEE CIS distinguished lecturer (DLP) in 2018-2020.

Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Representation   

Prof. Nikola Kasabov
Director, Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand


This 2-hour tutorial demonstrates that the third generation of artificial neural networks, spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from the incoming data. Similarly to how the brain learns, these SNN models need not be restricted in the number of layers, neurons per layer, etc., as they adopt the self-organising learning principles of the brain. The tutorial consists of three parts:

  1. Algorithms for deep, incremental learning in SNN.
  2. Algorithms for knowledge representation and for tracing the knowledge evolution in SNN over time from incoming data. Representing fuzzy spatio-temporal rules from SNN.
  3. Selected Applications
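As background for the learning algorithms in part 1, the basic unit of such architectures, the leaky integrate-and-fire (LIF) neuron, can be sketched as follows. This is an illustrative textbook sketch; the parameters are arbitrary and not those of NeuCube.

```python
import numpy as np

def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; return the spike times."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(input_current):
        # Leaky integration of the membrane potential: dv/dt = (-v + I) / tau
        v += dt * (-v + i_t) / tau
        if v >= v_thresh:        # threshold crossing emits a spike...
            spikes.append(t)
            v = v_reset          # ...and the membrane potential resets
    return spikes

# A constant supra-threshold input produces a regular spike train.
spike_times = lif_simulate(np.full(200, 1.5))
```

Incremental SNN learning methods then adapt the connection weights between such neurons from the timing of these spikes, which is what allows learning to proceed sample by sample rather than in batch.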

The material is illustrated on an exemplar SNN architecture, NeuCube (free and open-source software, along with a cloud-based version, is available). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; Alzheimer's disease prediction; understanding depression; and predicting environmental hazards and extreme events.

It is also demonstrated that brain-inspired SNN architectures, such as NeuCube, allow for knowledge transfer between humans and machines through building brain-inspired Brain-Computer Interfaces (BI-BCI). These are used to understand human-to-human knowledge transfer through hyper-scanning and to create brain-like neuro-rehabilitation robots. This opens the way to building a new type of AI system: open and transparent AI.

Reference: N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019.


Prof. Nikola Kasabov, Director, Knowledge Engineering and Discovery Research Institute,

Auckland University of Technology, Auckland, New Zealand.


Professor Nikola Kasabov is a Fellow of IEEE, Fellow of the Royal Society of New Zealand, Fellow of the INNS College of Fellows, and DVF of the Royal Academy of Engineering UK. He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology, New Zealand. Kasabov is the Immediate Past President of the Asia Pacific Neural Network Society (APNNS) and a Past President of the International Neural Network Society (INNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and was a Distinguished Lecturer of the IEEE (2012-2014). He is Editor of the Springer Handbook of Bio-Neuroinformatics, the Springer series of Bio- and Neuro-systems, and the Springer journal Evolving Systems. He is Associate Editor of several journals, including Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cognitive and Developmental Systems, Information Sciences, and Applied Soft Computing. Kasabov holds MSc and PhD degrees from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 620 publications. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: TU Sofia, Bulgaria; University of Essex, UK; University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University and CASIA, China; Visiting Professor at ETH/University of Zurich and Robert Gordon University, UK; Honorary Professor of Teesside University, UK; and George Moore Professor of Data Analytics at the University of Ulster. Prof.
Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; INNS Ada Lovelace Meritorious Service Award; NN Best Paper Award for 2016; APNNA ‘Outstanding Achievements Award’; INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; EU Marie Curie Fellowship; Bayer Science Innovation Award; APNNA Excellent Service Award;  RSNZ Science and Technology Medal; 2015 AUT Medal; Honorable Member of the Bulgarian, the Greek and the Scottish Societies for Computer Science. More information of Prof. Kasabov can be found from:

Generalized constraints for knowledge-driven-and-data-driven approaches

Bao-Gang (B.-G.) Hu
Professor, Senior Member of IEEE
National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China


In this tutorial, I will start with a question about the existing studies on machine learning (ML) and artificial intelligence (AI): “Do we encounter any new mathematical, yet general, problem in ML and AI?”. The question is motivated by the philosophy of Kalman (2008): “Once you get the physics right, the rest is mathematics”. The tutorial takes up the notion of “Generalized Constraints (GCs)” due to Zadeh (1986, 1996). We consider this a “new” mathematical problem because it remains largely unrecognized across the related communities.

The tutorial will focus on GCs in the context of knowledge-and-data-driven modeling (KDDM, called KD/DD by Brinkley, 1985) approaches. While Deep Learning (DL), as a data-driven (DD) approach, has been successful in various application areas, we believe that KDDM approaches will be the next step in advancing the existing tools. Current Artificial Neural Networks (ANNs), including DL, are not ready to incorporate arbitrary types of prior knowledge and in general work as “black box” tools. To overcome these difficulties, we redefine GCs and discuss the problems in comparison with conventional constraints. We show that GCs appear more often, and in more general forms, in applications, and that they enlarge our study space toward a novel mathematical problem, “Generalized Constraint Learning (GCL)”. Five open issues are presented to highlight missing, yet important, studies in ANNs.

The tutorial follows the previous one (Hu, 2017) on “What to learn?”, but stresses Transparent ANNs (tANNs) as the learning target. We consider KDDM and GCs to be necessary solutions. Furthermore, the relations among tANNs, Interpretable ANNs (iANNs) and Explainable ANNs (xANNs) are redefined. Several numerical examples are demonstrated on supervised and unsupervised problems in relation to the five issues.

The objective of this tutorial is to highlight GC problems from a KDDM perspective for transparent/interpretable/explainable AI, rather than for specific applications.

Tutorial Presenters (names with affiliations): 

Bao-Gang (B.-G.) Hu, Professor, Senior Member of IEEE

National Laboratory of Pattern Recognition,
Institute of Automation,
Chinese Academy of Sciences, China

Tutorial Presenters’ Bios: 

Dr. Bao-Gang Hu is a Professor with the NLPR (National Laboratory of Pattern Recognition), Institute of Automation, Chinese Academy of Sciences, Beijing, China. He received his M.S. degree from the University of Science and Technology Beijing, China in 1983, and his Ph.D. degree from McMaster University, Canada in 1993. From 2000 to 2005, he was the Chinese Director of LIAMA (the Chinese-French Joint Laboratory supported by CAS and INRIA). His current research interests are pattern recognition and computer modeling. He gave a tutorial at IJCNN-2017/ICONIP-2018 entitled “Information Theoretic Learning in Pattern Classification”.

External website with more information on Tutorial (if applicable): I will provide the website including slides before June 15, 2020.

Explainable-by-Design Deep Learning: Fast, Highly Accurate, Weakly Supervised, Self-evolving

Prof Plamen Angelov, PhD, DSc, FIEEE
School of Computing and Communications, Lancaster University, UK
Vice President International Neural Network Society

Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. Recent developments in this area range from accurately recognising images and speech to beating the best players in games like Chess, Go and Jeopardy. In such well-structured problems, ML and AI algorithms were able to surpass human performance, acting autonomously. These breakthroughs were made possible by the dramatic increase of computational power and the amount and ubiquity of available data. This data-rich environment, however, led to the temptation to shortcut from raw data to solutions without gaining a deep insight into and understanding of the underlying dependencies and causalities between the factors and the internal model structure. Even the most powerful (in terms of accuracy) algorithms, such as deep learning (DL), can give a wrong output, which may be fatal.

Recently, a crash by a driverless Uber car was reported raising issues such as responsibility and the lack of transparency, which could help analyse the cause and prevent future crashes. Due to the opaque and cumbersome model structure used by DL, some authors started to talk about a dystopian “black box” society. Having the true potential to revolutionize industries and the way we live, the recent breakthroughs in ML and AI also raised many new questions and issues. These are related primarily to their transparency, explainability, fairness, bias and their heavy dependence on large quantities of labeled training data.

Despite the success in this area, the way computers learn is still principally different from the way people acquire new knowledge, recognise objects and make decisions. Children during their sensory-motor development stage (first two years of a child’s life) imitate observed activities and are able to learn from one or few examples in “one-shot learning”. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired prototypes, not by using parametric analytical models. They can explain and pass aggregated knowledge to other humans. They predict based on rules they formulate using prototypes and examples.

Current ML approaches are focused primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort needed to collect and label training data, and rely on assumptions about the data distribution that are often not satisfied. For example, the widely used assumption that the validation data have the same distribution as the training data is usually not satisfied in reality and is the main reason for poor performance. The typical assumption for classification, that all validation data come from the same classes as the training data, may also be incorrect: it does not consider scenarios in which new classes appear, for example when a driverless car is confronted with a scene that never appeared in the training data, or when a new type of malware or attack appears in the cybersecurity domain. In such scenarios, the best existing approach, transfer learning, requires a heavy and long process of training with huge amounts of labeled data; while driving in real time, the car would be helpless, and in cybersecurity it is not possible to pre-train for all possible attacks and viruses. Therefore, the ability to detect the unseen and unexpected and to start learning the new class or classes in real time with little or no supervision is critically important, and is something that no currently existing classifier can offer. Another big problem with currently existing ML algorithms is that they ignore the semantic meaning, explainability and reasoning aspects of the solutions they propose. The challenge is to fill this gap between a high level of accuracy and semantically meaningful solutions.

The most efficient algorithms that have recently fuelled interest in ML and AI are also computationally hungry: they require specific hardware accelerators such as GPUs, huge amounts of labelled data, and long training times. They produce parameterised models with hundreds of millions of coefficients that are impossible for a human to interpret or manipulate. Once trained, such models are inflexible to new knowledge: they cannot dynamically evolve their internal structure to start recognising new classes, and are good only for what they were originally trained to do. They also lack robustness, formal guarantees about their behaviour, and explanatory and normative transparency. This makes their use problematic in high-stakes domains such as aviation, health and bail decisions, where a clear rationale for each decision is essential and errors are very costly.

All these challenges and identified gaps call for a dramatic paradigm shift and a radically new approach. In this tutorial the speaker will present such an approach towards the next generation of computationally lean ML and AI algorithms that can learn in real time using ordinary CPUs on computers, laptops and smartphones, or even be implemented on a chip, dramatically changing the way these technologies are applied. It can open a huge market of truly intelligent devices that learn lifelong, improve their performance and adapt to the user's demands. The approach is called anthropomorphic because it shares characteristics with the way people learn, aggregate, articulate and exchange knowledge, and it is explainable-by-design. It addresses the open research challenge of developing highly efficient, accurate ML algorithms and AI models that are transparent, interpretable, explainable and fair by design. Such systems can self-learn lifelong and continuously improve without complete re-training, start learning from a few training samples, explore the data space, detect and learn from unseen data patterns, and collaborate seamlessly with humans or with other such algorithms.
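The prototype-based learning described above can be illustrated with a toy sketch (this is an illustrative analogue, not the speaker's actual xDNN or empirical approach): a classifier that stores one prototype per class, justifies each decision by similarity to a prototype, and can absorb a brand-new class simply by adding a prototype for it, with no retraining.

```python
import numpy as np

def fit_prototypes(X, y):
    """Learn one prototype (class mean) per class -- a toy analogue of
    learning by similarity to previously seen examples."""
    classes = np.unique(y)
    return {c: X[y == c].mean(axis=0) for c in classes}

def predict(protos, x):
    """Assign x to the class of the nearest prototype; the distance itself
    gives a human-readable justification ('closest to class c')."""
    dists = {c: np.linalg.norm(x - p) for c, p in protos.items()}
    return min(dists, key=dists.get)

# Two toy 2-D classes (illustrative data)
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
protos = fit_prototypes(X, y)
print(predict(protos, np.array([0.1, 0.2])))   # near class 0
print(predict(protos, np.array([4.8, 5.2])))   # near class 1

# A previously unseen class can be added from a single example:
protos[2] = np.array([-5.0, -5.0])
print(predict(protos, np.array([-4.9, -5.1])))  # classified as the new class
```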


[1] P. P. Angelov, X. Gu, Toward anthropomorphic machine learning, IEEE Computer, 51(9):18–27, 2018.
[2] P. Angelov. X. Gu, Empirical Approach to Machine Learning, Studies in Computational Intelligence, vol.800, ISBN 978-3-030-02383-6, Springer, Cham, Switzerland, 2018.
[3] P. P. Angelov, X. Gu, Deep rule-based classifier with human-level performance and characteristics, Information Sciences, vol. 463-464, pp.196-213, Oct. 2018.
[4] P. Angelov, X. Gu, J. Principe, Autonomous learning multi-model systems from data streams, IEEE Transactions on Fuzzy Systems, 26(4): 2213-2224, Aug. 2018.
[5] P. Angelov, X. Gu, J. Principe, A generalized methodology for data analysis, IEEE Transactions on Cybernetics, 48(10): 2981-2993, Oct 2018.
[6] X. Gu, P. Angelov, C. Zhang, P. Atkinson, A massively parallel deep rule-based ensemble classifier for remote sensing scenes, IEEE Geoscience and Remote Sensing Letters, vol. 15 (3), pp. 345-349, 2018.
[7] P. Angelov, Autonomous Learning Systems: From Data Streams to Knowledge in Real time, John Willey and Sons, Dec.2012, ISBN: 978-1-1199-5152-0.
[8] P. Angelov, E. Soares, Towards explainable deep neural networks (xDNN), arXiv:1912.02523, 5 December 2019.

Biographical data of the speaker:

Prof. Angelov (MEng 1989, PhD 1993, DSc 2015) is a Fellow of the IEEE, the IET and the HEA. He is Vice President of the International Neural Networks Society (INNS) for Conferences and a Governor of the IEEE Systems, Man and Cybernetics Society. He has 30 years of professional experience in high-level research and holds a Personal Chair in Intelligent Systems at Lancaster University, UK. In 2010 he founded the Intelligent Systems Research group, which he led until 2014, when he founded the Data Science group at the School of Computing and Communications; before going on sabbatical in 2017 he established the LIRA (Lancaster Intelligent, Robotic and Autonomous systems) Research Centre, which includes over 30 academics across different Faculties and Departments of the University. He is a founding member of the Data Science Institute and of the CyberSecurity Academic Centre of Excellence at Lancaster. He has authored or co-authored 300 peer-reviewed publications in leading journals and conference proceedings, 3 granted patents (plus 3 filed applications) and 3 research monographs (Wiley, 2012; Springer, 2002 and 2018), cited 9000+ times, with an h-index of 49 and an i10-index of 160. His single most cited paper has 960 citations. He has an active research portfolio in computational intelligence and machine learning, with internationally recognised results in online and evolving learning and in algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems. Prof. Angelov leads numerous projects (including several multimillion ones) funded by UK research councils, the EU, industry and the UK MoD. His research was recognised by 'The Engineer Innovation and Technology 2008 Special Award' and by 'For outstanding Services' awards (2013) from the IEEE and INNS.
He is also the founding co-Editor-in-Chief of Springer's journal Evolving Systems and an Associate Editor of several leading international journals, including the IEEE Transactions on Fuzzy Systems (the IEEE Transactions with the highest impact factor), the IEEE Transactions on Systems, Man and Cybernetics, Applied Soft Computing, Fuzzy Sets and Systems, Soft Computing, and others. He has given over a dozen plenary and keynote talks at high-profile conferences. Prof. Angelov was General co-Chair of a number of high-profile conferences, including IJCNN 2013 (Dallas, TX), IJCNN 2015 (Killarney, Ireland), the inaugural INNS Conference on Big Data (San Francisco), the 2nd INNS Conference on Big Data (Thessaloniki, Greece) and a series of annual IEEE Symposia on Evolving and Adaptive Intelligent Systems. Dr Angelov is the founding Chair of the Technical Committee on Evolving Intelligent Systems of the IEEE SMC Society and previously chaired the Standards Committee of the IEEE Computational Intelligence Society (2010-2012). He has also served on the International Program Committees of over 100 international conferences (primarily IEEE).

Deep Learning for Graphs

Davide Bacciu
Università di Pisa


The tutorial will introduce the lively field of deep learning for graphs and its applications.  Dealing with graph data requires learning models capable of adapting to structured samples of varying size and topology, capturing the relevant structural patterns to perform predictive and explorative tasks while maintaining the efficiency and scalability necessary to process large scale networks. The tutorial will first introduce foundational aspects and seminal models for learning with graph structured data. Then it will discuss the most recent advancements in terms of deep learning for network and graph data, including learning structure embeddings, graph convolutions, attentional models and graph generation.
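To give a concrete flavour of the graph convolutions the tutorial covers, here is a minimal numpy sketch (an assumed illustration, not code from the tutorial materials) of one mean-aggregation graph-convolution layer: each node mixes its neighbours' features with its own and passes the result through a shared linear map.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: each node averages its neighbours'
    (and its own) features, then applies a shared linear map + ReLU.
    A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees (incl. self-loop)
    H_agg = (A_hat / deg) @ H               # mean aggregation over neighbours
    return np.maximum(0.0, H_agg @ W)       # shared weights + nonlinearity

# Tiny 3-node path graph: 0 - 1 - 2, with one-hot input features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)
W = np.ones((3, 2))                 # toy weights, normally learned
out = gcn_layer(A, H, W)
print(out.shape)                    # (3, 2)
```

Stacking several such layers lets information flow along longer paths in the graph, which is the basic mechanism behind the deeper models discussed in the tutorial.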

Tutorial Presenters (names with affiliations):

Davide Bacciu (Università di Pisa)

Tutorial Presenters’ Bios:

Davide Bacciu is Assistant Professor at the Computer Science Department, University of Pisa. The core of his research is on Machine Learning (ML) and deep learning models for structured data processing, including sequences, trees and graphs. He is the PI of an Italian national project on ML for structured data and the Coordinator of the H2020-RIA project TEACHING (2020-2023). He has been teaching courses on Artificial Intelligence (AI) and ML at undergraduate and graduate levels since 2010. He is an IEEE Senior Member, a member of the IEEE NN Technical Committee and of the IEEE CIS Task Force on Deep Learning, and an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. Since 2017 he has been the Secretary of the Italian Association for Artificial Intelligence (AI*IA).

External website with more information on Tutorial (if applicable):

Fast and Deep Neural Networks

 Claudio Gallicchio
University of Pisa (Italy)

Simone Scardapane
La Sapienza University of Rome (Italy)


Deep Neural Networks (DNNs) are a fundamental tool in the modern development of Machine Learning. Beyond the merits of properly designed training strategies, a great part of the success of DNNs is undoubtedly due to the inherent properties of their layered architectures, i.e., to the introduced architectural biases. In this tutorial, we analyze how far we can go by relying almost exclusively on these architectural biases. In particular, we explore recent classes of DNN models wherein the majority of connections are randomized, or more generally fixed according to some specific heuristic, leading to Fast and Deep Neural Network (FDNN) models. Examples of such systems are multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights implies a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is a striking advantage with respect to fully trained architectures. Moreover, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, allowing one to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers' connections). In recent years, the study of randomized neural networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models, in vectorial as well as in more complex data domains.

This tutorial will cover all the major aspects regarding the design and analysis of Fast and Deep Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, the tutorial will first introduce the fundamentals of randomized neural models in the context of feedforward networks (i.e., Random Vector Functional Link and equivalent models), convolutional filters, and recurrent systems (i.e., Reservoir Computing networks). Then, it will focus specifically on recent results in the domain of deep randomized systems, and their application to structured domains (trees, graphs).
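The defining trait of the randomized models above, frozen random hidden weights with only a trained linear readout, can be sketched in a few lines of numpy (an RVFL/ELM-style toy with illustrative sizes and data, not code from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, W_in, b):
    """Hidden layer with random, fixed (never trained) weights."""
    return np.tanh(X @ W_in + b)

def fit_readout(H, Y, reg=1e-6):
    """Train only the linear readout, via regularized least squares."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

# Toy regression task: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)

W_in = rng.normal(size=(1, 100))   # random input-to-hidden weights, frozen
b = rng.normal(size=(1, 100))      # random biases, frozen
H = random_features(X, W_in, b)
W_out = fit_readout(H, Y)          # the only trained parameters
mse = np.mean((H @ W_out - Y) ** 2)
```

Because training reduces to a single linear solve, fitting is orders of magnitude cheaper than gradient-based training of all weights, which is the efficiency argument made above.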

Tutorial Presenters (names with affiliations):

Claudio Gallicchio, University of Pisa (Italy)

Simone Scardapane, La Sapienza University of Rome (Italy)

Tutorial Presenters’ Bios:

Claudio Gallicchio is Assistant Professor at the Department of Computer Science, University of Pisa. He is Chair of the IEEE CIS Task Force on Reservoir Computing and a member of the IEEE CIS Data Mining and Big Data Analytics Technical Committee and of the IEEE CIS Task Force on Deep Learning. He has organized several events (special sessions and workshops) at major international conferences (including IJCNN/WCCI, ESANN and ICANN) on themes related to Randomized Neural Networks, and serves on the program committees of several conferences and workshops in Machine Learning and Artificial Intelligence. He has been an invited speaker at several national and international conferences. His research interests include Machine Learning, Deep Learning, Randomized Neural Networks, Reservoir Computing, Recurrent and Recursive Neural Networks, and Graph Neural Networks.

Simone Scardapane is Assistant Professor at the “Sapienza” University of Rome. He is active as a co-organizer of special sessions and special issues on themes related to Randomized Neural Networks and randomized Machine Learning approaches. His research interests include Machine Learning, Neural Networks, Reservoir Computing, Randomized Neural Networks, Distributed and Semi-supervised Learning, Kernel Methods, and Audio Classification. He is an Honorary Research Fellow with the CogBID Laboratory, University of Stirling, U.K., a co-organizer of the Rome Machine Learning & Data Science Meetup, which holds monthly events in Rome, and a member of the advisory board of Codemotion Italy. He is also a co-founder of the Italian Association for Machine Learning, a not-for-profit organization promoting machine learning to the public. In 2017 he was certified as a Google Developer Expert for machine learning. Currently, he is the track director of the CNR-sponsored “Advanced School of AI”.


External website with more information on Tutorial (if applicable):

Deep Stochastic Learning and Understanding

Jen-Tzung Chien
National Chiao Tung University



This tutorial addresses advances in deep Bayesian learning for sequence data, which are ubiquitous in speech, music, text, image, video, web, communication and networking applications. Spatial and temporal contents are analyzed and represented to fulfill a variety of tasks, ranging from classification, synthesis, generation, segmentation, dialogue, search, recommendation, summarization, question answering, captioning, mining, translation and adaptation, to name a few. Traditionally, “deep learning” is taken to be a learning process where the inference or optimization is based on a real-valued deterministic model. The “latent semantic structure” in words, sentences, images, actions, documents or videos learned from data may not be well expressed or correctly optimized in mathematical logic or computer programs. The “distribution function” in discrete or continuous latent variable models for spatial and temporal sequences may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models, including the recurrent neural network, sequence-to-sequence model, variational auto-encoder (VAE), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior/posterior representation is also addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in sequence data. Variational inference and sampling methods are formulated to tackle the optimization of complicated models. The embeddings, clustering or co-clustering of words, sentences or objects are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian learning and understanding. Finally, we will point out a number of directions and outlooks for future studies.
This tutorial serves the objectives to introduce novices to major topics within deep Bayesian learning, motivate and explain a topic of emerging importance for natural language understanding, and present a novel synthesis combining distinct lines of machine learning work.
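As a small worked example of one building block listed above, the VAE's reparameterization trick and the closed-form Gaussian KL term can be sketched in numpy (a toy illustration with made-up numbers, not code from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: sample z ~ N(mu, sigma^2) as a
    deterministic function of (mu, sigma) plus external noise eps, so the
    sampling step stays differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions;
    this is the regularizer in the VAE's variational lower bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu = np.array([0.5, -0.2])
log_var = np.array([0.0, 0.0])              # sigma = 1 in both dimensions
z = reparameterize(mu, log_var)             # one stochastic latent sample
print(kl_to_standard_normal(mu, log_var))   # ≈ 0.145
```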

Tutorial Presenters (names with affiliations):

Jen-Tzung Chien, National Chiao Tung University, Taiwan

Tutorial Presenters’ Bios:

Jen-Tzung Chien is Chair Professor at the National Chiao Tung University, Taiwan. He held a Visiting Professor position at the IBM T. J. Watson Research Center, Yorktown Heights, NY, in 2010. His research interests include machine learning, deep learning, computer vision and natural language processing. Dr. Chien served as an associate editor of the IEEE Signal Processing Letters in 2008-2011, as general co-chair of the IEEE International Workshop on Machine Learning for Signal Processing in 2017, and as a tutorial speaker at ICASSP in 2012, 2015 and 2017, INTERSPEECH in 2013 and 2016, COLING in 2018, and AAAI, ACL, KDD and IJCAI in 2019. He received the Best Paper Award of the IEEE Automatic Speech Recognition and Understanding Workshop in 2011 and the AAPM Farrington Daniels Award in 2018. He has published extensively, including the books “Bayesian Speech and Language Processing” (Cambridge University Press, 2015) and “Source Separation and Machine Learning” (Academic Press, 2018). He is currently serving as an elected member of the IEEE Machine Learning for Signal Processing Technical Committee.

External website with more information on Tutorial:

Physics of The Mind

Leonid I. Perlovsky
Harvard University


What is physics of the mind? Is it possible? Physics of the mind uses the methodology of physics for extending neural networks towards more realistic modeling of the mind from perception through the entire mental hierarchy including language, higher cognition and emotions. The presentation focuses on mathematical models of the fundamental principles of the mind-brain neural mechanisms and practical applications in several fields. Big data and autonomous learning algorithms are discussed for cybersecurity, gene-phenotype associations, medical applications to disease diagnostics, financial predictions, data mining in distributed data bases, learning of patterns under noise, interaction of language and cognition in mental hierarchy. Mathematical models of the mind-brain are discussed for mechanisms of concepts, emotions, instincts, behavior, language, cognition, intuitions, conscious and unconscious, abilities for symbols, functions of the beautiful and musical emotions in cognition and evolution. This new area of science was created recently and won National and International Awards.

A mathematical and cognitive breakthrough, dynamic logic, is described. It models cognitive processes “from vague and unconscious to crisp and conscious”: from vague representations, plans and thoughts to crisp ones. It has resulted in more-than-100-fold improvements in several engineering applications; brain-imaging experiments at Harvard Medical School and at several labs around the world have shown it to be a valid model for various brain-mind processes. New cognitive and mathematical principles are discussed: language-cognition interaction, the function of music in cognition, and the co-evolution of music and cultures. How does language interact with cognition? Do we think using language, or is language just a label for completed thoughts? Why has the ability for music evolved from animal cries to Bach and Elvis? I briefly review past mathematical difficulties of computational intelligence and the new mathematical techniques of dynamic logic and the neural networks implementing it, which overcome past limitations. Dynamic logic reveals the role of unconscious mechanisms, which will lead to a revolution in psychology.

The presentation discusses the cognitive functions of emotions: why human cognition needs the emotions of the beautiful, of music and of the sublime. Dynamic logic is related to the knowledge instinct and the language instinct; why are they different? How do languages affect the evolution of cultures? Language networks are scale-free and small-world; what does this tell us about cultural values? What are the biases of English, Spanish, French, German, Arabic and Chinese, and what is the role of language in cultural differences?

Relations between cognition, language, and music are discussed. Mathematical models of the mind and cultures bear on the contemporary world, and may be used to improve mutual understanding among peoples around the globe and to reduce tensions among cultures.

Leonid I. Perlovsky
Harvard University


Dr. Leonid Perlovsky is Visiting Professor at the Harvard University School of Engineering and Applied Science and at Harvard University Medical School, Professor of Psychology and of Engineering at Northeastern University, Professor at St. Petersburg Polytechnic University, CEO of LPIT, and past Principal Research Physicist and Technical Advisor at the Air Force Research Laboratory (AFRL). He leads research projects on neural networks, modeling the mind, and cognitive algorithms for the integration of sensor data with knowledge, multi-sensor systems, recognition, fusion, languages, aesthetic emotions, emotions of the beautiful, music cognition, and cultures. He developed dynamic logic, which overcame computational complexity in engineering and psychology. As Chief Scientist at Nichols Research, a $0.5B high-tech organization, he led the corporate research in intelligent systems and neural networks. He served as a professor at Novosibirsk University and New York University, and as a principal in commercial startups developing tools for biotechnology, text understanding, and financial predictions. His company predicted the market crash following 9/11 a week before the event. He is invited as a keynote, plenary and tutorial lecturer worldwide, including at the most prestigious venues, such as the Nobel Forum, and has published more than 600 papers, 20 book chapters, and 8 books, including “Neural Networks and Intellect” (Oxford University Press, 2001, currently in its 3rd printing), “Cognitive Emotional Algorithms” (Springer, 2011) and “Music: Passions and Cognitive Functions” (Academic Press, 2017). Dr. Perlovsky participates in organizing conferences on Neural Networks and CI, is past chair of the IEEE Boston CI Chapter, serves on the editorial boards of ten journals, including as Editor-in-Chief of “Physics of Life Reviews” (IF = 13.84, Thomson Reuters rank #1 in the world), and is a past member of the INNS Board of Governors and Chair of the INNS Award Committee.
He has received national and international awards, including the Gabor Award, the top engineering award from the INNS, and the John McLucas Award, the highest US Air Force award for basic research.

Accompanying text

I propose to include in the course registration, as an option, a new book from Academic Press, “Music: Passions and Cognitive Functions” (a mixture of science and popular science, $49.90). The book addresses the entire mind, from basic principles to learning mechanisms to higher cognition, including the mechanisms of musical emotions, the beautiful and meaning, and the co-evolution of culture and music. Aristotle, Kant and Darwin called music “the greatest mystery,” and contemporary evolutionary musicologists agree with Darwin: music is a millennial mystery, which has only now been understood.


From Brain to Deep Neural Networks

Saeid Sanei
Nottingham Trent University UK

Clive Cheong Took
Royal Holloway University of London UK


The aim of this tutorial is to provide a stepping stone for machine learning enthusiasts into the area of brain pathway modelling using innovative deep learning techniques, through processing and learning from the electroencephalogram (EEG). An insight into EEG generation and processing will give the audience a better understanding of the deep network structures used to learn and detect insightful information about deep brain function.
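To give a concrete flavour of the classic EEG processing that deep networks are increasingly expected to subsume, here is a toy numpy sketch (illustrative only, with a synthetic signal) of extracting a band-power feature from a one-channel recording:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band,
    computed from the FFT -- a classic hand-crafted EEG feature."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic one-second 'EEG' trace: a 10 Hz (alpha-band) rhythm plus noise
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

alpha = band_power(x, fs, 8, 12)    # band containing the 10 Hz component
beta = band_power(x, fs, 13, 30)    # band containing only noise
```

Here the alpha-band power dominates, as expected for the 10 Hz rhythm; features of this kind are the hand-designed inputs that end-to-end deep models aim to learn directly from the raw signal.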

Tutorial Presenters

Saeid Sanei, Nottingham Trent University UK

Clive Cheong Took, Royal Holloway University of London UK


Saeid Sanei is a full professor at Nottingham Trent University and a visiting professor at Imperial College London. He leads a group in which several young researchers work on EEG processing and its application to brain-computer interfaces (BCI). He has authored two research monographs on electroencephalogram (EEG) processing and pattern recognition. Saeid has delivered numerous workshops on EEG signal processing and machine learning, with diverse applications, all over the world, particularly in Europe, China and Singapore.

Clive Cheong Took is a senior lecturer (associate professor) at Royal Holloway, University of London. Clive has a background in machine learning and has investigated its applications to biomedical problems for more than 10 years. He has been an associate editor of the IEEE Transactions on Neural Networks and Learning Systems since 2013, and has co-organised special issues on deep learning for healthcare and security. At WCCI 2020, he will also co-organise a special session on Generative Adversarial Learning with Ariel Ruiz-Garcia, Vasile Palade, Jürgen Schmidhuber, and Danilo Mandic.

External Website


Adversarial Machine Learning: On The Deeper Secrets of Deep Learning

Danilo V. Vargas, Associate Professor
Faculty of Information Science and Electrical Engineering, Kyushu University


Recent research has found that Deep Neural Networks (DNNs) respond strangely to slight changes in the input. This tutorial will discuss this curious and still poorly understood behavior. Moreover, it will dig deep into the meaning of this behavior and its links to the understanding of DNNs.

In this tutorial, I will explain the basic concepts underlying adversarial machine learning and briefly review the state-of-the-art with many illustrations and examples. In the latter part of the tutorial, I will demonstrate how attacks are helping to understand the behavior of DNNs as well as show how many defenses proposed are not improving the robustness. There are still many challenges and puzzles left unsolved. I will present some of them as well as delineate a couple of paths to a solution. Lastly, the tutorial will be closed with an open discussion and promotion of cross-community collaborations.
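As a concrete flavour of the attacks discussed, the Fast Gradient Sign Method (one of the standard adversarial attacks) can be sketched on a one-layer logistic "network" in numpy; all numbers here are illustrative, and real attacks target deep models:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: perturb x by eps in the direction that
    increases the loss fastest (the sign of the input gradient)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(y = 1)
    grad_x = (p - y) * w                     # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])                    # toy 'trained' weights
b = 0.0
def predict(x):
    return int(w @ x + b > 0)

x = np.array([1.0, 0.5])                     # classified as class 1
x_adv = fgsm(x, 1.0, w, b, eps=0.9)          # attack the true label y = 1
print(predict(x), predict(x_adv))            # 1 0 : the decision flips
```

Even in this two-dimensional toy, a bounded per-coordinate shift flips the decision; in high-dimensional image space the same mechanism produces perturbations that are nearly invisible to humans.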

Tutorial Presenters (names with affiliations):

Danilo V. Vargas, Associate Professor at Kyushu University

Tutorial Presenters’ Bios:

Danilo Vasconcellos Vargas is currently an Associate Professor at Kyushu University, Japan. His research interests span Artificial Intelligence (AI), evolutionary computation, complex adaptive systems, interdisciplinary studies involving or using an AI perspective, and AI applications. Many of his works were published in prestigious journals such as Evolutionary Computation (MIT Press), IEEE Transactions on Evolutionary Computation and IEEE Transactions on Neural Networks and Learning Systems, with press coverage in outlets such as BBC News. He has received awards such as the IEEE Excellent Student Award and scholarships to study in Germany and Japan for many years. Regarding his community activities, he has presented two tutorials at the renowned GECCO conference.

Regarding adversarial machine learning, he has given more than five invited talks on the subject, including one at a CVPR 2019 workshop. He has authored more than 10 articles and three book chapters on adversarial machine learning; one of his research outputs was covered by BBC News (the paper “One pixel attack for fooling deep neural networks”).

Currently, he leads the Laboratory of Intelligent Systems, aimed at building a new age of robust and adaptive artificial intelligence. More information can be found on his website and his lab page.

External website with more information on Tutorial (if applicable):


Machine Learning for Data Streams in Python with scikit-multiflow

Jacob Montiel
University of Waikato

Heitor Murilo Gomes
University of Waikato

Jesse Read
École Polytechnique

Albert Bifet
University of Waikato


Data stream mining has gained a lot of attention in recent years as an exciting research topic. However, there is still a gap between pure research proposals and practical applications to real-world machine learning problems. In this tutorial we are going to introduce attendees to data stream mining procedures and to examples of big data stream mining applications. Besides the theory, we will also present examples using the scikit-multiflow framework, a novel open-source Python framework.
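The evaluation paradigm at the heart of stream mining, prequential or "test-then-train" evaluation, can be sketched in plain Python with an online perceptron (a toy illustration of the idea that frameworks like scikit-multiflow implement, not the framework's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlinePerceptron:
    """A minimal incremental learner: a perceptron updated one sample
    at a time, with no access to past data."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def predict(self, x):
        return int(self.w @ x + self.b > 0)

    def partial_fit(self, x, y):
        err = y - self.predict(x)   # 0 if correct, +/-1 otherwise
        self.w += err * x
        self.b += err

# Prequential evaluation over a synthetic stream: each arriving sample
# is first used to TEST the model, and only then to TRAIN it.
model = OnlinePerceptron(2)
correct, n = 0, 2000
for _ in range(n):
    x = rng.uniform(-1, 1, size=2)
    y = int(x[0] + x[1] > 0)            # linearly separable concept
    correct += model.predict(x) == y    # test...
    model.partial_fit(x, y)             # ...then train
print(correct / n)                      # accuracy climbs as the stream flows
```

Because every prediction is made before the label is consumed, the running accuracy honestly reflects performance on unseen data, which is why this protocol is standard for evolving streams.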

 Tutorial Presenters (names with affiliations): 

Jacob Montiel (University of Waikato), Heitor Murilo Gomes (University of Waikato), Jesse Read (École Polytechnique), Albert Bifet (University of Waikato)

 Tutorial Presenters’ Bios: 

Jacob Montiel

Jacob is a research fellow at the University of Waikato in New Zealand and the lead maintainer of scikit-multiflow. His research interests are in the field of machine learning for evolving data streams. Prior to focusing on research, Jacob led development work on onboard software for aircraft and engine prognostics at GE Aviation, working on the development of GE's Brilliant Machines, part of the IoT and GE's approach to Industrial Big Data.


Heitor Murilo Gomes

Heitor is a senior research fellow at the University of Waikato in New Zealand. His main research area is machine learning, especially evolving data streams, concept drift, ensemble methods and big data streams. He is an active contributor to the open data stream mining project MOA and a co-leader of the StreamDM project, a real-time analytics open-source software library built on top of Spark Streaming.


Jesse Read

Jesse is a Professor at the DaSciM team in LIX at École Polytechnique in France. His research interests are in the areas of Artificial Intelligence, Machine Learning, and Data Science and Mining. Jesse is the maintainer of the open-source software MEKA, a multi-label/multi-target extension to Weka.


Albert Bifet

Albert is a Professor at the University of Waikato and Télécom Paris. His research focuses on data stream mining, big data machine learning and artificial intelligence. The problems he investigates are motivated by large-scale data, the Internet of Things (IoT), and Big Data Science.

He co-leads the open source projects MOA (Massive On-line Analysis), Apache SAMOA (Scalable Advanced Massive Online Analysis) and StreamDM.



External website with more information on Tutorial (if applicable): NA

Experience Replay for Deep Reinforcement Learning
A Comprehensive Review

Leeds Beckett University, UK.

Coventry University, UK.

Primary contact ( )


Reinforcement learning (RL) is expected to play an important role in our AI and machine learning era, as is evident from its latest major advances, particularly in games. This is due to its flexibility and arguably minimal designer intervention, especially when the feature extraction process is left to a robust model such as a deep neural network. Although deep learning alleviated the long-standing burden of manual feature design, another important issue remains to be tackled: the experience-hungry nature of RL models, which is mainly due to bootstrapping and exploration. One important technique that will play a centre-stage role in tackling this issue is experience replay. Naturally, it allows us to capitalise on already gained experience and to shorten the time needed to train an RL agent. The frequency and depth of the replay can vary significantly, and a unifying view and a clear understanding of the issues related to off-policy and on-policy replay are currently lacking. For example, at the far end of the spectrum, extensive experience replay, although it should conceivably help reduce the data intensity of the training period, when done naively puts significant constraints on the practicality of the model and requires extra time and space that can grow significantly, rendering the method impractical. On the other hand, in its optimal form, whether it is a target re-evaluation or a re-update, when the importance sampling ratio uses bootstrapping, the method's computational requirements match those of other model-based RL methods for planning. In this tutorial we will tackle the issues and techniques related to the theory and application of deep reinforcement learning and experience replay, and show how and when these techniques can be applied effectively to produce a robust model. In addition, we will promote a unified view of experience replay that involves replaying and re-evaluating the target updates.
What is more, we will show that the generalised intensive experience replay method can be used to derive several important algorithms as special cases of other methods, including n-step true online TD and LSTD. This surprising but important view can greatly help the neuro-dynamic/RL community move this concept further forward, and will benefit both researchers and practitioners in their quest for better and more practical RL methods and models.


Deep reinforcement learning combined with experience replay allows us to capitalise on the gained experience, capping the model's appetite for new experience. Experience replay can be performed in several ways, some of which may or may not be suitable for the problem at hand. For example, intensive experience replay, if performed optimally, could shorten the learning cycle of an RL agent and allow it to be taken from the virtual arena, such as games and simulation, to the physical/mechanical arena, such as robotics. The type of intensive training required for RL models, which can be afforded by a virtual agent, may not be tolerated, or may at least not be desired, for a physical agent. Of course, one way to move to the mechanical world is to utilise model-free policy gradient (search) methods that are based on simulation, and then migrate/map the model to the physical world. However, constructing a simulation of the physical world is a tedious process that requires extra time and calibration, making it impractical for the kind of pervasive RL models that we hope to achieve. In both cases, whether policy gradient or value-function with policy iteration, experience replay plays an important role in making them more practical. For example, for policy gradient methods, adopting a softmax policy is preferable to an ε-greedy policy, as the former can asymptotically approach a deterministic policy after some exploration, while ε-greedy will always take a fixed percentage of exploratory actions regardless of the maturity of the policy being developed/improved.
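The softmax vs ε-greedy contrast above can be made concrete with a short sketch (standard textbook formulations; the action values and preferences are made up):

```python
import math
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a random action, otherwise the greedy one.
    The exploration rate stays fixed no matter how mature the policy is."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def softmax_probs(preferences, temperature=1.0):
    """Action probabilities concentrate on the best action as its preference
    grows, so the policy approaches a deterministic one asymptotically."""
    m = max(p / temperature for p in preferences)
    exps = [math.exp(p / temperature - m) for p in preferences]  # numerically stable
    z = sum(exps)
    return [e / z for e in exps]

# As one preference dominates, softmax exploration vanishes:
print(softmax_probs([1.0, 1.0]))   # uniform over both actions
print(softmax_probs([5.0, 0.0]))   # nearly deterministic
```

As the learned preference for the best action grows, the softmax probabilities approach a deterministic policy, whereas ε-greedy keeps exploring at the fixed rate ε.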

The tutorial is timely and novel in its treatment and packaging of the topic. It will lay the necessary foundation for a unified overview of the subject, which will allow other researchers to take it to the next level and let the subject area take off on solid and unified grounds.

It turns out that extensive experience replay can be used as a generalised model from which several modern n-step reinforcement learning techniques can be deduced, offering an easy way to unify several popular reinforcement learning methods and giving rise to several new ones.

In this tutorial I will give a detailed, step-by-step overview of the framework that allows us to safely deploy replay methods.
The tutorial will review all major advances in RL replay algorithms and will categorise them into occasional replay, regular replay, and intensive regular replay.

Bellman equations are the foundation of individual RL updates; however, all the n-step aggregate methods that have driven the latest RL breakthroughs need a different treatment.
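For reference, the n-step return that underlies these aggregate methods can be computed by replaying a stored trajectory (a textbook formulation, not the tutorial's unified algorithm; the states, rewards and value function below are illustrative):

```python
def n_step_return(trajectory, t, n, gamma, value):
    """G_{t:t+n} = r_{t+1} + ... + gamma^{n-1} r_{t+n} + gamma^n V(s_{t+n}).
    trajectory: list of (state, reward_on_arrival) pairs replayed from a buffer."""
    G, discount = 0.0, 1.0
    end = min(t + n, len(trajectory) - 1)
    for k in range(t + 1, end + 1):
        G += discount * trajectory[k][1]   # accumulate discounted rewards
        discount *= gamma
    if end == t + n:                        # bootstrap only if the trajectory continues
        G += discount * value(trajectory[end][0])
    return G

# A 3-state trajectory with V(s) = 0.5 everywhere and gamma = 1:
traj = [('s0', 0.0), ('s1', 1.0), ('s2', 2.0)]
V = lambda s: 0.5
print(n_step_return(traj, 0, 2, 1.0, V))  # 1 + 2 + V(s2) = 3.5
```

The single-step Bellman update is the special case n = 1; the replay view treats the whole family uniformly.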

The unified view through experience replay offers a new theoretical framework for studying the inner traits/properties of these techniques. For example, the convergence of several new RL algorithms can be proven by proving the convergence of the unified replay algorithm and then projecting each algorithm as a special case of the general method.

——————————–First part about one hour————————by A. Altahhan————-
• Deep Reinforcement Learning a concise review
• Traditional Replay and Types of Replay: Occasional, Regular, Intensive or Both
• Unified View of Experience Replay
o Replay Past Update vs Target Re-evaluation: how to integrate
o Off-policy vs On-policy Experience Replay
o The Role of Importance Sampling and Bootstrapping
o Emphatic TD and its Cousins
o Unifying Algorithm for Regular Intensive Sarsa(λ)-Replay
o N-step Methods as Special Cases of Experience Replay
o Policy Search Methods and Unified Replay
o Exploration, Exploitation and Replay
o Convergence of Replay Methods
——————————–Second part about one hour———————-by V. Palade————–
• Practical Considerations for DRL and Experience Replay:
o To Replay or Not to Replay!
o Time and Space Complexity of Replay
o Combining Deep Learning and Replay in a Meaningful Way
o Softmax or ε-Greedy for Replay
o Replay for Games vs Replay for Robotics
o Live Demonstration on the Game of Pac-Man
o Live Demo on Cheap Affordable Robot
o Audience Running the Code and Connecting to the Robot
• Q&A, Discussion and Closing Notes
• To develop a deep understanding of the capabilities and limitations of deep reinforcement learning
• To develop a unified view of the different types of experience replay and how and when to apply them in the context of deep reinforcement learning

• Demonstrate how to apply experience replay on policy search methods
• Demonstrate how to combine experience replay and deep learning
• Demonstrate first-hand the effect of replay on multiple platforms including Games and Robotics domains

Expected Outcomes
• To gain an in-depth understanding of recent developments in DRL and experience replay
• To gain an in-depth understanding of which update rules to adopt, on-policy or off-policy
• To contrast traditional replay with the more recent re-evaluation that has been termed replay

Target audience and session organisation:

The target audience is researchers and practitioners in the growing reinforcement learning community who are seeking a better understanding of the issues surrounding combining experience replay, deep learning and off-policy learning in their quest for more practical methods and models.

The tutorial will take 2 hours to be completed and will provide code that can be easily run on Octave or MATLAB.

The two-hour tutorial will be delivered in two integrated sections: the first will cover the theory and the second the application. The presenters will alternate between the theoretical part and the application part. Two applications will be covered: one is the game of Pac-Man and the other is a hacked mini robot that will learn to navigate in a small, 2x1m, easy-to-assemble arena. The robot is cheap and affordable, such as a Lego robot, and is equipped with a Raspberry Pi module and camera. It will use vision and deep learning combined with experience replay to learn to perform a homing task. The audience will be provided with the Octave/MATLAB code to experience the algorithms first-hand and see how they are developed from the inside. The code is general enough to be reused for other RL problems. For remote audiences the code will be shared on Git, so they will be able to experiment with the model directly, and a web camera will be mounted on top of the small robot arena to broadcast how the robot gradually learns to navigate towards its home, using vision to learn an optimal path and infra-red sensors for obstacle-avoidance behaviour. Those attending the tutorial can SSH into the controlling laptop, to which the intensive processing is off-loaded, in order to try and change the Octave code driving the robot and see its effect. If a VPN can be set up, the remote audience can be provided with the same experience.

Previous tutorials:

We gave a successful tutorial on RL and Deep Learning at IJCNN 2018 in Rio in July 2018.


Senior Lecturer in Computing Email: Dr Abdulrahman Altahhan has been teaching AI and related topics since 2000; he is currently a Senior Lecturer in Computing at Leeds Beckett University. He served as the Programme Director of the MSc in Data Science at Coventry University, UK. Previously, Dr Altahhan worked in Dubai as an Assistant Professor and Acting Dean. He received a PhD in Reinforcement Learning and Neural Networks in 2008 and an MPhil in Fuzzy Expert Systems. Dr Altahhan is actively researching in the area of Deep Reinforcement Learning applied to robotics and autonomous agents, with publications on this front. He has designed and developed a novel family of reinforcement learning methods and studied their underlying mathematical properties. Recently he established a new set of algorithms and findings in which he combined deep learning with reinforcement learning in a unique way that is hoped to contribute to the development of this new research area. He has presented at prestigious conferences and venues in the area of machine learning and neural networks. Dr Altahhan is a reviewer for important neural-network-related journals and venues from Springer and the IEEE, including the Neural Computing and Applications journal and the International Conference on Robotics and Automation (ICRA), and he serves on the programme committees of related conferences such as INNS Big Data 2016. Dr Altahhan has organised several special sessions on Deep Reinforcement Learning at IJCNN 2016 and IJCNN 2017 as well as the ICONIP 2016 and 2017 conferences. He is an EPSRC reviewer, has taught the Machine Learning, Neural Networks and Big Data Analysis modules in the MSc in Data Science, and is an IEEE Member, a member of the IEEE Computational Intelligence Society and the International Neural Network Society.


Professor: Pervasive Computing Email: Prof Vasile Palade joined Coventry University in 2013 as a Reader in Pervasive Computing, after working for many years as a Lecturer in the Department of Computer Science of the University of Oxford, UK. His research interests lie in the wide area of machine learning and encompass mainly neural networks and deep learning, neuro-fuzzy systems, various nature-inspired algorithms such as swarm optimization algorithms, hybrid intelligent systems, ensembles of classifiers, and class imbalance learning. Application areas include image processing, social network data analysis, bioinformatics problems, fault diagnosis, web usage mining, and health, among others. Dr Palade is author or co-author of more than 160 papers in journals and conference proceedings as well as books on computational intelligence and applications (which have attracted 4250 citations and an h-index of 29 according to Google Scholar). He has also co-edited several books, including conference proceedings. He is an Associate Editor for several reputed journals, such as Knowledge and Information Systems (Elsevier), Neurocomputing (Elsevier), International Journal on Artificial Intelligence Tools (World Scientific), and International Journal of Hybrid Intelligent Systems (IOS Press). He has delivered keynote talks at international conferences on machine learning and applications. Prof. Palade is an IEEE Senior Member and a member of the IEEE Computational Intelligence Society.

References that will be covered

[1] L.-J. Lin, “Self-improving reactive agents based on reinforcement learning, planning and teaching,” Machine Learning, vol. 8, no. 3, pp. 293-321, 1992.
[2] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, p. 529, 2015
[3] A. Altahhan, “TD(0)-Replay: An Efficient Model-Free Planning with full Replay,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-7.
[4] A. Altahhan, “Deep Feature-Action Processing with Mixture of Updates,” in 2015 International Conference on Neural Information Processing, Istanbul, Turkey, 2015, pp. 1-10.
[5] S. Zhang and R. S. Sutton, “A Deeper Look at Experience Replay,” eprint arXiv:1712.01275, 2017
[6] H. Vanseijen and R. Sutton, “A Deeper Look at Planning as Learning from Replay,” presented at the Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, 2015.
[7] Y. Pan, M. Zaheer, A. White, A. Patterson, and M. White, “Organizing Experience: A Deeper Look at Replay Mechanisms for Sample-based Planning in Continuous State Domains,” eprint arXiv:1806.04624, 2018.
[8] van Hasselt, H. and Sutton, R. S. (2015). Learning to predict independent of span. arXiv:1508.04582.
[9] H. van Seijen, A. Rupam Mahmood, P. M. Pilarski, M. C. Machado, and R. S. Sutton, “True Online Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, no. 145, pp. 1-40, 2016.
[10] Sutton, R. S. and Barto, A. G. (2017). Reinforcement Learning: An Introduction. 2nd Edition, Accessed online, MIT Press, Cambridge.
[11] Watkins, C.J.C.H. & Dayan, P., Q-learning, Mach Learn (1992) 8: 279.
[12] J. Modayil, A. White, and R. S. Sutton, “Multi-timescale nexting in a reinforcement learning robot,” Adaptive Behavior, vol. 22, no. 2, pp. 146-160, 2014/04/01 2014.
[13] D. Precup, R. S. Sutton, and S. Dasgupta, “Off-Policy Temporal Difference Learning with Function Approximation,” presented at the Proceedings of the Eighteenth International Conference on Machine Learning, 2001.
[14] R. S. Sutton, A. Rupam Mahmood, and M. White, “An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, pp. 1-29, 2016.
[15] H. Yu, “On Convergence of Emphatic Temporal-Difference Learning,” eprint arXiv:1506.02582, 2015.
[16] A. Hallak, A. Tamar, R. Munos, and S. Mannor, “Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis,” eprint arXiv:1509.05172, 2015.
[17] X. Gu, S. Ghiassian, and R. S. Sutton, “Should All Temporal Difference Learning Use Emphasis?,” eprint arXiv:1903.00194, 2019.
[18] M. P. Deisenroth, G. Neumann, J. Peters, et al., “A survey on policy search for robotics,” Foundations and Trends R in Robotics, vol. 2, no. 1–2, pp. 1–142, 2013.
[19] R. S. Sutton, C. Szepesvari, A. Geramifard, and M. P. Bowling, “Dyna-Style Planning with Linear Function Approximation and Prioritized Sweeping,” eprint arXiv:1206.3285, p. arXiv:1206.3285, 2012.

Mechanisms of Universal Turing Machines in Developmental Networks for Vision, Audition, and Natural Language Understanding

Juyang Weng
Department of Computer Science and Engineering
Cognitive Science Program
Neuroscience Program

Michigan State University, East Lansing, MI, 48824 USA
Tutorial URL:


Finite automata (i.e., finite-state machines) have been taught in almost all electrical engineering programs. However, Turing machines, especially universal Turing machines (UTM), have not been taught in many electrical engineering programs and were dropped as a required course in many computer science and engineering programs. This has resulted in a major knowledge weakness in many people working on neural networks and AI, since without knowing UTMs, researchers have considered neural networks merely as general-purpose function approximators, but not as general-purpose computers. This tutorial first briefly explains what a Turing machine is, what a UTM is, why a UTM is a general-purpose computer, and why Turing machines and UTMs are all symbolic and handcrafted. In contrast, a Developmental Network (DN) not only is a new kind of neural network, but also can learn to become a general-purpose computer by learning an emergent Turing machine. It does so by first taking a sequence of instructions as a user-provided program along with the data for the program to run on, and then running the program on the data. Therefore, a universal Turing machine inside a DN emerges autonomously on the fly. The DN learns UTM transitions one at a time, incrementally, without iterations, and refines UTM transitions from physical experience throughout the network's lifetime.
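The symbolic, handcrafted nature of a classical Turing machine, which the tutorial contrasts with emergent DN learning, can be seen in a toy simulator (illustrative only, not Weng's DN formalism; the machine and its transition table are made up):

```python
def run_tm(transitions, tape, state='q0', accept='halt', max_steps=1000):
    """Simulate a Turing machine from a handcrafted transition table:
    (state, symbol) -> (new_state, written_symbol, head move in {-1, +1})."""
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return ''.join(tape[i] for i in sorted(tape))
        state, tape[head], move = transitions[(state, tape.get(head, '_'))]
        head += move
    raise RuntimeError('step limit reached')

# A machine that flips every bit, then halts at the first blank:
flip = {
    ('q0', '0'): ('q0', '1', +1),
    ('q0', '1'): ('q0', '0', +1),
    ('q0', '_'): ('halt', '_', +1),
}
print(run_tm(flip, '0110'))  # -> '1001_'
```

Every transition here is designed by hand in advance; the point of a DN, by contrast, is that such transitions emerge incrementally from experience.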

Presenter Biographies:

Juyang Weng: Professor at the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program, Michigan State University, East Lansing, Michigan, USA. He is also a visiting professor at Fudan University, Shanghai, China. He received his BS degree from Fudan University in 1982, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in Computer Science. From August 2006 to May 2007, he was also a visiting professor at the Department of Brain and Cognitive Sciences of MIT. His research interests include computational biology, computational neuroscience, computational developmental psychology, biologically inspired systems, computer vision, audition, touch, behaviors, and intelligent robots. He is the author or coauthor of over 250 research articles. He is an editor-in-chief of the International Journal of Humanoid Robotics and an associate editor of the IEEE Transactions on Autonomous Mental Development. He has chaired and co-chaired several conferences, including the NSF/DARPA-funded Workshop on Development and Learning 2000 (1st ICDL), 2nd ICDL (2002), 7th ICDL (2008), 8th ICDL (2009), and INNS NNN 2008. He was the Chairman of the Governing Board of the International Conferences on Development and Learning (ICDLs) (2005-2007), chairman of the Autonomous Mental Development Technical Committee of the IEEE Computational Intelligence Society (2004-2005), an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an associate editor of the IEEE Transactions on Image Processing. He was the General Chair of the AIML Contest 2016 and taught BMI 831, BMI 861 and BMI 871, which prepared the contestants for the AIML Contest session at IJCNN 2017 in Alaska. He is a Fellow of the IEEE.



Evolution of Neural Networks

Risto Miikkulainen, The University of Texas at Austin and Cognizant Technology Solutions


Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
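The first category mentioned, evolving fixed-topology networks, can be sketched minimally as a (1+1) evolution strategy over the weight vector of a tiny tanh network on XOR (an illustrative toy; the topology, mutation scale and generation count are arbitrary):

```python
import math
import random

random.seed(0)

def forward(w, x1, x2):
    """A fixed-topology 2-2-1 network with tanh units; w packs its 9 weights."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error on XOR; higher is better.
    return -sum((forward(w, *x) - y) ** 2 for x, y in XOR)

# (1+1) evolution strategy: mutate the weight vector, keep the better individual.
init = [random.gauss(0, 1) for _ in range(9)]
best = list(init)
for _ in range(3000):
    child = [wi + random.gauss(0, 0.3) for wi in best]
    if fitness(child) > fitness(best):
        best = child
print(round(-fitness(best), 4))  # residual error after evolution
```

Topology-evolving methods such as NEAT additionally mutate the network structure, and evolutionary AutoML applies the same principle to deep architectures and their hyperparameters.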

Tutorial Presenters (names with affiliations):

Risto Miikkulainen, The University of Texas at Austin and Cognizant Technology Solutions

Tutorial Presenters’ Bios:

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and an AVP of Evolutionary AI at Cognizant Technology Solutions. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 430 articles in these research areas. He is an IEEE Fellow and a recipient of the 2020 IEEE CIS EC Pioneer Award, recent awards from INNS and ISAL, as well as nine Best Paper Awards at GECCO.

External website with more information on Tutorial (if applicable):

Probabilistic Tools for Analysis of Network Performance

RNDr. Věra Kůrková, DrSc.
Institute of Computer Science of the Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague, Czech  Republic


Due to the recent progress of hardware, neural networks with large numbers of parameters can perform classification and regression tasks on large data sets. In particular, randomized models and algorithms have turned out to be quite efficient for performing high-dimensional tasks. Some insight into the computation of such tasks can be obtained using a probabilistic approach, which shows that with increasing data dimension and network size, outputs tend to be sharply concentrated around precalculated values. This behavior can be explained by rather counter-intuitive properties of the geometry of high-dimensional spaces, which exhibit the "concentration of measure phenomenon". This phenomenon implies probabilistic inequalities on the concentration of values of random variables around their mean values, and enables reduction of data dimensionality by random projections.

This tutorial will present probabilistic tools for analysis of performance of neural networks on randomly selected tasks and for analysis of randomized algorithms. It will review recent results on choice of optimal networks for tasks described by probability distributions and on behaviour of networks with large numbers of randomly selected parameters. The tutorial will focus on the following topics:

• Counter-intuitive properties of the geometry of high-dimensional spaces and the "concentration of measure phenomenon"
• Correlation and quasi-orthogonal dimension
• Lévy's Lemma on concentration of values of smooth functions
• The Johnson-Lindenstrauss Lemma on random projections and reduction of dimension
• The Chernoff-Hoeffding Inequality on sums of large numbers of independent random variables
• The Azuma Inequality on functions of random variables
• Probabilistic inequalities holding without the "naive Bayes assumption"
• Implications of the probabilistic approach for the suitability of networks for tasks characterized by probability distributions and for the performance of randomized algorithms
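For instance, the Johnson-Lindenstrauss behaviour can be demonstrated numerically: a random linear map to a much lower dimension approximately preserves distances, with the distortion concentrating as the target dimension grows (a minimal pure-Python sketch; the dimensions and seed are arbitrary):

```python
import math
import random

random.seed(0)
d, k = 2_000, 400  # original and projected dimensions

# Two points in a high-dimensional space:
x = [random.gauss(0, 1) for _ in range(d)]
y = [random.gauss(0, 1) for _ in range(d)]

# One shared random projection with N(0, 1) entries, scaled by 1/sqrt(k)
# so that squared distances are preserved in expectation:
R = [[random.gauss(0, 1) for _ in range(d)] for _ in range(k)]

def project(v):
    s = 1 / math.sqrt(k)
    return [s * sum(r_i * v_i for r_i, v_i in zip(row, v)) for row in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

ratio = dist(project(x), project(y)) / dist(x, y)
print(f"distortion ratio: {ratio:.3f}")  # concentrates near 1 as k grows
```

The concentration-of-measure results listed above quantify exactly how tightly this ratio clusters around 1 as a function of k.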

The tutorial is self-contained, and is suitable for researchers who already use neural networks as a tool and wish to understand their mathematical foundations, capabilities and limitations. The tutorial does not require a sophisticated mathematical background.

Tutorial Presenter:

RNDr. Věra Kůrková, DrSc.

Institute of Computer Science of the Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague, Czech  Republic


Tutorial Presenter’s Bio:

Věra Kůrková received a Ph.D. in mathematics from Charles University, Prague, and a DrSc. (Prof.) in theoretical computer science from the Academy of Sciences of the Czech Republic. Since 1990 she has been affiliated with the Institute of Computer Science, Prague; in 2002-2009 she was the Head of the Department of Theoretical Computer Science. Her research interests are in the mathematical theory of neurocomputing and machine learning. Her work includes characterizations of relationships between networks of various types, estimates of their model complexities, and characterization of their capabilities of generalization and of processing high-dimensional tasks. Since 2008, she has been a member of the Board of the European Neural Network Society (ENNS), and in 2017-2019 its president. She is a member of the editorial boards of the journals Neural Networks and Neural Processing Letters, and she was a guest editor of special issues of the journals Neural Networks and Neurocomputing. She was the general chair of the European conferences ICANNGA 2001 and ICANN 2008, and co-chair or honorary chair of ICANN 2017, ICANN 2018, and ICANN 2019.

RANKING GAME: How to combine human and computational intelligence?
(A Cross-disciplinary tutorial)

Organizer: Péter Érdi (Henry Luce Professor of Complex Systems Studies, Kalamazoo College;

Wigner Research Centre for Physics, Budapest


Comparison, ranking and even rating are fundamental features of human nature. The goal of this tutorial is to explain the integrative aspects of the evolutionary, psychological, institutional and algorithmic aspects of ranking. Since we humans (1) love lists, (2) are competitive, and (3) are jealous of other people, we like ranking. The practice of ranking is studied in social psychology and political science, and the algorithms of ranking in computer science. Initial results on the possible neural and cognitive architectures behind rankings are also reviewed.

The tutorial is based on the book of the organizer:


RANKING: The Unwritten Rules of the Social Game We All Play, Oxford University Press, 2020

Tentative plan:
1. Why do we rank? How do we rank?
1.1 Comparison, ranking and rating
1.2 The social psychology of ranking
1.3 Biased ranking
1.4 Social ranking
Social ranking in animal societies
Pecking order
1.5 Struggle for reputation
2. Ranking in everyday life
2.1 Ranking countries
2.2 Ranking universities
3. A success story: ranking the web
3.1 PageRank and its variations
3.2. Rank reversal
3.3 Webometrics
4. Scientific journals and scientists
4.1 Publish and perish, but where? Impact factor, the fading superstar
4.2 h-index and its variations
5. Cognitive architectures for ranking: are lists perfectly designed for our brain?
6. Ranking, rating and everything else. The mystery of the future: how to combine
human and computational intelligence?
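The PageRank idea of item 3.1 can be sketched as power iteration (a textbook formulation with damping factor 0.85; the toy graph is made up for illustration):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration for PageRank: a page's rank is the stationary
    probability of a random surfer who follows an outgoing link with
    probability `damping` and jumps to a uniformly random page otherwise."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share    # p passes a share of its rank to q
        rank = new
    return rank

# Tiny example graph: every other page links to A, so A ranks highest.
graph = {'A': ['B'], 'B': ['A'], 'C': ['A'], 'D': ['A']}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # -> 'A'
```

The rank-reversal phenomena of item 3.2 arise because such stationary rankings can reorder when pages or links are added or removed.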


Dr. Péter Érdi serves as the Henry R. Luce Professor of Complex Systems Studies at Kalamazoo College. He is also a research professor in his home town, Budapest, at the Wigner Research Centre for Physics. In addition, he is the founding co-director of the Budapest Semester in Cognitive Science, a study abroad program. Péter is a Member of the Board of Governors of the International Neural Network Society, a past Vice President of the International Neural Network Society, and, among other roles, a past Editor-in-Chief of Cognitive Systems Research. He served as an Honorary Chair of IJCNN 2019, and is now serving as an IJCNN Technical Chair of the IEEE World Congress on Computational Intelligence in Glasgow. His books on mathematical modeling of chemical, biological, and other complex systems have been published by Princeton University Press, MIT Press, and Springer. His new book RANKING: The Unwritten Rules of the Social Game We All Play was published recently by Oxford University Press, and is already being translated into several languages. See also

Instance Space Analysis for Rigorous and Insightful Algorithm Testing

Prof. Kate Smith-Miles
School of Mathematics and Statistics, The University of Melbourne, Australia

Dr. Mario Andrés Muñoz
School of Mathematics and Statistics, The University of Melbourne, Australia


Algorithm testing often consists of reporting on-average performance across a suite of well-studied benchmark instances. The conclusions drawn from testing therefore depend on the choice of benchmarks. Hence, test suites should be unbiased, challenging, and contain a mix of synthetic and real-world-like instances with diverse structure. Without diversity, the conclusions drawn about future expected algorithm performance are necessarily limited. Moreover, on-average performance often disguises the strengths and weaknesses of an algorithm for particular types of instances. In other words, the standard benchmarking approach has two limitations that affect the conclusions: (a) there is no mechanism to assess whether the selected test instances are unbiased and diverse enough; and (b) there is little opportunity to gain insights into the strengths and weaknesses of algorithms when these are hidden by on-average performance metrics.

This tutorial introduces Instance Space Analysis (ISA), a visual methodology for algorithm evaluation that reveals the relationships between the structure of an instance and its impact on performance. ISA offers an opportunity to gain more nuanced insights into algorithm strengths and weaknesses for various types of test instances, and to objectively assess the relative power of algorithms, unbiased by the choice of test instances. A space is constructed whereby an instance is represented as a point in a 2-d plane, with algorithm footprints being the regions of predicted good performance of an algorithm, based on statistical evidence. From this broad instance space, we can assess the diversity of a chosen test set. Moreover, through ISA we can identify regions where additional test instances would be valuable to support greater insights. By generating new instances with controllable properties, an algorithm can be "stress-tested" under all possible conditions. The tutorial makes use of the on-line tools available at the Melbourne Algorithm Test Instance Library with Data Analytics (MATILDA) and provides access to its MATLAB computational engine. MATILDA also provides a collection of ISA results and other meta-data, available for download, for several well-studied problems from optimization and machine learning, drawn from previously published studies.
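A heavily simplified illustration of the footprint idea: project instances into a 2-d feature space and delimit the region where an algorithm performs well. ISA itself computes an optimised projection and statistically validated footprints via MATILDA; the bounding-box stand-in, feature values and threshold below are purely illustrative:

```python
# Each instance carries a 2-d feature vector and a performance score per algorithm.
instances = [
    {'features': (0.2, 0.9), 'algoA': 0.95, 'algoB': 0.60},
    {'features': (0.3, 0.8), 'algoA': 0.92, 'algoB': 0.55},
    {'features': (0.9, 0.2), 'algoA': 0.50, 'algoB': 0.97},
    {'features': (0.8, 0.1), 'algoA': 0.45, 'algoB': 0.93},
]

GOOD = 0.9  # performance threshold defining "good" for a footprint

def footprint(instances, algo, good=GOOD):
    """Bounding box, in the 2-d instance space, of the instances where
    `algo` performs well -- a crude stand-in for ISA's validated footprints."""
    pts = [i['features'] for i in instances if i[algo] >= good]
    xs, ys = zip(*pts)
    return (min(xs), min(ys)), (max(xs), max(ys))

print(footprint(instances, 'algoA'))  # region of algoA's strength
print(footprint(instances, 'algoB'))  # a disjoint region where algoB dominates
```

Non-overlapping footprints like these are exactly what reveals complementary algorithm strengths that on-average metrics would hide.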

Tutorial Presenters (names with affiliations):

Prof. Kate Smith-Miles
School of Mathematics and Statistics
The University of Melbourne, Australia

Dr. Mario Andrés Muñoz
School of Mathematics and Statistics
The University of Melbourne, Australia

Tutorial Presenters’ Bios:

Kate Smith-Miles is a Professor of Applied Mathematics in the School of Mathematics and Statistics at The University of Melbourne and holds a five year Laureate Fellowship from the Australian Research Council. Prior to joining The University of Melbourne in September 2017, she was Professor of Applied Mathematics at Monash University, and Head of the School of Mathematical Sciences (2009-2014). Previous roles include President of the Australian Mathematical Society (2016-2018), and membership of the Australian Research Council College of Experts (2017-2019). Kate was elected Fellow of the Institute of Engineers Australia (FIEAust) in 2006, and Fellow of the Australian Mathematical Society (FAustMS) in 2008.

Kate obtained a B.Sc(Hons) in Mathematics and a Ph.D. in Electrical Engineering, both from The University of Melbourne. Commencing her academic career in 1996, she has published 2 books on neural networks and data mining, and over 260 refereed journal and international conference papers in the areas of neural networks, optimization, data mining, and various applied mathematics topics. She has supervised 28 PhD students to completion and has been awarded over AUD$15 million in competitive grants, including 13 Australian Research Council grants and industry awards. MATILDA and the instance space analysis methodology has been developed through her 5-year Laureate Fellowship awarded by the Australian Research Council.

Mario Andrés Muñoz is a Researcher in Operations Research in the School of Mathematics and Statistics at the University of Melbourne. Before joining the University of Melbourne, he was a Research Fellow in Applied Mathematics at Monash University (2014-2014), and a Lecturer at the Universidad del Valle, Colombia (2008-2009).

Mario Andrés obtained a B.Eng. (2005) and a M.Eng. (2008) in Electronics from Universidad del Valle, Colombia, and a Ph.D. (2014) in Engineering from the University of Melbourne. He has published over 40 refereed journal and conference papers in optimization, data mining, and other inter-disciplinary topics. He has developed and maintains the MATLAB code that drives MATILDA.

External website with more information on Tutorial (if applicable):

Multi-modality for Biomedical Problems: Theory and Applications

 Dr. Sriparna Saha
Associate Professor, Department of Computer Science and Engineering
Indian Institute of Technology Patna

Mr. Pratik Dutta
PhD Research Scholar, Department of Computer Science and Engineering
Indian Institute of Technology Patna


With the exploration of omics technologies, researchers are able to collect high-throughput biomedical data. The explosion of these new frontier omics technologies produces diverse genomic datasets such as microarray gene expression, miRNA expression, DNA sequences, 3D structures, etc. These different representations (modalities) of biomedical data contain distinct, useful and complementary information about different samples. As a consequence, there is a growing interest in collecting "multi-modal" data for the same set of subjects and integrating this heterogeneous information to obtain more profound insights into the underlying biological system. This tutorial will discuss in detail different problems of bioinformatics and the concepts of multimodality in bioinformatics. In recent years, different machine learning and deep learning based approaches have become popular for dealing with multimodal data. Drawing on the above facts, this tutorial is a roadmap of existing deep multi-modal architectures for solving different computational biology problems. It will be an advanced survey equally of interest to academic researchers and industry practitioners, and is very timely given the vibrant research in the computational biology domain over the past 5 years. As IEEE WCCI is a prestigious conference for the discussion of neural network frontiers, this tutorial is very much relevant to IEEE WCCI.

Tutorial Presenters (names with affiliations): 

  1. Sriparna Saha, Associate Professor, Department of Computer Science and Engineering, Indian Institute of Technology Patna
  2. Pratik Dutta, PhD Research Scholar, Department of Computer Science and Engineering, Indian Institute of Technology Patna

Tutorial Presenters’ Bios: 

  1. Sriparna Saha: Sriparna Saha received the M.Tech. and Ph.D. degrees in computer science from the Indian Statistical Institute, Kolkata, India, in 2005 and 2011, respectively. She is currently an Associate Professor in the Department of Computer Science and Engineering, Indian Institute of Technology Patna, India. Her current research interests include machine learning, multi-objective optimization, evolutionary techniques, text mining and biomedical information extraction. She is the recipient of the Google India Women in Engineering Award 2008, NASI Young Scientist Platinum Jubilee Award 2016, BIRD Award 2016, IEI Young Engineers’ Award 2016, SERB Women in Excellence Award 2018 and SERB Early Career Research Award 2018. She is also the recipient of the DUO-India Fellowship 2020, Humboldt Research Fellowship 2016, Indo-U.S. Fellowship for Women in STEMM (WISTEMM) Women Overseas Fellowship 2018 and a CNRS fellowship. She received the India4EU fellowship of the European Union to work as a Post-doctoral Research Fellow at the University of Trento, Italy, from September 2010 to January 2011, and the Erasmus Mundus Mobility with Asia (EMMA) fellowship of the European Union to work as a Post-doctoral Research Fellow at Heidelberg University, Germany, from September 2009 to June 2010. She has visited the University of Caen, France, as a visiting scientist (October 2013, December 2013, May-July 2014 and May-June 2015); the University of Mainz, Germany, as a visiting scientist (April-September 2016, April-August 2017); the University of Kyoto, Japan, as a visiting professor (June-July 2018); the University of California, San Diego, as a visiting scientist (December 2018-February 2019); and Dublin City University, Ireland, as a visiting scientist (July 2019).
She won best paper awards at the CLINICAL-NLP workshop of COLING 2016, IEEE-INDICON 2015, and the International Conference on Advances in Computing, Communications and Informatics (ICACCI 2012). According to Google Scholar, her citation count is 3488, with an h-index of 24. For more information please refer to: sriparna.
  2. Pratik Dutta: Pratik Dutta is currently a PhD scholar in the Department of Computer Science and Engineering at the Indian Institute of Technology Patna. He received his BE and ME degrees from the Indian Institute of Engineering Science and Technology, Shibpur, in 2013 and 2015, respectively. His research interests lie in computational biology, genomic sequences, protein-protein interaction, and machine learning and deep learning techniques. He is the recipient of the Visvesvaraya PhD research fellowship, an initiative of the Ministry of Electronics and Information Technology (MeitY), Government of India (GoI). According to Google Scholar, his citation count is 34, with an h-index of 4. Over the last four years, he has extensively explored various frontiers of computational biology, particularly protein-protein interaction identification. He has published research articles in prestigious venues such as Computers in Biology and Medicine, IEEE/ACM Transactions on Computational Biology and Bioinformatics, IEEE Journal of Biomedical and Health Informatics, and Nature Scientific Reports.

External website with more information on Tutorial (if applicable):

Venue: With WCCI 2020 being held as a virtual conference, there will be a virtual experience of Glasgow, Scotland, accessible through the virtual platform. We hope that everyone will have a chance to visit one of Europe’s most dynamic cultural capitals and the “World’s Friendliest City” in the near future!