Schedule of tutorials - 19th July, 2020

11:30 - 13:30
14:00 - 16:00
16:30 - 18:30
19:00 - 21:00
11:30 - 13:30
Tutorial Title Presenter Conference Email
Adversarial Machine Learning: On The Deeper Secrets of Deep Learning Danilo Vargas IJCNN vargas@inf.kyushu-u.ac.jp
Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Evolution Nikola Kasabov IJCNN nkasabov@aut.ac.nz
Fundamentals of Fuzzy Networks Alexander Gegov, Farzad Arabikhan FUZZ alexander.gegov@port.ac.uk
Instance Space Analysis for Rigorous and Insightful Algorithm Testing Kate Smith-Miles, Mario Andrés Muñoz Acosta WCCI munoz.m@unimelb.edu.au
Advances in Deep Reinforcement Learning Thanh Thi Nguyen, Vijay Janapa Reddi, Ngoc Duy Nguyen IJCNN thanh.nguyen@deakin.edu.au
Selection, Exploration, and Exploitation Stephen Chen, James Montgomery CEC sychen@yorku.ca
Visualising the search process of EC algorithms Su Nguyen, Yi Mei, and Mengjie Zhang CEC P.Nguyen4@latrobe.edu.au
Evolutionary Machine Learning Masaya Nakata, Shinichi Shirakawa, Will Browne CEC nakata-masaya-tb@ynu.ac.jp
Evolutionary Many-Objective Optimization Hisao Ishibuchi, Hiroyuki Sato CEC h.sato@uec.ac.jp
14:00 - 16:00
Tutorial Title Presenter Conference Email
Deep Learning for Graphs Davide Bacciu IJCNN bacciu@di.unipi.it
Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting P N Suganthan IJCNN EPNSugan@ntu.edu.sg
Multi-modality Helps in Solving Biomedical Problems: Theory and Applications Sriparna Saha, Pratik Dutta WCCI pratik.pcs16@iitp.ac.in
Deep Stochastic Learning and Understanding Jen-Tzung Chien IJCNN jtchien@nctu.edu.tw
Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems José M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena FUZZ josemaria.alonso.moral@usc.es
Pareto Optimization for Subset Selection: Theories and Practical Algorithms Chao Qian, Yang Yu CEC qianc@lamda.nju.edu.cn
Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler Carola Doerr, Thomas Bäck, Ofer Shir, Hao Wang CEC h.wang@liacs.leidenuniv.nl
Differential Evolution Rammohan Mallipeddi, Guohua Wu CEC mallipeddi.ram@gmail.com
Evolutionary computation for games: learning, planning, and designing Julian Togelius, Jialin Liu CEC liujl@sustech.edu.cn
16:30 - 18:30
Tutorial Title Presenter Conference Email
From brains to deep neural networks Saeid Sanei, Clive Cheong Took IJCNN Clive.CheongTook@rhul.ac.uk
Evolution of Neural Networks Risto Miikkulainen IJCNN risto@cs.utexas.edu
Experience Replay for Deep Reinforcement Learning Abdul Rahman Al Tahhan, Vasile Palade IJCNN A.Altahhan@leedsbeckett.ac.uk
Fuzzy Systems for Neuroscience and Neuro-engineering Applications Javier Andreu, CT Lin FUZZ javier.andreu@essex.ac.uk
Dynamic Parameter Choices in Evolutionary Computation Carola Doerr, Gregor Papa  CEC gregor.papa@ijs.si
Evolutionary Computation for Dynamic Multi-objective Optimization Problems Shengxiang Yang CEC syang@dmu.ac.uk
Evolutionary Algorithms and Hyper-Heuristics Nelishia Pillay CEC npillay@cs.up.ac.za
Large-Scale Global Optimization Mohammad Nabi Omidvar, Antonio LaTorre CEC M.N.Omidvar@leeds.ac.uk 
Bilevel optimization Ankur Sinha, Kalyanmoy Deb CEC asinha@iima.ac.in
19:00 - 21:00
Tutorial Title Presenter Conference Email
How to combine human and computational intelligence? Peter Erdi WCCI Peter.Erdi@kzoo.edu
Machine learning for data streams in Python with scikit-multiflow Jacob Montiel, Heitor Gomes, Jesse Read, Albert Bifet IJCNN heitor.gomes@waikato.ac.nz
Deep randomized neural networks Claudio Gallicchio, Simone Scardapane IJCNN gallicch@di.unipi.it
Mechanisms of Universal Turing Machines: Vision, Audition, Natural Language, APFGP and Consciousness Juyang Weng IJCNN juyang.weng@gmail.com
Patch Learning: A New Method of Machine Learning, Implemented by Means of Fuzzy Sets Jerry Mendel FUZZ jmmprof@me.com
Self-Organizing Migrating Algorithm - Recent Advances and Progress in Swarm Intelligence Algorithms Roman Senkerik CEC senkerik@utb.cz
Large-Scale Global Optimization - PART 2 Mohammad Nabi Omidvar, Antonio LaTorre CEC M.N.Omidvar@leeds.ac.uk 
Recent Advances in Particle Swarm Optimization Analysis and Understanding Andries Engelbrecht, Christopher Cleghorn CEC engel@sun.ac.za
Nature-Inspired Techniques for Combinatorial Problems Malek Mouhoub CEC Malek.Mouhoub@uregina.ca
Niching Methods for Multimodal Optimization Xiaodong Li, Mike Preuss, Michael G. Epitropakis CEC xiaodong.li@rmit.edu.au

Selection, Exploration, and Exploitation

Abstract

The goal of exploration is to seek out new areas of the search space. The effect of selection is to concentrate search around the best-known areas of the search space. The power of selection can overwhelm exploration, allowing it to turn any exploratory method into a hill climber. The balancing of exploration and exploitation requires more than the consideration of what solutions are created — it requires an analysis of the interplay between exploration and selection.

This tutorial reviews a broad range of selection methods used in metaheuristics. Novel tools to analyze the effects of selection on exploration in the continuous domain are introduced and demonstrated on Particle Swarm Optimization and Differential Evolution. The difference between convergence (no exploratory search solutions are created) and stall (all exploratory search solutions are rejected) is highlighted. Remedies and alternate methods of selection are presented, and the ramifications for the future design of metaheuristics are discussed.
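The interplay described above can be seen in a few lines of code (a toy sketch; the test function, starting point, and step size are illustrative assumptions, not material from the tutorial): applying greedy selection to purely exploratory random samples yields a (1+1) hill climber, and once the best-known solution is good, nearly every exploratory sample is rejected.

```python
import math
import random

def rastrigin(x):
    # 1-D multimodal test function (an illustrative choice)
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

rng = random.Random(42)
best = 4.5                 # start near a poor local optimum
accepted, trials = 0, 2000
for _ in range(trials):
    candidate = best + rng.gauss(0, 2.0)        # wide, exploratory sampling
    if rastrigin(candidate) < rastrigin(best):  # greedy selection
        best = candidate                        # only improvements survive
        accepted += 1

acceptance_rate = accepted / trials
```

Exploration never stops in this loop, yet selection rejects almost all of it: the sampler behaves exactly like a hill climber, and the near-zero acceptance rate late in the run is a stall in the sense of the abstract, not convergence.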

Tutorial Presenters (names with affiliations):

Stephen Chen, Associate Professor, School of Information Technology, York University, Toronto, Canada

James Montgomery, Senior Lecturer, School of Technology, Environments and Design, University of Tasmania, Hobart, Australia

Tutorial Presenters’ Bios:

Stephen Chen is an Associate Professor in the School of Information Technology at York University in Toronto, Canada. His research focuses on analyzing the mechanisms for exploration and exploitation in techniques designed for multi-modal optimization problems. He is particularly interested in the development and analysis of non-metaphor-based heuristic search techniques. He has conducted extensive research on genetic algorithms and swarm-based optimization systems, and his 60+ peer-reviewed publications include 20+ that have been presented at previous CEC events.

James Montgomery is a Senior Lecturer in the School of Technology, Environments and Design at the University of Tasmania in Hobart, Australia. His research focuses on search space analysis and the design of solution representations for complex, real-world problems. He has conducted extensive research on ant colony optimization and differential evolution, and his 50+ peer-reviewed publications include 10+ that have been presented at previous CEC events.

External website with more information on Tutorial (if applicable):

https://www.yorku.ca/sychen/research/tutorials/CEC2020_Selection_Exploration_Exploitation_Tutorial.html

Visualising the search process of EC algorithms

Abstract

Evolutionary computation (EC) algorithms have been successfully applied to a wide range of artificial intelligence (AI) problems, ranging from function optimisation and production scheduling to evolutionary deep learning. EC researchers have continuously developed new techniques to enhance the performance of EC algorithms. However, it is still very challenging to fully understand the behaviours of these algorithms due to the complexity of solution representations and search operators. As a result, researchers mainly rely on performance results from experiments to suggest which algorithms perform better and to understand how novel features impact the final performance. In these studies, questions that usually remain unanswered are how better results are obtained and whether the proposed algorithms behave as conceptually designed. Thus, it is critical to have an analysis tool that can help researchers gain insights into how the algorithms work and capture useful emerging patterns.

This tutorial aims at demonstrating how visualisation can be used to help researchers gain insights about EC algorithms. In this tutorial, we will review the applications of visualisation in EC such as visualising performance and generated solutions and highlight a new visualisation framework to capture high-level evolutionary patterns of EC algorithms. The following main topics will be covered in this 1.5-hour tutorial:

  • Quick recap of EC algorithms and applications
  • A review of visualisation techniques for EC algorithms
  • AI-based visualisation (AIV) framework to reveal evolutionary patterns of EC algorithms
    • Dimensionality reduction
    • Topological data analysis
    • Visual analytics
  • Case studies:
    • Evolving classifiers using genetic programming and particle swarm optimisation
    • Automated design of production scheduling heuristics with genetic programming
    • Evolving artificial neural networks
  • Using Python to implement the AIV framework
  • From AIV framework to people-centric evolutionary systems

Tutorial Presenters (names with affiliations):

Su Nguyen (La Trobe University), Yi Mei and Mengjie Zhang (Victoria University of Wellington)

Tutorial Presenters’ Bios:

Su Nguyen is a Senior Research Fellow and Algorithm Lead at the Centre for Data Analytics and Cognition (CDAC), La Trobe University, Australia. He received his Ph.D. degree in Artificial Intelligence and Data Analytics from Victoria University of Wellington (VUW), Wellington, New Zealand, in 2013. His expertise includes computational intelligence, optimization, data analytics, large-scale simulation, and their applications in energy, operations management, and social networks. His current research focuses on novel people-centric artificial intelligence to enhance explainability and human-AI interaction by combining the power of evolutionary computation techniques and advanced machine learning algorithms such as deep learning and incremental learning. His work has been published in top peer-reviewed journals in evolutionary computation and operations research such as IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal, and Computers and Operations Research. He serves as a member of the IEEE CIS Technical Committee on Data Mining. He is the guest editor of a special issue on “Automated Design and Adaption of Heuristics for Scheduling and Combinatorial Optimization” in the Genetic Programming and Evolvable Machines journal. He also serves as a reviewer for high-quality journals and top conferences in evolutionary computation, operations research, data mining, and artificial intelligence.

Yi Mei is a Senior Lecturer at the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He received his BSc and PhD degrees from the University of Science and Technology of China in 2005 and 2010, respectively. His research interests include evolutionary computation in scheduling, routing and combinatorial optimisation, as well as evolutionary machine learning, genetic programming, feature selection and dimensionality reduction. He has more than 70 fully refereed publications, including in the top journals in EC and Operations Research (OR) such as IEEE TEVC, IEEE Transactions on Cybernetics, European Journal of Operational Research, and ACM Transactions on Mathematical Software. He is an Editorial Board Member of the International Journal of Bio-Inspired Computation and an Associate Editor of the International Journal of Applied Evolutionary Computation. He currently serves as a member of two IEEE CIS Technical Committees and a member of three IEEE CIS Task Forces. He is a guest editor of a special issue of the Genetic Programming and Evolvable Machines journal. He serves as a reviewer for over 25 international journals, including the top journals in EC and OR.

Mengjie Zhang is currently a Professor of computer science, the Head of the Evolutionary Computation Research Group, and the Associate Dean (Research and Innovation) with the Faculty of Engineering, Victoria University of Wellington, Wellington, New Zealand. He has published over 500 research papers in refereed international journals and conferences. His current research interests include evolutionary computation with application areas of image analysis, multiobjective optimization, feature selection and reduction, job shop scheduling, and transfer learning.

Dr. Zhang is a Fellow of the Royal Society of New Zealand and an IEEE Fellow. He was the Chair of the IEEE CIS Intelligent Systems and Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee, and the Evolutionary Computation Technical Committee, and a member of the IEEE CIS Award Committee. He is the Vice-Chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction and the Task Force on Evolutionary Computer Vision and Image Processing, and the Founding Chair of the IEEE Computational Intelligence Chapter in New Zealand. He is also a Committee Member of the IEEE NZ Central Section. He has been a Panel Member of the Marsden Fund (New Zealand Government Funding).

External website with more information on Tutorial (if applicable):

https://nguyensu.github.io/visevo/

 

Evolutionary Machine Learning

Abstract

A fusion of Evolutionary Computation and Machine Learning, namely Evolutionary Machine Learning (EML), has been recognized as a rapidly growing research area as these powerful search and learning mechanisms are combined. Many specific branches of EML, with different learning schemes and different ML problem domains, have been proposed. These branches seek to address common challenges:

  • How evolutionary search can discover optimal ML configurations and parameter settings,
  • How the deterministic models of ML can influence evolutionary mechanisms,
  • How EC and ML can be integrated into one learning model.

Consequently, various insights address principal issues of the EML paradigm that are worthwhile to “transfer” across these different specific challenges.

The goal of our tutorial is to introduce advanced techniques from specific EML branches and to share them as common insights into the EML paradigm. First, we introduce the common challenges in the EML paradigm and discuss how various EML branches address them. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e., Learning Classifier Systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is organized not only for beginners but also for experts in the EML field. For beginners, it will be a gentle introduction to EML, from basics to recent challenges. For experts, our two specific talks present the most recent advances in evolutionary rule-based learning and evolutionary neural networks. Additionally, we will discuss how insights from these techniques can be reused in other EML branches, shaping new directions for EML techniques.

Tutorial Presenters (names with affiliations):

  • Masaya Nakata, Associate Professor, Yokohama National University, Japan
  • Shinichi Shirakawa, Lecturer, Yokohama National University, Japan
  • Will Browne, Associate Professor, Victoria University of Wellington, NZ

Tutorial Presenters’ Bios:

Dr. Nakata is an associate professor in the Faculty of Engineering, Yokohama National University, Japan. He received his Ph.D. degree in informatics from the University of Electro-Communications, Japan, in 2016. He has been working on evolutionary rule-based machine learning, reinforcement learning, and data mining, more specifically on Learning Classifier Systems (LCSs). He was a visiting researcher at Politecnico di Milano, the University of Bristol, and Victoria University of Wellington. His contributions have been published as more than 10 journal papers and more than 20 conference papers, including at CEC, GECCO, and PPSN. He is an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning (2015-2016, 2018-2019) at the GECCO conference, elected by the international LCS research community. He received the IEEE CIS Japan Chapter Young Researcher Award.

Dr. Shirakawa is a lecturer in the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009. He has worked at Fujitsu Laboratories Ltd., Aoyama Gakuin University, and the University of Tsukuba. His research interests include evolutionary computation, machine learning, and computer vision. He is currently working on evolutionary deep neural networks. His contributions have been published as high-quality journal and conference papers in EC and AI, e.g., at CEC, GECCO, PPSN, and AAAI. He received the IEEE CIS Japan Chapter Young Researcher Award in 2009 and won the best paper award in the evolutionary machine learning track of GECCO 2017.

Associate Prof Will Browne’s research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and is currently the co-chair for the Evolutionary Machine Learning track at Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs ‘Introduction to Learning Classifier Systems, Springer 2017’. Currently, he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Evolutionary Many-Objective Optimization

Abstract

The goal of this tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades, and within the EMO area the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there is no significant difference between three-objective and four-objective problems; however, the increase in the number of objectives makes a multi-objective problem significantly more difficult. In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms.

After these explanations of the difficulties of many-objective optimization, we explain in the second part (Part II: Approaches and Future Directions) how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from weakening its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We categorize approaches to tackling many-objective optimization problems and explain some state-of-the-art many-objective algorithms in each category. The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, the attendees will also learn that there still exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
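The weakening of dominance-based selection pressure can be observed directly with a short, self-contained sketch (the point counts, sampling scheme, and function names are illustrative assumptions, not material from the tutorial): among objective vectors drawn uniformly at random, the fraction that no other vector Pareto-dominates grows rapidly with the number of objectives.

```python
import random

def dominates(a, b):
    # a Pareto-dominates b (minimisation of every objective)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(m, n=200, seed=0):
    # Fraction of n random points in [0,1]^m that no other point dominates
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(m)) for _ in range(n)]
    return sum(1 for p in pts if not any(dominates(q, p) for q in pts)) / n
```

With two objectives only a few percent of the points are non-dominated, while with ten objectives the vast majority are, so Pareto dominance alone can barely rank the population and the selection pressure towards the front collapses.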

Tutorial Presenters (names with affiliations):

Name: Hisao Ishibuchi

Affiliation: Southern University of Science and Technology

Name: Hiroyuki Sato

Affiliation: The University of Electro-Communications

Tutorial Presenters’ Bios:

Hisao Ishibuchi received the B.S. and M.S. degrees from Kyoto University in 1985 and 1987, respectively, and the Ph.D. degree from Osaka Prefecture University in 1992. He was with Osaka Prefecture University from 1987 to 2017. Since April 2017, he has been a Chair Professor at Southern University of Science and Technology, China. He received a JSPS Prize from the Japan Society for the Promotion of Science in 2007, best paper awards from GECCO 2004, 2017 and 2018 and from FUZZ-IEEE 2009 and 2011, and the IEEE CIS Fuzzy Systems Pioneer Award in 2019. Dr. Ishibuchi was an IEEE CIS Vice President in 2010-2013. Currently he is an AdCom member of the IEEE CIS (2014-2019) and the Editor-in-Chief of the IEEE Computational Intelligence Magazine (2014-2019). He is an IEEE Fellow.

Hiroyuki Sato received the B.E. and M.E. degrees from Shinshu University, Japan, in 2003 and 2005, respectively, and the Ph.D. degree from Shinshu University in 2009. He has worked at The University of Electro-Communications since 2009, where he is currently an associate professor. He received best paper awards in the EMO track at GECCO 2011 and 2014, and from the Transactions of the Japanese Society for Evolutionary Computation in 2012 and 2015. His research interests include evolutionary multi- and many-objective optimization and its applications. He is a member of IEEE and ACM/SIGEVO.

External website with more information on Tutorial (if applicable):

https://bit.ly/329qlRK

Pareto Optimization for Subset Selection: Theories and Practical Algorithms

Abstract

Pareto optimization is a general framework for solving single-objective optimization problems based on multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, then employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective optimization problem from the produced non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, information retrieval, etc. The theoretical understanding of Pareto optimization has recently been significantly developed, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection. We assume that the audience has basic knowledge of probability theory.
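The bi-objective transformation can be sketched on a toy maximum-coverage instance (the instance, parameter values, and all names below are illustrative assumptions, not code from the tutorial): the subset-size constraint becomes a second objective to minimise, a simple evolutionary loop maintains an archive of non-dominated bit masks, and the best feasible mask is returned at the end.

```python
import random

# Toy maximum-coverage instance (assumption): choose at most K of the
# candidate sets so that their union covers as many elements as possible.
SETS = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}, {6}]
K = 2

def coverage(mask):
    covered = set()
    for bit, s in zip(mask, SETS):
        if bit:
            covered |= s
    return len(covered)

def dominates(a, b):
    # Bi-objective comparison: maximise coverage, minimise subset size.
    fa, sa = coverage(a), sum(a)
    fb, sb = coverage(b), sum(b)
    return fa >= fb and sa <= sb and (fa > fb or sa < sb)

def pareto_subset_selection(iters=3000, seed=0):
    rng = random.Random(seed)
    n = len(SETS)
    archive = [tuple([0] * n)]              # start from the empty subset
    for _ in range(iters):
        parent = rng.choice(archive)
        child = tuple(b ^ (rng.random() < 1 / n) for b in parent)  # bit flips
        if any(a == child or dominates(a, child) for a in archive):
            continue                        # child adds nothing new
        archive = [a for a in archive if not dominates(child, a)] + [child]
    # Return the best archive member satisfying the original constraint.
    return max((a for a in archive if sum(a) <= K), key=coverage)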

Tutorial Presenters (names with affiliations):

Chao Qian, Nanjing University, China http://www.lamda.nju.edu.cn/qianc

Yang Yu, Nanjing University, China http://www.lamda.nju.edu.cn/yuy

Tutorial Presenters’ Bios:

Chao Qian is an Associate Professor in the School of Artificial Intelligence, Nanjing University, China. He received the BSc and PhD degrees in the Department of Computer Science and Technology from Nanjing University in 2009 and 2015, respectively. From 2015 to 2019, he was an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China. His research interests are mainly theoretical analysis of evolutionary algorithms and their application in machine learning. He has published one book, “Evolutionary Learning: Advances in Theories and Algorithms”, and more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica) and conferences (e.g., NIPS, IJCAI, AAAI). He has won the ACM GECCO 2011 Best Theory Paper Award and the IDEAL 2016 Best Paper Award. He is chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

Yang Yu is a Professor in the School of Artificial Intelligence, Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research area is machine learning and reinforcement learning. He was named one of AI’s 10 to Watch by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI’18, and received the Early Career Award of PAKDD in 2018.

Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler

Abstract

IOHprofiler is a new benchmarking environment that has been developed for a highly versatile analysis of iterative optimization heuristics (IOHs) such as evolutionary algorithms, local search algorithms, model-based heuristics, etc. A key design principle of IOHprofiler is its highly modular setup, which makes it easy for its users to add algorithms, problems, and performance criteria of their choice. IOHprofiler is also useful for the in-depth analysis of the evolution of adaptive parameters, which can be plotted against fixed targets or fixed budgets. The analysis of robustness is also supported.

IOHprofiler supports all types of optimization problems and is not restricted to a particular search domain. A web-based interface of its analysis procedure is available at http://iohprofiler.liacs.nl; the tool itself is available on GitHub (https://github.com/IOHprofiler/IOHanalyzer) and as a CRAN package (https://cran.rstudio.com/web/packages/IOHanalyzer/index.html).

The tutorial addresses all CEC participants interested in analyzing and comparing heuristic solvers. By the end of the tutorial, the participants will know how to benchmark different solvers with IOHprofiler, which performance statistics it supports, and how to contribute to its design.
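The fixed-target perspective mentioned above can be illustrated with a minimal sketch (this is not IOHprofiler's API; the function names, the log format, and the toy data are illustrative assumptions): from per-run logs of best-so-far function values, one computes the first-hitting time of a quality target per run and, from those, the standard expected running time (ERT) estimate, in which failed runs contribute their full budget.

```python
def fixed_target_hitting_times(trajectories, target):
    # For each run, the first evaluation count at which the best-so-far
    # f-value reaches the target (minimisation); None if never reached.
    # Each trajectory is a list of (evaluations, best_so_far) pairs.
    return [next((evals for evals, best in run if best <= target), None)
            for run in trajectories]

def expected_running_time(trajectories, target, budget):
    # ERT estimate: evaluations summed over all runs (failed runs count
    # their full budget), divided by the number of successful runs.
    hits = fixed_target_hitting_times(trajectories, target)
    successes = sum(1 for h in hits if h is not None)
    total = sum(h if h is not None else budget for h in hits)
    return float('inf') if successes == 0 else total / successes
```

For example, with one run hitting the target after 10 evaluations and one run failing within a budget of 100, the ERT for that target is (10 + 100) / 1 = 110 evaluations.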

Tutorial Presenters (names with affiliations): 

Thomas Bäck, Leiden University, The Netherlands,

Carola Doerr, CNRS and Sorbonne University, France,

Ofer M. Shir, Tel-Hai College and Migal Institute, Israel,

Hao Wang, Sorbonne University, France.

Tutorial Presenters’ Bios:

  • Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer’s Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
  • Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Her main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is/was program chair of PPSN 2020, FOGA 2019 and the theory tracks of GECCO 2015 and 2017. Carola serves on the editorial boards of ACM Transactions on Evolutionary Learning and Optimization and of Evolutionary Computation and was editor of two special issues in Algorithmica. Carola is vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Ofer M. Shir is the Head of the Computer Science Department of Tel-Hai College and a Principal Investigator at the Migal-Galilee Research Institute, both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.
  • Hao Wang obtained his PhD (cum laude, promotor: Prof. Thomas Bäck) from Leiden University in 2018. He is currently a postdoc at Sorbonne University (supervised by Carola Doerr) and has accepted a position as an Assistant Professor at the Leiden Institute of Advanced Computer Science from Sep. 2020. He received the Best Paper Award at PPSN 2016 and was a best paper award finalist at IEEE SMC 2017. His research interests are proposing, improving and analyzing stochastic optimization algorithms, especially Evolution Strategies and Bayesian optimization. In addition, he works on developing statistical machine learning algorithms for big and complex industrial data, and aims at combining state-of-the-art optimization algorithms with data mining/machine learning techniques to make real-world optimization tasks more efficient and robust.

 

External website with more information on Tutorial (if applicable): None

Differential Evolution with Ensembles, Adaptations and Topologies

Abstract

Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps as employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components, and its control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE and adaptive DE variants designed to improve the performance of DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose challenges to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
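The characteristic DE perturbation, i.e. adding the scaled difference of two distinct members to a third, can be sketched as a minimal DE/rand/1/bin loop (a sketch only; the population size, F, CR, and the bound handling are illustrative assumptions, not the tutorial's recommendations):

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=150, seed=0):
    """Minimise f over a box via the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct members, all different from the target i.
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)        # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])  # scaled difference
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)    # clamp to the search bounds
                else:
                    v = pop[i][j]              # inherit from the target vector
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(pop_size), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]
```

Because the offspring is built from differences within the current population, the step sizes shrink automatically as the population converges, which is why no separate probability distribution needs to be maintained for generating offspring.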

Duration: 1.5 hours

Intended Audience:  This presentation will cover the basics as well as advanced topics of DE. Hence, researchers commencing their research in DE as well as experienced researchers can attend. Practitioners will also benefit from the presentation.

Expected Enrollment: In the past, 40-50 attendees registered for the DE tutorials at CEC. We expect similar interest this year.

Name: Dr. Rammohan Mallipeddi and Dr. Guohua Wu

Affiliation: Kyungpook National University, South Korea, and Central South University, China.

Email: mallipeddi.ram@gmail.com, guohuawu@csu.edu.cn

Website: http://ecis.knu.ac.kr/, http://faculty.csu.edu.cn/guohuawu/en/index.htm


Goal: Differential evolution (DE) is one of the most successful numerical optimization paradigms; hence, practitioners and junior researchers would be interested in learning this optimization algorithm. The field of DE is also growing rapidly, so a tutorial on DE will be timely and beneficial to many of the CEC 2020 conference attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems using DE.

Format: The tutorial will be based primarily on PowerPoint slides. However, frequent interaction with the audience will be maintained.

Pertinent Qualification: The speakers have co-authored several original articles on DE. In addition, they have published a survey paper on ensemble strategies in population-based algorithms, including DE. The speakers have also been organizing numerical optimization competitions at the CEC conferences (EA Benchmarks / CEC Competitions), in which DE has been one of the top performers. As the organizers, the speakers will also be able to share their experiences.

Key Papers:

  • Guohua Wu, Rammohan Mallipeddi, and P. N. Suganthan, “Ensemble Strategies for Population-based Optimization Algorithms – A Survey,” Swarm and Evolutionary Computation, Vol. 44, pp. 695-711, 2019.
  • S. Das, S. S. Mullick, and P. N. Suganthan, “Recent Advances in Differential Evolution – An Updated Survey,” Swarm and Evolutionary Computation, Vol. 27, pp. 1-30, 2016.
  • S. Das and P. N. Suganthan, “Differential Evolution: A Survey of the State-of-the-Art,” IEEE Trans. on Evolutionary Computation, 15(1):4-31, Feb. 2011.

General Bio-sketch:

Name: Dr. Rammohan Mallipeddi

Affiliation: Kyungpook National University, South Korea.

Rammohan Mallipeddi is an Associate Professor in the School of Electronics Engineering, Kyungpook National University (Daegu, South Korea). He received his Master’s and PhD degrees in computer control and automation from the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, in 2007 and 2010, respectively. His research interests include evolutionary computing, artificial intelligence, image processing, digital signal processing, robotics, and control engineering. He has co-authored papers published in IEEE TEVC, etc. Currently, he serves as an Associate Editor for Swarm and Evolutionary Computation, an international journal from Elsevier, and as a regular reviewer for journals including IEEE TEVC and IEEE TCYB.

Name: Dr. Guohua Wu

Affiliation: Central South University, China.

Guohua Wu received the B.S. degree in Information Systems and the Ph.D. degree in Operations Research from the National University of Defense Technology, China, in 2008 and 2014, respectively. Between 2012 and 2014, he was a visiting Ph.D. student at the University of Alberta, Edmonton, Canada. He is now a Professor at the School of Traffic and Transportation Engineering, Central South University, Changsha, China. His current research interests include planning and scheduling, evolutionary computation, and machine learning. He has authored more than 50 refereed papers, including those published in IEEE TCYB, IEEE TSMCA, INS, and COR. He serves as an Associate Editor of Swarm and Evolutionary Computation, an editorial board member of the International Journal of Bio-Inspired Computation, and a Guest Editor of Information Sciences and Memetic Computing. He is a regular reviewer for more than 20 journals, including IEEE TEVC, IEEE TCYB, and IEEE TFS.

Evolutionary computation for games: learning, planning, and designing

Abstract

This tutorial introduces several techniques and application areas for evolutionary computation in games, such as board games and video games. We will give a broad overview of the use cases and popular methods for evolutionary computation in games, and in particular cover the use of evolutionary computation for learning policies (evolutionary reinforcement learning using neuroevolution), planning (rolling horizon and online planning), and designing (search-based procedural content generation). The basic principles will be explained and illustrated by examples from our own research as well as others’ research.

Tentative outline:

  • Introduction: who are we, what are we talking about?
  • Why do research on evolutionary computation and games?
  • Part 1: Playing games
    • Reasons for building game-playing AI
    • Characteristics of games (and how they affect game-playing algorithms)
    • Reinforcement learning through evolution
    • Neuroevolution
    • Planning with evolution
    • Single-agent games (rolling horizon evolution)
    • Multi-agent games (online evolution)
  • Part 2: Designing and developing games
    • The need for AI in game design and development
    • Procedural content generation
    • Search-based procedural content generation
    • Procedural content generation via machine learning (PCGML)
    • Game balancing
    • Game testing
    • Game adaptation
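
Rolling horizon evolution, mentioned in the outline above, evolves a short action sequence against a forward model at every game tick, executes only the first action of the best plan, and then re-plans. The sketch below is a minimal illustration with a toy game and hypothetical parameter choices of our own, not the presenters' implementation.

```python
import random

def rolling_horizon_evolution(state, forward_model, evaluate, actions,
                              horizon=10, pop_size=12, generations=15, seed=0):
    """Evolve an action sequence against a forward model; return the best plan."""
    rng = random.Random(seed)
    pop = [[rng.choice(actions) for _ in range(horizon)] for _ in range(pop_size)]

    def score(plan):
        s = state
        for a in plan:                       # roll the plan forward on the model
            s = forward_model(s, a)
        return evaluate(s)

    for _ in range(generations):
        elite = sorted(pop, key=score, reverse=True)[: pop_size // 2]
        pop = elite[:]                       # truncation selection keeps the elite
        while len(pop) < pop_size:           # refill with mutated copies of elites
            child = list(rng.choice(elite))
            child[rng.randrange(horizon)] = rng.choice(actions)
            pop.append(child)
    return max(pop, key=score)

# Toy game: walk on the integer line; the score is the final position, so the
# ideal plan is all +1 moves. In a real game the agent would execute plan[0]
# only, advance the true game state, and re-plan on the next tick.
plan = rolling_horizon_evolution(0, lambda s, a: s + a, lambda s: s, (-1, +1))
```

Executing only the first action and re-planning keeps the horizon short while still letting evolution look ahead, which is what distinguishes this planning use of evolution from evolving a fixed policy.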

Tutorial Presenters:

Julian Togelius

Associate Professor

Department of Computer Science and Engineering

Tandon School of Engineering

New York University

2 MetroTech Center, Brooklyn, NY 11201, USA

Co-director of the NYU Game Innovation Lab

Editor-in-Chief, IEEE Transactions on Games

julian@togelius.com

Jialin Liu

Research Assistant Professor

Optimization and Learning Laboratory (OPAL Lab)

Department of Computer Science and Engineering (CSE)

Southern University of Science and Technology (SUSTech)

Shenzhen, China

Associate Editor, IEEE Transactions on Games

liujl@sustech.edu.cn

Tutorial Presenters’ Bios:

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on artificial intelligence for games and games for artificial intelligence. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, generating games based on open data, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Games, and has been chair or program chair of several of the main conferences on AI and Games. Julian holds a BA from Lund University, an MSc from the University of Sussex, and a Ph.D. from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

Jialin Liu is currently a Research Assistant Professor in the Department of Computer Science and Engineering of the Southern University of Science and Technology (SUSTech), China. Before joining SUSTech, she was a Postdoctoral Research Associate at Queen Mary University of London (QMUL, UK) and one of the founding members of the Game AI research group at QMUL. Her research interests include AI and games, noisy optimisation, portfolios of algorithms, and meta-heuristics. Jialin is an Associate Editor of IEEE Transactions on Games. She has served as Program Co-Chair of the 2018 IEEE Conference on Computational Intelligence and Games (IEEE CIG 2018) and as Competition Chair of several main conferences on evolutionary computation and on AI and games. She also chairs the IEEE CIS Games Technical Committee.

Dynamic Parameter Choices in Evolutionary Computation

Abstract

One of the most challenging problems in solving optimization problems with evolutionary algorithms and other optimization heuristics is the selection of the control parameters that determine their behavior. In state-of-the-art heuristics, several control parameters need to be set, and their setting typically has an important impact on the performance of the algorithm. For example, in evolutionary algorithms, we typically need to choose the population size, the mutation strength, the crossover rate, the selective pressure, etc.
Two principal approaches to the parameter selection problem exist:
(1) parameter tuning, which asks to find parameters that are most suitable for the problem instances at hand, and
(2) parameter control, which aims to identify good parameter settings “on the fly”, i.e., during the optimization itself.
Parameter control has the advantage that no prior training is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
While parameter control is indispensable in continuous optimization, it is far from being well-established in discrete optimization heuristics. The ambition of this tutorial is therefore to change this situation, by informing participants about different parameter control techniques, and by discussing both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
Our tutorial addresses experimentally as well as theory-oriented researchers alike, and requires only basic knowledge of optimization heuristics.
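
One classical example of parameter control is the one-fifth success rule, which adapts the mutation strength of a (1+1)-ES "on the fly": the step size grows while more than a fifth of offspring improve (exploration) and shrinks otherwise (convergence). The sketch below is a minimal toy illustration; the test function and the adaptation constants are our own choices, not material from the tutorial.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iterations=500, seed=0):
    """(1+1)-ES minimising f, with 1/5-th success rule step-size control."""
    rng = random.Random(seed)
    fx, successes = f(x), 0
    for t in range(1, iterations + 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                       # adapt the step size periodically
            rate, successes = successes / 20, 0
            # Grow sigma while more than 1/5 of offspring succeed (explore),
            # shrink it otherwise (converge): control "on the fly".
            sigma *= 1.5 if rate > 0.2 else 0.82
    return x, fx, sigma

sphere = lambda v: sum(vi * vi for vi in v)
x, fx, sigma = one_plus_one_es(sphere, [5.0, 5.0, 5.0])
```

The point of the example is the non-static parameter: a fixed sigma either stalls far from the optimum or crawls near it, while the controlled sigma tracks the distance to the optimum as the run progresses.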

Tutorial Presenters (names with affiliations):

– Carola Doerr, Sorbonne University, Paris, France

– Gregor Papa, Jožef Stefan Institute, Ljubljana, Slovenia

Tutorial Presenters’ Bios:

  • Carola Doerr (Doerr@lip6.fr, http://www-ia.lip6.fr/~doerr/) is a permanent CNRS researcher at Sorbonne University in Paris, France. She studied Mathematics at Kiel University (Germany, 2003-2007, Diplom) and Computer Science at the Max Planck Institute for Informatics and Saarland University (Germany, 2010-2011, PhD). Before joining the CNRS she was a post-doc at Paris Diderot University (Paris 7) and the Max Planck Institute for Informatics. From 2007 to 2009, Carola Doerr worked as a business consultant for McKinsey & Company, where her interest in evolutionary algorithms originated.
    Carola Doerr’s main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes to carry huge unexplored potential for the community.
    Carola Doerr has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is chairing the program committee of FOGA 2019 and previously chaired the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues in Algorithmica. She is also vice chair of the EU-funded COST action 15140 on “Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)”.
  • Gregor Papa (papa@ijs.si, http://cs.ijs.si/papa/) is a Senior Researcher and Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received his PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.
    Gregor Papa’s research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on the dynamic setting of algorithms’ control parameters. His work has been published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004. He has led and participated in several national and European projects.
    Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.

External website with more information on Tutorial (if applicable):

TBA

Evolutionary Computation for Dynamic Multi-objective Optimization Problems

Abstract

Many real-world optimization problems involve multiple conflicting objectives to be optimized and are subject to dynamic environments, where changes may occur over time regarding optimization objectives, decision variables, and/or constraint conditions. Such dynamic multi-objective optimization problems (DMOPs) are inherently difficult and challenging. Yet, they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DMOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DMOPs has attracted a lot of research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well understood. This tutorial provides an introduction to the research area of EC for DMOPs and carries out an in-depth description of the state of the art in the field. The purpose is to (i) provide a detailed description and classification of DMOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations of how they work for DMOPs; (iii) present current applications in the area of EC for DMOPs; (iv) analyse current gaps and challenges in EC for DMOPs; and (v) point out future research directions in EC for DMOPs.

Tutorial Presenters (names with affiliations):

Prof. Shengxiang Yang, School of Computer Science and Informatics, De Montfort University, UK

Tutorial Presenters’ Bios:

Shengxiang Yang (http://www.tech.dmu.ac.uk/~syang/) received his PhD degree in Systems Engineering in 1999 from Northeastern University, China. He is now a Professor of Computational Intelligence (CI) and Director of the Centre for Computational Intelligence (http://www.cci.dmu.ac.uk/), De Montfort University (DMU), UK. He has worked extensively for 20 years in the areas of CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 280 publications in these domains, with over 9800 citations and an H-index of 53 according to Google Scholar. His work has been supported by UK research councils, EU FP7 and Horizon 2020, the Chinese Ministry of Education, and industry partners, with a total funding of over £2M, of which two EPSRC standard research projects have focused on EC for DMOPs.

He serves as an Associate Editor or Editorial Board Member of several international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, Information Sciences, Enterprise Information Systems, and Soft Computing. He was the founding chair of the Task Force on Intelligent Network Systems (TF-INS, 2012-2017) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs, 2011-2017) of the IEEE CI Society (CIS). He has organised/chaired over 60 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues. He has been invited to give over 10 keynote speeches at international conferences and workshops.


External website with more information on Tutorial (if applicable): None.

Evolutionary Algorithms and Hyper-Heuristics

Abstract

Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to solving problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flowshop, and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area.
The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined as well as the application of evolutionary algorithm hyper-heuristics for the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
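
To make the selection perturbative category concrete, a selection hyper-heuristic can be sketched as a loop that adaptively chooses among low-level move operators based on their past performance. The scoring and acceptance scheme below is a simple illustrative choice of our own, not the EvoHyp implementation, and the one-max problem stands in for a real combinatorial domain.

```python
import random

def selection_hyper_heuristic(f, solution, heuristics, iterations=2000, seed=0):
    """Maximise f by adaptively choosing among low-level perturbative heuristics.

    Each heuristic keeps a score; scores are rewarded on improvement and
    decayed otherwise, and selection is roulette-wheel over the scores.
    """
    rng = random.Random(seed)
    scores = [1.0] * len(heuristics)
    best, fbest = solution[:], f(solution)
    for _ in range(iterations):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[i](best[:], rng)   # apply the chosen heuristic
        fc = f(candidate)
        if fc >= fbest:                           # accept non-worsening moves
            if fc > fbest:
                scores[i] += 1.0                  # reward the improving heuristic
            best, fbest = candidate, fc
        else:
            scores[i] = max(0.1, scores[i] * 0.99)
    return best, fbest, scores

# Toy problem: one-max on a bit string, with two low-level heuristics.
def flip_one(s, rng):
    j = rng.randrange(len(s)); s[j] ^= 1; return s

def flip_two(s, rng):
    for j in rng.sample(range(len(s)), 2): s[j] ^= 1
    return s

best, fbest, scores = selection_hyper_heuristic(sum, [0] * 30, [flip_one, flip_two])
```

The hyper-heuristic searches over the space of heuristics rather than the space of solutions directly, which is what gives the approach its generality across problem domains.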

Tutorial Presenters’ Bios:

Nelishia Pillay is a professor and head of department in the Department of Computer Science, University of Pretoria. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She holds the Multichoice Joint Chair in Machine Learning. She is chair of the IEEE Task Force on Hyper-Heuristics, chair of the IEEE Task Force on Automated Algorithm Design, Configuration and Selection, vice-chair of the IEEE CIS Technical Committee on Intelligent Systems and Applications, and a member of the IEEE Technical Committee on Evolutionary Computation. She has served on program committees for numerous national and international conferences, is a reviewer for various international journals, and is an associate editor for IEEE Computational Intelligence Magazine and the Journal of Scheduling. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to combinatorial optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group which she has established.

External website with more information on Tutorial (if applicable):

https://sites.google.com/site/easandhyperheuristics/home

Evolutionary Large-Scale Global Optimization

Abstract

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [10], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 11]. Recent advances in machine learning have also witnessed very large-scale optimization problems encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [3, 7]. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems. Currently, there are two different approaches to tackling this complex search. The first is to apply decomposition methods, which divide the total set of variables into groups and optimize each group separately, reducing the curse of dimensionality; their main drawback is that choosing a proper decomposition can be very difficult and computationally expensive. The other approach is to design algorithms specifically for large-scale global optimization, whose features are well suited to that type of search. The tutorial is divided into two parts, each dedicated to exploring the advances in one of the approaches stated above, presented by experts in the respective field.

Part I: Introduction and Decomposition Methods

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we provide an overview of the recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods for dealing with large-scale problems.

This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms, including the differential grouping algorithm [8] and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks [5]. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms [9]. Overall, this tutorial takes the form of a critical survey of the existing methods, with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
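
The divide-and-conquer idea can be illustrated with a minimal round-robin cooperative coevolution loop: variables are split into groups, each group is optimized in turn while the rest of the shared context vector is held fixed. This toy sketch uses a static grouping and a simple greedy inner search rather than the differential grouping or contribution-based algorithms covered in the tutorial.

```python
import random

def cooperative_coevolution(f, dim, group_size=5, cycles=30,
                            iters_per_group=50, seed=0):
    """Minimise f by optimising one group of variables at a time (round-robin CC)."""
    rng = random.Random(seed)
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    x = [rng.uniform(-5, 5) for _ in range(dim)]    # shared context vector
    fx = f(x)
    for cycle in range(cycles):
        sigma = 0.3 * (0.9 ** cycle)                # shrink the step size over time
        for group in groups:                        # optimise each subcomponent in turn
            for _ in range(iters_per_group):
                y = x[:]
                for d in group:                     # perturb only this group's variables
                    y[d] += rng.gauss(0, sigma)
                fy = f(y)
                if fy <= fx:                        # greedy acceptance into the context
                    x, fx = y, fy
    return x, fx

# 20-dimensional sphere, decomposed into four groups of five variables.
x, fx = cooperative_coevolution(lambda v: sum(vi * vi for vi in v), dim=20)
```

On a fully separable function like this, any grouping works; the decomposition algorithms discussed in the tutorial exist precisely because real problems have interacting variables that must be grouped together.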

Part II: Algorithms and Design Considerations

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we introduce in further detail various algorithms specially designed for large-scale global optimization. These algorithms do not rely on any decomposition mechanism and tackle the optimization of all variables as a whole. Given the sheer size of the search space to explore, many approaches have been proposed, among which memetic algorithms that incorporate local search have shown better performance. Since these methods need to strike a good trade-off between exploration of the search space and exploitation of the current best solutions, we introduce ways of using local search and other techniques, such as restart mechanisms, to improve optimization performance on problems with a large number of decision variables. In recent years, proposals of algorithms for large-scale global optimization have increased significantly, as shown in Figure 1. We describe, in chronological order, the relevant algorithms in the topic, with more emphasis on some state-of-the-art algorithms such as MOS [4], considered the state of the art for several years until other modern algorithms such as [2, 6] improved on its results. We also give a critical view of the algorithms and of the challenges that remain in applying them to real-world problems. Overall, this part of the tutorial takes the form of a critical survey of existing and emerging algorithms, with an emphasis on the techniques used and on future challenges, not only to obtain better proposals but also to incorporate the existing ones into real-world problems.

Targeted Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. It is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges in order to bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community. The expected duration of each part is approximately 110 minutes.

Organizers’ Bio

Mohammad Nabi Omidvar (M’09) received his first bachelor’s degree (First Class Hons.) in computer science, his second bachelor’s degree in applied mathematics, and his Ph.D. degree in computer science from RMIT University, Melbourne, VIC, Australia, in 2010, 2014, and 2016, respectively. He is currently an Academic Fellow (Assistant Professor) with the School of Computing, University of Leeds, and Leeds University Business School, working on applications of artificial intelligence in financial services. Prior to joining the University of Leeds, he was a research fellow with the School of Computer Science, University of Birmingham, U.K. His current research interests include large-scale global optimization, decomposition methods for optimization, multiobjective optimization, and AI in finance. Dr. Omidvar was a recipient of the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his research on large-scale global optimization in 2017, the Australian Postgraduate Award in 2010, and the Best Computer Science Honours Thesis Award from the School of Computer Science and IT, RMIT University. In 2019 he jointly received the CEC 2019 Competition on Large-Scale Global Optimization award. He is also the chair of the IEEE Taskforce on Large-Scale Global Optimization.
Xiaodong Li (M’03-SM’07-Fellow’20) received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include machine learning, evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”. He was elevated to IEEE Fellow in 2020.
Daniel Molina Cabrera (PhD) is an assistant professor at the University of Granada. His research interests focus on numeric optimization, large-scale optimization, machine learning, and neuroevolution. He has more than 15 years of research experience, publishing more than 20 papers in international journals and more than 30 peer-reviewed contributions in national and international conferences. Until 2019 he was the Chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Antonio LaTorre (PhD) is an assistant professor at Universidad Politécnica de Madrid and deputy director of the Center for Computational Simulation. His research interests focus on high-performance data analysis, modeling, and optimization. He is active in applied research in the domains of logistics, neuroscience, and health. He has more than 14 years of research experience, backed by his participation in 14 national and international projects with both public and private funding, leading 3 of them. He has published more than 50 peer-reviewed contributions in international journals and conferences. He currently serves as vice-chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Yuan Sun is a research fellow in the School of Computer Science and Software Engineering, RMIT University. Prior to that, he obtained his Ph.D. degree from the University of Melbourne and a Bachelor’s degree from Peking University. His research interests include large-scale optimization, machine learning, and operations research. He has published actively in large-scale optimization and won the CEC 2019 Competition on Large-Scale Global Optimization.

Evolutionary Bilevel Optimization

Abstract

Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem in which the motion of the platform relates to the upper level optimization task of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing, and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the “nestedness” of one optimization task within another.

Keywords

Bilevel Optimization, Bilevel Multi-objective Optimization, Evolutionary Algorithms, Multi-Criteria Decision Making, Theory on Bilevel Programming, Hierarchical Decision Making, Bilevel Applications, Hybrid Algorithms

Tutorial Description

What is Bilevel Programming?

To begin with, an introduction is provided to bilevel optimization problems. A generic formulation for bilevel problems is presented and its differences from ordinary single level optimization problems are discussed.

Bilevel Problems: A Genesis
The origin of bilevel problems can be traced to two roots, namely game theory and mathematical programming. A genesis of these problems is provided through simple practical examples.

Properties of Bilevel Problems
The properties of bilevel optimization problems are discussed. Difficulties encountered in solving these problems are highlighted.

Applications
A number of practical applications from the areas of process optimization, game-playing strategy development, transportation problems, optimal control, environmental economics and coordination of multi-divisional firms are described to highlight the practical relevance of bilevel programming.

Solution Methodologies
Existing solution methodologies for bilevel optimization and their weaknesses are discussed. Lack of efficient methodologies is underlined and the need for better solution approaches is emphasized.

EAs Niche
Evolutionary algorithms provide a convenient framework for handling complex bilevel problems. Co-evolutionary ideas and flexibility in operator design can help in efficiently tackling the problem.

Multi-objective Extensions
Recent algorithms and results on multi-objective bilevel optimization using evolutionary algorithms are discussed and some application problems are highlighted.

Future Research Ideas
A number of immediate and future research ideas on bilevel optimization are highlighted related to decision making difficulties and robustness.

Conclusions
Concluding remarks for the tutorial are provided.

References
A list of references on bilevel optimization is provided.

Target Audience
Bilevel optimization belongs to a difficult class of optimization problems. Most of the classical optimization methods are unable to solve even simpler instances of bilevel problems. This offers a niche to the researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and it certainly requires attention of the researchers working on evolutionary computation. The target audience for this tutorial will be researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts on bilevel optimization and the recent results easily accessible to the audience.

Short Biography
Ankur Sinha is an Associate Professor at the Indian Institute of Management, Ahmedabad, India. He completed his PhD at the Helsinki School of Economics (now Aalto University School of Business), where his thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his PhD, he has held visiting positions at Michigan State University and Aalto University. His research interests include Bilevel Optimization, Multi-Criteria Decision Making and Evolutionary Algorithms. He has offered tutorials on Evolutionary Bilevel Optimization at GECCO 2013, PPSN 2014, and CEC 2015, 2017, 2018 and 2019. His research has been published in some of the leading Computer Science, Business and Statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page: http://www.iima.ac.in/~asinha/.

Kalyanmoy Deb is a Koenig Endowed Chair Professor at Michigan State University in Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in Computer Science in India over the preceding ten years. He is a fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society of Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 500 international journal and conference research papers. He has pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found at http://www.egr.msu.edu/people/profile/kdeb.

Self-Organizing Migrating Algorithm – Recent Advances and Progress in Swarm Intelligence Algorithms

Abstract

Self-Organizing Migrating Algorithm (SOMA) belongs to the class of swarm intelligence techniques. SOMA is inspired by competitive-cooperative behavior, uses inherent self-adaptation of movement over the search space, and employs discrete perturbation mimicking the mutation process. SOMA performs well in both continuous and discrete domains. The tutorial will cover several parts.

Firstly, the state of the art in the field of swarm intelligence algorithms, and the similarities and differences between various algorithms and SOMA, will be discussed.

The main part of the tutorial will present a collection of principal findings from original research papers discussing current research trends in parameter control, discrete perturbation, and novel improvement approaches based on and built with SOMA from the latest scientific events. New and very efficient strategies such as SOMA-T3A (4th place in the 100-digit competition), the recently published SASOMA, and SOMA-Pareto (6th place in the 100-digit competition) will be discussed in detail with demonstrations.

The tutorial will also describe our original concept for transforming the internal dynamics of swarm algorithms (including SOMA) into a social-like network capturing the social interactions amongst individuals. Analysis of such a network can then be used directly as feedback into the algorithm to improve its performance.

Finally, experience gathered over more than ten years with SOMA, demonstrated on various applications such as control engineering, cybersecurity, combinatorial optimization, and computer games, concludes the tutorial.

Tutorial Presenters (names with affiliations):

Name: Roman Senkerik
Affiliation: Tomas Bata University in Zlin, Department of Informatics and Artificial Intelligence
Email: senkerik@utb.cz


Tutorial Presenters’ Bios:

Roman Senkerik was born in Zlin, the Czech Republic, in 1981. He received an MSc degree in technical cybernetics from the Tomas Bata University in Zlin, Faculty of Applied Informatics, in 2004, the Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Assoc. Prof. degree in Informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at the Tomas Bata University in Zlin, Faculty of Applied Informatics. Since 2014 he has been an Associate Professor and Head of the A.I. Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters as well as editorial notes. His research interests are soft computing methods and their interdisciplinary applications in optimization and cyber-security, the development of evolutionary algorithms, machine learning, data science, chaos theory, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions/symposiums or IPC/TPC at IEEE WCCI, CEC, SSCI, GECCO, SEMCCO and MENDEL (and more) events. He has been a guest editor of several special issues of journals, and an editor of proceedings for several conferences.

Evolutionary Large-Scale Global Optimization – PART 2

Abstract

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [10], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 11]. Recent advances in machine learning have also witnessed very large-scale optimization problems encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [3, 7]. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems. Currently, there are two different approaches to tackling this complex search. The first is to apply decomposition methods, which divide the full set of variables into groups that can each be optimized separately, reducing the curse of dimensionality; their main drawback is that choosing a proper decomposition can be very difficult and computationally expensive. The other approach is to design algorithms specifically for large-scale global optimization, with features well suited to that type of search. The tutorial is divided into two parts, each dedicated to exploring the advances in one of the approaches stated above, presented by experts in the respective field.
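To make the decomposition idea above concrete, here is a minimal, self-contained sketch of cooperative co-evolution with random grouping: variables are split into groups and each group is improved in turn while the rest are held fixed. The per-group optimizer (a random-perturbation hill climber) and the sphere test function are illustrative stand-ins, not the algorithms covered in the tutorial.

```python
import random

def sphere(x):
    """Separable test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def cooperative_coevolution(f, dim, group_size, cycles, seed=0):
    """Minimise f by optimising one random group of variables at a time.

    A toy sketch of the decomposition approach: the variable indices are
    shuffled and sliced into groups each cycle, and each group is improved
    with a simple hill climber while the other variables stay fixed.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    for _ in range(cycles):
        indices = list(range(dim))
        rng.shuffle(indices)  # random grouping, re-drawn every cycle
        groups = [indices[i:i + group_size] for i in range(0, dim, group_size)]
        for group in groups:
            for _ in range(20):  # a few greedy improvement steps per group
                trial = x[:]
                for j in group:
                    trial[j] += rng.gauss(0, 0.5)
                val = f(trial)
                if val < best:
                    x, best = trial, val
    return x, best

x, best = cooperative_coevolution(sphere, dim=20, group_size=5, cycles=30)
print(best)  # far below the initial random solution's value
```

On non-separable problems the interacting variables should ideally end up in the same group, which is exactly what learned decompositions such as differential grouping aim for.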

Part I: Introduction and Decomposition Methods

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we provide an overview of the recent advances in the field of evolutionary large-scale global optimization with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods for dealing with large-scale problems.

This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of state-of-the-art decomposition algorithms, including the differential grouping algorithm [8] and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks [5]. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms, such as the contribution-based cooperative co-evolution family of algorithms [9]. Overall, this tutorial takes the form of a critical survey of the existing methods, with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.

Part II: Algorithms and Design Considerations

Presenters:

Mohammad Nabi Omidvar
Antonio LaTorre

In this part of the tutorial, we introduce in further detail various algorithms specially designed for large-scale global optimization. These algorithms do not rely on any decomposition mechanism and tackle the optimization of all variables as a whole. Given the sheer size of the search space to explore, many approaches have been proposed, among which memetic algorithms incorporating local search have shown better performance. Since they need to strike a good trade-off between exploration of the search space and exploitation of the current best solutions, we introduce ways of using local search and other techniques, such as restart mechanisms, to improve optimization performance on problems with a large number of decision variables. In recent years, the number of proposed algorithms for large-scale global optimization has increased significantly, as shown in Figure 1. We describe, in chronological order, different relevant algorithms in the topic, with more emphasis on state-of-the-art algorithms such as MOS [4], considered the state of the art for several years until other modern algorithms such as [2, 6] improved on its results. We also give a critical view of these algorithms and of the existing challenges in applying them to real-world problems. Overall, this part of the tutorial takes the form of a critical survey of existing and emerging algorithms, with an emphasis on the techniques used and on future challenges, not only to obtain better proposals but also to incorporate the existing ones into real-world problems.

Targeted Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state-of-the-art in large-scale global optimization. The tutorial is specifically targeted for Ph.D. students, and early career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges in the field to bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers as well
as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community. The expected duration of each part is approximately 110 minutes.

Organizers’ Bio

Mohammad Nabi Omidvar (M’09) received the first bachelor’s degree (First Class Hons.) in computer
science, the second bachelor’s degree in applied mathematics, and the Ph.D. degree in computer
science from RMIT University, Melbourne, VIC, Australia, in 2010, 2014, and 2016, respectively. He is
currently an Academic Fellow (Asst. Professor) with the School of Computing, University of Leeds, and
Leeds University Business School working on applications of artificial intelligence in financial services.
Prior to joining the University of Leeds, he was a research fellow with the School of Computer Science,
University of Birmingham, U.K. His current research interests include large-scale global optimization,
decomposition methods for optimization, multiobjective optimization, and AI in finance. Dr. Omidvar
was a recipient of the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for
his research on large-scale global optimization in 2017, the Australian Postgraduate Award in 2010, and
the Best Computer Science Honours Thesis Award from the School of Computer Science and IT, RMIT
University. In 2019 he jointly received the CEC 2019 Competition on Large-Scale Global Optimization
award. He is also the chair of IEEE Taskforce on Large-Scale Global Optimization.
Xiaodong Li (M’03-SM’07-Fellow’20) received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand.
He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University,
Melbourne, Australia. His research interests include machine learning, evolutionary computation,
neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence.
He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm
Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member
of IEEE CIS Task Force on Swarm Intelligence, a vice-chair of IEEE Task Force on Multi-modal
Optimization, and a former chair of IEEE CIS Task Force on Large Scale Global Optimization. He is the
recipient of 2013 ACM SIGEVO Impact Award and 2017 IEEE CIS “IEEE Transactions on Evolutionary
Computation Outstanding Paper Award”. He was elevated to IEEE Fellow in 2020.
Daniel Molina Cabrera (PhD) is an assistant professor at the University of Granada. His research interests focus on numerical optimization, large-scale optimization, machine learning, and neuroevolution. He has more than 15 years of research experience, having published more than 20 papers in international journals and more than 30 peer-reviewed contributions in national and international conferences. Until 2019 he was the Chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Antonio LaTorre (PhD) is an assistant professor at Universidad Politécnica de Madrid and deputy director of the Center for Computational Simulation. His research interests focus on high-performance data analysis, modeling and optimization. He conducts active research on applied problems in the domains of logistics, neuroscience and health. He has more than 14 years of research experience, backed up by his participation in 14 national and international projects with both public and private funding, leading 3 of them. He has published more than 50 peer-reviewed contributions in international journals and conferences. He currently serves as vice-chair of the IEEE CIS Task Force on Large Scale Global Optimization.
Yuan Sun is a research fellow in the School of Computer Science and Software Engineering, RMIT
University. Prior to that he obtained his Ph.D degree from University of Melbourne and a Bachelor’s
degree from Peking University. His research interests include large-scale optimization, machine learning,
and operations research. He has published actively in large-scale optimization and has won the CEC
2019 Competition on Large-Scale Global Optimization.

Recent Advances in Particle Swarm Optimization Analysis and Understanding

Abstract

The main objective of this tutorial will be to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that are actively hindering a practitioner’s successful use of PSO in solving challenging optimization problems. While the behaviour of PSO’s particles has been studied both theoretically and empirically since its inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour from which the PSO practitioner can now make better and more informed decisions about PSO and in particular make better PSO parameter selections.
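As background for the parameter-selection discussion above, the following is a minimal sketch of the canonical inertia-weight PSO (gbest topology). The specific values w = 0.7298 and c1 = c2 = 1.49618 are a widely used setting that lies inside the theoretically derived stable region for particle trajectories; the test function and all other details are illustrative, not taken from the tutorial itself.

```python
import random

def sphere(x):
    """Simple test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Canonical inertia-weight PSO with a global-best topology."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7298, 1.49618, 1.49618  # a common stability-informed setting
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social components
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere, dim=5)
print(best_val)
```

Choosing w, c1, c2 outside the stable region is one of the misconceptions the tutorial addresses: particles can then diverge or roam, and poor results get blamed on PSO itself rather than on the parameter choice.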

Presenters:

Andries Engelbrecht (Stellenbosch University, South Africa)

Christopher Cleghorn (University of Pretoria, South Africa)

Bios:

Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He is Voigt Chair in Data Science in the Department of Industrial Engineering, with a joint appointment as Professor in the Computer Science Division, Stellenbosch University. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these Computational Intelligence paradigms to data analytics, games, bioinformatics, finance, and difficult optimization problems. He is author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn received his Masters and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is a senior lecturer in Computer Science at the University of Pretoria, and a member of the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

URL: https://cirg.cs.up.ac.za/CEC/index.html

Nature-Inspired Techniques for Combinatorial Problems

Abstract

Combinatorial problems refer to those applications where we either look for the existence of a consistent scenario satisfying a set of constraints (decision problem), or for one or more good/best solutions meeting a set of requirements while optimizing some objectives (optimization problem). These latter objectives include user’s preferences that reflect desires and choices that need to be satisfied as much as possible. Moreover, constraints and objectives (in the case of an optimization problem) often come with uncertainty due to lack of knowledge, missing information, or variability caused by events, which are under nature’s control. Finally, in some applications such as timetabling, urban planning and robot motion planning, these constraints and objectives can be temporal, spatial or both. In this latter case, we are dealing with entities occupying a given position in time and space.

Because of the importance of these problems in so many fields, a wide variety of techniques and programming languages from artificial intelligence, computational logic, operations research and discrete mathematics are being developed to tackle problems of this kind. While these tools have produced very promising results at both the representation and the reasoning levels, they are still impractical for dealing with many real-world applications, especially given the challenges listed above.

In this tutorial, we will show how to apply nature-inspired techniques in order to overcome these limitations. This requires dealing with different aspects of uncertainty, change, preferences and spatio-temporal information. The approach that we will adopt is based on the Constraint Satisfaction Problem (CSP) paradigm and its variants.

Biography of the Speaker

Dr. Malek Mouhoub obtained his MSc and PhD in Computer Science from the University of Nancy in France. He is currently a Professor, and former Head, of the Department of Computer Science at the University of Regina, Canada. Dr. Mouhoub’s research interests include Constraint Solving, Metaheuristics and Nature-Inspired Techniques, Spatial and Temporal Reasoning, Preference Reasoning, Constraint and Preference Learning, with applications to Scheduling and Planning, E-commerce, Online Auctions, Vehicle Routing and Geographic Information Systems (GIS). Dr. Mouhoub’s research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Foundation for Innovation (CFI), and the Mathematics of Information Technology and Complex Systems (MITACS) federal grants, in addition to several other funds and awards.

Dr. Mouhoub is the past treasurer and member of the executive of the Canadian Artificial Intelligence Association / Association pour l’intelligence artificielle au Canada (CAIAC). CAIAC is the oldest national Artificial Intelligence association in the world. It is the official arm of the Association for the Advancement of Artificial Intelligence (AAAI) in Canada.


Dr. Mouhoub was the program co-chair for the 30th Canadian Conference on Artificial Intelligence (AI 2017), the 31st International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2018) and the IFIP International Conference on Computational Intelligence and Its Applications (IFIP CIIA 2018).

Niching Methods for Multimodal Optimization

Xiaodong Li, RMIT University, Melbourne, Australia
Mike Preuss, Universiteit Leiden, the Netherlands
Michael G. Epitropakis, The Signal Group, Athens, Greece

Date/Time: Sunday, July 19th from 7:00 – 9:00pm UK Time


Abstract

Population or single solution search-based optimization algorithms (i.e., meta-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include, among others, evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multi-modal” by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose the one most appropriate in his/her problem context. Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as “niching” methods, e.g., crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. In more recent times, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization (PSO) and Differential Evolution (DE). In this tutorial we will introduce niching methods, including their historical background, the motivation for employing niching in EAs, and the challenges in applying them to solving real-world problems. We will describe a few classic niching methods, such as fitness sharing and crowding, as well as niching methods developed using newer meta-heuristics such as PSO and DE. We will also describe a niching competition series run annually by the IEEE CIS Task Force on Multi-modal Optimization, hoping to attract more researchers to participate. Niching methods can be applied for effective handling of a wide range of problems including static and dynamic optimization, multiobjective optimization, clustering, feature selection, and machine learning. We will provide several such examples of solving real-world multimodal optimization problems.
This tutorial will use several demos to show the workings of niching methods. The tutorial is supported by the IEEE CIS Task Force on Multi-modal Optimization (http://www.epitropakis.co.uk/ieee-mmo/).
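To illustrate one of the classic niching methods named above, here is a minimal sketch of fitness sharing (a maximisation setting with a 1-D genotype distance for simplicity; the sharing radius and the example values are illustrative, not from the tutorial):

```python
def shared_fitness(population, raw_fitness, sigma_share=1.0, alpha=1.0):
    """Classic fitness sharing: divide raw fitness by the niche count.

    The niche count sums a triangular sharing kernel over all individuals
    within sigma_share of each individual, so crowded peaks are penalised
    and the population spreads across multiple optima.
    """
    shared = []
    for i, xi in enumerate(population):
        niche_count = 0.0
        for xj in population:
            d = abs(xi - xj)  # 1-D genotype distance for illustration
            if d < sigma_share:
                niche_count += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche_count)
    return shared

# Two individuals crowd one peak; a third sits alone on another peak.
pop = [0.0, 0.1, 5.0]
raw = [1.0, 1.0, 1.0]
# The lone individual keeps its full fitness of 1.0, while the crowded
# pair are each reduced to roughly 0.53, favouring the under-explored peak.
print(shared_fitness(pop, raw))
```

Crowding-style methods achieve a similar effect differently: instead of rescaling fitness, offspring compete for survival only against similar individuals.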

Duration: 1.5 hours.

Intended audience: This tutorial should be of interest to both beginners and more experienced researchers. The tutorial will provide a unique opportunity to get an update on the latest developments of this classic evolutionary computing topic, which has attracted increasing attention in recent years.

Tutorial Presenters (names with affiliations):

Xiaodong Li, RMIT University, Melbourne, Australia
Mike Preuss, Universiteit Leiden, the Netherlands
Michael G. Epitropakis, The Signal Group, Athens, Greece

Tutorial Presenters’ Bios:

Xiaodong Li received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization (niching), and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL’08, a Program Co-Chair of AI’09, a Program Co-Chair of IEEE CEC’2012, and a General Chair of ACALCI’2017 and AI’17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”.

Mike Preuss is Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Muenster, Germany), and before that with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests lie in the field of evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and in computational intelligence and machine learning methods for computer games, especially procedural content generation (PCG) and real-time strategy (RTS) games.

Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Senior Research Scientist and a Product Manager at The Signal Group, Athens, Greece. From 2015 to 2018 he was a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher on multi-modal optimization and a co-organizer of the special session and competition series on Niching Methods for Multimodal Optimization. He is a member of the IEEE Computational Intelligence Society.


Fundamentals of Fuzzy Networks

Alexander Gegov
University of Portsmouth, UK
Email: alexander.gegov@port.ac.uk

Website: http://www.port.ac.uk/school-of-computing/staff/dr-alexander-gegov.html

The tutorial focuses on the theoretical foundations of fuzzy networks. The nodes of these networks are fuzzy systems represented by rule bases and the connections between the nodes are outputs from and inputs to these rule bases [1]-[6].

Fuzzy networks have an underlying two-dimensional grid structure with horizontal levels and vertical layers. The levels represent spatial hierarchy in terms of network breadth and the layers represent temporal hierarchy in terms of network depth.

The nodes of fuzzy networks are modelled by Boolean matrices or binary relations. The connections between the nodes are modelled by block schemes or topological expressions. Each network node is located in a cell within the underlying grid structure.

Nodes in fuzzy networks are manipulated by merging and splitting operations. The merging operations are for network analysis and the splitting operations are for network design. These operations are used for converting a fuzzy network into a fuzzy system and vice versa.
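As an illustrative sketch (not the tutorial's exact formalism), merging two nodes connected in sequence can be modelled as the composition of their Boolean relations: the relation of the merged node maps the first node's input terms directly to the second node's output terms.

```python
def compose(r1, r2):
    """Boolean composition of two binary relations given as 0/1 matrices.

    r1 relates input terms to intermediate terms, r2 relates intermediate
    terms to output terms; the result is the merged node's input-output
    relation (entry [i][j] is True iff some intermediate term links them).
    """
    mids, cols = len(r2), len(r2[0])
    return [[any(row[k] and r2[k][j] for k in range(mids))
             for j in range(cols)] for row in r1]

# Node 1: 2 input terms -> 3 intermediate terms (hypothetical rule base).
r1 = [[1, 0, 1],
      [0, 1, 0]]
# Node 2: 3 intermediate terms -> 2 output terms (hypothetical rule base).
r2 = [[0, 1],
      [1, 0],
      [1, 1]]
print(compose(r1, r2))  # [[True, True], [True, False]]
```

Splitting works in the opposite direction, factoring one node's relation into a chain of smaller relations; both directions underpin the conversion between a fuzzy network and an equivalent fuzzy system.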

The operations are illustrated on feedforward and feedback fuzzy networks. Feedforward networks include combinations of narrow/broad and shallow/deep network structures. Feedback networks include combinations of single/multiple and local/global feedback loops.

Fuzzy networks are applied to case studies from engineering, computing, transport and finance. They are validated successfully against standard and hierarchical fuzzy systems. The validation uses performance evaluation indicators for feasibility, accuracy, efficiency and transparency.

[1] A. Gegov, Fuzzy Networks for Complex Systems: A Modular Rule Base Approach, Studies in Fuzziness and Soft Computing (Springer, Berlin, 2011)

[2] F. Arabikhan, Telecommuting Choice Modelling using Fuzzy Rule Based Networks, PhD Thesis (University of Portsmouth, UK, 2017)

[3] A. Gegov, F. Arabikhan and N. Petrov, Linguistic composition based modelling by fuzzy networks with modular rule bases, Fuzzy Sets and Systems 269 (2015) 1-29

[4] X. Wang, A. Gegov, F. Arabikhan, Y. Chen and Q. Hu, Fuzzy network based framework for software maintainability prediction, Uncertainty, Fuzziness and Knowledge Based Systems 27/5 (2019) 841-862

[5] A. Yaakob, A. Serguieva and A. Gegov, FN-TOPSIS: Fuzzy networks for ranking traded equities, IEEE Transactions on Fuzzy Systems 25/2 (2016) 315-332

[6] A. Yaakob, A. Gegov and S. Rahman, Fuzzy networks with rule base aggregation for selection of alternatives, Fuzzy Sets and Systems 341 (2018) 123-144

Presenters' Names and Affiliations:

Alexander Gegov, University of Portsmouth, UK, alexander.gegov@port.ac.uk http://www.port.ac.uk/school-of-computing/staff/dr-alexander-gegov.html

Farzad Arabikhan, University of Portsmouth, UK, farzad.arabikhan@port.ac.uk https://www.port.ac.uk/about-us/structure-and-governance/our-people/our-staff/farzad-arabikhan

Presenters' Bios:

Alexander Gegov is Reader in Computational Intelligence in the School of Computing, University of Portsmouth, UK. He holds a PhD in Control Systems and a DSc in Intelligent Systems – both from the Bulgarian Academy of Sciences. He has been a recipient of a National Annual Award for Best Young Researcher from the Bulgarian Union of Scientists. He has been Humboldt Guest Researcher at the University of Duisburg in Germany. He has also been EU Visiting Researcher at the University of Wuppertal in Germany and the Delft University of Technology in the Netherlands.

Alexander Gegov’s research interests are in the development of computational intelligence methods and their application to the modelling and simulation of complex systems and networks. He has edited 6 books, authored 5 research monographs and over 30 book chapters – most of these published by Springer. He has authored over 50 articles and 100 papers in international journals and conferences – many of these published and organised by IEEE. He has also presented over 20 invited lectures and tutorials – most of these at IEEE Conferences on Fuzzy Systems, Intelligent Systems, Computational Intelligence and Cybernetics.

Alexander Gegov is Associate Editor for ‘IEEE Transactions on Fuzzy Systems’, ‘Fuzzy Sets and Systems’, ‘Intelligent and Fuzzy Systems’ and ‘Computational Intelligence Systems’. He is also Reviewer for several IEEE journals and Assessor for several National Research Councils. He is Member of the IEEE Computational Intelligence Society and the Soft Computing Technical Committee of the IEEE Society of Systems, Man and Cybernetics. He is also Guest Editor for the forthcoming Special Issue on Deep Fuzzy Models of the IEEE Transactions on Fuzzy Systems.

Farzad Arabikhan joined the University of Portsmouth as a lecturer in 2017. He completed his PhD in 2017 at the University of Portsmouth, with a thesis focused on modelling telecommuting using fuzzy networks. In his research, he optimised fuzzy networks using genetic algorithms and data-mining approaches. Having published his research results in several journal and conference papers, he has also secured funding from the European Cooperation in Science and Technology (COST) to collaborate with European scholars at University Paris 1 Pantheon Sorbonne, Paris, France and the Mediterranean University of Reggio Calabria to pursue his research activities. He holds BSc and MSc degrees in Civil and Transportation Engineering from the Sharif University of Technology, Tehran, Iran.

Paving the way from Interpretable Fuzzy Systems to Explainable Artificial Intelligence Systems

José M. Alonso
Research Centre in Intelligent Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Campus Vida, E-15782, Santiago de Compostela, Spain
Email: (josemaria.alonso.moral@usc.es)

Website: https://citius.usc.es/v/jose-maria-alonso-moral

Ciro Castiello
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy
Email: ciro.castiello@uniba.it

Website: http://www.di.uniba.it/~castiello/

Corrado Mencar
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy
Email: corrado.mencar@uniba.it

Website: https://sites.google.com/site/cilabuniba/people/mencar

Luis Magdalena
Department of Applied Mathematics, School of Informatics
Universidad Politécnica de Madrid (UPM), Spain
Email: luis.magdalena@upm.es

Website: http://www.gsi.dit.upm.es/index.php/es/component/jresearch/?view=member&id=27&task=show

Abstract

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate and pre-process data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from data. Indeed, AI has been identified as “the most strategic technology of the 21st century” and is already part of our everyday life. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI, in short) for developing an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the last challenge stated by the USA Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans and provide comprehensible explanations of the decisions automatically made.

XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human could understand in a given context, and explicitly explaining such decisions. This way, it is possible to verify if automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.

Even though XAI systems are likely to make their impact felt in the near future, there is a lack of experts to develop the fundamentals of XAI, i.e., ready to develop and to maintain the new generation of AI systems that are expected to surround us soon. This is mainly due to the inherent multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous research fields. Moreover, it is hard to find XAI experts with a holistic view as well as wide and solid background regarding all the related topics.

Consequently, the main goal of this tutorial is to provide attendees with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and how to enhance human-machine interaction.

The tutorial will cover the main theoretical concepts of the topic, as well as examples and real applications of XAI techniques. In addition, ethical and legal aspects concerning XAI will also be considered.

Tutorial Presenters (names with affiliations):

José M. Alonso (josemaria.alonso.moral@usc.es)
Research Centre in Intelligent Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Campus Vida, E-15782, Santiago de Compostela, Spain

Website: https://citius.usc.es/v/jose-maria-alonso-moral

Ciro Castiello (ciro.castiello@uniba.it)
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy

Website: http://www.di.uniba.it/~castiello/

Corrado Mencar (corrado.mencar@uniba.it)
Department of Informatics
University of Bari “Aldo Moro”, Bari, Italy

Website: https://sites.google.com/site/cilabuniba/people/mencar

Luis Magdalena (luis.magdalena@upm.es)
Department of Applied Mathematics, School of Informatics
Universidad Politécnica de Madrid (UPM), Spain

Website: http://www.gsi.dit.upm.es/index.php/es/component/jresearch/?view=member&id=27&task=show

Tutorial Presenters’ Bios:

Jose M. Alonso received his M.S. and Ph.D. degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain, in 2003 and 2007, respectively. Since June 2016, he has been a postdoctoral researcher at the University of Santiago de Compostela, in the Research Centre in Intelligent Technologies (CiTIUS). He is currently Chair of the Task Force on “Fuzzy Systems Software” in the Fuzzy Systems Technical Committee of the IEEE Computational Intelligence Society, Associate Editor of the IEEE Computational Intelligence Magazine (ISSN: 1556-603X), Secretary of the ACL Special Interest Group on Natural Language Generation, and Chair of the Doctoral Consortium at the 2020 European Conference on Artificial Intelligence. He is currently coordinating the H2020-MSCA-ITN-2019 project “Interactive Natural Language Technology for Explainable Artificial Intelligence” (NL4XAI). He has published more than 130 papers in international journals, book chapters and peer-reviewed conferences. According to Google Scholar (accessed February 15, 2020), he has h-index=21 and i10-index=42. His research interests include computational intelligence, explainable artificial intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools.

Ciro Castiello graduated in Informatics in 2001 and received his Ph.D. in Informatics in 2005. Currently he is an Assistant Professor at the Department of Informatics of the University of Bari Aldo Moro, Italy. His research interests include: soft computing techniques, inductive learning mechanisms, interpretability of fuzzy systems, and eXplainable Artificial Intelligence. He participated in several research projects and published more than seventy peer-reviewed papers. He is also regularly involved in the teaching activities of his department. He is a member of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the INdAM research group GNCS (Italian National Group of Scientific Computing).

Corrado Mencar is Associate Professor in Computer Science at the Department of Computer Science of the University of Bari “A. Moro”, Italy. He graduated in Computer Science in 2000 and obtained his Ph.D. in Computer Science in 2005. In 2001 he was an analyst and software designer for some Italian companies. Since 2005 he has been working on research topics concerning Computational Intelligence and Granular Computing. As part of his research activity, he has participated in several research projects and has published over one hundred peer-reviewed international scientific publications. He is also Associate Editor of several international scientific journals, as well as Featured Reviewer for ACM Computing Reviews. He regularly organizes scientific events related to his research topics with international colleagues. His current research topics include fuzzy logic systems with a focus on interpretability and eXplainable Artificial Intelligence, Granular Computing, Computational Intelligence applied to the Semantic Web, and Intelligent Data Analysis. As part of his teaching activity, he is, or has been, the holder of numerous classes and PhD courses on various topics, including Computer Architectures, Programming Fundamentals, Computational Intelligence and Information Theory.

Luis Magdalena is with the Dept. of Applied Mathematics for ICT of the Universidad Politécnica de Madrid. From 2006 to 2016 he was Director General of the European Centre for Soft Computing in Asturias (Spain). Under his direction, the Center was recognized with the IEEE-CIS Outstanding Organization Award in 2012. Prof. Magdalena has been actively involved in more than forty research projects. He has co-author or co-edited ten books including “Genetic Fuzzy Systems”, “Accuracy Improvements in Linguistic Fuzzy Modelling”, and “Interpretability Issues in Fuzzy Modeling”. He has also authored over one hundred and fifty papers in books, journals and conferences, receiving more than 6000 citations. Prof. Magdalena has been President of the “European Society for Fuzzy Logic and Technologies”, Vice-president of the “International Fuzzy Systems Association” and is Vice-President for Technical Activities of the IEEE Computational Intelligence Society for the period 2020-21.

External website with more information on Tutorial (if applicable):

https://sites.google.com/view/tutorial-on-xai-ieee-wcci2020

 

Fuzzy Systems for Neuroscience and Neuro-Engineering

Javier Andreu-Perez
University of Essex
Email: javier.andreu@essex.ac.uk

Chin-Teng Lin
University of Technology Sydney

Abstract: This tutorial will introduce new researchers to the field of Neuroscience and Neuro-engineering from a fuzzy perspective. Attendees are not required to have prior knowledge of fuzzy systems or neuroscience. We will focus on brain research and decoding methods that use non-invasive neuroimaging modalities, and will present the latest and most outstanding works that have applied fuzzy systems to brain signals to date. Given the important challenges associated with processing brain signals obtained from neuroimaging modalities, fuzzy sets and systems have been proposed as a useful and effective framework for the analysis of brain activity as well as for enabling a direct communication pathway between the brain and external devices (brain-computer/machine interfaces). While there has been increasing interest in these questions, the contribution of fuzzy logic, sets and systems has varied depending on the area of application in neuroscience. With regard to decoding brain activity, fuzzy sets and systems are an excellent tool for overcoming the challenge of processing extremely complex signals that are likely to be affected by high uncertainty. In this tutorial we will also provide an introduction to the foundations of fuzzy sets, logic and systems for the analysis of brain signals and neuroimaging data, including related disciplines such as computational neuroscience, brain-computer/machine interfaces, neuroengineering, neuroinformatics, neuroergonomics, affective neuroscience and neurotechnology. After the tutorial, we will conduct an interactive survey and a panel discussion among the attendees.

Tutorial Presenters (names with affiliations): Javier Andreu-Perez (University of Essex, United Kingdom), Chin-Teng Lin (University of Technology Sydney, Australia)

Biographies:

Javier Andreu-Perez (SMIEEE) is Senior Lecturer in the School of Computer Science and Electronic Engineering (CSEE), University of Essex, United Kingdom (UK). He holds a PhD in Intelligent Systems from Lancaster University, UK. His research expertise lies in the development of new methods in artificial intelligence and machine learning within the healthcare domain, particularly in application-driven advances of AI and machine learning for the analysis of biomedical and neuroimaging data. He has expertise in the use of Big Data, machine learning models based on deep learning, and methodologies for uncertainty modelling of highly noisy, non-stationary signals. Javier has published highly cited papers in several prestigious IEEE Transactions and other top Q1 journals in Artificial Intelligence and Neuroscience; in total, his work in Artificial Intelligence and biomedical engineering has attracted more than 1,400 citations. Javier has participated in awarded projects funded by UK research councils and bodies such as Innovate UK, the NIHR Biomedical Research Centre and the Wellcome Trust Centre for Global Health Research, as well as private corporations. He is a member of the EPSRC peer review college, and Associate/Area Editor for Neurocomputing (Elsevier) and the International Journal of Computational Intelligence (the official EUSFLAT journal). Javier is also Chair of the IEEE CIS Task Force on Extensions to Type-1 Fuzzy and co-chair of an international working group on Uncertainty Modelling for Neuro-Engineering. He is a frequent organiser of special sessions and competitions at FUZZ-IEEE and WCCI on the use of fuzzy systems in brain research and interfaces.

Chin-Teng Lin (FIEEE) received the B.S. degree from National Chiao-Tung University (NCTU), Taiwan in 1986, and the Master's and Ph.D. degrees in electrical engineering from Purdue University, USA in 1989 and 1992, respectively. He is currently Distinguished Professor in the Faculty of Engineering and Information Technology, and Co-Director of the Center for Artificial Intelligence, University of Technology Sydney, Australia. He also holds an Honorary Chair Professorship of Electrical and Computer Engineering at NCTU, an International Faculty position at the University of California at San Diego (UCSD), and an Honorary Professorship at the University of Nottingham. Dr. Lin was elevated to IEEE Fellow for his contributions to biologically inspired information systems in 2005, and to International Fuzzy Systems Association (IFSA) Fellow in 2012. He received the IEEE Fuzzy Systems Pioneer Award in 2017. He served as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems from 2011 to 2016. He also served on the Board of Governors of the IEEE Circuits and Systems (CAS) Society in 2005-2008, the IEEE Systems, Man, and Cybernetics (SMC) Society in 2003-2005, and the IEEE Computational Intelligence Society in 2008-2010, and was Chair of the IEEE Taipei Section in 2009-2010. Dr. Lin was a Distinguished Lecturer of the IEEE CAS Society from 2003 to 2005 and of the CIS Society from 2015 to 2017, and served as Chair of the IEEE CIS Distinguished Lecturer Program Committee in 2018-2019. He served as Deputy Editor-in-Chief of the IEEE Transactions on Circuits and Systems-II in 2006-2008. Dr. Lin was Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics in 2005 and General Chair of the 2011 IEEE International Conference on Fuzzy Systems. He is the co-author of Neural Fuzzy Systems (Prentice-Hall) and the author of Neural Fuzzy Control Systems with Structure and Parameter Learning (World Scientific).
He has published more than 300 journal papers (Total Citation: 20,163, H-index: 65, i10-index: 254) in the areas of neural networks, fuzzy systems, brain computer interface, multimedia information processing, and cognitive neuro-engineering, including about 120 IEEE journal papers.

Patch Learning: A New Method of Machine Learning, Implemented by Means of Fuzzy Sets

Abstract

There have been different strategies to improve the performance of a machine learning model, e.g., increasing the depth, width and/or nonlinearity of the model, or using ensemble learning to aggregate multiple base/weak learners in parallel or in series. The goal of this tutorial is to describe a novel strategy for this problem, called patch learning (PL). PL consists of three steps: 1) train an initial global model using all training data; 2) identify from the initial global model the patches which contribute the most to the learning error, and then train a (local) patch model for each such patch; and 3) update the global model using the training data that do not fall into any patch. To use a PL model, one first determines whether the input falls into any patch. If yes, the corresponding patch model is used to compute the output; otherwise, the global model is used. To date, PL can only be implemented using fuzzy systems; how this is accomplished will be explained. Regression problems on 1D/2D/3D curve fitting, nonlinear system identification, and chaotic time-series prediction will be presented to demonstrate the effectiveness of PL. PL opens up a promising new line of research in machine learning, and opportunities for future research will be outlined.
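The three PL steps can be sketched on a toy 1D regression problem. This is our own minimal illustration using plain least-squares lines in place of the fuzzy systems the tutorial actually employs; all names, the interval-shaped patch, and the model choices are ours.

```python
# Illustrative sketch of the three patch-learning steps on 1D regression,
# with least-squares lines standing in for fuzzy systems.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

def fit_line(x, y):
    # Least-squares linear model y ~ a*x + b.
    a, b = np.polyfit(x, y, 1)
    return lambda t: a * t + b

# Step 1: train an initial global model using all training data.
global_model = fit_line(x, y)

# Step 2: identify the patch contributing the most to the learning error,
# and train a local patch model on the data inside it.
err = np.abs(y - global_model(x))
centre = x[np.argmax(err)]
in_patch = np.abs(x - centre) < 1.0          # a simple interval patch
patch_model = fit_line(x[in_patch], y[in_patch])

# Step 3: update the global model using data that fall outside the patch.
global_model = fit_line(x[~in_patch], y[~in_patch])

def predict(t):
    # Use the patch model inside the patch, the global model elsewhere.
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = global_model(t)
    mask = np.abs(t - centre) < 1.0
    out[mask] = patch_model(t[mask])
    return out
```

The patch model, being fit only to the worst-error region, necessarily matches that region at least as well as the initial global line did; the fuzzy-set implementation discussed in the tutorial replaces the hard interval above with fuzzy patch boundaries.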

Overview:

This tutorial is based on materials in the following journal articles:
• D. Wu and J. M. Mendel, “Patch Learning,” IEEE Trans. on Fuzzy Systems, Early Access, July 2019.
• J. M. Mendel, “Explaining the performance potential of rule-based fuzzy systems as a greater sculpting of the state space,” IEEE Trans. on Fuzzy Systems, vol. 26, no. 4, pp. 2362–2373, Aug. 2018.
• J. M. Mendel, “Adaptive variable structure basis function expansions: candidates for machine learning,” Information Sciences, vol. 496, pp. 124–149, 2019.

Outline of Covered Material
• Introduction
o Machine learning
o Present approaches to improving machine learning performance
• Three questions, as the basis for the rest of the tutorial:
o What is the general idea of Patch Learning (PL)?
o How can a patch and PL be implemented?
o How well does PL perform?
• What is the general idea of PL?
o What is a patch?
o Steps of PL
o Analogy to a sculptor who is sculpting a human figure
o Determining optimal number of patch models
o PL illustrated by a simple regression example
o Logic for determining which model to use in PL
• How can a patch and PL be implemented?
o Patches
o Partitions of the state space
o Implementing a patch using fuzzy sets
o Locating a measured value in a specific patch
o Implementation of PL
• How well does PL perform?
o Example 1: 1D curve fitting
o Example 2: 2D surface fitting
o Other examples, time permitting
• Future research topics
• Conclusions

Tutorial Presenter:

Jerry M. Mendel (Life Fellow IEEE, Fuzzy Systems Pioneer of the IEEE Computational Intelligence Society), Emeritus Professor of Electrical Engineering, University of Southern California, Los Angeles, CA.

Tutorial Presenter’s Biography:

Jerry M. Mendel (www.jmmprof.com) received the Ph.D. degree in electrical engineering from the Polytechnic Institute of Brooklyn, Brooklyn, NY. Currently, he is Emeritus Professor of Electrical Engineering at the University of Southern California in Los Angeles, where he has been since 1974. He is also a Tianjin 1000-Talents Foreign Experts Plan Endowed Professor, and Honorary Dean of the College of Artificial Intelligence, Tianjin Normal University, Tianjin, China. He has published over 580 technical papers and is author and/or co-author of 13 books, including Uncertain Rule-Based Fuzzy Systems: Introduction and New Directions, 2nd ed. (Springer, 2017), Perceptual Computing: Aiding People in Making Subjective Judgments (Wiley & IEEE Press, 2010), and Introduction to Type-2 Fuzzy Logic Control: Theory and Application (Wiley & IEEE Press, 2014). He is a Life Fellow of the IEEE, a Distinguished Member of the IEEE Control Systems Society, and a Fellow of the International Fuzzy Systems Association. He was President of the IEEE Control Systems Society in 1986, a member of the Administrative Committee of the IEEE Computational Intelligence Society for nine years, and Chairman of its Fuzzy Systems Technical Committee and of that TC's Computing With Words Task Force. Among his awards are the 1983 Best Transactions Paper Award of the IEEE Geoscience and Remote Sensing Society, the 1992 Signal Processing Society Paper Award, the 2002 and 2014 Transactions on Fuzzy Systems Outstanding Paper Awards, a 1984 IEEE Centennial Medal, an IEEE Third Millennium Medal, a Fuzzy Systems Pioneer Award (2008) from the IEEE Computational Intelligence Society for “fundamental theoretical contributions and seminal results in fuzzy systems”, and the 2015 USC Viterbi School of Engineering Senior Research Award. His present research interests (yes, he is still performing research with many colleagues around the globe) include type-2 fuzzy logic systems and computing with words.

Adversarial Machine Learning: On The Deeper Secrets of Deep Learning

Danilo V. Vargas, Associate Professor
Faculty of Information Science and Electrical Engineering, Kyushu University
Email: vargas@inf.kyushu-u.ac.jp

Abstract

Recent research has found that Deep Neural Networks (DNNs) behave strangely in response to slight changes in the input. This tutorial will discuss this curious and still poorly understood behavior, dig into its meaning, and explore its links to the understanding of DNNs.

In this tutorial, I will explain the basic concepts underlying adversarial machine learning and briefly review the state of the art with many illustrations and examples. In the latter part of the tutorial, I will demonstrate how attacks are helping us understand the behavior of DNNs, and show that many proposed defenses do not actually improve robustness. There are still many challenges and puzzles left unsolved; I will present some of them and delineate a couple of paths to a solution. Lastly, the tutorial will close with an open discussion and the promotion of cross-community collaborations.
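The core phenomenon can be illustrated with a toy example. Below is our own minimal sketch (not from the tutorial) of a gradient-sign perturbation of a linear classifier, in the spirit of FGSM; the weights and input values are made up for illustration.

```python
# A small input change flips a toy linear classifier's decision.
import numpy as np

w = np.array([2.0, -3.0, 1.0])   # hypothetical "trained" weights
b = 0.1

def score(x):
    # Positive score -> class 1, negative score -> class 0.
    return float(w @ x + b)

x = np.array([0.5, 0.2, 0.3])    # a correctly classified input (score > 0)

# For a linear model the gradient of the score w.r.t. the input is just w;
# stepping each feature by eps against the sign of that gradient decreases
# the score as fast as possible under an L-infinity budget of eps.
eps = 0.3
x_adv = x - eps * np.sign(w)
# score(x) is positive, score(x_adv) is negative: the decision flips even
# though no feature moved by more than eps.
```

DNNs are far more complex than this linear toy, but the same gradient-sign reasoning applied layer through layer is what makes many adversarial attacks effective.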

Tutorial Presenters (names with affiliations):

Danilo V. Vargas, Associate Professor at Kyushu University

Tutorial Presenters’ Bios:

Danilo Vasconcellos Vargas is currently an Associate Professor at Kyushu University, Japan. His research interests span Artificial Intelligence (AI), evolutionary computation, complex adaptive systems, interdisciplinary studies involving or using an AI perspective, and AI applications. Many of his works were published in prestigious journals such as Evolutionary Computation (MIT Press), IEEE Transactions on Evolutionary Computation and IEEE Transactions on Neural Networks and Learning Systems, with press coverage in news outlets such as BBC News. He received awards such as the IEEE Excellent Student Award, as well as scholarships to study in Germany and Japan for many years. Regarding his community activities, he has presented two tutorials at the renowned GECCO conference.

Regarding adversarial machine learning, he has given more than five invited talks on the subject, including one at a CVPR 2019 workshop. He has authored more than 10 articles and three book chapters on adversarial machine learning, and one of his research outputs was covered by BBC News (the paper “One pixel attack for fooling deep neural networks”).

Currently, he leads the Laboratory of Intelligent Systems aimed at building a new age of robust and adaptive artificial intelligence. More info can be found both in his website http://danilovargas.org and his Lab Page http://lis.inf.kyushu-u.ac.jp.

External website with more information on Tutorial (if applicable):

http://lis.inf.kyushu-u.ac.jp/wcci2020_tutorial.php

 

Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Representation   

Prof. Nikola Kasabov
Director, Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand
Email: nkasabov@aut.ac.nz              

ABSTRACT

The two-hour tutorial demonstrates that the third generation of artificial neural networks, spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from the incoming data. Similarly to how the brain learns, these SNN models need not be restricted in the number of layers, neurons per layer, etc., as they adopt the self-organising learning principles of the brain. The tutorial consists of three parts:

  1. Algorithms for deep, incremental learning in SNN.
  2. Algorithms for knowledge representation and for tracing the knowledge evolution in SNN over time from incoming data. Representing fuzzy spatio-temporal rules from SNN.
  3. Selected Applications

The material is illustrated on an exemplar SNN architecture NeuCube (free software and open source along with a cloud-based version available from www.kedri.aut.ac.nz/neucube). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; AD prediction; understanding depression; predicting environmental hazards and extreme events.

It is also demonstrated that brain-inspired SNN architectures, such as the NeuCube, allow for knowledge transfer between humans and machines through building brain-inspired Brain-Computer Interfaces (BI-BCI). These are used to understand human-to-human knowledge transfer through hyper-scanning and also to create brain-like neuro-rehabilitation robots. This opens the way to building a new type of AI systems: open and transparent AI.
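As a toy illustration of the spiking neurons such architectures build on, here is a minimal leaky integrate-and-fire (LIF) sketch. This is our own simplification, not NeuCube code; NeuCube uses richer neuron models and learning rules (e.g. spike-timing-dependent plasticity), and all parameter values below are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward zero, integrates input current, and emits a spike (then
# resets) whenever it crosses a threshold.
def lif_run(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Integrate an input-current trace and return the spike times."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leaky integration step
        if v >= threshold:               # threshold crossing: spike + reset
            spikes.append(t)
            v = 0.0
    return spikes

# A constant supra-threshold drive produces periodic spiking.
spike_times = lif_run([0.3] * 50)
```

In an SNN architecture, temporal or spatio-temporal data are encoded into such spike trains, and learning adjusts the connections between many neurons of this kind.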

Reference: N.Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.

 Presenter:     

Prof. Nikola Kasabov, Director, Knowledge Engineering and Discovery Research Institute,

Auckland University of Technology, Auckland, New Zealand, nkasabov@aut.ac.nz,

Biodata:

Professor Nikola Kasabov is Fellow of IEEE, Fellow of the Royal Society of New Zealand, Fellow of the INNS College of Fellows, DVF of the Royal Academy of Engineering UK. He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology, New Zealand. Kasabov is the Immediate Past President of the Asia Pacific Neural Network Society (APNNS) and Past President of the International Neural Network Society (INNS). He is member of several technical committees of IEEE Computational Intelligence Society and Distinguished Lecturer of IEEE (2012-2014). He is Editor of Springer Handbook of Bio-Neuroinformatics, Springer Series of Bio-and Neuro-systems and Springer journal Evolving Systems. He is Associate Editor of several journals, including Neural Networks, IEEE TrNN, Tr CDS, Information Sciences, Applied Soft Computing. Kasabov holds MSc and PhD from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics, neuroinformatics. He has published more than 620 publications. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: TU Sofia Bulgaria; University of Essex UK; University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University and CASIA China, Visiting Professor at ETH/University of Zurich and Robert Gordon University UK, Honorary Professor of Teesside University, UK; George Moore Professor of data analytics at the University of Ulster. Prof. 
Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; INNS Ada Lovelace Meritorious Service Award; NN Best Paper Award for 2016; APNNA ‘Outstanding Achievements Award’; INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; EU Marie Curie Fellowship; Bayer Science Innovation Award; APNNA Excellent Service Award;  RSNZ Science and Technology Medal; 2015 AUT Medal; Honorable Member of the Bulgarian, the Greek and the Scottish Societies for Computer Science. More information of Prof. Kasabov can be found from: http://www.kedri.aut.ac.nz/staff.

Advances in Deep Reinforcement Learning

Thanh Thi Nguyen, Deakin University, Victoria, Australia
Email: thanh.nguyen@deakin.edu.au
Webpage: https://www.deakin.edu.au/about-deakin/people/thanh-thi-nguyen
https://sites.google.com/view/thanh-thi-nguyen

Vijay Janapa Reddi, Harvard University, Massachusetts, USA
Webpage: https://scholar.harvard.edu/vijay-janapa-reddi

 Abstract

This tutorial presents in detail the current state of the art in deep reinforcement learning (RL) theory and applications. We highlight the differences between various types of deep RL methods and their corresponding applications, and present our recent algorithms in the multi-objective and multi-agent domains. Real-world examples of each type of deep RL method are given along with demonstrations. This tutorial offers a unique opportunity to disseminate in-depth knowledge on deep RL and on how to use these algorithms to solve real-world problems such as autonomous vehicles (cars and drones), autonomous surgical robotics, and applications in finance, cyber security and the Internet of Things.
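The value-update idea underlying the deep RL methods surveyed here can be shown in a few lines of tabular Q-learning. This is our own minimal sketch (not from the tutorial): deep RL replaces the table below with a neural network, but the Bellman-style temporal-difference target is the same. The toy chain environment and all parameter values are made up.

```python
# Tabular Q-learning on a toy deterministic chain environment.
import random

random.seed(0)
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    # Action 1 moves right, action 0 moves left; entering the last
    # state pays reward 1 and the episode restarts from state 0.
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

alpha, gamma = 0.5, 0.9
s = 0
for _ in range(10000):
    a = random.randrange(n_actions)          # random exploration policy
    s2, r, done = step(s, a)
    target = r if done else r + gamma * max(Q[s2])
    Q[s][a] += alpha * (target - Q[s][a])    # temporal-difference update
    s = 0 if done else s2

# The learned greedy policy moves right in every non-terminal state.
```

In deep RL the `Q` table becomes a network trained by gradient descent on the same target, which is what enables the method to scale to high-dimensional inputs such as camera images in autonomous driving.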

Tutorial Presenters (names with affiliations):

  1. Thanh Thi Nguyen, Deakin University, Victoria, Australia.

Email: thanh.nguyen@deakin.edu.au

Webpage: https://www.deakin.edu.au/about-deakin/people/thanh-thi-nguyen

https://sites.google.com/view/thanh-thi-nguyen

  2. Vijay Janapa Reddi, Harvard University, Massachusetts, USA.

Webpage: https://scholar.harvard.edu/vijay-janapa-reddi

Tutorial Presenters’ Bios:

Thanh Thi Nguyen is a leading researcher in Australia in the field of Artificial Intelligence, recognized by The Australian Newspaper in a report published in 2018. He has been invited to edit the Special Issue “Deep Reinforcement Learning: Methods and Applications” for the Electronics Journal. Dr Nguyen was a Visiting Scholar with the Computer Science Department at Stanford University in 2015 and the Edge Computing Lab at Harvard University in 2019. He received an Alfred Deakin Postdoctoral Research Fellowship in 2016, a European-Pacific Partnership for ICT Expert Exchange Program Award from European Commission in 2018, and an Australia–India Strategic Research Fund Early- and Mid-Career Fellowship 2020 awarded by the Australian Academy of Science. Dr. Nguyen obtained a PhD in Mathematics and Statistics from Monash University, Australia in 2013 and has expertise in various areas, including artificial intelligence, deep learning, deep reinforcement learning, cyber security, IoT, and data science. He is currently a Senior Lecturer in the School of Information Technology, Deakin University, Victoria, Australia.

Vijay Janapa Reddi completed his PhD in computer science at Harvard University in 2010. He is a recipient of multiple awards, including the National Academy of Engineering (NAE) Gilbreth Lecturer Honor (2016), the IEEE TCCA Young Computer Architect Award (2016), the Intel Early Career Award (2013), Google Faculty Research Awards (2012, 2013, 2015, 2017), Best Paper Awards at the 2005 International Symposium on Microarchitecture and the 2009 International Symposium on High Performance Computer Architecture, and IEEE's Top Picks in Computer Architecture Awards (2006, 2010, 2011, 2016, 2017). Dr. Reddi is currently an Associate Professor in the John A. Paulson School of Engineering and Applied Sciences at Harvard University, where he directs the Edge Computing Lab. His research interests include computer architecture and system-software design, specifically in the context of mobile and edge computing platforms based on machine learning for domains like autonomous systems and mobile robotics.

External website with more information on Tutorial (if applicable):

https://sites.google.com/view/thanh-thi-nguyen/wcci-2020-tutorial

 

Deep Learning for Graphs

Davide Bacciu
Università di Pisa
Email: bacciu@di.unipi.it

Abstract

The tutorial will introduce the lively field of deep learning for graphs and its applications.  Dealing with graph data requires learning models capable of adapting to structured samples of varying size and topology, capturing the relevant structural patterns to perform predictive and explorative tasks while maintaining the efficiency and scalability necessary to process large scale networks. The tutorial will first introduce foundational aspects and seminal models for learning with graph structured data. Then it will discuss the most recent advancements in terms of deep learning for network and graph data, including learning structure embeddings, graph convolutions, attentional models and graph generation.
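To make the graph convolutions mentioned above concrete, here is a minimal NumPy sketch of one symmetrically normalized graph convolution layer (in the style popularized by GCNs). The toy graph, features and fixed weights are illustrative assumptions, not material from the tutorial:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^{-1/2} (A+I) D^{-1/2} X W).

    A: (n, n) adjacency matrix, X: (n, f_in) node features,
    W: (f_in, f_out) learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)                 # ReLU

# Toy graph: 3 nodes connected in a path 0-1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)                                 # one-hot node features
W = np.full((3, 2), 0.5)                      # fixed weights for illustration
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2): one 2-dimensional embedding per node
```

Stacking several such layers lets each node aggregate information from progressively larger neighbourhoods, which is the core idea behind many of the models the tutorial surveys.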

Tutorial Presenters (names with affiliations):

Davide Bacciu (Università di Pisa)

Tutorial Presenters’ Bios:

Davide Bacciu is Assistant Professor at the Computer Science Department, University of Pisa. The core of his research is on Machine Learning (ML) and deep learning models for structured data processing, including sequences, trees and graphs. He is the PI of an Italian national project on ML for structured data and the Coordinator of the H2020-RIA project TEACHING (2020-2023). He has been teaching courses on Artificial Intelligence (AI) and ML at undergraduate and graduate levels since 2010. He is an IEEE Senior Member, a member of the IEEE Neural Networks Technical Committee and of the IEEE CIS Task Force on Deep Learning. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. Since 2017 he has been the Secretary of the Italian Association for Artificial Intelligence (AI*IA).

External website with more information on Tutorial (if applicable):
https://www.learning4graphs.org/activities/tutorials/wcci-2020

Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting

Dr. P. N. Suganthan
Nanyang Technological University, Singapore
Email: epnsugan@ntu.edu.sg

Websites: http://www.ntu.edu.sg/home/epnsugan/

https://github.com/P-N-Suganthan

http://scholar.google.com.sg/citations?hl=en&user=yZNzBU0AAAAJ&view_op=list_works&pagesize=100

This tutorial will first introduce the main randomization-based learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks, and kernel ridge regression. The popular feedforward instantiation, the random vector functional link neural network (RVFL), originated in the early 1990s. Other feedforward methods include random weight neural networks (RWNN), extreme learning machines (ELM), etc. Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another paradigm is based on the kernel trick, such as kernel ridge regression, which includes randomization for scaling to large training data. The tutorial will also consider computational complexity with increasing scale of the classification/forecasting problems. A further randomization-based paradigm is the random forest, which exhibits highly competitive performance. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
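To illustrate the closed-form flavour of these methods, here is a minimal NumPy sketch of an RVFL-style model: a random, untrained hidden layer plus direct input-to-output links, with the output weights obtained in one step by ridge regression. The synthetic data, network size and regularization value are illustrative assumptions, not the tutorial's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y is a smooth function of the inputs
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Random, fixed hidden layer (its weights are never trained)
n_hidden = 50
W_in = rng.normal(size=(3, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)

# RVFL: concatenate direct input links with the random hidden features
D = np.hstack([X, H])

# Closed-form ridge regression for the output weights only
lam = 1e-2
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

pred = D @ beta
mse = np.mean((pred - y) ** 2)
print(mse)  # small: the random features fit the smooth target well
```

The only trained parameters are the output weights, which is why these models avoid iterative backpropagation entirely; dropping the direct `X` links from `D` would turn this sketch into an ELM-style network instead.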

 Key Papers:

 General Bio-sketch:

Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK, in 1990, 1992 and 1994, respectively. He received an honorary doctorate (Doctor Honoris Causa) in 2020 from the University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Department of Electrical Engineering, University of Sydney, in 1995-96 and as a Lecturer in the Department of Computer Science and Electrical Engineering, University of Queensland, in 1996-99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013-2018). He is/was an Associate Editor of Applied Soft Computing (Elsevier, 2018-), Neurocomputing (Elsevier, 2018-), IEEE Transactions on Cybernetics (2012-2018), IEEE Transactions on Evolutionary Computation (2005-), Information Sciences (Elsevier, 2009-), Pattern Recognition (Elsevier, 2001-) and the International Journal of Swarm Intelligence Research (2009-). He is a founding Co-Editor-in-Chief of Swarm and Evolutionary Computation (2010-), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the IEEE Transactions on Evolutionary Computation Outstanding Paper Award in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD Dissertation Award in 2014. His research interests include swarm and evolutionary algorithms, pattern recognition, big data, deep learning and applications of swarm, evolutionary and machine learning algorithms. His publications have been well cited; his SCI-indexed publications have attracted over 1000 SCI citations in each calendar year since 2013. He was selected as one of the Highly Cited Researchers in computer science by Thomson Reuters every year from 2015 to 2019. He served as the General Chair of IEEE SSCI 2013.
He has been a member of the IEEE (S’90, M’92, SM’00, F’15) since 1991 and an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014-2016. He is an IEEE CIS distinguished lecturer (DLP) in 2018-2020.

Deep Stochastic Learning and Understanding

Jen-Tzung Chien
National Chiao Tung University
Email: jtchien@nctu.edu.tw

Website: http://chien.cm.nctu.edu.tw

Abstract

This tutorial addresses the advances in deep Bayesian learning for sequence data, which are ubiquitous in speech, music, text, image, video, web, communication and networking applications. Spatial and temporal contents are analyzed and represented to fulfill a variety of tasks ranging from classification, synthesis, generation, segmentation, dialogue, search, recommendation, summarization, answering, captioning, mining, translation and adaptation, to name a few. Traditionally, "deep learning" is taken to be a learning process where the inference or optimization is based on a real-valued deterministic model. The "latent semantic structure" in words, sentences, images, actions, documents or videos learned from data may not be well expressed or correctly optimized in mathematical logic or computer programs. The "distribution function" in a discrete or continuous latent variable model for spatial and temporal sequences may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models, including the recurrent neural network, sequence-to-sequence model, variational auto-encoder (VAE), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior/posterior representation is addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in sequence data. Variational inference and sampling methods are formulated to tackle the optimization of complicated models. The embeddings, clustering or co-clustering of words, sentences or objects are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian learning and understanding. Finally, we will point out a number of directions and outlooks for future studies.
This tutorial aims to introduce novices to major topics within deep Bayesian learning, to motivate and explain a topic of emerging importance for natural language understanding, and to present a novel synthesis combining distinct lines of machine learning work.
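As a small numerical anchor for the variational inference mentioned above, the sketch below computes the Gaussian KL term that appears in a VAE objective and applies the reparameterization trick. The encoder outputs here are made-up values purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Pretend encoder outputs for a batch of 4 inputs, 2 latent dimensions
mu = np.array([[0.0, 0.0], [0.5, -0.5], [1.0, 1.0], [-1.0, 0.2]])
log_var = np.zeros_like(mu)               # unit variance for simplicity

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
# which keeps the sampling step differentiable w.r.t. mu and log_var
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

kl = gaussian_kl(mu, log_var)
print(kl)  # 0 when mu = 0 and log_var = 0; grows as the posterior drifts
```

In a full VAE this KL term is subtracted from the expected reconstruction log-likelihood to form the evidence lower bound (ELBO) that is maximized during training.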

Tutorial Presenters (names with affiliations):

Jen-Tzung Chien, National Chiao Tung University, Taiwan

Tutorial Presenters’ Bios:

Jen-Tzung Chien is Chair Professor at National Chiao Tung University, Taiwan. He held a Visiting Professor position at the IBM T. J. Watson Research Center, Yorktown Heights, NY, in 2010. His research interests include machine learning, deep learning, computer vision and natural language processing. Dr. Chien served as an associate editor of the IEEE Signal Processing Letters in 2008-2011, as general co-chair of the IEEE International Workshop on Machine Learning for Signal Processing in 2017, and as a tutorial speaker at ICASSP in 2012, 2015 and 2017, INTERSPEECH in 2013 and 2016, COLING in 2018, and AAAI, ACL, KDD and IJCAI in 2019. He received the Best Paper Award of the IEEE Automatic Speech Recognition and Understanding Workshop in 2011 and the AAPM Farrington Daniels Award in 2018. He has published extensively, including the books "Bayesian Speech and Language Processing" (Cambridge University Press, 2015) and "Source Separation and Machine Learning" (Academic Press, 2018). He is currently serving as an elected member of the IEEE Machine Learning for Signal Processing Technical Committee.

External website with more information on Tutorial:

http://chien.cm.nctu.edu.tw/home/wcci-tutorial

From Brain to Deep Neural Networks

Saeid Sanei
Nottingham Trent University UK
Email: saeid.sanei@ntu.ac.uk

Clive Cheong Took
Royal Holloway University of London UK
Email: Clive.CheongTook@rhul.ac.uk

Abstract

The aim of this tutorial is to provide a stepping stone for machine learning enthusiasts into the area of brain pathway modelling using innovative deep learning techniques through processing and learning from the electroencephalogram (EEG). An insight into EEG generation and processing will provide the audience with a better understanding of the deep network structures used to learn and detect insightful information about deep brain function.

Tutorial Presenters

Saeid Sanei, Nottingham Trent University UK

Clive Cheong Took, Royal Holloway University of London UK

Biographies

Saeid Sanei is a full professor at Nottingham Trent University and a visiting professor at Imperial College London. He leads a group in which several young researchers work on EEG processing and its application in brain-computer interfaces (BCI). He has authored two research monographs on electroencephalogram (EEG) processing and pattern recognition. Saeid has delivered numerous workshops on EEG signal processing and machine learning with diverse applications all over the world, particularly in Europe, China, and Singapore.

Clive Cheong Took is a senior lecturer (associate professor) at Royal Holloway, University of London. Clive has a background in machine learning and has investigated its applications to biomedical problems for more than 10 years. He has been an associate editor for IEEE Transactions on Neural Networks and Learning Systems since 2013, and has co-organised special issues on deep learning for healthcare and security. At WCCI 2020, he will also co-organise a special session on Generative Adversarial Learning with Ariel Ruiz-Garcia, Vasile Palade, Jürgen Schmidhuber, and Danilo Mandic.

External Website

https://sites.google.com/view/wcci-2020-eeg/home

 

Evolution of Neural Networks

Risto Miikkulainen
The University of Texas at Austin and Cognizant Technology Solutions
Email: risto@cs.utexas.edu

 Abstract

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
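As a toy illustration of the first family of methods (evolving the weights of a fixed-topology network), here is a minimal (1+1) evolution strategy applied to XOR. The network size, mutation scale and task are illustrative assumptions, not material from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR task for a tiny fixed-topology network (2 inputs, 2 hidden, 1 output)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    """Decode the flat 9-dim genome into network weights and run it."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8]; b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negative MSE

# (1+1)-evolution strategy: mutate the genome, keep the child if no worse
w = rng.normal(size=9)
f0 = fitness(w)
for _ in range(3000):
    child = w + 0.1 * rng.normal(size=9)
    if fitness(child) >= fitness(w):
        w = child

print(fitness(w))  # typically close to 0 once XOR is solved
```

The same loop generalizes directly: richer encodings can mutate topology as well as weights, and population-based variants replace the single parent with a selected pool, which is the setting the tutorial's more advanced methods operate in.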

Tutorial Presenters (names with affiliations):

Risto Miikkulainen
The University of Texas at Austin and Cognizant Technology Solutions

Tutorial Presenters’ Bios:

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and an AVP of Evolutionary AI at Cognizant Technology Solutions. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 430 articles in these research areas. He is an IEEE Fellow, recipient of the 2020 IEEE CIS Evolutionary Computation Pioneer Award and recent awards from INNS and ISAL, as well as nine Best Paper Awards at GECCO.

External website with more information on Tutorial (if applicable):

https://www.cs.utexas.edu/users/risto/talks/enn-tutorial/

Experience Replay for Deep Reinforcement Learning
A Comprehensive Review

ABDULRAHMAN ALTAHHAN
Leeds Beckett University, UK.

VASILE PALADE
Coventry University, UK.

Primary contact: a.altahhan@leedsbeckett.ac.uk

Abstract

Reinforcement learning is expected to play an important role in the AI and machine learning era, as is evident from the latest major advances, particularly in games. This is due to its flexibility and arguably minimal designer intervention, especially when the feature extraction process is left to a robust model such as a deep neural network. Although deep learning alleviated the long-standing burden of manual feature design, another important issue remains to be tackled: the experience-hungry nature of RL models, which is mainly due to bootstrapping and exploration. One important technique that will play a centre-stage role in tackling this issue is experience replay. Naturally, it allows us to capitalise on already gained experience and to shorten the time needed to train an RL agent. The frequency and depth of the replay can vary significantly, and a unifying view and a clear understanding of the issues related to off-policy and on-policy replay is currently lacking. For example, at the far end of the spectrum, extensive experience replay, although it should conceivably help reduce the data intensity of the training period, when done naively puts significant constraints on the practicality of the model and requires both extra time and space that can grow significantly, rendering the method impractical. On the other hand, in its optimal form, whether it is a target re-evaluation or a re-update, when the importance sampling ratio uses bootstrapping, the method's computational requirements match those of model-based RL methods for planning. In this tutorial we will tackle the issues and techniques related to the theory and application of deep reinforcement learning and experience replay, and how and when these techniques can be applied effectively to produce a robust model. In addition, we will promote a unified view of experience replay that involves replaying and re-evaluation of the target updates.
What is more, we will show that the generalised intensive experience replay method can be used to derive several important algorithms as special cases of other methods, including n-step true online TD and LSTD. This surprising but important view can immensely help the neuro-dynamic/RL community move this concept forward, and will benefit both researchers and practitioners in their quest for better and more practical RL methods and models.
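As a minimal illustration of the basic mechanism the abstract discusses, a uniform experience-replay buffer can be sketched in a few lines of Python. This is a generic, assumed design (the kind used with DQN-style agents), not the unified replay method the tutorial develops:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience-replay buffer (illustrative sketch)."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest experience is evicted

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))               # columns: s, a, r, s', done

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.push(t, t % 2, 1.0, t + 1, False)      # toy transitions
states, actions, rewards, next_states, dones = buf.sample(4)
print(len(states))  # 4
```

Sampling uniformly from past transitions breaks the temporal correlation of consecutive experience and lets each transition be reused many times; the frequency and depth of that reuse is exactly the spectrum (occasional, regular, intensive) that the tutorial categorises.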

Description

Deep reinforcement learning combined with experience replay allows us to capitalise on the gained experience, capping the model's appetite for new experience. Experience replay can be performed in several ways, some of which may or may not be suitable for the problem at hand. For example, intensive experience replay, if performed optimally, could shorten the learning cycle of an RL agent and allow it to be taken from the virtual arena, such as games/simulation, to the physical/mechanical arena, such as robotics. The type of intensive training required for RL models, which can be afforded by a virtual agent, may not be tolerated, or may at least not be desired, for a physical agent. Of course, one way to move to the mechanical world is to utilise model-free policy gradient (search) methods that are based on simulation and then migrate/map the model to the physical world. However, constructing a simulation of the physical world is a tedious process that requires extra time and calibration, making it impractical for the type of pervasive RL models we hope to achieve. In both cases, whether it is policy gradient or value-function with policy iteration, experience replay plays an important role in making them more practical. For example, for policy gradient methods, adopting a softmax policy is preferable to the ε-greedy policy, as it can asymptotically approach a deterministic policy after some exploration, while ε-greedy will always take a fixed percentage of exploratory actions regardless of the maturity of the policy being developed/improved.
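The softmax vs. ε-greedy contrast above can be made concrete with a small sketch. The Q-values and parameter settings below are hypothetical, chosen only to show the behavioural difference:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q, eps):
    """With probability eps explore uniformly; otherwise exploit the best action."""
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

def softmax_policy(q, tau):
    """Sample from softmax(q / tau); as tau -> 0 this approaches greedy."""
    z = q / tau
    p = np.exp(z - z.max())                 # subtract max for stability
    p /= p.sum()
    return int(rng.choice(len(q), p=p))

q = np.array([1.0, 2.0, 0.5])               # hypothetical action values
# eps-greedy keeps a fixed eps share of exploratory actions forever,
# while cooling the temperature tau lets softmax become deterministic.
a1 = epsilon_greedy(q, eps=0.1)
a2 = softmax_policy(q, tau=0.05)            # low temperature: almost surely argmax
print(a1, a2)
```

Annealing `tau` over training is what gives the softmax policy the asymptotically deterministic behaviour the paragraph describes, whereas a fixed `eps` keeps exploring at the same rate no matter how mature the policy is.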

The tutorial is timely and novel in its treatment and packaging of the topic. It will lay the necessary foundation for a unified overview of the subject, which will allow other researchers to take it to the next level and the subject area to develop on solid, unified ground.

It turns out that extensive experience replay can be used as a generalised model from which several n-step modern reinforcement learning techniques can be deduced, offering an easy way to unify several popular reinforcement learning methods and giving rise to several new ones.

In this tutorial I will give a step-by-step, detailed overview of the framework that allows us to safely deploy replay methods.
The tutorial will review all major advances in RL replay algorithms and will categorise them into: occasional replay, regular replay, and intensive regular replay.

Bellman equations are the foundation of individual RL updates; however, all the n-step aggregate methods that have driven the latest breakthroughs in RL need different treatment.

The unified view through experience replay offers a new theoretical framework for studying the inner traits/properties of those techniques. For example, the convergence of several new RL algorithms can be proven by proving the convergence of the unified replay algorithm and then projecting each algorithm as a special case of the general method.

Outline
——————————–First part about one hour————————by A. Altahhan————-
• Deep Reinforcement Learning a concise review
• Traditional Replay and Type of Replay: Occasional, Regular, Intensive or Both
• Unified View of Experience Replay
o Replay Past Update vs Target Re-evaluation: how to integrate
o Off-policy vs On-policy! Experience Replay
o The Role of Importance Sampling and Bootstrapping
o Emphatic TD and its Cousins
o Unifying Algorithm for Regular Intensive Sarsa(λ)-Replay
o N-step Methods as Special Cases of Experience Replay
o Policy Search Methods and Unified Replay
o Exploration, Exploitation and Replay
o Convergence of Replay Methods
——————————–Second part about one hour———————-by V. Palade————–
• Practical Consideration for DRL and Experience Replay:
o To Replay or Not to Replay!
o Time and Space Complexity of Replay
o Combining Deep Learning and Replay in a Meaningful Way
o Softmax or ε-Greedy for Replay
o Replay for Games vs Replay for Robotics
o Live Demonstration on the Game of Pac-Man
o Live Demo on Cheap Affordable Robot
o Audience Running the Code and Connect to the Robot
• Q&A, Discussion and Closing Notes
Goals
• To develop a deep understanding of the capabilities and limitations of deep reinforcement learning
• To develop a unified view of the different types of experience replay and how and when to apply them in the context of deep reinforcement learning

Objectives
• Demonstrate how to apply experience replay on policy search methods
• Demonstrate how to combine experience replay and deep learning
• Demonstrate first-hand the effect of replay on multiple platforms including Games and Robotics domains

Expected Outcomes
• To gain an in-depth understanding of recent developments in DRL and experience replay
• To gain an in-depth understanding of which update rules to adopt, on-policy or off-policy
• To contrast traditional replay with the more recent re-evaluation that has been termed as replay

Target audience and session organisation:

The target audience is researchers and practitioners in the growing reinforcement learning community who seek a better understanding of the issues surrounding combining experience replay, deep learning and off-policy learning in their quest for more practical methods and models.

The tutorial will take 2 hours to be completed and will provide code that can be easily run on Octave or MATLAB.

The two-hour tutorial will be delivered in two integrated sections: the first will cover the theory and the second will cover the application. The presenters will alternate between each other on both the theoretical part and the application part. Two applications will be covered: one is the game of Pac-Man and the other is a hacked mini robot that will learn to navigate in a small 2 x 1 m easy-to-assemble arena. The robot is a cheap, affordable robot, such as Lego, that is equipped with a Raspberry Pi module and camera. It will use vision and deep learning combined with experience replay to learn to perform a homing task. The audience will be provided with the Octave/MATLAB code to experience the algorithms first-hand and see how they are developed from the inside. The code is general enough to be reused for other RL problems. For remote audiences the code will be shared on Git, so they will be able to experiment with the model directly, and a web camera will be mounted on top of the small robot arena to broadcast how the robot gradually learns to navigate towards its home, using vision to learn an optimal path and infra-red sensors for obstacle avoidance. Those attending the tutorial can SSH into the controlling laptop, to which the intensive processing is off-loaded, in order to try and change the Octave code that is driving the robot and see its effect. If a VPN can be set up, then remote audiences can be provided with the same experience.

Previous tutorials:

We gave a successful tutorial on RL and Deep Learning at IJCNN 2018 in July 2018 in Rio.

Presenters:

ABDULRAHMAN ALTAHHAN
Senior Lecturer in Computing. Email: a.altahhan@leedsbeckett.ac.uk. Dr Abdulrahman Altahhan has been teaching AI and related topics since 2000; currently he is a Senior Lecturer in Computing at Leeds Beckett University. He served as the Programme Director of the MSc in Data Science at Coventry University, UK. Previously, Dr Altahhan worked in Dubai as an Assistant Professor and Acting Dean. He received a PhD in Reinforcement Learning and Neural Networks in 2008 and an MPhil in Fuzzy Expert Systems. Dr Abdulrahman is actively researching in the area of deep reinforcement learning applied to robotics and autonomous agents, with publications on this front. He has designed and developed a novel family of reinforcement learning methods and studied their underlying mathematical properties. Recently he established a new set of algorithms and findings in which he combined deep learning with reinforcement learning in a unique way that is hoped to contribute to the development of this new research area. He has presented at prestigious conferences and venues in the area of machine learning and neural networks. Dr Abdulrahman is a reviewer for important neural-network-related journals and venues from Springer and the IEEE, including the Neural Computing and Applications journal and the International Conference on Robotics and Automation (ICRA), and he serves on the programme committees of related conferences such as INNS Big Data 2016. Dr Abdulrahman has organised several special sessions on Deep Reinforcement Learning at IJCNN 2016 and IJCNN 2017 as well as ICONIP 2016 and 2017. Dr Abdulrahman is an EPSRC reviewer and has taught Machine Learning, Neural Networks and Big Data Analysis modules in the MSc in Data Science. He is an IEEE Member, a member of the IEEE Computational Intelligence Society and of the International Neural Network Society.

VASILE PALADE

Professor of Pervasive Computing. Email: vpalade453@gmail.com. Prof Vasile Palade joined Coventry University in 2013 as a Reader in Pervasive Computing, after working for many years as a Lecturer in the Department of Computer Science of the University of Oxford, UK. His research interests lie in the wide area of machine learning and encompass mainly neural networks and deep learning, neuro-fuzzy systems, various nature-inspired algorithms such as swarm optimization algorithms, hybrid intelligent systems, ensembles of classifiers, and class imbalance learning. Application areas include image processing, social network data analysis, bioinformatics problems, fault diagnosis, web usage mining, and health, among others. Dr Palade is author and co-author of more than 160 papers in journals and conference proceedings as well as books on computational intelligence and applications (which have attracted 4250 citations and an h-index of 29 according to Google Scholar). He has also co-edited several books, including conference proceedings. He is an Associate Editor for several reputed journals, such as Knowledge and Information Systems (Elsevier), Neurocomputing (Elsevier), the International Journal on Artificial Intelligence Tools (World Scientific), and the International Journal of Hybrid Intelligent Systems (IOS Press). He has delivered keynote talks at international conferences on machine learning and applications. Prof. Vasile Palade is an IEEE Senior Member and a member of the IEEE Computational Intelligence Society.

References that will be covered

[1] L.-J. Lin, “Self-improving reactive agents based on reinforcement learning, planning and teaching,” Machine Learning, vol. 8, no. 3, pp. 293-321, 1992.
[2] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, p. 529, 2015
[3] A. Altahhan, “TD(0)-Replay: An Efficient Model-Free Planning with full Replay,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-7.
[4] A. Altahhan, “Deep Feature-Action Processing with Mixture of Updates,” in 2015 International Conference on Neural Information Processing, Istanbul, Turkey, 2015, pp. 1-10.
[5] S. Zhang and R. S. Sutton, “A Deeper Look at Experience Replay,” eprint arXiv:1712.01275, 2017
[6] H. Vanseijen and R. Sutton, “A Deeper Look at Planning as Learning from Replay,” presented at the Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, 2015.
[7] Y. Pan, M. Zaheer, A. White, A. Patterson, and M. White, “Organizing Experience: A Deeper Look at Replay Mechanisms for Sample-based Planning in Continuous State Domains,” eprint arXiv:1806.04624, 2018.
[8] H. van Hasselt and R. S. Sutton, "Learning to predict independent of span," eprint arXiv:1508.04582, 2015.
[9] H. van Seijen, A. Rupam Mahmood, P. M. Pilarski, M. C. Machado, and R. S. Sutton, “True Online Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, no. 145, pp. 1-40, 2016.
[10] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed., MIT Press, Cambridge, 2017 (accessed online).
[11] C. J. C. H. Watkins and P. Dayan, "Q-learning," Machine Learning, vol. 8, no. 3, pp. 279-292, 1992.
[12] J. Modayil, A. White, and R. S. Sutton, “Multi-timescale nexting in a reinforcement learning robot,” Adaptive Behavior, vol. 22, no. 2, pp. 146-160, 2014/04/01 2014.
[13] D. Precup, R. S. Sutton, and S. Dasgupta, “Off-Policy Temporal Difference Learning with Function Approximation,” presented at the Proceedings of the Eighteenth International Conference on Machine Learning, 2001.
[14] R. S. Sutton, A. Rupam Mahmood, and M. White, “An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, pp. 1-29, 2016.
[15] H. Yu, “On Convergence of Emphatic Temporal-Difference Learning,” eprint arXiv:1506.02582, 2015.
[16] A. Hallak, A. Tamar, R. Munos, and S. Mannor, “Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis,” eprint arXiv:1509.05172, 2015.
[17] X. Gu, S. Ghiassian, and R. S. Sutton, “Should All Temporal Difference Learning Use Emphasis?,” eprint arXiv:1903.00194, 2019.
[18] M. P. Deisenroth, G. Neumann, J. Peters, et al., “A survey on policy search for robotics,” Foundations and Trends R in Robotics, vol. 2, no. 1–2, pp. 1–142, 2013.
[19] R. S. Sutton, C. Szepesvari, A. Geramifard, and M. P. Bowling, “Dyna-Style Planning with Linear Function Approximation and Prioritized Sweeping,” eprint arXiv:1206.3285, p. arXiv:1206.3285, 2012.

Machine Learning for Data Streams in Python with Scikit-Multiflow

Jacob Montiel
University of Waikato

Heitor Murilo Gomes
University of Waikato
Email: heitor.gomes@waikato.ac.nz

Jesse Read
École Polytechnique

Albert Bifet
University of Waikato

 Abstract

Data stream mining has gained a lot of attention in recent years as an exciting research topic. However, there is still a gap between pure research proposals and practical applications to real-world machine learning problems. In this tutorial we are going to introduce attendees to data stream mining procedures and examples of big data stream mining applications. Besides the theory, we will also present examples using scikit-multiflow, a novel open-source Python framework.
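At the heart of such applications is the prequential ("test-then-train") evaluation loop: each arriving example is first used to test the current model, then to train it. The sketch below illustrates that loop in plain Python with a deliberately naive majority-class learner and a toy stream containing one concept drift; it is only an illustration of the protocol, which scikit-multiflow implements with real incremental learners and evaluator classes.

```python
import random

def stream(n, drift_at):
    """Toy binary label stream: the majority label flips at drift_at (concept drift)."""
    for t in range(n):
        p = 0.8 if t < drift_at else 0.2   # P(label == 1)
        yield 1 if random.random() < p else 0

class MajorityClass:
    """Tiny incremental learner: predicts the most frequent label seen so far."""
    def __init__(self):
        self.counts = [0, 0]
    def predict(self):
        return 0 if self.counts[0] >= self.counts[1] else 1
    def partial_fit(self, y):
        self.counts[y] += 1

random.seed(42)
learner = MajorityClass()
correct = total = 0
for y in stream(2000, drift_at=1000):
    if learner.predict() == y:   # 1) test on the incoming example...
        correct += 1
    total += 1
    learner.partial_fit(y)       # 2) ...then train on it
print(f"prequential accuracy: {correct / total:.3f}")
```

A real stream learner (e.g. an adaptive Hoeffding tree) would recover much faster after the drift; the point here is only the order of operations: predict first, then learn.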

 Tutorial Presenters (names with affiliations): 

Jacob Montiel (University of Waikato), Heitor Murilo Gomes (University of Waikato), Jesse Read (École Polytechnique), Albert Bifet (University of Waikato)

 Tutorial Presenters’ Bios: 

Jacob Montiel

Jacob is a research fellow at the University of Waikato in New Zealand and the lead maintainer of scikit-multiflow. His research interests are in the field of machine learning for evolving data streams. Prior to focusing on research, Jacob led development work on onboard software for aircraft and engine prognostics at GE Aviation, working on the development of GE’s Brilliant Machines, part of the IoT and GE’s approach to Industrial Big Data.

Website: https://jacobmontiel.github.io/

Heitor Murilo Gomes

Heitor is a senior research fellow at the University of Waikato in New Zealand. His main research area is machine learning, especially evolving data streams, concept drift, ensemble methods and big data streams. He is an active contributor to the open data stream mining project MOA and a co-leader of the StreamDM project, a real-time analytics open-source software library built on top of Spark Streaming.

Website: https://www.heitorgomes.com

Jesse Read

Jesse is a Professor at the DaSciM team in LIX at École Polytechnique in France. His research interests are in the areas of Artificial Intelligence, Machine Learning, and Data Science and Mining. Jesse is the maintainer of the open-source software MEKA, a multi-label/multi-target extension to Weka.

Website: https://jmread.github.io/

Albert Bifet

Albert is a Professor at the University of Waikato and Télécom Paris. His research focuses on data stream mining, big data machine learning and artificial intelligence. The problems he investigates are motivated by large-scale data, the Internet of Things (IoT), and big data science.

He co-leads the open source projects MOA (Massive On-line Analysis), Apache SAMOA (Scalable Advanced Massive Online Analysis) and StreamDM.

Website: http://albertbifet.com

 

External website with more information on Tutorial (if applicable): NA

Fast and Deep Neural Networks

Claudio Gallicchio
University of Pisa (Italy)
Email: gallicch@di.unipi.it

Simone Scardapane
La Sapienza University of Rome (Italy)
Email: simone.scardapane@uniroma1.it

Abstract

Deep Neural Networks (DNNs) are a fundamental tool in the modern development of Machine Learning. Beyond the merits of properly designed training strategies, a great part of DNNs’ success is undoubtedly due to the inherent properties of their layered architectures, i.e., to the introduced architectural biases. In this tutorial, we analyze how far we can go by relying almost exclusively on these architectural biases. In particular, we explore recent classes of DNN models wherein the majority of connections are randomized or, more generally, fixed according to some specific heuristic, leading to the development of Fast and Deep Neural Network (FDNN) models. Examples of such systems are multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights implies a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Besides, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, allowing one to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers’ connections). In recent years, the study of randomized neural networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains.

This tutorial will cover all the major aspects regarding the design and analysis of Fast and Deep Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, the tutorial will first introduce the fundamentals of randomized neural models in the context of feedforward networks (i.e., Random Vector Functional Link and equivalent models), convolutional filters, and recurrent systems (i.e., Reservoir Computing networks). Then, it will focus specifically on recent results in the domain of deep randomized systems, and their application to structured domains (trees, graphs).
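The core idea can be sketched in a few lines: random input weights are frozen at initialization and only the linear readout is trained. The toy example below (plain Python; the hidden size, toy target and learning rate are all invented for illustration) fits a 1-D regression problem this way. Note that in practice the readout of RVFL/ELM-style models is usually obtained in closed form by (regularized) least squares rather than by the gradient descent used here.

```python
import math, random

random.seed(0)
H = 30                                     # hidden units, weights frozen after init
w_in = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(H)]  # (weight, bias)

def features(x):
    """Random, untrained hidden layer: only the readout over these is learned."""
    return [math.tanh(w * x + b) for (w, b) in w_in]

# Toy 1-D regression target on [-1, 1]
data = [(x / 20.0, math.sin(3 * x / 20.0)) for x in range(-20, 21)]

w_out = [0.0] * H                          # the ONLY trainable parameters

def mse():
    return sum((sum(wo * h for wo, h in zip(w_out, features(x))) - y) ** 2
               for x, y in data) / len(data)

loss_before = mse()
lr = 0.05
for _ in range(300):                       # plain gradient descent on the readout
    for x, y in data:
        h = features(x)
        err = sum(wo * hi for wo, hi in zip(w_out, h)) - y
        for j in range(H):
            w_out[j] -= lr * 2 * err * h[j] / len(data)
loss_after = mse()
print(f"readout-only training: MSE {loss_before:.4f} -> {loss_after:.4f}")
```

Because the hidden layer never changes, training reduces to a convex linear problem over `w_out`, which is where the "fast" in FDNN comes from.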

Tutorial Presenters (names with affiliations):

Claudio Gallicchio, University of Pisa (Italy)

Simone Scardapane, La Sapienza University of Rome (Italy)

Tutorial Presenters’ Bios:

Claudio Gallicchio is Assistant Professor at the Department of Computer Science, University of Pisa. He is Chair of the IEEE CIS Task Force on Reservoir Computing, and a member of the IEEE CIS Data Mining and Big Data Analytics Technical Committee and of the IEEE CIS Task Force on Deep Learning. He has organized several events (special sessions and workshops) at major international conferences (including IJCNN/WCCI, ESANN, ICANN) on themes related to Randomized Neural Networks. He serves as a member of several program committees of conferences and workshops in Machine Learning and Artificial Intelligence, and has been an invited speaker at several national and international conferences. His research interests include Machine Learning, Deep Learning, Randomized Neural Networks, Reservoir Computing, Recurrent and Recursive Neural Networks, and Graph Neural Networks.

Simone Scardapane is Assistant Professor at the “Sapienza” University of Rome. He is active as a co-organizer of special sessions and special issues on themes related to Randomized Neural Networks and Randomized Machine Learning approaches. His research interests include Machine Learning, Neural Networks, Reservoir Computing and Randomized Neural Networks, Distributed and Semi-supervised Learning, Kernel Methods, and Audio Classification. He is an Honorary Research Fellow with the CogBID Laboratory, University of Stirling, Stirling, U.K., the co-organizer of the Rome Machine Learning & Data Science Meetup, which organizes monthly events in Rome, and a member of the advisory board for Codemotion Italy. He is also a co-founder of the Italian Association for Machine Learning, a not-for-profit organization with the aim of promoting machine learning concepts to the public. In 2017 he was certified as a Google Developer Expert for machine learning. Currently, he is the track director for the CNR-sponsored “Advanced School of AI”

(https://as-ai.org/governance/).

External website with more information on Tutorial (if applicable):

https://sites.google.com/view/fast-and-deep-neural-networks

Mechanisms of Universal Turing Machines: Vision, Audition, Natural Language, APFGP and Consciousness

Juyang Weng
juyang.weng@gmail.com

Abstract

Finite automata (i.e., finite-state machines) are taught in almost all electrical engineering programs. However, Turing machines, especially universal Turing machines (UTMs), have not been taught in many electrical engineering programs and have been dropped as a required course in many computer science and engineering programs. This has resulted in a major knowledge weakness among many people working on neural networks and AI, since without knowing UTMs, researchers have considered neural networks merely as general-purpose function approximators, not as general-purpose computers. This tutorial first briefly explains what a Turing machine is, what a UTM is, why a UTM is a general-purpose computer, and why Turing machines and UTMs are all symbolic and handcrafted. In contrast, a Developmental Network (DN) not only is a new kind of neural network, but also can learn to become a general-purpose computer by learning an emergent Turing machine. It does so by first taking a sequence of instructions as a user-provided program, along with the data for the program to run on, and then running the program on the data. Therefore, a universal Turing machine inside a DN emerges autonomously on the fly. It can perform Autonomous Programming For General Purposes (APFGP). The DN learns UTM transitions one at a time incrementally, without iterations, and refines UTM transitions from physical experience throughout the network’s lifetime. Consciousness, whether natural or artificial, requires APFGP.
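As a concrete reference point for the first part of the tutorial: a (non-universal) Turing machine is just a finite transition table plus an unbounded tape. The minimal simulator below is an illustrative sketch (not the DN material itself), running a classic machine that increments a binary number.

```python
def run_tm(transitions, tape, state="right", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine given as a transition table."""
    tape = dict(enumerate(tape))            # sparse tape: position -> symbol
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
        steps += 1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Binary increment: scan right to the end, then propagate the carry leftwards.
INC = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1",  0),
    ("carry", "_"): ("halt",  "1",  0),
}

print(run_tm(INC, "1011"))   # 1011 (11) + 1 = 1100 (12)
```

A universal Turing machine is then a single such table that reads another machine's table and input from its own tape and simulates it; the tutorial's claim is that a DN learns such transitions incrementally rather than having them handcrafted.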

Presenter Biographies:

Juyang Weng: Professor at the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program, Michigan State University, East Lansing, Michigan, USA. He is also a visiting professor at Fudan University, Shanghai, China. He received his BS degree from Fudan University in 1982, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in Computer Science. From August 2006 to May 2007, he was also a visiting professor at the Department of Brain and Cognitive Sciences of MIT. His research interests include computational biology, computational neuroscience, computational developmental psychology, biologically inspired systems, computer vision, audition, touch, behaviors, and intelligent robots. He has published over 300 research articles on related subjects, including task muddiness, intelligence metrics, mental architectures, emergent Turing machines, autonomous programming for general purposes (APFGP), vision, audition, touch, attention, detection, recognition, autonomous navigation, and natural language understanding. He, T. S. Huang and N. Ahuja published the first deep learning system for the 3D world, called Cresceptron, and a research monograph titled Motion and Structure from Image Sequences. He authored a book titled Natural and Artificial Intelligence: Introduction to Computational Brain-Mind. He is an editor-in-chief of the International Journal of Humanoid Robotics and an associate editor of the IEEE Transactions on Autonomous Mental Development. He has chaired and co-chaired several conferences, including the NSF/DARPA funded Workshop on Development and Learning 2000 (1st ICDL), 2nd ICDL (2002), 7th ICDL (2008), 8th ICDL (2009), and INNS NNN 2008.
He was the Chairman of the Governing Board of the International Conferences on Development and Learning (ICDLs) (2005-2007), Chairman of the Autonomous Mental Development Technical Committee of the IEEE Computational Intelligence Society (2004-2005), an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an associate editor of the IEEE Transactions on Image Processing. He was the General Chair of the AIML Contest 2016 and taught BMI 831, BMI 861 and BMI 871, which prepared the contestants for the AIML Contest session at IJCNN 2017 in Alaska. He is a Fellow of the IEEE.

Web: http://www.cse.msu.edu/~weng/

Instance Space Analysis for Rigorous and Insightful Algorithm Testing

Prof. Kate Smith-Miles
School of Mathematics and Statistics, The University of Melbourne, Australia
Email: kate.smithmiles@gmail.com

Dr. Mario Andrés Muñoz
School of Mathematics and Statistics, The University of Melbourne, Australia
Email: munoz.m@unimelb.edu.au

Abstract

Algorithm testing often consists of reporting on-average performance across a suite of well-studied benchmark instances. The conclusions drawn from testing therefore depend on the choice of benchmarks. Hence, test suites should be unbiased, challenging, and contain a mix of synthetic and real-world-like instances with diverse structure. Without diversity, the conclusions drawn about future expected algorithm performance are necessarily limited. Moreover, on-average performance often disguises the strengths and weaknesses of an algorithm for particular types of instances. In other words, the standard benchmarking approach has two limitations that affect the conclusions: (a) there is no mechanism to assess whether the selected test instances are unbiased and diverse enough; and (b) there is little opportunity to gain insights into the strengths and weaknesses of algorithms when these are hidden by on-average performance metrics.

This tutorial introduces Instance Space Analysis (ISA), a visual methodology for algorithm evaluation that reveals the relationships between the structure of an instance and its impact on performance. ISA offers an opportunity to gain more nuanced insights into algorithm strengths and weaknesses for various types of test instances, and to objectively assess the relative power of algorithms, unbiased by the choice of test instances. A space is constructed whereby an instance is represented as a point in a 2-D plane, with algorithm footprints being the regions of predicted good performance of an algorithm, based on statistical evidence. From this broad instance space, we can assess the diversity of a chosen test set. Moreover, through ISA we can identify regions where additional test instances would be valuable to support greater insights. By generating new instances with controllable properties, an algorithm can be “stress-tested” under all possible conditions. The tutorial makes use of the online tools available at the Melbourne Algorithm Test Instance Library with Data Analytics (MATILDA) and provides access to its MATLAB computational engine. MATILDA also provides a collection of ISA results and other meta-data, available for download, for several well-studied problems from optimization and machine learning, from previously published studies.
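The footprint idea can be illustrated with a deliberately simplified sketch: instances summarized by two features, two hypothetical algorithms whose performance depends on those features, and a footprint estimated as the grid cells of the 2-D space where one algorithm wins on a clear majority of instances. Everything below (feature ranges, performance models, thresholds) is invented for illustration; the actual ISA methodology infers the 2-D projection and footprints statistically from real performance data.

```python
import random
from collections import defaultdict

random.seed(1)
# Toy "instances", each summarised by two features. ISA projects real,
# high-dimensional feature vectors down to such a 2-D plane; here we
# start in 2-D for simplicity.
instances = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(500)]

# Hypothetical performance models: algorithm A likes low f1, B likes high f1.
def perf_A(f1, f2): return 1.0 - f1 + 0.1 * f2
def perf_B(f1, f2): return f1 + 0.1 * (1 - f2)

# Toy "footprint": cells of a coarse grid where an algorithm wins on >70%
# of the instances falling in that cell.
wins = defaultdict(lambda: [0, 0])       # cell -> [A wins, total]
for f1, f2 in instances:
    cell = (int(f1 * 5), int(f2 * 5))    # 5x5 grid over the instance space
    wins[cell][1] += 1
    if perf_A(f1, f2) > perf_B(f1, f2):
        wins[cell][0] += 1

footprint_A = sorted(c for c, (a, n) in wins.items() if n >= 5 and a / n > 0.7)
print("cells in A's footprint:", footprint_A)
```

Plotting such cells (here, the low-f1 side of the space) is what makes strengths and weaknesses visible that an on-average score would hide.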

Tutorial Presenters (names with affiliations):

Prof. Kate Smith-Miles
School of Mathematics and Statistics
The University of Melbourne, Australia

Dr. Mario Andrés Muñoz
School of Mathematics and Statistics
The University of Melbourne, Australia

Tutorial Presenters’ Bios:

Kate Smith-Miles is a Professor of Applied Mathematics in the School of Mathematics and Statistics at The University of Melbourne and holds a five-year Laureate Fellowship from the Australian Research Council. Prior to joining The University of Melbourne in September 2017, she was Professor of Applied Mathematics at Monash University, and Head of the School of Mathematical Sciences (2009-2014). Previous roles include President of the Australian Mathematical Society (2016-2018), and membership of the Australian Research Council College of Experts (2017-2019). Kate was elected Fellow of the Institute of Engineers Australia (FIEAust) in 2006, and Fellow of the Australian Mathematical Society (FAustMS) in 2008.

Kate obtained a B.Sc. (Hons) in Mathematics and a Ph.D. in Electrical Engineering, both from The University of Melbourne. Commencing her academic career in 1996, she has published 2 books on neural networks and data mining, and over 260 refereed journal and international conference papers in the areas of neural networks, optimization, data mining, and various applied mathematics topics. She has supervised 28 PhD students to completion and has been awarded over AUD$15 million in competitive grants, including 13 Australian Research Council grants and industry awards. MATILDA and the instance space analysis methodology have been developed through her 5-year Laureate Fellowship awarded by the Australian Research Council.

Mario Andrés Muñoz is a Researcher in Operations Research in the School of Mathematics and Statistics at the University of Melbourne. Before joining the University of Melbourne, he was a Research Fellow in Applied Mathematics at Monash University (2014-2014), and a Lecturer at the Universidad del Valle, Colombia (2008-2009).

Mario Andrés obtained a B.Eng. (2005) and a M.Eng. (2008) in Electronics from Universidad del Valle, Colombia, and a Ph.D. (2014) in Engineering from the University of Melbourne. He has published over 40 refereed journal and conference papers in optimization, data mining, and other inter-disciplinary topics. He has developed and maintains the MATLAB code that drives MATILDA.

External website with more information on Tutorial (if applicable):

https://matilda.unimelb.edu.au/matilda/WCCI2020

Multi-modality for Biomedical Problems: Theory and Applications

Dr. Sriparna Saha
Associate Professor, Department of Computer Science and Engineering
Indian Institute of Technology Patna
Email: sriparna@iitp.ac.in

Mr. Pratik Dutta
PhD Research Scholar, Department of Computer Science and Engineering
Indian Institute of Technology Patna
Email: pratik.pcs16@iitp.ac.in

Abstract

With the exploration of omics technologies, researchers are able to collect high-throughput biomedical data. The explosion of these new frontier omics technologies produces diverse genomic datasets such as microarray gene expression, miRNA expression, DNA sequences, 3D structures, etc. These different representations (modalities) of biomedical data contain distinct, useful and complementary information about the same samples. As a consequence, there is a growing interest in collecting “multi-modal” data for the same set of subjects and integrating this heterogeneous information to obtain more profound insights into the underlying biological system. This tutorial will discuss in detail different problems of bioinformatics and the concept of multimodality in bioinformatics. In recent years, different machine learning and deep learning based approaches have become popular for dealing with multimodal data. Drawing on the above facts, this tutorial is a roadmap of existing deep multi-modal architectures for solving different computational biology problems. It will be an advanced survey equally of interest to academic researchers and industry practitioners, and is very timely given the vibrant research in the computational biology domain over the past 5 years. As IEEE WCCI is a prestigious conference for the discussion of neural network frontiers, this tutorial is very much relevant to IEEE WCCI.

Tutorial Presenters (names with affiliations): 

  1. Sriparna Saha, Associate Professor, Department of Computer Science and Engineering, Indian Institute of Technology Patna
  2. Pratik Dutta, PhD Research Scholar, Department of Computer Science and Engineering, Indian Institute of Technology Patna

Tutorial Presenters’ Bios: 

  1. Sriparna Saha: Sriparna Saha received the M.Tech and Ph.D. degrees in computer science from the Indian Statistical Institute, Kolkata, India, in 2005 and 2011, respectively. She is currently an Associate Professor in the Department of Computer Science and Engineering, Indian Institute of Technology Patna, India. Her current research interests include machine learning, multi-objective optimization, evolutionary techniques, text mining and biomedical information extraction. She is the recipient of the Google India Women in Engineering Award 2008, the NASI Young Scientist Platinum Jubilee Award 2016, the BIRD Award 2016, the IEI Young Engineers’ Award 2016, the SERB Women in Excellence Award 2018 and the SERB Early Career Research Award 2018. She is also the recipient of the DUO-India Fellowship 2020, the Humboldt Research Fellowship 2016, the Indo-U.S. Fellowship for Women in STEMM (WISTEMM) Women Overseas Fellowship 2018 and a CNRS fellowship. She also received an India4EU fellowship of the European Union to work as a Post-doctoral Research Fellow at the University of Trento, Italy, from September 2010 to January 2011, and an Erasmus Mundus Mobility with Asia (EMMA) fellowship of the European Union to work as a Post-doctoral Research Fellow at Heidelberg University, Germany, from September 2009 to June 2010. She has visited the University of Caen, France, as a visiting scientist (October 2013, December 2013, May-July 2014 and May-June 2015); the University of Mainz, Germany, as a visiting scientist (April-September 2016, April-August 2017); the University of Kyoto, Japan, as a visiting professor (June-July 2018); the University of California, San Diego, as a visiting scientist (December 2018-February 2019); and Dublin City University, Ireland, as a visiting scientist (July 2019).
She won best paper awards at the CLINICAL-NLP workshop of COLING 2016, IEEE-INDICON 2015, and the International Conference on Advances in Computing, Communications and Informatics (ICACCI 2012). According to Google Scholar, her citation count is 3488 with an h-index of 24. For more information please refer to: www.iitp.ac.in/~sriparna.
  2. Pratik Dutta: Pratik Dutta is currently working as a PhD scholar in the Department of Computer Science and Engineering at the Indian Institute of Technology Patna. He received his BE and ME degrees from the Indian Institute of Engineering Science and Technology, Shibpur, in 2013 and 2015, respectively. His research interests lie in computational biology, genomic sequences, protein-protein interaction, machine learning and deep learning techniques. He is the recipient of the Visvesvaraya PhD research fellowship, an initiative of the Ministry of Electronics and Information Technology (MeitY), Government of India (GoI). According to Google Scholar, his citation count is 34 with an h-index of 4. For the last four years, he has extensively explored various frontiers of computational biology, most precisely protein-protein interaction identification. He has published various research articles in prestigious fora such as Computers in Biology and Medicine, IEEE/ACM Transactions on Computational Biology and Bioinformatics, IEEE Journal of Biomedical and Health Informatics, Nature Scientific Reports, etc.

External website with more information on Tutorial (if applicable):

https://www.iitp.ac.in/~sriparna/WCCI.html

RANKING GAME: How to combine human and computational intelligence?
(A Cross-disciplinary tutorial)

Organizer: Péter Érdi (Henry Luce Professor of Complex Systems Studies, Kalamazoo College;
Wigner Research Centre for Physics, Budapest)
perdi@kzoo.edu
http://people.kzoo.edu/~perdi/

Goal:

Comparison, ranking and even rating are fundamental features of human nature. The goal of this tutorial is to explain, in an integrative way, the evolutionary, psychological, institutional and algorithmic aspects of ranking. Since we humans (1) love lists, (2) are competitive and (3) are jealous of other people, we like ranking. The practice of ranking is studied in social psychology and political science, and the algorithms of ranking in computer science. Initial results on the possible neural and cognitive architectures behind rankings are also reviewed.

The tutorial is based on the organizer’s book:

Ranking: The Unwritten Rules of the Social Game We All Play, Oxford University Press, 2020
https://global.oup.com/academic/product/ranking-9780190935467?cc=us&lang=en

Tentative plan:
1. Why do we rank? How do we rank?
1.1 Comparison, ranking and rating
1.2 The social psychology of ranking
1.3 Biased ranking
1.4 Social ranking
Social ranking in animal societies
Pecking order
1.5 Struggle for reputation
2. Ranking in everyday life
2.1 Ranking countries
2.2 Ranking universities
3. A success story: ranking the web
3.1 PageRank and its variations
3.2. Rank reversal
3.3 Webometrics
4. Scientific journals and scientists
4.1 Publish and perish, but where? Impact factor, the fading superstar
4.2 h-index and its variations
5. Cognitive architectures for ranking: are lists perfectly designed for our brain?
6. Ranking, rating and everything else. The mystery of the future: how to combine
human and computational intelligence?
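As a taste of item 3.1: PageRank repeatedly redistributes a rank vector along the links of a graph, mixing in a uniform "random jump" via a damping factor (0.85 is the conventional default). The sketch below uses a made-up four-page web chosen for illustration.

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank over an adjacency dict {node: [outlinks]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:                      # dangling node: spread rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:                             # otherwise split rank among outlinks
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))   # C collects the most links
```

The ranks always sum to one, and "rank reversal" (item 3.2) can be explored by adding or removing links and watching the ordering change.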

Bio:

Dr. Péter Érdi serves as the Henry R. Luce Professor of Complex Systems Studies at Kalamazoo College. He is also a research professor in his home town, Budapest, at the Wigner Research Centre for Physics. In addition, he is the founding co-director of the Budapest Semester in Cognitive Science, a study abroad program. Péter is a Member of the Board of Governors of the International Neural Network Society, its past Vice President, and, among other roles, the past Editor-in-Chief of Cognitive Systems Research. He served as the Honorary Chair of IJCNN 2019 and is now serving as an IJCNN Technical Chair of the IEEE World Congress on Computational Intelligence in Glasgow. His books on the mathematical modeling of chemical, biological, and other complex systems have been published by Princeton University Press, MIT Press, and Springer. His new book RANKING: The Unwritten Rules of the Social Game We All Play was published recently by Oxford University Press and is already being translated into several languages. See also aboutranking.com

Venue: With WCCI 2020 being held as a virtual conference, there will be a virtual experience of Glasgow, Scotland, accessible through the virtual platform. We hope that everyone will have a chance to visit one of Europe’s most dynamic cultural capitals and the “World’s Friendliest City” soon in the future!