Schedule of tutorials - 19th July, 2020

11:30 - 13:30
14:00 - 16:00
16:30 - 18:30
19:00 - 21:00
11:30 - 13:30
Tutorial Title Presenter Conference Contact Email
RANKING GAME: How to combine human and computational intelligence? Peter Erdi WCCI Peter.Erdi@kzoo.edu
Adversarial Machine Learning: On The Deeper Secrets of Deep Learning Danilo V. Vargas IJCNN vargas@inf.kyushu-u.ac.jp
From brains to deep neural networks Saeid Sanei, Clive Cheong Took IJCNN Clive.CheongTook@rhul.ac.uk
Deep randomized neural networks Claudio Gallicchio, Simone Scardapane IJCNN gallicch@di.unipi.it
Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Evolution Nikola Kasabov IJCNN nkasabov@aut.ac.nz
Fundamentals of Fuzzy Networks Alexander Gegov, Farzad Arabikhan FUZZ alexander.gegov@port.ac.uk
Pareto Optimization for Subset Selection: Theories and Practical Algorithms Chao Qian, Yang Yu CEC qianc@lamda.nju.edu.cn
Selection, Exploration and Exploitation Stephen Chen, James Montgomery CEC sychen@yorku.ca
Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler Carola Doerr, Thomas Bäck, Ofer Shir, Hao Wang CEC h.wang@liacs.leidenuniv.nl
Computational Complexity Analysis of Genetic Programming Pietro S. Oliveto, Andrei Lissovoi CEC p.oliveto@sheffield.ac.uk
Self-Organizing Migrating Algorithm - Recent Advances and Progress in Swarm Intelligence Algorithms Roman Senkerik CEC senkerik@utb.cz
Visualising the search process of EC algorithms Su Nguyen, Yi Mei, and Mengjie Zhang CEC P.Nguyen4@latrobe.edu.au
14:00 - 16:00
Tutorial Title Presenter Conference Contact Email
Instance Space Analysis for Rigorous and Insightful Algorithm Testing Kate Smith-Miles, Mario Andres Munoz Acosta WCCI munoz.m@unimelb.edu.au
Advances in Deep Reinforcement Learning Thanh Thi Nguyen, Vijay Janapa Reddi, Ngoc Duy Nguyen IJCNN thanh.nguyen@deakin.edu.au
Machine learning for data streams in Python with scikit-multiflow Jacob Montiel, Heitor Gomes, Jesse Read, Albert Bifet IJCNN heitor.gomes@waikato.ac.nz
Deep Learning for Graphs Davide Bacciu IJCNN bacciu@di.unipi.it
Explainable-by-Design Deep Learning: Fast, Highly Accurate, Weakly Supervised, Self-evolving Plamen Angelov IJCNN p.angelov@lancaster.ac.uk
Cartesian Genetic Programming and its Applications Lukas Sekanina, Julian Miller CEC sekanina@fit.vutbr.cz
Large-Scale Global Optimization Mohammad Nabi Omidvar, Daniel Molina CEC M.N.Omidvar@leeds.ac.uk 
Niching Methods for Multimodal Optimization Mike Preuss, Michael G. Epitropakis, Xiaodong Li CEC m.epitropakis@gmail.com
A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms Pietro S. Oliveto CEC p.oliveto@sheffield.ac.uk
Theoretical Foundations of Evolutionary Computation for Beginners and Veterans Darrell Whitley CEC darrell.whitley@gmail.com
Evolutionary Computation for Dynamic Multi-objective Optimization Problems Shengxiang Yang CEC syang@dmu.ac.uk
16:30 - 18:30
Tutorial Title Presenter Conference Contact Email
New and Conventional Ensemble Methods José Antonio Iglesias, María Paz Sesmero Lorente, Araceli Sanchis de Miguel WCCI jiglesia@inf.uc3m.es
Evolution of Neural Networks Risto Miikkulainen IJCNN risto@cs.utexas.edu
Mechanisms of Universal Turing Machines in Developmental Networks for Vision, Audition, and Natural Language Understanding Juyang Weng IJCNN juyang.weng@gmail.com
Generalized constraints for knowledge-driven-and-data-driven approaches Baogang Hu IJCNN hubaogang@gmail.com
Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting P N Suganthan IJCNN EPNSugan@ntu.edu.sg
Using intervals to capture and handle uncertainty Christian Wagner, Vladik Kreinovich, Josie McCulloch, Zack Ellerby FUZZ Christian.Wagner@nottingham.ac.uk
Fuzzy Systems for Neuroscience and Neuro-engineering Applications Javier Andreu, CT Lin FUZZ javier.andreu@essex.ac.uk
Evolutionary Algorithms and Hyper-Heuristics Nelishia Pillay CEC npillay@cs.up.ac.za
Recent Advances in Particle Swarm Optimization Analysis and Understanding Andries Engelbrecht, Christopher Cleghorn CEC engel@sun.ac.za
Recent Advances in Landscape Analysis for Optimisation Katherine Malan, Gabriela Ochoa CEC malankm@unisa.ac.za
Evolutionary Machine Learning Masaya Nakata, Shinichi Shirakawa, Will Browne CEC nakata-masaya-tb@ynu.ac.jp
Evolutionary Many-Objective Optimization Hisao Ishibuchi, Hiroyuki Sato CEC h.sato@uec.ac.jp
19:00 - 21:00
Tutorial Title Presenter Conference Contact Email
Multi-modality Helps in Solving Biomedical Problems: Theory and Applications Sriparna Saha, Pratik Dutta WCCI pratik.pcs16@iitp.ac.in
Probabilistic Tools for Analysis of Network Performance Věra Kůrková IJCNN vera@cs.cas.cz
Experience Replay for Deep Reinforcement Learning Abdulrahman Altahhan, Vasile Palade IJCNN A.Altahhan@leedsbeckett.ac.uk
Physics of the Mind Leonid I. Perlovsky IJCNN lperl@rcn.com
Deep Stochastic Learning and Understanding Jen-Tzung Chien IJCNN jtchien@nctu.edu.tw
Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems José M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena FUZZ josemaria.alonso.moral@usc.es
Evolving neuro-fuzzy systems in clustering and regression Igor Škrjanc, Fernando Gomide, Daniel Leite, Sašo Blažič FUZZ Igor.Skrjanc@fe.uni-lj.si
Differential Evolution Rammohan Mallipeddi, Guohua Wu CEC mallipeddi.ram@gmail.com
Bilevel optimization Ankur Sinha, Kalyanmoy Deb CEC asinha@iima.ac.in
Nature-Inspired Techniques for Combinatorial Problems Malek Mouhoub CEC Malek.Mouhoub@uregina.ca
Dynamic Parameter Choices in Evolutionary Computation Carola Doerr, Gregor Papa  CEC gregor.papa@ijs.si
Evolutionary computation for games: learning, planning, and designing Julian Togelius, Jialin Liu CEC liujl@sustech.edu.cn

Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting

Dr. P. N. Suganthan
Nanyang Technological University, Singapore.
Email: epnsugan@ntu.edu.sg

Websites: http://www.ntu.edu.sg/home/epnsugan/

https://github.com/P-N-Suganthan

http://scholar.google.com.sg/citations?hl=en&user=yZNzBU0AAAAJ&view_op=list_works&pagesize=100

This tutorial will first introduce the main randomization-based learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks, and kernel ridge regression. The popular instantiation of the feedforward type, the random vector functional link neural network (RVFL), originated in the early 1990s. Other feedforward methods are random weight neural networks (RWNN), extreme learning machines (ELM), etc. Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another paradigm is based on the kernel trick, such as kernel ridge regression, which includes randomization for scaling to large training data. The tutorial will also consider computational complexity with increasing scale of the classification/forecasting problems. Another randomization-based paradigm is the random forest, which exhibits highly competitive performance. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
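As a hedged illustration of the closed-form flavour of these methods (illustrative sizes and regularisation, not code from the tutorial), the sketch below trains an RVFL network: the hidden weights stay random and fixed, and only the linear readout over the hidden features plus the direct input links is solved by ridge regression.

```python
import numpy as np

# Hedged RVFL sketch: random, untrained hidden layer; closed-form readout.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                       # toy inputs
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(300, 1))

W_h = rng.uniform(-1.0, 1.0, size=(5, 50))          # random hidden weights (fixed)
b_h = rng.uniform(-1.0, 1.0, size=50)
H = np.tanh(X @ W_h + b_h)                          # random hidden features
D = np.hstack([H, X])                               # RVFL direct input-output links
beta = np.linalg.solve(D.T @ D + 1e-3 * np.eye(D.shape[1]), D.T @ y)
print(np.mean((D @ beta - y) ** 2))                 # training MSE of the readout
```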

 Key Papers:

 General Bio-sketch:

Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK in 1990, 1992 and 1994, respectively. He received an honorary doctorate (Doctor Honoris Causa) in 2020 from the University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept of Electrical Engineering, University of Sydney in 1995–96 and as a lecturer in the Dept of Computer Science and Electrical Engineering, University of Queensland in 1996–99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013–2018). He is/was an associate editor of Applied Soft Computing (Elsevier, 2018–), Neurocomputing (Elsevier, 2018–), IEEE Trans. on Cybernetics (2012–2018), IEEE Trans. on Evolutionary Computation (2005–), Information Sciences (Elsevier, 2009–), Pattern Recognition (Elsevier, 2001–) and the Int. J. of Swarm Intelligence Research (2009–). He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010–), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the IEEE Trans. on Evolutionary Computation outstanding paper award in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD dissertation award in 2014. His research interests include swarm and evolutionary algorithms, pattern recognition, big data, deep learning and applications of swarm, evolutionary and machine learning algorithms. His publications have been well cited: his SCI-indexed publications have attracted over 1000 SCI citations in each calendar year since 2013, and he was selected as a highly cited researcher in computer science by Thomson Reuters every year from 2015 to 2019. He served as the General Chair of IEEE SSCI 2013. He has been a member of the IEEE (S'90, M'92, SM'00, F'15) since 1991 and was an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014–2016. He is an IEEE CIS distinguished lecturer (DLP) for 2018–2020.

Brain-Inspired Spiking Neural Network Architectures for Deep, Incremental Learning and Knowledge Representation   

Prof. Nikola Kasabov
Director, Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand
Email: nkasabov@aut.ac.nz              

ABSTRACT

This two-hour tutorial demonstrates that the third generation of artificial neural networks, spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from the incoming data. Similarly to how the brain learns, these SNN models need not be restricted in the number of layers, neurons in each layer, etc., as they adopt the self-organising learning principles of the brain. The tutorial consists of three parts:

  1. Algorithms for deep, incremental learning in SNN.
  2. Algorithms for knowledge representation and for tracing the knowledge evolution in SNN over time from incoming data. Representing fuzzy spatio-temporal rules from SNN.
  3. Selected Applications

The material is illustrated on an exemplar SNN architecture, NeuCube (free and open-source software, along with a cloud-based version, is available from www.kedri.aut.ac.nz/neucube). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; Alzheimer's disease (AD) prediction; understanding depression; and predicting environmental hazards and extreme events.
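As background for readers new to SNN, here is a hedged, minimal sketch of the leaky integrate-and-fire (LIF) neuron on which spiking architectures of this kind build; all parameter values are illustrative assumptions, not NeuCube internals.

```python
import numpy as np

# Hedged LIF sketch: leaky membrane integration with threshold-and-reset.
def lif_spike_times(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for t, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leaky integration of the input
        if v >= v_thresh:                # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset                  # membrane potential resets
    return spikes

rng = np.random.default_rng(0)
print(lif_spike_times(rng.uniform(0.0, 0.2, size=200)))
```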

It is also demonstrated that brain-inspired SNN architectures, such as NeuCube, allow for knowledge transfer between humans and machines through building brain-inspired brain-computer interfaces (BI-BCI). These are used to understand human-to-human knowledge transfer through hyper-scanning and also to create brain-like neuro-rehabilitation robots. This opens the way to building a new type of AI systems: open and transparent AI.

Reference: N.Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.

 Presenter:     

Prof. Nikola Kasabov, Director, Knowledge Engineering and Discovery Research Institute,

Auckland University of Technology, Auckland, New Zealand, nkasabov@aut.ac.nz,

Biodata:

Professor Nikola Kasabov is Fellow of IEEE, Fellow of the Royal Society of New Zealand, Fellow of the INNS College of Fellows, DVF of the Royal Academy of Engineering UK. He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology, New Zealand. Kasabov is the Immediate Past President of the Asia Pacific Neural Network Society (APNNS) and Past President of the International Neural Network Society (INNS). He is member of several technical committees of IEEE Computational Intelligence Society and Distinguished Lecturer of IEEE (2012-2014). He is Editor of Springer Handbook of Bio-Neuroinformatics, Springer Series of Bio-and Neuro-systems and Springer journal Evolving Systems. He is Associate Editor of several journals, including Neural Networks, IEEE TrNN, Tr CDS, Information Sciences, Applied Soft Computing. Kasabov holds MSc and PhD from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics, neuroinformatics. He has published more than 620 publications. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: TU Sofia Bulgaria; University of Essex UK; University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University and CASIA China, Visiting Professor at ETH/University of Zurich and Robert Gordon University UK, Honorary Professor of Teesside University, UK; George Moore Professor of data analytics at the University of Ulster. Prof. Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; INNS Ada Lovelace Meritorious Service Award; NN Best Paper Award for 2016; APNNA ‘Outstanding Achievements Award’; INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; EU Marie Curie Fellowship; Bayer Science Innovation Award; APNNA Excellent Service Award;  RSNZ Science and Technology Medal; 2015 AUT Medal; Honorable Member of the Bulgarian, the Greek and the Scottish Societies for Computer Science. More information of Prof. Kasabov can be found from: http://www.kedri.aut.ac.nz/staff.

Generalized constraints for knowledge-driven-and-data-driven approaches

Bao-Gang (B.-G.) Hu
Professor, Senior Member of IEEE
National Laboratory of Pattern Recognition
Institute of Automation, Chinese Academy of Sciences, China
Email: hubaogang@gmail.com

Abstract

In this tutorial, I will start with a question about the existing studies on machine learning (ML) and artificial intelligence (AI): “Do we encounter any new mathematical, yet general, problem in ML and AI?”. The question is motivated by Kalman's (2008) dictum that “Once you get the physics right, the rest is mathematics”. The tutorial takes up the notion of “Generalized Constraints (GCs)” introduced by Zadeh (1986, 1996). We consider it a “new” mathematical problem because it remains largely unrecognized across the related communities.

The tutorial will focus on GCs in the context of knowledge-and-data-driven modeling (KDDM, called KD/DD by Brinkley, 1985) approaches. While Deep Learning (DL), as a data-driven (DD) approach, is successful in various application areas, we believe that KDDM approaches will be the next step in advancing the existing tools. Current Artificial Neural Networks (ANNs), including DL, are generally not ready to incorporate arbitrary types of prior knowledge and work as “black box” tools. To overcome these difficulties, we redefine GCs and discuss the problems in comparison with conventional constraints. We show that GCs appear more often, and more generally, in applications, and that they enlarge our study space through a novel mathematical problem, “Generalized Constraint Learning (GCL)”. Five open issues are presented to highlight missing, yet important, studies in ANNs.

The tutorial follows the previous one (Hu, 2017, http://www.escience.cn/people/hubaogang/IJCNN-2017.html) on “What to learn?” but stresses Transparent ANNs (tANN) as the learning target. We consider KDDM and GCs to be necessary solutions. Furthermore, the relations among tANNs, Interpretable ANNs (iANN) and Explainable ANNs (xANN) are redefined. Several numerical examples are demonstrated on supervised and unsupervised problems in relation to the five issues.

The objective of this tutorial is to highlight GC problems from a KDDM perspective for transparent/interpretable/explainable AI, rather than for specific applications.
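As a hedged toy example of folding knowledge-driven constraints into data-driven learning (an illustration of the general idea only, not the GCL formulation of the tutorial), the sketch below fits a linear model whose coefficients are known a priori to be non-negative, enforcing that prior as a soft penalty.

```python
import numpy as np

# Hedged sketch: data-driven least squares plus a knowledge-driven penalty
# lam * sum(min(w, 0)^2) that pushes weights to honour the prior w >= 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([0.5, 1.5, 0.2])                 # non-negative by assumption
y = X @ w_true + 0.1 * rng.normal(size=200)

w, lam, lr = np.zeros(3), 10.0, 0.01
for _ in range(500):
    grad_data = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the data term
    grad_know = 2.0 * lam * np.minimum(w, 0.0)     # gradient of the constraint term
    w -= lr * (grad_data + grad_know)
print(w)                                           # close to w_true, all >= 0
```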

Tutorial Presenters (names with affiliations): 

Bao-Gang (B.-G.) Hu, Professor, Senior Member of IEEE

National Laboratory of Pattern Recognition,
Institute of Automation,
Chinese Academy of Sciences, China

Tutorial Presenters’ Bios: 

Dr. Bao-Gang Hu is a Professor with NLPR (National Laboratory of Pattern Recognition), Institute of Automation, Chinese Academy of Sciences, Beijing, China. He received his M.S. degree from the University of Science and Technology Beijing, China in 1983, and his Ph.D. degree from McMaster University, Canada in 1993. From 2000 to 2005, he was the Chinese Director of LIAMA (the Chinese-French Joint Laboratory supported by CAS and INRIA). His current research interests are pattern recognition and computer modeling. He gave a tutorial at IJCNN-2017/ICONIP-2018 entitled “Information Theoretic Learning in Pattern Classification” (http://www.escience.cn/people/hubaogang/IJCNN-2017.html).

External website with more information on Tutorial (if applicable): I will provide the website including slides before June 15, 2020.

Explainable-by-Design Deep Learning: Fast, Highly Accurate, Weakly Supervised, Self-evolving

Prof Plamen Angelov, PhD, DSc, FIEEE
School of Computing and Communications, Lancaster University, UK
Vice President International Neural Network Society
e-mail: p.angelov@lancaster.ac.uk; www.lancs.ac.uk/staff/angelov

Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. Recent developments in this area range from accurately recognising images and speech to beating the best players in games like Chess, Go and Jeopardy. In such well-structured problems, ML and AI algorithms have been able to surpass human performance, acting autonomously. These breakthroughs were made possible by the dramatic increase in computational power and in the amount and ubiquity of available data. This data-rich environment, however, has led to the temptation to shortcut from raw data to solutions without gaining a deep insight into the underlying dependencies and causalities between the factors and the internal model structure. Even the most powerful (in terms of accuracy) algorithms, such as deep learning (DL), can give a wrong output, which may be fatal.

Recently, a crash by a driverless Uber car was reported, raising issues such as responsibility and the lack of transparency, which could help analyse the cause and prevent future crashes. Due to the opaque and cumbersome model structure used by DL, some authors have started to talk about a dystopian “black box” society. While having the true potential to revolutionize industries and the way we live, the recent breakthroughs in ML and AI have also raised many new questions and issues. These relate primarily to transparency, explainability, fairness, bias and the heavy dependence on large quantities of labeled training data.

Despite the success in this area, the way computers learn is still principally different from the way people acquire new knowledge, recognise objects and make decisions. Children during their sensory-motor development stage (the first two years of life) imitate observed activities and are able to learn from one or a few examples in “one-shot learning”. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired prototypes, not by using parametric analytical models. They can explain and pass aggregated knowledge to other humans. They predict based on rules they formulate using prototypes and examples.

Current ML approaches are focused primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort needed to collect and label training data, and rely on assumptions about the data distribution that are often not satisfied. For example, the widely used assumption that the validation data has the same distribution as the training data is usually not satisfied in reality and is a main reason for poor performance. The typical assumption for classification, that all validation data come from the same classes as the training data, may also be incorrect, because it does not consider scenarios in which new classes appear: for example, a driverless car confronted with a scene that never occurred in the training data, or a new type of malware or attack in the cybersecurity domain. In such scenarios, the best existing approach, transfer learning, requires a heavy and long process of training with huge amounts of labeled data; while driving in real time, the car will be helpless, and in cybersecurity it is not possible to pre-train for all possible attacks and viruses. Therefore, the ability to detect the unseen and unexpected and to start learning these new classes in real time, with little or no supervision, is critically important and is something that no currently existing classifier can offer. Another big problem with existing ML algorithms is that they ignore the semantic meaning, explainability and reasoning aspects of the solutions they propose. The challenge is to fill this gap between high accuracy and semantically meaningful solutions.

The most efficient algorithms that have fueled the recent interest in ML and AI are also computationally very hungry: they require specific hardware accelerators such as GPUs, huge amounts of labeled data, and time. They produce parameterised models with hundreds of millions of coefficients, which are impossible for a human to interpret or manipulate. Once trained, such models are inflexible to new knowledge. They cannot dynamically evolve their internal structure to start recognising new classes; they are good only for what they were originally trained for. They also lack robustness, formal guarantees about their behaviour, and explanatory and normative transparency. This makes the use of such algorithms problematic in high-stakes, complex problems such as aviation, health and bail decisions, where the clear rationale for a particular decision is very important and errors are very costly.

All these challenges and identified gaps require a dramatic paradigm shift and a radically new approach. In this tutorial the speaker will present such a new approach towards the next generation of computationally lean ML and AI algorithms that can learn in real time using normal CPUs on computers, laptops and smartphones, or even be implemented on a chip, and that will dramatically change the way these new technologies are applied. It will open a huge market of truly intelligent devices that can learn lifelong, improve their performance, and adapt to the user's demands. This approach is called anthropomorphic, because it shares characteristics with the way people learn, aggregate, articulate and exchange knowledge. It is explainable-by-design. It addresses the open research challenge of developing highly efficient, accurate ML algorithms and AI models that are transparent, interpretable, explainable and fair by design. Such systems are able to self-learn lifelong and continuously improve without complete re-training, can start learning from a few training data samples, explore the data space, detect and learn from unseen data patterns, and collaborate seamlessly with humans or with other such algorithms.
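As a hedged, simplified illustration of the prototype-based, explainable-by-design idea (not the speaker's xDNN; see reference [8] below for the actual method), a classifier can summarise each class by a few prototypes and justify every prediction by pointing to the nearest one.

```python
import numpy as np

# Hedged sketch: nearest-prototype classification; the explanation for a
# prediction is simply "closest to this stored example of class c".
def fit_prototypes(X, y, per_class=3, seed=0):
    rng = np.random.default_rng(seed)
    protos = {}
    for c in np.unique(y):
        Xc = X[y == c]
        idx = rng.choice(len(Xc), size=min(per_class, len(Xc)), replace=False)
        protos[c] = Xc[idx]                        # exemplars stand in for the class
    return protos

def predict(protos, x):
    dist = {c: np.min(np.linalg.norm(P - x, axis=1)) for c, P in protos.items()}
    return min(dist, key=dist.get)                 # label of the nearest prototype

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(predict(fit_prototypes(X, y), np.array([3.5, 4.0])))  # -> 1
```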

References:

[1] P. P. Angelov, X. Gu, Toward anthropomorphic machine learning, IEEE Computer, 51(9):18–27, 2018.
[2] P. Angelov, X. Gu, Empirical Approach to Machine Learning, Studies in Computational Intelligence, vol. 800, ISBN 978-3-030-02383-6, Springer, Cham, Switzerland, 2018.
[3] P. P. Angelov, X. Gu, Deep rule-based classifier with human-level performance and characteristics, Information Sciences, vol. 463–464, pp. 196–213, Oct. 2018.
[4] P. Angelov, X. Gu, J. Principe, Autonomous learning multi-model systems from data streams, IEEE Transactions on Fuzzy Systems, 26(4):2213–2224, Aug. 2018.
[5] P. Angelov, X. Gu, J. Principe, A generalized methodology for data analysis, IEEE Transactions on Cybernetics, 48(10):2981–2993, Oct. 2018.
[6] X. Gu, P. Angelov, C. Zhang, P. Atkinson, A massively parallel deep rule-based ensemble classifier for remote sensing scenes, IEEE Geoscience and Remote Sensing Letters, vol. 15(3), pp. 345–349, 2018.
[7] P. Angelov, Autonomous Learning Systems: From Data Streams to Knowledge in Real time, John Wiley and Sons, Dec. 2012, ISBN: 978-1-1199-5152-0.
[8] P. Angelov, E. Soares, Towards Explainable Deep Neural Networks, xDNN, arXiv:1912.02523, 5 December 2019 (publication of the week at Deepai.org, https://deepai.org/research).

Biographical data of the speaker:

Prof. Angelov (MEng 1989, PhD 1993, DSc 2015) is a Fellow of the IEEE, of the IET and of the HEA. He is Vice President of the International Neural Networks Society (INNS) for Conferences and Governor of the Systems, Man and Cybernetics Society of the IEEE. He has 30 years of professional experience in high-level research and holds a Personal Chair in Intelligent Systems at Lancaster University, UK. In 2010 he founded the Intelligent Systems Research group, which he led until 2014, when he founded the Data Science group at the School of Computing and Communications; before going on sabbatical in 2017 he established the LIRA (Lancaster Intelligent, Robotic and Autonomous systems) Research Centre (www.lancaster.ac.uk/lira), which includes over 30 academics across different Faculties and Departments of the University. He is a founding member of the Data Science Institute and of the CyberSecurity Academic Centre of Excellence at Lancaster. He has authored or co-authored 300 peer-reviewed publications in leading journals and peer-reviewed conference proceedings, 3 granted patents (plus 3 filed applications), and 3 research monographs (Wiley, 2012 and Springer, 2002 and 2018), cited 9000+ times, with an h-index of 49 and an i10-index of 160. His single most cited paper has 960 citations. He has an active research portfolio in computational intelligence and machine learning, with internationally recognised results in online and evolving learning and in algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems. Prof. Angelov leads numerous projects (including several multimillion ones) funded by UK research councils, the EU, industry and the UK MoD. His research was recognised by 'The Engineer Innovation and Technology 2008 Special Award' and by 'For outstanding Services' awards (2013) from the IEEE and the INNS. He is also the founding co-Editor-in-Chief of Springer's journal Evolving Systems and Associate Editor of several leading international scientific journals, including IEEE Transactions on Fuzzy Systems (the IEEE Transactions with the highest impact factor) and IEEE Transactions on Systems, Man and Cybernetics, as well as several others such as Applied Soft Computing, Fuzzy Sets and Systems, and Soft Computing. He has given over a dozen plenary and keynote talks at high-profile conferences. Prof. Angelov was General co-Chair of a number of high-profile conferences, including IJCNN 2013, Dallas, TX; IJCNN 2015, Killarney, Ireland; the inaugural INNS Conference on Big Data, San Francisco; the 2nd INNS Conference on Big Data, Thessaloniki, Greece; and a series of annual IEEE Symposia on Evolving and Adaptive Intelligent Systems. Dr Angelov is the founding Chair of the Technical Committee on Evolving Intelligent Systems, SMC Society of the IEEE, and previously chaired the Standards Committee of the Computational Intelligence Society of the IEEE (2010–2012). He has also been a member of the International Program Committee of over 100 international conferences (primarily IEEE). More details can be found at http://www.lancs.ac.uk/staff/angelov

Deep Learning for Graphs

Davide Bacciu
Università di Pisa
Email: bacciu@di.unipi.it

Abstract

The tutorial will introduce the lively field of deep learning for graphs and its applications. Dealing with graph data requires learning models capable of adapting to structured samples of varying size and topology, capturing the relevant structural patterns to perform predictive and explorative tasks while maintaining the efficiency and scalability necessary to process large scale networks. The tutorial will first introduce foundational aspects and seminal models for learning with graph structured data. Then it will discuss the most recent advancements in terms of deep learning for network and graph data, including learning structure embeddings, graph convolutions, attentional models and graph generation.
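For concreteness, here is a hedged numpy sketch of a single graph convolution in the widely used form H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W) of Kipf and Welling; shapes are illustrative assumptions, not the tutorial's reference code.

```python
import numpy as np

# Hedged sketch of one graph convolutional layer with self-loops and
# symmetric degree normalization.
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)              # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy path graph
H = np.random.default_rng(0).normal(size=(3, 4))              # node features
W = np.random.default_rng(1).normal(size=(4, 2))              # layer weights
print(gcn_layer(A, H, W).shape)                               # (3, 2)
```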

Tutorial Presenters (names with affiliations):

Davide Bacciu (Università di Pisa)

Tutorial Presenters’ Bios:

Davide Bacciu is Assistant Professor at the Computer Science Department, University of Pisa. The core of his research is on Machine Learning (ML) and deep learning models for structured data processing, including sequences, trees and graphs. He is the PI of an Italian National project on ML for structured data and the Coordinator of the H2020-RIA project TEACHING (2020-2023). He has been teaching courses on Artificial Intelligence (AI) and ML at undergraduate and graduate levels since 2010. He is an IEEE Senior Member, a member of the IEEE NN Technical Committee and of the IEEE CIS Task Force on Deep Learning. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. Since 2017 he has been the Secretary of the Italian Association for Artificial Intelligence (AI*IA).

External website with more information on Tutorial (if applicable):
https://www.learning4graphs.org/activities/tutorials/wcci-2020

Fast and Deep Neural Networks

 Claudio Gallicchio
University of Pisa (Italy)
Email: gallicch@di.unipi.it

Simone Scardapane
La Sapienza University of Rome (Italy)
Email: simone.scardapane@uniroma1.it

 Abstract

Deep Neural Networks (DNNs) are a fundamental tool in the modern development of Machine Learning. Beyond the merits of properly designed training strategies, a great part of DNNs' success is undoubtedly due to the inherent properties of their layered architectures, i.e., to the introduced architectural biases. In this tutorial, we analyze how far we can go by relying almost exclusively on these architectural biases. In particular, we explore recent classes of DNN models wherein the majority of connections are randomized or more generally fixed according to some specific heuristic, leading to the development of Fast and Deep Neural Network (FDNN) models. Examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights implies a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Besides, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, allowing one to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers' connections). In recent years, the study of randomized neural networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains.

This tutorial will cover all the major aspects regarding the design and analysis of Fast and Deep Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, the tutorial will first introduce the fundamentals of randomized neural models in the context of feedforward networks (i.e., Random Vector Functional Link and equivalent models), convolutional filters, and recurrent systems (i.e., Reservoir Computing networks). Then, it will focus specifically on recent results in the domain of deep randomized systems, and their application to structured domains (trees, graphs).
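As a hedged sketch of the recurrent case (Reservoir Computing / Echo State Networks), the reservoir weights below are random and fixed, and only the linear readout is trained in closed form; sizes, scaling and the toy task are illustrative assumptions.

```python
import numpy as np

# Hedged ESN sketch: fixed random reservoir, ridge-regression readout.
rng = np.random.default_rng(0)
n_res, washout = 100, 50
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius < 1 (echo state)

u = np.sin(np.arange(500) * 0.1)[:, None]     # toy input sequence
y = np.roll(u, -1)                            # target: next-step prediction
x, states = np.zeros(n_res), []
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W @ x)          # reservoir state update
    states.append(x.copy())
S, Y = np.array(states)[washout:], y[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ Y)  # readout
print(np.mean((S @ W_out - Y) ** 2))          # training MSE
```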

Tutorial Presenters (names with affiliations):

Claudio Gallicchio, University of Pisa (Italy)

Simone Scardapane, La Sapienza University of Rome (Italy)

Tutorial Presenters’ Bios:

Claudio Gallicchio is Assistant Professor at the Department of Computer Science, University of Pisa. He is Chair of the IEEE CIS Task Force on Reservoir Computing, and a member of the IEEE CIS Data Mining and Big Data Analytics Technical Committee and of the IEEE CIS Task Force on Deep Learning. Claudio Gallicchio has organized several events (special sessions and workshops) at major international conferences (including IJCNN/WCCI, ESANN, ICANN) on themes related to Randomized Neural Networks. He serves as a member of several program committees of conferences and workshops in Machine Learning and Artificial Intelligence. He has been an invited speaker at several national and international conferences. His research interests include Machine Learning, Deep Learning, Randomized Neural Networks, Reservoir Computing, Recurrent and Recursive Neural Networks, and Graph Neural Networks.

Simone Scardapane is Assistant Professor at the “Sapienza” University of Rome. He is active as a co-organizer of special sessions and special issues on themes related to Randomized Neural Networks and randomized Machine Learning approaches. His research interests include Machine Learning, Neural Networks, Reservoir Computing and Randomized Neural Networks, Distributed and Semi-supervised Learning, Kernel Methods, and Audio Classification. Simone Scardapane is an Honorary Research Fellow with the CogBID Laboratory, University of Stirling, Stirling, U.K. He is the co-organizer of the Rome Machine Learning & Data Science Meetup, which organizes monthly events in Rome, and a member of the advisory board for Codemotion Italy. He is also a co-founder of the Italian Association for Machine Learning, a not-for-profit organization that promotes machine learning concepts among the public. In 2017 he was certified as a Google Developer Expert for machine learning. Currently, he is the track director for the CNR-sponsored “Advanced School of AI”

(https://as-ai.org/governance/).

External website with more information on Tutorial (if applicable):

https://sites.google.com/view/fast-and-deep-neural-networks

Deep Stochastic Learning and Understanding

Jen-Tzung Chien
National Chiao Tung University
Email: jtchien@nctu.edu.tw

Website: http://chien.cm.nctu.edu.tw

Abstract

This tutorial addresses the advances in deep Bayesian learning for sequence data, which are ubiquitous in speech, music, text, image, video, web, communication and networking applications. Spatial and temporal contents are analyzed and represented to fulfill a variety of tasks ranging from classification, synthesis, generation, segmentation, dialogue, search, recommendation, summarization, answering, captioning, mining, translation and adaptation, to name a few. Traditionally, “deep learning” is taken to be a learning process where the inference or optimization is based on a real-valued deterministic model. The “latent semantic structure” in words, sentences, images, actions, documents or videos learned from data may not be well expressed or correctly optimized in mathematical logic or computer programs. The “distribution function” in a discrete or continuous latent variable model for spatial and temporal sequences may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models, including the recurrent neural network, sequence-to-sequence model, variational auto-encoder (VAE), attention mechanism, memory-augmented neural network, skip neural network, temporal difference VAE, stochastic neural network, stochastic temporal convolutional network, predictive state neural network, and policy neural network. Enhancing the prior/posterior representation is also addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in sequence data. Variational inference and sampling methods are formulated to tackle the optimization of complicated models. The embeddings, clustering or co-clustering of words, sentences or objects are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian learning and understanding. Finally, we will point out a number of directions and outlooks for future studies. This tutorial serves three objectives: to introduce novices to major topics within deep Bayesian learning, to motivate and explain a topic of emerging importance for natural language understanding, and to present a novel synthesis combining distinct lines of machine learning work.
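As one concrete ingredient of these models, the VAE maximises the evidence lower bound ELBO = E_q[log p(x|z)] − KL(q(z|x) ‖ p(z)); for a diagonal Gaussian encoder and a standard normal prior, the KL term has the closed form sketched below (a generic, hedged illustration, not code from the tutorial).

```python
import numpy as np

# Hedged sketch of the VAE KL term for q(z|x) = N(mu, diag(exp(log_var)))
# against the prior p(z) = N(0, I):
#   KL = 0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)
def kl_diag_gaussian(mu, log_var):
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

mu = np.array([[0.3, -0.1]])
log_var = np.array([[-0.5, 0.2]])
print(kl_diag_gaussian(mu, log_var))   # per-example KL penalty in the ELBO
```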

Tutorial Presenters (names with affiliations):

Jen-Tzung Chien, National Chiao Tung University, Taiwan

Tutorial Presenters’ Bios:

Jen-Tzung Chien is the Chair Professor at National Chiao Tung University, Taiwan. He held a Visiting Professor position at the IBM T. J. Watson Research Center, Yorktown Heights, NY, in 2010. His research interests include machine learning, deep learning, computer vision and natural language processing. Dr. Chien served as an associate editor of the IEEE Signal Processing Letters in 2008-2011, the general co-chair of the IEEE International Workshop on Machine Learning for Signal Processing in 2017, and a tutorial speaker at ICASSP in 2012, 2015 and 2017, INTERSPEECH in 2013 and 2016, COLING in 2018, and AAAI, ACL, KDD and IJCAI in 2019. He received the Best Paper Award of the IEEE Automatic Speech Recognition and Understanding Workshop in 2011 and the AAPM Farrington Daniels Award in 2018. He has published extensively, including the books “Bayesian Speech and Language Processing”, Cambridge University Press, 2015, and “Source Separation and Machine Learning”, Academic Press, 2018. He is currently serving as an elected member of the IEEE Machine Learning for Signal Processing Technical Committee.

External website with more information on Tutorial:

http://chien.cm.nctu.edu.tw/home/wcci-tutorial

Physics of The Mind

Leonid I. Perlovsky
Harvard University
Email: lperl@rcn.com

Abstract

What is physics of the mind? Is it possible? Physics of the mind uses the methodology of physics for extending neural networks towards more realistic modeling of the mind, from perception through the entire mental hierarchy, including language, higher cognition and emotions. The presentation focuses on mathematical models of the fundamental principles of the mind-brain neural mechanisms and practical applications in several fields. Big data and autonomous learning algorithms are discussed for cybersecurity, gene-phenotype associations, medical applications to disease diagnostics, financial predictions, data mining in distributed databases, learning of patterns under noise, and the interaction of language and cognition in the mental hierarchy. Mathematical models of the mind-brain are discussed for mechanisms of concepts, emotions, instincts, behavior, language, cognition, intuitions, the conscious and unconscious, abilities for symbols, and the functions of the beautiful and of musical emotions in cognition and evolution. This new area of science was created recently and has won national and international awards.

A mathematical and cognitive breakthrough, dynamic logic, is described. It models cognitive processes “from vague and unconscious to crisp and conscious,” from vague representations, plans and thoughts to crisp ones. It has resulted in more than 100-fold improvements in several engineering applications; brain imaging experiments at Harvard Medical School and several labs around the world proved it to be a valid model for various brain-mind processes. New cognitive and mathematical principles are discussed: language-cognition interaction, the function of music in cognition, and the co-evolution of music and cultures. How does language interact with cognition? Do we think using language, or is language just a label for completed thoughts? Why has the ability for music evolved from animal cries to Bach and Elvis? I briefly review past mathematical difficulties of computational intelligence and the new mathematical techniques of dynamic logic and the neural networks implementing it, which overcome past limitations. Dynamic logic reveals the role of unconscious mechanisms, which will lead to a revolution in psychology.

The presentation discusses the cognitive functions of emotions: why human cognition needs emotions of the beautiful, music and the sublime. Dynamic logic is related to the knowledge instinct and the language instinct; why are they different? How do languages affect the evolution of cultures? Language networks are scale-free and small-world; what does this tell us about cultural values? What are the biases of English, Spanish, French, German, Arabic and Chinese, and what is the role of language in cultural differences?

Relations between cognition, language and music are discussed. Mathematical models of the mind and cultures bear on the contemporary world, and may be used to improve mutual understanding among peoples around the globe and reduce tensions among cultures.

Leonid I. Perlovsky
Harvard University, lperl@rcn.com

Bio

Dr. Leonid Perlovsky is Visiting Professor at Harvard University School of Engineering and Applied Science and Harvard University Medical School, Professor at Northeastern University Psychology and Northeastern University Engineering, Professor at St. Petersburg Polytechnic University, CEO of LPIT, and past Principal Research Physicist and Technical Advisor at the Air Force Research Laboratory (AFRL). He leads research projects on neural networks, modeling the mind, and cognitive algorithms for integration of sensor data with knowledge, multi-sensor systems, recognition, fusion, languages, aesthetic emotions, emotions of the beautiful, music cognition, and cultures. He developed dynamic logic, which overcame computational complexity in engineering and psychology. As Chief Scientist at Nichols Research, a $0.5B high-tech organization, he led the corporate research in intelligent systems and neural networks. He served as professor at Novosibirsk University and New York University, and as a principal in commercial startups developing tools for biotechnology, text understanding, and financial predictions. His company predicted the market crash following 9/11 a week before the event. He is invited as a keynote plenary speaker and tutorial lecturer worldwide, including at the most prestigious venues, such as the Nobel Forum, and has published more than 600 papers, 20 book chapters, and 8 books, including “Neural Networks and Intellect,” Oxford University Press, 2001 (currently in its 3rd printing), “Cognitive Emotional Algorithms,” Springer, 2011, and “Music: Passions and Cognitive Functions,” Academic Press, 2017. Dr. Perlovsky participates in organizing conferences on Neural Networks and CI, is Past Chair of the IEEE Boston CI Chapter, serves on the Editorial Boards of ten journals, including as Editor-in-Chief of “Physics of Life Reviews” (IF=13.84, Thomson Reuters rank #1 in the world), and is a past member of the INNS Board of Governors and Chair of the INNS Award Committee. He has received national and international awards, including the Gabor Award, the top engineering award from the INNS, and the John McLucas Award, the highest US Air Force award for basic research.

Accompanying text

I propose to include in course registration, as an option, a new book by Academic Press, “Music: Passions and Cognitive Functions” (a mixture of science and popular science, $49.90). This book addresses the entire mind, from basic principles to learning mechanisms to higher cognition, including mechanisms of musical emotions, the beautiful, and meaning, plus the co-evolution of culture and music. Aristotle, Kant and Darwin called music “the greatest mystery,” and contemporary evolutionary musicologists agree with Darwin: music is a millennial mystery, one that has only just been understood.

 

From Brain to Deep Neural Networks

Saeid Sanei
Nottingham Trent University, UK
Email: saeid.sanei@ntu.ac.uk

Clive Cheong Took
Royal Holloway, University of London, UK
Email: Clive.CheongTook@rhul.ac.uk

Abstract

The aim of this tutorial is to provide a stepping stone for machine learning enthusiasts into the area of brain pathway modelling using innovative deep learning techniques, through processing and learning from the electroencephalogram (EEG). An insight into EEG generation and processing will give the audience a better understanding of the deep network structures used to learn and detect insightful information about deep brain function.
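As a hedged hint of the kind of EEG preprocessing involved, the sketch below band-pass filters a toy signal to the alpha band before any learning stage; the sampling rate and band edges are illustrative assumptions, not the presenters' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                    # assumed EEG sampling rate (Hz)
b, a = butter(4, [8.0 / (fs / 2), 12.0 / (fs / 2)], btype="band")
eeg = np.random.default_rng(0).normal(size=int(10 * fs))   # toy 10 s channel
alpha_band = filtfilt(b, a, eeg)              # zero-phase band-pass, 8-12 Hz
```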

Tutorial Presenters

Saeid Sanei, Nottingham Trent University UK

Clive Cheong Took, Royal Holloway University of London UK

Biographies

Saeid Sanei is a full professor at Nottingham Trent University and a visiting professor at Imperial College London. He leads a group in which several young researchers work on EEG processing and its application to brain-computer interfaces (BCI). He has authored two research monographs on electroencephalogram (EEG) processing and pattern recognition. Saeid has delivered numerous workshops on EEG Signal Processing & Machine Learning with diverse applications all over the world, particularly in Europe, China, and Singapore.

Clive Cheong Took is a senior lecturer (associate professor) at Royal Holloway, University of London. Clive has a background in machine learning and has investigated its applications to biomedical problems for more than 10 years. He has been an associate editor for IEEE Transactions on Neural Networks and Learning Systems since 2013, and has co-organised special issues on deep learning for healthcare and security. At WCCI 2020, he will also co-organise a special session on Generative Adversarial Learning with Ariel Ruiz-Garcia, Vasile Palade, Jürgen Schmidhuber, and Danilo Mandic.

External Website

https://sites.google.com/view/wcci-2020-eeg/home

 

Adversarial Machine Learning: On The Deeper Secrets of Deep Learning

Danilo V. Vargas, Associate Professor
Faculty of Information Science and Electrical Engineering, Kyushu University
Email: vargas@inf.kyushu-u.ac.jp

Abstract

Recent research has found that Deep Neural Networks (DNN) behave strangely under slight changes to their input. This tutorial will discuss this curious and still poorly understood behavior. Moreover, it will dig deep into the meaning of this behavior and its links to the understanding of DNNs.

In this tutorial, I will explain the basic concepts underlying adversarial machine learning and briefly review the state of the art with many illustrations and examples. In the latter part of the tutorial, I will demonstrate how attacks are helping us understand the behavior of DNNs, and show how many proposed defenses fail to improve robustness. There are still many challenges and puzzles left unsolved; I will present some of them and delineate a couple of paths towards a solution. Lastly, the tutorial will close with an open discussion and the promotion of cross-community collaborations.
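As one canonical example of such an attack (the fast gradient sign method of Goodfellow et al.; the speaker's one-pixel attack is a different, black-box method), here is a hedged toy sketch on a logistic-regression "network", with all weights and inputs made up for illustration.

```python
import numpy as np

# Hedged FGSM sketch: perturb the input in the direction that increases
# the loss, x_adv = x + eps * sign(dL/dx), for a toy logistic model.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0, 0.5])            # fixed "trained" weights (toy)
x, y = np.array([0.2, 0.1, 0.4]), 1.0     # input and its true label

grad_x = (sigmoid(w @ x) - y) * w         # gradient of the loss w.r.t. x
x_adv = x + 0.1 * np.sign(grad_x)         # epsilon = 0.1 perturbation
print(sigmoid(w @ x), sigmoid(w @ x_adv)) # confidence in class 1 drops
```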

Tutorial Presenters (names with affiliations):

Danilo Vasconcellos Vargas, Associate Professor at Kyushu University

Tutorial Presenters’ Bios:

Danilo Vasconcellos Vargas is currently an Associate Professor at Kyushu University, Japan. His research interests span Artificial Intelligence (AI), evolutionary computation, complex adaptive systems, interdisciplinary studies involving or using an AI perspective, and AI applications. Many of his works were published in prestigious journals such as Evolutionary Computation (MIT Press), IEEE Transactions on Evolutionary Computation, and IEEE Transactions on Neural Networks and Learning Systems, with press coverage in news outlets such as BBC News. He has received awards such as the IEEE Excellent Student Award as well as scholarships to study in Germany and Japan for many years. Regarding his community activities, he has presented two tutorials at the renowned GECCO conference.

Regarding adversarial machine learning, he has given more than five invited talks on the subject, one of them at a workshop at CVPR 2019. He has authored more than 10 articles and three book chapters on adversarial machine learning, and one of his research outputs was covered by BBC News (the paper “One pixel attack for fooling deep neural networks”).

Currently, he leads the Laboratory of Intelligent Systems, which aims to build a new age of robust and adaptive artificial intelligence. More information can be found on his website, http://danilovargas.org, and his lab page, http://lis.inf.kyushu-u.ac.jp.

External website with more information on Tutorial (if applicable):

http://lis.inf.kyushu-u.ac.jp/wcci2020_tutorial.php

 

Machine Learning for Data Streams in Python with scikit-multiflow

Jacob Montiel
University of Waikato

Heitor Murilo Gomes
University of Waikato
Email: heitor.gomes@waikato.ac.nz

Jesse Read
École Polytechnique

Albert Bifet
University of Waikato

 Abstract

Data stream mining has gained a lot of attention in recent years as an exciting research topic. However, there is still a gap between pure research proposals and practical applications to real-world machine learning problems. In this tutorial we are going to introduce attendees to data stream mining procedures and examples of big data stream mining applications. Besides the theory, we will also present examples using scikit-multiflow, a novel open-source Python framework.
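As a hedged taste of the framework's API (assuming the class names of recent scikit-multiflow releases; earlier versions named the tree HoeffdingTree), the classic prequential test-then-train loop looks like this:

```python
from skmultiflow.data import SEAGenerator
from skmultiflow.trees import HoeffdingTreeClassifier

stream = SEAGenerator(random_state=1)        # synthetic data stream
learner = HoeffdingTreeClassifier()

n, correct = 0, 0
while n < 2000 and stream.has_more_samples():
    X, y = stream.next_sample()              # one instance at a time
    if learner.predict(X)[0] == y[0]:
        correct += 1                         # test first ...
    learner.partial_fit(X, y)                # ... then train (prequential)
    n += 1
print(f"prequential accuracy: {correct / n:.3f}")
```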

 Tutorial Presenters (names with affiliations): 

Jacob Montiel (University of Waikato), Heitor Murilo Gomes (University of Waikato), Jesse Read (École Polytechnique), Albert Bifet (University of Waikato)

 Tutorial Presenters’ Bios: 

Jacob Montiel

Jacob is a research fellow at the University of Waikato in New Zealand and the lead maintainer of scikit-multiflow. His research interests are in the field of machine learning for evolving data streams. Prior to focusing on research, Jacob led development work on onboard software for aircraft and engine prognostics at GE Aviation, working on the development of GE's Brilliant Machines, part of the IoT and GE's approach to Industrial Big Data.

Website: https://jacobmontiel.github.io/

Heitor Murilo Gomes

Heitor is a senior research fellow at the University of Waikato in New Zealand. His main research area is machine learning, especially evolving data streams, concept drift, ensemble methods and big data streams. He is an active contributor to the open data stream mining project MOA and a co-leader of the StreamDM project, a real-time analytics open-source software library built on top of Spark Streaming.

Website: https://www.heitorgomes.com

Jesse Read

Jesse is a Professor in the DaSciM team at LIX, École Polytechnique, France. His research interests are in the areas of Artificial Intelligence, Machine Learning, and Data Science and Mining. Jesse is the maintainer of the open-source software MEKA, a multi-label/multi-target extension to Weka.

Website: https://jmread.github.io/

Albert Bifet

Albert is a Professor at the University of Waikato and Télécom Paris. His research focuses on data stream mining, big data machine learning and Artificial Intelligence. The problems he investigates are motivated by large-scale data, the Internet of Things (IoT), and Big Data Science.

He co-leads the open source projects MOA (Massive On-line Analysis), Apache SAMOA (Scalable Advanced Massive Online Analysis) and StreamDM.

Website: http://albertbifet.com

 

External website with more information on Tutorial (if applicable): NA

Experience Replay for Deep Reinforcement Learning
A Comprehensive Review

Abdulrahman Altahhan
Leeds Beckett University, UK.

Vasile Palade
Coventry University, UK.

Primary contact: a.altahhan@leedsbeckett.ac.uk

Abstract

Reinforcement learning (RL) is expected to play an important role in our AI and machine learning era, as is evident from its latest major advances, particularly in games. This is due to its flexibility and arguably minimal designer intervention, especially when the feature extraction process is left to a robust model such as a deep neural network. Although deep learning has alleviated the long-standing burden of manual feature design, another important issue remains to be tackled: the experience-hungry nature of RL models, which is mainly due to bootstrapping and exploration. One important technique that will take centre stage in tackling this issue is experience replay. Naturally, it allows us to capitalise on already-gained experience and to shorten the time needed to train an RL agent. The frequency and depth of the replay can vary significantly, and a unifying view and clear understanding of the issues related to off-policy and on-policy replay are currently lacking. For example, on the far end of the spectrum, extensive experience replay, although it should conceivably help reduce the data intensity of the training period, when done naively puts significant constraints on the practicality of the model and requires both extra time and space that can grow significantly, rendering the method impractical. On the other hand, in its optimal form, whether it is a target re-evaluation or a re-update, when the importance sampling ratio uses bootstrapping, the method's computational requirements match those of model-based RL methods for planning.

In this tutorial we will tackle the issues and techniques related to the theory and application of deep reinforcement learning and experience replay, and how and when these techniques can be applied effectively to produce a robust model. In addition, we will promote a unified view of experience replay that involves replaying and re-evaluating the target updates. What is more, we will show that the generalised intensive experience replay method can be used to derive several important algorithms as special cases of other methods, including n-step true online TD and LSTD. This surprising but important view can immensely help the neuro-dynamic/RL community move this concept forward, and will benefit both researchers and practitioners in their quest for better and more practical RL methods and models.

Description

Deep reinforcement learning combined with experience replay allows us to capitalise on the gained experience, capping the model's appetite for new experience. Experience replay can be performed in several ways, some of which may or may not be suitable for the problem at hand. For example, intensive experience replay, if performed optimally, could shorten the learning cycle of an RL agent and allow it to be taken from the virtual arena, such as games and simulation, to the physical/mechanical arena, such as robotics. The type of intensive training required for RL models, which can be afforded by a virtual agent, may not be tolerated, or may at least not be desired, for a physical agent. Of course, one way to move to the mechanical world is by utilising model-free policy gradient (search) methods that are based on simulation and then migrating/mapping the model to the physical world. However, constructing a simulation of the physical world is a tedious process that requires extra time and calibration, making it impractical for the kind of pervasive RL models that we hope to achieve. In both cases, whether it is policy gradient or value function with policy iteration, experience replay plays an important role in making them more practical. For example, for policy gradient methods, adopting a softmax policy is preferable to the ε-greedy policy, as it can asymptotically approach a deterministic policy after some exploration, while ε-greedy will always have a fixed percentage of exploratory actions regardless of the maturity of the policy being developed/improved.

The tutorial is timely and novel in its treatment and packaging of the topic. It will lay the necessary foundation for a unified overview of the subject, which will allow other researchers to take it to the next level and will allow the subject area to take off on solid and unified grounds.

It turns out that extensive experience replay can be used as a generalised model from which several modern n-step reinforcement learning techniques can be deduced, offering an easy way to unify several popular reinforcement learning methods and giving rise to several new ones.

In this tutorial I will give a step-by-step, detailed overview of the framework that allows us to safely deploy replay methods.
The tutorial will review all major advances in RL replay algorithms and will categorise them into occasional replay, regular replay, and intensive regular replay.

Bellman equations are the foundation of individual RL updates; however, all the n-step aggregate methods that have driven the latest RL breakthroughs need a different treatment.
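For concreteness, the standard one-step Bellman equation and the n-step target it is contrasted with can be written as follows (textbook notation after Sutton and Barto [10]):

    \[
    v_\pi(s) = \sum_{a}\pi(a\mid s)\sum_{s',r}p(s',r\mid s,a)\,\bigl[r+\gamma\,v_\pi(s')\bigr]
    \]
    \[
    G_{t:t+n} = R_{t+1}+\gamma R_{t+2}+\cdots+\gamma^{n-1}R_{t+n}+\gamma^{n}\,\hat{v}(S_{t+n},\mathbf{w})
    \]

A one-step method bootstraps immediately from \(\hat{v}(S_{t+1},\mathbf{w})\), whereas the n-step target aggregates n rewards before bootstrapping; it is this aggregation that takes the analysis outside the reach of single-update Bellman arguments.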

The unified view through experience replay offers a new theoretical framework for studying the inner traits/properties of these techniques. For example, the convergence of several new RL algorithms can be proven by proving the convergence of the unified replay algorithm and then projecting each algorithm as a special case of the general method.

Outline
---------- First part (about one hour), by A. Altahhan ----------
• Deep Reinforcement Learning: a concise review
• Traditional Replay and Types of Replay: Occasional, Regular, Intensive, or Both
• Unified View of Experience Replay
o Replaying Past Updates vs Target Re-evaluation: How to Integrate Them
o Off-policy vs On-policy Experience Replay
o The Role of Importance Sampling and Bootstrapping
o Emphatic TD and its Cousins
o A Unifying Algorithm for Regular Intensive Sarsa(λ)-Replay
o N-step Methods as Special Cases of Experience Replay
o Policy Search Methods and Unified Replay
o Exploration, Exploitation and Replay
o Convergence of Replay Methods
---------- Second part (about one hour), by V. Palade ----------
• Practical Considerations for DRL and Experience Replay:
o To Replay or Not to Replay!
o Time and Space Complexity of Replay
o Combining Deep Learning and Replay in a Meaningful Way
o Softmax or ε-Greedy for Replay
o Replay for Games vs Replay for Robotics
o Live Demonstration on the Game of Pac-Man
o Live Demo on a Cheap, Affordable Robot
o Audience Running the Code and Connecting to the Robot
• Q&A, Discussion and Closing Notes
Goals
• To develop a deep understanding of the capabilities and limitations of deep reinforcement learning
• To develop a unified view of the different types of experience replay and how and when to apply them in the context of deep reinforcement learning

Objectives
• Demonstrate how to apply experience replay to policy search methods
• Demonstrate how to combine experience replay and deep learning
• Demonstrate first-hand the effect of replay on multiple platforms, including games and robotics domains

Expected Outcomes
• To gain an in-depth understanding of recent developments in DRL and experience replay
• To gain an in-depth understanding of which update rules to adopt, on-policy or off-policy
• To contrast traditional replay with the more recent re-evaluation approach that has been termed replay

Target audience and session organisation:

The target audience are researchers and practitioners in the growing reinforcement learning community who seek a better understanding of the issues surrounding the combination of experience replay, deep learning, and off-policy learning, in their quest for more practical methods and models.

The tutorial will take 2 hours to complete and will provide code that can easily be run in Octave or MATLAB.

The two-hour tutorial will be delivered in two integrated sections; the first will cover the theory and the second the application. The presenters will alternate between the theoretical part and the application part. Two applications will be covered: one is the game of Pac-Man and the other is a hacked mini robot that will learn to navigate a small, easy-to-assemble 2x1m arena. The robot is a cheap, affordable robot, such as a Lego one, equipped with a Raspberry Pi module and camera. It will use vision and deep learning combined with experience replay to learn to perform a homing task. The audience will be provided with the Octave/MATLAB code so that they can experiment first-hand with the algorithms and see how they are developed from the inside. The code is general enough to be reused for other RL problems. For remote audiences the code will be shared on Git, so they will be able to experiment with the model directly, and a web camera mounted on top of the small robot arena will broadcast how the robot gradually learns to navigate towards its home, using vision to learn an optimal path and infra-red sensors for obstacle-avoidance behaviour. Those attending the tutorial can SSH into the controlling laptop, to which the intensive processing is off-loaded, to try out and change the Octave code that is driving the robot and see its effect. If a VPN can be set up, then remote audiences can be provided with the same experience.

Previous tutorials:

We gave a successful tutorial on RL and Deep Learning at IJCNN 2018 in July 2018 in Rio de Janeiro.

Presenters:

ABDULRAHMAN ALTAHHAN
Senior Lecturer in Computing. Email: a.altahhan@leedsbeckett.ac.uk. Dr Abdulrahman Altahhan has been teaching AI and related topics since 2000; he is currently a Senior Lecturer in Computing at Leeds Beckett University. He served as the Programme Director of the MSc in Data Science at Coventry University, UK. Previously, Dr Altahhan worked in Dubai as an Assistant Professor and Acting Dean. He received a PhD in Reinforcement Learning and Neural Networks in 2008 and holds an MPhil in Fuzzy Expert Systems. He is actively researching in the area of deep reinforcement learning applied to robotics and autonomous agents, with publications on this front. He has prepared, designed, and developed a novel family of reinforcement learning methods and studied their underlying mathematical properties. Recently he established a new set of algorithms and findings, combining deep learning with reinforcement learning in a unique way that is hoped to contribute to the development of this new research area. He has presented at prestigious conferences and venues in the area of machine learning and neural networks. Dr Altahhan is a reviewer for important neural-network-related journals and venues from Springer and the IEEE, including the Neural Computing and Applications journal and the International Conference on Robotics and Automation (ICRA), and he serves on the programme committees of related conferences such as INNS Big Data 2016. He has organised several special sessions on deep reinforcement learning at IJCNN 2016 and IJCNN 2017 as well as at ICONIP 2016 and 2017. Dr Altahhan is an EPSRC reviewer, has taught Machine Learning, Neural Networks, and Big Data Analysis modules on the MSc in Data Science, and is an IEEE Member, a member of the IEEE Computational Intelligence Society, and a member of the International Neural Network Society.

VASILE PALADE

Professor of Pervasive Computing. Email: vpalade453@gmail.com. Prof Vasile Palade joined Coventry University in 2013 as a Reader in Pervasive Computing, after working for many years as a Lecturer in the Department of Computer Science of the University of Oxford, UK. His research interests lie in the wide area of machine learning and encompass mainly neural networks and deep learning, neuro-fuzzy systems, various nature-inspired algorithms such as swarm optimization algorithms, hybrid intelligent systems, ensembles of classifiers, and class imbalance learning. Application areas include image processing, social network data analysis, bioinformatics, fault diagnosis, web usage mining, and health, among others. He is the author or co-author of more than 160 papers in journals and conference proceedings as well as books on computational intelligence and applications (which have attracted 4250 citations, with an h-index of 29, according to Google Scholar). He has also co-edited several books, including conference proceedings. He is an Associate Editor for several reputed journals, such as Knowledge and Information Systems (Springer), Neurocomputing (Elsevier), the International Journal on Artificial Intelligence Tools (World Scientific), and the International Journal of Hybrid Intelligent Systems (IOS Press). He has delivered keynote talks at international conferences on machine learning and applications. Prof Palade is an IEEE Senior Member and a member of the IEEE Computational Intelligence Society.

References that will be covered

[1] L.-J. Lin, “Self-improving reactive agents based on reinforcement learning, planning and teaching,” Machine Learning, vol. 8, no. 3, pp. 293-321, 1992.
[2] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529-533, 2015.
[3] A. Altahhan, “TD(0)-Replay: An Efficient Model-Free Planning with full Replay,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-7.
[4] A. Altahhan, “Deep Feature-Action Processing with Mixture of Updates,” in 2015 International Conference on Neural Information Processing (ICONIP), Istanbul, Turkey, 2015, pp. 1-10.
[5] S. Zhang and R. S. Sutton, “A Deeper Look at Experience Replay,” arXiv:1712.01275, 2017.
[6] H. van Seijen and R. S. Sutton, “A Deeper Look at Planning as Learning from Replay,” in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[7] Y. Pan, M. Zaheer, A. White, A. Patterson, and M. White, “Organizing Experience: A Deeper Look at Replay Mechanisms for Sample-based Planning in Continuous State Domains,” arXiv:1806.04624, 2018.
[8] H. van Hasselt and R. S. Sutton, “Learning to predict independent of span,” arXiv:1508.04582, 2015.
[9] H. van Seijen, A. Rupam Mahmood, P. M. Pilarski, M. C. Machado, and R. S. Sutton, “True Online Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, no. 145, pp. 1-40, 2016.
[10] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. (online draft), MIT Press, Cambridge, 2017.
[11] C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, pp. 279-292, 1992.
[12] J. Modayil, A. White, and R. S. Sutton, “Multi-timescale nexting in a reinforcement learning robot,” Adaptive Behavior, vol. 22, no. 2, pp. 146-160, 2014.
[13] D. Precup, R. S. Sutton, and S. Dasgupta, “Off-Policy Temporal Difference Learning with Function Approximation,” in Proceedings of the Eighteenth International Conference on Machine Learning (ICML), 2001.
[14] R. S. Sutton, A. Rupam Mahmood, and M. White, “An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning,” Journal of Machine Learning Research, vol. 17, pp. 1-29, 2016.
[15] H. Yu, “On Convergence of Emphatic Temporal-Difference Learning,” arXiv:1506.02582, 2015.
[16] A. Hallak, A. Tamar, R. Munos, and S. Mannor, “Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis,” arXiv:1509.05172, 2015.
[17] X. Gu, S. Ghiassian, and R. S. Sutton, “Should All Temporal Difference Learning Use Emphasis?,” arXiv:1903.00194, 2019.
[18] M. P. Deisenroth, G. Neumann, and J. Peters, “A Survey on Policy Search for Robotics,” Foundations and Trends in Robotics, vol. 2, no. 1-2, pp. 1-142, 2013.
[19] R. S. Sutton, C. Szepesvári, A. Geramifard, and M. Bowling, “Dyna-Style Planning with Linear Function Approximation and Prioritized Sweeping,” arXiv:1206.3285, 2012.

Mechanisms of Universal Turing Machines in Developmental Networks for Vision, Audition, and Natural Language Understanding

Juyang Weng
Department of Computer Science and Engineering
Cognitive Science Program
Neuroscience Program

Michigan State University, East Lansing, MI, 48824 USA
Tutorial URL: http://www.cse.msu.edu/ei/WCCI2020-Tutorial-Weng/

Abstract:

Finite automata (i.e., finite-state machines) are taught in almost all electrical engineering programs. However, Turing machines, especially universal Turing machines (UTMs), have not been taught in many electrical engineering programs and have been dropped as a required course in many computer science and engineering programs. This has resulted in a major knowledge weakness in many people working on neural networks and AI, since without knowing UTMs, researchers have considered neural networks merely general-purpose function approximators, not general-purpose computers. This tutorial first briefly explains what a Turing machine is, what a UTM is, why a UTM is a general-purpose computer, and why Turing machines and UTMs are all symbolic and handcrafted. In contrast, a Developmental Network (DN) not only is a new kind of neural network, but also can learn to become a general-purpose computer by learning an emergent Turing machine. It does so by first taking a sequence of instructions as a user-provided program along with the data for the program to run on, and then running the program on the data. Therefore, a universal Turing machine inside a DN emerges autonomously on the fly. The DN learns UTM transitions one at a time, incrementally, without iterations, and refines the UTM transitions from physical experience throughout the network's lifetime.
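For readers unfamiliar with the formalism, the sketch below simulates a (non-universal) Turing machine in Python; the binary-increment machine is a standard textbook example chosen purely for illustration and is not drawn from the tutorial itself:

    # Minimal Turing machine simulator (illustrative only).
    def run_tm(tape, transitions, state="start", blank="_"):
        cells, head = dict(enumerate(tape)), 0
        while state != "halt":
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        lo, hi = min(cells), max(cells)
        return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

    # Increment a binary number: scan to the rightmost digit, then carry left.
    INC = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt", "1", "R"),
        ("carry", "_"): ("halt", "1", "R"),
    }
    print(run_tm("1011", INC))   # prints 1100

A UTM is then a machine of the same kind whose transition table interprets a description of any other machine supplied on its tape; the claim above is that a DN can learn such transitions incrementally rather than having them handcrafted.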

Presenter Biographies:

Juyang Weng: Professor at the Department of Computer Science and Engineering, the Cognitive Science Program, and the Neuroscience Program, Michigan State University, East Lansing, Michigan, USA. He is also a visiting professor at Fudan University, Shanghai, China. He received his BS degree from Fudan University in 1982, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively, all in Computer Science. From August 2006 to May 2007, he was also a visiting professor at the Department of Brain and Cognitive Sciences of MIT. His research interests include computational biology, computational neuroscience, computational developmental psychology, biologically inspired systems, computer vision, audition, touch, behaviors, and intelligent robots. He is the author or coauthor of over 250 research articles. He is an editor-in-chief of the International Journal of Humanoid Robotics and an associate editor of the IEEE Transactions on Autonomous Mental Development. He has chaired and co-chaired several conferences, including the NSF/DARPA-funded Workshop on Development and Learning 2000 (1st ICDL), 2nd ICDL (2002), 7th ICDL (2008), 8th ICDL (2009), and INNS NNN 2008. He was the Chairman of the Governing Board of the International Conferences on Development and Learning (ICDLs) (2005-2007), Chairman of the Autonomous Mental Development Technical Committee of the IEEE Computational Intelligence Society (2004-2005), an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an associate editor of the IEEE Transactions on Image Processing. He was the General Chair of the AIML Contest 2016 and taught BMI 831, BMI 861, and BMI 871, which prepared the contestants for the AIML Contest session at IJCNN 2017 in Alaska. He is a Fellow of the IEEE.

Web: http://www.cse.msu.edu/~weng/

 

Evolution of Neural Networks

Risto Miikkulainen
The University of Texas at Austin and Cognizant Technology Solutions
Email: risto@cs.utexas.edu

 Abstract

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, such as its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
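As a minimal illustration of the first family of methods (evolving the weights of a fixed-topology network), the Python sketch below evolves a tiny 2-2-1 network on XOR with a simple (mu + lambda) strategy; the network size, population size, mutation scale, and generation count are illustrative assumptions and may need adjusting per random seed:

    # Fixed-topology neuroevolution sketch: evolve weights of a 2-2-1 net on XOR.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    Y = np.array([0.0, 1.0, 1.0, 0.0])

    def forward(w, x):                       # w packs all 9 weights and biases
        W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
        h = np.tanh(x @ W1 + b1)
        return np.tanh(h @ W2 + b2)

    def fitness(w):
        return -np.mean((forward(w, X) - Y) ** 2)    # higher is better

    rng = np.random.default_rng(0)
    pop = rng.normal(size=(20, 9))                   # 20 genomes of 9 weights
    for gen in range(300):
        pop = pop[np.argsort([fitness(w) for w in pop])[::-1]]  # rank by fitness
        parents = pop[:5]                                       # mu = 5 survive
        children = parents[rng.integers(0, 5, 15)] \
                   + 0.3 * rng.normal(size=(15, 9))             # lambda = 15 mutants
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    print(np.round(forward(best, X), 2))             # should approach [0, 1, 1, 0]

Topology-evolving methods, such as those reviewed in the tutorial, additionally mutate the network's structure, not just its weights.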

Tutorial Presenters (names with affiliations):

Risto Miikkulainen
The University of Texas at Austin and Cognizant Technology Solutions

Tutorial Presenters’ Bios:

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and an AVP of Evolutionary AI at Cognizant Technology Solutions. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 430 articles in these research areas. He is an IEEE Fellow and a recipient of the 2020 IEEE CIS Evolutionary Computation Pioneer Award, recent awards from INNS and ISAL, and nine Best Paper Awards at GECCO.

External website with more information on Tutorial (if applicable):

https://www.cs.utexas.edu/users/risto/talks/enn-tutorial/

Probabilistic Tools for Analysis of Network Performance

RNDr. Věra Kůrková, DrSc.
Institute of Computer Science of the Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague, Czech Republic
e-mail: vera@cs.cas.cz
http://www.cs.cas.cz/~vera

 Abstract

Due to recent progress in hardware, neural networks with large numbers of parameters can perform classification and regression tasks on large data sets. In particular, randomized models and algorithms have turned out to be quite efficient for performing high-dimensional tasks. Some insights into the computation of such tasks can be obtained using a probabilistic approach, which shows that as data dimension and network size increase, outputs tend to be sharply concentrated around precalculated values. This behavior can be explained by the rather counter-intuitive geometry of high-dimensional spaces, which exhibits the “concentration of measure phenomenon”. This phenomenon implies probabilistic inequalities on the concentration of values of random variables around their mean values, and it opens possibilities for reducing the dimensionality of data by random projections.
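A quick numerical illustration of this concentration effect (the dimensions and sample sizes below are arbitrary choices, not from the tutorial): as the dimension d grows, the norms of random Gaussian vectors concentrate sharply around sqrt(d), and random pairs of vectors become nearly orthogonal:

    # Concentration of measure in high dimension (illustrative demo).
    import numpy as np

    rng = np.random.default_rng(1)
    for d in (10, 100, 10000):
        x = rng.normal(size=(1000, d))
        scaled_norms = np.linalg.norm(x, axis=1) / np.sqrt(d)   # concentrates at 1
        cosines = np.sum(x[::2] * x[1::2], axis=1) / (
            np.linalg.norm(x[::2], axis=1) * np.linalg.norm(x[1::2], axis=1))
        print(d, scaled_norms.std(), np.abs(cosines).mean())    # both shrink with d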

This tutorial will present probabilistic tools for analysing the performance of neural networks on randomly selected tasks and for analysing randomized algorithms. It will review recent results on the choice of optimal networks for tasks described by probability distributions and on the behaviour of networks with large numbers of randomly selected parameters. The tutorial will focus on the following topics:

• Counter-intuitive properties of the geometry of high-dimensional spaces and the “concentration of measure phenomenon”
• Correlation and quasi-orthogonal dimension
• The Lévy Lemma on concentration of values of smooth functions
• The Johnson-Lindenstrauss Lemma on random projections and reduction of dimension (see the sketch after this list)
• The Chernoff-Hoeffding Inequality on sums of large numbers of independent random variables
• The Azuma Inequality on functions of random variables
• Probabilistic inequalities that hold without the “naive Bayes assumption”
• Implications of the probabilistic approach for analysing the suitability of networks for tasks characterized by probability distributions and for the performance of randomized algorithms
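As a small taste of the Johnson-Lindenstrauss Lemma referenced above (the sizes are illustrative assumptions), a random Gaussian projection from d = 10000 down to k = 500 dimensions roughly preserves all pairwise distances among a sample of points:

    # Johnson-Lindenstrauss random projection demo (illustrative sizes).
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    n, d, k = 30, 10000, 500
    X = rng.normal(size=(n, d))
    P = rng.normal(size=(d, k)) / np.sqrt(k)       # scaling keeps norms unbiased
    Z = X @ P
    ratios = [np.linalg.norm(Z[i] - Z[j]) / np.linalg.norm(X[i] - X[j])
              for i, j in combinations(range(n), 2)]
    print(min(ratios), max(ratios))                # both close to 1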

The tutorial is self-contained, and is suitable for researchers who already use neural networks as a tool and wish to understand their mathematical foundations, capabilities and limitations. The tutorial does not require a sophisticated mathematical background.

Tutorial Presenter:

RNDr. Věra Kůrková, DrSc.

Institute of Computer Science of the Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague, Czech Republic

e-mail: vera@cs.cas.cz, http://www.cs.cas.cz/~vera

Tutorial Presenter’s Bio:

Věra Kůrková received her Ph.D. in mathematics from Charles University, Prague, and her DrSc. (Prof.) in theoretical computer science from the Academy of Sciences of the Czech Republic. Since 1990 she has been affiliated with the Institute of Computer Science, Prague, where in 2002-2009 she was the Head of the Department of Theoretical Computer Science. Her research interests are in the mathematical theory of neurocomputing and machine learning. Her work includes characterizations of relationships between networks of various types, estimates of their model complexities, and characterization of their capabilities of generalization and of processing high-dimensional tasks. Since 2008, she has been a member of the Board of the European Neural Network Society (ENNS), and in 2017-2019 she was its president. She is a member of the editorial boards of the journals Neural Networks and Neural Processing Letters, and she was a guest editor of special issues of the journals Neural Networks and Neurocomputing. She was the general chair of the European conferences ICANNGA 2001 and ICANN 2008, and co-chair or honorary chair of ICANN 2017, ICANN 2018, and ICANN 2019.

Venue: With WCCI 2020 being held as a virtual conference, there will be a virtual experience of Glasgow, Scotland, accessible through the virtual platform. We hope that everyone will have a chance to visit one of Europe's most dynamic cultural capitals and the “World's Friendliest City” in the near future!