The development and use of Deep Neural Networks (DNNs) have been key research topics in recent years. This usually involves training a CNN, CNN-RNN, GAN, or Capsule type of network for classification, regression, or prediction purposes, in a large variety of application domains. In this presentation, based on our recent research results, we will focus on extracting latent information from trained DNNs, on using this information to explain DNN decision making, on adapting the generated knowledge to other domains, and on providing cues about decision uncertainty.
In our approach, we first leverage the feature extraction power inherent in DNNs, through a combination of transfer learning, k-means clustering, and k-Nearest Neighbour classification of DNN-learned representations; we use this to adapt the DNN's data-driven knowledge to datasets from different domains, or distributions. Novel loss functions can also be incorporated into this procedure.
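As a rough illustration only (not the authors' actual pipeline), the idea of clustering DNN-learned representations and classifying new-domain samples with a k-NN over the labelled cluster centres can be sketched as follows; here random arrays stand in for features that would, in practice, come from the penultimate layer of a pretrained network:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins for DNN-learned representations: in practice these would be
# penultimate-layer activations of a trained CNN (transfer learning).
source_feats = rng.normal(0, 1, (200, 64))
source_labels = (source_feats[:, 0] > 0).astype(int)   # toy binary labels
target_feats = rng.normal(0.5, 1, (50, 64))            # shifted target domain

# Cluster the source representations into compact prototypes.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(source_feats)

# Label each cluster centre by majority vote of its members.
centre_labels = np.array([
    np.bincount(source_labels[km.labels_ == c]).argmax() for c in range(10)
])

# k-NN over the labelled centres adapts the knowledge to the target domain.
knn = KNeighborsClassifier(n_neighbors=3).fit(km.cluster_centers_, centre_labels)
target_pred = knn.predict(target_feats)
print(target_pred.shape)  # one predicted label per target-domain sample
```

The cluster centres act as a compressed, labelled summary of the source-domain knowledge, so only a cheap nearest-neighbour lookup is needed on the new distribution.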
Nevertheless, training DNNs requires large amounts of labelled data. This limitation can be reduced by adopting a recent Bayesian approach, which can lead to self-annotating systems. We describe such an approach, which uses approximate variational inference for deep models to estimate uncertainty within supervised self-training.
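A minimal sketch of the uncertainty-driven self-annotation idea, assuming Monte Carlo dropout as the approximate variational inference scheme (a common choice, though the authors' exact method may differ): multiple stochastic forward passes yield a predictive distribution whose entropy flags which unlabelled samples are safe to pseudo-label.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_dropout_predict(x, w, T=50, p_drop=0.5):
    """Approximate variational inference via MC dropout: keep dropout
    active at test time and average T stochastic forward passes."""
    probs = []
    for _ in range(T):
        mask = rng.random(w.shape) > p_drop        # Bernoulli dropout on weights
        logits = x @ (w * mask) / (1 - p_drop)
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs.append(e / e.sum(axis=1, keepdims=True))
    return np.mean(probs, axis=0)                  # predictive distribution

x_unlab = rng.normal(0, 1, (100, 16))   # unlabelled pool (toy features)
w = rng.normal(0, 1, (16, 3))           # stand-in for trained weights
p = mc_dropout_predict(x_unlab, w)
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)     # predictive uncertainty

# Self-annotate only the low-uncertainty examples (supervised self-training).
confident = entropy < np.quantile(entropy, 0.2)
pseudo_labels = p[confident].argmax(axis=1)
print(confident.sum(), pseudo_labels.shape)
```

Only the most confident fifth of the pool is pseudo-labelled here; in a full system these samples would be added to the training set and the loop repeated.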
Examples from real-life problems are provided to illustrate the performance of the presented approaches.
Stefanos Kollias has been a Professor in the Computer Science Division of the School of Electrical and Computer Engineering, National Technical University of Athens, since 1997, and Director of the Intelligent Systems, Content & Interaction Laboratory (recently renamed the Artificial Intelligence & Learning Systems Lab). Moreover, since 2016, he has been Professor of Machine Learning in the School of Computer Science of the University of Lincoln, UK. His research focuses on machine and deep learning and explainable AI, with applications in signal and data analysis across a variety of areas, including healthcare, agri-food, nuclear power reactor monitoring, and anomaly prediction in industrial, operational, or social environments.
He is an IEEE Fellow (since 2015, nominated by the IEEE Computational Intelligence Society). He was a member of the Executive Committee of the European Neural Network Society from 2007 to 2016. He has published 110 papers in international journals and 305 papers in proceedings of international conferences. He was Co-Editor of the book ‘Multimedia and the Semantic Web’ (Wiley, 2005). His research work has attracted 9,700 citations, with an h-index of 45 (Google Scholar). He has supervised 42 Ph.D. students.
Knowledge representation and machine learning are two complementary aspects that can guide the acquisition of information in intelligent systems. The success of deep learning technologies in domains like computer vision, natural language processing or game playing has shown the immense potential of machine learning in changing our societies and everyday lives. On the other hand, the high-level reasoning capabilities of humans and their ability to quickly adapt to novel problems are still out of reach of current artificial intelligence systems. In this talk I will argue that a combination of knowledge and learning is crucial to achieve truly general and human-like intelligence. I will provide an introduction to the fields of statistical relational learning and neuro-symbolic integration, and present an overview of the most recent results of my research in these fields.
Andrea Passerini is an Associate Professor at the Department of Information Engineering and Computer Science (DISI) of the University of Trento, and an Adjunct Professor at Aalborg University. He is director of the Structured Machine Learning Group and coordinator of the Research Program on Deep and Structured Machine Learning, both at DISI. His research interests include structured machine learning, statistical relational learning, learning and optimization, preference elicitation, and learning with constraints. He has co-authored over 100 refereed papers, including 40 journal articles, and regularly publishes at top AI conferences and in journals such as AAAI, IJCAI, AIJ, and MLJ.
and CERTH, Greece
While traditional computer vision approaches can address, to an extent, the needs of service robots operating in relatively constrained environments, more advanced, AI-enhanced computer vision is required for service robots that operate in real environments with a high degree of autonomy. Computer vision empowered with deep models, optimization-based generative, discriminative and hybrid tracking, as well as coupled semantic constraints and hierarchical knowledge representations, can lead to key advances in robot perception, cognition, navigation, and human-robot interaction capabilities.
Along this line, this talk first focuses on fused, optimization-based metric and semantic home environment mapping, as well as on object recognition and 6DoF object pose estimation based on deep models, which advance accuracy so as to allow robust object grasping by a mobile manipulator robot in real environments. Turning to human tracking, a hybrid human body pose tracker has been developed, based on the coupling of an adaptive model-based generative approach with a discriminative one and semantic constraints. In parallel, methods have been developed for human activity recognition and, on top of this, socially-aware mobile robot navigation. Moreover, human emotion recognition through computer vision and multimodal fusion is a further research topic addressed herein. To further enhance robot learning by demonstration through AI, we have focused on joint hand and object detection and tracking in 3D, based on RGBD data, coupled with automatic key-frame extraction based on derivative graphs, which drives the necessary subsequent robot learning functions. Notably, our research also extends robot learning by demonstration to collaborative human-robot working scenarios. Last but not least, our AI-enhanced computer vision efforts have also been directed to the joint mapping of the surface and subsurface space, by coupling the stereo vision of a field mobile robot (rover) with the processing of data derived from a Ground Penetrating Radar (GPR) on board the rover.
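To give a flavour of the derivative-based key-frame idea (a purely illustrative sketch; the signal and the selection rule here are assumptions, not the authors' pipeline): given a tracked 3D hand or object trajectory, key-frames can be placed where the motion speed reaches a local extremum, i.e. where its derivative changes sign.

```python
import numpy as np

# Toy 3D trajectory standing in for RGBD-tracked hand/object positions.
t = np.linspace(0, 2 * np.pi, 400)
trajectory = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)], axis=1)

step = np.diff(trajectory, axis=0)       # first derivative of the path
speed = np.linalg.norm(step, axis=1)     # scalar motion speed per frame
accel = np.diff(speed)                   # derivative of the speed

# Key-frames sit where the speed derivative changes sign, i.e. at local
# extrema of the motion - typical pause/turn points in a demonstration.
keyframes = np.where(np.sign(accel[:-1]) != np.sign(accel[1:]))[0] + 1
print(len(keyframes))
```

Such extrema segment a demonstration into phases (reach, grasp, move, release), which is what the subsequent robot learning functions consume.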
Applications of the above will be presented in diverse fields, including personal domestic service robots, professional service robots applied in agile manufacturing, and field service robots.
Dr. Dimitrios Tzovaras is a Senior Researcher Grade A’ (Professor) and Director at CERTH/ITI (the Information Technologies Institute of the Centre for Research and Technology Hellas). He received the Diploma in Electrical Engineering and the Ph.D. in 2D and 3D Image Compression from the Aristotle University of Thessaloniki, Greece, in 1992 and 1997, respectively. Prior to his current position, he was a Senior Researcher in the Information Processing Laboratory of the Electrical and Computer Engineering Department of the Aristotle University of Thessaloniki. His main research interests include computer vision, visual analytics, virtual and augmented reality, machine learning, and artificial intelligence. He is author or co-author of over 140 articles in refereed journals and over 400 papers in international conferences. He is a Senior Associate Editor of the IEEE Transactions on Image Processing journal. Over the same period, Dr. Tzovaras has acted as an ad hoc reviewer for a large number of international journals and magazines, including IEEE, ACM, Elsevier, and EURASIP publications, as well as for international scientific conferences. Since 1992, Dr. Tzovaras has been involved in more than 200 European projects, funded by the EC and the Greek Ministry of Research and Technology. Within these research projects, he has acted as the Scientific Responsible of the research group of CERTH/ITI, but also as the Coordinator and/or the Technical/Scientific Manager of many of them (coordinator or technical manager in 30 projects).
In the existing machine learning literature, the labels of the training examples are usually used only in the calculation of the loss. Most sophisticated operations are actually conducted on the instances, such as feature extraction, feature selection, manifold embedding, dimensionality reduction, etc. Researchers obviously devote more effort to the feature space than to the label space, which is not strange, since labels are traditionally represented by logical values, i.e., 1 if the label is relevant to the instance and 0 otherwise. However, if we can somehow transform the logical label vectors into real-valued label vectors, then we can expect much more profound analysis in the label space.
Label distribution learning (LDL) is a recently proposed machine learning paradigm in which each instance is labeled by a real-valued label vector called a label distribution. Each element in the label distribution indicates the description degree of the corresponding label for the instance. Considering that most existing data sets are annotated with logical labels, we need a way to transform logical labels into label distributions, which is called label enhancement. Label enhancement can unleash the power of the label space: many analytic operations meant for the feature space become applicable to the label space!
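As a toy illustration of label enhancement (one simple similarity-propagation heuristic among many published algorithms, not necessarily the speaker's own method): logical 0/1 labels are smoothed through feature-space similarity between instances, and each row is normalised into a label distribution whose entries are description degrees summing to one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 6 instances with 4 features, annotated with logical 0/1 labels
# over 3 candidate labels.
X = rng.normal(0, 1, (6, 4))
L = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=float)

# Gaussian similarity between instances in the feature space.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
S = np.exp(-d ** 2)

# Propagate logical labels through the similarity graph, then normalise
# each row so it becomes a label distribution (description degrees).
D = S @ L
D /= D.sum(axis=1, keepdims=True)
print(D.shape)  # (6, 3): one label distribution per instance
```

After enhancement, each instance carries graded degrees for all labels rather than a hard 0/1 assignment, so distance, correlation, and embedding analyses designed for the feature space can be applied to the label space as well.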
Xin Geng is currently a professor and the dean of the School of Computer Science and Engineering at Southeast University, China. He received the B.Sc. (2001) and M.Sc. (2004) degrees in computer science from Nanjing University, China, and the Ph.D. (2008) degree in computer science from Deakin University, Australia. His research interests include machine learning, pattern recognition, and computer vision. He has published over 70 refereed papers in these areas, including in prestigious journals and at top international conferences. He has been an Associate Editor of IEEE T-MM, FCS, and MFC, a Steering Committee Member of PRICAI, a Program Committee Chair for conferences such as PRICAI’18 and VALSE’13, an Area Chair for conferences such as ACMMM'18, PRCV'19, and CCPR'16, and a Senior Program Committee Member for conferences such as IJCAI and AAAI.