Professor Jong-Moon Chung, IEEE Fellow
Yonsei University, Korea

Jong-Moon Chung (IEEE Fellow) received the B.S. and M.S. degrees in electronic engineering from Yonsei University (Seoul, South Korea) and the Ph.D. degree in electrical engineering from the Pennsylvania State University (University Park, PA, USA). Since 2005, he has been a Professor with the School of Electrical and Electronic Engineering at Yonsei University, where he is also the Associate Dean of the College of Engineering and a Professor with the Department of Emergency Medicine, College of Medicine. From 1997 to 1999, he was an Assistant Professor and Instructor with the Department of Electrical Engineering, Pennsylvania State University. From 2000 to 2005, he was a tenured Associate Professor with the School of Electrical and Computer Engineering, Oklahoma State University. He has published the books "Emerging Secure Networks, Blockchains and Smart Contract Technologies" (Springer ©2024, https://doi.org/10.1007/978-3-031-65866-2) and "Emerging Metaverse XR and Video Multimedia Technologies" (Apress ©2023, https://doi.org/10.1007/978-1-4842-8928-0_2), both based on invitations from Springer Nature. He is currently a Vice President of the IEEE Product Safety Engineering Society (PSES), Director of the Asia & Pacific Region of the IEEE Consumer Technology Society (CTSoc), Senior Editor of the IEEE Transactions on Consumer Electronics, Section Editor of the Wiley ETRI Journal, and Chair Editor-in-Chief of the KSII Transactions on Internet and Information Systems (TIIS). He serves as an IEEE Distinguished Lecturer for CTSoc and PSES. He is an IEEE Fellow and a member of the National Academy of Engineering of Korea (NAEK) and the IEEE Eta Kappa Nu (HKN) honor society.

Speech Title: XR Disaster Management Systems with 5G & 6G NTN Edge Computing Supported Data Mining Technologies

Abstract: In this keynote speech, extended reality (XR) based emergency and disaster management systems that use mixed reality (MR) computer vision technology are introduced. Two South Korean government XR emergency and disaster management (and training) systems are presented: the Ministry of Interior & Safety's 'Augmented Reality (AR) Emergency Response Training System' and the National Fire Agency's 'XR Flagship Project System,' both of which use various computer vision technologies supported by digital twin (DT) modeling as well as artificial intelligence (AI) and generative AI (GenAI) computing. The 5G mobile network is used to support real-time edge computing-based data mining, providing disaster evolution predictions and recommending recovery strategies. The difficulties and challenges in building these systems will be discussed, and the limitations of 5G mobile networks will be explained. The expected new features of future 6G mobile networks, which include non-terrestrial networks (NTNs), will be described, along with how these new capabilities can enhance future real-time computer vision and data mining systems.
Professor Liming Chen
Ecole Centrale de Lyon, France

Prof. Liming Chen was awarded a joint BSc degree in Mathematics and Computer Science from the University of Nantes in 1984. He obtained a Master's degree in 1986 and a PhD in Computer Science from the University of Paris 6 in 1989. He first served as an associate professor at the Université de Technologie de Compiègne, then joined Ecole Centrale de Lyon as a Professor in 1998, where he leads an advanced research team on multimedia computing and pattern recognition. From 2001 to 2003, he also served as Chief Scientific Officer at Avivias, a Paris-based company specializing in media asset management. In 2005, he served as a scientific multimedia expert at France Telecom R&D China. He has been Head of the Department of Mathematics and Computer Science since 2007. Prof. Liming Chen holds 3 patents, has authored more than 100 publications, and has acted as chair, PC member, and reviewer for a number of high-profile journals and conferences since 1995. He has been a (co-)principal investigator on a number of research grants from the EU Framework Programme, French research funding bodies, and local government departments. He has directed more than 15 PhD theses. His current research spans 2D/3D face analysis and recognition, image and video analysis and categorization, and affect analysis in image, audio, and video.

Speech Title: TBD

Abstract: TBD
Professor Maozhen Li
Brunel University of London, UK

Maozhen Li received the Ph.D. degree from the Institute of Software, Chinese Academy of Sciences, Beijing, China, in 1997. He carried out his post-doctoral research in the Department of Computer Science at Cardiff University, UK, from 1999 to 2002. He is a Professor with the Department of Electronic and Electrical Engineering, Brunel University of London, UK. His main research interests include high-performance computing, big data analytics, and intelligent systems with applications to smart grids, smart manufacturing, and smart cities. He has about 240 research publications in these areas, including four books. His book entitled “The Grid: Core Technologies” was introduced by Tsinghua University Press as a classic textbook on Grid computing. He is a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET). He has served on the committees of over 30 IEEE conferences and serves on the editorial boards of a number of journals. His research work on Big Data was shortlisted by Computing in May 2018 for the Big Data Excellence Awards in the category of Most Innovative Big Data Solution. His recent research on a computation-efficient AI model outperforms two pioneering works in this field, GhostNet and MobileNet; this work has been published in IEEE Transactions on Neural Networks and Learning Systems.

Speech Title: Interpretation and Computation Efficiency in Deep Neural Networks

Abstract: The past two decades have witnessed a tremendous success of AI applications in many areas, mainly due to the rapid development of sophisticated deep neural networks (DNNs). However, DNNs normally work in a black-box style, making it challenging to deploy them in life-critical situations such as autonomous driving, where safety has to be guaranteed. This talk starts with a brief review of interpretation methods for DNNs, on the basis of which it presents SA-CAM, which generates self-attention activation maps for visual interpretation of CNNs. Further down the line, the talk focuses on computation-efficient lightweight AI models that can potentially be deployed on resource-constrained mobile devices. Specifically, it presents CEModule, a computation-efficient module for lightweight CNNs derived through model interpretation. Towards the end, the talk touches upon computation-intensive heavyweight AI models such as ChatGPT and discusses their challenges.