Audio-visual analysis of the environment surrounding a robot is important for recognising activities, objects, interactions and intentions. In this talk I will discuss methods that enable a robot to understand a dynamic scene using only its on-board sensors in order to interact with humans. These methods include a multi-modal training strategy that leverages complementary information across observation modalities to improve the test-time performance of a uni-modal system, and the estimation of the physical properties of unknown containers manipulated by humans, which informs the control of a robot grasping the container during a dynamic handover. I will show examples of multi-modal dynamic scene understanding, present the results of an international challenge on physical human-robot interaction, and discuss open research directions.
Andrea Cavallaro is a Turing Fellow at The Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence, and Full Professor at Queen Mary University of London, a Russell Group university. He is the founding Director of the Centre for Intelligent Sensing and the Director of Research of the School of Electronic Engineering and Computer Science. He is a Fellow of the International Association for Pattern Recognition (IAPR) for “contributions to image processing and multi-sensor scene understanding,” and a Fellow of the Higher Education Academy. He serves as Editor-in-Chief of Signal Processing: Image Communication, as Senior Area Editor for the IEEE Transactions on Image Processing, as a member of the IEEE Video Signal Processing and Communication Technical Committee, and as a member of the Technical Directions Board of the IEEE Signal Processing Society. Professor Cavallaro received his Ph.D. in Electrical Engineering from the École polytechnique fédérale de Lausanne (EPFL) in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. He was selected as an IEEE Signal Processing Society Distinguished Lecturer (2020-2021) and is the past Chair of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee (2020-2021). He also served as an elected member of the IEEE Multimedia Signal Processing Technical Committee and as chair of the Awards committee of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee.
He served as Area Editor for the IEEE Signal Processing Magazine (2012-2014), and as Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010), IEEE Signal Processing Magazine (2008-2011) and IEEE Multimedia (2016-2018). He also served as Guest Editor for the IEEE Transactions on Multimedia (2019), IEEE Transactions on Circuits and Systems for Video Technology (2017, 2011), Pattern Recognition Letters (2016), IEEE Transactions on Information Forensics and Security (2013), International Journal of Computer Vision (2011), IEEE Signal Processing Magazine (2010), Computer Vision and Image Understanding (2010), Annals of the British Machine Vision Association (2010), Journal of Image and Video Processing (2010, 2008), and Journal on Signal, Image and Video Processing (2007). He has published over 270 journal and conference papers, one monograph, Video Tracking (2011, Wiley), and three edited books: Multi-Camera Networks (2009, Elsevier); Analysis, Retrieval and Delivery of Multimedia Content (2012, Springer); and Intelligent Multimedia Surveillance (2013, Springer).
When and how to participate
The webinar will be broadcast live on Zoom on January 30, 2023 at 8 am PST / 5 pm CET (approximately 1 hour, followed by a 30-minute Q&A).