Audiovisual computing: From virtual reality to tangible reality
10:30—11:30, Jul 4, 2018 (Wed)
Over the last few decades, success in the field of visual computing has revolutionized our digital visual experience: from special effects in Hollywood movies and face recognition on smartphones to the stunning promise offered by VR/AR goggles. Yet in this grand picture, one piece remains missing. Our real world has never been silent. Not only is it colorful to our eyes; its sound is also rich and vivid to our ears. In current paradigms, visual computing is often performed in isolation from its audio counterpart. In this talk, I will propose audiovisual computing, a research area that renders, analyzes, and processes audiovisual information. I will first introduce our recent work in this area on physics-based audiovisual models derived from first principles, and then illustrate audiovisual processing using our work on 360 videos. In the second part of the talk, I will discuss the implications of these models for improving the physical world: namely, how to harness computational audiovisual models to enable tangible forms and objects that offer unprecedented new functionalities. I will close by briefly discussing extensions of our methods beyond the audiovisual modalities, in such fields as food engineering, nanophotonic devices, and wireless communications.
Co-director of Columbia's Computer Graphics Group, Changxi Zheng is currently an Associate Professor in the Department of Computer Science at Columbia University, working on audiovisual processing, computer graphics, acoustic and optical engineering, and scientific computing. He received his Ph.D. from Cornell University, where his dissertation received the Best Dissertation Award, and his B.S. from Shanghai Jiaotong University. He currently serves as an associate editor of ACM Transactions on Graphics. He was a Conference Chair for SCA in 2017, has won several Best Paper awards and an NSF CAREER Award, and was named one of Forbes' "30 under 30" in science and healthcare in 2013.