[Logistics Forum] Augmented Reality and Wearable Computing for Image Guided Neurosurgery Systems
Topic: Augmented Reality and Wearable Computing for Image Guided Neurosurgery Systems
Speaker: Ying (Gina) Tang
Ying (Gina) Tang received the B.S. and M.S. degrees from Northeastern University, P. R. China, in 1996 and 1998, respectively, and the Ph.D. degree from the New Jersey Institute of Technology in 2001. She is a professor of Electrical and Computer Engineering at Rowan University, Glassboro, New Jersey. Her current research interests lie in the areas of discrete event systems and visualization, including virtual reality/augmented reality, modeling and adaptive control for computer-integrated systems, green manufacturing and automation, Petri nets, and intelligent serious games. Dr. Tang has led or participated in several research and education projects funded by the National Science Foundation, the US Department of Transportation, the US Navy, the Charles A. and Anne Morrow Lindbergh Foundation, the Christian R. and Mary F. Lindback Foundation, and industry firms. Her work has resulted in over 116 peer-reviewed publications, including 29 journal articles, 4 book/encyclopedia chapters, and 87 conference articles; the majority of her journal articles have appeared in IEEE Transactions on Systems, Man, and Cybernetics, IEEE Transactions on Robotics and Automation, IEEE Transactions on Automation Science and Engineering, IEEE Transactions on Semiconductor Manufacturing, and the International Journal of Production Research, among others.
She served as Associate Editor of IEEE Transactions on Automation Science and Engineering from 2009 to 2014, and currently serves as Associate Editor of the International Journal of Intelligent Control and Systems, Editorial Board Member of the International Journal of Remanufacturing, and Guest Editor of the Special Issue on Petri Nets Theory and Applications in Mathematical Problems in Engineering. She is the Founding Chair of the Technical Committee on Sustainable Production Automation of the IEEE Robotics and Automation Society, a member of the Technical Committee on Automation in Logistics of the IEEE Robotics and Automation Society, and a founding member of the Technical Committee on Discrete Event Systems of the IEEE Systems, Man, and Cybernetics Society. She has chaired several technical sessions and served on program committees for many conferences.
Image guided surgery (IGS) is a technology that guides surgical operations through a real-time visual correlation of preoperative imaging data (e.g., MRI/PET/CAT scans) with intraoperative anatomy. Neurosurgery is a complex and risky procedure performed on vital structures with irregular configurations, where slight damage to eloquent brain structures can severely impair the patient. The development of IGS systems that integrate multi-modal data and produce an optimal surgical plan for minimally invasive neurosurgery is therefore of significant importance. Most IGS systems on the market, however, although very useful, present several practical and technical limitations. For instance, the use of a computer monitor to display the multi-modal information requires the surgeon to look away from the surgical field to confirm intraoperative progress. The set-up of such a system also presents a risk of failure, as the movement of the surgeon or staff around the surgical field can block the cameras' field of view. While recent developments using augmented reality (AR) sound promising, the devices remain too cumbersome for surgeons to use. In this talk, we present the recent development of a ubiquitous monocular IGS system. With the assistance of wearable computing akin to Google Glass and AR technology, surgeons can visualize, on their own glasses, MRI/PET/CAT scans correctly overlaid on the patient at any point of a surgical procedure. Making the surgeon the system interface ensures that the surgeon's attention is no longer diverted from the field. Integrating the display and camera into the loupe that surgeons already use further makes the proposed IGS system more practical and appealing to surgeons.
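Overlaying a preoperative scan on the patient, as described above, requires registering the image volume to the patient's anatomy. The talk does not specify the registration method used; as a generic illustration only, the sketch below estimates the rigid transform between paired fiducial landmarks (hypothetical coordinates) with the standard Kabsch/SVD method, a common point-based approach in image-guided systems.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t such that dst ~= R @ src + t,
    from paired fiducial points (Kabsch/SVD method). src, dst: (N, 3) arrays
    of corresponding landmarks, e.g. image coordinates vs. patient coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical fiducials: scan-space positions and their tracked patient-space
# counterparts (here generated from a known 30-degree rotation and offset).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
scan_pts = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
patient_pts = scan_pts @ R_true.T + t_true

R, t = rigid_register(scan_pts, patient_pts)
```

Once `R` and `t` are known, any point of the scan can be mapped into patient space (`R @ p + t`) and projected into the wearable display for the overlay.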