I have a strong research interest in the intersection of Artificial Intelligence (AI), Human-Computer Interaction, and Accessibility. My research centers on developing human-AI interactive systems to improve video accessibility for blind and low-vision (BLV) individuals. Audio Descriptions (AD) provide verbal narration of on-screen actions, objects, and settings, allowing BLV individuals to enjoy videos without seeing the visuals. Leveraging advances in AI, specifically in computer vision and natural language processing, I aim to enhance AD for both traditional and immersive formats, such as 360° videos. My work also seeks to streamline the AD creation process and accommodate individual viewers' preferences when they consume AD. Together, these efforts broaden accessibility and interactivity in digital media.
- Science Mentor: Anhong Guo, Information; Electrical Engineering and Computer Science, College of Engineering
- Research Theme: Methodology / Healthcare