Sanghani Center Student Spotlight: Ibrahim Tahmid
March 26, 2025

“We are living at the highest peak of the information age, constantly overwhelmed by information from our phones, tablets, and laptops. It’s often hard to keep track of it all. Extended reality (XR) headsets offer a potential solution by providing virtually unlimited space to organize data. Yet the core challenge remains: effectively processing, filtering, and deriving meaning from this information.
“That’s where I come in,” said Ibrahim Tahmid, a Ph.D. student in computer science, co-advised by Doug Bowman and Chris North.
In his research, Tahmid is developing a human-centered artificial intelligence (AI) agent that leverages users’ natural gaze patterns to infer their interests and deliver targeted, personalized insights through custom recommendations and adaptive annotations.
This proves particularly powerful, he said, when users explore complex networks of interconnected documents to uncover meaningful patterns and construct compelling narratives, much as intelligence analysts do when piecing together a case from gathered evidence.
“With our proposed system, Eye-Enhanced Immersive Space to Think (EyeST), we have shown that analysts can extract relevant information more efficiently while discarding noise. One thing to remember, though,” he said, “is that for people to trust the system and actually want to use it, the AI needs to be transparent about what it’s doing and why, with proper rationale behind its decisions.”
His research can be directly applied to various analytical domains, including intelligence analysis, investigative journalism, financial assessment, and literature reviews.
Working at the intersection of two emerging technologies, XR and AI, Tahmid said he is eager to explore and shape the countless ways these innovations can positively impact human lives and society in general.
“Virginia Tech’s proven track record of research excellence in human-computer interaction, combined with distinguished faculty and the opportunity for interdisciplinary collaboration at the Sanghani Center, is invaluable,” said Tahmid. “I wanted to be part of this vibrant intellectual community. It is an ideal place for my graduate school journey, and learning from colleagues with diverse backgrounds and expertise helps me grow as a researcher every day.”
He has published and presented two first-author papers at the IEEE International Symposium on Mixed and Augmented Reality: “Evaluating the Feasibility of Predicting Information Relevance During Sensemaking with Eye Gaze Data” in 2023 and “Evaluating the Benefits of Explicit and Semi-Automated Clusters for Immersive Sensemaking” in 2022.
His paper “Enhancing Immersive Sensemaking with Gaze-Driven Recommendation Cues” was accepted at the 30th Annual ACM Conference on Intelligent User Interfaces (IUI), being held this week in Cagliari, Italy. He will present it via a pre-recorded video.
Expected to graduate in May 2025, Tahmid is seeking an industry position where he can apply his research skills in augmented reality, virtual reality, human-computer interaction, and human-centered AI.