A group of researchers from Qatar University (QU) has developed an innovative assistive system called ‘Smart Hat’ aimed at empowering individuals with visual impairments.

Designed as a lightweight, wearable device, ‘Smart Hat’ integrates seamlessly into the user’s daily life and enhances the ability of visually impaired people to navigate their surroundings and perform tasks independently.

The team comprised Prof Sumaya al-Maadeed, professor of computer engineering at the College of Engineering and an accomplished researcher in computer vision and artificial intelligence (AI); PhD student Jayakanth Kunhoth; Dr Mohammed Zied Chaari; and MSc student Nandhini Subhramanian.

According to an article in the latest edition of the QU Research Magazine, Prof al-Maadeed and her team noted the potential of AI to address the challenges faced by visually impaired people by providing real-time solutions that combine advanced computing techniques with user-friendly interfaces. Their goal was to design a system that could adapt to various environments, respond intelligently to user needs, and provide continuous support.

The assistive system integrates multiple advanced technologies, including AI, computer vision, and sensor-based systems, to create a robust and user-centric solution. One of the key features of the system is object detection and recognition. By using computer vision algorithms, the system identifies and labels objects in the user’s environment in real time.
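To illustrate the kind of post-processing such a detection stage involves, here is a minimal sketch in Python. The function names, confidence threshold, and example detections are our own assumptions for illustration; the article does not describe the team's actual model or pipeline.

```python
# Hypothetical sketch of a detection post-processing step: a typical
# object detector emits (label, confidence, bounding_box) tuples, and
# low-confidence hits are filtered out before being announced to the user.

def filter_detections(detections, min_confidence=0.5):
    """Keep only detections the model is reasonably sure about."""
    return [d for d in detections if d[1] >= min_confidence]

# Example frame: two confident detections and one likely false positive.
frame_detections = [
    ("door", 0.91, (120, 40, 300, 460)),
    ("chair", 0.77, (350, 220, 520, 470)),
    ("cat", 0.18, (10, 10, 60, 60)),   # below threshold, discarded
]

confident = filter_detections(frame_detections)
labels = [label for label, _, _ in confident]
print(labels)  # ['door', 'chair']
```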

The system provides audio feedback to guide users by describing objects, directions, or hazards. This allows visually impaired individuals to better understand their surroundings without relying on sight. By leveraging deep learning models, the system maps out the user’s environment and provides step-by-step navigation instructions. This feature is particularly useful for avoiding obstacles and navigating crowded spaces. The AI in the system learns user preferences over time, adapting its responses to provide more personalised assistance.
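One way a detection can be turned into the kind of directional audio cue described above is to map its position in the camera frame to a spoken phrase. The following sketch assumes a 640-pixel-wide frame and simple left/centre/right wording; the real device's phrasing is not specified in the article.

```python
# Illustrative only: converts a detection's horizontal position in the
# camera frame into a short spoken direction cue.

FRAME_WIDTH = 640  # assumed camera resolution, not from the article

def direction_phrase(label, box, frame_width=FRAME_WIDTH):
    """Turn a (left, top, right, bottom) box into a short audio cue."""
    centre_x = (box[0] + box[2]) / 2
    if centre_x < frame_width / 3:
        side = "on your left"
    elif centre_x > 2 * frame_width / 3:
        side = "on your right"
    else:
        side = "straight ahead"
    return f"{label} {side}"

cue = direction_phrase("door", (40, 80, 160, 400))
print(cue)  # door on your left
```

In a full system, the resulting string would be passed to a text-to-speech engine rather than printed.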

AI plays a central role in the assistive system, powering both the recognition and decision-making processes. Deep learning models were employed and trained on diverse datasets to ensure accurate and reliable performance in various scenarios. The system’s AI components were trained on large datasets of images, objects, and environmental scenarios to develop robust object detection and recognition capabilities. This training ensures that the system can operate effectively in diverse lighting conditions and settings.
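A standard way to achieve the robustness to diverse lighting conditions mentioned above is to augment training images with random brightness shifts. The sketch below is a generic illustration of that technique, not the team's published training pipeline.

```python
# Generic brightness-jitter augmentation on grey-level pixel values,
# a common trick for making vision models tolerant of lighting changes.
import random

def jitter_brightness(pixels, factor):
    """Scale pixel values by `factor`, clamping to the [0, 255] range."""
    return [min(255, max(0, round(p * factor))) for p in pixels]

def augment(pixels, rng=random.Random(0)):
    """Produce a randomly brightened or darkened copy of an image row."""
    return jitter_brightness(pixels, rng.uniform(0.6, 1.4))

row = [0, 64, 128, 255]
print(jitter_brightness(row, 1.5))  # [0, 96, 192, 255]
```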

The integration of NLP technologies in both Arabic and English significantly enhances the system’s communication abilities. It enables the system to describe objects and provide instructions in a natural, conversational tone, making it more user-friendly and less daunting for individuals with visual impairments.
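Bilingual output of this kind can be sketched with per-language templates. The article confirms the system speaks Arabic and English, but the exact phrasing below is our own assumption for illustration.

```python
# Hypothetical per-language templates for short spoken descriptions.
TEMPLATES = {
    "en": "There is a {obj} {where}.",
    "ar": "يوجد {obj} {where}.",
}

DIRECTIONS = {
    "en": {"left": "on your left", "ahead": "straight ahead"},
    "ar": {"left": "على يسارك", "ahead": "أمامك مباشرة"},
}

def describe(obj, where, lang="en"):
    """Render a short spoken description in the requested language."""
    return TEMPLATES[lang].format(obj=obj, where=DIRECTIONS[lang][where])

print(describe("door", "left"))         # There is a door on your left.
print(describe("باب", "ahead", "ar"))   # Arabic equivalent
```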

AI models in the system are designed to learn continuously from user interactions and feedback. This iterative improvement ensures that the device remains relevant and effective as user needs evolve.

The assistive system has the potential to transform the lives of individuals with visual impairments by giving them greater independence, reducing their reliance on caregivers, and empowering them to engage more fully in social, professional, and recreational activities.

Building on the success of this project, Prof al-Maadeed and her team plan to expand their research to include additional functionalities and applications. They also plan to partner with international organisations and researchers to share insights and further develop assistive technologies.

By combining cutting-edge technology with a user-focused approach, the team has created a solution that not only improves the lives of individuals with disabilities but also sets a new standard for innovation in assistive technology.