Dr. Yeasin is an Associate Professor in the Department of Electrical and Computer Engineering, an adjunct faculty member of the Biomedical Engineering and Bioinformatics Program, and an affiliated member of the Institute for Intelligent Systems (IIS) at The University of Memphis (U of M). He is a Senior Member of the IEEE. He received his B.Sc. degree in Electrical and Electronic Engineering from Khulna University of Engineering and Technology (formerly the Bangladesh Institute of Technology, Khulna), Bangladesh, in 1989; his M.Sc. in Computer Science and Engineering from Bangladesh University of Engineering and Technology (BUET), Bangladesh, in 1994; and his Ph.D. in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay, India, in 1998.
Dr. Yeasin has made significant contributions to the research and development of real-time computer vision solutions for academic research and commercial applications. He has been involved in several technological innovations, including face detection; classification of gender, age group, ethnicity, and emotion; recognition of human activities in video; and sophisticated speech- and gesture-enabled human-computer interfaces. Some of his research on facial image analysis and hand gesture recognition has been used in developing several commercial products by VideoMining Inc. He introduced the idea of the co-analysis of signal and sense, which uses the prosodic relationship between verbal and non-verbal modalities, along with a sophisticated method of mining the multimodal feature space for the analysis of multimodal co-articulation. These co-analyses of multimodal articulation help provide a deeper understanding of (a) how the nucleus of an utterance and visual prosody interact to render the intent of the utterance, and (b) how synchronization with other modalities affects the production of multimodal co-articulation. These discoveries will facilitate the design and development of a perceptual interface for Meta-Tutor. They will also enable the development of collaborative environments for agents and humans, as well as assistive technologies for the elderly and people with disabilities.
At the U of M, Dr. Yeasin leads the Computer Vision, Perception and Image Analysis (CVPIA) laboratory. The main thrust of research in the CVPIA lab lies in the general areas of computer vision, data mining, bioinformatics/computational biology, pattern recognition, and human-computer interaction (HCI). The common underlying theme is to address research issues that would allow the integration of large, heterogeneous data and the robust analysis and modeling of all common types of signals (text, speech, images, video, time series, gene expressions, etc.). Major motivations for his work include (but are not limited to):
(i) Understanding non-verbal communication through the co-analysis of signal and sense and the interplay between complementary modalities. The main goal is to develop novel algorithms for the recognition of affective states, emotions, gestures, and behavior-based biometrics, and for a perceptual Meta-Tutor.
(ii) Blind Ambition! Integrating hardware and software to develop assistive technology solutions that enhance the quality of life of people who are blind or visually impaired. A prototype system called R-MAP was designed to provide a number of services, such as reading aloud, finding a sense of direction in open spaces, and reading barcodes to enhance the shopping experience.
(iii) Developing efficient and scalable algorithms for distributed data and graph mining, and applying them to knowledge discovery from heterogeneous data. Also of interest is developing service-oriented architectures to provide Web services in emerging areas such as epigenetics and genome-wide studies.
(iv) Sensor networks for monitoring large areas, such as the trails used by drug smugglers.
(v) Image analysis and computer vision solutions for biomedical applications, and the analysis of human motion to build articulated models of human body parts.
(vi) Music therapy! A hobby topic for playing with the scale-space analysis of signals.