Neurocomputational Science

Brain Imaging Modalities

Brain imaging modalities encompass medical technologies and techniques designed to visualize the structure, function, and biochemical activity of the brain, playing a critical role in diagnosing neurological conditions, researching cognitive function, and monitoring brain disorders. Structural imaging techniques such as CT and MRI provide detailed anatomical insight, aiding the detection of tumors, strokes, and other abnormalities. Functional imaging methods, including fMRI, PET, and SPECT, allow real-time mapping of brain activity and metabolic processes, while electrophysiological techniques such as EEG and MEG capture neural electrical and magnetic activity with high temporal precision. Advanced methods such as DTI and optical imaging further probe white matter integrity and cerebral blood flow. These tools are instrumental in diagnosing and managing conditions like epilepsy, Alzheimer’s disease, and traumatic brain injury, and in advancing research into neural pathways and cognitive processes, making them indispensable to modern neuroscience and medical practice. Our research focuses on functional neuroimaging modalities.

Brain Signatures of Healthy Controls and Subjects with MDD

Distinct neural signatures, such as altered alpha, beta, or theta wave activity, are observable in EEG data from patients with Major Depressive Disorder (MDD), including reduced left frontal lobe activity, increased theta activity at rest, and changes in the P300 ERP component that reflect impaired cognitive processing. Given depression’s status as the leading mental health disorder globally, affecting millions and contributing to significant disability, the development of EEG-based diagnostic tools is both urgent and impactful. Feature extraction for EEG data combines local handcrafted features, such as power spectral density and wavelet decomposition, with global features learned by machine learning and deep learning models to enhance sensitivity and specificity. Real-time applications of EEG biomarkers enable continuous patient monitoring, instant feedback on brain activity, and timely interventions for depression. However, challenges such as noise in raw EEG recordings, the “black-box” nature of deep learning models, and overlapping neural signatures among disorders highlight the need for advanced preprocessing, model explainability, and targeted feature selection before EEG-based diagnostics for MDD can become reliable and clinically effective.
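Handcrafted spectral features like the theta and alpha power mentioned above are commonly estimated from the power spectral density. The sketch below shows one way to do this with Welch's method on a synthetic signal; the sampling rate, band limits, and test signal are illustrative assumptions, not values from any specific study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Approximate power in a frequency band from a Welch PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    # Rectangular integration over the selected frequency bins
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

# Synthetic 4-second epoch at 256 Hz: strong 6 Hz (theta) component
# plus a weaker 10 Hz (alpha) component
fs = 256
t = np.arange(0, 4, 1 / fs)
sig = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

theta = band_power(sig, fs, (4, 8))   # dominates, by construction
alpha = band_power(sig, fs, (8, 13))
```

In a diagnostic pipeline, such band powers would be computed per channel and per epoch, then fed to a classifier alongside other features such as wavelet coefficients.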

Brain Signatures of Healthy Controls vs. Subjects with Alcohol Use Disorder (AUD)

Research into brain activity patterns in individuals with Alcohol Use Disorder (AUD) has identified distinct neural signatures: reduced P300 amplitude indicating impaired attention and cognitive processing, disrupted frontal alpha activity linked to poor executive functioning and impulse control, increased theta power in the prefrontal cortex reflecting cognitive and emotional deficits, and altered functional connectivity between brain regions involved in reward processing and inhibition. These findings underscore the potential for EEG/ERP-based biomarkers to provide objective diagnostic tools, addressing the limitations of self-reported assessments, which are often hindered by memory loss, dishonesty, or stigma. The urgency of developing such biomarkers stems from AUD’s significant health risks, including liver disease, neurological damage, and safety concerns, compounded by patients’ inability to control alcohol consumption despite severe consequences. However, challenges in data acquisition, feature selection, and demographic variability require robust solutions, including noise minimization, the differentiation of AUD-specific neural patterns from confounding conditions, and inclusive datasets that ensure biomarker applicability across diverse populations. By overcoming these hurdles, EEG/ERP-based approaches can transform AUD diagnosis and monitoring, offering deeper insights into addiction’s neurobiological underpinnings and enabling more effective prevention and treatment strategies.
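The P300 amplitude referred to above is an ERP measure: stimulus-locked epochs are averaged so that non-phase-locked noise cancels, and the peak of the averaged waveform is read out in the typical P300 latency window (roughly 250–500 ms). A minimal sketch on simulated epochs follows; the epoch geometry, noise level, and Gaussian "P300-like" bump are assumptions for illustration only.

```python
import numpy as np

def p300_amplitude(epochs, fs, window=(0.25, 0.5)):
    """Peak amplitude of the averaged ERP within the P300 latency window.

    epochs: (n_trials, n_samples) array, time-locked to stimulus onset.
    """
    erp = epochs.mean(axis=0)                    # averaging suppresses noise
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return erp[start:stop].max()

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 256, 40, 256           # forty 1-second epochs
t = np.arange(n_samples) / fs

# Hypothetical P300-like component: Gaussian bump peaking at 300 ms, 5 units
component = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))
epochs = component + rng.normal(0.0, 2.0, (n_trials, n_samples))

amp = p300_amplitude(epochs, fs)                 # close to 5 despite noise
```

A reduced-amplitude group (as reported for AUD) would show a systematically smaller `amp` under the same protocol, which is what makes the measure usable as a candidate biomarker.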

Emotion Recognition Using EEG: Understanding Human Emotions Through Brain Activity

EEG-based emotion recognition leverages neural signals to identify and classify emotional states, providing a transformative tool for individuals, particularly those with physical disabilities, who struggle to express emotions through traditional means such as speech or facial expressions. By analysing brain activity patterns, including neural oscillations like alpha, beta, and gamma waves, emotion recognition systems enable the direct interpretation of cognitive and emotional states. Using datasets like SEED, advanced methodologies involving feature extraction and machine learning models (e.g., support vector machines and deep learning networks) achieve high classification accuracy, demonstrating the potential to enhance human-computer interaction and create inclusive technologies. Despite challenges such as noise in EEG signals, individual variability, and real-time processing demands, robust preprocessing techniques and ethical considerations are addressing these hurdles. This progress underscores the potential of EEG-based systems to empower individuals with disabilities, fostering meaningful communication and paving the way for empathetic, responsive computing.
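The feature-extraction-plus-SVM pipeline described above can be sketched end to end with scikit-learn. Here the per-trial band-power features and the two emotion classes are synthetic stand-ins (real features would come from a dataset such as SEED); the class separation, feature count, and kernel choice are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical feature matrix: per-trial alpha/beta/gamma band powers
# (two frequency bands x three channels = 6 features) for two emotion classes.
n_per_class, n_features = 100, 6
class0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
class1 = rng.normal(1.0, 1.0, (n_per_class, n_features))   # shifted band powers
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Standardize features, then classify with an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()    # well above chance (0.5)
```

Cross-validation is the key design choice here: because EEG features vary strongly across individuals, reported accuracies are only meaningful when evaluated on trials (or, better, subjects) held out from training.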

Deep Learning-Based Assessment for Identifying Visual Learning Style Using Raw EEG Data

This study introduces a deep learning-based framework for identifying visual learning styles using raw EEG data collected during memory recall tests, offering an objective alternative to traditional subjective assessments. By leveraging neuroimaging evidence and advanced machine learning techniques, the framework analyses brain activity patterns, such as enhanced alpha wave activity in occipital regions and increased theta activity during memory encoding, to classify visual learners accurately. The integration of convolutional and recurrent neural networks enables the extraction of spatial and temporal EEG features, achieving high classification accuracy and recall rates, and validating the relevance of previously established EEG features for long-term memory recall. Despite challenges such as EEG signal variability, noise, and computational demands, the optimized model supports real-time processing, demonstrating its practical utility for personalized education. Future directions include expanding datasets, incorporating multi-modal approaches, and developing user-friendly systems, paving the way for inclusive, tailored learning experiences that enhance educational outcomes for diverse student populations.
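The division of labor described above, convolutional layers extracting spatial features across channels and recurrent layers modeling temporal dynamics, can be illustrated with a bare numpy forward pass. This is a shape-level sketch, not the study's architecture: the channel count, filter sizes, hidden dimension, and random weights are all illustrative assumptions, and no training is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, stride=4):
    """Temporal convolution mixing all channels: (C, T) -> (F, T_out)."""
    n_filters, n_ch, k = kernels.shape
    out_len = (x.shape[1] - k) // stride + 1
    out = np.empty((n_filters, out_len))
    for f in range(n_filters):
        for i in range(out_len):
            seg = x[:, i * stride : i * stride + k]       # (C, k) window
            out[f, i] = max(np.sum(seg * kernels[f]), 0.0)  # ReLU
    return out

def rnn_last_state(seq, Wx, Wh):
    """Simple tanh RNN over the feature sequence; returns final hidden state."""
    h = np.zeros(Wh.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(Wx @ seq[:, t] + Wh @ h)
    return h

# Hypothetical raw EEG epoch: 32 channels x 512 samples (2 s at 256 Hz)
x = rng.normal(size=(32, 512))
kernels = rng.normal(scale=0.1, size=(8, 32, 16))  # 8 filters, width 16
Wx = rng.normal(scale=0.1, size=(16, 8))           # hidden size 16
Wh = rng.normal(scale=0.1, size=(16, 16))

features = conv1d_relu(x, kernels)         # spatial features over time: (8, 125)
embedding = rnn_last_state(features, Wx, Wh)  # fixed-length summary: (16,)
```

In a trained classifier, `embedding` would feed a final dense layer predicting the learning-style label; the point of the sketch is that the CNN stage collapses the channel dimension while the RNN stage collapses time.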


Relevant Publications

Jawed, Soyiba, Hafeez Ullah Amin, Aamir Saeed Malik, and Ibrahima Faye. “Classification of visual and non-visual learners using electroencephalographic alpha and gamma activities.” Frontiers in Behavioral Neuroscience 13 (2019): 86.