Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning

Metadata: Rehman, Madiha; Anwer, Humaira; Garay, Helena (helena.garay@uneatlantico.es); Alemany Iturriaga, Josep (josep.alemany@uneatlantico.es); Díez, Isabel De la Torre; Siddiqui, Hafeez ur Rehman and Ullah, Saleem (2024) Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning. Sensors, 24 (21). p. 6965. ISSN 1424-8220

Text: sensors-24-06965-v2.pdf
Available under License Creative Commons Attribution.

Download (2MB) | Preview

Abstract

The perception and recognition of objects around us empower environmental interaction. Harnessing the brain’s signals to achieve this objective has consistently posed difficulties. Researchers are exploring whether the poor accuracy in this field is a result of the design of the temporal stimulation (block versus rapid event) or the inherent complexity of electroencephalogram (EEG) signals. Decoding perceptive signal responses in subjects has become increasingly complex due to high noise levels and the complex nature of brain activities. EEG signals have high temporal resolution and are non-stationary signals, i.e., their mean and variance vary over time. This study aims to develop a deep learning model for decoding subjects’ responses to rapid-event visual stimuli and highlights the major factors that contribute to low accuracy in the EEG visual classification task. The proposed multi-class, multi-channel model integrates feature fusion to handle complex, non-stationary signals. This model is applied to the largest publicly available EEG dataset for visual classification, consisting of 40 object classes with 1000 images in each class. Contemporary state-of-the-art studies in this area investigating a large number of object classes have achieved a maximum accuracy of 17.6%. In contrast, our approach, which integrates Multi-Class, Multi-Channel Feature Fusion (MCCFF), achieves a classification accuracy of 33.17% for 40 classes. These results demonstrate the potential of EEG signals in advancing EEG visual classification and offer promise for future applications in visual machine models.
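The abstract does not detail the MCCFF architecture, so purely as an illustration of the multi-channel feature-fusion idea it describes (per-channel features extracted and then fused into a single vector before classification), here is a minimal sketch on synthetic data; the trial counts, channel count, feature choices, and classifier are all assumptions, not the paper's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for rapid-event EEG: 200 trials, 8 channels, 128 samples.
n_trials, n_channels, n_samples = 200, 8, 128
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 4, size=n_trials)  # 4 classes, for illustration only

def fuse_channel_features(trial):
    """Extract simple statistics per channel, then fuse (concatenate) them."""
    feats = []
    for ch in trial:
        spectrum = np.abs(np.fft.rfft(ch))
        feats.extend([ch.mean(), ch.std(), spectrum[:10].mean()])
    return np.array(feats)

X_fused = np.stack([fuse_channel_features(t) for t in X])
print(X_fused.shape)  # (200, 24): 8 channels x 3 features each, fused

# Any multi-class classifier can consume the fused vectors.
scores = cross_val_score(LogisticRegression(max_iter=1000), X_fused, y, cv=5)
```

On random labels as here, the scores hover near chance; the point is only the fusion pattern, not the accuracy.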

Document Type: Article
Keywords: BCI; EEG; visual classification; rapid-event design; block design
Subject Classification: Materias > Ingeniería
Divisions: Universidad Europea del Atlántico > Investigación > Producción Científica
Universidad Internacional Iberoamericana Puerto Rico > Investigación > Producción Científica
Universidad Internacional do Cuanza > Investigación > Producción Científica
Universidad de La Romana > Investigación > Producción Científica
Deposited: 31 Oct 2024 23:30
Last Modified: 31 Oct 2024 23:30
URI: https://repositorio.uniromana.edu.do/id/eprint/14951


Liquorice alters adipocyte–breast cancer cell crosstalk by modulating oxidative stress and suppressing aromatase and renin–angiotensin signalling

Obesity is recognised as a risk factor for breast cancer, since adipose tissue influences the tumour microenvironment. This study aims to investigate the effect of the secretome of 3T3-L1 adipocytes, untreated or treated with liquorice root extract (LRE) containing flavonoids, phenolic acids, and saponins, on MCF-7 breast cancer cells. Treating adipocytes with LRE reduced the secretion of certain pro-tumorigenic factors such as IGFBP-6, resistin, and VEGF. MCF-7 cells exposed to conditioned medium from LRE-treated adipocytes exhibited an increase in reactive oxygen species levels, downregulation of the Nrf2 antioxidant pathway, and increased autophagy. These conditions reduced cell viability, migration, and colony formation. Additionally, there was downregulation of genes associated with oestrogen signalling and tumour-related processes, including CYP19A1 (aromatase), ERα, Her2, and components of the renin–angiotensin system (RAS). These findings suggest that LRE can modulate the adipocyte secretome to influence breast cancer cell behaviour under obesity-related in vitro conditions.

Producción Científica

Danila Cianciosi, Yasmany Armas Diaz, Bei Yang, Zexiu Qi, Ge Chen, José L. Quiles (jose.quiles@uneatlantico.es), Massimiliano Gasparrini, Manuela Cassotta (manucassotta@gmail.com), Rubén Calderón Iglesias (ruben.calderon@uneatlantico.es), Maurizio Battino (maurizio.battino@uneatlantico.es), Francesca Giampieri (francesca.giampieri@uneatlantico.es)


An Integrated Machine Learning and Genomic Framework for Precise Detection of Gastric Cancer

This study presents a novel integrative approach for the analysis of high-dimensional gene expression data, leveraging the complementary strengths of unsupervised clustering and supervised classification. Using K-means clustering, the dataset is stratified into three distinct clusters, revealing intrinsic biological patterns and relationships. The resulting cluster assignments are subsequently employed as pseudo-labels to train machine learning models, including support vector machines, random forest, and a stacking ensemble classifier. To validate and enhance the robustness of clustering, complementary methodologies such as hierarchical clustering and DBSCAN are employed, with results visualized through PCA-driven dimensionality reduction. The high predictive accuracy achieved by the classifiers underscores the separability and reliability of the identified clusters. Furthermore, feature importance analysis highlighted key genetic determinants within each cluster, offering actionable insights into potential biomarkers and critical genomic features. This framework bridges the gap between exploratory unsupervised learning and predictive supervised modeling, providing a scalable and interpretable methodology for analyzing complex genomic datasets. Its applicability extends to biomarker discovery, patient stratification, and other precision medicine applications, emphasizing its utility in advancing genomic research and clinical practice.
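The pipeline described above (K-means cluster assignments reused as pseudo-labels for supervised models, including a stacking ensemble) can be sketched with scikit-learn; the data here are synthetic blobs standing in for expression profiles, and the specific estimators and parameters are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional expression data with 3 latent groups.
X, _ = make_blobs(n_samples=300, n_features=50, centers=3, random_state=0)

# Step 1: unsupervised stratification into three clusters.
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: treat cluster assignments as pseudo-labels for supervised training.
X_tr, X_te, y_tr, y_te = train_test_split(X, pseudo_labels, random_state=0)
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
accuracy = stack.score(X_te, y_te)
```

High accuracy here mainly confirms that the clusters are separable, which is exactly the validation role the abstract assigns to the classifiers.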

Producción Científica

Eshmal Iman, Sohail Jabbar, Shabana Ramzan, Ali Raza, Farwa Raoof, Stefanía Carvajal-Altamiranda (stefania.carvajal@uneatlantico.es), Vivian Lipari (vivian.lipari@uneatlantico.es), Imran Ashraf


A novel approach for disease and pests detection in potato production system based on deep learning

Vulnerability of potato crops to diseases and pest infestation can affect their quality and lead to significant yield losses. Timely detection of such diseases can help take effective decisions. For this purpose, a deep learning-based object detection framework is designed in this study to identify and classify major potato diseases and pests under real-world field conditions. A total of 2,688 field images were collected from two research farms in Punjab, Pakistan, across multiple growth stages in various seasonal conditions. Excluding 285 symptom-free images from the earliest collection left 2,403 images, which were annotated into four biotic-stress classes: blight disease (n = 630), leaf spot disease (n = 370), leafroll virus (viral symptom complex; n = 888), and Colorado potato beetle (larvae/adults; n = 515), indicating class imbalance. Several state-of-the-art models were used, including YOLOv8 variants (n/s/m), YOLOv7, YOLOv5, and Faster R-CNN, and the results are discussed in relation to recent potato disease classification studies involving cropped leaf images. Stratified splitting (70% training, 20% validation, 10% testing) was applied to preserve class distribution across all subsets. YOLOv8-medium achieved the best performance, with a mean average precision (mAP)@0.5 of 98% on the held-out test images, offering a balance between accuracy and inference time. Model robustness was evaluated using 5-fold cross-validation, which yielded a mean mAP@0.5 of 97.8%, and repeated training with different random seeds, showing a low variance of ±0.4% mAP. Results demonstrate promising outcomes under real-world field conditions, while broader cross-region and cross-season validation is intended for the future.
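The 70/20/10 stratified split described above can be reproduced in two steps with scikit-learn, so that each subset keeps the imbalanced class proportions; the label list below is a hypothetical reconstruction from the per-class counts the abstract reports, not the authors' actual annotation files:

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# Hypothetical per-image labels matching the reported class counts (n = 2,403).
labels = (["blight"] * 630 + ["leaf_spot"] * 370
          + ["leafroll"] * 888 + ["beetle"] * 515)

# First carve off 30%, then split that remainder 2:1 into validation and test,
# stratifying both times so every subset preserves the class distribution.
train, rest = train_test_split(labels, test_size=0.30,
                               stratify=labels, random_state=0)
val, test = train_test_split(rest, test_size=1 / 3,
                             stratify=rest, random_state=0)

print(len(train), len(val), len(test))
print(Counter(test))  # per-class counts stay proportional to the full set
```

The same two-step pattern works unchanged on image-path lists, which is how object-detection datasets are typically partitioned before training.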

Producción Científica

Ahmed Abbas, Saif Ur Rehman, Khalid Mahmood, Santos Gracia Villar (santos.gracia@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es), Aseel Smerat, Imran Ashraf


An attention-based deep learning model for early detection of polyphagous shot hole borer infestations in plants

The Polyphagous Shot Hole Borer (PSHB) is a highly invasive beetle that has been spreading like an epidemic across agricultural and forestry landscapes in recent years. Its rapid and destructive spread has turned it into a major global threat, causing widespread damage that continues to grow with time. Countries like South Africa, the United States, and Australia have implemented extensive measures to control the spread of PSHB, including the establishment of specialized agricultural support centers for early detection. However, there is still a strong need to make PSHB detection more accessible, allowing even non-experts to easily identify infections at an early stage. Artificial Intelligence (AI) has shown great promise in plant disease detection, but a major challenge in the case of PSHB was the lack of a suitable dataset for training AI models. In the proposed work, we first created a dedicated dataset by collecting images of trees infected with PSHB. We applied a range of preprocessing techniques to refine the dataset and prepare it for AI applications. Building on this, we developed a novel AI-based method, where we trained a deep learning model using a multi-convolutional layer network combined with a Fourier transformation layer. Additionally, an attention mechanism and advanced feature extraction techniques were incorporated to further boost model performance. As a result, the proposed approach achieved an impressive top accuracy of 92.3% in detecting PSHB infections, showing the potential of AI to offer a simple, efficient, and highly accurate solution for early disease detection.
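The abstract names two ingredients of the model, a Fourier transformation layer and an attention mechanism, without specifying their wiring. Purely as a toy illustration of how frequency features can feed a self-attention step, here is a NumPy sketch; the image size, token layout, and random weights are all assumptions, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy 32x32 grayscale patch standing in for a tree-bark image.
img = rng.standard_normal((32, 32))

# Fourier transformation layer: the magnitude spectrum as frequency features.
freq = np.abs(np.fft.fft2(img))
tokens = freq.reshape(16, 64)  # 16 tokens of 64 frequency features each

# Single-head self-attention over the frequency tokens
# (random matrices stand in for learned query/key/value projections).
d = tokens.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))  # rows are convex weightings of tokens
out = attn @ V                        # attention-reweighted frequency features
```

In the real model these blocks would sit between convolutional layers and be trained end to end; the sketch only shows the data flow.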

Producción Científica

Rabbiya Younas, Hafiz Muhammad Raza ur Rehman, Gyu Sang Choi, Ángel Gabriel Kuc Castilla (angel.kuc@uneatlantico.es), Carlos Eduardo Uc Ríos (carlos.uc@unini.edu.mx), Imran Ashraf


Correction: Enhancing fault detection in new energy vehicles via novel ensemble approach

In the original version of this Article, Umair Shahid was incorrectly listed as a corresponding author. The correct corresponding authors for this Article are Imran Ashraf and Kashif Munir. Correspondence and request for materials should be addressed to ashrafimran@live.com and kashif.munir@kfueit.edu.pk.

Producción Científica

Iqra Akhtar, Mahnoor Nabeel, Umair Shahid, Kashif Munir, Ali Raza, Irene Delgado Noya (irene.delgado@uneatlantico.es), Santos Gracia Villar (santos.gracia@uneatlantico.es), Imran Ashraf
