Therapeutic Exercise Recognition Using a Single UWB Radar with AI-Driven Feature Fusion and ML Techniques in a Real Environment

Article — Subjects > Physical education and sport
Subjects > Engineering
Universidad de La Romana > Research > Scientific Production — Open access, English

Hussain, Shahzad; Siddiqui, Hafeez Ur Rehman; Saleem, Adil Ali; Raza, Muhammad Amjad; Alemany Iturriaga, Josep (josep.alemany@uneatlantico.es); Velarde-Sotres, Álvaro (alvaro.velarde@uneatlantico.es) and Díez, Isabel De la Torre (2024) Therapeutic Exercise Recognition Using a Single UWB Radar with AI-Driven Feature Fusion and ML Techniques in a Real Environment. Sensors, 24 (17). p. 5533. ISSN 1424-8220

Text: sensors-24-05533.pdf — Available under License Creative Commons Attribution. Download (12MB)

Abstract

Physiotherapy plays a crucial role in the rehabilitation of organs damaged or impaired by injury or illness, often requiring long-term supervision by a physiotherapist in clinical settings or at home. AI-based support systems have been developed to enhance the precision and effectiveness of physiotherapy, particularly during the COVID-19 pandemic. These systems, which include game-based or tele-rehabilitation monitoring using camera-based optical systems such as Vicon and Microsoft Kinect, face challenges such as privacy concerns, occlusion, and sensitivity to environmental light. Non-optical sensor alternatives, such as Inertial Measurement Units (IMUs), Wi-Fi, ultrasound sensors, and ultra-wideband (UWB) radar, have emerged to address these issues. Although IMUs are portable and cost-effective, they suffer from drawbacks such as drift over time, limited range, and susceptibility to magnetic interference. In this study, a single UWB radar was used to recognize five therapeutic exercises for the upper limb, performed by 34 male volunteers in a real environment. A novel feature fusion approach was developed to extract distinguishing features for these exercises. Various machine learning methods were applied, with the EnsembleRRGraBoost ensemble method achieving the highest recognition accuracy, 99.45%. The performance of the EnsembleRRGraBoost model was further validated using five-fold cross-validation, maintaining its high accuracy.
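
The abstract names the winning model but does not spell out the constituents of EnsembleRRGraBoost, so the sketch below is a generic voting ensemble scored with five-fold cross-validation, not the authors' implementation: the base learners (Random Forest, Ridge, Gradient Boosting) and the synthetic stand-in for the fused radar features are illustrative assumptions.

```python
# Minimal sketch: a hard-voting ensemble over three assumed base learners,
# evaluated with 5-fold cross-validation as described in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the fused UWB radar features; 5 exercise classes.
X, y = make_classification(n_samples=500, n_features=40, n_informative=20,
                           n_classes=5, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("ridge", RidgeClassifier()),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="hard",  # hard voting: RidgeClassifier exposes no predict_proba
)

scores = cross_val_score(ensemble, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```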

Document type: Article
Keywords: physiotherapy; ultrawide band (UWB) radar; therapeutic exercise; machine learning; opto-electronic sensors; ensemble method
Subject classification: Subjects > Physical education and sport
Subjects > Engineering
Divisions: Universidad de La Romana > Research > Scientific Production
Deposited: 16 Sep 2024 23:30
Last modified: 16 Sep 2024 23:30
URI: https://repositorio.uniromana.edu.do/id/eprint/14207


Single-cell omics for nutrition research: an emerging opportunity for human-centric investigations

Understanding how dietary compounds affect human health is challenged by their molecular complexity and cell-type–specific effects. Conventional multi-cell type (bulk) analyses obscure cellular heterogeneity, while animal and standard in vitro models often fail to replicate human physiology. Single-cell omics technologies—such as single-cell RNA sequencing, as well as single-cell–resolved proteomic and metabolomic approaches—enable high-resolution investigation of nutrient–cell interactions and reveal mechanisms at a single-cell resolution. When combined with advanced human-derived in vitro systems like organoids and organ-on-chip platforms, they support mechanistic studies in physiologically relevant contexts. This review outlines emerging applications of single-cell omics in nutrition research, emphasizing their potential to uncover cell-specific dietary responses, identify nutrient-sensitive pathways, and capture interindividual variability. It also discusses key challenges—including technical limitations, model selection, and institutional biases—and identifies strategic directions to facilitate broader adoption in the field. Collectively, single-cell omics offer a transformative framework to advance human-centric nutrition research.

Scientific Production

Manuela Cassotta (manucassotta@gmail.com), Yasmany Armas Diaz, Danila Cianciosi, Bei Yang, Zexiu Qi, Ge Chen, Santos Gracia Villar (santos.gracia@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es), Giuseppe Grosso, José L. Quiles, Jianbo Xiao, Maurizio Battino (maurizio.battino@uneatlantico.es), Francesca Giampieri (francesca.giampieri@uneatlantico.es)


<a class="ep_document_link" href="/17880/1/nutrients-17-03613.pdf"><img class="ep_doc_icon" alt="[img]" src="/17880/1.hassmallThumbnailVersion/nutrients-17-03613.pdf" border="0"/></a>

en

open

Image-Based Dietary Energy and Macronutrients Estimation with ChatGPT-5: Cross-Source Evaluation Across Escalating Context Scenarios

Background/Objectives: Estimating energy and macronutrients from food images is clinically relevant yet challenging, and rigorous evaluation requires transparent accuracy metrics with uncertainty and clear acknowledgement of reference data limitations across heterogeneous sources. This study assessed ChatGPT-5, a general-purpose vision-language model, across four scenarios differing in the amount and type of contextual information provided, using a composite dataset to quantify accuracy for calories and macronutrients. Methods: A total of 195 dishes were evaluated, sourced from Allrecipes.com, the SNAPMe dataset, and Home-prepared, weighed meals. Each dish was evaluated under Case 1 (image only), Case 2 (image plus standardized non-visual descriptors), Case 3 (image plus ingredient lists with amounts), and Case 4 (replicating Case 3 but excluding the image). The primary endpoint was kcal Mean Absolute Error (MAE); secondary endpoints included Median Absolute Error (MedAE) and Root Mean Square Error (RMSE) for kcal and macronutrients (protein, carbohydrates, and lipids), all reported with 95% Confidence Intervals (CIs) via dish-level bootstrap resampling and accompanied by absolute differences (Δ) between scenarios. Inference settings were standardized to support reproducibility and variance estimation. Source-stratified analyses and quartile summaries were conducted to examine heterogeneity by curation level and nutrient ranges, with additional robustness checks for error-complexity relationships. Results and Discussion: Accuracy improved from Case 1 to Case 2 and further in Case 3 for energy and all macronutrients when summarized by MAE, MedAE, and RMSE with 95% CIs, with absolute reductions (Δ) indicating material gains as contextual information increased. In contrast to Case 3, estimation accuracy declined in Case 4, underscoring the contribution of visual cues. Gains were largest in the Home-prepared, dietitian-weighed subset and smaller yet consistent for Allrecipes.com and SNAPMe, reflecting differences in reference curation and measurement fidelity across sources. Scenario-level trends were concordant across sources, and stratified and quartile analyses showed coherent patterns of decreasing absolute errors with the provision of structured non-visual information and detailed ingredient data. Conclusions: ChatGPT-5 can deliver practically useful calorie and macronutrient estimates from food images, particularly when augmented with standardized non-visual descriptors and detailed ingredients, as evidenced by reductions in MAE, MedAE, and RMSE with 95% CIs across scenarios. The decline in accuracy observed when the image was omitted, despite providing detailed ingredient information, indicates that visual cues contribute meaningfully to estimation performance and that improvements are not solely attributable to arithmetic from ingredient lists. Finally, to promote generalizability, it is recommended that future studies include repeated evaluations across diverse datasets, ensure public availability of prompts and outputs, and incorporate systematic comparisons with non-artificial-intelligence baselines.
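
As a minimal sketch of the evaluation protocol the abstract describes, the snippet below computes MAE, MedAE, and RMSE with dish-level bootstrap 95% CIs. The kcal arrays are hypothetical stand-ins for the per-dish reference and model-estimated values; the resampling details are assumptions based on the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
kcal_true = rng.uniform(150, 900, size=195)          # reference kcal per dish
kcal_pred = kcal_true + rng.normal(0, 80, size=195)  # model estimates (stand-in)

def metrics(err):
    """MAE, MedAE, and RMSE for a vector of per-dish errors."""
    return {"MAE": np.mean(np.abs(err)),
            "MedAE": np.median(np.abs(err)),
            "RMSE": np.sqrt(np.mean(err ** 2))}

err = kcal_pred - kcal_true
point = metrics(err)

# Dish-level bootstrap: resample dishes with replacement, recompute each metric.
boot = [metrics(err[rng.integers(0, len(err), len(err))]) for _ in range(2000)]
for name, value in point.items():
    lo, hi = np.percentile([b[name] for b in boot], [2.5, 97.5])
    print(f"{name}: {value:.1f} kcal (95% CI {lo:.1f}-{hi:.1f})")
```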

Scientific Production

Marcela Rodríguez-Jiménez, Gustavo Daniel Martín-del-Campo-Becerra, Sandra Sumalla Cano (sandra.sumalla@uneatlantico.es), Jorge Crespo-Álvarez (jorge.crespo@uneatlantico.es), Iñaki Elío Pascual (inaki.elio@uneatlantico.es)


<a class="ep_document_link" href="/17862/1/sensors-25-06419.pdf"><img class="ep_doc_icon" alt="[img]" src="/17862/1.hassmallThumbnailVersion/sensors-25-06419.pdf" border="0"/></a>

en

open

Edge-Based Autonomous Fire and Smoke Detection Using MobileNetV2

Forest fires pose significant threats to ecosystems, human life, and the global climate, necessitating rapid and reliable detection systems. Traditional fire detection approaches, including sensor networks, satellite monitoring, and centralized image analysis, often suffer from delayed response, high false positives, and limited deployment in remote areas. Recent deep learning-based methods offer high classification accuracy but are typically computationally intensive and unsuitable for low-power, real-time edge devices. This study presents an autonomous, edge-based forest fire and smoke detection system using a lightweight MobileNetV2 convolutional neural network. The model is trained on a balanced dataset of fire, smoke, and non-fire images and optimized for deployment on resource-constrained edge devices. The system performs near real-time inference, achieving a test accuracy of 97.98% with an average end-to-end prediction latency of 0.77 s per frame (approximately 1.3 FPS) on the Raspberry Pi 5 edge device. Predictions include the class label, confidence score, and timestamp, all generated locally without reliance on cloud connectivity, thereby enhancing security and robustness against potential cyber threats. Experimental results demonstrate that the proposed solution maintains high predictive performance comparable to state-of-the-art methods while providing efficient, offline operation suitable for real-world environmental monitoring and early wildfire mitigation. This approach enables cost-effective, scalable deployment in remote forest regions, combining accuracy, speed, and autonomous edge processing for timely fire and smoke detection.
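
The sketch below shows a transfer-learning setup of the kind the abstract describes: a frozen MobileNetV2 backbone with a lightweight three-class head (fire, smoke, non-fire). The 224x224 input size, head layout, and training settings are assumptions, not the authors' exact configuration.

```python
import datetime

import numpy as np
import tensorflow as tf

# Frozen ImageNet backbone plus a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),  # fire, smoke, non-fire
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Edge-style inference: class label, confidence score, and timestamp,
# all produced locally. The random frame stands in for a camera capture.
frame = np.random.rand(1, 224, 224, 3).astype("float32")
probs = model.predict(frame, verbose=0)[0]
label = ["fire", "smoke", "non-fire"][int(np.argmax(probs))]
print(datetime.datetime.now().isoformat(), label, f"{probs.max():.2f}")
```

For deployment on a device such as the Raspberry Pi 5, a trained model of this kind would typically be converted to TensorFlow Lite; that step is omitted from the sketch.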

Scientific Production

Dilshod Sharobiddinov, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Gerardo Méndez Mezquita, Debora L. Ramírez-Vargas (debora.ramirez@unini.edu.mx), Isabel de la Torre Díez


<a class="ep_document_link" href="/17863/1/v16p4316.pdf"><img class="ep_doc_icon" alt="[img]" src="/17863/1.hassmallThumbnailVersion/v16p4316.pdf" border="0"/></a>

en

open

Divulging Patterns: An Analytical Review for Machine Learning Methodologies for Breast Cancer Detection

Breast cancer is a lethal carcinoma impacting a considerable number of women across the globe. While preventive measures are limited, early detection remains the most effective strategy. Accurate classification of breast tumors into benign and malignant categories is important, as it can help physicians diagnose the disease faster. This survey investigates emerging trends and approaches in machine learning (ML) for the diagnosis of breast cancer, highlighting classification techniques based on both segmentation and feature selection. Datasets such as the Wisconsin Diagnostic Breast Cancer Dataset (WDBC), Wisconsin Breast Cancer Dataset Original (WBCD), Wisconsin Prognostic Breast Cancer Dataset (WPBC), BreakHis, and others are evaluated to demonstrate their influence on the performance of diagnostic tools and on the accuracy of models such as Support Vector Machines, Convolutional Neural Networks (CNNs), and ensemble approaches. Key shortcomings and research gaps, such as dataset bias, limited generalizability, and interpretability challenges, are highlighted. This research emphasizes the importance of hybrid methodologies, cross-dataset validation, and explainable AI to narrow these gaps and enhance the clinical acceptance of ML-based detection tools.
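
As a concrete instance of one baseline family the review covers, the sketch below trains an SVM on the WDBC dataset, which scikit-learn bundles as load_breast_cancer. It is a generic illustrative baseline with assumed hyperparameters, not a pipeline from any surveyed paper.

```python
# SVM baseline on WDBC (benign vs. malignant), with feature scaling and
# 5-fold cross-validation, echoing the review's call for cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # 569 tumors, 30 features each
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```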

Scientific Production

Alveena Saleem, Muhammad Umair, Muhammad Tahir Naseem, Muhammad Zubair, Silvia Aparicio Obregón (silvia.aparicio@uneatlantico.es), Rubén Calderón Iglesias (ruben.calderon@uneatlantico.es), Shoaib Hassan, Imran Ashraf


<a class="ep_document_link" href="/17849/1/1-s2.0-S2590005625001043-main.pdf"><img class="ep_doc_icon" alt="[img]" src="/17849/1.hassmallThumbnailVersion/1-s2.0-S2590005625001043-main.pdf" border="0"/></a>

en

open

Ultra Wideband radar-based gait analysis for gender classification using artificial intelligence

Gender classification plays a vital role in various applications, particularly in security and healthcare. While several biometric methods such as facial recognition, voice analysis, activity monitoring, and gait recognition are commonly used, their accuracy and reliability often suffer due to challenges like body part occlusion, high computational costs, and recognition errors. This study investigates gender classification using gait data captured by Ultra-Wideband radar, offering a non-intrusive and occlusion-resilient alternative to traditional biometric methods. A dataset comprising 163 participants was collected, and the radar signals underwent preprocessing, including clutter suppression and peak detection, to isolate meaningful gait cycles. Spectral features extracted from these cycles were transformed using a novel integration of Feedforward Artificial Neural Networks and Random Forests, enhancing discriminative power. Among the models evaluated, the Random Forest classifier demonstrated superior performance, achieving 94.68% accuracy and a cross-validation score of 0.93. The study highlights the effectiveness of Ultra-Wideband radar and the proposed transformation framework in advancing robust gender classification.
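
The sketch below illustrates only the final stage of the pipeline the abstract describes: spectral features from segmented gait cycles fed to a Random Forest and scored with cross-validation. The synthetic signals stand in for preprocessed UWB returns, and the FFNN-based feature transformation is omitted, so the shapes and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cycles, n_samples = 326, 256            # hypothetical: 2 cycles x 163 subjects
signals = rng.normal(size=(n_cycles, n_samples))  # stand-in gait-cycle signals
labels = rng.integers(0, 2, n_cycles)             # 0 = female, 1 = male

# Spectral features: magnitude spectrum of each gait cycle (first 64 bins).
features = np.abs(np.fft.rfft(signals, axis=1))[:, :64]

rf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(rf, features, labels, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f}")
```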

Scientific Production

Adil Ali Saleem, Hafeez Ur Rehman Siddiqui, Muhammad Amjad Raza, Sandra Dudley, Julio César Martínez Espinosa (ulio.martinez@unini.edu.mx), Luis Alonso Dzul López (luis.dzul@uneatlantico.es), Isabel de la Torre Díez
