1.
Sensors (Basel) ; 24(2)2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38257551

ABSTRACT

Assessing pain in non-verbal patients is challenging, often depending on clinical judgment, which can be unreliable due to fluctuations in vital signs caused by underlying medical conditions. To date, there is a notable absence of objective diagnostic tests to aid healthcare practitioners in pain assessment, a gap that especially affects critically ill patients and those with advanced dementia. Neurophysiological measurements, e.g., functional near-infrared spectroscopy (fNIRS) or electroencephalography (EEG), reveal the brain's active regions and patterns, exposing the neural mechanisms behind the experience and processing of pain. This study focuses on assessing pain via the analysis of fNIRS signals combined with machine learning, utilising multiple fNIRS measures including oxygenated haemoglobin (ΔHBO2) and deoxygenated haemoglobin (ΔHHB). Initially, a channel selection process filters out channels heavily contaminated with high-frequency, high-amplitude artifacts from the 24-channel fNIRS data. The remaining channels are then preprocessed by applying a low-pass filter and common average referencing to remove cardio-respiratory artifacts and common gain noise, respectively. Subsequently, the preprocessed channels are averaged to create a single time series for each of the ΔHBO2 and ΔHHB measures. From each measure, ten statistical features are extracted, and fusion occurs at the feature level, resulting in a fused feature vector. The most relevant features, selected using the Minimum Redundancy Maximum Relevance method, are passed to a Support Vector Machine classifier. Using leave-one-subject-out cross-validation, the system achieved an accuracy of 68.51% ± 9.02% in a multi-class task (No Pain, Low Pain, and High Pain) using the fusion of ΔHBO2 and ΔHHB. The two measures together demonstrated superior performance compared to either used independently. This study contributes to the pursuit of objective pain assessment and proposes a potential fNIRS-based biomarker for human pain.
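
As a rough illustration of the pipeline this abstract describes (low-pass filtering, common average referencing, channel averaging, statistical feature fusion, and an SVM under leave-one-subject-out cross-validation), here is a minimal Python sketch. The sampling rate, filter cutoff, feature choices, and variable names are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of the described fNIRS pipeline (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def preprocess(channels, fs=10.0, cutoff=0.5):
    """channels: (n_channels, n_samples) array of one measure (ΔHBO2 or ΔHHB)."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    filtered = filtfilt(b, a, channels, axis=1)            # remove cardio-respiratory artifacts
    car = filtered - filtered.mean(axis=0, keepdims=True)  # common average referencing
    return car.mean(axis=0)                                # single averaged time series

def stat_features(x):
    # Ten simple statistical features per measure (an illustrative choice).
    return np.array([x.mean(), x.std(), x.min(), x.max(), np.median(x),
                     np.ptp(x), np.percentile(x, 25), np.percentile(x, 75),
                     ((x - x.mean()) ** 3).mean(), ((x - x.mean()) ** 4).mean()])

# X: fused ΔHBO2+ΔHHB feature vectors; y: pain-level labels; groups: subject IDs.
# scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups, cv=LeaveOneGroupOut())
```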


Subject(s)
Pain Measurement , Pain , Humans , Oxyhemoglobins , Pain/diagnosis , Pain Measurement/methods , Spectroscopy, Near-Infrared
2.
Article in English | MEDLINE | ID: mdl-38083346

ABSTRACT

Pain is a highly unpleasant sensory experience for which no objective diagnostic test currently exists. Identification and localisation of pain in subjects who are unable to communicate are key steps in enhancing therapeutic outcomes. Numerous studies have been conducted to categorise pain, but no reliable conclusion has been achieved. This is the first study that aims to show a direct relation between Electrodermal Activity (EDA) signal features and the presence of pain, and to clarify the relation of the classified signals to the location of the pain. For that purpose, EDA signals were recorded from 28 healthy subjects while electrical pain was induced at two anatomical locations (hand and forearm) of each subject. The EDA data were preprocessed with a Discrete Wavelet Transform to remove irrelevant information. Chi-square feature selection was used to select features extracted from three domains: time, frequency, and cepstrum. The final feature vector was fed to a pool of classification schemes, of which an Artificial Neural Network classifier performed best. The proposed method, evaluated through leave-one-subject-out cross-validation, provided 90% accuracy in pain detection (no pain vs. pain), whereas the pain localisation experiment (hand pain vs. forearm pain) achieved 66.67% accuracy. Clinical relevance: This is the first study to provide an analysis of EDA signals for finding the source of pain. This research explores the viability of using EDA for pain localisation, which may be helpful in the treatment of non-communicative patients.
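
A minimal sketch of the kind of pipeline described here: DWT-based denoising, chi-square feature selection, and a neural-network classifier under leave-one-subject-out cross-validation. The wavelet, decomposition level, network size, and feature scaling are assumptions, not the authors' settings.

```python
# Illustrative sketch (not the authors' code) of the described EDA pipeline.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def dwt_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Zero the finest detail coefficients to drop irrelevant high-frequency content.
    coeffs[-1] = np.zeros_like(coeffs[-1])
    return pywt.waverec(coeffs, wavelet)

# X_scaled: time/frequency/cepstral features, min-max scaled to [0, 1]
# (chi2 requires non-negative inputs); y: pain labels; groups: subject IDs.
# selector = SelectKBest(chi2, k=20).fit(X_scaled, y)
# scores = cross_val_score(MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
#                          selector.transform(X_scaled), y,
#                          groups=groups, cv=LeaveOneGroupOut())
```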


Subject(s)
Acute Pain , Humans , Neural Networks, Computer , Wavelet Analysis , Hand , Upper Extremity
3.
BMC Med Inform Decis Mak ; 23(1): 274, 2023 11 29.
Article in English | MEDLINE | ID: mdl-38031040

ABSTRACT

BACKGROUND: Point-of-care lung ultrasound (LUS) allows real-time patient scanning to help diagnose pleural effusion (PE) and plan further investigation and treatment. LUS typically requires training and experience for the clinician to interpret the images accurately. To address this limitation, we previously demonstrated a deep-learning model capable of detecting the presence of PE on LUS with an accuracy greater than 90% when compared with an experienced LUS operator. METHODS: This follow-up study aimed to develop a deep-learning model that provides segmentations of PE in LUS. Three thousand and forty-one LUS images from twenty-four patients diagnosed with PE were selected for this study. Two LUS experts provided the ground truth for training by reviewing and segmenting the images. The algorithm was then trained using ten-fold cross-validation. Once training was completed, the algorithm segmented images from a separate subset of patients. RESULTS: Comparing the segmentations, we demonstrated an average Dice Similarity Coefficient (DSC) of 0.70 between the algorithm and the experts. In contrast, an average DSC of 0.61 was observed between the experts themselves. CONCLUSION: In summary, the trained algorithm achieved an average DSC at PE segmentation comparable to the agreement between experts. This represents a promising step toward developing a computational tool for accurately augmenting PE diagnosis and treatment.
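
For reference, the Dice Similarity Coefficient used to compare the segmentations is straightforward to compute from two binary masks; a minimal sketch (mask names are placeholders):

```python
# Dice Similarity Coefficient between two binary segmentation masks.
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

# dice(algorithm_mask, expert_mask)  -> averaged ~0.70 in the study
# dice(expert1_mask, expert2_mask)   -> averaged ~0.61 between experts
```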


Subject(s)
Deep Learning , Pleural Effusion , Humans , Follow-Up Studies , Algorithms , Lung/diagnostic imaging , Pleural Effusion/diagnostic imaging
4.
Int J Inf Technol ; 15(1): 129-136, 2023.
Article in English | MEDLINE | ID: mdl-36466771

ABSTRACT

Ensuring high vehicle quality extends vehicle lifetime, improves the customer experience, and reduces maintenance problems, so objective, scientific methods for evaluating vehicle quality are important. In this paper, we present a computational framework for evaluating vehicle quality based on interpretable machine learning techniques. Validation of the proposed framework on a publicly available vehicle quality evaluation dataset demonstrates an objective, machine-learning-based approach with improved interpretability and deeper insight, obtained by applying several post-hoc model interpretability enhancement techniques.
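
The abstract does not name the specific post-hoc techniques used, so as one hedged example of that family, here is a sketch of permutation importance with scikit-learn; the model choice and data variables are assumptions.

```python
# One common post-hoc interpretability technique: permutation importance.
# This is an illustration of the technique family, not the paper's exact method.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# X, y: features and quality labels from a vehicle quality evaluation dataset.
# X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
# result.importances_mean ranks how strongly each feature drives the prediction.
```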

5.
Sci Rep ; 12(1): 17581, 2022 10 20.
Article in English | MEDLINE | ID: mdl-36266463

ABSTRACT

Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19-induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort, which in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. Moreover, we introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. Despite being a form of inaccurate learning, this method outperformed the inaccurately supervised video-based method and, more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under the curve (PR-AUC) and F1 score. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using ten-fold cross-validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the sampled quaternary method significantly lowers the labelling effort, its efficacy must still be verified on a larger consolidation/collapse dataset; nevertheless, the proposed classifier using the sampled quaternary video-based method is clinically comparable with trained experts' performance.
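
A minimal sketch of the sampled quaternary labelling idea as the abstract describes it: sample 10% of a video's frames, then assign one ordinal category to every frame from the fraction of positive annotations. The category cut points and function names are illustrative assumptions, not the paper's exact values.

```python
# Hypothetical sketch of sampled quaternary labelling (assumed thresholds).
import numpy as np

def sampled_quaternary_label(n_frames, annotate_fn, sample_frac=0.1, rng=None):
    """annotate_fn(i) -> 0/1 expert annotation for frame i (only called on samples)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(n_frames, size=max(1, int(sample_frac * n_frames)), replace=False)
    pos_frac = np.mean([annotate_fn(i) for i in idx])  # fraction of sampled frames positive
    bins = [0.25, 0.5, 0.75]                           # assumed ordinal cut points
    return int(np.digitize(pos_frac, bins))            # category 0..3, applied to all frames
```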


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Ultrasonography/methods , Algorithms , Lung/diagnostic imaging
6.
Int J Inf Technol ; 14(1): 95-103, 2022.
Article in English | MEDLINE | ID: mdl-35005425

ABSTRACT

The success of deep learning, a subfield of Artificial Intelligence, in image analysis and computer vision can be leveraged to build better decision support systems for clinical radiological settings. Detecting and segmenting tumorous tissue in the brain using deep learning is one such scenario, where radiologists can benefit from a computer-based second opinion for assessing disease severity and subject survival through an accurate and timely clinical diagnosis. Gliomas are an aggressive form of brain tumor with irregular shapes and ambiguous boundaries, making them among the hardest tumors to detect; they often require a combined analysis of different types of radiological scans for accurate detection. In this paper, we present a fully automatic deep learning method for brain tumor segmentation in multimodal, multi-contrast magnetic resonance image scans. The proposed approach is based on a lightweight UNet architecture consisting of a multimodal CNN encoder-decoder computational model. On the publicly available Brain Tumor Segmentation (BraTS) Challenge 2018 dataset from the Medical Image Computing and Computer Assisted Intervention (MICCAI) society, the proposed lightweight UNet model, requiring no data augmentation and no heavy computational resources, achieved improved performance compared with previous challenge entries that relied on heavy computational architectures and various data augmentation approaches. This makes the proposed model more suitable for remote, extreme, and low-resource healthcare settings.
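
To make the architecture class concrete, here is a minimal, hypothetical lightweight UNet-style encoder-decoder in PyTorch; channel counts, depth, and names are assumptions and do not reproduce the paper's model.

```python
# Minimal UNet-style encoder-decoder sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class LightUNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=4, base=16):  # e.g. 4 MRI modalities in
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, base), block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # encoder stage 1
        e2 = self.enc2(self.pool(e1))                         # encoder stage 2 / bottleneck
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # decoder with skip connection
        return self.head(d1)                                  # per-pixel class logits

# logits = LightUNet()(torch.randn(1, 4, 128, 128))  # -> shape (1, 4, 128, 128)
```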

7.
Phys Med ; 83: 38-45, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33706149

ABSTRACT

Lung ultrasound (LUS) imaging as a point-of-care diagnostic tool for lung pathologies has been proven superior to X-ray and comparable to CT, enabling earlier and more accurate diagnosis in real time at the patient's bedside. The main limitation to widespread use is its dependence on operator training and experience. COVID-19 lung ultrasound findings predominantly reflect a pneumonitis pattern, with pleural effusion being infrequent. However, pleural effusion is easy to detect and quantify, and was therefore selected as the subject of this study, which aims to develop an automated system for the interpretation of LUS of pleural effusion. A LUS dataset consisting of 623 videos containing 99,209 2D ultrasound images from 70 patients was collected at the Royal Melbourne Hospital using a phased-array transducer. A standardized protocol was followed that involved scanning six anatomical regions providing complete coverage of the lungs for diagnosis of respiratory pathology. This protocol, combined with a deep learning algorithm using a Spatial Transformer Network, provides a basis for automatic pathology classification at the image level. In this work, the deep learning model was trained using supervised and weakly supervised approaches, which used frame-based and video-based ground truth labels, respectively. The reference was expert clinician image interpretation. Both approaches achieved comparable accuracy scores on the test set of 92.4% and 91.1%, respectively, with no statistically significant difference. However, the video-based labelling approach requires significantly less effort from clinical experts for ground truth labelling.
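
The weakly supervised setup can be illustrated with a small sketch: a single video-level label is propagated to every frame of that video, trading per-frame label precision for far less expert annotation effort. Names and shapes below are assumptions.

```python
# Sketch of video-level labels expanded to frame-level weak labels.
import numpy as np

def frame_labels_weak(video_labels, frames_per_video):
    """video_labels: 0/1 per video; frames_per_video: frame count per video."""
    return np.concatenate([np.full(n, lab)
                           for lab, n in zip(video_labels, frames_per_video)])

# Fully supervised: experts annotate every frame individually (~99,209 here).
# Weakly supervised: experts annotate each video once (623 here), then:
# y_frames = frame_labels_weak([1, 0, 1], [150, 200, 120])  # -> 470 frame labels
```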


Subject(s)
COVID-19 , Deep Learning , Pleural Effusion , Humans , Lung/diagnostic imaging , Pleural Effusion/diagnostic imaging , SARS-CoV-2 , Ultrasonography
8.
Int J Comput Assist Radiol Surg ; 11(9): 1599-610, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27492067

ABSTRACT

PURPOSE: Optical colonoscopy is a prominent procedure by which clinicians examine the surface of the colon for cancerous polyps using a flexible colonoscope. One of the main concerns regarding the quality of a colonoscopy is ensuring that the whole colonic surface has been inspected for abnormalities. In this paper, we aim to estimate areas that have not been covered thoroughly by providing a map of the internal colon surface. METHODS: Camera parameters were estimated using optical flow between consecutive colonoscopy frames. A cylinder model was fitted to the colon structure using 3D pseudo-stereo vision and projected into each frame. A circumferential band from the cylinder was extracted to unroll the internal colon surface (band image). By registering these band images, drift in estimating camera motion could be reduced, and a visibility map of the colon surface could be generated, revealing areas not covered by the colonoscope. Hidden areas behind haustral folds were ignored in this study. The method was validated on simulated and actual colonoscopy videos. The realistic simulated videos were generated using a colonoscopy simulator with known ground truth, and the actual colonoscopy videos were manually assessed by a clinical expert. RESULTS: The proposed method obtained a sensitivity and precision of 98% and 96%, respectively, for detecting the number of uncovered areas on simulated data, whereas validation on real videos showed a sensitivity and precision of 96% and 78%, respectively. Error due to camera motion drift could be reduced by almost 50% using results from band image registration. CONCLUSION: Using a simple cylindrical model for the colon and reducing drift by registering band images allows for the generation of visibility maps. The current results also suggest that the feedback provided through the visibility map could enhance clinicians' awareness of uncovered areas, which in turn could reduce the probability of missing polyps.
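
A much-simplified sketch of the band-image idea: unroll a circumferential ring of the colon (modelled as a cylinder) into a strip indexed by angle, then accumulate strips into a visibility map whose empty cells mark potentially uncovered surface. Camera and cylinder estimation, projection, and registration are omitted; all geometry and names here are assumptions.

```python
# Simplified band-image / visibility-map sketch (assumed geometry).
import numpy as np

def unroll_band(frame, cx, cy, radius, n_angles=360):
    """Sample pixel intensities on a circle of given radius around (cx, cy)."""
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    xs = np.clip((cx + radius * np.cos(theta)).astype(int), 0, frame.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(theta)).astype(int), 0, frame.shape[0] - 1)
    return frame[ys, xs]  # one row of the unrolled band image

def visibility_map(bands, depth_per_frame, n_depth_bins=500):
    """Mark each (depth, angle) cell observed by at least one band as covered."""
    vis = np.zeros((n_depth_bins, bands[0].shape[0]), dtype=bool)
    for band, depth in zip(bands, depth_per_frame):
        row = int(np.clip(depth, 0, 1) * (n_depth_bins - 1))
        vis[row, :] |= band > 0
    return vis  # False cells indicate potentially uncovered colon surface
```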


Subject(s)
Colon/diagnostic imaging , Colonic Polyps/diagnosis , Colonoscopy/methods , Imaging, Three-Dimensional , Video Recording , Colonoscopes , Equipment Design , Humans