Results 1 - 9 of 9
1.
Epilepsy Behav ; 154: 109735, 2024 May.
Article in English | MEDLINE | ID: mdl-38522192

ABSTRACT

Seizure events can manifest as transient disruptions in the control of movements, which may be organized into distinct behavioral sequences and may be accompanied by other observable features such as altered facial expressions. The analysis of these clinical signs, referred to as semiology, is subject to inter-observer variation when specialists evaluate video-recorded events in the clinical setting. To enhance the accuracy and consistency of these evaluations, computer-aided video analysis of seizures has emerged as a natural avenue. Deep learning and computer vision approaches have driven substantial advancements in medical applications. Historically, these approaches have been used for disease detection, classification, and prediction using diagnostic data; however, their application to video-based motion analysis in the clinical epileptology setting remains underexplored. While vision-based technologies do not aim to replace clinical expertise, they can contribute significantly to medical decision-making and patient care by providing quantitative evidence and decision support. Behavior monitoring tools offer several advantages, such as providing objective information, detecting events that are challenging to observe, reducing documentation effort, and extending assessment capabilities to areas with limited expertise. Their main applications are (1) improved seizure detection methods and (2) refined semiology analysis for predicting seizure type and cerebral localization. In this paper, we detail the foundational technologies used in vision-based systems for the analysis of seizure videos, highlighting their success in semiology detection and analysis, with a focus on work published in the last 7 years. We systematically present these methods and indicate how the adoption of deep learning for the analysis of video recordings of seizures could be approached.
Additionally, we illustrate how existing technologies can be interconnected through an integrated system for video-based semiology analysis. Each module can be customized and improved by adapting more accurate and robust deep learning approaches as these evolve. Finally, we discuss challenges and research directions for future studies.
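The integrated, modular system this abstract describes can be sketched as a chain of interchangeable stages. The module names, data types, and stub logic below are illustrative assumptions for exposition, not the authors' implementation; the point is only that each stage can be swapped for a stronger model as deep learning approaches evolve.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical intermediate representations for a video-based
# semiology pipeline: each module consumes the previous module's output.
@dataclass
class Frame:            # one video frame (placeholder for pixel data)
    index: int

@dataclass
class PoseFeatures:     # e.g. skeleton keypoints extracted per frame
    frame_index: int
    keypoints: List[float]

def detect_person(frames: List[Frame]) -> List[Frame]:
    """Module 1: keep only frames where the patient is visible (stub)."""
    return frames

def extract_pose(frames: List[Frame]) -> List[PoseFeatures]:
    """Module 2: per-frame pose estimation (stubbed with zeros)."""
    return [PoseFeatures(f.index, [0.0] * 4) for f in frames]

def classify_semiology(features: List[PoseFeatures]) -> str:
    """Module 3: map a feature sequence to a semiology label (stub)."""
    return "unclassified" if not features else "motor"

def run_pipeline(frames, modules=None):
    """Chain the modules; any stage can be replaced independently."""
    detect, pose, classify = modules or (detect_person, extract_pose,
                                         classify_semiology)
    return classify(pose(detect(frames)))

label = run_pipeline([Frame(i) for i in range(8)])
```

Because each stage only depends on the interface of the previous one, customizing a module (say, replacing the stub pose estimator with a trained network) does not disturb the rest of the system.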


Subject(s)
Deep Learning , Seizures , Video Recording , Humans , Seizures/diagnosis , Seizures/physiopathology , Video Recording/methods , Electroencephalography/methods
2.
Heliyon ; 9(6): e16763, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37303525

ABSTRACT

Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neuro-developmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children from their early developmental stages onwards, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnostic process is time-consuming, as it requires long-term observation of behavior, and is further hindered by the scarce availability of specialists. We demonstrate the effectiveness of a region-based computer vision system in helping clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g. videos collected with consumer-grade cameras in varied environments). The data is pre-processed by detecting the target child in each video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose both light-weight and conventional models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. By extensively evaluating feature extraction and learning strategies, we demonstrate that the highest performance is attained with an Inflated 3D ConvNet combined with a Multi-Stage Temporal Convolutional Network. Our model achieved a Weighted F1-score of 0.83 for the classification of the three autism-related actions. We also propose a light-weight solution that employs the ESNet backbone with the same action recognition model, achieving a competitive Weighted F1-score of 0.71 and enabling potential deployment on embedded systems.
Experimental results demonstrate the ability of our proposed models to recognize autism-related actions from videos captured in an uncontrolled environment, and thus can assist clinicians in analyzing ASD.
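The Weighted F1-score reported above averages per-class F1 with weights proportional to each class's support, which matters when the action classes are imbalanced. A minimal sketch of that metric in plain Python; the action labels in the toy example are illustrative, not the dataset's actual class names.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support,
    matching the usual definition of the Weighted F1-score."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (n / total) * f1      # weight by class support
    return score

# Toy example with three hypothetical action classes.
truth = ["arm_flapping", "head_banging", "spinning", "arm_flapping"]
pred  = ["arm_flapping", "head_banging", "arm_flapping", "arm_flapping"]
result = weighted_f1(truth, pred)
```

Unlike macro-averaging, a rare class with poor F1 pulls the weighted score down only in proportion to how often it occurs.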

3.
Comput Med Imaging Graph ; 95: 102027, 2022 01.
Article in English | MEDLINE | ID: mdl-34959100

ABSTRACT

With the remarkable success of representation learning for prediction problems, we have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches. However, learning over patch-wise features using convolutional neural networks limits the ability of the model to capture global contextual information and comprehensively model tissue composition. The phenotypical and topological distribution of constituent histological entities plays a critical role in tissue diagnosis. As such, graph data representations and deep learning have attracted significant attention for encoding tissue representations and capturing intra- and inter-entity interactions. In this review, we provide a conceptual grounding for graph analytics in digital pathology, including entity-graph construction and graph architectures, and present their current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction. We provide an overview of these methods in a systematic manner, organized by the graph representation of the input image, scale, and organ on which they operate. We also outline the limitations of existing techniques, and suggest potential future research directions in this domain.
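Entity-graph construction, as surveyed above, typically treats each detected histological entity (e.g. a cell nucleus) as a node and connects it to its spatial nearest neighbours. A minimal k-nearest-neighbour construction over hypothetical cell centroids, as a sketch of one common approach rather than any specific method from the review:

```python
import numpy as np

def knn_cell_graph(centroids: np.ndarray, k: int = 2):
    """Build an entity graph: one node per cell centroid, undirected
    edges to each cell's k spatially nearest other cells."""
    n = len(centroids)
    # Pairwise Euclidean distances between all centroids.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-loops
    edges = set()
    for i in range(n):
        for j in np.argsort(d[i])[:k]:          # k nearest neighbours of i
            edges.add((min(i, j), max(i, j)))   # store edge undirected
    return sorted(edges)

# Four hypothetical cell centroids in image coordinates (pixels):
# three clustered together and one isolated cell far away.
cells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
graph = knn_cell_graph(cells, k=1)
```

A graph neural network then operates on these edges, so the choice of k (or of an alternative such as a distance threshold or Delaunay triangulation) directly controls which inter-entity interactions the model can see.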


Subject(s)
Deep Learning , Neoplasms , Humans , Machine Learning , Neural Networks, Computer
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2601-2604, 2021 11.
Article in English | MEDLINE | ID: mdl-34891786

ABSTRACT

Inpatient falls are a serious safety issue in hospitals and healthcare facilities. Recent advances in video analytics for patient monitoring provide a non-intrusive avenue to reduce this risk through continuous activity monitoring. However, in-bed fall risk assessment systems have received less attention in the literature. The majority of prior studies have focused on fall event detection, and do not consider the circumstances that may indicate an imminent inpatient fall. Here, we propose a video-based system that can monitor the risk of a patient falling, and alert staff of unsafe behaviour to help prevent falls before they occur. We propose an approach that leverages recent advances in human localisation and skeleton pose estimation to extract spatial features from video frames recorded in a simulated environment. We demonstrate that body positions can be effectively recognised and provide useful evidence for fall risk assessment. This work highlights the benefits of video-based models for analysing behaviours of interest, and demonstrates how such a system could provide sufficient lead time for healthcare professionals to respond and address patient needs, which is necessary for the development of fall intervention programs.
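The idea of recognising body position from skeleton keypoints can be illustrated with a toy rule over a torso centre relative to a bed region. The keypoint layout, bed coordinates, and thresholds below are hypothetical assumptions for exposition; the paper's actual system learns from pose features rather than using fixed rules.

```python
# Minimal sketch of rule-based fall-risk cues from skeleton keypoints.
BED_X_RANGE = (100.0, 400.0)   # hypothetical bed bounds in image pixels

def torso_centre(keypoints):
    """Average of torso keypoints, each given as an (x, y) pixel pair."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def position_label(keypoints):
    """Classify the patient's position relative to the bed region."""
    x, _ = torso_centre(keypoints)
    lo, hi = BED_X_RANGE
    margin = 0.1 * (hi - lo)           # 10% of bed width as an edge band
    if x < lo or x > hi:
        return "out_of_bed"            # highest risk: patient off the bed
    if x < lo + margin or x > hi - margin:
        return "near_edge"             # unsafe behaviour: alert staff
    return "in_bed"

# Shoulders and hips of a patient lying near the bed's left edge.
risk = position_label([(115, 80), (125, 80), (115, 160), (125, 160)])
```

Even this crude spatial feature shows why position recognition gives lead time: "near_edge" can be flagged before any fall event occurs, which event-detection systems by definition cannot do.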


Subject(s)
Accidental Falls , Inpatients , Accidental Falls/prevention & control , Hospitals , Humans , Monitoring, Physiologic , Risk Assessment
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3613-3616, 2021 11.
Article in English | MEDLINE | ID: mdl-34892020

ABSTRACT

Recent advances in deep learning have enabled the development of automated frameworks for analysing medical images and signals, including analysis of cervical cancer. Many previous works focus on the analysis of isolated cervical cells, or do not offer explainable methods to explore and understand how the proposed models reach their classification decisions on multi-cell images. Here, we evaluate various state-of-the-art deep learning models and attention-based frameworks to classify multiple cervical cells. Our aim is to provide interpretable deep learning models by comparing their explainability through gradient visualization. We demonstrate the importance of using images that contain multiple cells over using isolated single-cell images. We show the effectiveness of the residual channel attention model for extracting important features from a group of cells, and demonstrate this model's efficiency for the classification of multiple cervical cells. This work highlights the benefits of attention networks that exploit relations and distributions within multi-cell images for cervical cancer analysis. Such an approach can assist clinicians in understanding a model's prediction by providing interpretable results.


Subject(s)
Neural Networks, Computer , Uterine Cervical Neoplasms , Female , Humans
6.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300498

ABSTRACT

With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
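The core operation shared by the graph architectures this survey covers is message passing: each node updates its features by aggregating those of its neighbours along the graph's edges. A minimal mean-aggregation layer in NumPy; the 3-node line graph (imagine three sensors connected by anatomical adjacency) and the identity weight matrix are illustrative assumptions.

```python
import numpy as np

def gnn_layer(features, adjacency, weight):
    """One graph-convolution step: average each node's neighbours
    (including itself), then apply a shared linear transform and ReLU."""
    a_hat = adjacency + np.eye(len(adjacency))      # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)          # node degrees
    aggregated = (a_hat @ features) / deg           # mean aggregation
    return np.maximum(aggregated @ weight, 0.0)     # linear map + ReLU

# Toy graph: 3 sensor nodes in a line (edges 0-1 and 1-2), 2 features each.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1., 0.],
                  [0., 1.],
                  [1., 0.]])
w = np.eye(2)                                       # identity transform
out = gnn_layer(feats, adj, w)
```

Because only connected nodes exchange information, the irregular structure of a physiological recording is handled natively, with no need to force the data into a grid.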


Subject(s)
Deep Learning , Attention , Machine Learning , Neural Networks, Computer
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 184-187, 2020 07.
Article in English | MEDLINE | ID: mdl-33017960

ABSTRACT

Recent advances in deep learning have enabled the development of automated frameworks for analysing medical images and signals. For analysis of physiological recordings, models based on temporal convolutional networks and recurrent neural networks have demonstrated encouraging results and an ability to capture complex patterns and dependencies in the data. However, representations that capture the entirety of the raw signal are suboptimal, as not all portions of the signal are equally important. As such, attention mechanisms are proposed to divert focus to regions of interest, reducing computational cost and enhancing accuracy. Here, we evaluate attention-based frameworks for the classification of physiological signals in different clinical domains. We evaluated our methodology on three classification scenarios: neurodegenerative disorders, neurological status, and seizure type. We demonstrate that attention networks can outperform traditional deep learning models for sequence modelling by identifying the most relevant attributes of an input signal for decision making. This work highlights the benefits of attention-based models for analysing raw data in the field of biomedical research.
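The attention mechanism described above amounts to scoring each time step of a signal, normalising the scores with a softmax, and summarising the sequence as the weighted average. The weights themselves reveal which regions drove the decision, which is the interpretability benefit this abstract highlights. A NumPy sketch, where the "learned" query vector is a hypothetical stand-in for trained parameters:

```python
import numpy as np

def attention_pool(signal_feats, query):
    """Score each time step against a query vector, normalise with a
    softmax, and return the attention-weighted summary plus the weights
    (useful for inspecting which signal regions mattered)."""
    scores = signal_feats @ query                    # one score per step
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    summary = weights @ signal_feats                 # weighted average
    return summary, weights

# 5 time steps, 3 features each; step 2 carries a strong pattern.
feats = np.zeros((5, 3))
feats[2] = [5.0, 0.0, 0.0]
query = np.array([1.0, 0.0, 0.0])   # hypothetical learned query vector
summary, weights = attention_pool(feats, query)
```

A plain average would dilute the informative step by a factor of 5; attention concentrates nearly all of its weight on it.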


Subject(s)
Attention , Neural Networks, Computer , Databases, Genetic , Humans , Seizures
8.
Healthc Technol Lett ; 6(6): 187-190, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038855

ABSTRACT

Optical colonoscopy is known as a gold standard screening method for detecting and removing cancerous polyps. During this procedure, some polyps may go undetected because of their position, because they are not covered by the camera, or because they are missed by the surgeon. In this Letter, the authors introduce a novel convolutional neural network (ConvNet) algorithm to map the internal colon surface to a 2D map (visibility map), which can be used to increase clinicians' awareness of areas they might miss. This was achieved by leveraging a colonoscopy simulator to generate a dataset consisting of colonoscopy video frames and their corresponding colon centreline (CCL) points in 3D camera coordinates. A pair of video frames was used as input to a ConvNet, whereas the output was a point on the CCL and its direction vector. By knowing the CCL for each frame and roughly modelling the colon as a cylinder, frames could be unrolled to build a visibility map. They validated their results using both simulated and real colonoscopy frames. Their results showed that using consecutive simulated frames to learn the CCL generalises to real colonoscopy video frames for generating a visibility map.

9.
Int J Comput Assist Radiol Surg ; 11(9): 1599-610, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27492067

ABSTRACT

PURPOSE: Optical colonoscopy is a prominent procedure by which clinicians examine the surface of the colon for cancerous polyps using a flexible colonoscope. One of the main concerns regarding the quality of the colonoscopy is to ensure that the whole colonic surface has been inspected for abnormalities. In this paper, we aim to estimate areas that have not been covered thoroughly by providing a map of the internal colon surface. METHODS: Camera parameters were estimated using optical flow between consecutive colonoscopy frames. A cylinder model was fitted to the colon structure using 3D pseudo stereo vision and projected into each frame. A circumferential band from the cylinder was extracted to unroll the internal colon surface (band image). By registering these band images, drift in estimating camera motion could be reduced, and a visibility map of the colon surface could be generated, revealing areas not covered by the colonoscope. Hidden areas behind haustral folds were ignored in this study. The method was validated on simulated and actual colonoscopy videos. The realistic simulated videos were generated using a colonoscopy simulator with known ground truth, and the actual colonoscopy videos were manually assessed by a clinical expert. RESULTS: The proposed method obtained a sensitivity and precision of 98% and 96% for detecting the number of uncovered areas on simulated data, whereas validation on real videos showed a sensitivity and precision of 96% and 78%, respectively. Error due to camera motion drift could be reduced by almost 50% using results from band image registration. CONCLUSION: Using a simple cylindrical model for the colon and reducing drift by registering band images allows for the generation of visibility maps. The current results also suggest that the feedback provided through the visibility map could enhance clinicians' awareness of uncovered areas, which in turn could reduce the probability of missing polyps.
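The geometric heart of the method, modelling the colon as a cylinder and unrolling its surface into a 2D visibility map, can be sketched with stdlib math alone: a 3D surface point maps to an (angle, depth) bin, and unvisited bins mark uncovered areas. The bin counts, band length, and sample points below are hypothetical, and this sketch omits the paper's camera estimation and band-image registration steps.

```python
import math

def unroll_point(x, y, z, n_angle_bins=8, band_length=10.0, n_depth_bins=5):
    """Map a 3D point on a cylinder aligned with the z-axis to a bin of
    a 2D visibility map: angle around the axis x depth along it."""
    theta = math.atan2(y, x) % (2 * math.pi)       # circumferential angle
    a = int(theta / (2 * math.pi) * n_angle_bins)
    d = int(min(max(z, 0.0), band_length - 1e-9) / band_length * n_depth_bins)
    return a, d

def visibility_map(points, n_angle_bins=8, n_depth_bins=5):
    """Mark every map bin hit by an observed surface point; unmarked
    bins correspond to areas the colonoscope has not covered."""
    seen = [[False] * n_angle_bins for _ in range(n_depth_bins)]
    for x, y, z in points:
        a, d = unroll_point(x, y, z, n_angle_bins, n_depth_bins=n_depth_bins)
        seen[d][a] = True
    return seen

# Hypothetical observed points: one on the +x wall near the start of the
# (10-unit) band, one on the -x wall further along it.
vis = visibility_map([(1.0, 0.0, 0.5), (-1.0, 0.0, 9.0)])
covered = sum(cell for row in vis for cell in row)
```

Every bin the two points miss stays False, which is exactly the feedback the visibility map is meant to surface to the clinician.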


Subject(s)
Colon/diagnostic imaging , Colonic Polyps/diagnosis , Colonoscopy/methods , Imaging, Three-Dimensional , Video Recording , Colonoscopes , Equipment Design , Humans