1.
Med Phys ; 50(8): 4973-4980, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36724170

ABSTRACT

BACKGROUND: Measurement of cross-sectional muscle area (CSMA) at the mid third lumbar vertebra (L3) level from computed tomography (CT) images is becoming one of the reference methods for sarcopenia diagnosis. However, manual skeletal muscle segmentation is tedious and is thus restricted to research; automated solutions are required for use in clinical practice. PURPOSE: The aim of this study was to compare the reliability of two automated solutions for the measurement of CSMA. METHODS: We conducted a retrospective analysis of CT images in our hospital database. We included consecutive individuals hospitalized at Grenoble University Hospital in France between January and May 2018 with abdominal CT images and sagittal reconstruction. We used two software solutions to automatically segment skeletal muscle: ABACS, a module of the SliceOmatic software ("ABACS-SliceOmatic"), and a deep learning-based solution called "AutoMATiCA." Reference data were generated by a medical expert performing manual segmentation with "SliceOmatic." The Dice similarity coefficient (DSC) was used to measure the overlap between the manual and automated segmentations, and the DSC values of the two methods were compared with the Mann-Whitney U test. RESULTS: A total of 676 hospitalized individuals were retrospectively included (365 males [53.8%] and 312 females [46.2%]). The median DSC for SliceOmatic vs. AutoMATiCA (0.969 [5th percentile: 0.909]) was greater than the median DSC for SliceOmatic vs. ABACS-SliceOmatic (0.949 [5th percentile: 0.836]) (p < 0.001). CONCLUSIONS: AutoMATiCA, which uses artificial intelligence, was more reliable than ABACS-SliceOmatic for skeletal muscle segmentation at the L3 level in a cohort of hospitalized individuals. The next step is to develop and validate a neural network that can identify L3 slices, which is currently a tedious manual process.
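
For reference, a minimal sketch of the evaluation described above: the DSC between two binary segmentation masks and a Mann-Whitney U comparison of per-patient DSC values. It assumes NumPy masks and SciPy; the function name and the placeholder values are illustrative, not from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:            # both masks empty: define perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# One DSC per patient, each automated mask compared against the manual
# reference segmentation (placeholder values only).
dsc_automatica = np.array([0.97, 0.95, 0.96])
dsc_abacs      = np.array([0.94, 0.90, 0.95])

stat, p_value = mannwhitneyu(dsc_automatica, dsc_abacs, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3g}")
```
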


Subject(s)
Artificial Intelligence , Tomography, X-Ray Computed , Male , Female , Humans , Retrospective Studies , Reproducibility of Results , Cross-Sectional Studies , Tomography, X-Ray Computed/methods , Muscle, Skeletal/diagnostic imaging
2.
Stud Health Technol Inform ; 290: 1068-1069, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673209

ABSTRACT

Big Data and Deep Learning approaches offer new opportunities for medical data analysis. With these technologies, PREDIMED, the clinical data warehouse of Grenoble Alps University Hospital, is setting up its first clinical studies on retrospective data. In particular, the ODIASP study aims to develop and evaluate deep learning-based tools for automatic sarcopenia diagnosis using data collected via PREDIMED, notably medical images. Here we describe a methodology for preparing the data of a clinical study conducted via PREDIMED.


Subject(s)
Sarcopenia , Big Data , Data Warehousing , Humans , Image Processing, Computer-Assisted , Retrospective Studies , Sarcopenia/diagnostic imaging
3.
Stud Health Technol Inform ; 270: 108-112, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570356

ABSTRACT

Grenoble Alpes University Hospital (CHUGA) is currently deploying a health data warehouse called PREDIMED [1], a platform designed to integrate and analyze the data of patients treated at CHUGA for research, education and institutional management. PREDIMED contains healthcare data, administrative data and, potentially, data from external databases. It is hosted by the CHUGA Information Systems Department and benefits from its strict security rules. CHUGA's institutional project PREDIMED aims to collaborate with similar projects in France and worldwide. In this paper, we present how the data model defined to implement PREDIMED at CHUGA enables medical experts to interactively build a cohort of patients and to visualize it.


Subject(s)
Data Warehousing , Cohort Studies , Databases, Factual , Delivery of Health Care , France , Humans
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 2002-2005, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060288

ABSTRACT

The automatic detection of surgical tools in surgery videos is a promising solution for surgical workflow analysis. It paves the way to various applications, including surgical workflow optimization, surgical skill evaluation and real-time warning generation. A solution based on convolutional neural networks (CNNs) is proposed in this paper. Unlike existing solutions, the proposed CNN does not analyze images independently: it analyzes sequences of consecutive images. Features extracted from each image by the CNN are fused inside the network using optical flow. For improved performance, this multi-image fusion strategy is also applied while training the CNN. The proposed framework was evaluated on a dataset of 30 cataract surgery videos (6 hours of video). Ten tool categories were defined by surgeons. The proposed system was able to detect each of these categories with a high area under the ROC curve (0.953 ≤ Az ≤ 0.987). The proposed detector, based on multi-image fusion, was significantly more sensitive and specific than a similar system analyzing images independently (p = 2.98 × 10⁻⁶ and p = 2.07 × 10⁻³, respectively).
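
The flow-based fusion is the key architectural idea. As a hedged sketch (not the paper's network), the following shows one way to align a previous frame's CNN feature map to the current frame with dense optical flow and average the two feature maps, assuming OpenCV, PyTorch and torchvision; the backbone choice and all names are illustrative.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)                           # illustrative backbone
features = torch.nn.Sequential(*list(backbone.children())[:-2])    # keep conv feature maps
features.eval()

def frame_features(img_bgr: np.ndarray) -> torch.Tensor:
    """(H, W, 3) uint8 BGR frame -> (1, C, h, w) feature map."""
    x = torch.from_numpy(img_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        return features(x.unsqueeze(0))

def warp_with_flow(feat: torch.Tensor, flow: np.ndarray) -> torch.Tensor:
    """Warp a feature map with a dense flow field computed at image scale."""
    _, _, h, w = feat.shape
    flow_small = cv2.resize(flow, (w, h))          # bring flow to feature scale
    flow_small[..., 0] *= w / flow.shape[1]
    flow_small[..., 1] *= h / flow.shape[0]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Sampling grid displaced by the flow, normalized to [-1, 1] for grid_sample.
    gx = 2.0 * (xs + flow_small[..., 0]) / (w - 1) - 1.0
    gy = 2.0 * (ys + flow_small[..., 1]) / (h - 1) - 1.0
    grid = torch.from_numpy(np.stack([gx, gy], axis=-1)).unsqueeze(0)
    return F.grid_sample(feat, grid, align_corners=True)

def fuse(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> torch.Tensor:
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Backward flow (current -> previous) so previous features are sampled
    # at the locations that correspond to current-frame pixels.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    aligned_prev = warp_with_flow(frame_features(prev_bgr), flow)
    return 0.5 * (aligned_prev + frame_features(curr_bgr))   # simple average fusion
```
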


Subject(s)
Cataract , Cataract Extraction , Humans , Neural Networks, Computer , ROC Curve
5.
Med Image Anal ; 39: 178-193, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28511066

ABSTRACT

Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.
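
The paper's heatmaps come from a generalization of backpropagation. As a simpler stand-in that illustrates the underlying idea (pixel-level heatmaps from an image-level classifier), here is a vanilla gradient saliency sketch in PyTorch; the backbone, the class index and the random input are placeholders, not the paper's method.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None)    # illustrative classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder input
score = model(image)[0, 1]               # image-level logit, e.g. "referable DR"
score.backward()                         # gradients flow back to the pixels

# Heatmap: maximum absolute gradient over color channels, per pixel.
# Large values mark pixels whose perturbation most changes the score.
heatmap = image.grad.detach().abs().max(dim=1)[0].squeeze(0)
print(heatmap.shape)                     # torch.Size([224, 224])
```
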


Subject(s)
Diabetic Retinopathy/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Machine Learning , Retina/diagnostic imaging , Algorithms , Artifacts , Data Mining/methods , Humans
6.
Med Image Anal ; 18(3): 579-90, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24637155

ABSTRACT

Nowadays, many surgeries, including eye surgeries, are video-monitored. We present in this paper an automatic video analysis system able to recognize surgical tasks in real time. The proposed system relies on the Content-Based Video Retrieval (CBVR) paradigm: it characterizes short subsequences in the video stream and searches a video archive for subsequences with similar structures. Fixed-length feature vectors are built for each subsequence; these feature vectors are invariant to variations in duration and temporal structure among the target surgical tasks, which makes fast nearest neighbor searches in the video archive possible. The retrieved video subsequences are used to recognize the current surgical task by analogy reasoning. The system can be trained to recognize any surgical task using weak annotations only. It was applied to a dataset of 23 epiretinal membrane surgeries and a dataset of 100 cataract surgeries. Three surgical tasks were annotated in the first dataset and nine in the second. To assess its generality, the system was also applied to a dataset of 1,707 movie clips in which 12 human actions were annotated. High task recognition scores were measured in all three datasets. Real-time task recognition will be used in future work to communicate with surgeons (trainees in particular) or with surgical devices.
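
A minimal sketch of the recognition-by-retrieval idea described above: fixed-length feature vectors for video subsequences, a fast nearest-neighbor search over an annotated archive, and a majority vote over the retrieved neighbors' task labels. The features, labels and dimensions are random placeholders, not the paper's descriptors.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
archive_features = rng.normal(size=(500, 64))   # 500 archived subsequences
archive_labels = rng.integers(0, 9, size=500)   # e.g. 9 annotated surgical tasks

index = NearestNeighbors(n_neighbors=5).fit(archive_features)

def recognize(query: np.ndarray) -> int:
    """Label the current subsequence by analogy with its nearest neighbors."""
    _, idx = index.kneighbors(query.reshape(1, -1))
    votes = Counter(archive_labels[i] for i in idx[0])
    return votes.most_common(1)[0][0]

print(recognize(rng.normal(size=64)))           # predicted task label
```
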


Subject(s)
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Ophthalmologic Surgical Procedures/methods , Pattern Recognition, Automated/methods , Photography/methods , Surgery, Computer-Assisted/methods , Video Recording/methods , Algorithms , Artificial Intelligence , Computer Systems , Eye Diseases/surgery , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
7.
Article in English | MEDLINE | ID: mdl-25569912

ABSTRACT

Anterior eye segment surgeries are usually video-recorded. If surgical videos can be analyzed efficiently in real time, new decision support tools will emerge. The main anatomical landmarks in these videos are the pupil boundaries and the limbus, but segmenting them is challenging due to the variety of colors and textures in the pupil, the iris, the sclera and the lids. In this paper, we present a solution to reliably normalize the center and the scale in videos without explicitly segmenting these landmarks. First, a robust solution to track the pupil center is presented: it exploits the fact that the pupil boundaries, the limbus and the sclera/lid interface are concentric. Second, a solution to estimate the zoom level is presented: it relies on the illumination pattern reflected on the cornea. The proposed solution was assessed on a dataset of 186 real-life cataract surgery videos. The distance between the true and estimated pupil centers was equal to 8.0 ± 6.9% of the limbus radius, and the correlation between the estimated zoom level and the true limbus size in images was high (R = 0.834).
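
As a simplified stand-in for the paper's concentricity-based tracker (which fits pupil, limbus and sclera/lid boundaries jointly rather than a single circle), a Hough-circle sketch with OpenCV shows how a center and a scale proxy can be extracted without explicit segmentation; all parameter values are illustrative.

```python
import cv2
import numpy as np

def estimate_center(frame_bgr: np.ndarray):
    """Return ((x, y), radius) of the strongest circular boundary, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                    # suppress specular noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0],  # expect a single eye
                               param1=100, param2=30,
                               minRadius=20, maxRadius=0)
    if circles is None:
        return None
    x, y, r = circles[0, 0]                           # strongest detection
    return (float(x), float(y)), float(r)             # center and scale proxy
```
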


Subject(s)
Cataract Extraction/methods , Cataract/diagnosis , Cornea/surgery , Cornea/pathology , Decision Support Techniques , Humans , Image Interpretation, Computer-Assisted , Sclera/pathology , Video Recording/methods
8.
Article in English | MEDLINE | ID: mdl-25571028

ABSTRACT

Huge amounts of surgical data are recorded during video-monitored surgery. Content-based video retrieval systems intend to reuse those data for computer-aided surgery. In this paper, we focus on real-time recognition of cataract surgery steps: the goal is to retrieve from a database the surgery videos that were recorded during the same surgery step. The proposed system relies on motion features for video characterization. Motion features are usually impacted by eye motion or zoom level variations, which are not necessarily relevant for surgery step recognition, and these problems limit the performance of the retrieval system. We therefore propose to refine motion feature extraction by applying pre-processing steps based on a novel pupil center and scale tracking method. These pre-processing steps are evaluated for two different motion features. In addition, a similarity measure adapted from Piciarelli's video surveillance system is evaluated for the first time on a surgery dataset. This similarity measure provides good results, and for both motion features the proposed pre-processing steps significantly improved the retrieval performance of the system.
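
A hedged sketch of the pre-processing idea, under the assumption that it amounts to re-expressing raw motion vectors relative to the tracked pupil center and dividing by the estimated zoom level, so that global eye motion and zoom variations no longer dominate the features; the function and argument names are illustrative.

```python
import numpy as np

def normalize_motion(points: np.ndarray, vectors: np.ndarray,
                     pupil_center: np.ndarray, pupil_drift: np.ndarray,
                     zoom: float) -> tuple[np.ndarray, np.ndarray]:
    """points: (N, 2) positions; vectors: (N, 2) raw motion vectors.

    pupil_center is the tracked center in the current frame and
    pupil_drift its frame-to-frame displacement (the global eye motion).
    """
    rel_points = (points - pupil_center) / zoom    # eye-centered coordinates
    rel_vectors = (vectors - pupil_drift) / zoom   # remove eye motion and zoom
    return rel_points, rel_vectors
```
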


Subject(s)
Cataract Extraction , Pattern Recognition, Automated/methods , Video Recording , Algorithms , Automation , Databases, Factual , Eye Movements , Humans