Results 1 - 20 of 71
1.
Int J Retina Vitreous ; 10(1): 42, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822446

ABSTRACT

AIM: To adopt a novel artificial intelligence (AI) optical coherence tomography (OCT)-based program to identify the presence of biomarkers associated with central serous chorioretinopathy (CSC) and to determine whether these can differentiate between acute and chronic central serous chorioretinopathy (aCSC and cCSC). METHODS: Multicenter, observational study with a retrospective design enrolling treatment-naïve patients with aCSC and cCSC. The diagnosis of aCSC and cCSC was established with multimodal imaging, and subsequent follow-up visits were also considered for the current study. Baseline OCTs were analyzed by an AI-based platform (Discovery® OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). This software detects several different biomarkers in each single OCT scan, including subretinal fluid (SRF), intraretinal fluid (IRF), hyperreflective foci (HF), and flat irregular pigment epithelium detachment (FIPED). The presence of SRF was a necessary inclusion criterion for performing biomarker analysis, and OCT slabs without SRF were excluded from the analysis. RESULTS: Overall, 160 eyes of 144 patients with CSC were enrolled, of which 100 eyes (62.5%) were diagnosed with cCSC and 60 eyes (37.5%) with aCSC. In the OCT slabs showing presence of SRF, the presence of biomarkers was found to be clinically relevant (> 50%) for HF and FIPED in both aCSC and cCSC. HF had an average percentage of 81% (± 20) in the cCSC group and 81% (± 15) in the aCSC group (p = 0.4295), and FIPED had a mean percentage of 88% (± 18) in cCSC vs. 89% (± 15) in aCSC (p = 0.3197). CONCLUSION: We demonstrate that HF and FIPED are OCT biomarkers positively associated with CSC when present at baseline. While both HF and FIPED could aid in CSC diagnosis, they could not distinguish between aCSC and cCSC at the first visit.
AI-assisted biomarker detection shows promise for reducing invasive imaging needs, but further validation through longitudinal studies is needed.

2.
Cell Calcium ; 121: 102893, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38701707

ABSTRACT

The release of Ca2+ ions from intracellular stores plays a crucial role in many cellular processes, acting as a secondary messenger in various cell types, including cardiomyocytes, smooth muscle cells, hepatocytes, and many others. Detecting and classifying associated local Ca2+ release events is particularly important, as these events provide insight into the mechanisms, interplay, and interdependencies of the local Ca2+ release events underlying global intracellular Ca2+ signaling. However, time-consuming and labor-intensive procedures often complicate analysis, especially with low signal-to-noise ratio imaging data. Here, we present an innovative deep learning-based approach for automatically detecting and classifying local Ca2+ release events, exemplified with rapid full-frame confocal imaging data recorded in isolated cardiomyocytes. To demonstrate the robustness and accuracy of our method, we first use conventional evaluation methods, comparing the intersection between manual annotations and the segmentation of Ca2+ release events provided by the deep learning method, as well as the annotated and recognized instances of individual events. In addition, we compare the performance of the proposed model with the annotations of six experts in the field. Our model recognizes more than 75% of the annotated Ca2+ release events and correctly classifies more than 75% of them. A key result was that there were no significant differences between the annotations produced by human experts and the results of the proposed deep learning model. We conclude that the proposed approach is a robust and time-saving alternative to conventional full-frame confocal imaging analysis of local intracellular Ca2+ events.
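
The intersection-based comparison described above can be illustrated with a minimal intersection-over-union (IoU) routine for binary masks. This is a generic sketch, not the paper's pipeline, and the toy masks are hypothetical:

```python
def iou(annotation, prediction):
    """Intersection-over-union between two binary masks (nested lists of 0/1)."""
    inter = sum(a & p for row_a, row_p in zip(annotation, prediction)
                for a, p in zip(row_a, row_p))
    union = sum(a | p for row_a, row_p in zip(annotation, prediction)
                for a, p in zip(row_a, row_p))
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Toy 4x4 masks standing in for a manually annotated and a detected Ca2+ release event
manual = [[0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
auto   = [[0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(iou(manual, auto))  # 3 overlapping pixels / 4 in the union = 0.75
```

An IoU threshold on such overlaps is one common way to decide whether an annotated and a recognized event instance count as a match.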


Subject(s)
Calcium Signaling , Calcium , Deep Learning , Microscopy, Confocal , Myocytes, Cardiac , Calcium/metabolism , Microscopy, Confocal/methods , Animals , Myocytes, Cardiac/metabolism , Image Processing, Computer-Assisted/methods
3.
Comput Struct Biotechnol J ; 24: 334-342, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38690550

ABSTRACT

Malaria, a significant global health challenge, is caused by Plasmodium parasites. The Plasmodium liver stage plays a pivotal role in the establishment of the infection. This study focuses on the liver stage development of the model organism Plasmodium berghei, employing fluorescent microscopy imaging and convolutional neural networks (CNNs) for analysis. Convolutional neural networks have been recently proposed as a viable option for tasks such as malaria detection, prediction of host-pathogen interactions, or drug discovery. Our research aimed to predict the transition of Plasmodium-infected liver cells to the merozoite stage, a key development phase, 15 hours in advance. We collected and analyzed hourly imaging data over a span of at least 38 hours from 400 sequences, encompassing 502 parasites. Our method was compared to human annotations to validate its efficacy. Performance metrics, including the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, were evaluated on an independent test dataset. The outcomes revealed an AUC of 0.873, a sensitivity of 84.6%, and a specificity of 83.3%, underscoring the potential of our CNN-based framework to predict liver stage development of P. berghei. These findings not only demonstrate the feasibility of our methodology but also could potentially contribute to the broader understanding of parasite biology.

4.
IEEE Trans Med Imaging ; PP, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38640052

ABSTRACT

In Ultrasound Localization Microscopy (ULM), achieving high-resolution images relies on the precise localization of contrast agent particles across a series of beamformed frames. However, our study uncovers an enormous potential: The process of delay-and-sum beamforming leads to an irreversible reduction of Radio-Frequency (RF) channel data, while its implications for localization remain largely unexplored. The rich contextual information embedded within RF wavefronts, including their hyperbolic shape and phase, offers great promise for guiding Deep Neural Networks (DNNs) in challenging localization scenarios. To fully exploit this data, we propose to directly localize scatterers in RF channel data. Our approach involves a custom super-resolution DNN using learned feature channel shuffling, non-maximum suppression, and a semi-global convolutional block for reliable and accurate wavefront localization. Additionally, we introduce a geometric point transformation that facilitates seamless mapping to the B-mode coordinate space. To understand the impact of beamforming on ULM, we validate the effectiveness of our method by conducting an extensive comparison with State-Of-The-Art (SOTA) techniques. We present the inaugural in vivo results from a wavefront-localizing DNN, highlighting its real-world practicality. Our findings show that RF-ULM bridges the domain shift between synthetic and real datasets, offering a considerable advantage in terms of precision and complexity. To enable the broader research community to benefit from our findings, our code and the associated SOTA methods are made available at https://github.com/hahnec/rf-ulm.

5.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564203

ABSTRACT

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images of the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image data set was split into a training set and an independent test set following an 80% to 20% ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained with the training set and evaluated with the test set. Results: A total of 2489 UWF images were included in the dataset, resulting in a training set size of 2008 UWF images and a test set size of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the testing set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment, and an AUC of 0.913 for retinal breaks. Conclusions: A deep learning system to detect retinal breaks and retinal detachment using UWF images is feasible and has good specificity. This is relevant for clinical routine, as there can be a high rate of missed breaks in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to diagnosing peripheral retinal breaks on UWF fundus images in clinical routine.
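
The AUC figures reported here can be illustrated with the rank-based (Mann-Whitney) formulation of the ROC area: the probability that a randomly chosen positive case outscores a randomly chosen negative one. The labels and scores below are hypothetical stand-ins, not data from the study:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: fraction of positive/negative
    pairs where the positive receives the higher score (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for six UWF images (1 = lesion present)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(labels, scores))  # 8 of 9 pairs ranked correctly -> 0.888...
```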


Subject(s)
Deep Learning , Retinal Detachment , Retinal Perforations , Humans , Retinal Detachment/diagnosis , Artificial Intelligence , Photography
6.
Sci Data ; 11(1): 373, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609405

ABSTRACT

In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operation room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of the annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.


Subject(s)
Cataract Extraction , Cataract , Deep Learning , Video Recording , Humans , Benchmarking , Neural Networks, Computer , Cataract Extraction/methods
7.
Ophthalmologica ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38555632

ABSTRACT

INTRODUCTION: The aim of this study is to investigate the role of an artificial intelligence (AI)-developed OCT program to predict the clinical course of central serous chorioretinopathy (CSC) based on baseline pigment epithelium detachment (PED) features. METHODS: Single-center, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were recruited, and OCTs were analyzed by an AI-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland), providing automatic detection and volumetric quantification of PEDs. Flat irregular PED presence was annotated manually and afterwards measured automatically by the AI program. RESULTS: 115 eyes of 101 patients with CSC were included, of which 70 were diagnosed with chronic CSC and 45 with acute CSC. Patients with baseline presence of foveal flat PEDs and multiple flat foveal and extrafoveal PEDs had a higher chance of developing the chronic form. AI-based volumetric analysis revealed no significant differences between the groups. CONCLUSIONS: While more evidence is needed to confirm the effectiveness of AI-based PED quantitative analysis, this study highlights the significance of identifying flat irregular PEDs at the earliest stage possible in patients with CSC, to optimize patient management and long-term visual outcomes.

8.
Int J Comput Assist Radiol Surg ; 19(5): 851-859, 2024 May.
Article in English | MEDLINE | ID: mdl-38189905

ABSTRACT

PURPOSE: Semantic segmentation plays a pivotal role in many applications related to medical image and video analysis. However, designing a neural network architecture for medical image and surgical video segmentation is challenging due to the diverse features of relevant classes, including heterogeneity, deformability, transparency, blunt boundaries, and various distortions. We propose a network architecture, DeepPyramid+, which addresses diverse challenges encountered in medical image and surgical video segmentation. METHODS: The proposed DeepPyramid+ incorporates two major modules, namely "Pyramid View Fusion" (PVF) and "Deformable Pyramid Reception" (DPR), to address the outlined challenges. PVF replicates a deduction process within the neural network, aligning with the human visual system, thereby enhancing the representation of relative information at each pixel position. Complementarily, DPR introduces shape- and scale-adaptive feature extraction techniques using dilated deformable convolutions, enhancing accuracy and robustness in handling heterogeneous classes and deformable shapes. RESULTS: Extensive experiments conducted on diverse datasets, including endometriosis videos, MRI images, OCT scans, and cataract and laparoscopy videos, demonstrate the effectiveness of DeepPyramid+ in handling various challenges such as shape and scale variation, reflection, and blur degradation. DeepPyramid+ demonstrates significant improvements in segmentation performance, achieving up to a 3.65% increase in Dice coefficient for intra-domain segmentation and up to a 17% increase in Dice coefficient for cross-domain segmentation. CONCLUSIONS: DeepPyramid+ consistently outperforms state-of-the-art networks across diverse modalities considering different backbone networks, showcasing its versatility. 
Accordingly, DeepPyramid+ emerges as a robust and effective solution, successfully overcoming the intricate challenges associated with relevant content segmentation in medical images and surgical videos. Its consistent performance and adaptability indicate its potential to enhance precision in computerized medical image and surgical video analysis applications.


Subject(s)
Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Video Recording , Magnetic Resonance Imaging/methods , Tomography, Optical Coherence/methods , Female , Laparoscopy/methods , Algorithms
9.
Retina ; 44(2): 316-323, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37883530

ABSTRACT

PURPOSE: To identify optical coherence tomography (OCT) features to predict the course of central serous chorioretinopathy (CSC) with an artificial intelligence-based program. METHODS: Multicenter, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were enrolled. Baseline OCTs were examined by an artificial intelligence-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). Through this platform, automated retinal layer thicknesses and volumes, including intraretinal and subretinal fluid, and pigment epithelium detachment were measured. Baseline OCT features were compared between acute CSC and chronic CSC patients. RESULTS: One hundred and sixty eyes of 144 patients with CSC were enrolled, of which 100 had chronic CSC and 60 acute CSC. Retinal layer analysis of baseline OCT scans showed that the inner nuclear layer, the outer nuclear layer, and the photoreceptor-retinal pigmented epithelium complex were significantly thicker at baseline in eyes with acute CSC in comparison with those with chronic CSC (P < 0.001). Similarly, the choriocapillaris, choroidal stroma, and retinal thickness (RT) were thicker in acute CSC than chronic CSC eyes (P = 0.001). Volume analysis revealed greater average subretinal fluid volumes in the acute CSC group in comparison with chronic CSC (P = 0.041). CONCLUSION: Optical coherence tomography features may be helpful to predict the clinical course of CSC. The baseline presence of increased thickness in the outer retinal layers, choriocapillaris and choroidal stroma, and subretinal fluid volume seems to be associated with an acute course of the disease.


Subject(s)
Central Serous Chorioretinopathy , Humans , Central Serous Chorioretinopathy/diagnosis , Tomography, Optical Coherence/methods , Retrospective Studies , Artificial Intelligence , Retina , Fluorescein Angiography
10.
Sci Rep ; 13(1): 19667, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37952011

ABSTRACT

Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically locates biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, only requiring B-scan-level presence annotations. We trained a neural network using 22,723 OCT B-Scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output with biologically plausible solutions. The method was tested on a set of OCT volumes with 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information in the training process. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.
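
Localizing a marker to an ETDRS ring reduces to thresholding its radial distance from the fovea center against the standard grid diameters of 1, 3, and 6 mm. The function below is our illustrative sketch of that mapping under those standard dimensions, not the authors' code:

```python
import math

def etdrs_ring(x_mm, y_mm):
    """Assign a retinal location (mm offsets from the fovea center) to an
    ETDRS region, using the standard 1/3/6 mm grid diameters."""
    r = math.hypot(x_mm, y_mm)          # radial distance from the fovea center
    if r <= 0.5:
        return "central subfield"       # 1 mm diameter disc
    if r <= 1.5:
        return "inner ring"             # 1-3 mm diameter annulus
    if r <= 3.0:
        return "outer ring"             # 3-6 mm diameter annulus
    return "outside grid"

print(etdrs_ring(0.2, 0.1))  # central subfield
print(etdrs_ring(1.0, 0.5))  # inner ring
print(etdrs_ring(0.0, 2.0))  # outer ring
```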


Subject(s)
Diabetic Retinopathy , Macular Degeneration , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Tomography, Optical Coherence/methods , Macular Degeneration/diagnostic imaging , Biomarkers
11.
Eur J Radiol ; 167: 111047, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37690351

ABSTRACT

PURPOSE: To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline. METHOD: A dataset of 200 liver MRI with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for Couinaud liver segments, while portal and hepatic veins were labeled separately. A convolutional neural network was trained using 170 liver MRI for training and 30 for evaluation. Liver segmental volumes without liver vessels were retrieved, and LSVR was calculated as the liver segmental volumes I-III divided by the liver segmental volumes IV-VIII. LSVR was compared with the expert manual LSVR calculation and with the LSVR calculated on CT scans in 30 patients with CT and MRI within 6 months. RESULTS: The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The calculated mean LSVR with liver MRI unseen by the model was 0.32 ± 0.14, as compared with a manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. A comparable LSVR of 0.35 ± 0.14 with an MAE of 0.04 resulted from the LSVR retrieved from the CT scans. The automated LSVR showed significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and the CT LSVR (Spearman r = 0.95, p < 0.001). CONCLUSIONS: A convolutional neural network allowed for accurate automated liver segmental volume quantification and calculation of LSVR based on a non-contrast T1-vibe Dixon sequence.
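
The LSVR definition above (volume of segments I-III divided by the volume of segments IV-VIII) translates directly into code. The per-segment volumes below are hypothetical, chosen only to illustrate the calculation:

```python
def lsvr(segment_volumes_ml):
    """Liver segmental volume ratio: volume of Couinaud segments I-III
    divided by the volume of segments IV-VIII (vessels already excluded)."""
    left = sum(segment_volumes_ml[s] for s in ("I", "II", "III"))
    right = sum(segment_volumes_ml[s] for s in ("IVa", "IVb", "V", "VI", "VII", "VIII"))
    return left / right

# Hypothetical per-segment volumes (mL) from an automated segmentation
volumes = {"I": 30, "II": 80, "III": 90, "IVa": 70, "IVb": 60,
           "V": 140, "VI": 120, "VII": 130, "VIII": 105}
print(round(lsvr(volumes), 2))  # 200 / 625 = 0.32
```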


Subject(s)
Deep Learning , Humans , Liver/diagnostic imaging , Radiography , Radionuclide Imaging , Magnetic Resonance Imaging
12.
Sci Rep ; 13(1): 16417, 2023 09 29.
Article in English | MEDLINE | ID: mdl-37775538

ABSTRACT

Polarimetry is an optical characterization technique capable of analyzing the polarization state of light reflected by materials and biological samples. In this study, we investigate the potential of Müller matrix polarimetry (MMP) to analyze fresh pancreatic tissue samples. Due to its highly heterogeneous appearance, pancreatic tissue type differentiation is a complex task. Furthermore, its challenging location in the body makes direct imaging difficult. However, accurate and reliable methods for diagnosing pancreatic diseases are critical for improving patient outcomes. To this end, we measured the Müller matrices of ex-vivo unfixed human pancreatic tissue and leveraged the feature-learning capabilities of a machine-learning model to derive an optimized data representation that minimizes normal-abnormal classification error. We show experimentally that our approach accurately differentiates between normal and abnormal pancreatic tissue. This is, to our knowledge, the first study to use ex-vivo unfixed human pancreatic tissue combined with feature-learning from raw Müller matrix readings for this purpose.


Subject(s)
Diagnostic Imaging , Humans , Diagnostic Imaging/methods , Spectrum Analysis
13.
Int J Comput Assist Radiol Surg ; 18(6): 1085-1091, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37133678

ABSTRACT

PURPOSE: A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe. METHODS: This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes. RESULTS: Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions. CONCLUSION: The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.
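
A Mahalanobis-distance OoD detector of the kind described can be sketched in a few lines: fit the mean and covariance of in-distribution features, then score new samples by their squared distance to that distribution. The synthetic features below are stand-ins for iiOCT-derived features, not data from the study, and the detector is a generic sketch rather than the MahaAD implementation:

```python
import numpy as np

def fit_mahalanobis(train_feats):
    """Fit mean and (regularized) inverse covariance of in-distribution features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, cov_inv):
    """Squared Mahalanobis distance of a sample to the training distribution;
    large values flag likely out-of-distribution (e.g. corrupted) inputs."""
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))     # stand-in for in-distribution features
mu, cov_inv = fit_mahalanobis(train)

in_dist   = mahalanobis_score(np.zeros(4), mu, cov_inv)
corrupted = mahalanobis_score(np.full(4, 8.0), mu, cov_inv)
print(in_dist < corrupted)  # the corrupted sample scores far higher -> rejected
```

A rejection threshold on this score (e.g. a high percentile of training scores) would then gate the downstream distance-estimation model.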


Subject(s)
Microsurgery , Retina , Animals , Swine , Microsurgery/methods , Retina/diagnostic imaging , Retina/surgery , Machine Learning , Tomography, Optical Coherence/methods
14.
Med Image Anal ; 87: 102822, 2023 07.
Article in English | MEDLINE | ID: mdl-37182321

ABSTRACT

Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration in clinical practice. Explainability and trust are viewed as important aspects of modern methods, necessary for their widespread use in clinical communities. As such, validation of machine learning models represents an important aspect, and yet most methods are only validated in a limited way. In this work, we focus on providing a richer and more appropriate validation approach for highly powerful Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Algorithms , Machine Learning
15.
Eur Phys J Plus ; 138(5): 391, 2023.
Article in English | MEDLINE | ID: mdl-37192839

ABSTRACT

Medical imaging has been intensively employed in screening, diagnosis, and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted, and current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficient and complementary value of medical imaging was recognized at the beginning of the pandemic, when facing an unknown infectious disease and a lack of sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for long-lasting post-COVID-19 syndrome theranostics. A critical concern for the application of medical imaging is the increased radiation burden, particularly when medical imaging is used for screening and rapid containment purposes. Emerging artificial intelligence (AI) technology provides the opportunity to reduce the radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; its retrospectively identified potential during COVID-19 may still carry positive implications for future public health.

16.
Int J Comput Assist Radiol Surg ; 18(7): 1185-1192, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37184768

ABSTRACT

PURPOSE: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems in endoscopic surgeries. For this, tracking the endoscope pose is a key component, but remains challenging due to illumination conditions, deforming tissues and the breathing motion of organs. METHOD: We propose a solution for stereo endoscopes that estimates depth and optical flow to minimize two geometric losses for camera pose estimation. Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content. To do so, we train a Deep Declarative Network to take advantage of the expressiveness of deep learning and the robustness of a novel geometric-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which includes a wider spectrum of typically observed surgical settings. RESULTS: Our method outperforms state-of-the-art methods on average and more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observed that our proposed weight mappings attenuate the contribution of pixels on ambiguous regions of the images, such as deforming tissues. CONCLUSION: We demonstrate the effectiveness of our solution to robustly estimate the camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks like simultaneous localization and mapping (SLAM) or 3D reconstruction, therefore advancing surgical scene understanding in minimally invasive surgery.


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Endoscopy/methods , Minimally Invasive Surgical Procedures/methods , Endoscopes
17.
EClinicalMedicine ; 55: 101745, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36457646

ABSTRACT

Background: Diagnosing heparin-induced thrombocytopenia (HIT) at the bedside remains challenging, exposing a significant number of patients at risk of delayed diagnosis or overtreatment. We hypothesized that machine-learning algorithms could be utilized to develop a more accurate and user-friendly diagnostic tool that integrates diverse clinical and laboratory information and accounts for complex interactions. Methods: We conducted a prospective cohort study including 1393 patients with suspected HIT between 2018 and 2021 from 10 study centers. Detailed clinical information and laboratory data were collected, and various immunoassays were conducted. The washed platelet heparin-induced platelet activation assay (HIPA) served as the reference standard. Findings: HIPA diagnosed HIT in 119 patients (prevalence 8.5%). The feature selection process in the training dataset (75% of patients) yielded the following predictor variables: (1) immunoassay test result, (2) platelet nadir, (3) unfractionated heparin use, (4) CRP, (5) timing of thrombocytopenia, and (6) other causes of thrombocytopenia. The best performing models were a support vector machine in case of the chemiluminescent immunoassay (CLIA) and the ELISA, as well as a gradient boosting machine in particle-gel immunoassay (PaGIA). In the validation dataset (25% of patients), the AUROC of all models was 0.99 (95% CI: 0.97, 1.00). Compared to the currently recommended diagnostic algorithm (4Ts score, immunoassay), the numbers of false-negative patients were reduced from 12 to 6 (-50.0%; ELISA), 9 to 3 (-66.7%, PaGIA) and 14 to 5 (-64.3%; CLIA). The numbers of false-positive individuals were reduced from 87 to 61 (-29.8%; ELISA), 200 to 63 (-68.5%; PaGIA) and increased from 50 to 63 (+29.0%) for the CLIA. Interpretation: Our user-friendly machine-learning algorithm for the diagnosis of HIT (https://toradi-hit.org) was substantially more accurate than the currently recommended diagnostic algorithm. 
It has the potential to reduce delayed diagnosis and overtreatment in clinical practice. Future studies should validate this model in wider settings. Funding: Swiss National Science Foundation (SNSF) and International Society on Thrombosis and Haemostasis (ISTH).

18.
Sci Rep ; 12(1): 22059, 2022 12 21.
Article in English | MEDLINE | ID: mdl-36543852

ABSTRACT

We evaluated the effectiveness of automated segmentation of the liver and its vessels with a convolutional neural network on non-contrast T1 vibe Dixon acquisitions. A dataset of non-contrast T1 vibe Dixon liver magnetic resonance images was labelled slice-by-slice for the outer liver border, portal, and hepatic veins by an expert. A 3D U-Net convolutional neural network was trained with different combinations of Dixon in-phase, opposed-phase, water, and fat reconstructions. The neural network trained with the single-modal in-phase reconstructions achieved a high performance for liver parenchyma (Dice 0.936 ± 0.02), portal veins (0.634 ± 0.09), and hepatic veins (0.532 ± 0.12) segmentation. No benefit of using multi-modal input was observed (p = 1.0 for all experiments), combining in-phase, opposed-phase, fat, and water reconstruction. Accuracy for differentiation between portal and hepatic veins was 99% for portal veins and 97% for hepatic veins in the central region and slightly lower in the peripheral region (91% for portal veins, 80% for hepatic veins). In conclusion, deep learning-based automated segmentation of the liver and its vessels on non-contrast T1 vibe Dixon was highly effective. The single-modal in-phase input achieved the best performance in segmentation and differentiation between portal and hepatic veins.
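
The Dice score used here compares two binary masks as twice their overlap divided by the sum of their sizes. A minimal sketch with hypothetical flattened masks (not the study's segmentations):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat lists of 0/1):
    2*|A intersect B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Toy flattened masks standing in for expert vs. network liver segmentations
expert  = [1, 1, 1, 1, 0, 0, 0, 0]
network = [1, 1, 1, 0, 1, 0, 0, 0]
print(dice(expert, network))  # 2*3 / (4+4) = 0.75
```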


Subject(s)
Liver , Neural Networks, Computer , Liver/diagnostic imaging , Magnetic Resonance Imaging/methods , Portal Vein/diagnostic imaging , Water , Image Processing, Computer-Assisted/methods
19.
Nat Commun ; 13(1): 5882, 2022 10 06.
Article in English | MEDLINE | ID: mdl-36202816

ABSTRACT

Despite the potential of deep learning (DL)-based methods to substitute CT-based PET attenuation and scatter correction for CT-free PET imaging, a critical bottleneck is their limited capability in handling the large heterogeneity of tracers and scanners in PET imaging. This study employs a simple way to integrate domain knowledge in DL for CT-free PET imaging. In contrast to conventional direct DL methods, we simplify the complex problem by a domain decomposition so that the learning of anatomy-dependent attenuation correction can be achieved robustly in a low-frequency domain, while the original anatomy-independent high-frequency texture is preserved during processing. Even when trained on a single tracer and scanner, the effectiveness and robustness of our proposed approach are confirmed in tests of various external imaging tracers on different scanners. The robust, generalizable, and transparent DL development may enhance the potential of clinical translation.
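The abstract does not specify the exact decomposition operators, but the idea of splitting an image into a smooth low-frequency component (where the anatomy-dependent correction is learned) and a high-frequency texture residual (passed through unchanged) can be illustrated with a Gaussian low-pass split. The Gaussian filter, the sigma, and the placeholder "correction" below are assumptions for illustration, not the paper's method.

```python
# Sketch of a low/high-frequency domain decomposition: correct only the
# smooth component, preserve the high-frequency texture, then recombine.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
pet = rng.random((64, 64))           # stand-in for a PET slice

low = gaussian_filter(pet, sigma=4)  # anatomy-scale, smooth component
high = pet - low                     # anatomy-independent texture residual

# A learned correction would act only on `low`; a scalar gain stands in here.
corrected_low = low * 1.1            # placeholder for a network's output
recon = corrected_low + high         # high-frequency texture preserved exactly

# The decomposition is exact: low + high reconstructs the input.
assert np.allclose(low + high, pet)
```

Because the residual is carried through untouched, any error the learned component makes is confined to the smooth, low-frequency part of the image, which is what makes the scheme robust across tracers and scanners.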


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods
20.
Ophthalmologica ; 245(6): 516-527, 2022.
Article in English | MEDLINE | ID: mdl-36215958

ABSTRACT

INTRODUCTION: In this retrospective cohort study, we evaluated the performance of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME), and analyzed its predictions. METHODS: A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with the Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading center graders (ground truth). The performance of an already published AI algorithm to detect IRF and SRF separately, and of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. An analysis of the sources of disagreement between annotation and prediction and their relationship to central retinal thickness was performed. We computed the mean areas under the curves (AUC) and under the precision-recall curves (AP), accuracy, sensitivity, specificity, and precision. RESULTS: The AUC for IRF was 0.92 and 0.98, for SRF 0.98 and 0.99, in the AMD and DME cohort, respectively. The AP for IRF was 0.89 and 1.00, for SRF 0.97 and 0.93, in the AMD and DME cohort, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, 0.84, and 0.93, 0.95, 0.93, and for SRF 0.93, 0.93, 0.93, and 0.95, 0.95, 0.95 in the AMD and DME cohort, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, and 0.90 and 0.95, 0.88, and 0.93, in the AMD and DME cohort, respectively. False positives were present when retinal shadow artifacts and strong retinal deformation were present. False negatives were due to small hyporeflective areas in combination with poor image quality. 
The combined detector correctly predicted more OCT volumes than the single detectors for IRF and SRF: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. DISCUSSION/CONCLUSION: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining single detectors provides better fluid detection accuracy than considering the single detectors separately. The observed independence of the single detectors suggests that each detector learned features particular to IRF and SRF.
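The combination of per-biomarker detectors into an "any fluid" detector, and the accuracy/sensitivity/specificity reported above, can be sketched as follows. The detector probabilities, the noise model, and the 0.5 decision threshold are illustrative assumptions, not the published algorithm.

```python
# Sketch: combine single IRF and SRF detectors by logical OR into an
# "any fluid" detector, then compute confusion-matrix metrics.
import numpy as np

rng = np.random.default_rng(0)
n = 200
truth_irf = rng.random(n) < 0.4
truth_srf = rng.random(n) < 0.3
truth_any = truth_irf | truth_srf    # volume has fluid if either is present

# Hypothetical detector scores: true label plus noise, clipped to [0, 1].
p_irf = np.clip(truth_irf + rng.normal(scale=0.3, size=n), 0, 1)
p_srf = np.clip(truth_srf + rng.normal(scale=0.3, size=n), 0, 1)

pred_irf = p_irf >= 0.5
pred_srf = p_srf >= 0.5
pred_any = pred_irf | pred_srf       # combined detector fires if either fires

def metrics(pred, truth):
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / len(truth)
    sensitivity = tp / (tp + fn)     # recall on fluid-positive volumes
    specificity = tn / (tn + fp)     # correct rejection of fluid-free volumes
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(pred_any, truth_any)
print(f"any-fluid accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```

The OR-combination explains why the combined detector can outperform either single detector on volume-level "any fluid" calls: a volume is counted correct if at least one of two largely independent detectors fires on its respective biomarker.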


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Macular Degeneration , Macular Edema , Wet Macular Degeneration , Humans , Macular Edema/diagnosis , Diabetic Retinopathy/diagnosis , Tomography, Optical Coherence/methods , Subretinal Fluid , Retrospective Studies , Artificial Intelligence , Macular Degeneration/diagnosis , Angiogenesis Inhibitors