Results 1 - 20 of 51,124
1.
Radiat Oncol ; 19(1): 69, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822385

ABSTRACT

BACKGROUND: Multiple artificial intelligence (AI)-based autocontouring solutions have become available, each promising high accuracy and time savings compared with manual contouring. Before implementing AI-driven autocontouring into clinical practice, three commercially available CT-based solutions were evaluated. MATERIALS AND METHODS: The following solutions were evaluated in this work: MIM-ProtégéAI+ (MIM), Radformation-AutoContour (RAD), and Siemens-DirectORGANS (SIE). Sixteen organs were identified that could be contoured by all solutions. For each organ, ten patients that had manually generated contours approved by the treating physician (AP) were identified, totaling forty-seven different patients. CT scans in the supine position were acquired using a Siemens-SOMATOMgo 64-slice helical scanner and used to generate autocontours. Physician scoring of contour accuracy was performed by at least three physicians using a five-point Likert scale. Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean distance to agreement (MDA) were calculated comparing AI contours to "ground truth" AP contours. RESULTS: The average physician score ranged from 1.00, indicating that all physicians rated the contour as clinically acceptable with no modifications necessary, to 3.70, indicating that changes are required and that modifying the structures would likely take as long as or longer than manually generating the contour. When averaged across all sixteen structures, the AP contours had a physician score of 2.02, MIM 2.07, RAD 1.96 and SIE 1.99. DSC ranged from 0.37 to 0.98, with 41/48 (85.4%) contours having an average DSC ≥ 0.7. Average HD ranged from 2.9 to 43.3 mm. Average MDA ranged from 0.6 to 26.1 mm. CONCLUSIONS: The results of our comparison demonstrate that each vendor's AI contouring solution exhibited capabilities similar to those of manual contouring.
There were a small number of cases where unusual anatomy led to poor scores with one or more of the solutions. The consistency and comparable performance of all three vendors' solutions suggest that radiation oncology centers can confidently choose any of the evaluated solutions based on individual preferences, resource availability, and compatibility with their existing clinical workflows. Although AI-based contouring may result in high-quality contours for the majority of patients, a minority of patients require manual contouring and more in-depth physician review.
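The overlap metric central to this comparison can be sketched generically; below is a minimal Dice similarity coefficient on toy binary masks, for illustration only and not the study's evaluation code:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D masks: the "AI" contour covers most of the "ground truth" contour.
truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1          # 36 voxels
pred = np.zeros((10, 10), dtype=int)
pred[3:8, 2:8] = 1           # 30 voxels, all inside truth

dsc = dice_coefficient(truth, pred)
print(round(dsc, 3))  # 2*30 / (36+30) = 0.909
```

A DSC of 1.0 means perfect overlap; the study's 0.7 threshold is a common rule of thumb for acceptable agreement.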


Subject(s)
Artificial Intelligence , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed , Humans , Radiotherapy Planning, Computer-Assisted/methods , Organs at Risk/radiation effects , Algorithms , Image Processing, Computer-Assisted/methods
2.
Hum Brain Mapp ; 45(8): e26718, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38825985

ABSTRACT

The early stages of human development are increasingly acknowledged as pivotal in laying the groundwork for subsequent behavioral and cognitive development. Spatiotemporal (4D) brain functional atlases are important in elucidating the development of human brain functions. However, the scarcity of such atlases for early life stages stems from two primary challenges: (1) the significant noise in functional magnetic resonance imaging (fMRI) that complicates the generation of high-quality atlases for each age group, and (2) the rapid and complex changes in the early human brain that hinder the maintenance of temporal consistency in 4D atlases. This study tackles these challenges by integrating low-rank tensor learning with spectral embedding, thereby proposing a novel, data-driven 4D functional atlas generation framework based on spectral functional network learning (SFNL). This method utilizes low-rank tensor learning to capture common functional connectivity (FC) patterns across different ages, thus optimizing FCs for each age group to improve the temporal consistency of functional networks. Incorporating spectral embedding aids in mitigating potential noise in FC networks derived from fMRI data by reconstructing networks in the spectral space. Utilizing SFNL-generated functional networks enables the creation of consistent, high-quality spatiotemporal functional atlases. The framework was applied to the developing Human Connectome Project (dHCP) dataset, generating the first neonatal 4D functional atlases with fine-grained temporal and spatial resolutions. Experimental evaluations focusing on functional homogeneity, reliability, and temporal consistency demonstrated the superiority of our framework compared to existing methods for constructing 4D atlases.
Additionally, network analysis experiments, including individual identification, functional systems development, and local efficiency assessments, further corroborate the efficacy and robustness of the generated atlases. The 4D atlases and related codes will be made publicly accessible (https://github.com/zhaoyunxi/neonate-atlases).
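The spectral-space reconstruction used to suppress FC noise can be illustrated with a generic low-rank eigendecomposition sketch; this stand-in truncates a symmetric connectivity matrix to its leading components and is not the SFNL implementation:

```python
import numpy as np

def spectral_denoise(fc: np.ndarray, k: int) -> np.ndarray:
    """Reconstruct a symmetric FC matrix from its k leading eigencomponents."""
    vals, vecs = np.linalg.eigh(fc)            # eigenvalues in ascending order
    idx = np.argsort(np.abs(vals))[::-1][:k]   # keep k largest-magnitude components
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3))
clean = low_rank @ low_rank.T                  # rank-3 "true" network
noisy = clean + 0.01 * rng.standard_normal((20, 20))
noisy = (noisy + noisy.T) / 2                  # keep the matrix symmetric

recon = spectral_denoise(noisy, k=3)
err_noisy = np.linalg.norm(noisy - clean)
err_recon = np.linalg.norm(recon - clean)
print(err_recon < err_noisy)  # truncation removes most of the noise
```

The key property exploited here is that noise spreads over all eigencomponents while the underlying network structure concentrates in a few.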


Subject(s)
Atlases as Topic , Connectome , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Infant, Newborn , Connectome/methods , Male , Female , Brain/diagnostic imaging , Brain/physiology , Brain/growth & development , Infant , Image Processing, Computer-Assisted/methods , Machine Learning , Nerve Net/diagnostic imaging , Nerve Net/physiology , Nerve Net/growth & development
3.
J Vis Exp ; (207)2024 May 17.
Article in English | MEDLINE | ID: mdl-38829110

ABSTRACT

PyDesigner is a Python-based software package based on the original Diffusion parameter EStImation with Gibbs and NoisE Removal (DESIGNER) pipeline (Dv1) for dMRI preprocessing and tensor estimation. This software is openly provided for non-commercial research and may not be used for clinical care. PyDesigner combines tools from FSL and MRtrix3 to perform denoising, Gibbs ringing correction, eddy current motion correction, brain masking, image smoothing, and Rician bias correction to optimize the estimation of multiple diffusion measures. It can be used across platforms on Windows, Mac, and Linux to accurately derive commonly used metrics from DKI, DTI, WMTI, FBI, and FBWM datasets as well as tractography ODFs and .fib files. It is also file-format agnostic, accepting inputs in the form of .nii, .nii.gz, .mif, and dicom format. User-friendly and easy to install, this software also outputs quality control metrics illustrating signal-to-noise ratio graphs, outlier voxels, and head motion to evaluate data integrity. Additionally, this dMRI processing pipeline supports multiple echo-time dataset processing and features pipeline customization, allowing the user to specify which processes are employed and which outputs are produced to meet a variety of user needs.


Subject(s)
Diffusion Magnetic Resonance Imaging , Software , Humans , Diffusion Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging
4.
PLoS One ; 19(6): e0298698, 2024.
Article in English | MEDLINE | ID: mdl-38829850

ABSTRACT

With the accelerating development of technology, drone aerial imagery has gradually penetrated various industries. However, because drones fly at variable speeds, captured images can be shadowed, blurred, and obscured; and because drones fly at varying altitudes, target scales change, making small targets difficult to detect and identify. To solve these problems, an improved ASG-YOLOv5 model is proposed in this paper. First, this research proposes a dynamic contextual attention module, which uses feature scores to dynamically assign feature weights and output feature information through channel dimensions, improving the model's attention to small-target feature information and increasing the network's ability to extract contextual information. Second, this research designs a spatial gating filtering multi-directional weighted fusion module, which uses spatial filtering and weighted bidirectional fusion in the multi-scale fusion stage to improve the characterization of weak targets, reduce the interference of redundant information, and better adapt to the detection of weak targets in unmanned aerial vehicle remote sensing images. In addition, using the Normalized Wasserstein Distance together with the CIoU regression loss function, the similarity metric of the regression box is obtained by modeling the box as a Gaussian distribution. This smooths the positional differences of small targets and addresses their high sensitivity to positional deviation, effectively improving the model's detection accuracy for small targets. The model was trained and tested on the VisDrone2021 and AI-TOD datasets, and the NWPU-RESISC dataset was used for visual detection validation.
The experimental results show that ASG-YOLOv5 achieves a better detection effect on unmanned aerial vehicle remote sensing images, reaching 86 frames per second (FPS), which meets the requirement of real-time small-target detection, and adapts well to detecting weak and small targets in aerial image datasets. ASG-YOLOv5 outperforms many existing target detection methods, with a detection accuracy of 21.1% mAP; the mAP values improve by 2.9% and 1.4% over the YOLOv5 model on the two datasets, respectively. The project is available at https://github.com/woaini-shw/asg-yolov5.git.
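The Normalized Wasserstein Distance idea, modeling each box as a Gaussian and comparing distributions, has a closed form that can be sketched as follows; the constant `c` and the (cx, cy, w, h) box convention here are illustrative assumptions, not values from this paper:

```python
import math

def nwd(box_a, box_b, c: float = 12.8) -> float:
    """Normalized Wasserstein Distance between two (cx, cy, w, h) boxes.

    Each box is modelled as a 2D Gaussian N([cx, cy], diag(w^2/4, h^2/4));
    the 2-Wasserstein distance between such Gaussians has a closed form.
    c is a dataset-dependent normalizing constant (an assumed default here).
    """
    (cxa, cya, wa, ha), (cxb, cyb, wb, hb) = box_a, box_b
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

same = nwd((10, 10, 4, 4), (10, 10, 4, 4))
shifted = nwd((10, 10, 4, 4), (12, 10, 4, 4))
print(same, shifted)  # identical boxes score 1.0; a small shift decays smoothly
```

Unlike IoU, this score stays informative for tiny boxes with zero overlap, which is why it smooths the positional differences of small targets.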


Subject(s)
Remote Sensing Technology , Unmanned Aerial Devices , Remote Sensing Technology/methods , Remote Sensing Technology/instrumentation , Algorithms , Image Processing, Computer-Assisted/methods
5.
PLoS One ; 19(6): e0304789, 2024.
Article in English | MEDLINE | ID: mdl-38829858

ABSTRACT

Malaria is a deadly disease that is transmitted through mosquito bites. Microscopists use a microscope to examine thin blood smears at high magnification (1000x) to identify parasites in red blood cells (RBCs). Estimating parasitemia is essential in determining the severity of the Plasmodium falciparum infection and guiding treatment. However, this process is time-consuming, labor-intensive, and subject to variation, which can directly affect patient outcomes. In this retrospective study, we compared three methods for measuring parasitemia from a collection of anonymized thin blood smears of patients with Plasmodium falciparum obtained from the Clinical Department of Parasitology-Mycology, National Reference Center (NRC) for Malaria in Paris, France. We first analyzed the impact of the number of field images on parasitemia count using our framework, MALARIS, which features a top-classifier convolutional neural network (CNN). Additionally, we studied the variation between different microscopists using two manual techniques to demonstrate the need for a reliable and reproducible automated system. Finally, we included thin blood smear images from an additional 102 patients to compare the performance and correlation of our system with manual microscopy and flow cytometry. Our results showed strong correlations between the three methods, with a coefficient of determination between 0.87 and 0.92.
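The coefficient of determination reported above can be computed with a short generic sketch; the parasitemia values below are hypothetical, not study data:

```python
import numpy as np

def r_squared(y_true, y_pred) -> float:
    """Coefficient of determination between two parasitemia measurement series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical parasitemia (%) from manual microscopy vs. an automated count.
manual = [0.5, 1.2, 3.4, 7.8, 12.0]
auto = [0.6, 1.1, 3.0, 8.1, 11.5]
print(round(r_squared(manual, auto), 3))  # ≈ 0.994
```

Values near 1.0, as in the study's 0.87 to 0.92 range, indicate that one method's counts track the other's closely.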


Subject(s)
Malaria, Falciparum , Microscopy , Parasitemia , Plasmodium falciparum , Humans , Plasmodium falciparum/isolation & purification , Parasitemia/diagnosis , Parasitemia/blood , Parasitemia/parasitology , Malaria, Falciparum/diagnosis , Malaria, Falciparum/blood , Malaria, Falciparum/parasitology , Retrospective Studies , Microscopy/methods , Erythrocytes/parasitology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Flow Cytometry/methods
6.
PLoS One ; 19(6): e0304716, 2024.
Article in English | MEDLINE | ID: mdl-38829872

ABSTRACT

Optical microscopy videos enable experts to analyze the motion of several biological elements. Particularly in blood samples infected with Trypanosoma cruzi (T. cruzi), microscopy videos reveal a dynamic scenario where the parasites' motions are conspicuous. While parasites have self-motion, cells are inert and may undergo some displacement under dynamic events, such as fluid flow and microscope focus adjustments. This paper analyzes the trajectories of T. cruzi and blood cells to discriminate between these elements by identifying the following motion patterns: collateral, fluctuating, and pan-tilt-zoom (PTZ). We consider two approaches: i) classification experiments for discrimination between parasites and cells; and ii) clustering experiments to identify the cell motion. We propose the trajectory step dispersion (TSD) descriptor, based on standard deviation, to characterize these elements, outperforming state-of-the-art descriptors. Our results confirm that motion is valuable in discriminating T. cruzi from the cells. Since the parasites perform the collateral motion, their trajectory steps tend to randomness. The cells may assume fluctuating motion following a homogeneous and directional path, or PTZ motion with trajectory steps in a restricted area. Thus, our findings may contribute to developing new computational tools focused on trajectory analysis, which can advance the study and medical diagnosis of Chagas disease.
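A standard-deviation-based step-dispersion descriptor in the spirit of TSD can be sketched as follows; this is an illustrative reading of the idea, not the paper's exact formula:

```python
import numpy as np

def trajectory_step_dispersion(points: np.ndarray) -> float:
    """Std-dev-based dispersion of the step vectors of a 2D trajectory.

    High dispersion suggests erratic, self-propelled motion (parasite);
    low dispersion suggests a cell drifting along a smooth path.
    """
    steps = np.diff(points, axis=0)            # per-frame displacement vectors
    return float(np.std(steps, axis=0).sum())  # dispersion over x and y steps

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
cell = np.stack([t * 0.5, t * 0.25], axis=1)           # straight, uniform drift
parasite = np.cumsum(rng.standard_normal((50, 2)), 0)  # random-walk motion

print(trajectory_step_dispersion(cell))      # ~0: every step is identical
print(trajectory_step_dispersion(parasite))  # large: erratic steps
```

A classifier or clustering step would then operate on such per-trajectory descriptors.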


Subject(s)
Microscopy, Video , Trypanosoma cruzi , Trypanosoma cruzi/physiology , Microscopy, Video/methods , Chagas Disease/parasitology , Humans , Image Processing, Computer-Assisted/methods
7.
PLoS One ; 19(6): e0300976, 2024.
Article in English | MEDLINE | ID: mdl-38829868

ABSTRACT

Multi-beam forward-looking sonar (MFLS) plays an important role in underwater detection. However, due to the complex underwater environment, unclear features, and susceptibility to noise interference, most forward-looking sonar systems have poor recognition performance, and research on MFLS for underwater target detection faces several challenges. Therefore, this study proposes innovative improvements to the YOLOv5 algorithm to address these issues. While maintaining the original YOLOv5 architecture, the improved model introduces transfer learning to overcome the limitation of scarce sonar image data. At the same time, by incorporating coordinate convolution, the improved model can extract features with rich positional information, significantly enhancing its ability to detect small underwater targets. Furthermore, to improve feature extraction from forward-looking sonar images, this study integrates attention mechanisms. This mechanism expands the receptive field of the model and optimizes the feature learning process by highlighting key details while suppressing irrelevant information. These improvements not only enhance the model's recognition accuracy for sonar images but also improve its applicability and generalization performance in different underwater environments. In response to the common problem of uneven training sample quality in forward-looking sonar imaging, this study makes a key improvement to the classic YOLOv5 algorithm: by adjusting the bounding box loss function of YOLOv5, the model's over-sensitivity to low-quality samples is reduced, thereby reducing the penalty on those samples.
After a series of comparative experiments, the newly proposed CCW-YOLOv5 algorithm achieved a detection accuracy of 85.3% mAP@0.5 in object detection, and its fastest inference speed tested on the local machine was 54 FPS, showing a significant improvement in performance over existing advanced algorithms.
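The coordinate-convolution idea, giving the network explicit positional information, amounts to appending coordinate channels to the feature map before convolving; a minimal sketch of that step (not the CCW-YOLOv5 code):

```python
import numpy as np

def add_coord_channels(feature_map: np.ndarray) -> np.ndarray:
    """Append normalised y/x coordinate channels to a (C, H, W) feature map,
    the core trick of coordinate convolution (CoordConv)."""
    c, h, w = feature_map.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feature_map, ys[None], xs[None]], axis=0)

fm = np.zeros((8, 16, 16), dtype=np.float32)
out = add_coord_channels(fm)
print(out.shape)  # (10, 16, 16): two extra channels carry position info
```

A subsequent convolution layer can then learn position-dependent filters, which helps localize small targets.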


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Sound
8.
Sci Rep ; 14(1): 12699, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830932

ABSTRACT

Medical image segmentation has made a significant contribution towards delivering affordable healthcare by facilitating the automatic identification of anatomical structures and other regions of interest. Although convolutional neural networks have become prominent in the field of medical image segmentation, they suffer from certain limitations. In this study, we present a reliable framework for producing performant outcomes for the segmentation of pathological structures in 2D medical images. Our framework consists of a novel deep learning architecture, called deep multi-level attention dilated residual neural network (MADR-Net), designed to improve the performance of medical image segmentation. MADR-Net uses a U-Net encoder/decoder backbone in combination with multi-level residual blocks and atrous spatial pyramid pooling. To improve the segmentation results, channel-spatial attention blocks were added in the skip connections to capture both global and local features, and the bottleneck layer was replaced with an ASPP block. Furthermore, we introduce a hybrid loss function that has an excellent convergence property and enhances the performance of the medical image segmentation task. We extensively validated the proposed MADR-Net on four typical yet challenging medical image segmentation tasks: (1) left ventricle, left atrium, and myocardial wall segmentation from echocardiogram images in the CAMUS dataset; (2) skin cancer segmentation from dermoscopy images in the ISIC 2017 dataset; (3) electron microscopy in the FIB-SEM dataset; and (4) fluid-attenuated inversion recovery abnormality from MR images in the LGG segmentation dataset. The proposed algorithm yielded significant results when compared to state-of-the-art architectures such as U-Net, Residual U-Net, and Attention U-Net.
The proposed MADR-Net consistently outperformed the classical U-Net, with relative improvements in Dice coefficient of 5.43%, 3.43%, and 3.92% for electron microscopy, dermoscopy, and MRI, respectively. The experimental results demonstrate superior performance on single- and multi-class datasets, and the proposed MADR-Net can be utilized as a baseline for the assessment of cross-dataset segmentation tasks.
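A common hybrid segmentation loss combines soft Dice with cross-entropy; the sketch below assumes that combination for illustration, since the abstract does not give the paper's exact hybrid formulation:

```python
import numpy as np

def hybrid_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Hybrid segmentation loss: soft Dice loss plus binary cross-entropy.

    Assumed combination for illustration: Dice handles class imbalance,
    BCE provides smooth per-pixel gradients.
    """
    pred = np.clip(pred, eps, 1.0 - eps)       # predicted foreground probabilities
    dice = (2.0 * (pred * target).sum() + eps) / (pred.sum() + target.sum() + eps)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((1.0 - dice) + bce)

target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
good = np.where(target == 1, 0.9, 0.1)   # confident and mostly correct
bad = np.where(target == 1, 0.1, 0.9)    # confidently wrong
print(hybrid_loss(good, target) < hybrid_loss(bad, target))  # True
```

In a training framework the same expression would be written with differentiable tensor operations so gradients can flow to the network.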


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Magnetic Resonance Imaging/methods
9.
Sci Rep ; 14(1): 12630, 2024 06 02.
Article in English | MEDLINE | ID: mdl-38824210

ABSTRACT

In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as varying the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. The deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate an agreement with manual contouring when applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating the robust performance of the developed segmentation model even when virtual imaging was used. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.


Subject(s)
Cone-Beam Computed Tomography , Phantoms, Imaging , Tooth , Humans , Tooth/diagnostic imaging , Tooth/anatomy & histology , Cone-Beam Computed Tomography/methods , Dentistry/methods , Image Processing, Computer-Assisted/methods , Deep Learning
10.
J Robot Surg ; 18(1): 237, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833204

ABSTRACT

A major obstacle in applying machine learning to medical fields is the disparity between the data distribution of the training images and the data encountered in clinics. This phenomenon can be explained by inconsistent acquisition techniques and large variations across the patient spectrum. The result is poor translation of the trained models to the clinic, which limits their implementation in medical practice. Patient-specific trained networks could provide a potential solution. Although patient-specific approaches are usually infeasible because of the expenses associated with on-the-fly labeling, the use of generative adversarial networks (GANs) enables this approach. This study proposes a patient-specific approach based on generative adversarial networks. In the presented training pipeline, the user trains a patient-specific segmentation network with extremely limited data, which is supplemented with artificial samples generated by generative adversarial models. This approach is demonstrated on endoscopic video data captured during fetoscopic laser coagulation, a procedure used for treating twin-to-twin transfusion syndrome by ablating the placental blood vessels. Compared to a standard deep learning segmentation approach, the pipeline achieved an intersection over union score of 0.60 using only 20 annotated images, whereas the standard approach required 100 images. Furthermore, training with 20 annotated images without the pipeline achieves an intersection over union score of 0.30, so incorporating the pipeline corresponds to a 100% increase in performance. In short, a pipeline using GANs generates artificial data that supplements the real data, allowing patient-specific training of a segmentation network.
We show that artificial images generated using GANs significantly improve performance in vessel segmentation and that training patient-specific models can be a viable solution to bring automated vessel segmentation to the clinic.
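The intersection-over-union score used to evaluate the vessel masks can be sketched generically on toy masks:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy vessel band, with the prediction shifted down by one row.
truth = np.zeros((12, 12), dtype=int)
truth[4:8, :] = 1   # 48 pixels
pred = np.zeros((12, 12), dtype=int)
pred[5:9, :] = 1    # 48 pixels, 36 overlapping

print(round(iou(truth, pred), 2))  # 36 / 60 = 0.6
```

IoU penalizes both missed and spurious pixels, so a score of 0.60 versus 0.30 reflects the doubling of overlap quality reported above.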


Subject(s)
Placenta , Humans , Pregnancy , Placenta/blood supply , Placenta/diagnostic imaging , Female , Deep Learning , Image Processing, Computer-Assisted/methods , Fetofetal Transfusion/surgery , Fetofetal Transfusion/diagnostic imaging , Machine Learning , Robotic Surgical Procedures/methods , Neural Networks, Computer
11.
Med Eng Phys ; 127: 104162, 2024 May.
Article in English | MEDLINE | ID: mdl-38692762

ABSTRACT

OBJECTIVE: Early detection of cardiovascular diseases is based on accurate quantification of the left ventricle (LV) function parameters. In this paper, we propose a fully automatic framework for LV volume and mass quantification from 2D-cine MR images already segmented using U-Net. METHODS: The general framework consists of three main steps: data preparation, including automatic LV localization using a convolutional neural network (CNN) and application of morphological operations to exclude papillary muscles from the LV cavity. The second step consists of automatically extracting the LV contours using a U-Net architecture. Finally, by integrating temporal information, manifested as the spatial motion of myocytes, as a third dimension, we calculated LV volume, LV ejection fraction (LVEF) and left ventricular mass (LVM). Based on these parameters, we detected and quantified cardiac contraction abnormalities using Python software. RESULTS: The CNN was trained with 35 patients and tested on 15 patients from the ACDC database with an accuracy of 99.15%. The U-Net architecture was trained using the ACDC database and evaluated using a local dataset, with a Dice similarity coefficient (DSC) of 99.78% and a Hausdorff distance (HD) of 4.468 mm (p < 0.001). Quantification results showed a strong correlation with physiological measures, with a Pearson correlation coefficient (PCC) of 0.991 for LV volume, 0.962 for LVEF, 0.98 for stroke volume (SV) and 0.923 for LVM after elimination of the papillary muscles. Clinically, our method allows regional and accurate identification of pathological myocardial segments and can serve as a diagnostic aid for cardiac contraction abnormalities. CONCLUSION: Experimental results prove the usefulness of the proposed method for LV volume and function quantification and verify its potential clinical applicability.
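The LV function parameters follow from the standard definitions of stroke volume and ejection fraction; a minimal sketch with hypothetical volumes:

```python
def lv_function_parameters(edv_ml: float, esv_ml: float):
    """Stroke volume and ejection fraction from end-diastolic and
    end-systolic LV volumes (standard definitions):
    SV = EDV - ESV, LVEF = 100 * SV / EDV."""
    sv = edv_ml - esv_ml
    lvef = 100.0 * sv / edv_ml
    return sv, lvef

# Hypothetical volumes, as might be integrated from segmented cine frames.
sv, lvef = lv_function_parameters(edv_ml=120.0, esv_ml=50.0)
print(sv, round(lvef, 1))  # 70.0 mL stroke volume, 58.3% ejection fraction
```

In the framework above, EDV and ESV would come from summing the segmented cavity areas across slices at end-diastole and end-systole.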


Subject(s)
Automation , Heart Ventricles , Image Processing, Computer-Assisted , Magnetic Resonance Imaging, Cine , Papillary Muscles , Humans , Heart Ventricles/diagnostic imaging , Magnetic Resonance Imaging, Cine/methods , Papillary Muscles/diagnostic imaging , Papillary Muscles/physiology , Image Processing, Computer-Assisted/methods , Organ Size , Male , Middle Aged , Neural Networks, Computer , Female , Stroke Volume
12.
BMC Oral Health ; 24(1): 521, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698377

ABSTRACT

BACKGROUND: Oral mucosal lesions are similar to the surrounding normal tissues, i.e., they have many non-salient features, which poses a challenge for accurate lesion segmentation. Additionally, high-precision large models generate too many parameters, which puts pressure on storage and makes it difficult to deploy them on portable devices. METHODS: To address these issues, we design a non-salient target segmentation model (NTSM) to improve segmentation performance while reducing the number of parameters. The NTSM includes a difference association (DA) module and multiple feature hierarchy pyramid attention (FHPA) modules. The DA module enhances feature differences at different levels to learn local context information and extend the segmentation mask to potentially similar areas. It also learns logical semantic relationship information through different receptive fields to determine the actual lesions and further elevates the segmentation performance on non-salient lesions. The FHPA module extracts pathological information from different views by performing the Hadamard product attention (HPA) operation on input features, which reduces the number of parameters. RESULTS: The experimental results on the oral mucosal diseases (OMD) dataset and the International Skin Imaging Collaboration (ISIC) dataset demonstrate that our model outperforms existing state-of-the-art methods. Compared with the nnU-Net backbone, our model has 43.20% fewer parameters while still achieving a 3.14% increase in the Dice score. CONCLUSIONS: Our model achieves high segmentation accuracy on non-salient areas of oral mucosal diseases and can effectively reduce resource consumption.
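An element-wise (Hadamard) gating operation in the spirit of HPA can be sketched as follows; this illustrative gate uses a single weight tensor shaped like the input, which is why it adds far fewer parameters than full attention. It is not the NTSM implementation:

```python
import numpy as np

def hadamard_product_attention(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Element-wise (Hadamard) attention: gate each feature with a sigmoid
    weight. One weight per feature element, versus O(N^2) for a full
    attention matrix over N positions."""
    gate = 1.0 / (1.0 + np.exp(-w))  # sigmoid weights in (0, 1)
    return x * gate                  # Hadamard (element-wise) product

x = np.ones((4, 8, 8))   # toy (C, H, W) feature map
w = np.zeros((4, 8, 8))  # sigmoid(0) = 0.5 everywhere
out = hadamard_product_attention(x, w)
print(out.mean())  # 0.5
```

In a trained model `w` would itself be produced from the input features, so the gate highlights lesion-relevant activations and suppresses the rest.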


Subject(s)
Mouth Diseases , Mouth Mucosa , Humans , Mouth Diseases/diagnostic imaging , Mouth Mucosa/pathology , Mouth Mucosa/diagnostic imaging , Image Processing, Computer-Assisted/methods
13.
PLoS One ; 19(5): e0302124, 2024.
Article in English | MEDLINE | ID: mdl-38696446

ABSTRACT

Image data augmentation plays a crucial role in training deep models by increasing the quantity and diversity of labeled training data. However, existing methods have limitations. Notably, techniques like image manipulation, erasing, and mixing can distort images, compromising data quality. Accurate representation of objects without confusion is a challenge in methods like auto-augment and feature augmentation. Preserving fine details and spatial relationships also proves difficult in certain techniques, as seen in deep generative models. To address these limitations, we propose OFIDA, an object-focused image data augmentation algorithm. OFIDA implements one-to-many enhancements that not only preserve essential target regions but also elevate the authenticity of simulated real-world settings and data distributions. Specifically, OFIDA utilizes a graph-based structure and object detection to streamline augmentation: by leveraging graph properties such as connectivity and hierarchy, it captures object essence and context for improved comprehension of real-world scenarios. We then introduce DynamicFocusNet, a novel object detection algorithm built on the graph framework. DynamicFocusNet merges dynamic graph convolutions and attention mechanisms to flexibly adjust receptive fields. Finally, the detected target images are extracted to facilitate one-to-many data augmentation. Experimental results validate the superiority of our OFIDA method over state-of-the-art methods across six benchmark datasets.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Humans
14.
PLoS One ; 19(5): e0298227, 2024.
Article in English | MEDLINE | ID: mdl-38696503

ABSTRACT

Medical image segmentation is a critical application that plays a significant role in clinical research. Despite the fact that many deep neural networks have achieved quite high accuracy in the field of medical image segmentation, there is still a scarcity of annotated labels, making it difficult to train a robust and generalized model. Few-shot learning has the potential to predict new classes that are unseen in training with a few annotations. In this study, a novel few-shot semantic segmentation framework named prototype-based generative adversarial network (PG-Net) is proposed for medical image segmentation without annotations. The proposed PG-Net consists of two subnetworks: the prototype-based segmentation network (P-Net) and the guided evaluation network (G-Net). On one hand, the P-Net as a generator focuses on extracting multi-scale features and local spatial information in order to produce refined predictions with discriminative context between foreground and background. On the other hand, the G-Net as a discriminator, which employs an attention mechanism, further distills the relation knowledge between support and query, and contributes to P-Net producing segmentation masks of query with more similar distributions as support. Hence, the PG-Net can enhance segmentation quality by an adversarial training strategy. Compared to the state-of-the-art (SOTA) few-shot segmentation methods, comparative experiments demonstrate that the proposed PG-Net provides noticeably more robust and prominent generalization ability on different medical image modality datasets, including an abdominal Computed Tomography (CT) dataset and an abdominal Magnetic Resonance Imaging (MRI) dataset.
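The prototype idea underlying P-Net can be illustrated with the standard masked-average-pooling recipe: average the support features inside the mask, then label query pixels by similarity to that prototype. A generic sketch on synthetic features, not the PG-Net code:

```python
import numpy as np

def masked_average_prototype(features: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Foreground prototype: mean feature vector inside the support mask
    (masked average pooling, the standard prototype-extraction step)."""
    fg = features[:, mask.astype(bool)]  # (C, n_foreground_pixels)
    return fg.mean(axis=1)

def segment_by_prototype(features, prototype, threshold=0.5):
    """Label query pixels whose cosine similarity to the prototype is high."""
    c, h, w = features.shape
    flat = features.reshape(c, -1)
    sim = (flat * prototype[:, None]).sum(0) / (
        np.linalg.norm(flat, axis=0) * np.linalg.norm(prototype) + 1e-8)
    return (sim >= threshold).reshape(h, w)

rng = np.random.default_rng(2)
feats = rng.standard_normal((16, 8, 8))
feats[:, 2:6, 2:6] += 3.0  # "organ" pixels share a feature signature

support_mask = np.zeros((8, 8))
support_mask[2:6, 2:6] = 1
proto = masked_average_prototype(feats, support_mask)
pred = segment_by_prototype(feats, proto)
print(pred[3, 3])  # a foreground pixel matches the prototype
```

PG-Net's contribution layers an adversarial evaluator (G-Net) on top of this kind of prototype-based generator to refine the predicted masks.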


Subject(s)
Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Deep Learning , Algorithms , Magnetic Resonance Imaging/methods
15.
PLoS One ; 19(5): e0302880, 2024.
Article in English | MEDLINE | ID: mdl-38718092

ABSTRACT

Gastrointestinal (GI) cancer is the leading tumour of the gastrointestinal tract and the fourth most significant cause of tumour death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumour while avoiding healthy organs. Delivering high doses of X-rays requires a system for accurately segmenting the GI tract organs. This study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as the encoder in the contraction path and UNet layers as the decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to speed up cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models: Xception, ResNet 101, and NASNet Mobile, each used as an encoder in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models, obtaining a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.


Subject(s)
Gastrointestinal Neoplasms , Gastrointestinal Tract , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Gastrointestinal Neoplasms/diagnostic imaging , Gastrointestinal Neoplasms/pathology , Gastrointestinal Tract/diagnostic imaging , Semantics , Image Processing, Computer-Assisted/methods , Female , Male , Stomach/diagnostic imaging , Stomach/pathology
16.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, particularly human behavior recognition, has become increasingly popular in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With advances in deep learning algorithms and computer hardware, the conventional two-dimensional convolution used to train video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features; the use of 3D convolution in human behavior recognition has accordingly attracted growing interest. However, the increased dimensionality brings challenges: a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. Training can be considerably slow without powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
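The abstract describes redundant-frame elimination without giving the ATC algorithm itself. A minimal sketch of the general idea, under the assumption that redundancy is judged by inter-frame difference (the function name `compress_frames` and the mean-absolute-difference criterion with `threshold` are illustrative choices, not the paper's method):

```python
import numpy as np

def compress_frames(frames, threshold=10.0):
    """Keep a frame only if its mean absolute pixel difference from the
    last kept frame exceeds `threshold`; otherwise treat it as redundant.
    frames: list of equally shaped numpy arrays (grayscale or RGB)."""
    kept = [frames[0]]                            # always keep the first frame
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - kept[-1].astype(float)).mean()
        if diff > threshold:
            kept.append(f)
    return kept
```

Near-static sequences collapse to a few frames, shrinking the input volume fed to the 3D convolutions, which is the source of the compute savings the module targets.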


Subject(s)
Algorithms , Data Compression , Video Recording , Humans , Data Compression/methods , Human Activities , Deep Learning , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods
17.
Platelets ; 35(1): 2344512, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38722090

ABSTRACT

The last decade has seen increasing use of advanced imaging techniques in platelet research. However, there has been a lag in the development of image analysis methods, leaving much of the information trapped in images. Herein, we present a robust analytical pipeline for finding and following individual platelets over time in growing thrombi. Our pipeline covers four steps: detection, tracking, estimation of tracking accuracy, and quantification of platelet metrics. We detect platelets using a deep learning network for image segmentation, which we validated with proofreading by multiple experts. We then track platelets using a standard particle tracking algorithm and validate the tracks with custom image sampling - essential when following platelets within a dense thrombus. We show that our pipeline is more accurate than previously described methods. To demonstrate the utility of our analytical platform, we use it to show that in vivo thrombus formation is much faster than that ex vivo. Furthermore, platelets in vivo exhibit less passive movement in the direction of blood flow. Our tools are free and open source and written in the popular and user-friendly Python programming language. They empower researchers to accurately find and follow platelets in fluorescence microscopy experiments.
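The pipeline links detected platelets across frames with a standard particle tracking algorithm. As a rough illustration of the frame-to-frame linking step (a greedy nearest-neighbour sketch, not the authors' tracker; the function name `link_frames` and the `max_dist` gating parameter are assumptions):

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_dist=5.0):
    """Greedily link detections between two consecutive frames.
    prev_pts, next_pts: (N, 2) and (M, 2) arrays of centroid coordinates.
    Returns (i, j) index pairs; detections left unmatched would end or
    start a track in a full tracking pipeline."""
    links, used = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)  # distances to all candidates
        for j in np.argsort(d):                   # try nearest candidates first
            if d[j] > max_dist:
                break                             # no candidate within gate
            if j not in used:
                used.add(j)
                links.append((i, int(j)))
                break
    return links
```

Production trackers additionally solve the assignment globally and tolerate missed detections across frames, which matters in dense thrombi; this sketch only conveys the basic linking idea.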


In this paper we describe computational tools to find and follow individual platelets in blood clots recorded with fluorescence microscopy. Our tools work in a diverse range of conditions, both in living animals and in artificial flow chamber models of thrombosis. Our work uses deep learning methods to achieve excellent accuracy. We also provide tools for visualizing data and estimating error rates, so you don't have to just trust the output. Our workflow measures platelet density, shape, and speed, which we use to demonstrate differences in the kinetics of clotting in living vessels versus a synthetic environment. The tools we wrote are open source, written in the popular Python programming language, and freely available to all. We hope they will be of use to other platelet researchers.


Subject(s)
Blood Platelets , Deep Learning , Thrombosis , Blood Platelets/metabolism , Thrombosis/blood , Humans , Image Processing, Computer-Assisted/methods , Animals , Mice , Algorithms
18.
Methods Cell Biol ; 186: 213-231, 2024.
Article in English | MEDLINE | ID: mdl-38705600

ABSTRACT

Advancements in multiplexed tissue imaging technologies are vital in shaping our understanding of tissue microenvironmental influences in disease contexts. These technologies now allow us to relate the phenotype of individual cells to their higher-order roles in tissue organization and function. Multiplexed Ion Beam Imaging (MIBI) is one such technology, which uses metal isotope-labeled antibodies and secondary ion mass spectrometry (SIMS) to image more than 40 protein markers simultaneously within a single tissue section. Here, we describe an optimized MIBI workflow for high-plex analysis of Formalin-Fixed Paraffin-Embedded (FFPE) tissues, covering antigen retrieval, metal isotope-conjugated antibody staining, imaging on the MIBI instrument, and subsequent data processing and analysis. While this workflow focuses on imaging human FFPE samples with MIBI, it can be readily extended to other model systems, biological questions, and multiplexed imaging modalities.


Subject(s)
Paraffin Embedding , Humans , Paraffin Embedding/methods , Spectrometry, Mass, Secondary Ion/methods , Tissue Fixation/methods , Image Processing, Computer-Assisted/methods , Formaldehyde/chemistry
19.
Methods Cell Biol ; 187: 223-248, 2024.
Article in English | MEDLINE | ID: mdl-38705626

ABSTRACT

Super-resolution cryo-correlative light and electron microscopy (SRcryoCLEM) is emerging as a powerful method for targeted in situ structural studies of biological samples. By combining the high specificity and localization accuracy of cryogenic single-molecule localization microscopy (cryoSMLM) with the high resolution of cryo-electron tomography (cryoET), this method enables accurately targeted data acquisition and the observation and identification of biomolecules within their natural cellular context. Despite its potential, the adoption of SRcryoCLEM has been hindered by the need for specialized equipment and expertise. In this chapter, we outline a workflow for cryoSMLM and cryoET-based SRcryoCLEM, and we demonstrate that, given the right tools, it is possible to incorporate cryoSMLM into an established cryoET workflow. Using vimentin as an exemplary target of interest, we demonstrate all stages of an SRcryoCLEM experiment: performing cryoSMLM, targeting cryoET acquisition based on single-molecule localization maps, and correlating cryoSMLM and cryoET datasets using scNodes, a software package dedicated to SRcryoCLEM. By showing how SRcryoCLEM enables the imaging of specific intracellular components in situ, we hope to facilitate adoption of the technique within the field of cryoEM.


Subject(s)
Cryoelectron Microscopy , Cryoelectron Microscopy/methods , Humans , Single Molecule Imaging/methods , Electron Microscope Tomography/methods , Software , Image Processing, Computer-Assisted/methods , Vimentin/metabolism , Animals
20.
Methods Cell Biol ; 187: 249-292, 2024.
Article in English | MEDLINE | ID: mdl-38705627

ABSTRACT

Cryogenic ultrastructural imaging techniques such as cryo-electron tomography have revolutionized how the structure of biological systems is investigated, enabling the determination of structures of protein complexes immersed in a complex biological matrix within vitrified cells and model organisms. So far, however, successes have mostly been limited to highly abundant complexes or to structures that are relatively unambiguous and easy to identify by electron microscopy. To realize the full potential of this revolution, researchers must be able to pinpoint lower-abundance species and obtain functional annotations on the state of objects of interest, which can then be correlated with ultrastructural information to build a complete picture of the structure-function relationships underpinning biological processes. Fluorescence imaging under cryogenic conditions has the potential to meet these demands. However, wide-field images acquired with low numerical aperture (NA) air objectives have low resolving power and cannot provide sufficiently accurate three-dimensional (3D) localization to assign functional annotations to individual objects of interest or to target sample debulking so that the structures of interest are preserved. It is therefore necessary to develop super-resolved cryo-fluorescence workflows capable of fulfilling this role and enabling new biological discoveries. In this chapter, we present the current state of development of two super-resolution cryogenic fluorescence techniques, superSIL-STORM and astigmatism-based 3D STORM, show their application to a variety of biological systems, and discuss their advantages and limitations. We further discuss future applicability to cryo-CLEM workflows through examples of practical application to the study of membrane protein complexes in both mammalian cells and Escherichia coli.


Subject(s)
Cryoelectron Microscopy , Cryoelectron Microscopy/methods , Humans , Animals , Imaging, Three-Dimensional/methods , Electron Microscope Tomography/methods , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence/methods