Results 1 - 20 of 25
1.
Med Phys ; 50(9): 5460-5478, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36864700

ABSTRACT

BACKGROUND: Multi-modal learning is widely adopted to learn the latent complementary information between different modalities in multi-modal medical image segmentation tasks. However, traditional multi-modal learning methods require spatially well-aligned, paired multi-modal images for supervised training and cannot leverage unpaired multi-modal images with spatial misalignment and modality discrepancy. To train accurate multi-modal segmentation networks using easily accessible, low-cost unpaired multi-modal images in clinical practice, unpaired multi-modal learning has recently received considerable attention. PURPOSE: Existing unpaired multi-modal learning methods usually focus on the intensity distribution gap but ignore scale variation between modalities. In addition, these methods frequently employ shared convolutional kernels to capture common patterns in all modalities, which are typically inefficient at learning global contextual information. They also rely heavily on large numbers of labeled unpaired multi-modal scans for training, ignoring the practical scenario in which labeled data are limited. To solve these problems, we propose a modality-collaborative convolution and transformer hybrid network (MCTHNet) that uses semi-supervised learning for unpaired multi-modal segmentation with limited annotations; it not only collaboratively learns modality-specific and modality-invariant representations but can also automatically leverage extensive unlabeled scans to improve performance. METHODS: We make three main contributions. First, to alleviate the intensity distribution gap and scale variation across modalities, we develop a modality-specific scale-aware convolution (MSSC) module that adaptively adjusts receptive field sizes and feature normalization parameters according to the input.
Second, we propose a modality-invariant vision transformer (MIViT) module as the shared bottleneck layer for all modalities, which implicitly incorporates convolution-like local operations with the global processing of transformers for learning generalizable modality-invariant representations. Third, we design a multi-modal cross pseudo supervision (MCPS) method for semi-supervised learning, which enforces consistency between the pseudo segmentation maps generated by two perturbed networks to acquire abundant annotation information from unlabeled unpaired multi-modal scans. RESULTS: Extensive experiments were performed on two unpaired CT and MR segmentation datasets: a cardiac substructure dataset derived from the MMWHS-2017 dataset and an abdominal multi-organ dataset consisting of the BTCV and CHAOS datasets. The results show that our method significantly outperforms existing state-of-the-art methods under various labeling ratios and achieves segmentation performance close to that of single-modal methods trained with fully labeled data while leveraging only a small portion of labeled data. Specifically, at a 25% labeling ratio, our method achieves overall mean DSC values of 78.56% and 76.18% for cardiac and abdominal segmentation, respectively, improving the average DSC of the two tasks by 12.84% compared with single-modal U-Net models. CONCLUSIONS: The proposed method is beneficial for reducing the annotation burden of unpaired multi-modal medical images in clinical applications.
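As a rough illustration of the cross pseudo supervision idea described above (not the paper's implementation), the following numpy sketch computes a consistency loss in which each of two perturbed networks is supervised by the argmax pseudo-labels of the other; the function name and shapes are our own assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_pseudo_supervision_loss(logits_a, logits_b, eps=1e-8):
    """Each network is supervised by the argmax pseudo-labels of the other.

    logits_a, logits_b: (N, C) predictions of two perturbed networks
    on the same unlabeled voxels.
    """
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    pseudo_a = p_a.argmax(axis=1)   # hard pseudo-labels from network A
    pseudo_b = p_b.argmax(axis=1)   # hard pseudo-labels from network B
    n = len(p_a)
    # cross-entropy of A's probabilities against B's pseudo-labels, and vice versa
    loss_a = -np.log(p_a[np.arange(n), pseudo_b] + eps).mean()
    loss_b = -np.log(p_b[np.arange(n), pseudo_a] + eps).mean()
    return loss_a + loss_b
```

In practice this term would be added, with a weighting factor, to the supervised loss on the labeled scans.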


Subject(s)
Algorithms , Heart , Supervised Machine Learning , Image Processing, Computer-Assisted
2.
Magn Reson Imaging ; 99: 98-109, 2023 06.
Article in English | MEDLINE | ID: mdl-36681311

ABSTRACT

Prostate cancer is one of the deadliest cancers in humans. To better diagnose prostate cancer, prostate lesion segmentation is an important task, but progress has been slow because prostate lesions are small, irregularly shaped, and blurred in contour. Automatic prostate lesion segmentation from mp-MRI is therefore significant but challenging. Most existing multi-step segmentation methods based on voxel-level classification are time-consuming and may introduce and accumulate errors across steps. To decrease computation time, harness richer 3D spatial features, and fuse the multi-level contextual information of mp-MRI, we present an automatic segmentation method in which all steps are optimized jointly as one step to form an end-to-end convolutional neural network. The proposed end-to-end network, DMSA-V-Net, consists of two parts: (1) a 3D V-Net used as the backbone network, the first attempt to employ a 3D convolutional neural network for CS prostate lesion segmentation; and (2) a deep multi-scale attention mechanism introduced into the 3D V-Net, which focuses strongly on the ROI while suppressing redundant background. As a merit, the attention can adaptively re-align the context information between feature maps at different scales and high-level saliency maps. We performed experiments with five-fold cross-validation on data from 97 patients. The results show a Dice of 0.7014 and a sensitivity of 0.8652, demonstrating that our segmentation approach is more accurate than competing methods.
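A minimal numpy sketch of the kind of saliency-driven gating this abstract describes, under our own simplifying assumptions: a coarse saliency map from a deeper level rescales a feature map so the ROI is boosted and the background suppressed (the paper's actual attention mechanism is more elaborate).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(feature, saliency):
    """Rescale a 3D feature map with a saliency-derived attention map.

    feature : (D, H, W) feature map from the backbone
    saliency: (D, H, W) coarse saliency map from a deeper level
    Returns the attended feature map of the same shape.
    """
    attn = sigmoid(saliency)       # squash saliency to (0, 1)
    return feature * (1.0 + attn)  # residual gating: keep the signal, boost the ROI
```

The residual form `1 + attn` is a common design choice that prevents the gate from zeroing out features entirely.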


Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
3.
IEEE J Biomed Health Inform ; 27(1): 75-86, 2023 01.
Article in English | MEDLINE | ID: mdl-36251915

ABSTRACT

Accurate volumetric segmentation of brain tumors and tissues is beneficial for quantitative brain analysis and brain disease identification in multi-modal Magnetic Resonance (MR) images. Nevertheless, due to the complex relationship between modalities, 3D Fully Convolutional Networks (3D FCNs) using simple multi-modal fusion strategies can hardly learn the complex, nonlinear complementary information between modalities. Meanwhile, indiscriminate feature aggregation between low-level and high-level features easily causes volumetric feature misalignment in 3D FCNs. Moreover, the 3D convolution operations of 3D FCNs are excellent at modeling local relations but typically inefficient at capturing global relations between distant regions in volumetric images. To tackle these issues, we propose an Aligned Cross-Modality Interaction Network (ACMINet) for segmenting brain tumor and tissue regions from MR images. In this network, a cross-modality feature interaction module is first designed to adaptively and efficiently fuse and refine multi-modal features. Second, a volumetric feature alignment module is developed to dynamically align low-level and high-level features via a learnable volumetric feature deformation field. Third, we propose a volumetric dual interaction graph reasoning module for graph-based global context modeling in the spatial and channel dimensions. Our method is applied to brain glioma, vestibular schwannoma, and brain tissue segmentation tasks, with extensive experiments on the BraTS2018, BraTS2020, Vestibular Schwannoma, and iSeg-2017 datasets. Experimental results show that ACMINet achieves state-of-the-art segmentation performance on all four benchmark datasets and obtains the highest DSC score for the hard-to-segment enhancing tumor region on the validation leaderboard of the BraTS2020 challenge.


Subject(s)
Brain Neoplasms , Neuroma, Acoustic , Humans , Neural Networks, Computer , Neuroma, Acoustic/pathology , Brain Neoplasms/pathology , Magnetic Resonance Imaging/methods , Brain/pathology , Image Processing, Computer-Assisted/methods
4.
Comput Biol Med ; 149: 105964, 2022 10.
Article in English | MEDLINE | ID: mdl-36007288

ABSTRACT

Multi-modal medical image segmentation has achieved great success through supervised deep learning networks. However, because of domain shift and limited annotation information, unpaired cross-modality segmentation tasks remain challenging. Unsupervised domain adaptation (UDA) methods can alleviate the performance degradation of cross-modality segmentation through knowledge transfer between domains, but current methods still suffer from model collapse, adversarial training instability, and mismatch of anatomical structures. To tackle these issues, we propose a bidirectional multilayer contrastive adaptation network (BMCAN) for unpaired cross-modality segmentation. A shared encoder is first adopted to learn modality-invariant encoding representations for image synthesis and segmentation simultaneously. Second, to retain anatomical structure consistency in cross-modality image synthesis, we present a structure-constrained cross-modality image translation approach for image alignment. Third, we construct a bidirectional multilayer contrastive learning approach to preserve anatomical structures and enhance encoding representations, using two groups of domain-specific multilayer perceptron (MLP) networks to learn modality-specific features. Finally, a semantic information adversarial learning approach is designed to learn structural similarities of semantic outputs for output-space alignment. Our method was tested on three cross-modality segmentation tasks: brain tissue, brain tumor, and cardiac substructure segmentation. Compared with other UDA methods, experimental results show that BMCAN achieves state-of-the-art segmentation performance on all three tasks, with fewer training components and better feature representations for overcoming overfitting and domain shift.
Our proposed method can efficiently reduce the annotation burden of radiologists in cross-modality image analysis.
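A generic numpy sketch of the contrastive objective family that the multilayer contrastive learning above belongs to (an InfoNCE-style loss over MLP embeddings; the function name, batch shapes, and temperature are our assumptions, not the paper's exact formulation): matching rows from the two domains are positives, all other rows in the batch are negatives.

```python
import numpy as np

def info_nce(z_src, z_tgt, tau=0.07, eps=1e-8):
    """InfoNCE loss between L2-normalized embeddings of corresponding
    patches from two domains.

    z_src, z_tgt: (N, D) embeddings produced by the domain-specific MLPs;
    row i of each matrix is assumed to describe the same anatomical patch.
    """
    z_src = z_src / (np.linalg.norm(z_src, axis=1, keepdims=True) + eps)
    z_tgt = z_tgt / (np.linalg.norm(z_tgt, axis=1, keepdims=True) + eps)
    logits = z_src @ z_tgt.T / tau             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()           # positives sit on the diagonal
```

Minimizing this pulls corresponding cross-domain patches together in embedding space while pushing non-corresponding ones apart.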


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Semantics
5.
IEEE J Biomed Health Inform ; 26(2): 749-761, 2022 02.
Article in English | MEDLINE | ID: mdl-34197331

ABSTRACT

Brain tissue segmentation in multi-modal magnetic resonance (MR) images is significant for the clinical diagnosis of brain diseases. Due to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge is still challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. In this network, we first propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module to integrate the intra-slice information with the inter-slice information without significant memory consumption; secondly, we design a multi-modal cross-dimension attention (MCDA) module to automatically capture the effective information in each dimension of multi-modal images; then, we apply 3DAPC-RRS modules and MCDA modules to a 3D FCN with multiple encoded streams and one decoded stream for constituting the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, namely MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark challenge datasets and achieves the best segmentation performance on the cerebrospinal fluid region. Compared with other methods, our proposed approach exploits the complementary information of different modalities to segment brain tissue regions in both adult and infant MR images, and it achieves the average Dice coefficient of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 testing data, respectively. The proposed method is beneficial for quantitative brain analysis in the clinical study, and our code is made publicly available.


Subject(s)
Brain Diseases , Magnetic Resonance Imaging , Attention , Brain/diagnostic imaging , Disease Progression , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging/methods
6.
Medicine (Baltimore) ; 100(31): e26783, 2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34397827

ABSTRACT

BACKGROUND: The type of general anesthesia may affect the quality of recovery, but few studies have investigated the quality of postoperative recovery, and none has focused on patients undergoing breast augmentation. METHODS: This prospective, parallel, randomized controlled study enrolled 104 patients undergoing transaxillary endoscopic breast augmentation. Eligible patients were randomly assigned to receive inhalation anesthesia (IH, n = 52) or total intravenous anesthesia (TIVA, n = 52). Quality of recovery was assessed on the first and second postoperative days using the 15-item Quality of Recovery questionnaire (QoR-15). Baseline demographic, clinical, and operative data were also collected. RESULTS: The IH and TIVA groups had similar QoR-15 total scores on the first postoperative day (P = .921) and on the second postoperative day (P = .960), but the IH group had a significantly higher proportion of patients receiving antiemetics than the TIVA group (53.6% vs 23.1%, P = .002). Multivariate analysis revealed that the type of general anesthesia was not significantly associated with QoR-15 total scores on the first postoperative day (ß = 0.68, P = .874) or on the second postoperative day (ß = 0.56, P = .892), after adjusting for age, BMI, operation time, steroid use, and antiemetic use. CONCLUSION: For patients undergoing transaxillary endoscopic breast augmentation, the type of general anesthesia did not significantly affect the quality of recovery. Both IH and TIVA provided good quality of recovery, as demonstrated by high QoR-15 total scores, suggesting that the type of general anesthesia may not be the most critical factor in recovery quality for these patients.


Subject(s)
Breast Implantation/standards , Endoscopy/standards , Recovery of Function , Adult , Aged , Anesthesia Recovery Period , Anesthesia, General/methods , Breast Implantation/methods , Breast Implantation/statistics & numerical data , Endoscopy/methods , Endoscopy/statistics & numerical data , Female , Humans , Middle Aged , Postoperative Complications , Prospective Studies , Surveys and Questionnaires
7.
J Acoust Soc Am ; 149(2): 1338, 2021 02.
Article in English | MEDLINE | ID: mdl-33639796

ABSTRACT

Speech plays an important role in human-computer emotional interaction. FaceNet, used in face recognition, has achieved great success owing to its excellent feature extraction. In this study, we adopt the FaceNet model and improve it for speech emotion recognition. To apply this model to our work, speech signals are divided into segments at a given time interval, and the signal segments are transformed into a discrete waveform diagram and a spectrogram. The waveform and spectrogram are then separately fed into FaceNet for end-to-end training. Our empirical study shows that pretraining FaceNet on spectrograms is effective, so we pretrain the network on the CASIA dataset and then fine-tune it on the IEMOCAP dataset with waveforms. This derives the maximum transfer-learning benefit from the CASIA dataset, whose high accuracy may be due to its clean signals. Our preliminary experimental results show accuracies of 68.96% and 90% on the emotion benchmark datasets IEMOCAP and CASIA, respectively. Cross-training is then conducted across the datasets, and comprehensive experiments are performed. Experimental results indicate that the proposed approach outperforms state-of-the-art single-modal methods on the IEMOCAP dataset.
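The segment-to-spectrogram step described above can be sketched with a plain short-time Fourier transform; this numpy-only version (our own illustration, with assumed sampling rate, window, and hop sizes rather than the paper's settings) produces the log-power image that would be fed to the network.

```python
import numpy as np

def log_spectrogram(signal, fs=16000, win=512, hop=256):
    """Short-time log-power spectrogram of a 1-D speech segment.

    Returns the frequency axis and a (frames, win // 2 + 1) array of
    log power values suitable for use as a 2-D network input.
    """
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, np.log(power + 1e-10)   # log compression for dynamic range
```

A pure tone shows up as a bright horizontal band at its frequency, which is what makes the spectrogram a natural image-like input.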


Subject(s)
Neural Networks, Computer , Speech , Emotions , Humans , Machine Learning
8.
Med Phys ; 48(4): 1685-1696, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33300190

ABSTRACT

PURPOSE: The segmentation accuracy of medical images was improved by increasing the number of training samples using a local image warping technique. The performance of the proposed method was evaluated in the segmentation of breast masses, prostate and brain tumors, and lung nodules. METHODS: We propose a simple data augmentation method called stochastic evolution (SE). The idea of SE stems from our thinking about the deterioration of diseased tissue and the healing process. To simulate this natural process, we implement it with a local distortion algorithm for image warping: the irregular deterioration and healing of the diseased tissue are simulated according to the direction of the local distortion, producing natural samples that are indistinguishable by humans. RESULTS: The proposed method is evaluated on four segmentation tasks: breast masses, prostate, brain tumors, and lung nodules. Compared with four segmentation methods based on the UNet architecture trained without any augmented data, the accuracy and Hausdorff distance obtained with our approach remain almost the same, while the Dice similarity coefficient (DSC) and sensitivity (SEN) both improve: DSC increases by 5.2%, 2.8%, 1.0%, and 3.2%, and SEN by 6.9%, 4.3%, 1.2%, and 4.5%, respectively. CONCLUSIONS: Experimental results show that the proposed SE data augmentation method can improve the segmentation accuracy of breast masses, prostate, brain tumors, and lung nodules, and that it is robust across image datasets and imaging modalities.
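A minimal numpy sketch of a local image-warping augmentation in the spirit of the method above (our own simplified stand-in, not the paper's algorithm): pixels inside a disc are pushed radially toward or away from a center point, with bilinear resampling, while everything outside the disc is untouched.

```python
import numpy as np

def local_warp(image, center, radius, strength):
    """Radially distort a 2-D image inside a disc around `center`.

    center  : (row, col) of the distortion
    radius  : disc radius in pixels; pixels outside are unchanged
    strength: in (-1, 1); positive expands the region, negative shrinks it
    """
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    dr, dc = rows - center[0], cols - center[1]
    dist = np.hypot(dr, dc)
    # displacement falls off smoothly to zero at the disc boundary
    factor = np.where(dist < radius, strength * (1 - dist / radius), 0.0)
    src_r = np.clip(rows - dr * factor, 0, h - 1)
    src_c = np.clip(cols - dc * factor, 0, w - 1)
    # bilinear sampling at the (fractional) source coordinates
    r0, c0 = np.floor(src_r).astype(int), np.floor(src_c).astype(int)
    r1, c1 = np.minimum(r0 + 1, h - 1), np.minimum(c0 + 1, w - 1)
    fr, fc = src_r - r0, src_c - c0
    top = image[r0, c0] * (1 - fc) + image[r0, c1] * fc
    bot = image[r1, c0] * (1 - fc) + image[r1, c1] * fc
    return top * (1 - fr) + bot * fr
```

Applied with random centers, radii, and signed strengths, this yields warped training samples whose lesion boundaries grow or shrink plausibly.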


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Breast , Humans , Male , Prostate
9.
J Digit Imaging ; 33(5): 1242-1256, 2020 10.
Article in English | MEDLINE | ID: mdl-32607905

ABSTRACT

Classification of benign and malignant lung nodules in chest CT images is a key step in the diagnosis of early-stage lung cancer, as well as an effective way to improve patients' survival rate. However, due to the diversity of lung nodules and their visual similarity to surrounding tissues, it is difficult to construct a robust classification model with conventional deep learning-based diagnostic methods. To address this problem, we propose a multi-model ensemble learning architecture based on 3D convolutional neural networks (MMEL-3DCNN). This approach incorporates three key ideas: (1) a multi-model network architecture that adapts well to the heterogeneity of lung nodules; (2) an input formed by concatenating the intensity image corresponding to the nodule mask, the original image, and the enhanced image, which helps the model extract features with greater discriminative capacity; (3) dynamic selection of the model corresponding to the nodule size at prediction time, which effectively improves the generalization ability of the model. In addition, ensemble learning is applied to further improve the robustness of the nodule classification model. The proposed method has been experimentally verified on the public LIDC-IDRI dataset, and the experimental results show that the MMEL-3DCNN architecture obtains satisfactory classification results.
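The size-based model selection plus ensemble averaging in ideas (3) and the ensemble step can be sketched as follows; the size bands, thresholds, and function names here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ensemble_predict(nodule, diameter_mm, models_by_size, thresholds=(6.0, 16.0)):
    """Route a nodule to the sub-ensemble matching its size band, then
    average the malignancy probabilities of that band's members.

    models_by_size: {'small': [...], 'medium': [...], 'large': [...]},
    each member a callable mapping a nodule volume to a probability.
    thresholds: illustrative diameter cut-offs in millimetres.
    """
    if diameter_mm < thresholds[0]:
        band = 'small'
    elif diameter_mm < thresholds[1]:
        band = 'medium'
    else:
        band = 'large'
    probs = [model(nodule) for model in models_by_size[band]]
    return float(np.mean(probs)), band
```

Routing by size lets each sub-ensemble specialize, while averaging within a band smooths out individual model errors.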


Subject(s)
Lung Neoplasms , Humans , Lung , Lung Neoplasms/diagnostic imaging , Machine Learning , Radiographic Image Interpretation, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed
10.
J Plast Reconstr Aesthet Surg ; 73(12): 2225-2231, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32674909

ABSTRACT

Capsulectomy is a standard treatment for capsular contracture after breast augmentation. Incision via the endoscopic transaxillary approach is generally preferred by Asian women, but relevant literature addressing endoscopic transaxillary capsulectomy is limited. This study described the techniques of endoscopic transaxillary capsulectomy with reimplantation performed as a single-operator outpatient procedure. This retrospective study included patients with a diagnosis of capsular contracture who underwent endoscopic transaxillary capsulectomy with immediate reimplantation between January 1, 2013 and December 31, 2017. Data regarding history, implant type, operation time, duration of postoperative drainage, and complications were collected and analyzed. A total of 42 patients with a mean age of 36 years were included (11 unilateral and 31 bilateral capsulectomies). Total capsulectomy was performed on four (10%) patients for previous subglandular augmentation, and anterior capsulectomy was performed on 38 (91%) patients for previous submuscular augmentation. Mean sizes of previous and new (or reused) implants were 268 ml (median 283 ml, SD 57) and 317 ml (median 307 ml, SD 49), respectively. Mean operation times for unilateral and bilateral procedures were 4 h 15 min and 6 h 28 min, respectively. Postoperatively, the mean duration of wound drainage was 10 (SD 3) days. Six (14%) patients experienced complications, including two (5%) with seroma, two (5%) with hematoma, one (2%) with infection, and four (10%) with recurrent capsular contracture. The four recurrent cases underwent repeat endoscopic transaxillary capsulectomy. All 42 patients had satisfactory clinical and esthetic outcomes. This study demonstrated the feasibility of endoscopic transaxillary capsulectomy with immediate reimplantation performed as ambulatory surgery by a single surgeon in a stable, comfortable sitting position without the aid of a surgical assistant.


Subject(s)
Breast Implantation , Contracture/surgery , Endoscopy/methods , Mammaplasty/methods , Postoperative Complications/surgery , Replantation/methods , Adult , Axilla/surgery , Esthetics , Female , Humans , Retrospective Studies
11.
IEEE J Biomed Health Inform ; 24(7): 2006-2015, 2020 07.
Article in English | MEDLINE | ID: mdl-31905154

ABSTRACT

Early detection of lung cancer is an effective way to improve the survival rate of patients, and accurate detection of lung nodules in computed tomography (CT) images is a critical step in the diagnosis of lung cancer. However, due to the heterogeneity of lung nodules and the complexity of the surrounding environment, developing a robust nodule detection method is challenging. In this study, we propose a two-stage convolutional neural network (TSCNN) for lung nodule detection. The first stage, based on an improved U-Net segmentation network, establishes an initial detection of lung nodules. In this stage, to obtain a high recall rate without introducing excessive false-positive nodules, we propose a new sampling strategy for training, together with a two-phase prediction method. The second stage, based on the proposed dual-pooling structure, comprises three 3D-CNN classification networks for false-positive reduction. Since network training requires a significant amount of data, we designed a random-mask data augmentation method. Furthermore, we improved the generalization ability of the false-positive reduction model by means of ensemble learning. We verified the proposed architecture on the LUNA dataset, where the TSCNN architecture obtained competitive detection performance.
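A random-mask augmentation in the spirit of the one mentioned above can be sketched in a few lines of numpy (a generic cutout-style version under our own assumptions about block count and size; the paper's exact masking scheme may differ): a few random cubes in each 3-D training patch are zeroed out.

```python
import numpy as np

def random_mask(volume, n_blocks=4, block=8, rng=None):
    """Zero out `n_blocks` random cubes of side `block` in a 3-D patch.

    Returns a masked copy; the input volume is left untouched.
    """
    rng = rng or np.random.default_rng()
    out = volume.copy()
    d, h, w = out.shape
    for _ in range(n_blocks):
        z = rng.integers(0, max(d - block, 1))
        y = rng.integers(0, max(h - block, 1))
        x = rng.integers(0, max(w - block, 1))
        out[z:z + block, y:y + block, x:x + block] = 0
    return out
```

Forcing the classifier to cope with occluded sub-regions discourages it from overfitting to any single local cue.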


Subject(s)
Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Humans , Imaging, Three-Dimensional , Tomography, X-Ray Computed/methods
12.
Phys Med ; 63: 112-121, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31221402

ABSTRACT

It is difficult to obtain an accurate segmentation due to the variety of lung nodules in computed tomography (CT) images. In this study, we propose a data-driven model, the Cascaded Dual-Pathway Residual Network (CDP-ResNet), to improve the segmentation of lung nodules in CT images. Our approach incorporates multi-view and multi-scale features of different nodules from CT images. The proposed residual-block-based dual-path network extracts local features and rich contextual information of lung nodules. In addition, we designed an improved weighted sampling strategy that selects training samples based on nodule edges. The method was extensively evaluated on the LIDC dataset, which contains 986 nodules. Experimental results show that the CDP-ResNet achieves superior segmentation performance, with an average Dice score (standard deviation) of 81.58% (11.05) on the LIDC dataset. Moreover, we compared our results with those of four radiologists on the same dataset; the comparison shows that the CDP-ResNet is slightly better than human experts in terms of segmentation accuracy, and the proposed segmentation method also outperforms existing methods.


Subject(s)
Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed , Humans
13.
J Med Syst ; 43(8): 241, 2019 Jun 21.
Article in English | MEDLINE | ID: mdl-31227923

ABSTRACT

The multi-atlas method is one of the most efficient and common automatic labeling methods; it uses prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on registration, which may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method based on hashing-retrieval atlas forests. The proposed method propagates labels without registration to reduce errors and constructs a target-oriented learning model to integrate information among the atlases. The method introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset and reduces computing time. Furthermore, the method treats each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests by integrating information from the dataset; the trained model is then used to predict the labels of the target. Experimental results on two datasets illustrate that the proposed method is promising for the automatic labeling of MR brain images.
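One standard way to hash feature vectors for fast approximate retrieval, in the same family as the hashing step described above (a generic random-hyperplane scheme of our own choosing, not necessarily the paper's encoding), is to threshold projections onto random hyperplanes and pack the resulting bits into bucket codes.

```python
import numpy as np

class HyperplaneLSH:
    """Random-hyperplane locality-sensitive hashing for voxel feature vectors.

    Vectors with small angular distance tend to share bucket codes, so
    candidate samples can be retrieved by bucket instead of a linear scan.
    """

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))

    def hash(self, x):
        """Map feature vectors of shape (N, dim) or (dim,) to integer codes."""
        bits = (np.atleast_2d(x) @ self.planes.T) > 0        # (N, n_bits) signs
        return (bits @ (1 << np.arange(bits.shape[1]))).astype(int)
```

At query time, only samples whose code matches (or nearly matches) the query's code need to be examined.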


Subject(s)
Image Processing, Computer-Assisted , Neuroimaging , Pattern Recognition, Automated/methods , Algorithms , Humans , Machine Learning
14.
J Med Syst ; 42(8): 138, 2018 Jun 25.
Article in English | MEDLINE | ID: mdl-29938379

ABSTRACT

Iatrogenic injury of the ureter during clinical operations may cause serious complications and kidney damage. To avoid such a medical accident, it is necessary to provide the ureter position information to the doctor. For this purpose, a ureter position detection and display system based on augmented reality is proposed to detect the ureter covered by human tissue. Two key issues must be considered in this new system: how to detect the covered ureter, which cannot be captured by the electronic endoscope, and how to display the ureter position with stable, high-quality images. At the same time, any processing delay in the system would disturb the surgery. An aided-hardware detection method and target detection algorithms are proposed in this system. To mark the ureter position, a surface-lighting plastic optical fiber (POF) with an encoded light-emitting diode (LED) light is used to indicate the ureter position. The monochrome channel filtering algorithm (MCFA) is proposed to locate the ureter region more precisely. The ureter position is extracted using the proposed automatic region growing algorithm (ARGA), which uses the statistical information of the monochrome channel to select the growing seed point. In addition, based on the pulse signal of the encoded light, recognition of bright and dark frames with the aided hardware (BDAH) is proposed to expedite processing. Experimental results demonstrate that the proposed endoscope system can identify 92.04% of the ureter region on average.
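The core of any region-growing extraction like the ARGA step above is a flood fill from a seed pixel over an intensity-similarity criterion; this is a textbook 4-connected version in numpy (the paper's seed selection and similarity statistics are more specific than this sketch).

```python
import numpy as np
from collections import deque

def region_grow(channel, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `tol` of the seed's monochrome-channel value.

    channel: 2-D intensity array; seed: (row, col); returns a boolean mask.
    """
    h, w = channel.shape
    ref = float(channel[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(channel[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

Starting from a seed inside the fiber-lit region, the fill stops at the intensity boundary, yielding the ureter mask.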


Subject(s)
Algorithms , Ureter , Virtual Reality , Endoscopes , Endoscopy/methods , Humans , Iatrogenic Disease/prevention & control
15.
Med Phys ; 44(12): 6329-6340, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28921541

ABSTRACT

PURPOSE: Multiatlas-based methods are extensively used in MR brain image segmentation because of their simplicity and robustness. They provide excellent accuracy, although they are time-consuming and limited in incorporating information from new atlases. In this study, automatic labeling of MR brain images through extensible learning and atlas forests is presented to address these limitations. METHODS: We propose an extensible learning model that makes the multiatlas-based framework capable of managing datasets with numerous atlases or dynamic atlas datasets while ensuring the accuracy of automatic labeling. Two new strategies are used to reduce the time and space complexity and improve the efficiency of automatic labeling of brain MR images. First, atlases are encoded into atlas forests through random forest technology to avoid the time consumed by cross-registration between atlases and the target image, and a scatter spatial vector is designed to eliminate errors caused by inaccurate registration. Second, an atlas selection method based on the extensible learning model is used to select atlases for the target image without traversing the entire dataset and thereby obtain accurate labeling. RESULTS: The labeling results of the proposed method were evaluated on three public datasets, namely IBSR, LONI LPBA40, and ADNI. With the proposed method, the Dice coefficient values on the three datasets were 84.17 ± 4.61%, 83.25 ± 4.29%, and 81.88 ± 4.53%, respectively, about 5% higher than those of the conventional method. The efficiency of the extensible learning model was also compared against state-of-the-art methods for labeling MR brain images. Experimental results showed that the proposed method achieves accurate labeling for MR brain images without traversing the entire dataset.
CONCLUSION: In the proposed multiatlas-based method, extensible learning and atlas forests were applied to control the automatic labeling of brain anatomies on large atlas datasets or dynamic atlas datasets and obtain accurate results.


Subject(s)
Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging , Automation , Humans
16.
J Med Syst ; 40(12): 266, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27730392

ABSTRACT

Iatrogenic injury of the ureter occurs occasionally in clinical laparoscopic surgery and may cause serious complications and kidney damage. To avoid such an injury, it is necessary to detect the ureter position in real time, which current endoscopes cannot do. To provide a real-time display of the ureter position during surgery, we propose a novel endoscope system consisting of a modified endoscope light and a new lumiontron tube with an LED light. The endoscope light is modified to detect the position of the ureter using our proposed dim target detection algorithm (DTDA). To make this new system functional, two algorithmic approaches are proposed for displaying the ureter position: the horizontal position of the ureter is detected by a center-line extraction method, and the depth of the ureter is estimated by a depth estimation method. Experimental results demonstrate that the proposed endoscope system can extract the position and depth information of the ureter and exhibits superior performance in terms of accuracy and stability.


Subject(s)
Algorithms , Endoscopes , Imaging, Three-Dimensional/methods , Ureter/anatomy & histology , Equipment Design , Humans
17.
Int J Comput Assist Radiol Surg ; 11(12): 2139-2151, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27423650

ABSTRACT

PURPOSE: Accurate target delineation is a critical step in radiotherapy. In this study, a robust contour propagation method is proposed to help physicians delineate lung tumors in four-dimensional computed tomography (4D-CT) images efficiently and accurately. METHODS: The proposed method starts with manually delineated contours on the reference phase. Each contour is fitted by a non-uniform cubic B-spline curve, and its deformation on the target phase is achieved by moving its control vertexes so that the intensity similarity between the two contours is maximized. Because a contour is usually the boundary of a lesion or tissue that may deform quite differently from the tissues outside that boundary, the proposed method treats each contour as a deformable entity, a non-uniform cubic B-spline curve, and registers the contour entity instead of the entire image. This prevents the contour deformation from being smoothed by surrounding tissues and greatly reduces the time consumption while preserving the accuracy of contour propagation. Eighteen 4D-CT cases with 444 gross tumor volume (GTV) contours, manually delineated slice by slice on the maximal inhale and exhale phases, are used to verify the proposed method. RESULTS: The Jaccard similarity coefficient (JSC) between the propagated GTV and the manually delineated GTV is 0.885 ± 0.026, and the Hausdorff distance (HD) is [Formula: see text] mm. In addition, the time for propagating the GTV to all phases is 3.67 ± 3.41 minutes. These results are better than those of the fast adaptive stochastic gradient descent (FASGD) B-spline, 3D+t B-spline, and diffeomorphic Demons methods. CONCLUSIONS: The proposed method helps physicians delineate target volumes efficiently and accurately.
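The deformable entity at the heart of the method is a cubic B-spline curve whose shape is controlled entirely by its control vertexes: moving a vertex bends the curve locally, which is what the propagation step optimizes. A minimal sketch of evaluating a uniform (rather than non-uniform, as in the paper) cubic B-spline curve from its control polygon:

```python
import numpy as np

def cubic_bspline(ctrl, n=100):
    """Evaluate a uniform cubic B-spline curve from control vertexes.
    ctrl: (m, 2) array of control points, m >= 4; returns (n, 2) curve
    points. Each point is a convex combination of 4 consecutive control
    vertexes, so moving one vertex deforms the curve only locally."""
    ctrl = np.asarray(ctrl, dtype=float)
    m = len(ctrl)
    pts = []
    for t in np.linspace(0, m - 3, n, endpoint=False):
        i = int(t)            # index of the active curve segment
        u = t - i             # local parameter in [0, 1)
        # Uniform cubic B-spline basis functions (sum to 1)
        b = np.array([(1 - u) ** 3,
                      3 * u**3 - 6 * u**2 + 4,
                      -3 * u**3 + 3 * u**2 + 3 * u + 1,
                      u**3]) / 6.0
        pts.append(b @ ctrl[i:i + 4])
    return np.array(pts)
```

In the paper's setting, the optimizer would perturb `ctrl` on the target phase and score each candidate curve by intensity similarity to the reference contour.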


Subject(s)
Four-Dimensional Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Radiotherapy Planning, Computer-Assisted/methods , Algorithms , Humans , Lung Neoplasms/pathology , Lung Neoplasms/radiotherapy
18.
Magn Reson Imaging ; 34(4): 579-95, 2016 May.
Article in English | MEDLINE | ID: mdl-26712656

ABSTRACT

Myocardial motion estimation from tagged cardiac magnetic resonance (TCMR) images is of great significance in the clinical diagnosis and treatment of heart disease. The harmonic phase analysis method (HARP) and the local sine-wave modeling method (SinMod) are two state-of-the-art motion estimation methods for TCMR images, since both directly obtain the inter-frame motion displacement vector field (MDVF) with high accuracy and speed. Of the two, SinMod outperforms HARP in displacement detection and in noise and artifact reduction. However, SinMod has some drawbacks: (1) it cannot estimate local displacements larger than half of the tag spacing; (2) it has observable errors in tracking tag motion; and (3) the estimated MDVF usually has large local errors. To overcome these problems, we present a novel motion estimation method that tracks the motion of tags and then estimates the dense MDVF by interpolation. In this new method, a parameter estimation procedure for global motion is applied to match tag intersections between different frames, ensuring that certain kinds of large displacements are correctly estimated. In addition, a strategy of tag motion constraints is applied to eliminate most of the errors produced by inter-frame tag tracking, and a multi-level B-spline approximation algorithm is utilized to enhance the local continuity and accuracy of the final MDVF. The proposed method obtains a more accurate MDVF than SinMod and overcomes the drawbacks listed above; however, its accuracy depends on the accuracy of tag-line detection, and it has higher time complexity.
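The final step — spreading sparse displacements measured at tag intersections into a dense MDVF — can be sketched with simple inverse-distance weighting. This is an assumed stand-in for the paper's multi-level B-spline approximation, chosen only to keep the example short; the interface and names are hypothetical.

```python
import numpy as np

def dense_mdvf(points, disps, shape, p=2, eps=1e-9):
    """Interpolate sparse 2-D displacements (e.g. tracked tag
    intersections) to a dense motion displacement vector field by
    inverse-distance weighting.
    points: (k, 2) sample positions; disps: (k, 2) displacements;
    shape: (H, W) of the output grid. Returns an (H, W, 2) field."""
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # Squared distances from every grid point to every sample point
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    w = 1.0 / (d2 ** (p / 2) + eps)          # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)        # normalize to sum to 1
    field = w @ disps                        # weighted average per voxel
    return field.reshape(tuple(shape) + (2,))
```

Multi-level B-spline approximation achieves the same goal with better local continuity, refining a coarse spline lattice over successive levels rather than averaging globally.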


Subject(s)
Algorithms , Heart/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Motion , Artifacts , Humans
19.
Magn Reson Imaging ; 32(9): 1139-55, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25087857

ABSTRACT

A robust and accurate center-frequency (CF) estimation (RACE) algorithm is proposed for improving the performance of the local sine-wave modeling (SinMod) method, a well-established motion estimation method for tagged cardiac magnetic resonance (MR) images. The RACE algorithm automatically, effectively, and efficiently produces an appropriate CF estimate for SinMod even when the specified tagging parameters are unknown, on account of two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison, and validation approaches that work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the importance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving its motion estimation performance.
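The mean-shift ingredient is standard: starting from a rough guess, repeatedly move to the weighted mean of the samples inside a window until the estimate stops moving, which climbs to a local mode. A minimal one-dimensional sketch with a flat kernel, applied here to a generic weighted sample set (the spectrum details of RACE are not reproduced):

```python
import numpy as np

def mean_shift_peak(freqs, weights, start, bandwidth, iters=50, tol=1e-6):
    """Locate a mode of weighted 1-D samples (e.g. the magnitude spectrum
    of a tagged image) by mean-shift with a flat kernel: at each step,
    move to the weighted mean of the samples within `bandwidth`."""
    x = float(start)
    for _ in range(iters):
        mask = np.abs(freqs - x) <= bandwidth
        if not mask.any():
            break                              # empty window: stop
        new_x = np.average(freqs[mask], weights=weights[mask])
        if abs(new_x - x) < tol:
            x = new_x
            break                              # converged to a mode
        x = new_x
    return x
```

RACE additionally combines estimates from two tag directions to harden this single-direction estimate against noise.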


Subject(s)
Algorithms , Heart/physiology , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Myocardial Contraction , Artifacts , Humans , Motion , Reproducibility of Results
20.
Technol Cancer Res Treat ; 12(5): 391-401, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23617286

ABSTRACT

Smoothing the planned fluence maps is an important step in facilitating the leaf sequencing process in intensity-modulated radiation therapy (IMRT) and in designing a practical leaf sequencing algorithm. The objective is to achieve both high-efficiency and high-precision dose delivery by accounting for the characteristics of the leaf sequencing process. The key factor affecting the total number of monitor units in the leaf sequencing optimization process is the maximum flow value of the digraph formulated from the fluence maps. Therefore, one strategy for balancing dose conformity against the total number of monitor units in dose delivery is to balance the dose distribution function and this maximum flow value. However, there are too many paths in the digraph, and it is not known in advance which path carries the maximum flow. We therefore select the maximum flow value among the horizontal paths and use it in the objective function of the fluence map optimization. The resulting model is a conventional linearly constrained quadratic optimization model that can be solved easily by an interior-point method, and we believe the smoothed maps it produces are more suitable for the leaf sequencing optimization process than those of other smoothing models. A clinical head-and-neck case and a prostate case were tested and compared using our proposed model and a smoothing model based on the minimization of total variance. Optimization results at the same level of total number of monitor units (TNMU) show that the fluence maps obtained from our model have much better dose performance for the target/non-target regions than the maps from the total-variance-based smoothing model, indicating that our model achieves a better dose distribution when the algorithm suppresses the TNMU at the same level.
Although we have used only the maximum flow value of the horizontal paths in the digraph in the objective function, a good balance is achieved between dose conformity and the total number of monitor units. This idea can be extended to other fluence map optimization models, and we believe it can also achieve good performance.
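The horizontal-path quantity has a simple interpretation: for a unidirectional leaf sweep, the monitor units needed to deliver one row of a fluence map equal the sum of its positive increments, and the row with the largest such sum bounds the delivery effort. A minimal sketch of this standard bound (an illustrative reading of the paper's horizontal-path flow, not its exact digraph construction):

```python
import numpy as np

def row_complexity(row):
    """Monitor units needed to deliver one leaf-pair row of a fluence
    map with a unidirectional sliding-window sweep: the sum of positive
    increments along the row (starting from zero)."""
    prev = 0.0
    total = 0.0
    for a in row:
        if a > prev:
            total += a - prev   # only upward steps cost monitor units
        prev = a
    return total

def max_horizontal_flow(fluence):
    """Maximum row complexity over the map; penalizing this quantity in
    the smoothing objective suppresses the total number of monitor
    units (TNMU)."""
    return max(row_complexity(r) for r in np.asarray(fluence, dtype=float))
```

Adding this scalar to a quadratic fidelity term yields the kind of linearly constrained quadratic program the abstract describes.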


Subject(s)
Algorithms , Head and Neck Neoplasms/radiotherapy , Models, Theoretical , Prostatic Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated , Humans , Male , Radiotherapy Dosage