Results 1 - 20 of 32
1.
Comput Biol Med ; 174: 108420, 2024 May.
Article in English | MEDLINE | ID: mdl-38613896

ABSTRACT

BACKGROUND AND OBJECTIVE: Liver tumor segmentation (LiTS) accuracy on contrast-enhanced computed tomography (CECT) images is higher than that on non-contrast computed tomography (NCCT) images. However, CECT requires contrast medium and repeated scans to obtain multiphase enhanced CT images, which is time-consuming and costly. Despite its lower LiTS accuracy, NCCT therefore still plays an irreplaceable role in some clinical settings, such as guided brachytherapy, ablation, or the evaluation of patients with impaired renal function. In this study, we aim to generate enhanced, high-contrast pseudo-color CT (PCCT) images to improve the accuracy of LiTS and RECIST diameter measurement on NCCT images. METHODS: To generate high-contrast images of liver tumor regions, an intensity-based tumor conspicuity enhancement (ITCE) model was first developed. In the ITCE model, a pseudo-color conversion function derived from the intensity distribution of the tumor was established and applied to NCCT to generate enhanced PCCT images. Additionally, we designed a tumor conspicuity enhancement-based liver tumor segmentation (TCELiTS) model to improve the segmentation of liver tumors on NCCT images. The TCELiTS model consists of three components: an image enhancement module based on the ITCE model, a segmentation module based on a deep convolutional neural network, and an attention loss module based on restricted activation. Segmentation performance was analyzed using the Dice similarity coefficient (DSC), sensitivity, specificity, and RECIST diameter error. RESULTS: To develop the deep learning model, 100 patients with histopathologically confirmed liver tumors (hepatocellular carcinoma, 64 patients; hepatic hemangioma, 36 patients) were randomly divided into a training set (75 patients) and an independent test set (25 patients). Compared with existing automatic tumor segmentation networks trained on CECT images (U-Net, nnU-Net, DeepLab-V3, Modified U-Net), the DSCs achieved on the enhanced PCCT images are all improved relative to those on NCCT images: from 0.696 to 0.713 for U-Net, from 0.715 to 0.776 for nnU-Net, from 0.748 to 0.788 for DeepLab-V3, and from 0.733 to 0.799 for Modified U-Net. In addition, an observer study involving 5 doctors compared segmentation on enhanced PCCT images with that on NCCT images and showed that enhanced PCCT images make it easier for doctors to segment tumor regions: accuracy improved by approximately 3%-6%, while the time required to segment a single CT image was reduced by approximately 50%. CONCLUSIONS: Experimental results show that the ITCE model can generate high-contrast enhanced PCCT images, especially in liver regions, and that the TCELiTS model can improve LiTS accuracy on NCCT images.
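
As a point of reference for the Dice similarity coefficient (DSC) reported in this and several of the following records, here is a minimal sketch of computing DSC between two binary masks; the array shapes and example masks are illustrative, not data from the paper.

```python
# Minimal sketch (not the authors' code): Dice similarity coefficient on binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8)
b = np.zeros((64, 64), dtype=np.uint8)
a[20:40, 20:40] = 1
b[25:45, 25:45] = 1
print(f"DSC = {dice_coefficient(a, b):.3f}")
```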


Subjects
Liver Neoplasms; Tomography, X-Ray Computed; Humans; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed/methods; Male; Female; Radiographic Image Interpretation, Computer-Assisted/methods; Liver/diagnostic imaging; Middle Aged; Aged
2.
NPJ Digit Med ; 7(1): 97, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622284

ABSTRACT

Meniscal injury is a common type of knee injury, accounting for over 50% of all knee injuries. The clinical diagnosis and treatment of meniscal injury rely heavily on magnetic resonance imaging (MRI). However, accurately diagnosing the meniscus from a comprehensive knee MRI is challenging because of its limited and weak signal, which significantly impedes the precise grading of meniscal injuries. In this study, a visually interpretable fine-grading (VIFG) diagnosis model was developed to enable intelligent, quantified grading of meniscal injuries. Leveraging a multilevel transfer learning framework, it extracts comprehensive features and incorporates an attributional attention module to precisely locate the injured positions. Moreover, an attention-enhancing feedback module effectively concentrates on and distinguishes regions with similar grades of injury. The proposed method was validated on the FastMRI_Knee and Xijing_Knee datasets, achieving mean grading accuracies of 0.8631 and 0.8502 and notably surpassing state-of-the-art grading methods in the error-prone Grade 1 and Grade 2 cases. Additionally, the visually interpretable heatmaps generated by VIFG provide accurate depictions of actual or potential meniscal injury areas beyond human visual capability. Building upon this, a novel fine-grading criterion was introduced for subtypes of meniscal injury, further classifying Grade 2 into 2a, 2b, and 2c in line with the anatomical knowledge of meniscal blood supply. It provides enhanced injury-specific details, facilitating the development of more precise surgical strategies. The efficacy of this subtype classification was evidenced in 20 arthroscopic cases, underscoring the potential of intelligent-assisted diagnosis and treatment for meniscal injuries.
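
The VIFG model builds on a transfer learning backbone; its exact architecture is not reproduced here, so the following is only a generic sketch of fine-tuning an ImageNet-pretrained backbone for a small number of injury grades. The backbone, grade count, frozen layers, and training step are assumptions, not the authors' network.

```python
# Hedged sketch: fine-tuning a pretrained backbone for injury grading.
# NOT the authors' VIFG network; grade count and backbone choice are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

NUM_GRADES = 4  # assumed number of injury grades

model = resnet18(weights=ResNet18_Weights.DEFAULT)       # ImageNet-pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)   # new grading head

# Freeze the earliest layers, fine-tune the rest (one common transfer-learning recipe).
for name, param in model.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1")):
        param.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One toy training step on random data, just to show the loop shape.
images, labels = torch.randn(2, 3, 224, 224), torch.tensor([1, 2])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```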

3.
IEEE J Biomed Health Inform ; 28(5): 3042-3054, 2024 May.
Article in English | MEDLINE | ID: mdl-38376973

ABSTRACT

Accurate fine-grained grading of lumbar intervertebral disc (LIVD) degeneration is essential for the diagnosis and treatment planning of high-incidence low back pain. However, grading accuracy is still limited by the lack of fine-grained degenerative details, mainly because existing grading methods are easily dominated by the salient nucleus pulposus regions in the LIVD and overlook the inconspicuous degenerative changes of the surrounding structures. In this study, a novel regional feature recalibration network (RFRecNet) is proposed to achieve accurate and reliable LIVD degeneration grading. A detection transformer (DETR) is first used to detect all LIVDs, which are then input to the proposed RFRecNet for fine-grained grading. To obtain sufficient features from both the salient nucleus pulposus and the surrounding regions, a regional cube-based feature boosting and suppression (RC-FBS) module is designed to adaptively recalibrate feature extraction and utilization across the various regions of the LIVD, and a feature diversification (FD) module is proposed to capture complementary semantic information from multi-scale features for comprehensive fine-grained degeneration grading. Extensive experiments were conducted on a clinically collected dataset of 500 MR scans with a total of 10225 LIVDs. An average grading accuracy of 90.5%, specificity of 97.5%, sensitivity of 90.8%, and Cohen's kappa coefficient of 0.876 were obtained, indicating that the proposed framework is promising for providing doctors with reliable and consistent fine-grained quantitative evaluation of LIVD degeneration for optimal surgical planning.
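
Grading agreement above is summarized with Cohen's kappa; a minimal sketch of computing it from predicted and reference grades follows. The label vectors are made up for illustration.

```python
# Hedged sketch: Cohen's kappa between predicted and reference degeneration grades.
from sklearn.metrics import cohen_kappa_score

reference_grades = [1, 2, 2, 3, 4, 1, 2, 3]   # illustrative, not data from the paper
predicted_grades = [1, 2, 3, 3, 4, 1, 2, 2]

kappa = cohen_kappa_score(reference_grades, predicted_grades)
print(f"Cohen's kappa = {kappa:.3f}")
```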


Subjects
Image Interpretation, Computer-Assisted; Intervertebral Disc Degeneration; Lumbar Vertebrae; Magnetic Resonance Imaging; Humans; Intervertebral Disc Degeneration/diagnostic imaging; Lumbar Vertebrae/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Algorithms
4.
Phys Med Biol ; 68(23)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-37918343

ABSTRACT

Objective. Ultrasound is the most commonly used examination for the detection and identification of thyroid nodules. Since manual detection is time-consuming and subjective, attempts to introduce machine learning into this process are ongoing. However, the performance of these methods is limited by the low signal-to-noise ratio and tissue contrast of ultrasound images. To address these challenges, we extend thyroid nodule detection from image-based to video-based using the temporal context information in ultrasound videos. Approach. We propose a video-based deep learning model with adjacent frame perception (AFP) for accurate and real-time thyroid nodule detection. Compared with image-based methods, AFP can aggregate semantically similar contextual features in the video. Furthermore, considering the cost of medical image annotation for video-based models, a patch-scale self-supervised model (PASS) is proposed. PASS is trained on unlabeled datasets to improve the performance of the AFP model without additional labelling costs. Main results. The PASS model was trained on 92 videos containing 23 773 frames, of which 60 annotated videos containing 16 694 frames were used to train and evaluate the AFP model. The evaluation was performed from the video, frame, nodule, and localization perspectives. For the localization perspective, we used the average precision metric with the intersection-over-union threshold set to 50% (AP@50), i.e. the area under the smoothed precision-recall curve. Our proposed AFP improved AP@50 from 0.256 to 0.390, while the PASS-enhanced AFP further improved AP@50 to 0.425. AFP and PASS also improve the performance in the evaluations of the other perspectives based on the localization results. Significance. Our video-based model can mitigate the effects of low signal-to-noise ratio and tissue contrast in ultrasound images and enables accurate, real-time detection of thyroid nodules. The multi-perspective evaluation of the ablation experiments demonstrates the effectiveness of the proposed AFP and PASS models.
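
AP@50 counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; here is a minimal sketch of that IoU test on axis-aligned boxes. The box coordinates are illustrative.

```python
# Hedged sketch: IoU between two axis-aligned boxes (x1, y1, x2, y2) and the
# 50% threshold used by AP@50. The boxes below are made-up examples.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

pred, gt = (10, 10, 50, 50), (20, 15, 55, 60)
print(f"IoU = {iou(pred, gt):.3f}, counts at AP@50: {iou(pred, gt) >= 0.5}")
```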


Subjects
Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; alpha-Fetoproteins; Ultrasonography; Machine Learning; Signal-To-Noise Ratio
5.
Phys Med ; 110: 102595, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37178624

ABSTRACT

PURPOSE: Although many deep learning-based abdominal multi-organ segmentation networks have been proposed, the varied intensity distributions and organ shapes of CT images from multiple centers and multiple phases with various diseases pose new challenges for robust abdominal CT segmentation. To achieve robust and efficient abdominal multi-organ segmentation, a new two-stage method is presented in this study. METHODS: A binary segmentation network is used for coarse localization, followed by a multi-scale attention network for the fine segmentation of the liver, kidneys, spleen, and pancreas. To constrain the organ shapes produced by the fine segmentation network, an additional network is pre-trained to learn the shape features of organs with serious diseases and then employed to constrain the training of the fine segmentation network. RESULTS: The performance of the presented segmentation method was extensively evaluated on the multi-center dataset from the Fast and Low GPU Memory Abdominal oRgan sEgmentation (FLARE) challenge, held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The Dice similarity coefficient (DSC) and normalized surface Dice (NSD) were calculated to quantitatively evaluate segmentation accuracy and efficiency. An average DSC of 83.7% and NSD of 64.4% were achieved, and our method won second place among more than 90 participating teams. CONCLUSIONS: The evaluation results on this public challenge demonstrate that our method shows promising robustness and efficiency, which may promote the clinical application of automatic abdominal multi-organ segmentation.
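
The two-stage design first localizes the abdominal region coarsely and then segments finely inside it; below is a minimal sketch of the cropping step that would sit between the two stages. The margin, array shapes, and cropping logic are assumptions, not the authors' implementation.

```python
# Hedged sketch: crop a region of interest around a coarse binary mask so a fine
# segmentation network only sees the localized region. Shapes are placeholders.
import numpy as np

def crop_to_mask(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Return the sub-volume around the nonzero region of the coarse mask."""
    zs, ys, xs = np.nonzero(coarse_mask)
    if zs.size == 0:
        return volume, (slice(None),) * 3  # nothing found: keep the full volume
    bbox = tuple(
        slice(max(int(lo) - margin, 0), min(int(hi) + margin + 1, dim))
        for lo, hi, dim in zip(
            (zs.min(), ys.min(), xs.min()),
            (zs.max(), ys.max(), xs.max()),
            volume.shape,
        )
    )
    return volume[bbox], bbox

ct = np.random.randn(64, 128, 128).astype(np.float32)
coarse = np.zeros_like(ct, dtype=np.uint8)
coarse[20:40, 40:90, 30:100] = 1
roi, bbox = crop_to_mask(ct, coarse)
print(roi.shape, bbox)
```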


Subjects
Algorithms; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Spleen/diagnostic imaging; Image Processing, Computer-Assisted/methods
6.
Article in English | MEDLINE | ID: mdl-37097794

ABSTRACT

Developmental coordination disorder (DCD) is a motor learning disability with a prevalence of 5%-6% in school-aged children, and it may seriously affect the physical and mental health of affected children. Behavior analysis of children helps to explore the mechanism of DCD and to develop better diagnostic protocols. In this study, we investigate the behavioral pattern of children with DCD during gross movement using a visual-motor tracking system. First, the visual components of interest are detected and extracted using a series of intelligent algorithms. Then, kinematic features are defined and calculated to describe the children's behavior, including eye movement, body movement, and the trajectory of the interacting object. Finally, statistical analysis is conducted both between groups with different motor coordination abilities and between groups with different task outcomes. The experimental results show that groups of children with different coordination abilities differ significantly both in the duration of eye gaze focused on the target and in the degree of concentration during aiming, which can serve as behavioral markers to distinguish children with DCD. This finding also provides precise guidance for interventions for children with DCD: in addition to increasing the amount of time spent concentrating, we should focus on improving children's attention levels.
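
One of the kinematic features above is the duration of eye gaze focused on the target; a minimal sketch of computing such a feature from gaze samples and a target bounding box is shown below. The sampling rate, gaze trace, and target box are illustrative assumptions, not the study's data or definitions.

```python
# Hedged sketch: fraction and duration of gaze samples falling inside a target box.
import numpy as np

SAMPLE_RATE_HZ = 60.0  # assumed eye-tracker sampling rate
gaze_xy = np.array([[410, 300], [415, 305], [700, 120], [420, 310], [418, 308]])
target_box = (380, 280, 460, 340)  # (x_min, y_min, x_max, y_max), made up

x_min, y_min, x_max, y_max = target_box
on_target = (
    (gaze_xy[:, 0] >= x_min) & (gaze_xy[:, 0] <= x_max)
    & (gaze_xy[:, 1] >= y_min) & (gaze_xy[:, 1] <= y_max)
)
duration_s = on_target.sum() / SAMPLE_RATE_HZ
print(f"gaze-on-target: {on_target.mean():.2f} of samples, {duration_s:.3f} s")
```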


Subjects
Motor Skills Disorders; Child; Humans; Motor Skills Disorders/diagnosis; Psychomotor Performance; Movement; Eye Movements; Learning; Motor Skills
7.
Med Image Anal ; 82: 102616, 2022 11.
Article in English | MEDLINE | ID: mdl-36179380

ABSTRACT

Automatic segmentation of abdominal organs in CT scans plays an important role in clinical practice. However, most existing benchmarks and datasets focus only on segmentation accuracy, while model efficiency and accuracy on testing cases from different medical centers have not been evaluated. To comprehensively benchmark abdominal organ segmentation methods, we organized the first Fast and Low GPU memory Abdominal oRgan sEgmentation (FLARE) challenge, in which segmentation methods were encouraged to simultaneously achieve high accuracy on testing cases from different medical centers, fast inference speed, and low GPU memory consumption. The winning method surpassed the existing state-of-the-art method, achieving a 19× faster inference speed and reducing GPU memory consumption by 60% with comparable accuracy. We provide a summary of the top methods, make their code and Docker containers publicly available, and give practical suggestions on building accurate and efficient abdominal organ segmentation models. The FLARE challenge remains open for future submissions through a live platform for benchmarking further methodology developments at https://flare.grand-challenge.org/.
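
Efficiency in the challenge above is judged by inference time and peak GPU memory; here is a minimal sketch of measuring both for a PyTorch model. The toy model and input size are placeholders, and this is not the challenge's official evaluation script.

```python
# Hedged sketch: measure inference time and peak GPU memory for one forward pass.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 5, 1))
x = torch.randn(1, 1, 64, 128, 128)  # placeholder CT-like volume

device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device).eval(), x.to(device)

if device == "cuda":
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()

start = time.perf_counter()
with torch.no_grad():
    _ = model(x)
if device == "cuda":
    torch.cuda.synchronize()
print(f"inference time: {time.perf_counter() - start:.3f} s")

if device == "cuda":
    print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**2:.1f} MiB")
```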


Subjects
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Benchmarking; Image Processing, Computer-Assisted/methods
8.
NPJ Digit Med ; 5(1): 151, 2022 Sep 27.
Article in English | MEDLINE | ID: mdl-36168038

ABSTRACT

With the ageing of the world's population, studies of the ageing and degeneration of physiological characteristics of human skin, bones, and muscles have become important topics. Research on the ageing of bones, especially the skull, has received much attention in recent years. In this study, a novel deep learning method representing ageing-related dynamic attention (ARDA) is proposed. The proposed method can quantitatively display the ageing salience of the bones and their change patterns with age on lateral cephalometric radiograph (LCR) images containing the craniofacial region and cervical spine. An age-estimation deep learning model based on 14142 LCR images from individuals aged 4 to 40 years is trained to extract ageing-related features, and from these features ageing salience maps are generated by the Grad-CAM method. All ageing salience maps for the same age are merged into an ARDA map corresponding to that age. The ageing salience maps show that ARDA is mainly concentrated in three regions of LCR images: the teeth, the craniofacial region, and the cervical spine. Furthermore, the dynamic distribution of ARDA at different ages and across instances in LCR images is quantitatively analyzed. The experimental results on 3014 cases show that ARDA can accurately reflect development and degeneration patterns in LCR images.
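
The ARDA map for an age is obtained by merging all ageing salience maps of subjects with that age; a minimal sketch of that merging step follows, assuming the per-subject salience maps are already computed (e.g. by Grad-CAM) and that merging means averaging. The maps here are random placeholders.

```python
# Hedged sketch: merge per-subject salience maps into one ARDA map per age.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
# (age, salience_map) pairs, e.g. produced by Grad-CAM on each LCR image.
salience_by_subject = [(12, rng.random((64, 64))) for _ in range(5)] + \
                      [(30, rng.random((64, 64))) for _ in range(3)]

maps_per_age = defaultdict(list)
for age, smap in salience_by_subject:
    maps_per_age[age].append(smap)

arda_maps = {age: np.mean(np.stack(maps), axis=0) for age, maps in maps_per_age.items()}
for age, arda in sorted(arda_maps.items()):
    print(f"age {age}: ARDA map shape {arda.shape}, mean salience {arda.mean():.3f}")
```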

9.
Phys Med Biol ; 67(12)2022 06 08.
Article in English | MEDLINE | ID: mdl-35611711

ABSTRACT

Objective. Locoregional recurrence (LRR) is one of the leading causes of treatment failure in head and neck (H&N) cancer. Accurately predicting LRR after radiotherapy is essential to achieving better treatment outcomes for patients with H&N cancer through the development of personalized treatment strategies. We aim to develop an end-to-end multi-modality and multi-view feature extension method (MMFE) to predict LRR in H&N cancer. Approach. Deep learning (DL) has been widely used for building prediction models and has achieved great success. Nevertheless, 2D-based DL models inherently fail to utilize the contextual information from adjacent slices, while complicated 3D models have a substantially larger number of parameters, which require more training samples, memory and computing resources. In the proposed MMFE scheme, through the multi-view feature expansion and projection dimension reduction operations, we are able to reduce the model complexity while preserving volumetric information. Additionally, we designed a multi-modality convolutional neural network that can be trained in an end-to-end manner and can jointly optimize the use of deep features from CT, PET and clinical data to improve the model's prediction ability. Main results. The dataset included 206 eligible patients, of whom 49 had LRR while 157 did not. The proposed MMFE method obtained a higher AUC value than the other four methods. The best prediction result was achieved when using all three modalities, which yielded an AUC value of 0.81. Significance. Comparison experiments demonstrated the superior performance of MMFE compared with other 2D/3D-DL-based methods. By combining CT, PET and clinical features, MMFE could potentially identify H&N cancer patients at high risk for LRR so that personalized treatment strategies can be developed accordingly.
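
The abstract describes jointly optimizing deep CT and PET features together with clinical variables; below is a minimal sketch of a late-fusion network in that spirit. The layer sizes, feature dimensions, and clinical variable count are assumptions, not the MMFE architecture.

```python
# Hedged sketch: fuse CT-branch features, PET-branch features, and clinical
# variables for a binary recurrence prediction. Not the authors' MMFE network.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, img_feat_dim=64, n_clinical=8):
        super().__init__()
        def branch():  # small 2D CNN, same design reused for CT and PET inputs
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, img_feat_dim),
            )
        self.ct_branch, self.pet_branch = branch(), branch()
        self.head = nn.Sequential(
            nn.Linear(2 * img_feat_dim + n_clinical, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, ct, pet, clinical):
        fused = torch.cat([self.ct_branch(ct), self.pet_branch(pet), clinical], dim=1)
        return self.head(fused)  # logit for LRR probability

net = LateFusionNet()
logit = net(torch.randn(2, 1, 96, 96), torch.randn(2, 1, 96, 96), torch.randn(2, 8))
print(logit.shape)  # torch.Size([2, 1])
```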


Subjects
Head and Neck Neoplasms; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Humans; Neural Networks, Computer
10.
Mod Rheumatol ; 32(5): 968-973, 2022 Aug 20.
Article in English | MEDLINE | ID: mdl-34918143

ABSTRACT

OBJECTIVE: This study developed a new automatic algorithm for the quantification and grading of ankylosing spondylitis (AS)-related hip arthritis with magnetic resonance imaging (MRI). METHODS: (1) We designed a new deep learning-based segmentation network and a deep learning-based classification network. (2) We trained the segmentation and classification models on the training data and validated their performance. (3) Segmentation results of inflammation in the MRI images were obtained, and the hip joint was quantified using these results. RESULTS: A retrospective analysis was performed on 141 cases; 101 patients were included in the derivation cohort and 40 in the validation cohort. In the derivation group, the median percentage of bone marrow oedema (BME) for each grade was as follows: 36% for grade 1 (<15%), 42% for grade 2 (15-30%), and 22% for grade 3 (≥30%). The accuracy on 835 AS images from 44 cases was 85.7%, and our model made 31 correct decisions out of 40 AS test cases. CONCLUSIONS: Automatic computer-based analysis of MRI has the potential to be a useful method for the diagnosis and grading of AS-related hip BME.
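
The grades above are defined by the BME percentage thresholds <15%, 15-30%, and ≥30%; here is a minimal sketch of deriving a grade from a segmented BME mask and a hip-region mask. Computing the percentage as a voxel-volume fraction is an assumption about the quantification step.

```python
# Hedged sketch: grade AS hip BME from the segmented oedema volume fraction,
# using the thresholds quoted in the abstract (<15%, 15-30%, >=30%).
import numpy as np

def bme_grade(bme_mask: np.ndarray, region_mask: np.ndarray) -> int:
    """Return grade 1-3 from the BME fraction of the assessed hip region."""
    region_voxels = int(region_mask.sum())
    if region_voxels == 0:
        raise ValueError("empty region mask")
    fraction = float(np.logical_and(bme_mask, region_mask).sum()) / region_voxels
    if fraction < 0.15:
        return 1
    if fraction < 0.30:
        return 2
    return 3

region = np.ones((32, 32, 32), dtype=bool)
bme = np.zeros_like(region)
bme[:, :, :7] = True          # roughly 22% of the region
print(bme_grade(bme, region))  # -> 2
```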


Subjects
Deep Learning; Spondylitis, Ankylosing; Bone Marrow/diagnostic imaging; Bone Marrow/pathology; Edema/diagnostic imaging; Edema/etiology; Humans; Magnetic Resonance Imaging/methods; Retrospective Studies; Spondylitis, Ankylosing/complications; Spondylitis, Ankylosing/diagnostic imaging; Spondylitis, Ankylosing/pathology
11.
Sensors (Basel) ; 21(21)2021 Oct 28.
Article in English | MEDLINE | ID: mdl-34770469

ABSTRACT

Heart rate is one of the most important diagnostic bases for cardiovascular disease. This paper introduces a deep autoencoding strategy into feature extraction of electrocardiogram (ECG) signals and proposes a beat-to-beat heart rate estimation method based on convolutional autoencoding and Gaussian mixture clustering. High-level heartbeat features are first extracted in an unsupervised manner by training the convolutional autoencoder network; adaptive Gaussian mixture clustering is then applied to detect heartbeat locations from the extracted features and to calculate the beat-to-beat heart rate. Compared with existing heartbeat classification/detection methods, the proposed unsupervised feature learning and heartbeat clustering method does not rely on accurate labeling of each heartbeat location, which can save a great deal of time and effort in human annotation. Experimental results demonstrate that the proposed method maintains better accuracy and generalization ability than existing ECG heart rate estimation methods and could be a robust long-term heart rate monitoring solution for wearable ECG devices.
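
After the autoencoder produces per-segment features, a Gaussian mixture model separates heartbeat-like segments from the rest; a minimal sketch of that clustering step on placeholder features follows. The two-component choice and the synthetic features are assumptions.

```python
# Hedged sketch: cluster learned heartbeat features with a Gaussian mixture model.
# The feature matrix is random placeholder data, not real autoencoder output.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
heartbeat_like = rng.normal(loc=2.0, scale=0.5, size=(200, 8))
background_like = rng.normal(loc=0.0, scale=0.5, size=(800, 8))
features = np.vstack([heartbeat_like, background_like])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)

# The smaller cluster is assumed here to correspond to heartbeat segments.
counts = np.bincount(labels)
heartbeat_cluster = int(np.argmin(counts))
print(f"cluster sizes: {counts.tolist()}, assumed heartbeat cluster: {heartbeat_cluster}")
```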


Subjects
Electrocardiography; Wearable Electronic Devices; Algorithms; Cluster Analysis; Heart Rate; Humans; Normal Distribution; Signal Processing, Computer-Assisted
12.
Phys Med Biol ; 66(20)2021 10 05.
Article in English | MEDLINE | ID: mdl-34517352

ABSTRACT

Objective. Ankylosing spondylitis (AS) is a disabling systemic disease that seriously threatens patients' quality of life. Magnetic resonance imaging (MRI) is highly preferred in clinical diagnosis due to its high contrast and tissue resolution. However, owing to the uncertainty and inhomogeneous intensity of AS lesions in MRI, it is still challenging and time-consuming for doctors to quantify the lesions in order to grade the patient's condition. Thus, an automatic AS grading method is presented in this study, which integrates lesion segmentation and grading in one pipeline. Approach. To tackle the large variations in lesion shapes, sizes, and intensity distributions, a lightweight hybrid multi-scale convolutional neural network with reinforcement learning (LHR-Net) is proposed for AS lesion segmentation. Specifically, the proposed LHR-Net is equipped with a newly proposed hybrid multi-scale module, which consists of multiple convolution layers with different kernel sizes and dilation rates for extracting sufficient multi-scale features. Additionally, a reinforcement learning-based data augmentation module is used to deal with subjects with diffuse and fuzzy lesions that are difficult to segment. Furthermore, to resolve the incomplete segmentation results caused by the inhomogeneous intensity distributions of AS lesions in MR images, a voxel constraint strategy is proposed to weight the training voxel labels in the lesion regions. With the accurately segmented AS lesions, automatic AS grading is then performed by a ResNet-50-based classification network. Main results. The performance of the proposed LHR-Net was extensively evaluated on a clinically collected AS MRI dataset of 100 subjects. The Dice similarity coefficient (DSC), average surface distance, Hausdorff distance at the 95th percentile (HD95), predicted positive volume, and sensitivity were employed to quantitatively evaluate the segmentation results. The average DSC of the proposed LHR-Net reached 0.71 on the test set, outperforming the other state-of-the-art segmentation methods by 0.04. Significance. With the accurately segmented lesions, 31 of the 38 subjects in the test set were correctly graded, which demonstrates that the proposed LHR-Net might provide a potential automatic method for reproducible computer-assisted diagnosis of AS grading.
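
HD95 above is the 95th percentile of boundary-to-boundary nearest distances between the predicted and reference masks; a minimal sketch of that computation follows. Approximating the surface as the voxels removed by one erosion is an assumption rather than the paper's exact definition.

```python
# Hedged sketch: 95th-percentile Hausdorff distance (HD95) between two binary masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask: np.ndarray) -> np.ndarray:
    border = mask & ~binary_erosion(mask)   # voxels on the mask boundary
    return np.argwhere(border)

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    p, g = surface_points(pred.astype(bool)), surface_points(gt.astype(bool))
    d_pg, _ = cKDTree(g).query(p)  # each pred surface point -> nearest gt point
    d_gp, _ = cKDTree(p).query(g)  # each gt surface point -> nearest pred point
    return float(np.percentile(np.concatenate([d_pg, d_gp]), 95))

a = np.zeros((40, 40, 40), dtype=bool)
b = np.zeros_like(a)
a[10:30, 10:30, 10:30] = True
b[12:32, 10:30, 10:30] = True
print(f"HD95 = {hd95(a, b):.2f} voxels")
```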


Subjects
Spondylitis, Ankylosing; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Quality of Life; Spondylitis, Ankylosing/diagnostic imaging
13.
IEEE Trans Cybern ; PP2021 Jul 08.
Article in English | MEDLINE | ID: mdl-34236979

ABSTRACT

Attention-based deep multiple-instance learning (MIL) has been applied to many machine-learning tasks with imprecise training labels. It is also appealing for hyperspectral target detection, which only requires the label of an area containing some targets, relaxing the effort of labeling individual pixels in the scene. This article proposes an L1 sparsity-regularized attention multiple-instance neural network (L1-attention MINN) for hyperspectral target detection with imprecise labels that enforces the discrimination of false-positive instances from positively labeled bags. The sparsity constraint applied to the attention estimated for the positive training bags strictly complies with the definition of MIL and maintains better discriminative ability. The proposed algorithm has been evaluated on both simulated and real-field hyperspectral (subpixel) target detection tasks, where it outperforms state-of-the-art comparisons, showing the effectiveness of the proposed method for target detection from imprecisely labeled hyperspectral data.
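
The key idea is attention pooling over the instances of a bag, with an L1 penalty pushing most attention weights toward zero; below is a minimal sketch of such a pooling layer and loss term. Sigmoid gates are used here so the L1 term is not trivially constant; the dimensions, penalty weight, and gating choice are assumptions, not the paper's exact network.

```python
# Hedged sketch: attention-based MIL pooling with an L1 sparsity penalty on the
# per-instance attention gates of a bag. Sizes and lambda are illustrative.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, feat_dim=32, hidden=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, bag):                         # bag: (n_instances, feat_dim)
        gates = torch.sigmoid(self.score(bag))      # per-instance attention in (0, 1)
        weights = gates / (gates.sum() + 1e-8)      # normalized pooling weights
        bag_feat = (weights * bag).sum(dim=0)       # weighted instance pooling
        return self.classifier(bag_feat), gates

model = AttentionMILPool()
bag = torch.randn(20, 32)            # one positively labeled bag of 20 instances
label = torch.tensor([1.0])

logit, gates = model(bag)
bce = nn.functional.binary_cross_entropy_with_logits(logit, label)
l1_sparsity = gates.abs().sum()      # encourages few instances to receive attention
loss = bce + 0.01 * l1_sparsity
loss.backward()
print(float(loss))
```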

14.
IEEE J Biomed Health Inform ; 25(9): 3396-3407, 2021 09.
Article in English | MEDLINE | ID: mdl-33945489

ABSTRACT

Non-invasive heart rate estimation is of great importance in the daily monitoring of cardiovascular diseases. In this paper, a bidirectional long short-term memory (bi-LSTM) regression network is developed for non-invasive heart rate estimation from ballistocardiogram (BCG) signals. The proposed deep regression model provides an effective solution to existing challenges in BCG heart rate estimation, such as the mismatch between BCG signals and the ground-truth reference, multi-sensor fusion, and effective time-series feature learning. Allowing label uncertainty in the estimation reduces the manual cost of data annotation while further improving heart rate estimation performance. Compared with state-of-the-art BCG heart rate estimation methods, the strong fitting and generalization ability of the proposed deep regression model maintains better robustness to noise (e.g., sensor noise) and perturbations (e.g., body movements) in the BCG signals and provides a more reliable solution for long-term heart rate monitoring.
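
A minimal sketch of a bidirectional LSTM regressing a heart rate value from a window of multi-channel BCG samples follows. The channel count, window length, and output head are assumptions, not the paper's configuration.

```python
# Hedged sketch: a bi-LSTM that regresses heart rate from multi-channel BCG windows.
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # forward + backward hidden states

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # one heart rate estimate per window

model = BiLSTMRegressor()
windows = torch.randn(8, 500, 4)      # e.g. 8 windows of 500 samples, 4 BCG sensors
print(model(windows).shape)           # torch.Size([8, 1])
```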


Subjects
Ballistocardiography; Data Curation; Heart Rate; Humans; Monitoring, Physiologic; Movement
15.
Biomed Res Int ; 2021: 6683931, 2021.
Article in English | MEDLINE | ID: mdl-33542924

ABSTRACT

Colorectal imaging improves the diagnosis of colorectal diseases by providing colorectal images, but manual diagnosis of colorectal disease is labor-intensive and time-consuming. In this paper, we present a method for automatic colorectal disease classification and segmentation. Because the colorectal data are label-imbalanced and contain difficult samples, a classification method based on a self-paced transfer VGG network (STVGG) is proposed. ImageNet-pretrained network parameters are transferred to the VGG network, which is then trained on the colorectal data to obtain a good initial network performance, and self-paced learning is used to optimize the network so that classification performance on label-imbalanced and difficult samples is improved. To assist the colonoscopist in accurately determining whether a polyp needs surgical resection, the features of the trained STVGG model are shared with a U-Net segmentation network as its encoder, avoiding repeated learning for the polyp segmentation model. Experimental results on 3061 colorectal images show that the proposed method obtains higher classification accuracy (96%) and better segmentation performance than several other methods, and polyps can be segmented accurately from the surrounding tissues. The segmentation results underpin the potential of deep learning methods for assisting colonoscopists in identifying polyps and enabling timely resection of these polyps at an early stage.
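
Self-paced learning gradually admits harder samples into training by keeping only those whose loss is below a growing threshold; a minimal sketch of that sample-weighting step is shown below. The threshold schedule and toy losses are assumptions, not the STVGG training recipe.

```python
# Hedged sketch: hard self-paced sample weighting. Samples with loss below the
# current threshold lambda get weight 1, others 0; lambda grows over epochs.
import torch

def self_paced_weights(losses: torch.Tensor, lam: float) -> torch.Tensor:
    """Hard self-paced regularizer: v_i = 1 if loss_i < lambda else 0."""
    return (losses < lam).float()

per_sample_loss = torch.tensor([0.2, 0.9, 1.8, 0.4, 2.5])   # toy values
for epoch, lam in enumerate([0.5, 1.0, 2.0, 3.0]):           # growing threshold
    v = self_paced_weights(per_sample_loss, lam)
    weighted_loss = (v * per_sample_loss).sum() / v.sum().clamp(min=1)
    print(f"epoch {epoch}: lambda={lam}, active samples={int(v.sum())}, "
          f"weighted loss={weighted_loss:.3f}")
```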


Subjects
Algorithms; Colonoscopy/methods; Colorectal Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Automation; Colorectal Neoplasms/classification; Colorectal Neoplasms/pathology; Humans; Machine Learning
16.
Biomed Res Int ; 2021: 6679603, 2021.
Article in English | MEDLINE | ID: mdl-33628806

ABSTRACT

Accurate segmentation of abdominal organs has always been a difficult problem, especially for organs with cavities. MRI-guided radiotherapy is particularly attractive for abdominal targets given the low soft-tissue contrast of CT, but within the constraints of the radiotherapy environment, only low-field MRI can be used for stomach localization, tracking, and treatment planning. In clinical applications, an existing 3D segmentation network trained on low-field MRI alone produces results that cannot be used for radiotherapy planning because of its poor segmentation performance, while directly using historical high-field MR images to expand the training data introduces a domain shift problem. How can images from different domains be used to improve the segmentation accuracy of a deep neural network? A 3D low-field MRI stomach segmentation method based on transfer learning image enhancement is proposed in this paper. In this method, a Cycle Generative Adversarial Network (CycleGAN) is used to construct and learn the mapping between high- and low-field MRI and to overcome the domain shift. The images generated from the high-field MRI through the CycleGAN network carry the transferred information and serve as extended data, which are combined with the low-field MRI to form the training data for the 3D Res-Unet segmentation network. Furthermore, the convolution, batch normalization, and ReLU layers were together replaced with a residual module to relieve the vanishing gradient problem of the neural network. The experimental results show that the Dice coefficient is 2.5 percent better than that of the baseline method, over-segmentation and under-segmentation are reduced by 0.7 and 5.5 percent, respectively, and sensitivity is improved by 6.4 percent.
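
The abstract mentions replacing the plain convolution + batch normalization + ReLU stack with a residual module to ease vanishing gradients; a minimal sketch of such a 3D residual block follows. The channel counts are assumptions, not the paper's exact Res-Unet block.

```python
# Hedged sketch: a 3D residual block in place of a plain Conv-BN-ReLU stack.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch),
        )
        # 1x1x1 projection so the skip connection matches the output channels.
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

block = ResidualBlock3D(1, 16)
print(block(torch.randn(1, 1, 32, 64, 64)).shape)  # torch.Size([1, 16, 32, 64, 64])
```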


Subjects
Deep Learning; Magnetic Resonance Imaging; Stomach/diagnostic imaging; Tomography, X-Ray Computed; Humans
17.
IEEE Trans Cybern ; 51(5): 2835-2846, 2021 May.
Article in English | MEDLINE | ID: mdl-31425063

ABSTRACT

Ensemble learning performs better than a single classifier in most tasks owing to the diversity among multiple classifiers. However, enhancing diversity generally comes at the expense of reducing the accuracy of individual classifiers, so balancing diversity and accuracy is crucial for improving ensemble performance. In this paper, we propose a new ensemble method that exploits the correlation between individual classifiers and their corresponding weights by constructing a joint optimization model to achieve a tradeoff between diversity and accuracy. Specifically, the proposed framework can be modeled as a shallow network and efficiently trained in an end-to-end manner. In the proposed ensemble method, not only can high overall classification performance be achieved by the weighted classifiers, but each individual classifier can also be updated based on the error of the optimized weighted classifier ensemble. Furthermore, a sparsity constraint is imposed on the weights to enforce that only a subset of the individual classifiers is selected for the final classification. Finally, experimental results on the UCI datasets demonstrate that the proposed method effectively improves classification performance compared with relevant existing ensemble methods.
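
A minimal sketch of the core idea, learning a sparse weight vector over base-classifier outputs by gradient descent with an L1 penalty, is shown below. The base-classifier scores and labels are synthetic, and this simplified optimization is a stand-in for, not a reproduction of, the joint model described above.

```python
# Hedged sketch: learn sparse ensemble weights over fixed base-classifier scores
# with an L1 penalty. Scores and labels are synthetic placeholders.
import torch

torch.manual_seed(0)
n_samples, n_classifiers = 200, 10
scores = torch.randn(n_samples, n_classifiers)        # base-classifier outputs (logits)
labels = (scores[:, :3].mean(dim=1) > 0).float()      # synthetic ground truth

weights = torch.zeros(n_classifiers, requires_grad=True)
optimizer = torch.optim.Adam([weights], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    ensemble_logit = scores @ weights                 # weighted combination
    loss = torch.nn.functional.binary_cross_entropy_with_logits(ensemble_logit, labels)
    loss = loss + 0.05 * weights.abs().sum()          # sparsity term on the weights
    loss.backward()
    optimizer.step()

print("learned weights:", [f"{w:.3f}" for w in weights.detach().tolist()])
```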

18.
Phys Med Biol ; 66(3): 035001, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33197901

ABSTRACT

Automated male pelvic multi-organ segmentation on CT images is highly desired for applications including radiotherapy planning. To further improve the performance and efficiency of existing automated segmentation methods, in this study we propose a multi-task edge-recalibrated network (MTER-Net), which aims to overcome challenges including blurry boundaries, large inter-patient appearance variations, and low soft-tissue contrast. The proposed MTER-Net is equipped with the following novel components. (a) To exploit the saliency and stability of the femoral heads, we employ a lightweight localization module to locate the target region and efficiently remove the complex background. (b) We add an edge stream to the regular segmentation stream to focus on edge-related information, distinguish organs with blurry boundaries, and thus boost the overall segmentation performance. Between the regular segmentation stream and the edge stream, we introduce an edge recalibration module at each resolution level to connect the intermediate layers and deliver the higher-level activations from the regular stream to the edge stream to denoise irrelevant activations. (c) Finally, using a 3D Atrous Spatial Pyramid Pooling (ASPP) feature fusion module, we fuse the features at different scales in the regular stream and the predictions from the edge stream to form the final segmentation result. The proposed segmentation network was evaluated on 200 prostate cancer patient CT images with manually delineated contours of the bladder, rectum, seminal vesicles, and prostate. Segmentation performance was quantitatively evaluated using three metrics: the Dice similarity coefficient (DSC), average surface distance (ASD), and 95% surface distance (95SD). The proposed MTER-Net achieves an average DSC of 86.35%, ASD of 1.09 mm, and 95SD of 3.53 mm over the four organs, outperforming state-of-the-art segmentation networks by a large margin. Specifically, the DSC results for the four organs are 96.49% (bladder), 86.39% (rectum), 76.38% (seminal vesicle), and 86.14% (prostate). In conclusion, we demonstrate that the proposed MTER-Net efficiently attains superior performance to state-of-the-art pelvic organ segmentation methods.
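
Component (c) above is a 3D atrous spatial pyramid pooling module; here is a minimal sketch of parallel dilated 3D convolutions fused by concatenation. The channel counts and dilation rates are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: a 3D ASPP block applying parallel dilated convolutions and fusing them.
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

aspp = ASPP3D(in_ch=16, out_ch=8)
print(aspp(torch.randn(1, 16, 16, 32, 32)).shape)  # torch.Size([1, 8, 16, 32, 32])
```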


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pelvis/diagnostic imaging; Tomography, X-Ray Computed; Humans; Male
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 451-454, 2020 07.
Article in English | MEDLINE | ID: mdl-33018025

ABSTRACT

Inspired by the application of recurrent neural networks (RNNs) to image recognition, in this paper we propose a heartbeat detection framework based on the Gated Recurrent Unit (GRU) network. In this contribution, the heartbeat detection task from ballistocardiogram (BCG) signals is modeled as a classification problem in which segments of BCG signals are formulated as images and fed into the GRU network for feature extraction. The proposed framework has advantages in the fusion of multi-channel BCG signals and the effective extraction of the temporal and waveform characteristics of the heartbeat signal, thereby enhancing heart rate estimation accuracy. On laboratory-collected BCG data, the proposed method achieved the best heart rate estimation results compared with previous algorithms.
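
A minimal sketch of a GRU classifier over BCG segments, treating each segment as a sequence of multi-channel samples, is shown below. The segment length, channel count, and classification head are assumptions, not the paper's exact network.

```python
# Hedged sketch: a GRU that classifies fixed-length multi-channel BCG segments as
# heartbeat / non-heartbeat. Sizes are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class GRUHeartbeatClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # heartbeat vs. non-heartbeat

    def forward(self, x):                  # x: (batch, time, channels)
        _, h_last = self.gru(x)            # h_last: (num_layers, batch, hidden)
        return self.head(h_last[-1])

model = GRUHeartbeatClassifier()
segments = torch.randn(16, 200, 3)         # 16 segments, 200 samples, 3 BCG channels
print(model(segments).shape)               # torch.Size([16, 2])
```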


Subjects
Ballistocardiography; Algorithms; Data Collection; Heart Rate; Humans; Neural Networks, Computer
20.
Comput Math Methods Med ; 2020: 4942121, 2020.
Article in English | MEDLINE | ID: mdl-32802148

ABSTRACT

Transesophageal echocardiography (TEE) has become an essential tool in the interventional cardiologist's daily toolbox: because the sensor sits in the esophagus directly behind the heart, it allows continuous, trauma-free visualization of the moving organ and real-time observation of the heartbeat, making it useful for navigation during surgery. However, TEE images provide very limited information about clear cardiac anatomical structures, whereas computed tomography (CT) images can provide such anatomical information and can therefore serve as guidance for interpreting TEE images. In this paper, we focus on transferring anatomical information from CT images to TEE images via registration, which is challenging yet significant for physicians and clinicians owing to the extreme morphological deformation and different appearance between CT and TEE images of the same person. We propose a learning-based method to register cardiac CT images to TEE images. To reduce the deformation between the two modalities, we introduce the Cycle Generative Adversarial Network (CycleGAN) to simulate TEE-like images from CT images and thereby reduce their appearance gap; we then perform non-rigid registration to align the TEE-like images with the TEE images. Experimental results on both children's and adults' CT and TEE images show that the proposed method outperforms the other compared methods. Notably, reducing the appearance gap between CT and TEE images can help physicians and clinicians obtain the anatomical information of regions of interest in TEE images during cardiac surgery.
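
As background for the non-rigid step, here is a minimal sketch of applying a dense displacement field to warp one 2D image toward another, which is the basic operation performed once a registration has estimated such a field. The synthetic image and constant displacement field are placeholders; the paper's registration method itself is not reproduced here.

```python
# Hedged sketch: warp a 2D image with a dense displacement field, the operation
# applied after a non-rigid registration has estimated the field.
import numpy as np
from scipy.ndimage import map_coordinates

image = np.zeros((100, 100), dtype=np.float32)
image[40:60, 40:60] = 1.0                      # a bright square to move around

# Displacement field (dy, dx) per pixel: here a constant shift of (5, -3) pixels.
dy = np.full(image.shape, 5.0)
dx = np.full(image.shape, -3.0)

yy, xx = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
warped = map_coordinates(image, [yy + dy, xx + dx], order=1, mode="constant")

print(image[50, 50], warped[45, 53])  # the square's content appears shifted
```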


Subjects
Echocardiography, Transesophageal/statistics & numerical data; Heart Defects, Congenital/diagnostic imaging; Heart Defects, Congenital/surgery; Surgery, Computer-Assisted/statistics & numerical data; Tomography, X-Ray Computed/statistics & numerical data; Adolescent; Adult; Databases, Factual/statistics & numerical data; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Multimodal Imaging/statistics & numerical data; Neural Networks, Computer