Results 1 - 20 of 33
1.
BMC Med Imaging ; 24(1): 138, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858645

ABSTRACT

BACKGROUND: This study aimed to investigate alterations in the structural integrity of superior longitudinal fasciculus subcomponents with increasing white matter hyperintensity severity, as well as their relationship to cognitive performance in cerebral small vessel disease. METHODS: 110 cerebral small vessel disease participants with white matter hyperintensities were recruited. White matter hyperintensities were graded for each subject according to the Fazekas scale, and all subjects were divided into two groups. Probabilistic fiber tracking was used to analyze the microstructural characteristics of the superior longitudinal fasciculus subcomponents. RESULTS: Probabilistic fiber tracking showed that the mean, radial, and axial diffusivity values of the left arcuate fasciculus, as well as the mean diffusivity of the right arcuate fasciculus and the left superior longitudinal fasciculus III, were significantly higher in the high white matter hyperintensity rating group than in the low rating group (p < 0.05). The mean diffusivity of the left superior longitudinal fasciculus III was negatively related to participants' Montreal Cognitive Assessment scores (p < 0.05). CONCLUSIONS: Structural integrity injury of the bilateral arcuate fasciculus and left superior longitudinal fasciculus III becomes more severe with the aggravation of white matter hyperintensities, and injury of the left superior longitudinal fasciculus III correlates with cognitive impairment in cerebral small vessel disease.
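For readers who want to see the statistical design in code, a minimal sketch of the two analyses above is given below (not the authors' code), run on synthetic stand-in data: a Welch t-test comparing left SLF-III mean diffusivity between Fazekas-based WMH groups, and a Spearman correlation against MoCA. The column names, values, and choice of tests are assumptions.

import numpy as np
import pandas as pd
from scipy import stats

# Synthetic stand-in for the per-subject table (real column names and values are assumptions):
# WMH group, left SLF-III mean diffusivity, and MoCA score for 64 + 46 subjects.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wmh_group": ["low"] * 64 + ["high"] * 46,
    "left_slf3_md": np.r_[rng.normal(0.80, 0.05, 64), rng.normal(0.88, 0.05, 46)],
    "moca": np.r_[rng.normal(26, 2, 64), rng.normal(23, 2, 46)].round(),
})

low = df.loc[df["wmh_group"] == "low", "left_slf3_md"]
high = df.loc[df["wmh_group"] == "high", "left_slf3_md"]

# Group difference in mean diffusivity (the abstract reports p < 0.05 for several tracts)
t_stat, p_group = stats.ttest_ind(high, low, equal_var=False)

# Association between left SLF-III mean diffusivity and MoCA (reported as negative)
rho, p_corr = stats.spearmanr(df["left_slf3_md"], df["moca"])

print(f"group difference: t = {t_stat:.2f}, p = {p_group:.3g}")
print(f"MD vs MoCA: rho = {rho:.2f}, p = {p_corr:.3g}")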


Subjects
Cerebral Small Vessel Diseases, Diffusion Tensor Imaging, White Matter, Humans, Cerebral Small Vessel Diseases/diagnostic imaging, Cerebral Small Vessel Diseases/pathology, Cerebral Small Vessel Diseases/complications, Male, Female, White Matter/diagnostic imaging, White Matter/pathology, Aged, Middle Aged, Diffusion Tensor Imaging/methods, Cognition, Cognitive Dysfunction/diagnostic imaging, Cognitive Dysfunction/pathology, Cognitive Dysfunction/etiology
2.
IEEE J Biomed Health Inform ; 28(5): 3003-3014, 2024 May.
Article in English | MEDLINE | ID: mdl-38470599

ABSTRACT

Fusing multi-modal radiology and pathology data with complementary information can improve the accuracy of tumor typing. However, collecting pathology data is difficult because it is costly and sometimes only obtainable after surgery, which limits the application of multi-modal methods in diagnosis. To address this problem, we propose comprehensively learning multi-modal radiology-pathology data during training while using only uni-modal radiology data at test time. Concretely, a Memory-aware Hetero-modal Distillation Network (MHD-Net) is proposed, which can distill well-learned multi-modal knowledge from the teacher to the student with the assistance of memory. In the teacher, to tackle the challenge of hetero-modal feature fusion, we propose a novel spatial-differentiated hetero-modal fusion module (SHFM) that models spatial-specific tumor information correlations across modalities. As only radiology data is accessible to the student, we store pathology features in the proposed contrast-boosted typing memory module (CTMM), which performs type-wise memory updating and stage-wise contrastive memory boosting to ensure the effectiveness and generalization of memory items. In the student, to improve cross-modal distillation, we propose a multi-stage memory-aware distillation (MMD) scheme that reads memory-aware pathology features from the CTMM to remedy missing modal-specific information. Furthermore, we construct a Radiology-Pathology Thymic Epithelial Tumor (RPTET) dataset containing paired CT and WSI images with annotations. Experiments on the RPTET and CPTAC-LUAD datasets demonstrate that MHD-Net significantly improves tumor typing and outperforms existing multi-modal methods in missing-modality situations.
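As a rough illustration of the teacher-student idea described above (and only that), the sketch below shows a generic cross-modal feature-distillation loss in PyTorch; it is not the MHD-Net implementation and omits the SHFM, CTMM, and MMD modules. The shapes, the MSE matching term, and the weighting factor are assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """student_feat / teacher_feat: (B, C) features; the student sees radiology only,
    the teacher was trained on paired radiology-pathology data."""
    # Match the student's radiology-only features to the teacher's fused features
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    # Standard supervised typing loss on the student
    cls_loss = F.cross_entropy(student_logits, labels)
    return cls_loss + alpha * feat_loss

# Usage on dummy tensors (batch of 4, 256-dim features, 3 tumor types)
s_feat, t_feat = torch.randn(4, 256), torch.randn(4, 256)
logits, labels = torch.randn(4, 3), torch.randint(0, 3, (4,))
loss = distillation_loss(s_feat, t_feat, logits, labels)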


Subjects
Glandular and Epithelial Neoplasms, Thymus Neoplasms, Humans, Thymus Neoplasms/diagnostic imaging, Glandular and Epithelial Neoplasms/diagnostic imaging, Computer-Assisted Image Interpretation/methods, X-Ray Computed Tomography/methods, Algorithms, Neural Networks (Computer), Deep Learning, Multimodal Imaging/methods
3.
Comput Biol Med ; 170: 108039, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38308874

ABSTRACT

Brain tumors are among the most prevalent neoplasms in current medical studies. Accurately distinguishing and classifying brain tumor types is crucial for patient treatment and survival in clinical practice. However, existing computer-aided diagnostic pipelines are inadequate for practical medical use due to tumor complexity. In this study, we curated a multi-centre brain tumor dataset that includes various clinical brain tumor data types, with both segmentation and classification annotations, surpassing previous efforts. To improve brain tumor segmentation accuracy, we propose a new segmentation method, HSA-Net. This method utilizes the Shared Weight Dilated Convolution module (SWDC) and Hybrid Dense Dilated Convolution module (HDense) to capture multi-scale information while minimizing parameter count. The Effective Multi-Dimensional Attention (EMA) and Important Feature Attention (IFA) modules effectively aggregate task-related information. We introduce a novel clinical brain tumor computer-aided diagnosis (CAD) pipeline that combines HSA-Net with pipeline modification. This approach not only improves segmentation accuracy but also uses the segmentation mask as an additional channel feature to enhance brain tumor classification results. Our experimental evaluation on 3,327 real clinical cases demonstrates the effectiveness of the proposed method, achieving an average Dice coefficient of 86.85% for segmentation and a classification accuracy of 95.35%. We also validated the effectiveness of our proposed method on the publicly available BraTS dataset.
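The mask-as-extra-channel step mentioned above is simple to illustrate. The sketch below is an assumption-laden toy classifier, not HSA-Net; it only shows how a predicted segmentation mask can be concatenated to the image as an additional input channel before classification.

import torch
import torch.nn as nn

class MaskConditionedClassifier(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        # +1 channel for the binary tumor mask produced by the segmentation stage
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image, mask):
        x = torch.cat([image, mask], dim=1)  # stack the mask as an additional channel
        return self.backbone(x)

model = MaskConditionedClassifier()
img = torch.randn(2, 1, 128, 128)                       # dummy MR slices
msk = torch.randint(0, 2, (2, 1, 128, 128)).float()     # dummy predicted masks
logits = model(img, msk)                                # (2, 3) class scores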


Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis, Brain/diagnostic imaging, Computer-Assisted Image Processing
4.
Comput Med Imaging Graph ; 110: 102307, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37913635

ABSTRACT

Glioblastoma (GBM), solitary brain metastasis (SBM), and primary central nervous system lymphoma (PCNSL) show a high level of similarity in histomorphology and in their clinical manifestations on multimodal MRI. Such similarities have led to challenges in the clinical diagnosis of these three malignant tumors. However, many existing models focus solely on either segmentation or classification, which limits the application of computer-aided diagnosis in clinical diagnosis and treatment. To solve this problem, we propose a multi-task learning transformer with neural architecture search (NAS) for brain tumor segmentation and classification (BTSC-TNAS). In the segmentation stage, we use a nested transformer U-shape network (NTU-NAS) with NAS to directly predict brain tumor masks from multi-modal MRI images. In the tumor classification stage, we use the multiscale features obtained from the encoder of NTU-NAS as the input features of the classification network (MSC-NET), which are integrated and corrected by the classification feature correction enhancement (CFCE) block to improve classification accuracy. The proposed BTSC-TNAS achieves average Dice coefficients of 80.86% and 87.12% for the segmentation of the tumor region and the maximum abnormal region in clinical data, respectively, and a classification accuracy of 0.941. Experiments on the BraTS 2019 dataset show that the proposed BTSC-TNAS has excellent generalizability and can provide support for some challenging tasks in the diagnosis and treatment of brain tumors.


Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Brain, Computer-Assisted Diagnosis, Learning, Computer-Assisted Image Processing
5.
IEEE J Transl Eng Health Med ; 11: 441-450, 2023.
Article in English | MEDLINE | ID: mdl-37817826

ABSTRACT

BACKGROUND: In the past few years, the U-Net-based U-shaped architecture with skip connections has made incredible progress in the field of medical image segmentation. U2-Net achieves good performance in computer vision; however, in medical image segmentation tasks, the heavily nested U2-Net is prone to overfitting. PURPOSE: A 2D network structure, TransU2-Net, combining a transformer with a lighter-weight U2-Net, is proposed for automatic segmentation of brain tumor magnetic resonance images (MRI). METHODS: The light-weight U2-Net architecture not only obtains multi-scale information but also reduces redundant feature extraction. Meanwhile, the transformer block embedded in the stacked convolutional layers obtains more global information, and the transformer with skip connections enhances spatial domain information representation. A new multi-scale feature map fusion strategy is proposed as a postprocessing method to better fuse high- and low-dimensional spatial information. RESULTS: The proposed TransU2-Net achieves better segmentation results: on the BraTS2021 dataset it achieves an average Dice coefficient of 88.17%, and on the publicly available MSD dataset it achieves a Dice coefficient of 74.69% for tumor segmentation. In addition, the TransU2-Net results are compared with previously proposed 2D segmentation methods. CONCLUSIONS: We propose an automatic medical image segmentation method combining transformers and U2-Net, which performs well and is of clinical importance. The experimental results show that the proposed method outperforms other 2D medical image segmentation methods. Clinical Translation Statement: We use the BraTS2021 and MSD datasets, which are publicly available databases. All experiments in this paper are in accordance with medical ethics.


Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Clinical Relevance, Factual Databases, Electric Power Supplies, Medical Ethics
6.
Comput Biol Med ; 166: 107519, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37801919

ABSTRACT

With the increasing use of 3D scanning equipment to capture the oral cavity in dental health applications, the quality of 3D dental models has become vital in oral prosthodontics and orthodontics. However, the acquired point cloud data can often be sparse and thus missing information. To address this issue, we construct a high-resolution teeth point cloud completion method named TUCNet, which fills in the sparse, incomplete oral point clouds collected and outputs dense, complete teeth point clouds. First, we propose a Channel and Spatial Attentive EdgeConv (CSAE) module to fuse local and global contexts in point feature extraction. Second, we propose a CSAE-based point cloud upsample (CPCU) module to gradually increase the number of points in the point clouds. TUCNet employs a tree-based approach to generate complete point clouds, where child points are derived through a splitting process from parent points following each CPCU. The CPCU learns the up-sampling pattern of each parent point by combining the attention mechanism with a point deconvolution operation. Skip connections are introduced between CPCUs to summarize the splitting mode of the previous CPCU layer, which is used to generate the splitting mode of the current CPCU. We conduct extensive experiments on the teeth point cloud completion dataset and the PCN dataset. The experimental results show that TUCNet not only achieves state-of-the-art performance on the teeth dataset but also achieves excellent performance on the PCN dataset.

7.
Front Med (Lausanne) ; 10: 1232496, 2023.
Article in English | MEDLINE | ID: mdl-37841015

ABSTRACT

Objectives: Gliomas and brain metastases (Mets) are the most common brain malignancies. Their treatment strategies and clinical prognoses differ, requiring accurate diagnosis of tumor type. However, the traditional radiomics diagnostic pipeline requires manual annotation and lacks integrated methods for segmentation and classification. To improve the diagnosis process, a computer-aided diagnosis method for gliomas and Mets with automatic lesion segmentation and an ensemble decision strategy on multi-center datasets was proposed. Methods: Overall, preoperative MR images from 1,022 high-grade glioma and 775 Mets patients were adopted in the study, including contrast-enhanced T1-weighted (T1-CE) and T2-fluid attenuated inversion recovery (T2-flair) sequences from three hospitals. Two segmentation models, trained on the gliomas and Mets datasets respectively, were used to automatically segment tumors. Multiple radiomics features were extracted after automatic segmentation. Several machine learning classifiers were used to measure the impact of feature selection methods. A weighted soft voting (RSV) model and an ensemble decision strategy based on prior knowledge (EDPK) were introduced into the radiomics pipeline. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were used to evaluate classification performance. Results: The proposed pipeline improved the diagnosis of gliomas and Mets, with ACC reaching 0.8950 and AUC reaching 0.9585 after automatic lesion segmentation, higher than those of the traditional radiomics pipeline (ACC: 0.8850, AUC: 0.9450). Conclusion: The proposed model accurately classified glioma and Mets patients using MRI radiomics. The novel pipeline showed great potential in diagnosing gliomas and Mets with high generalizability and interpretability.
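A minimal sketch of a weighted soft-voting ensemble in the spirit of the RSV model named above is shown below; the base learners, weights, and feature matrix are placeholders rather than the paper's configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.random.rand(200, 30)            # placeholder radiomics feature matrix
y = np.random.randint(0, 2, 200)       # placeholder labels: glioma vs. metastasis

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),             # probability=True enables soft voting
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
    weights=[2, 1, 1],   # per-classifier weights; in practice tuned on validation data
)
ensemble.fit(X, y)
proba = ensemble.predict_proba(X[:5])  # weighted average of class probabilities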

8.
IEEE J Biomed Health Inform ; 27(12): 5860-5871, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738185

ABSTRACT

Multimodal volumetric segmentation and fusion are two valuable techniques for surgical treatment planning, image-guided interventions, tumor growth detection, radiotherapy map generation, etc. In recent years, deep learning has demonstrated excellent capability in both of the above tasks, yet these methods inevitably face bottlenecks. On the one hand, recent segmentation studies, especially the U-Net-style series, have reached a performance ceiling in segmentation tasks. On the other hand, it is almost impossible to capture the ground truth of the fusion in multimodal imaging, due to differences in physical principles among imaging modalities. Hence, most existing studies in the field of multimodal medical image fusion, which fuse only two modalities at a time with hand-crafted proportions, are subjective and task-specific. To address the above concerns, this work proposes an integration of multimodal segmentation and fusion, namely SegCoFusion, which consists of a novel feature frequency dividing network named FDNet and a segmentation part using a dual-single path feature supplementing strategy to optimize the segmentation inputs and stitch them to the fusion part. Focusing on multimodal brain tumor volumetric fusion and segmentation, the qualitative and quantitative results demonstrate that SegCoFusion can break the performance ceiling of both segmentation and fusion methods. Moreover, the effectiveness of the proposed framework is also revealed by comparison with state-of-the-art fusion methods on 2D two-modality fusion tasks, where our method achieves better fusion performance. Therefore, the proposed SegCoFusion develops a novel perspective that improves volumetric fusion performance by cooperating with segmentation and enhances lesion awareness.


Subjects
Brain Neoplasms, Neurosurgical Procedures, Humans, Physical Examination, Upper Extremity, Computer-Assisted Image Processing
9.
Article in English | MEDLINE | ID: mdl-37590112

ABSTRACT

As one of the effective ways of ocular disease recognition, early fundus screening can help patients avoid unrecoverable blindness. Although deep learning is powerful for image-based ocular disease recognition, its performance mainly benefits from a large amount of labeled data. For ocular disease, data collection and annotation at a single site usually take a lot of time. If multi-site data are obtained, there are two main issues: 1) data privacy is easily leaked; 2) the domain gap among sites influences recognition performance. Motivated by the above, first, a Gaussian randomized mechanism is adopted at local sites, which are then engaged in a global model, to preserve the data privacy of local sites and models. Second, to bridge the domain gap among different sites, a two-step domain adaptation method is introduced, consisting of a domain confusion module and a multi-expert learning strategy. Based on the above, a privacy-preserving federated learning framework with domain adaptation is constructed. In the experiments, a multi-disease early fundus screening dataset is used, with a detailed ablation study and four experimental settings, to show the stepwise performance and verify the efficiency of the proposed framework.
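A minimal sketch of a Gaussian randomized mechanism applied to a local update before federated averaging, as described above, is given below; the clipping norm, noise scale, and update shapes are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the local model update's L2 norm, then add Gaussian noise before sharing it."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

# Server-side federated averaging of privatized updates from each hypothetical site
local_updates = [np.random.randn(1000) for _ in range(4)]
global_update = np.mean([privatize_update(u) for u in local_updates], axis=0)
print("aggregated update norm:", np.linalg.norm(global_update))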

10.
Eur Radiol ; 33(12): 8925-8935, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37505244

ABSTRACT

OBJECTIVES: The first treatment strategy for brain metastases (BM) plays a pivotal role in patient prognosis. Among all strategies, stereotactic radiosurgery (SRS) is considered a promising therapy. Therefore, we developed and validated a radiomics-based prediction pipeline to prospectively identify BM patients who are insensitive to SRS therapy, especially those at potential risk of progressive disease. METHODS: A total of 337 BM patients (277, 30, and 30 in the training set, internal validation set, and external validation set, respectively) were enrolled in the study. 19,377 radiomics features (3 masks × 3 MRI sequences × 2153 features) extracted from 9 ROIs were filtered through LASSO and Max-Relevance and Min-Redundancy (mRMR) algorithms. The selected radiomics features were combined with 4 clinical features to construct a two-stage cascaded model for predicting BM patients' response to SRS therapy, using an SVM and an ensemble learning classifier. Model performance was evaluated by accuracy, specificity, sensitivity, and AUC. RESULTS: Radiomics features were integrated with clinical features in our optimal model, which showed excellent discriminative performance in the training set (AUC: 0.95, 95% CI: 0.88-0.98). The model was also verified in the internal and external validation sets (AUC 0.93, 95% CI: 0.76-0.95 and AUC 0.90, 95% CI: 0.73-0.93, respectively). CONCLUSIONS: The proposed prediction pipeline could non-invasively predict the response to SRS therapy in patients with brain metastases, thus assisting doctors in making individualized first treatment decisions. CLINICAL RELEVANCE STATEMENT: The proposed prediction pipeline combines radiomics features from multi-modal MRI with clinical features to construct machine learning models that noninvasively predict the response of patients with brain metastases to stereotactic radiosurgery therapy, assisting neuro-oncologists in developing personalized first treatment plans. KEY POINTS: • The proposed prediction pipeline can non-invasively predict the response to SRS therapy. • The combination of multi-modality and multi-mask contributes significantly to the prediction. • The edema index also shows a certain predictive value.
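The feature-selection-plus-classifier stage described above can be sketched as follows. Only a LASSO-based selection feeding an SVM is shown (the mRMR filter and the ensemble second stage of the cascade are omitted), and the feature matrix, labels, and hyperparameters are placeholders, not the study data.

import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 337 patients, 500 features, labels depending on the first
# two features so that the LASSO step keeps something to pass downstream.
X = np.random.rand(337, 500)
y = (X[:, 0] + X[:, 1] + 0.1 * np.random.randn(337) > 1.0).astype(int)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=5000)),  # keep features with non-zero LASSO weights
    SVC(probability=True),                          # SVM stage of the cascade
)
clf.fit(X, y)
print("predicted response probabilities:", clf.predict_proba(X[:3])[:, 1])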


Subjects
Brain Neoplasms, Radiosurgery, Humans, Algorithms, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/radiotherapy, Clinical Relevance, Machine Learning, Retrospective Studies
11.
Comput Biol Med ; 163: 107076, 2023 09.
Article in English | MEDLINE | ID: mdl-37379616

ABSTRACT

Fundus images are an essential basis for diagnosing ocular diseases, and convolutional neural networks have shown promising results in accurate fundus image segmentation. However, the difference between the training data (source domain) and the testing data (target domain) significantly affects final segmentation performance. This paper proposes a novel framework named DCAM-NET for fundus domain generalization segmentation, which substantially improves the generalization ability of the segmentation model to target domain data and enhances the extraction of detailed information from source domain data, effectively overcoming the poor performance caused by cross-domain segmentation. To enhance the adaptability of the segmentation model to target domain data, this paper proposes a multi-scale attention mechanism module (MSA) that operates at the feature extraction level: it extracts different attribute features, feeds them into the corresponding scale attention modules, and thereby captures the critical features in channel, position, and spatial regions. The MSA module also integrates the characteristics of the self-attention mechanism; it can capture dense context information, and the aggregation of multi-feature information effectively enhances the generalization of the model when dealing with unknown domain data. In addition, this paper proposes the multi-region weight fusion convolution module (MWFC), which is essential for the segmentation model to accurately extract feature information from source domain data. It fuses multiple region weights and convolutional kernel weights on the image to enhance the model's adaptability to information at different image locations, and the fusion of weights deepens the capacity and depth of the model, enhancing its ability to learn from multiple regions of the source domain. Our experiments on fundus data for cup/disc segmentation show that introducing the MSA and MWFC modules effectively improves the segmentation ability of the model on unknown domains, and the performance of the proposed method is significantly better than other current methods for domain-generalized optic cup/disc segmentation.


Subjects
Optic Disk, Optic Disk/diagnostic imaging, Fundus Oculi, Learning, Algorithms, Face, Computer-Assisted Image Processing
12.
Comput Biol Med ; 157: 106769, 2023 05.
Article in English | MEDLINE | ID: mdl-36947904

ABSTRACT

Image fusion techniques have been widely used for multi-modal medical image fusion tasks. Most existing methods aim to improve the overall quality of the fused image and do not focus on the more important textural details and contrast between tissues of the lesion in the regions of interest (ROIs). This can distort important tumor ROI information and thus limits the applicability of fused images in clinical practice. To improve the fusion quality of ROIs relevant to medical implications, we propose a multi-modal MRI fusion generative adversarial network (BTMF-GAN) for the task of multi-modal MRI fusion of brain tumors. Unlike existing deep learning approaches, which focus on improving the global quality of the fused image, the proposed BTMF-GAN aims to achieve a balance between tissue details and structural contrasts in the brain tumor, the region of interest crucial to many medical applications. Specifically, we employ a generator with a U-shaped nested structure and residual U-blocks (RSU) to enhance multi-scale feature extraction. To enhance and recalibrate encoder features, the multi-perceptual field adaptive transformer feature enhancement module (MRF-ATFE) is used between the encoder and the decoder instead of a skip connection. To increase contrast between tumor tissues in the fused image, a mask-part block is introduced to fragment the source image and the fused image, based on which we propose a novel salient loss function. Qualitative and quantitative analysis of the results on public and clinical datasets demonstrates the superiority of the proposed approach over many other commonly used fusion methods.


Subjects
Brain Neoplasms, Humans, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Computer-Assisted Image Processing
13.
Eur J Radiol ; 158: 110639, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36463703

ABSTRACT

BACKGROUND: The histological sub-classes of brain tumors and the Ki-67 labeling index (LI) of tumor cells are major factors in the diagnosis, prognosis, and treatment management of patients. Many existing studies have focused primarily on two-class brain tumor classification and on the Ki-67 LI of gliomas. This study aimed to develop a preoperative, non-invasive radiomics pipeline based on multiparametric MRI to classify three types of brain tumors, glioblastoma (GBM), metastasis (MET), and primary central nervous system lymphoma (PCNSL), and to predict their corresponding Ki-67 LI. METHODS: In this retrospective study, 153 patients with malignant brain tumors were included. Radiomics features were extracted from three types of MRI (T1-weighted imaging (T1WI), fluid-attenuated inversion recovery (FLAIR), and contrast-enhanced T1-weighted imaging (CE-T1WI)) with three masks (tumor core, edema, and whole tumor) and selected by a combination of Pearson correlation coefficient (CORR), LASSO, and Max-Relevance and Min-Redundancy (mRMR) filters. The performance of six classifiers was compared, and the top three performing classifiers were used to construct the ensemble learning model (ELM). The proposed ELM was evaluated in the training dataset (108 patients) by 5-fold cross-validation and in the test dataset (45 patients) by hold-out. Accuracy (ACC), sensitivity (SEN), specificity (SPE), F1-score, and the area under the receiver operating characteristic curve (AUC) were used to evaluate model performance. RESULTS: The best feature sets and the ELM with optimal performance were selected to construct the three-class brain tumor aided diagnosis model (training dataset AUC: 0.96 (95% CI: 0.93, 0.99); test dataset AUC: 0.93) and the Ki-67 LI prediction model (training dataset AUC: 0.96 (95% CI: 0.94, 0.98); test dataset AUC: 0.91). CE-T1WI was the best single modality for all classifiers. Meanwhile, the whole tumor was the most important mask for tumor classification, and the tumor core was the most important mask for Ki-67 LI prediction. CONCLUSION: The developed radiomics models enabled precise preoperative classification of GBM, MET, and PCNSL and prediction of Ki-67 LI, which could be utilized in clinical practice for brain tumor treatment planning.


Subjects
Brain Neoplasms, Glioblastoma, Glioma, Humans, Ki-67 Antigen, Retrospective Studies, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Magnetic Resonance Imaging/methods, Glioblastoma/diagnostic imaging, Glioblastoma/pathology
14.
Eur J Anaesthesiol ; 39(9): 758-765, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35919026

ABSTRACT

BACKGROUND: Identifying the interscalene brachial plexus can be challenging during ultrasound-guided interscalene block. OBJECTIVE: We hypothesised that an algorithm based on deep learning could locate the interscalene brachial plexus in ultrasound images better than a nonexpert anaesthesiologist, thus possessing the potential to aid anaesthesiologists. DESIGN: Observational study. SETTING: A tertiary hospital in Shanghai, China. PATIENTS: Patients undergoing elective surgery. INTERVENTIONS: Ultrasound images at the interscalene level were collected from patients. Two independent image datasets were prepared to train and evaluate the deep learning model. Three senior anaesthesiologists who were experts in regional anaesthesia annotated the images. A deep convolutional neural network was developed, trained and optimised to locate the interscalene brachial plexus in the ultrasound images. Expert annotations on the datasets were regarded as an accurate baseline (ground truth). The test dataset was also annotated by five nonexpert anaesthesiologists. MAIN OUTCOME MEASURES: The primary outcome of the research was the distance between the lateral midpoints of the nerve sheath contours of the model predictions and ground truth. RESULTS: The data set was obtained from 1126 patients. The training dataset comprised 11 392 images from 1076 patients. The test dataset constituted 100 images from 50 patients. In the test dataset, the median [IQR] distance between the lateral midpoints of the nerve sheath contours of the model predictions and ground truth was 0.8 [0.4 to 2.9] mm: this was significantly shorter than that between nonexpert predictions and ground truth (3.4 mm [2.1 to 4.5] mm; P < 0.001). CONCLUSION: The proposed model was able to locate the interscalene brachial plexus in ultrasound images more accurately than nonexperts. TRIAL REGISTRATION: ClinicalTrials.gov (https://clinicaltrials.gov) identifier: NCT04183972.
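A minimal sketch of the primary outcome computation described above is given below: the distance between the lateral midpoints of predicted and ground-truth nerve-sheath contours, with a Mann-Whitney U comparison between model and nonexpert distances. How the lateral midpoint is derived from a contour, and the statistical test shown, are assumptions rather than the paper's exact definitions.

import numpy as np
from scipy.stats import mannwhitneyu

def lateral_midpoint(contour_xy):
    """contour_xy: (N, 2) contour points in mm; take the midpoint of the lateral-most
    points as a simple stand-in for the paper's definition of the lateral midpoint."""
    lateral_x = contour_xy[:, 0].max()
    edge = contour_xy[np.isclose(contour_xy[:, 0], lateral_x, atol=0.5)]
    return edge.mean(axis=0)

def midpoint_distance(pred_contour, gt_contour):
    return float(np.linalg.norm(lateral_midpoint(pred_contour) - lateral_midpoint(gt_contour)))

# Compare per-image distances (mm) for the model vs. nonexpert annotators on placeholder data
model_dists = np.random.gamma(1.5, 1.0, 100)
nonexpert_dists = np.random.gamma(3.0, 1.2, 100)
stat, p = mannwhitneyu(model_dists, nonexpert_dists)
print(f"Mann-Whitney U p-value: {p:.3g}")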


Subjects
Brachial Plexus Block, Brachial Plexus, Local Anesthetics, Artificial Intelligence, Brachial Plexus/diagnostic imaging, Brachial Plexus Block/methods, China, Humans, Neural Networks (Computer), Interventional Ultrasonography/methods
15.
J Clin Neurosci ; 104: 1-9, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35931000

ABSTRACT

The current prediction models for the clinical outcome of acute ischaemic stroke (AIS) remain insufficient for individualized patient management strategies. We aimed to investigate machine learning (ML) performance in predicting the clinical outcome of AIS in the anterior circulation, evaluating outcome by combining quantitative perfusion indicators with basic clinical information. Four ML classifiers, namely support vector machine (SVM), naive Bayes (NB), logistic regression (LR), and random forest (RF), were trained on a cohort of 389 adult patients (training cohort [70 %]; external validation cohort [30 %]) from the Acute Stroke Center Registry of Huashan Hospital. Model performance was compared using a range of learning metrics. Most imaging parameters were strongly correlated with the outcome (range, 0.57 to 0.81), and the correlation between relative cerebral blood flow (rCBF) < 30 % and clinical outcome was the strongest (ρ = 0.81). As the reference parameters increased, the performance of the four models improved greatly [SVM (AUC: from 0.79 to 0.99, F1-score: from 0.61 to 0.90), RF (AUC: from 0.88 to 0.98, F1-score: from 0.71 to 0.96), LR (AUC: from 0.80 to 0.97, F1-score: from 0.64 to 0.95), and NB (AUC: from 0.74 to 0.97, F1-score: from 0.66 to 0.92)]. The ensemble classifier model with all parameters had the highest F1-score (0.97). All the ML models, jointly considering basic clinical information and quantitative evaluation indicators of computed tomography perfusion (CTP), showed good performance in predicting the clinical outcome of AIS in the anterior circulation.
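A minimal sketch of the four-classifier comparison described above (SVM, NB, LR, RF scored by AUC and F1) is shown below; the features, labels, and 70/30 split are placeholders rather than the registry data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X = np.random.rand(389, 20)          # placeholder clinical + CTP-derived features (e.g. rCBF < 30% volume)
y = np.random.randint(0, 2, 389)     # placeholder favourable vs. unfavourable outcome
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=300),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    proba = model.predict_proba(Xte)[:, 1]
    print(name, "AUC", round(roc_auc_score(yte, proba), 3),
          "F1", round(f1_score(yte, proba > 0.5), 3))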


Subjects
Brain Ischemia, Ischemic Stroke, Stroke, Adult, Humans, Bayes Theorem, Brain Ischemia/diagnostic imaging, Ischemic Stroke/diagnostic imaging, Machine Learning, Stroke/diagnostic imaging, Stroke/therapy
16.
BMC Med Imaging ; 22(1): 40, 2022 03 09.
Article in English | MEDLINE | ID: mdl-35264145

ABSTRACT

BACKGROUND: White matter hyperintensity (WMH) is one of the typical neuroimaging manifestations of cerebral small vessel disease (CSVD), and WMH correlates closely with cognitive impairment (CI). CSVD patients with WMH show altered topological properties of the brain functional network, which is a possible mechanism leading to CI. This study aims to identify differences in brain functional network characteristics among patients with different grades of WMH and to estimate the correlations between these characteristics and cognitive assessment scores. METHODS: 110 CSVD patients underwent 3.0 T magnetic resonance imaging scans and neuropsychological cognitive assessments. WMH of each participant was graded on the basis of the Fazekas grade scale and divided into two groups: (A) WMH score of 1-2 points (n = 64), (B) WMH score of 3-6 points (n = 46). Topological indexes of the brain functional network were analyzed using the graph-theoretical method. The t-test and Mann-Whitney U test were used to compare differences in topological properties of the brain functional network between groups. Partial correlation analysis was applied to explore the relationship between different topological properties of brain functional networks and overall cognitive function. RESULTS: Patients with high WMH scores exhibited decreased clustering coefficient values and global and local network efficiency, along with increased shortest path length, at the whole-brain level, as well as decreased nodal efficiency in some brain regions at the nodal level (p < 0.05). Nodal efficiency in the left lingual gyrus was significantly positively correlated with patients' total Montreal Cognitive Assessment (MoCA) scores (p < 0.05). No significant difference was found between the two groups in total MoCA and Mini-Mental State Examination (MMSE) scores (p > 0.05). CONCLUSION: Patients with high WMH scores showed less optimized small-world networks compared to patients with low WMH scores. Global and local network efficiency at the whole-brain level, as well as nodal efficiency in certain brain regions at the nodal level, can be viewed as markers reflecting the course of WMH.
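A minimal sketch of the whole-brain graph metrics named above (clustering coefficient, characteristic path length, global and local efficiency) is shown below, computed with networkx on a random binary graph standing in for a thresholded functional connectome.

import networkx as nx

# Random binary graph as a stand-in for a 90-node, AAL-style functional network
G = nx.erdos_renyi_graph(90, 0.2, seed=0)
if not nx.is_connected(G):
    # path length is only defined on a connected graph; keep the largest component
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

metrics = {
    "clustering coefficient": nx.average_clustering(G),
    "shortest path length": nx.average_shortest_path_length(G),
    "global efficiency": nx.global_efficiency(G),
    "local efficiency": nx.local_efficiency(G),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")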


Subjects
Cerebral Small Vessel Diseases, Cognitive Dysfunction, White Matter, Brain/diagnostic imaging, Brain/pathology, Cerebral Small Vessel Diseases/complications, Cerebral Small Vessel Diseases/diagnostic imaging, Cerebral Small Vessel Diseases/pathology, Cognition, Cognitive Dysfunction/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Neuroimaging, White Matter/diagnostic imaging, White Matter/pathology
17.
IEEE J Biomed Health Inform ; 26(7): 3197-3208, 2022 07.
Article in English | MEDLINE | ID: mdl-35196252

ABSTRACT

Recent digital pathology workflows mainly focus on mono-modality histopathology image analysis. However, they ignore the complementarity between Haematoxylin & Eosin (H&E)- and immunohistochemistry (IHC)-stained images, which can provide a comprehensive gold standard for cancer diagnosis. To resolve this issue, we propose a cross-boosted multi-target domain adaptation pipeline for multi-modality histopathology images, which contains a Cross-frequency Style-auxiliary Translation Network (CSTN) and a Dual Cross-boosted Segmentation Network (DCSN). First, CSTN achieves one-to-many translation from fluorescence microscopy images to H&E and IHC images to provide source domain training data. To generate images with realistic color and texture, a Cross-frequency Feature Transfer Module (CFTM) is developed to restructure and normalize high-frequency content and low-frequency style features from different domains. Then, DCSN performs multi-target domain adaptive segmentation, where a dual-branch encoder is introduced and a Bidirectional Cross-domain Boosting Module (BCBM) is designed to implement cross-modality information complementation through bidirectional inter-domain collaboration. Finally, we establish the Multi-modality Thymus Histopathology (MThH) dataset, the largest publicly available H&E and IHC image benchmark. Experiments on the MThH dataset and several public datasets show that the proposed pipeline outperforms state-of-the-art methods in both histopathology image translation and segmentation.


Subjects
Benchmarking, Humans, Computer-Assisted Image Processing, Fluorescence Microscopy, Workflow
18.
Cont Lens Anterior Eye ; 45(3): 101474, 2022 06.
Article in English | MEDLINE | ID: mdl-34301476

ABSTRACT

PURPOSE: To construct a machine learning (ML)-based model for estimating the alignment curve (AC) curvature in orthokeratology lens fitting for vision shaping treatment (VST), which can minimize the number of lens trials, improving efficiency while maintaining accuracy relative to a previously published calculation method. METHODS: Data were retrospectively collected from the clinical case files of 1271 myopic subjects (1271 right eyes). AC curvatures calculated with a previously published algorithm were used as the target data sets. Four kinds of machine learning algorithms were implemented in the experimental analyses to predict the target AC curvatures: robust linear regression models, support vector machine (SVM) regression models with linear kernel functions, bagged decision trees, and Gaussian processes. The previously published calculation method and the novel machine learning method were then compared to assess the final parameters of ordered lenses. RESULTS: The linear SVM and Gaussian process machine learning models achieved the best performance. The input variables included sex, age, horizontal visible iris diameter (HVID), spherical refraction (SER), cylindrical refraction, eccentricity value (e value), flat K (K1) and steep K (K2) readings, anterior chamber depth (ACD), and axial length (AL). The R-squared values for the predicted AC1K1, AC1K2, and AC2K1 outputs were 0.91, 0.84, and 0.73, respectively. The previous calculation method and the machine learning methods displayed excellent consistency, and the proposed methods performed best based on the flat K reading and e values. CONCLUSIONS: The ML model provides practitioners with an efficient method for estimating the AC curvatures of VST lenses and reduces the probability of cross-infection originating from trial lenses, which is especially useful during pandemics such as COVID-19.
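A minimal sketch of the regression comparison described above is shown below: a robust linear model, a linear-kernel SVM, bagged decision trees, and a Gaussian process predicting an AC curvature target, scored by cross-validated R-squared. The synthetic inputs stand in for the ten listed predictors and are not the clinical data.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(1271, 10)                          # one row per eye, ten placeholder predictors
y = 40 + 3 * X[:, 6] + np.random.randn(1271) * 0.5    # placeholder AC1K1 target driven mainly by "K1"

models = {
    "robust linear": HuberRegressor(),
    "linear SVM": SVR(kernel="linear"),
    "bagged trees": BaggingRegressor(DecisionTreeRegressor(), n_estimators=100),
    "Gaussian process": GaussianProcessRegressor(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {r2:.2f}")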


Subjects
COVID-19, Contact Lenses, Algorithms, Corneal Topography, Humans, Machine Learning, Ocular Refraction, Retrospective Studies
19.
BMC Med Imaging ; 21(1): 189, 2021 12 08.
Article in English | MEDLINE | ID: mdl-34879818

ABSTRACT

PURPOSE: The objective of this study was to construct a computer-aided diagnosis system for distinguishing normal individuals from pneumoconiosis patients using X-rays and deep learning algorithms. MATERIALS AND METHODS: 1760 anonymous digital X-ray images of real patients collected between January 2017 and June 2020 were used in this experiment. To concentrate the model's feature extraction on the lung region and restrain the influence of external background factors, a two-stage coarse-to-fine pipeline was established. First, the U-Net model was used to extract the lung regions on each side of the collected images. Second, the ResNet-34 model with a transfer learning strategy was implemented to learn the image features extracted from the lung region and achieve accurate classification of pneumoconiosis patients and normal individuals. RESULTS: Among the 1760 cases collected, the accuracy and the area under the curve of the classification model were 92.46% and 89%, respectively. CONCLUSION: The successful application of deep learning in the diagnosis of pneumoconiosis further demonstrates the potential of medical artificial intelligence and proves the effectiveness of our proposed algorithm. However, when we further classified pneumoconiosis patients and normal subjects into four categories, the overall accuracy decreased to 70.1%. We will use the CT modality in future studies to provide more details of the lung regions.
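A minimal sketch of the second (classification) stage described above is given below: a ResNet-34 initialised with ImageNet weights, its final layer replaced for the binary pneumoconiosis-versus-normal decision. The U-Net lung-cropping stage, data loading, and full training loop are omitted, and the batch shown is a dummy assumption.

import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from ImageNet weights, replace the 1000-class head with 2 classes
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of lung-region crops
images = torch.randn(8, 3, 224, 224)     # cropped lung regions, replicated to 3 channels
labels = torch.randint(0, 2, (8,))       # 0 = normal, 1 = pneumoconiosis
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()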


Subjects
Deep Learning, Computer-Assisted Diagnosis, Pneumoconiosis/diagnostic imaging, Adult, Aged, Aged 80 and over, Female, Humans, Male, Middle Aged, Retrospective Studies, X-Rays
20.
Int J Comput Assist Radiol Surg ; 16(10): 1727-1736, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34386900

ABSTRACT

PURPOSE: Carotid artery atherosclerotic stenosis accounts for 18-25% of ischemic strokes. In the evaluation of carotid atherosclerotic lesions, automatic, accurate and rapid segmentation of the carotid artery is a priority that needs to be addressed urgently. However, the carotid artery occupies only a small target region in computed tomography angiography (CTA) images, which affects segmentation accuracy. METHODS: We proposed a coarse-to-fine (C2F) segmentation pipeline with the Multiplanar D-SEA UNet to achieve fully automatic carotid artery segmentation on entire 3D CTA images, and compared it with four other neural networks (3D-UNet, RA-UNet, Isensee-UNet, Multiplanar-UNet) by assessing Dice, Jaccard similarity coefficient, sensitivity, area under the curve, and average Hausdorff distance. RESULTS: Our proposed method achieves a mean Dice score of 91.51% on 68 neck CTA scans from Beijing Hospital, which remarkably outperforms state-of-the-art 3D image segmentation methods. The C2F segmentation pipeline effectively improves segmentation accuracy while avoiding resolution loss. CONCLUSION: The proposed method realizes fully automatic segmentation of the carotid artery with robust segmentation accuracy, and can be applied to plaque exfoliation and interventional surgery services. In addition, our method is easy to extend to other medical segmentation tasks with appropriate parameter settings.
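For reference, the Dice score used above to evaluate the carotid segmentations can be computed on binary 3D masks as in the short sketch below (placeholder masks, not the Beijing Hospital data).

import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """pred, gt: boolean or {0,1} arrays of identical shape (e.g. a CTA volume)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

pred = np.random.rand(64, 64, 64) > 0.5   # placeholder predicted mask
gt = np.random.rand(64, 64, 64) > 0.5     # placeholder ground-truth mask
print(f"Dice = {dice_score(pred, gt):.4f}")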


Subjects
Carotid Arteries, Computed Tomography Angiography, Angiography, Carotid Arteries/diagnostic imaging, Humans, Computer-Assisted Image Processing, Three-Dimensional Imaging, X-Ray Computed Tomography