1.
Med Image Anal ; 96: 103214, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815358

ABSTRACT

Multi-modal ophthalmic image classification plays a key role in diagnosing eye diseases, as it integrates information from different sources to complement their respective strengths. However, recent improvements have mainly focused on accuracy, often neglecting the importance of confidence and robustness in predictions across diverse modalities. In this study, we propose a novel multi-modality evidential fusion pipeline for eye disease screening. It provides a measure of confidence for each modality and elegantly integrates the multi-modality information from a multi-distribution fusion perspective. Specifically, our method first places normal inverse gamma prior distributions over pre-trained models to learn both aleatoric and epistemic uncertainty for each single modality. The normal inverse gamma distribution is then recast as a Student's t distribution. Within a confidence-aware fusion framework, we propose a mixture of Student's t distributions to effectively integrate the different modalities, endowing the model with heavy-tailed properties and enhancing its robustness and reliability. More importantly, a confidence-aware multi-modality ranking regularization term induces the model to rank the noisy single-modal and fused-modal confidences more reasonably, leading to improved reliability and accuracy. Experimental results on both public and internal datasets demonstrate that our model excels in robustness, particularly in challenging scenarios involving Gaussian noise and missing modalities. Moreover, our model exhibits strong generalization to out-of-distribution data, underscoring its potential as a promising solution for multi-modality eye disease screening.
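For readers who want to see the mechanics, the sketch below (not the authors' code) shows how a normal inverse gamma (NIG) evidential output can be converted into a Student's t predictive distribution, the step this pipeline relies on before fusing modalities. The parameter names and example values are assumptions.

```python
# A minimal sketch, assuming the common NIG(gamma, nu, alpha, beta) convention
# from evidential deep regression; not the paper's implementation.
import numpy as np
from scipy import stats

def nig_to_student_t(gamma, nu, alpha, beta):
    """Marginalize NIG(gamma, nu, alpha, beta) over (mu, sigma^2).

    Returns the location, scale, and degrees of freedom of the resulting
    Student's t predictive distribution, plus aleatoric and epistemic
    uncertainty estimates.
    """
    loc = gamma                                        # predictive mean
    df = 2.0 * alpha                                   # degrees of freedom
    scale = np.sqrt(beta * (1.0 + nu) / (nu * alpha))  # predictive scale
    aleatoric = beta / (alpha - 1.0)                   # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))            # Var[mu]
    return loc, scale, df, aleatoric, epistemic

# Example: one modality's evidential head output for a single prediction
# (hypothetical values).
loc, scale, df, alea, epis = nig_to_student_t(gamma=0.8, nu=2.0, alpha=3.0, beta=0.5)
print(stats.t.pdf(0.9, df=df, loc=loc, scale=scale), alea, epis)
```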


Subject(s)
Eye Diseases , Humans , Eye Diseases/diagnostic imaging , Multimodal Imaging , Reproducibility of Results , Image Interpretation, Computer-Assisted/methods , Algorithms , Machine Learning
2.
Orthod Craniofac Res ; 26(3): 491-499, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36680384

ABSTRACT

OBJECTIVES: To develop an artificial intelligence (AI) system for automatic palate segmentation from CBCT, and to determine personalized available sites for palatal mini implants by measuring palatal bone and soft tissue thickness from the AI-predicted results. MATERIALS AND METHODS: Eight thousand four hundred target slices (from 70 CBCT scans) of orthodontic patients were collected, labelled by well-trained orthodontists and randomly divided into a training set and a test set. After training, we evaluated the performance of our deep learning model with the mean Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), sensitivity (SEN), positive predictive value (PPV) and mean thickness percentage error (MTPE). A pixel traversal method was proposed to measure the thickness of palatal bone and soft tissue and to predict available sites for palatal orthodontic mini implants. An example of available sites for palatal mini implants from the test set was then mapped. RESULTS: The average DSC, ASSD, SEN, PPV and MTPE for the segmented palatal bone tissue were 0.831, 1.122, 0.876, 0.815 and 6.70%, while those for the palatal soft tissue were 0.741, 1.091, 0.861, 0.695 and 12.2%, respectively. In addition, an example of available sites for palatal mini implants was mapped according to predefined criteria. CONCLUSIONS: Our AI system showed high accuracy in palatal segmentation and thickness measurement, which is helpful for determining available sites and designing a surgical guide for palatal orthodontic mini implants.
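As an illustration of the kind of pixel traversal the abstract mentions, the hedged sketch below walks each column of a binary segmentation mask, finds the longest contiguous run of labeled voxels, and converts it to a thickness in millimetres. The traversal direction and voxel spacing are assumptions, not details taken from the paper.

```python
# A hedged sketch of a column-wise pixel-traversal thickness measurement.
import numpy as np

def column_thickness_mm(mask: np.ndarray, voxel_size_mm: float) -> np.ndarray:
    """Return per-column tissue thickness (mm) for a 2-D binary mask.

    mask: 2-D array of 0/1 labels from the segmentation model (one CBCT slice).
    voxel_size_mm: physical size of one voxel along the traversal axis (assumed).
    """
    thickness = np.zeros(mask.shape[1])
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            # longest contiguous run of labeled voxels in this column
            runs = np.split(rows, np.where(np.diff(rows) > 1)[0] + 1)
            thickness[col] = max(len(r) for r in runs) * voxel_size_mm
    return thickness

# Example on a toy slice with a hypothetical 0.3 mm voxel spacing.
slice_mask = np.zeros((10, 5), dtype=int)
slice_mask[3:7, 2] = 1
print(column_thickness_mm(slice_mask, voxel_size_mm=0.3))  # column 2 -> 1.2 mm
```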


Subject(s)
Dental Implants , Orthodontic Anchorage Procedures , Spiral Cone-Beam Computed Tomography , Humans , Artificial Intelligence , Orthodontic Anchorage Procedures/methods , Palate/diagnostic imaging , Cone-Beam Computed Tomography/methods
3.
Thorac Cancer ; 13(11): 1684-1690, 2022 06.
Article in English | MEDLINE | ID: mdl-35579111

ABSTRACT

BACKGROUND: Pain is a feared yet common symptom among lung cancer patients. This multicenter, cross-sectional study was conducted to examine the current status of pain prevalence and management in lung cancer patients in northern China. METHODS: A total of 18 hospitals across northern China were selected. Patients with primary lung cancer who visited the outpatient clinic or were admitted to the wards on a preplanned day were invited to complete a questionnaire. Physicians with experience in treating primary lung cancer patients were also surveyed. RESULTS: A total of 533 patients and 197 physicians provided valid responses. Of the patients, 45.4% (242/533) reported pain during the course of the disease and 24.2% (129/533) had experienced pain within the past 24 h. The mean average pain intensity on the Brief Pain Inventory was 3.47 ± 1.55. Binary logistic regression analysis showed that female gender and stage IV disease were significantly associated with the presence of pain. Of the patients reporting pain within 24 h, 74.4% (96/129) were taking analgesics. The most common reason patients gave for not using analgesics was that the pain was tolerable (48.2%), while the most common barriers to prescribing opioids reported by physicians were fear of adverse reactions (43.7%) and fear of addiction (43.1%). CONCLUSION: Despite recognition of the importance of pain control by most physicians and an improvement in cancer pain management, inadequate treatment of cancer pain still exists among lung cancer patients in northern China. High-quality pain education for both patients and physicians is needed.
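The binary logistic regression reported here can be illustrated with a short sketch; the toy data frame and variable names below are hypothetical and are not taken from the study.

```python
# An illustrative sketch (not the study's analysis code) of a binary logistic
# regression of pain presence on sex and disease stage, using hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pain":     [1, 0, 1, 1, 0, 0, 1, 0],   # 1 = pain reported (toy values)
    "female":   [1, 0, 1, 0, 0, 1, 1, 0],   # 1 = female
    "stage_iv": [1, 0, 1, 0, 1, 0, 0, 0],   # 1 = stage IV disease
})

model = smf.logit("pain ~ female + stage_iv", data=df).fit(disp=0)
print(model.params)            # log-odds coefficients
print(np.exp(model.params))    # odds ratios
```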


Subject(s)
Lung Neoplasms , Pain , Analgesics , China/epidemiology , Cross-Sectional Studies , Female , Humans , Lung Neoplasms/complications , Lung Neoplasms/epidemiology , Lung Neoplasms/therapy , Pain/drug therapy , Prevalence
4.
Sci Rep ; 11(1): 18024, 2021 09 09.
Article in English | MEDLINE | ID: mdl-34504277

ABSTRACT

Extreme public health interventions play a critical role in mitigating the local and global prevalence of an epidemic and its pandemic potential. Here, we use the population size involved in pathogen transmission to measure the intensity of public health interventions; this size is a key characteristic variable for nowcasting and forecasting of COVID-19. By formulating a hidden Markov dynamic system and using nonlinear filtering theory, we develop a stochastic epidemic dynamic model under public health interventions. The model parameters and states are estimated over time from internationally available public data by combining an unscented filter with an interacting multiple model filter. Moreover, we consider the computability of the population size and provide a criterion for selecting it. Applying the model to COVID-19, we estimate the mean effective reproduction number to be 2.4626 (95% CI: 2.4142-2.5111) for China and 3.0979 (95% CI: 3.0968-3.0990) for the rest of the globe except China (GEC). The prediction results show the effectiveness of the stochastic epidemic dynamic model with nonlinear filtering. The hidden Markov dynamic system with nonlinear filtering can be used for analysis, nowcasting and forecasting of other contagious diseases in the future, since it helps to clarify the mechanism of disease transmission and to estimate the population size involved in pathogen transmission and the number of hidden infections, making it a valid tool for decision-making by policy makers for epidemic control.
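A minimal sketch of the filtering idea, assuming a discrete SIR system with a random-walk transmission rate and using the filterpy unscented Kalman filter, is shown below. The population size, recovery rate, noise settings and toy case counts are all assumptions; the paper additionally combines the unscented filter with an interacting multiple model filter, which this sketch omits.

```python
# A hedged sketch: unscented Kalman filtering of a discrete SIR model with a
# time-varying transmission rate; not the paper's model or parameters.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

N = 1_000_000          # assumed effective population size for transmission
gamma = 1.0 / 10.0     # assumed recovery rate (10-day infectious period)

def fx(x, dt):
    """State transition: x = [S, I, beta]; beta follows a random walk."""
    S, I, beta = x
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return np.array([S - new_inf, I + new_inf - new_rec, beta])

def hx(x):
    """Observation: reported number of active infections."""
    return np.array([x[1]])

points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([N - 100.0, 100.0, 0.3])   # initial state guess
ukf.P = np.diag([1e4, 1e4, 0.1])            # initial uncertainty
ukf.Q = np.diag([10.0, 10.0, 1e-4])         # process noise
ukf.R = np.array([[100.0]])                 # observation noise

for z in [120, 150, 190, 240, 300]:         # toy daily active-case counts
    ukf.predict()
    ukf.update(np.array([z]))
    S, I, beta = ukf.x
    print(f"R_eff ~ {beta / gamma * S / N:.2f}")  # effective reproduction number
```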


Subject(s)
Basic Reproduction Number , COVID-19 , Population Density , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19/transmission , China/epidemiology , Communicable Disease Control , Forecasting , Humans , Models, Statistical , Prevalence , Public Health , SARS-CoV-2
5.
Technol Cancer Res Treat ; 18: 1533033819884561, 2019.
Article in English | MEDLINE | ID: mdl-31736433

ABSTRACT

Radiotherapy is the main treatment strategy for nasopharyngeal carcinoma, and a major factor affecting its outcome is the accuracy of target delineation. Target delineation is time-consuming, and the results can vary with the experience of the oncologist. Using deep learning methods to automate target delineation may increase its efficiency. We used a modified deep learning model, U-Net, to automatically segment and delineate tumor targets in patients with nasopharyngeal carcinoma. Patients were randomly divided into a training set (302 patients), a validation set (100 patients), and a test set (100 patients). The U-Net model was trained on labeled computed tomography images from the training set. The trained network delineated nasopharyngeal carcinoma tumors with an overall Dice similarity coefficient of 65.86% for lymph nodes and 74.00% for the primary tumor, with respective Hausdorff distances of 32.10 mm and 12.85 mm. Delineation accuracy decreased with increasing cancer stage. Automatic delineation took approximately 2.6 hours, compared with 3 hours for an entirely manual procedure. Deep learning models can therefore improve the accuracy, consistency, and efficiency of primary tumor (T stage) delineation, but additional physician input may be required for lymph nodes.
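The two evaluation metrics quoted above can be computed as in the hedged sketch below; the toy masks, the pixel spacing, and the use of all mask voxels (rather than extracted contours) for the Hausdorff distance are assumptions made for illustration, not the authors' evaluation code.

```python
# A minimal sketch of the Dice similarity coefficient and Hausdorff distance
# between a predicted and a reference delineation.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def hausdorff_mm(pred: np.ndarray, ref: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance (mm) between the two masks' voxel sets."""
    p = np.argwhere(pred) * spacing_mm
    r = np.argwhere(ref) * spacing_mm
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

# Toy example: two overlapping square delineations with assumed 1 mm pixels.
pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[22:42, 22:42]  = True
print(dice(pred, ref), hausdorff_mm(pred, ref))
```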


Subject(s)
Deep Learning , Nasopharyngeal Neoplasms/diagnostic imaging , Nasopharyngeal Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Image-Guided , Tomography, X-Ray Computed , Adolescent , Adult , Aged , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Models, Theoretical , Nasopharyngeal Neoplasms/pathology , Neoplasm Grading , Neoplasm Staging , Organs at Risk , Radiotherapy, Image-Guided/methods , Young Adult
6.
Sci Rep ; 8(1): 9708, 2018 06 26.
Article in English | MEDLINE | ID: mdl-29946119

ABSTRACT

Optomotor response/reflex (OMR) assays are emerging as a powerful and versatile tool for phenotypic study and new drug discovery for eye and brain disorders. Yet efficient OMR assessment of visual performance in mice remains a challenge. Existing OMR testing devices for mice require a lengthy procedure and may be subject to bias due to the use of artificial criteria. We developed an optimized staircase protocol that uses mouse head-pausing behavior as a novel indicator of the absence of an OMR, allowing rapid and unambiguous vision assessment. It provides a highly sensitive and reliable method that can be easily implemented in automated or manual OMR systems to allow quick and unbiased assessment of visual acuity and contrast sensitivity in mice. The sensitivity and quantitative capacity of the protocol were validated using wild-type mice and an inherited mouse model of retinal degeneration: mice carrying a rhodopsin deficiency and exhibiting progressive loss of photoreceptors. Our OMR system with this protocol detected a progressive decline in visual function that was closely correlated with the loss of photoreceptors in rhodopsin-deficient mice. It offers significant advances over the methods in currently available OMR devices in terms of sensitivity, accuracy and efficiency.
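A simplified sketch of a staircase threshold search in the spirit of this protocol is given below; the starting spatial frequency, step sizes, and stopping criterion are assumptions rather than the published parameters.

```python
# A simplified staircase sketch: spatial frequency increases while the OMR is
# present, and steps back with a halved step when head pausing signals that the
# response is absent. Parameters are assumptions for illustration.
def staircase_acuity(omr_present, start=0.10, step=0.10, min_step=0.01):
    """Estimate the spatial-frequency threshold (cycles/degree).

    omr_present: callable taking a spatial frequency and returning True if the
    optomotor response is observed at that frequency (False on head pausing).
    """
    freq = start
    while step >= min_step:
        if omr_present(freq):
            freq += step            # response present: make the task harder
        else:
            step /= 2.0             # response absent: reverse with a finer step
            freq -= step
    return freq

# Toy example with a hypothetical "true" threshold of 0.4 c/d.
print(round(staircase_acuity(lambda f: f < 0.40), 3))
```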


Subject(s)
Visual Acuity/physiology , Animals , Contrast Sensitivity/physiology , Disease Models, Animal , Eye Movements/physiology , Head Movements/physiology , Mice , Photic Stimulation , Retinal Ganglion Cells/physiology , Visual Perception/physiology