Results 1 - 9 of 9
1.
Sci Rep ; 14(1): 13707, 2024 06 14.
Article in English | MEDLINE | ID: mdl-38877045

ABSTRACT

Determining the fundamental characteristics that define a face as "feminine" or "masculine" has long fascinated anatomists and plastic surgeons, particularly those involved in aesthetic and gender-affirming surgery. Previous studies in this area have relied on manual measurements, comparative anatomy, and heuristic landmark-based feature extraction. In this study, we retrospectively collected a dataset of 98 skull samples at Cedars Sinai Medical Center (CSMC), the first 3D medical-imaging dataset of its kind. We then evaluated the accuracy of multiple deep learning neural network architectures on sex classification with this dataset, choosing methods that represent three different approaches to modeling 3D data: Resnet3D, PointNet++, and MeshNet. Despite the limited number of imaging samples, our test results show that all three approaches achieve AUC scores above 0.9 after convergence. PointNet++ exhibits the highest accuracy and MeshNet the lowest. Our findings suggest that accuracy depends not only on the sparsity of the data representation but also on the architecture design; MeshNet's lower accuracy is likely due to its lack of a hierarchical structure for progressive data abstraction. We also studied a related problem: analyzing the various morphological features that affect sex classification. To this end we developed a new method, based on morphological gradients, for visualizing the features that influence model decision making; as an alternative to the standard saliency map, it provides a clearer visualization of feature importance. Our study is the first to develop and evaluate deep learning models that analyze 3D facial skull images to identify imaging-feature differences between individuals assigned male or female at birth.
These findings may be useful for planning and evaluating craniofacial surgery, particularly gender-affirming procedures, such as facial feminization surgery.
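The gradient-based visualization idea can be sketched in miniature as a finite-difference estimate of how much each voxel influences a classifier's score; the score function and volume below are toy stand-ins, not the paper's models or data.

```python
import numpy as np

def saliency_map(score_fn, volume, eps=1e-3):
    """Finite-difference saliency: nudge each voxel and record how much the
    classifier's score changes. Large |values| mark influential voxels."""
    base = score_fn(volume)
    grad = np.zeros_like(volume, dtype=float)
    for idx in np.ndindex(volume.shape):
        perturbed = volume.copy()
        perturbed[idx] += eps
        grad[idx] = (score_fn(perturbed) - base) / eps
    return grad

# Toy classifier whose score depends only on the front of the volume.
vol = np.zeros((4, 4, 4))
front_score = lambda v: float(v[:2].mean())
sal = saliency_map(front_score, vol)
```

In `sal`, the two front slices carry uniform nonzero weight while the rest are zero, mirroring how a saliency-style map highlights the anatomy that drives the decision.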


Subject(s)
Deep Learning , Imaging, Three-Dimensional , Neural Networks, Computer , Skull , Humans , Skull/anatomy & histology , Skull/diagnostic imaging , Imaging, Three-Dimensional/methods , Female , Male , Retrospective Studies , Sex Characteristics , Adult , Image Processing, Computer-Assisted/methods
2.
medRxiv ; 2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36263062

ABSTRACT

A pandemic of respiratory illness caused by a novel coronavirus, SARS-CoV-2, has swept across the globe since December 2019, calling on the research community, including medical imaging, to provide effective tools for combating the virus. Research in biomedical imaging of viral patients is already very active, with machine learning models being created to diagnose SARS-CoV-2 infection from CT scans and chest x-rays. We aim to build upon this research. Here we used a transfer-learning approach to develop models capable of diagnosing COVID-19 from chest x-rays. For this work we compiled a dataset of 112,120 negative images from Chest X-Ray 14 and 2,725 positive images from public repositories. We tested multiple models, including logistic regression, random forest, and XGBoost, with and without principal component analysis, using five-fold cross-validation to evaluate recall, precision, and F1-score. These models were compared with COVID-Net, a pre-trained deep-learning model for evaluating chest x-rays. Our best model was XGBoost with principal components, with a recall, precision, and F1-score of 0.692, 0.960, and 0.804, respectively. This model greatly outperformed COVID-Net, which scored 0.987, 0.025, and 0.048. With its high precision and reasonable sensitivity, our model would be most useful as a "rule-in" test for COVID-19. Though it outperforms some chemical assays in sensitivity, the model should be studied in patients who would not ordinarily receive a chest x-ray before being used for screening.
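The reported metrics follow the standard definitions; a minimal, dependency-free sketch with toy labels (not the study's data):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# A high-precision, modest-recall profile: every positive call is
# correct, but some true positives are missed.
p, r, f = precision_recall_f1([1, 1, 1, 0, 0], [1, 0, 0, 0, 0])
```

High precision means a positive prediction can be trusted, which is why a model with this profile serves as a "rule-in" test even with imperfect recall.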

3.
IEEE/ACM Trans Comput Biol Bioinform ; 19(3): 1387-1392, 2022.
Article in English | MEDLINE | ID: mdl-34061750

ABSTRACT

We present the Arkansas AI-Campus solution to the 2019 Kidney Tumor Segmentation Challenge (KiTS19). Our team participated in the KiTS19 Challenge for four months, from March to July of 2019. This paper summarizes our methods and our training, testing, and validation results for this grand challenge in biomedical imaging analysis. Our deep learning model is an ensemble of U-Net models, developed after testing many model variations, and performs consistently on both the local test dataset and the final independent competition test dataset. It achieved local-test Dice scores of 0.949 for combined kidney-and-tumor segmentation and 0.601 for tumor segmentation, and final competition-test Dice scores of 0.9470 and 0.6099, respectively. With a composite Dice score of 0.7784, the Arkansas AI-Campus solution ranked in the top fifty worldwide and in the top five among United States teams in the KiTS19 competition.
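The Dice score used for both local and competition evaluation is the overlap metric sketched below; the masks here are toy 2D examples, whereas KiTS19 scores 3D CT segmentations.

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
d = dice_score(pred, target)  # one overlapping pixel, 2 + 2 predicted/true
```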


Subject(s)
Image Processing, Computer-Assisted , Kidney Neoplasms , Humans , Kidney Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
4.
IEEE/ACM Trans Comput Biol Bioinform ; 19(2): 1165-1172, 2022.
Article in English | MEDLINE | ID: mdl-32991288

ABSTRACT

Lung cancer is the leading cause of cancer deaths. Low-dose computed tomography (CT) screening has been shown to significantly reduce lung cancer mortality, but it suffers from a high false-positive rate that leads to unnecessary diagnostic procedures. Deep learning techniques have the potential to improve lung cancer screening technology. Here we present DeepScreener, an algorithm that can predict a patient's cancer status from a volumetric lung CT scan. DeepScreener is based on our Spatial Pyramid Pooling model, which ranked 16th of 1972 teams (top 1 percent) in the Data Science Bowl 2017 competition (DSB2017), evaluated on the challenge datasets. Here we test the algorithm with an independent set of 1449 low-dose CT scans from the National Lung Screening Trial (NLST) cohort and find that DeepScreener delivers consistently high accuracy. Furthermore, by combining Spatial Pyramid Pooling with 3D convolution, it achieves an AUC of 0.892, surpassing previous state-of-the-art algorithms that use 3D convolution alone. Such advances in deep learning algorithms can potentially help improve lung cancer detection with low-dose CT scans.
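Spatial Pyramid Pooling, the building block named above, pools a feature map over grids of several resolutions and concatenates the results, so variable-sized inputs yield a fixed-length descriptor. A 2D numpy sketch of the idea (the algorithm itself works on 3D CT features):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a 2D feature map over coarser-to-finer grids and concatenate,
    yielding a fixed-length vector regardless of the input's spatial size."""
    h, w = fmap.shape
    feats = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, (i + 1) * h // n
                c0, c1 = j * w // n, (j + 1) * w // n
                feats.append(fmap[r0:r1, c0:c1].max())
    return np.array(feats)

# Two different input sizes produce the same 1 + 4 + 16 = 21-dim descriptor.
f1 = spatial_pyramid_pool(np.arange(64.0).reshape(8, 8))
f2 = spatial_pyramid_pool(np.arange(35.0).reshape(5, 7))
```

The fixed output length is what lets a downstream classifier accept scans of varying resolution without resampling.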


Subject(s)
Early Detection of Cancer , Lung Neoplasms , Algorithms , Early Detection of Cancer/methods , Humans , Lung , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
5.
medRxiv ; 2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36597524

ABSTRACT

We conducted a study of COVID-19 severity using chest x-ray images from a private dataset collected by our collaborator, St Bernards Medical Center. The dataset comprises chest x-ray images from 1,550 patients who were admitted to the emergency room (ER) and all tested positive for COVID-19. Our study focused on two questions. (1) Can we predict a patient's length of hospital stay from the chest x-ray taken at ER admission? The length of stay ranged from zero hours to 95 days and followed a power-law distribution. Based on our testing results, the prediction models struggled to detect a strong signal in the chest x-ray images, and no model performed better than a trivial most-frequent classifier. However, every model outperformed the most-frequent classifier when the data were split evenly into four categories. This suggests that there is signal in the images, and that performance may be improved further by adding clinical features and enlarging the training set. (2) Can we predict whether a patient is COVID-19 positive from the chest x-ray image? We also tested generalizability by training a prediction model on chest x-ray images from one hospital and testing it on images captured at other sites. With our private dataset and the COVIDx dataset, the prediction model achieved a high accuracy of 95.9%. However, in our hold-one-out study of the generalizability of models trained on chest x-rays, we found that model performance suffers from the significant reduction in training samples of any class.
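Splitting a heavy-tailed stay distribution "evenly into four categories" amounts to quartile binning; a sketch with synthetic durations, not the hospital data:

```python
import numpy as np

def quartile_bins(stays):
    """Assign each stay to one of four classes delimited by the quartiles,
    giving the classifier balanced targets despite the power-law tail."""
    edges = np.quantile(stays, [0.25, 0.5, 0.75])
    return np.digitize(stays, edges)

stays = np.arange(100)        # stand-in for stay durations
labels = quartile_bins(stays)
counts = np.bincount(labels)  # four roughly equal classes
```

Balanced classes make a most-frequent baseline easy to beat, which is why signal became visible under this framing.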

6.
Sci Rep ; 10(1): 20900, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33262425

ABSTRACT

One of the challenges in the urgent evaluation of patients with acute respiratory distress syndrome (ARDS) in the emergency room (ER) is distinguishing between cardiac and infectious etiologies for their pulmonary findings. We conducted a retrospective study with data collected from 171 ER patients, evaluating the classification of patients into cardiac and infectious causes using clinical data and chest x-ray images. We show that a deep-learning model trained on an external image dataset can be used to extract image features and improve the classification accuracy on a dataset that does not contain enough images to train a deep-learning model itself. An analysis of clinical feature importance was performed to identify the clinical features most important for ER patient classification. The current model is publicly available with an interface at: http://nbttranslationalresearch.org/ .
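The pattern described, using an externally trained network only as a frozen feature extractor and fitting a simple classifier on the small local cohort, can be sketched as follows. The fixed random projection stands in for the pretrained network, the nearest-centroid rule for the simple classifier, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(64, 8))  # stand-in for a frozen pretrained extractor

def extract_features(image):
    """Map a toy 8x8 image to a compact 8-dim feature vector."""
    return image.reshape(-1) @ W

def fit_centroids(feats, labels):
    """Tiny classifier on top of frozen features: one centroid per class."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, feat):
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Synthetic two-class toy images (dark vs bright), not patient data.
imgs = np.stack([np.full((8, 8), v) for v in [0.0, 0.1, 0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
feats = np.stack([extract_features(im) for im in imgs])
centroids = fit_centroids(feats, labels)
pred = predict(centroids, extract_features(np.full((8, 8), 0.05)))
```

Because only the small centroid model is fit locally, the approach needs far fewer labeled images than training a deep network end to end.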


Subject(s)
Deep Learning , Disease/classification , Emergency Service, Hospital , Patients/classification , Radiography, Thoracic , Respiratory Distress Syndrome/diagnostic imaging , Humans , Respiratory Distress Syndrome/etiology , Retrospective Studies
9.
Surg Clin North Am ; 94(5): 1115-26, ix, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25245971

ABSTRACT

Melanoma is the most dangerous form of skin cancer and the sixth leading cause of malignancy in the United States. Non-Caucasians have a decreased overall incidence of melanoma, but African Americans and other ethnic groups often have more advanced disease at initial diagnosis and higher mortality rates than Caucasian populations. Patients with more darkly pigmented skin have a higher percentage of acral lentiginous melanoma, which presents on the palms, soles, and subungual sites and carries specific genetic alterations. Increased awareness of melanoma presentation in pigmented skin may help reduce disparities between ethnic groups.


Subject(s)
Melanoma/ethnology , Racial Groups/ethnology , Skin Neoplasms/ethnology , Early Detection of Cancer , Humans , Melanoma/genetics , Melanoma/therapy , Prognosis , Racial Groups/genetics , Skin Neoplasms/genetics , Skin Neoplasms/therapy , Ultraviolet Rays/adverse effects