1.
Phys Med Biol; 65(3): 035017, 2020 Feb 05.
Article in English | MEDLINE | ID: mdl-31851961

ABSTRACT

Quality assurance of data prior to use in automated pipelines and image analysis would help safeguard against biases and incorrect interpretation of results. Automating quality assurance steps would further improve the robustness and efficiency of these methods, motivating widespread adoption of such techniques. Previous work by our group demonstrated the ability of convolutional neural networks (CNN) to efficiently classify head and neck (H&N) computed tomography (CT) images for the presence of dental artifacts (DA), which obscure the visualization of structures and degrade the accuracy of Hounsfield units. In this work we demonstrate the generalizability of our previous methodology by validating the CNNs on six external datasets, and assess the potential benefit of transfer learning with fine-tuning on CNN performance. 2112 H&N CT images from seven institutions were scored as DA positive or negative. 1538 images from a single institution were used to train three CNNs with resampling grid sizes of 64³, 128³ and 256³. The remaining six external datasets were used in five-fold cross-validation with a data split of 20% training/fine-tuning and 80% validation. The three pre-trained models were each validated using the five folds of the six external datasets. The pre-trained models also underwent transfer learning with fine-tuning on the 20% training/fine-tuning data and were validated on the corresponding validation data. The highest micro-averaged AUC for our pre-trained models across all external datasets occurred with a resampling grid of 256³ (AUC = 0.91 ± 0.01). Transfer learning with fine-tuning at a resampling grid of 256³ improved generalizability to a micro-averaged AUC of 0.92 ± 0.01. Despite these promising results, transfer learning did not improve the AUC when small resampling grids or small datasets were used. Our work demonstrates the potential of our previously developed automated quality assurance methods to generalize to external datasets. Additionally, we showed that transfer learning with fine-tuning on small portions of external datasets can adapt models for improved performance when large variations in images are present.


Subjects
Dental Implants, Head and Neck Neoplasms/diagnostic imaging, Machine Learning, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/standards, Tomography, X-Ray Computed/methods, Artifacts, Automation, Head and Neck Neoplasms/classification, Humans, Radiographic Image Interpretation, Computer-Assisted/methods
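The transfer-learning step described in the abstract above could be sketched roughly as follows. This is a minimal, hypothetical illustration rather than the authors' code: the model, data loaders, optimizer choice (Adam), learning rate, and binary cross-entropy loss are all assumptions, since the abstract does not specify a framework or hyperparameters. The sketch fine-tunes a pre-trained 3D CNN on the 20% training/fine-tuning split of an external dataset and reports AUC on the remaining 80%.

```python
# Hypothetical sketch (PyTorch + scikit-learn): fine-tune a pre-trained 3D CNN
# on the 20% split of an external dataset, then compute AUC on the 80% split.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

def fine_tune_and_validate(pretrained: nn.Module,
                           train_loader,          # 20% fine-tuning split (volume, DA label)
                           val_loader,            # 80% validation split
                           epochs: int = 20,
                           lr: float = 1e-4) -> float:
    """Fine-tune all layers with a small learning rate; return validation AUC."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = pretrained.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()             # binary DA-positive vs DA-negative

    for _ in range(epochs):
        model.train()
        for volumes, labels in train_loader:       # volumes: (N, 1, 256, 256, 256)
            optimizer.zero_grad()
            logits = model(volumes.to(device)).squeeze(1)
            loss = criterion(logits, labels.float().to(device))
            loss.backward()
            optimizer.step()

    # Score the held-out 80% of the external dataset.
    model.eval()
    scores, truth = [], []
    with torch.no_grad():
        for volumes, labels in val_loader:
            probs = torch.sigmoid(model(volumes.to(device)).squeeze(1))
            scores.extend(probs.cpu().tolist())
            truth.extend(labels.tolist())
    return roc_auc_score(truth, scores)
```

A micro-averaged AUC across the six external datasets, as reported above, would then correspond to pooling the validation scores and labels from all datasets before computing a single AUC, rather than averaging per-dataset AUCs.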
2.
Phys Med Biol; 65(1): 015005, 2020 Jan 10.
Article in English | MEDLINE | ID: mdl-31683260

ABSTRACT

Enabling automated pipelines, image analysis and big-data methodology in cancer clinics requires a thorough understanding of the data. Automated quality assurance steps could improve the efficiency and robustness of these methods by verifying possible data biases. In particular, in head and neck (H&N) computed tomography (CT) images, dental artifacts (DA) obscure the visualization of structures and degrade the accuracy of Hounsfield units; this is a challenge for image analysis tasks, including radiomics, where poor image quality can lead to systemic biases. In this work we analyze the performance of three-dimensional convolutional neural networks (CNN) trained to classify DA status. 1538 patient images were scored by a single observer as DA positive or negative. Stratified five-fold cross-validation was performed to train and test CNNs using various isotropic resampling grids (64³, 128³ and 256³), with CNN depths designed to produce 32³, 16³, and 8³ machine-generated features. These parameters were selected to determine whether more computationally efficient CNNs could achieve the same performance. The area under the precision-recall curve (PR-AUC) was used to assess CNN performance. The highest PR-AUC (0.92 ± 0.03) was achieved with a CNN depth of 5 and a resampling grid of 256³. The CNN performance with the 256³ resampling grid was not significantly better after 20 epochs than with 64³ and 128³, which had PR-AUC = 0.89 ± 0.03 (p-value = 0.28) and 0.91 ± 0.02 (p-value = 0.93) at depths of 3 and 4, respectively. Our experiments demonstrate the potential to automate specific quality assurance tasks required for unbiased and robust automated pipeline and image analysis research. Additionally, we determined that there is an opportunity to simplify CNNs with smaller resampling grids, making the process more amenable to the very large datasets that will be available in the future.


Subjects
Dental Implants, Head and Neck Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer, Quality Assurance, Health Care/standards, Tomography, X-Ray Computed/methods, Artifacts, Automation, Head and Neck Neoplasms/classification, Humans
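The evaluation protocol in this second abstract, stratified five-fold cross-validation scored with PR-AUC, could look like the sketch below. It is a hedged illustration, not the authors' implementation: build_cnn and train_and_score are hypothetical placeholders for constructing and training a 3D CNN at a given resampling grid (64³, 128³ or 256³) and depth, and scikit-learn's average_precision_score is used as the PR-AUC estimate.

```python
# Hypothetical sketch: stratified five-fold cross-validation of a DA classifier,
# scored with the area under the precision-recall curve (PR-AUC).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import average_precision_score

def cross_validate_pr_auc(volumes: np.ndarray,   # (N, D, D, D) isotropically resampled CTs
                          labels: np.ndarray,    # (N,) binary DA labels
                          build_cnn,             # hypothetical: returns an untrained model
                          train_and_score,       # hypothetical: trains, returns val scores
                          n_splits: int = 5,
                          seed: int = 0):
    """Return mean and standard deviation of per-fold PR-AUC for one configuration."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_aucs = []
    for train_idx, val_idx in skf.split(volumes, labels):
        model = build_cnn()                                    # e.g. depth 3, 4 or 5
        val_scores = train_and_score(model,
                                     volumes[train_idx], labels[train_idx],
                                     volumes[val_idx])
        fold_aucs.append(average_precision_score(labels[val_idx], val_scores))
    return float(np.mean(fold_aucs)), float(np.std(fold_aucs))
```

Running this once per resampling grid and depth would yield the per-configuration PR-AUC means and standard deviations that the abstract compares (e.g. 0.92 ± 0.03 at 256³ versus 0.89 ± 0.03 at 64³).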