Results 1 - 6 of 6
1.
Med Phys ; 47(11): 5609-5618, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32740931

ABSTRACT

PURPOSE: Organ segmentation of computed tomography (CT) imaging is essential for radiotherapy treatment planning. Treatment planning requires segmentation not only of the affected tissue, but of nearby healthy organs-at-risk, which is laborious and time-consuming. We present a fully automated segmentation method based on the three-dimensional (3D) U-Net convolutional neural network (CNN) capable of whole abdomen and pelvis segmentation into 33 unique organ and tissue structures, including tissues that may be overlooked by other automated segmentation approaches such as adipose tissue, skeletal muscle, and connective tissue and vessels. Whole abdomen segmentation is capable of quantifying exposure beyond a handful of organs-at-risk to all tissues within the abdomen. METHODS: Sixty-six CT examinations of 64 individuals were included in the training and validation sets, and 18 CT examinations from 16 individuals were included in the test set. All pixels in each examination were segmented by image analysts (with physician correction) and assigned one of 33 labels. Segmentation was performed with a 3D U-Net variant architecture that included residual blocks, and model performance was quantified on 18 test cases. Human interobserver variability (using semiautomated segmentation) was also reported on two scans, and manual interobserver variability of three individuals was reported on one scan. Model performance was also compared to several of the best models reported in the literature for multiple organ segmentation. RESULTS: The accuracy of the 3D U-Net model ranges from a Dice coefficient of 0.95 in the liver to 0.51 in the renal arteries (0.93 in the kidneys, 0.79 in the pancreas, and 0.69 in the adrenals). Model accuracy is within 5% of human segmentation in eight of 19 organs and within 10% in 13 of 19 organs. CONCLUSIONS: The CNN approaches the accuracy of human tracers and, on certain complex organs, displays more consistent prediction than human tracers.
Fully automated deep learning-based segmentation of CT abdomen has the potential to improve both the speed and accuracy of radiotherapy dose prediction for organs-at-risk.
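The Dice coefficient used to report accuracy above can be computed directly from a pair of binary masks; a minimal sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2D "organ" masks: the prediction covers 3 of 4 ground-truth voxels.
truth = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
score = dice_coefficient(pred, truth)
print(round(score, 3))  # → 0.857 (i.e., 2*3 / (3 + 4))
```

A Dice of 0.95 (liver) therefore indicates near-complete voxel overlap, while 0.51 (renal arteries) reflects the difficulty of thin, branching structures.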


Subjects
Abdomen; Neural Networks, Computer; Abdomen/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Organs at Risk; Pelvis/diagnostic imaging; Tomography, X-Ray Computed
2.
JAMA Facial Plast Surg ; 21(6): 511-517, 2019 Dec 01.
Article in English | MEDLINE | ID: mdl-31486840

ABSTRACT

IMPORTANCE: Preoperative assessment of nasal soft-tissue envelope (STE) thickness is an important component of rhinoplasty that presently lacks validated tools. OBJECTIVE: To measure and assess the distribution of nasal STE thickness in a large patient population and to determine if facial plastic surgery clinicians can predict nasal STE thickness based on visual examination of the nose. DESIGN, SETTING, AND PARTICIPANTS: This retrospective review and prospective assessment of 190 adult patients by 4 expert raters was conducted at an academic tertiary referral center. The patients had high-resolution maxillofacial computed tomography (CT) scans and standardized facial photographs on file and did not have a history of nasal fracture, septal perforation, rhinoplasty, or other surgery or medical conditions altering nasal form. Data were analyzed in March 2019. MAIN OUTCOMES AND MEASURES: Measure nasal STE thickness at defined anatomic subsites using high-resolution CT scans. Measure expert-predicted nasal STE thickness based on visual examination of the nose using a scale from 0 (thinnest) to 100 (thickest). RESULTS: Of the 190 patients, 78 were women, and the mean (SD) age was 45 (17) years. The nasal STE was thickest at the sellion (mean [SD], 6.7 [1.7] mm), thinnest at the rhinion (2.1 [0.7] mm), thickened over the supratip (4.8 [1.0] mm) and nasal tip (3.1 [0.6] mm), and thinned over the columella (2.6 [0.4] mm). In the study population, nasal STE thickness followed a nearly normal distribution for each measured subsite, with the majority of patients in a medium thickness range. Comparison of predicted and actual nasal STE thickness showed that experts could accurately predict nasal STE thickness, with the highest accuracy at the nasal tip (r, 0.73; prediction accuracy, 91%). A strong positive correlation was noted among the experts' STE estimates (r, 0.83-0.89), suggesting a high level of agreement between individual raters.
CONCLUSIONS AND RELEVANCE: There is variable thickness of the nasal STE, which influences the external nasal contour and rhinoplasty outcomes. With visual analysis of the nose, experts can agree on and predict nasal STE thickness, with the highest accuracy at the nasal tip. These data can aid in preoperative planning for rhinoplasty, allowing implementation of preoperative, intraoperative, and postoperative strategies to optimize the nasal STE, which may ultimately improve patient outcomes and satisfaction. LEVEL OF EVIDENCE: NA.
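The inter-rater agreement reported above (Pearson's r of 0.83-0.89) can be sketched on synthetic data; the ratings below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical 0-100 STE thickness estimates from two raters for six noses.
rater_a = np.array([55, 40, 70, 30, 65, 50], dtype=float)
rater_b = np.array([60, 35, 75, 28, 70, 48], dtype=float)

# Pearson correlation between the two raters' estimates; values near 1.0
# indicate the raters rank-order and scale the noses similarly.
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(round(r, 2))
```

Because the two invented rating vectors move together, r here lands close to 1, mirroring the strong agreement the study observed between expert raters.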


Subjects
Algorithms; Nose/anatomy & histology; Physical Examination; Rhinoplasty; Female; Humans; Male; Middle Aged; Prospective Studies; Reference Values; Retrospective Studies; Tomography, X-Ray Computed
3.
J Digit Imaging ; 32(4): 571-581, 2019 08.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment that enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods for annotating medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms to enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
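The AID feedback loop described above (annotate, approve, retrain, repeat) can be sketched as plain Python; every function name and the batching scheme here are assumptions for illustration, not RIL-Contour's actual API:

```python
def aid_loop(unlabeled, model, annotate, approve, train, batch_size=10, rounds=3):
    """Annotation by iterative deep learning (AID), sketched generically:
    the model pre-annotates a batch, analysts correct the proposals,
    radiologists approve them, and the model is retrained on the
    growing labeled set before the next batch."""
    labeled = []
    for _ in range(rounds):
        batch, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        if not batch:
            break
        proposals = [model(x) for x in batch]                # model pre-annotates
        corrected = [annotate(x, p) for x, p in zip(batch, proposals)]  # analyst edits
        approved = [c for c in corrected if approve(c)]      # radiologist sign-off
        labeled.extend(approved)
        model = train(model, labeled)                        # retrain on growing set
    return model, labeled

# Trivial stand-ins so the loop structure can be exercised end to end.
model, labeled = aid_loop(
    list(range(25)),
    model=lambda x: x,
    annotate=lambda x, p: p,
    approve=lambda c: True,
    train=lambda m, data: m,
)
print(len(labeled))  # → 25
```

The point of the structure is that later rounds start from better pre-annotations, so the analyst's per-image correction effort shrinks as the labeled set grows.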


Subjects
Datasets as Topic; Deep Learning; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Radiology Information Systems; Humans
4.
Radiology ; 290(3): 669-679, 2019 03.
Article in English | MEDLINE | ID: mdl-30526356

ABSTRACT

Purpose To develop and evaluate a fully automated algorithm for segmenting the abdomen from CT to quantify body composition. Materials and Methods For this retrospective study, a convolutional neural network based on the U-Net architecture was trained to perform abdominal segmentation on a data set of 2430 two-dimensional CT examinations and was tested on 270 CT examinations. It was further tested on a separate data set of 2369 patients with hepatocellular carcinoma (HCC). CT examinations were performed between 1997 and 2015. The mean age of patients was 67 years; for male patients, it was 67 years (range, 29-94 years), and for female patients, it was 66 years (range, 31-97 years). Differences in segmentation performance were assessed by using two-way analysis of variance with Bonferroni correction. Results Compared with reference segmentation, the model for this study achieved Dice scores (mean ± standard deviation) of 0.98 ± 0.03, 0.96 ± 0.02, and 0.97 ± 0.01 in the test set, and 0.94 ± 0.05, 0.92 ± 0.04, and 0.98 ± 0.02 in the HCC data set, for the subcutaneous, muscle, and visceral adipose tissue compartments, respectively. Performance met or exceeded that of expert manual segmentation. Conclusion Model performance met or exceeded the accuracy of expert manual segmentation of CT examinations for both the test data set and the hepatocellular carcinoma data set. The model generalized well to multiple levels of the abdomen and may be capable of fully automated quantification of body composition metrics in three-dimensional CT examinations. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Chang in this issue.


Subjects
Body Composition; Deep Learning; Pattern Recognition, Automated; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Abdominal; Tomography, X-Ray Computed; Adult; Aged; Aged, 80 and over; Algorithms; Carcinoma, Hepatocellular/diagnostic imaging; Humans; Liver Neoplasms/diagnostic imaging; Middle Aged; Retrospective Studies
5.
AMIA Annu Symp Proc ; 2018: 942-951, 2018.
Article in English | MEDLINE | ID: mdl-30815137

ABSTRACT

Visualizing process metrics can help identify targets for improvement initiatives. Dashboards and scorecards are tools to visualize important metrics in an easily interpretable manner. We describe the development of two visualization systems: a dashboard to provide real-time situational awareness to frontline coordinators, and a scorecard to display aggregate monthly performance metrics for strategic process improvement efforts. Both systems were designed by a multidisciplinary team of physicians, allied health staff, engineers, and information technology specialists. We describe the process of defining important metrics, gathering and cleaning data, and designing the visualization interfaces. We also describe some improvement initiatives that stemmed from these systems. These systems were implemented in our hospital and improved the availability of data to our staff and leadership, making performance gaps visible and generating new targets for quality improvement projects.


Subjects
Data Display; Data Visualization; Radiology Department, Hospital/organization & administration; Radiology Information Systems; Information Services; Personnel, Hospital; Quality Improvement; User-Computer Interface
6.
Acad Radiol ; 22(2): 247-55, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25964956

ABSTRACT

Rationale and Objectives: The primary role of radiology in the preclinical setting is the use of imaging to improve students' understanding of anatomy. Many currently available Web-based anatomy programs include either suboptimal or overwhelming levels of detail for medical students. Our objective was to develop a user-friendly software program that anatomy instructors can completely tailor to match the desired level of detail for their curriculum, meets the unique needs of first- and second-year medical students, and is compatible with most Internet browsers and tablets. Materials and Methods: RadStax is a Web-based application developed using free, open-source, ubiquitous software. RadStax was first introduced as an interactive resource for independent study and later incorporated into lectures. First- and second-year medical students were surveyed for quantitative feedback regarding their experience. Results: RadStax was successfully introduced into our medical school curriculum. It allows the creation of learning modules with labeled multiplanar (MPR) image sets, basic anatomic information, and a self-assessment feature. The program received overwhelmingly positive feedback from students. Of 115 students surveyed, 87.0% found it highly effective as a study tool and 85.2% reported high user satisfaction with the program. Conclusions: RadStax is a novel application for instructors wishing to create an atlas of labeled MPR radiologic studies tailored to meet the specific needs of their curriculum. Simple and focused, it provides an interactive experience for students similar to the practice of radiologists. This program is a robust anatomy teaching tool that effectively aids in educating the preclinical medical student.


Subjects
Anatomy/education; Computer-Assisted Instruction/statistics & numerical data; Education, Medical, Undergraduate/methods; Internet/statistics & numerical data; Radiology/education; Software; Computer-Assisted Instruction/methods; Curriculum; Educational Measurement/methods; Educational Measurement/statistics & numerical data; New York; Software Design; Teaching/methods; User-Computer Interface