Results 1 - 2 of 2
1.
Comput Biol Med ; 143: 105282, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35220074

ABSTRACT

We created a deep learning model, trained on text classified by natural language processing (NLP), to assess right ventricular (RV) size and function from echocardiographic images. We included 12,684 examinations with corresponding written reports for text classification. After manual annotation of 1489 reports, we trained an NLP model to classify the remaining 10,651 reports. A view classifier was developed to select the 4-chamber or RV-focused view from an echocardiographic examination (n = 539). The final models were two image classification models, trained on the predicted labels from the combined manual annotation and NLP models and the corresponding echocardiographic view, to assess RV function (training set n = 11,008) and size (training set n = 9951). The text classifier identified impaired RV function with 99% sensitivity and 98% specificity, and RV enlargement with 98% sensitivity and 98% specificity. The view classification model identified the 4-chamber view with 92% accuracy and the RV-focused view with 73% accuracy. The image classification models identified impaired RV function with 93% sensitivity and 72% specificity, and an enlarged RV with 80% sensitivity and 85% specificity; agreement with the written reports was substantial (both κ = 0.65). Our findings show that models for automatic image assessment can be trained to classify RV size and function by using model-annotated data from written echocardiography reports. This pipeline for auto-annotation of the echocardiographic images, using an NLP model with medical reports as input, can be used to train an image-assessment model without manual annotation of images, and enables fast and inexpensive expansion of the training dataset when needed.
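The auto-annotation idea in this abstract can be sketched in a few lines: a text model labels each written report, and those labels then serve as training targets for an image model, so no manual image annotation is needed. The sketch below is purely illustrative — the keyword rule stands in for the paper's trained NLP classifier, and all names are hypothetical.

```python
# Minimal sketch of NLP-based auto-annotation (illustrative only;
# the real pipeline uses a trained NLP model, not keyword matching).

def classify_report(report_text: str) -> str:
    """Toy stand-in for the NLP text classifier: maps a written
    echocardiography report to an RV-function label."""
    text = report_text.lower()
    if "impaired" in text or "reduced" in text:
        return "impaired_rv_function"
    return "normal_rv_function"

def auto_annotate(reports: list[str]) -> list[tuple[str, str]]:
    """Label every report, yielding (report, label) pairs that an
    image classifier could later be trained on, in place of
    manually annotated images."""
    return [(report, classify_report(report)) for report in reports]

reports = [
    "Moderately impaired RV function.",
    "Normal right ventricular size and function.",
]
labeled = auto_annotate(reports)
```

Because the labels come from text already written during routine care, the training set can be expanded cheaply by running the text classifier over new reports, which is the cost advantage the abstract highlights.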

2.
Minim Invasive Ther Allied Technol ; 28(5): 309-316, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30663462

ABSTRACT

Background: The benefit of haptic feedback in laparoscopic virtual reality simulators (VRS) is ambiguous. A previous study found 32% faster acquisition of skills with the combination of 3D and haptic feedback compared to 2D only. This study aimed to validate the perception and the effect on performance of haptic feedback by experienced surgeons in the previously tested VRS. Material and methods: A randomized, single-blinded cross-over study with laparoscopists (>100 laparoscopic procedures) was conducted in a VRS with 3D imaging. One group started with haptic feedback, and the other group without. After performing the suturing task with haptics either enabled or disabled, the groups crossed over to the opposite setting. Face validity was assessed through questionnaires. Metrics were obtained from the VRS. Results: The haptics for 'handling the needle', 'needle through tissue' and 'tying the knot' were scored as completely realistic by 3/22, 1/22 and 2/22 participants, respectively. Comparing the metrics for maximum stretch damage between the groups revealed a significantly lower score when a group performed with haptics enabled (p = .027, haptics-first group; p < .001, haptics-last group). Conclusion: Haptic feedback in VRS has limited fidelity according to the tested laparoscopic surgeons. Despite this, significantly less stretch damage was caused with haptics enabled.


Subjects
Computer-Assisted Instruction/methods , Laparoscopy/education , Laparoscopy/methods , Surgeons/education , Suture Techniques/education , Virtual Reality , Adult , Cross-Over Studies , Feedback , Female , Humans , Male , Middle Aged , Random Allocation