Results 1 - 4 of 4
1.
Int J Comput Assist Radiol Surg ; 12(1): 161-166, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27350057

ABSTRACT

PURPOSE: With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the amount of medical-expert effort required to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. METHODS: So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. RESULTS: On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82 % and a precision of 42 %. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95 %), but leads to a decrease in precision (as low as 22 %). CONCLUSION: Our experiments show that anonymous untrained workers can feasibly detect errors in automatic cortical surface delineations. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
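To make the merging step concrete, the sketch below is an illustrative reading of the abstract rather than the authors' implementation: crowd error annotations are treated as binary masks, merged by pixel-wise union, and scored against an expert reference with sensitivity and precision. The mask size, the union rule and the random toy data are assumptions.

```python
# Hedged sketch: union-merge several crowd error masks for one slice and score
# the result against an expert reference mask (pixel-wise sensitivity/precision).
import numpy as np

def merge_annotations(crowd_masks):
    """Union of binary error masks from several workers.

    Merging this way tends to raise sensitivity (fewer missed errors) while
    lowering precision (more false positives), mirroring the trade-off
    reported in the abstract.
    """
    merged = np.zeros_like(crowd_masks[0], dtype=bool)
    for mask in crowd_masks:
        merged |= mask.astype(bool)
    return merged

def sensitivity_precision(pred, reference):
    """Pixel-wise sensitivity and precision of pred vs. an expert reference."""
    pred, reference = pred.astype(bool), reference.astype(bool)
    tp = np.logical_and(pred, reference).sum()
    fn = np.logical_and(~pred, reference).sum()
    fp = np.logical_and(pred, ~reference).sum()
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    return sens, prec

# Toy example: random masks stand in for worker annotations and the expert mask.
rng = np.random.default_rng(0)
workers = [rng.random((128, 128)) > 0.95 for _ in range(5)]
expert = rng.random((128, 128)) > 0.95
print(sensitivity_precision(merge_annotations(workers), expert))
```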


Subject(s)
Algorithms; Cerebral Cortex/diagnostic imaging; Crowdsourcing/methods; Image Processing, Computer-Assisted/standards; Quality Control; Automation/standards; Datasets as Topic; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neuroimaging
2.
Int J Comput Assist Radiol Surg ; 10(8): 1201-12, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25895078

ABSTRACT

PURPOSE: Feature tracking and 3D surface reconstruction are key enabling techniques for computer-assisted minimally invasive surgery. One of the major bottlenecks related to training and validation of new algorithms is the lack of large amounts of annotated images that fully capture the wide range of anatomical/scene variance in clinical practice. To address this issue, we propose a novel approach to obtaining large numbers of high-quality reference image annotations at low cost in an extremely short period of time. METHODS: The concept is based on outsourcing the correspondence search to a crowd of anonymous users from an online community (crowdsourcing) and comprises four stages: (1) feature detection, (2) correspondence search via crowdsourcing, (3) merging of multiple annotations per feature by fitting Gaussian finite mixture models, and (4) outlier removal using the result of the clustering as input for a second annotation task. RESULTS: On average, 10,000 annotations were obtained within 24 h at a cost of $100. The annotations of the crowd after clustering and before outlier removal were of expert quality, with a median distance of about 1 pixel to a publicly available reference annotation. The threshold for the outlier removal task directly determines the maximum annotation error, but also the number of points removed. CONCLUSIONS: Our concept is a novel and effective method for fast, low-cost and highly accurate correspondence generation that could be adapted to various other applications related to large-scale data annotation in medical image computing and computer-assisted interventions.
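The following sketch illustrates stages (3) and (4) under stated assumptions: scikit-learn's GaussianMixture stands in for the Gaussian finite mixture model fit, the component count and the 5-pixel threshold are illustrative values, and the paper's second crowdsourced annotation task is replaced here by a simple distance check.

```python
# Hedged sketch: merge multiple 2D crowd clicks per feature with a Gaussian
# mixture model, then remove outliers by distance to the merged estimate.
import numpy as np
from sklearn.mixture import GaussianMixture

def merge_clicks(clicks, n_components=2):
    """Return the mean of the dominant mixture component as the merged position."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(clicks)
    dominant = np.argmax(gmm.weights_)
    return gmm.means_[dominant]

def remove_outliers(clicks, merged, threshold_px=5.0):
    """Keep only clicks within threshold_px of the merged estimate."""
    dist = np.linalg.norm(clicks - merged, axis=1)
    return clicks[dist <= threshold_px]

# Toy correspondence: eight clicks near (100, 50) plus one gross outlier.
clicks = np.array([[100.5, 50.2], [99.8, 49.7], [100.1, 50.4], [100.3, 49.9],
                   [99.6, 50.1], [100.2, 50.3], [99.9, 49.8], [140.0, 90.0]])
merged = merge_clicks(clicks)
inliers = remove_outliers(clicks, merged)
print(merged, len(inliers))
```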


Subject(s)
Minimally Invasive Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Algorithms; Benchmarking; Humans
3.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 349-56, 2014.
Article in English | MEDLINE | ID: mdl-25485398

ABSTRACT

Computer-assisted minimally-invasive surgery (MIS) is often based on algorithms that require establishing correspondences between endoscopic images. However, reference annotations frequently required to train or validate a method are extremely difficult to obtain because they are typically made by a medical expert with very limited resources, and publicly available data sets are still far too small to capture the wide range of anatomical/scene variance. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. To our knowledge, this paper is the first to investigate the concept of crowdsourcing in the context of endoscopic video image annotation for computer-assisted MIS. According to our study on publicly available in vivo data with manual reference annotations, anonymous non-experts obtain a median annotation error of 2 px (n = 10,000). By applying cluster analysis to multiple annotations per correspondence, this error can be reduced to about 1 px, which is comparable to that obtained by medical experts (n = 500). We conclude that crowdsourcing is a viable method for generating high quality reference correspondences in endoscopic video images.
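A minimal sketch of the evaluation described above, with assumptions clearly labeled: DBSCAN stands in for the unspecified cluster analysis, its parameters and the toy click coordinates are made up, and a single reference point plays the role of the public manual annotation.

```python
# Hedged sketch: median pixel error of raw crowd clicks vs. a cluster-based
# estimate for one correspondence, measured against a reference annotation.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_estimate(clicks, eps=3.0, min_samples=2):
    """Mean of the largest DBSCAN cluster; falls back to the overall median."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(clicks)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return np.median(clicks, axis=0)
    largest = np.bincount(valid).argmax()
    return clicks[labels == largest].mean(axis=0)

def median_error(points, reference):
    """Median Euclidean distance (in pixels) to the reference position."""
    return float(np.median(np.linalg.norm(points - reference, axis=1)))

# Toy correspondence: reference at (200, 120), nine crowd clicks with scatter.
reference = np.array([200.0, 120.0])
clicks = reference + np.array([[1.8, -0.5], [-1.2, 2.1], [0.4, 0.9], [-2.3, -1.7],
                               [0.7, -0.2], [1.1, 1.4], [-0.6, 0.3], [9.0, -7.0],
                               [0.2, -1.1]])
print(median_error(clicks, reference))                              # raw crowd error
print(median_error(cluster_estimate(clicks)[None, :], reference))   # after clustering
```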


Subject(s)
Algorithms; Capsule Endoscopy/methods; Crowdsourcing/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; User-Computer Interface; Humans; Observer Variation; Reproducibility of Results; Sensitivity and Specificity
4.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 438-45, 2014.
Article in English | MEDLINE | ID: mdl-25485409

ABSTRACT

Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.
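The sketch below shows one plausible way to fuse crowd instrument segmentations and compare them to an expert mask; it is not the study's pipeline. Majority voting and the Dice coefficient are common choices for this kind of comparison, and the toy masks and noise level are assumptions.

```python
# Hedged sketch: pixel-wise majority vote over crowd segmentation masks,
# compared to an expert mask via the Dice overlap coefficient.
import numpy as np

def majority_vote(masks):
    """Pixel-wise majority vote over a list of binary masks."""
    stacked = np.stack([m.astype(bool) for m in masks])
    return stacked.mean(axis=0) >= 0.5

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a square "instrument" with small per-worker perturbations.
expert = np.zeros((64, 64), dtype=bool)
expert[20:40, 25:45] = True
rng = np.random.default_rng(1)
crowd = [np.logical_xor(expert, rng.random((64, 64)) > 0.98) for _ in range(7)]
print(dice(majority_vote(crowd), expert))
```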


Subject(s)
Artificial Intelligence; Crowdsourcing/instrumentation; Crowdsourcing/methods; Information Storage and Retrieval/methods; Laparoscopes; Laparoscopy/methods; Pattern Recognition, Automated/methods; Algorithms; Equipment Design; Equipment Failure Analysis; Humans; Image Enhancement/instrumentation; Image Enhancement/methods; Observer Variation; Reproducibility of Results; Sensitivity and Specificity