Results 1 - 20 of 28
1.
Data Brief ; 52: 110030, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38299104

ABSTRACT

The proposed dataset comprises 398 videos, each featuring an individual engaged in specific video surveillance actions. The ground truth for this dataset was expertly curated and is presented in JSON format (standard COCO), offering vital information about the dataset, video frames, and annotations, including precise bounding boxes outlining detected objects. The dataset encompasses three distinct categories for object detection: "Handgun", "Machine_Gun", and "No_Gun", depending on the video's content. This dataset serves as a resource for research in firearm-related action recognition, firearm detection, security, and surveillance applications, enabling researchers and practitioners to develop and evaluate machine learning models for the detection of handguns and rifles across various scenarios. The meticulous ground truth annotations facilitate precise model evaluation and performance analysis, making this dataset an asset in the field of computer vision and public safety.
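Since the ground truth follows the standard COCO layout, it can be parsed with plain Python. The sketch below (the file name and the exact fields used are illustrative, not prescribed by the dataset description) groups the bounding boxes by frame and resolves the category names:

```python
import json
from collections import defaultdict

# Load the COCO-style ground truth; the file name is a placeholder.
with open("annotations.json") as f:
    coco = json.load(f)

# Resolve category ids to names ("Handgun", "Machine_Gun", "No_Gun").
categories = {c["id"]: c["name"] for c in coco["categories"]}

# Group bounding boxes by frame; COCO boxes are [x, y, width, height].
boxes_per_frame = defaultdict(list)
for ann in coco["annotations"]:
    boxes_per_frame[ann["image_id"]].append(
        (categories[ann["category_id"]], ann["bbox"]))

for img in coco["images"][:5]:
    print(img["file_name"], boxes_per_frame.get(img["id"], []))
```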

2.
Comput Methods Programs Biomed ; 235: 107528, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37040684

ABSTRACT

BACKGROUND AND OBJECTIVE: This paper presents a quantitative comparison of three generative models of digital staining, also known as virtual staining, in the H&E modality (i.e., Hematoxylin and Eosin), applied to 5 types of breast tissue. Moreover, a qualitative evaluation of the results achieved with the best model was carried out. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range. METHODS: The models compared are a conditional GAN (pix2pix), which uses aligned pairs of stained/unstained images, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). These models are compared in terms of the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is achieved by digitally unstaining the chemically stained images with a model obtained to guarantee the cyclic consistency of the generative models. RESULTS: The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN both for its greater structural similarity with respect to chemical staining (mean SSIM ∼ 0.95) and its lower chromatic discrepancy (10%). To this end, quantization and calculation of the EMD (Earth Mover's Distance) between clusters are used. In addition, a quality evaluation through subjective psychophysical tests with three experts was carried out to assess the results obtained with the best model (cycleGAN). CONCLUSIONS: The results can be satisfactorily evaluated by metrics that use as reference image a chemically stained sample, together with the digitally stained images of the reference sample obtained after prior digital unstaining. These metrics demonstrate that generative staining models that guarantee cyclic consistency provide the results closest to chemical H&E staining, which is also consistent with the qualitative evaluation by the experts.
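As an illustration of the two kinds of metrics mentioned above, the snippet below computes SSIM and a simplified per-channel Earth Mover's Distance between a chemically stained image and its virtually stained counterpart. File names are placeholders, and the per-channel histograms only approximate the cluster-based EMD described in the paper.

```python
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity as ssim
from scipy.stats import wasserstein_distance

# Placeholder file names for a chemically stained image and its virtual counterpart.
chem = imread("chemical_HE.png").astype(np.float64)
virt = imread("virtual_HE.png").astype(np.float64)

# Structural similarity over the RGB image (channel_axis selects the colour axis).
s = ssim(chem, virt, data_range=255, channel_axis=-1)

# Crude chromatic discrepancy: mean per-channel Earth Mover's Distance between
# intensity distributions (the paper computes EMD between colour clusters).
emd = np.mean([wasserstein_distance(chem[..., c].ravel(), virt[..., c].ravel())
               for c in range(3)])
print(f"SSIM = {s:.3f}   EMD = {emd:.1f}")
```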


Subject(s)
Deep Learning , Microscopy , Staining and Labeling , Benchmarking , Eosine Yellowish-(YS) , Image Processing, Computer-Assisted
3.
Comput Methods Programs Biomed ; 219: 106775, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35397412

ABSTRACT

BACKGROUND AND OBJECTIVE: Training a deep convolutional neural network (CNN) for automatic image classification requires a large database of labeled samples. However, in some applications such as biology and medicine only a few experts can correctly categorize each sample. Experts are able to identify small changes in shape and texture that go unnoticed by untrained people, as well as distinguish between objects in the same class that present drastically different shapes and textures. This means that currently available databases are too small and not suitable for training deep learning models from scratch. To deal with this problem, data augmentation techniques are commonly used to increase the dataset size. However, typical data augmentation methods introduce artifacts or apply distortions to the original image, which, instead of creating new realistic samples, yield basic spatial variations of the original ones. METHODS: We propose a novel data augmentation procedure which generates new realistic samples by combining two samples that belong to the same class. Although the idea behind the method described in this paper is to mimic the variations that diatoms experience in different stages of their life cycle, it has also been demonstrated on glomeruli and pollen identification problems. This new data augmentation procedure is based on morphing and image registration methods that perform diffeomorphic transformations. RESULTS: The proposed technique achieves an increase in accuracy over existing techniques of 0.47%, 1.47%, and 0.23% for the diatom, glomeruli and pollen problems respectively. CONCLUSIONS: For the Diatom dataset, the method is able to simulate the shape changes across diatom life cycle stages, and thus the generated images resemble newly acquired samples with intermediate shapes. In fact, the other data augmentation methods compared obtained worse results than not using data augmentation at all. For the Glomeruli dataset, the method is able to add new samples with different shapes and degrees of sclerosis (through different textures). This is the case where our proposed DA method is most beneficial: when objects differ strongly in both shape and texture. Finally, for the Pollen dataset, since there are only small variations between samples in a few classes and this dataset has other characteristics, such as noise, which are likely to benefit other existing DA techniques, the method still shows an improvement in the results.
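A minimal sketch of the combine-two-samples idea follows. It uses rigid alignment and a linear cross-dissolve as a crude stand-in for the diffeomorphic morphing/registration actually proposed, and assumes same-size grayscale images.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift

def blend_pair(img_a, img_b, alpha=0.5):
    """Create a synthetic sample by rigidly aligning two same-class images and
    cross-dissolving them. This is a simplified stand-in for the diffeomorphic
    morphing used in the paper, shown only to illustrate the combination idea."""
    offset, _, _ = phase_cross_correlation(img_a, img_b)
    aligned_b = shift(img_b, offset)            # rigidly align b onto a
    return (1 - alpha) * img_a + alpha * aligned_b

# Usage sketch: generate intermediate "life-cycle" shapes between two diatoms.
# new_samples = [blend_pair(d1, d2, a) for a in np.linspace(0.25, 0.75, 3)]
```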


Subject(s)
Data Management , Neural Networks, Computer , Databases, Factual , Humans
4.
Entropy (Basel) ; 22(11)2020 Oct 24.
Article in English | MEDLINE | ID: mdl-33286969

ABSTRACT

Adversarial examples are one of the most intriguing topics in modern deep learning. Imperceptible perturbations to the input can fool robust models. In relation to this problem, attack and defense methods are being developed almost on a daily basis. In parallel, efforts are being made to simply detect when an input image is an adversarial example. This can help prevent potential issues, as the failure cases are easily recognizable by humans. The proposal in this work is to study how chaos theory methods can help distinguish adversarial examples from regular images. Our work is based on the assumption that deep networks behave as chaotic systems, and adversarial examples are the main manifestation of this (in the sense that a slight input variation produces a totally different output). In our experiments, we show that the Lyapunov exponents (an established measure of chaoticity), which have recently been proposed for the classification of adversarial examples, are not robust to image processing transformations that alter image entropy. Furthermore, we show that entropy can complement Lyapunov exponents in such a way that the discriminating power is significantly enhanced. The proposed method achieves 65% to 100% accuracy detecting adversarial examples under a wide range of attacks (for example CW, PGD, Spatial, HopSkip) on the MNIST dataset, with similar results when entropy-changing image processing methods (such as Equalization, Speckle and Gaussian noise) are applied. This is also corroborated on two other datasets, Fashion-MNIST and CIFAR-10. These results indicate that classifiers can enhance their robustness against the adversarial phenomenon under a wide variety of conditions that potentially match real-world cases as well as other threatening scenarios.
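As a sketch of the entropy side of such a detector, the function below computes the Shannon entropy of an image's grey-level histogram; combining it with Lyapunov-based features into a binary adversarial/clean classifier is only indicated in comments, and the names used there (e.g. lyapunov_exponents) are illustrative.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, the kind of feature proposed
    to complement Lyapunov exponents when flagging adversarial inputs."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Sketch of the combined feature vector (lyapunov_exponents stands in for the
# chaoticity estimator described in the paper; a binary classifier is then
# trained on clean vs. adversarial images):
# features = np.hstack([lyapunov_exponents(x), [shannon_entropy(x)]])
```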

5.
Data Brief ; 29: 105314, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32154349

ABSTRACT

The data presented in this article are part of the whole slide imaging (WSI) datasets generated in the European project AIDPATH. These data are also related to the research paper entitled "Glomerulosclerosis Identification in Whole Slide Images using Semantic Segmentation", published in the Computer Methods and Programs in Biomedicine journal [1]. In that article, different deep learning methods for glomeruli segmentation and their classification into normal and sclerotic glomeruli are presented and discussed. The raw data used are described and provided here. In addition, the detected glomeruli are also provided as individual image files. These data will encourage research on artificial intelligence (AI) methods, the creation and comparison of new algorithms, and the assessment of their usability in quantitative nephropathology.

6.
Sensors (Basel) ; 20(3)2020 Jan 31.
Article in English | MEDLINE | ID: mdl-32023954

ABSTRACT

An automatic "museum audio guide" is presented as a new type of audio guide for museums. The device consists of a headset equipped with a camera that captures exhibit pictures and the eyes of things computer vision device (EoT). The EoT board is capable of recognizing artworks using features from accelerated segment test (FAST) keypoints and a random forest classifier, and is able to be used for an entire day without the need to recharge the batteries. In addition, an application logic has been implemented, which allows for a special highly-efficient behavior upon recognition of the painting. Two different use case scenarios have been implemented. The main testing was performed with a piloting phase in a real world museum. Results show that the system keeps its promises regarding its main benefit, which is simplicity of use and the user's preference of the proposed system over traditional audioguides.

7.
Comput Methods Programs Biomed ; 184: 105273, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31891905

ABSTRACT

BACKGROUND AND OBJECTIVE: Glomeruli identification, i.e., detection and characterization, is a key procedure in many nephropathology studies. In this paper, semantic segmentation based on convolutional neural networks (CNN) is proposed to detect glomeruli in Whole Slide Images (WSI), followed by a classification CNN that divides the glomeruli into normal and sclerosed. METHODS: A comparison between the U-Net and SegNet CNNs is performed for pixel-level segmentation, considering both a two-class and a three-class problem, that is, a) non-glomerular and glomerular structures, and b) non-glomerular, normal glomerular and sclerotic structures. The two-class semantic segmentation result is then used for a CNN classification in which glomerular regions are divided into normal and globally sclerosed glomeruli. RESULTS: These methods were tested on a dataset composed of 47 WSIs of human kidney sections stained with Periodic Acid-Schiff (PAS). The best approach was SegNet for two-class segmentation followed by a fine-tuned AlexNet network to characterize the glomeruli. An accuracy of 98.16% was obtained with this pipeline of consecutive CNNs (SegNet-AlexNet) for segmentation and classification. CONCLUSION: The results obtained demonstrate that the sequential CNN segmentation-classification strategy achieves higher accuracy, reducing misclassified cases, and is therefore the methodology proposed for glomerulosclerosis detection.
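The classification stage can be sketched with torchvision (version ≥ 0.13 assumed for the weights API); the hyperparameters below are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune an ImageNet-pretrained AlexNet to separate normal vs. globally
# sclerosed glomeruli crops produced by the segmentation stage.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)      # two classes: normal / sclerosed

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(batch_images, batch_labels):
    """One optimization step on a batch of glomerulus crops."""
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```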


Subject(s)
Kidney Diseases/diagnosis , Kidney Glomerulus/pathology , Semantics , Datasets as Topic , Humans , Image Processing, Computer-Assisted , Kidney Diseases/pathology , Neural Networks, Computer
8.
Comput Methods Programs Biomed ; 179: 104983, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31443854

ABSTRACT

BACKGROUND AND OBJECTIVE: Digital scanners are being increasingly adopted in anatomical pathology, but there is still a lack of a standardized whole slide image (WSI) format. This translates into the need for interoperability and knowledge representation for shareable and computable clinical information. This work describes a robust solution, called Visilab Viewer, able to interact and work with any WSI based on the DICOM standard. METHODS: Visilab Viewer is a web platform developed and integrated alongside a proposed web architecture following the DICOM definition. A specific module was defined to prepare the information of the pyramid structure proposed in DICOM. The same structure is used by a second module that caches in the browser the tiles or frames adjacent to the current user's viewport, with the aim of achieving fast and fluid navigation over the tissue slide. This solution was tested and compared with three different publicly available web viewers on 10 WSIs. RESULTS: A quantitative assessment was performed based on the average load time per frame together with the number of fully loaded frames. Kruskal-Wallis and Dunn tests were used to compare each web viewer's latency results and finally to rank them. Additionally, a qualitative evaluation was done by 6 pathologists based on speed and quality of zooming, panning and usability. The proposed viewer obtained the best performance in both assessments. The entire proposed architecture was tested in the 2nd worldwide DICOM Connectathon, obtaining successful results with all participating scanner vendors. CONCLUSIONS: The online tool allows users to navigate and obtain a correct visualization of the samples, avoiding any restriction of format and location. The two strategic modules reduce the time needed to display the slide and therefore offer high fluidity and usability. The web platform manages not only visualization through the developed web viewer but also the insertion, manipulation and generation of new DICOM elements. Visilab Viewer can successfully exchange DICOM data. Connectathons are the ultimate interoperability tests and are therefore required to guarantee that solutions such as Visilab Viewer and its architecture can successfully exchange data following the DICOM standard. An accompanying demo video is available. (Link to YouTube video.)
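Reading the tiled frames of one DICOM WSI pyramid level, as the viewer's modules must do before caching them near the viewport, can be sketched with pydicom. The file name is a placeholder, and JPEG-compressed frames additionally require an installed pixel-data handler such as pylibjpeg or GDCM.

```python
import pydicom

# One pyramid level of a DICOM whole slide image (placeholder file name).
ds = pydicom.dcmread("level_0.dcm")

print("tile size:", ds.Columns, "x", ds.Rows)
print("level size:", ds.TotalPixelMatrixColumns, "x", ds.TotalPixelMatrixRows)
print("number of tiles (frames):", int(ds.NumberOfFrames))

frames = ds.pixel_array     # shape: (frames, rows, cols, 3) for RGB tiles
tile = frames[0]            # one tile, ready to be cached next to the viewport
```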


Subject(s)
Internet , Software , Telepathology/statistics & numerical data , Biopsy, Fine-Needle/statistics & numerical data , Cytological Techniques/statistics & numerical data , Humans , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Telepathology/methods
9.
IEEE Trans Image Process ; 27(10): 4787-4797, 2018 10.
Article in English | MEDLINE | ID: mdl-29994215

ABSTRACT

While action recognition has become an important line of research in computer vision, the recognition of particular events such as aggressive behaviors, or fights, has been relatively less studied. These tasks may be extremely useful in several video surveillance scenarios such as psychiatric wards, prisons, or even personal smartphone cameras. Their potential usability has led to a surge of interest in developing fight or violence detectors. One of the key aspects in this case is efficiency, that is, these methods should be computationally fast. "Handcrafted" spatiotemporal features that account for both motion and appearance information can achieve high accuracy rates, although the computational cost of extracting some of those features is still prohibitive for practical applications. The deep learning paradigm has recently been applied for the first time to this task too, in the form of a 3D Convolutional Neural Network that processes the whole video sequence as input. However, results on human perception of others' actions suggest that, in this specific task, motion features are crucial. This means that using the whole video as input may add both redundancy and noise to the learning process. In this work, we propose a hybrid "handcrafted/learned" feature framework which provides better accuracy than the previous feature learning method, with similar computational efficiency. The proposed method is evaluated on three related benchmark datasets and outperforms the different state-of-the-art methods on two of the three.
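As an illustration of the kind of cheap, motion-only handcrafted feature such a hybrid framework can build on (not the exact descriptor used in the paper), the sketch below accumulates a histogram of Farnebäck optical-flow magnitudes over a clip.

```python
import cv2
import numpy as np

def motion_histogram(video_path, bins=16):
    """Histogram of optical-flow magnitudes over a clip: a simple motion-only
    descriptor, shown for illustration of the handcrafted side of the pipeline."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    hist = np.zeros(bins)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        h, _ = np.histogram(mag, bins=bins, range=(0, 20))
        hist += h
        prev = gray
    cap.release()
    return hist / max(hist.sum(), 1)
```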


Subject(s)
Human Activities/classification , Neural Networks, Computer , Pattern Recognition, Automated/methods , Violence/classification , Humans , Image Processing, Computer-Assisted/methods , Video Recording
10.
J Biomed Opt ; 23(1): 1-14, 2018 01.
Article in English | MEDLINE | ID: mdl-29297212

ABSTRACT

We study the effectiveness of several low-cost oblique illumination filters to improve overall image quality, in comparison with standard bright field imaging. For this purpose, a dataset composed of 3360 diatom images belonging to 21 taxa was acquired. Subjective and objective image quality assessments were carried out. The subjective evaluation was performed by a group of diatom experts through a psychophysical test in which resolution, focus, and contrast were assessed. Moreover, several objective no-reference image quality metrics were applied to the same image dataset to complete the study, together with the calculation of several texture features to analyze the effect of these filters in terms of textural properties. Both image quality evaluation methods, subjective and objective, showed better results for images acquired using these illumination filters in comparison with unfiltered images. These promising results confirm that this kind of illumination filter can be a practical way to improve image quality, thanks to the simplicity and low cost of the design and manufacturing process.
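Two of the simplest no-reference measures of the kind used in such objective assessments, sharpness (variance of the Laplacian) and RMS contrast, can be sketched as follows; the file names are placeholders.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Variance of the Laplacian: a common no-reference sharpness/focus measure."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def rms_contrast(gray):
    """Root-mean-square contrast of the normalized grey-level image."""
    return (gray.astype(np.float64) / 255.0).std()

# Usage sketch: compare the same diatom field acquired with and without an
# oblique illumination filter (placeholder file names).
for name in ("brightfield.png", "oblique_filter.png"):
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    print(name, sharpness(img), rms_contrast(img))
```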


Subject(s)
Lighting/instrumentation , Lighting/methods , Microscopy/instrumentation , Microscopy/methods , Algorithms , Anisotropy , Databases, Factual , Diatoms/chemistry , Diatoms/classification , Equipment Design , Image Processing, Computer-Assisted
11.
Comput Med Imaging Graph ; 61: 14-27, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28648530

ABSTRACT

Immunohistochemical (IHC) biomarkers in breast tissue microarray (TMA) samples are used daily in pathology departments. In recent years, automatic methods to evaluate positive staining have been investigated, since they may save time and reduce errors in the diagnosis; these errors are mostly due to subjective evaluation. The aim of this work is to develop a density tool able to automatically quantify the positive brown IHC stain in breast TMAs for different biomarkers. To avoid the problem of colour variation and make the tool robust and independent of the staining process, several colour standardization methods have been analysed. Four colour standardization methods have been compared against colour model segmentation. The standardization methods have been compared by means of the NBS colour distance. The use of colour standardization helps to reduce noise due to staining and histological sample preparation. However, the most reliable and robust results have been obtained by combining the HSV and RGB colour models for segmentation with the HSB channels. The segmentation provides three outputs based on three saturation values for weak, medium and strong staining. Each output image can be combined according to the type of biomarker staining. The results with 12 biomarkers were evaluated and compared to the segmentation and density calculation performed by expert pathologists. The Hausdorff distance, sensitivity and specificity have been used to quantitatively validate the results. The tests carried out with 8000 TMA images provided an average accuracy of 95.94% applied to the total tissue cylinder area. Colour standardization was used only when the tissue core showed blurred or faded staining and the expert could not evaluate it without pre-processing.
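A minimal sketch of colour-model segmentation of the positive brown DAB stain, with a saturation split into weak/medium/strong staining, is given below. The threshold values are purely illustrative; the paper combines HSV and RGB models and tunes the saturation cut-offs per biomarker.

```python
import cv2
import numpy as np

bgr = cv2.imread("tma_core.png")                    # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# Rough brown hue band for the positive DAB stain (illustrative thresholds).
brown = cv2.inRange(hsv, (5, 30, 30), (30, 255, 230))

# Split the positive mask into weak / medium / strong staining by saturation.
sat = hsv[..., 1]
weak = cv2.bitwise_and(brown, (sat < 90).astype(np.uint8) * 255)
medium = cv2.bitwise_and(brown, ((sat >= 90) & (sat < 170)).astype(np.uint8) * 255)
strong = cv2.bitwise_and(brown, (sat >= 170).astype(np.uint8) * 255)

positive_fraction = brown.mean() / 255.0            # stained fraction of the analysed area
```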


Subject(s)
Breast Neoplasms/pathology , Color/standards , Image Processing, Computer-Assisted , Immunohistochemistry , Staining and Labeling , Female , Humans , Tissue Array Analysis
12.
Sensors (Basel) ; 17(5)2017 May 21.
Article in English | MEDLINE | ID: mdl-28531141

ABSTRACT

Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.

13.
Pathobiology ; 83(2-3): 61-9, 2016.
Article in English | MEDLINE | ID: mdl-27100343

ABSTRACT

The future paradigm of pathology will be digital. Instead of conventional microscopy, a pathologist will perform a diagnosis through interacting with images on computer screens and performing quantitative analysis. The fourth generation of virtual slide telepathology systems, so-called virtual microscopy and whole-slide imaging (WSI), has allowed for the storage and fast dissemination of image data in pathology and other biomedical areas. These novel digital imaging modalities encompass high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides. Moreover, automated image analysis with WSI can extract specific diagnostic features of diseases and quantify individual components of these features to support diagnoses and provide informative clinical measures of disease. Therefore, the challenge is to apply information technology and image analysis methods to exploit the new and emerging digital pathology technologies effectively in order to process and model all the data and information contained in WSI. The final objective is to support the complex workflow from specimen receipt to anatomic pathology report transmission, that is, to improve diagnosis both in terms of pathologists' efficiency and with new information. This article reviews the main concerns about and novel methods of digital pathology discussed at the latest workshop in the field carried out within the European project AIDPATH (Academia and Industry Collaboration for Digital Pathology).


Subject(s)
Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted , Telepathology/trends , Humans , Microscopy
14.
PLoS One ; 10(10): e0141556, 2015.
Article in English | MEDLINE | ID: mdl-26513238

ABSTRACT

Breast cancer diagnosis is still done by observation of biopsies under the microscope. The development of automated methods for breast TMA classification would reduce diagnostic time. This paper is a step towards solving this problem and presents a complete study of breast TMA classification based on colour models and texture descriptors. The TMA images were divided into four classes: i) benign stromal tissue with cellularity, ii) adipose tissue, iii) benign and benign anomalous structures, and iv) ductal and lobular carcinomas. A relevant set of features was obtained over eight different colour models from first- and second-order Haralick statistical descriptors computed on the intensity image, together with Fourier, wavelet, multiresolution Gabor, M-LBP and texton descriptors. Furthermore, four types of classification experiments were performed using six different classifiers: (1) classification per colour model individually, (2) classification by combination of colour models, (3) classification by combination of colour models and descriptors, and (4) classification by combination of colour models and descriptors with a prior feature set reduction. The best result shows an average of 99.05% accuracy and 98.34% positive predictive value. These results were obtained by means of a bagging tree classifier with a combination of six colour models and the use of 1719 non-correlated (correlation threshold of 97%) textural features based on statistical, M-LBP, Gabor and spatial texton descriptors.
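The final pipeline (correlation-based feature pruning followed by a bagging tree classifier) can be sketched with scikit-learn as below; X and y stand for the textural feature matrix and the four-class labels and are assumed to exist.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier   # default base learner is a decision tree

def drop_correlated(X, threshold=0.97):
    """Keep only features whose absolute pairwise correlation with the features
    already kept stays at or below the threshold (the 97% cut-off of the paper)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

# X: textural features per TMA image, y: the four tissue classes (assumed given).
# X_red, kept = drop_correlated(X)
# clf = BaggingClassifier(n_estimators=50).fit(X_red, y)
```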


Subject(s)
Breast Neoplasms/pathology , Carcinoma/pathology , Tissue Array Analysis/standards , Adipose Tissue/pathology , Data Interpretation, Statistical , Female , Humans , Reproducibility of Results
15.
PLoS One ; 10(7): e0133059, 2015.
Article in English | MEDLINE | ID: mdl-26197221

ABSTRACT

Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In such datasets, the positive set typically has few samples, while the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on selecting the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
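The selection step can be sketched as follows for any stage classifier exposing a decision score: the most "object-like" false positives from the current negative pool are kept for training the next cascade stage. Names and the balancing heuristic in the usage comment are illustrative.

```python
import numpy as np

def select_hard_negatives(stage_clf, negative_pool, n_keep):
    """Pick the most informative false positives: the negatives the current
    cascade stage scores highest (closest to being accepted). They feed the
    training set of the next stage."""
    scores = stage_clf.decision_function(negative_pool)   # higher = more object-like
    hard_idx = np.argsort(scores)[-n_keep:]
    return negative_pool[hard_idx]

# Usage sketch with any scikit-learn classifier exposing decision_function:
# hard_negs = select_hard_negatives(stage1, negatives, n_keep=len(positives))
# next_stage_X = np.vstack([positives, hard_negs])
```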


Subject(s)
Artificial Intelligence , Computational Biology/methods , Pattern Recognition, Automated/methods , Algorithms , Area Under Curve , Breast Neoplasms/diagnosis , Databases, Factual , Facial Recognition , False Positive Reactions , Female , Humans , Mammography/methods , Pedestrians , ROC Curve , Radiographic Image Interpretation, Computer-Assisted
16.
Stud Health Technol Inform ; 210: 756-60, 2015.
Article in English | MEDLINE | ID: mdl-25991255

ABSTRACT

Breast cancer is the most common type of cancer and the fifth leading cause of death in women over 40. Therefore, prompt diagnosis and treatment are essential. In this work a TMA Computer Aided Diagnosis (CAD) system has been implemented to provide support to pathologists in their daily work. For that purpose, the tool covers every process from TMA core image acquisition to individual core classification. The first process includes tissue core location, segmentation and rigid registration of digital microscopic images acquired at different magnifications (5x, 10x, 20x and 40x) from different devices. The classification process allows the cores to be classified using different types of color models, texture descriptors and classifiers. Finally, the cores are classified into three categories: malignant, doubtful and benign.


Subject(s)
Algorithms , Breast Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Microscopy/methods , Pattern Recognition, Automated/methods , Tissue Array Analysis/methods , Female , Humans , Reproducibility of Results , Sensitivity and Specificity , Spain , Support Vector Machine
17.
Micron ; 68: 36-46, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25259684

ABSTRACT

Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task, since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment of the utility of texture analysis for the automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen images: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour classification were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques.
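A reduced version of the texture-plus-classification pipeline, using two of the four evaluated descriptor families (GLCM statistics and an LBP histogram) with a k-nearest-neighbour classifier, can be sketched with scikit-image and scikit-learn; the parameter values are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def texture_features(gray):
    """GLCM (Haralick-style) statistics plus an LBP histogram for one pollen
    grain image; a reduced version of the feature sets compared in the paper."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    haralick = [graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.hstack([haralick, lbp_hist])

# train_images / train_taxa are assumed: grayscale pollen crops and their labels.
# X = np.vstack([texture_features(g) for g in train_images])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, train_taxa)
```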


Subject(s)
Image Processing, Computer-Assisted/methods , Microscopy/methods , Pollen/classification , Surface Properties , Automation, Laboratory/methods , Chemical Phenomena
18.
Comput Med Imaging Graph ; 42: 25-37, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25499960

ABSTRACT

Advances in digital pathology are generating huge volumes of whole slide images (WSI) and tissue microarray (TMA) images, which are providing new insights into the causes of cancer. The challenge is to extract and process all this information effectively in order to characterize the heterogeneous tissue-derived data. This study aims to identify the optimal set of features that best separates different classes in breast TMA. These classes are: stroma, adipose tissue, benign and benign anomalous structures, and ductal and lobular carcinomas. To this end, we propose an exhaustive assessment of the utility of textons and colour for the automatic classification of breast TMA. Frequential and spatial texton maps from eight different colour models were extracted and compared. Then, in a novel way, the TMA image is characterized by first- and second-order Haralick statistical descriptors obtained from the texton maps, with a total of 241 × 8 features for each original RGB image. Subsequently, a feature selection process is performed to remove redundant information and thereby reduce the dimensionality of the feature vector. Three methods were evaluated: linear discriminant analysis, correlation and sequential forward search. Finally, an extended bank of classifiers composed of six techniques was compared, but only three of them could significantly improve accuracy rates: Fisher, Bagging Trees and AdaBoost. Our results reveal that the combination of different colour models applied to spatial texton maps provides the most efficient representation of the breast TMA. Specifically, we found that the best colour model combination is Hb, Luv and SCT for all classifiers, and the classifier that performs best for all colour model combinations is AdaBoost. On a database comprising 628 TMA images, classification yields an accuracy of 98.1% and a precision of 96.2% with a total of 316 features on spatial texton maps.
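The feature-selection and classification stages can be sketched with scikit-learn (version ≥ 0.24 assumed for SequentialFeatureSelector). X stands for the 241 × 8 texton-map descriptors per image and y for the six tissue classes; the parameter values are illustrative, not those of the paper.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Sequential forward search followed by AdaBoost, echoing two of the stages
# compared in the paper.
ada = AdaBoostClassifier(n_estimators=200)
sfs = SequentialFeatureSelector(ada, n_features_to_select=316, direction="forward")

# X_sel = sfs.fit_transform(X, y)     # forward search down to 316 features
# ada.fit(X_sel, y)
# print("accuracy:", ada.score(X_sel, y))
```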


Subject(s)
Breast Neoplasms/classification , Breast Neoplasms/pathology , Colorimetry/methods , Microscopy/methods , Pattern Recognition, Automated/methods , Tissue Array Analysis/methods , Algorithms , Color , Female , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Reproducibility of Results , Sensitivity and Specificity , Terminology as Topic
19.
Microsc Res Tech ; 77(9): 697-713, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24916187

ABSTRACT

The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VM systems are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochrome camera alongside the default color camera, together with LED transmitted illumination (RGB). Monochrome cameras are the preferred acquisition method for fluorescence microscopy. The system is able to correctly digitize and compose large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. The system has been validated on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the image processing techniques included in the software and applied to the different tissue samples are also presented.


Subject(s)
Brain/pathology , Image Processing, Computer-Assisted/methods , Lung/anatomy & histology , Lung/pathology , Microscopy/methods , Prostate/pathology , Automation , Autopsy , Brain/anatomy & histology , Female , Humans , Image Processing, Computer-Assisted/instrumentation , Male , Microscopy/instrumentation , Prostate/anatomy & histology , Software
20.
IEEE J Biomed Health Inform ; 18(3): 999-1007, 2014 May.
Article in English | MEDLINE | ID: mdl-24107985

ABSTRACT

This paper describes a specific tool for automatically segmenting and archiving tissue microarray (TMA) cores in microscopy images at different magnifications. TMA technology enables researchers to extract small cylinders of tissue (core sections) from histological sections and arrange them in an array on a paraffin block so that hundreds can be analyzed simultaneously. A crucial step to improve the speed and quality of this process is the correct localization of each tissue core in the array. However, the tissue cores are usually not aligned in the microarray, the TMA cores are often incomplete, and the images are noisy and have distorted colors. We develop a robust framework to handle core sections under these conditions. The algorithms are able to detect, stitch, and archive the TMA cores at different magnifications. Once the TMA cores are segmented, they are stored in a relational database, allowing their processing in further benign-malignant classification studies. The method was shown to be reliable for handling TMA cores, thereby enabling further large-scale molecular pathology research.


Subject(s)
Biopsy/methods , Histocytological Preparation Techniques/methods , Image Processing, Computer-Assisted/methods , Microscopy/methods , Algorithms , Databases, Factual , Histocytochemistry , Humans , Neoplasms/chemistry , ROC Curve