Results 1 - 9 of 9
1.
Entropy (Basel) ; 26(2), 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38392420

ABSTRACT

Immunohistochemistry is a powerful technique that is widely used in biomedical research and clinics; it allows the expression levels of proteins of interest in tissue samples to be determined from the color intensity produced when specific antibodies bind to their target biomarkers. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a novel method, including a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, this method was highly dependent on the parameters that control sparseness and non-negativity, as well as on algorithm initialization. Furthermore, the previously proposed method required a reference image as a starting point for the NMF algorithm. In the present work, we propose a new, simpler and more robust method for the automated, unsupervised scoring of immunohistochemical brightfield images. Our work is focused on images from tumor tissues marked with blue (nuclei) and brown (protein of interest) stains. The newly proposed method is simpler: on the one hand, it avoids the use of NMF in the separation stage and, on the other hand, it circumvents the need for a control image. This new approach determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. This subspace is two-dimensional, allowing the color vectors to be determined from the point density peaks. A new scoring stage, which again avoids reference images, is also developed, making the procedure more robust and less dependent on parameters. Semi-quantitative image scoring experiments using five categories exhibit promising and consistent results when compared to manual scoring carried out by experts.
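A minimal sketch of the separation stage described above, assuming pixels are converted to optical density and projected onto the two leading principal components; the density-peak search for the stain vectors is only indicated. All function and variable names are illustrative, not taken from the paper.

```python
# Sketch: PCA-based 2D color subspace for a blue/brown stained brightfield image.
import numpy as np
from sklearn.decomposition import PCA

def od_subspace(rgb_image, eps=1e-6):
    """Project image pixels onto the 2D subspace spanned by the two stains."""
    rgb = rgb_image.reshape(-1, 3).astype(float) / 255.0
    od = -np.log10(np.clip(rgb, eps, 1.0))   # optical density (Beer-Lambert assumption)
    pca = PCA(n_components=2)
    coords = pca.fit_transform(od)           # 2D coordinates for every pixel
    # The stain color vectors would then be sought at the density peaks of `coords`.
    return coords, pca.components_
```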

2.
Entropy (Basel) ; 24(4), 2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35455209

ABSTRACT

In many research laboratories, it is essential to determine the relative expression levels of some proteins of interest in tissue samples. The semi-quantitative scoring of a set of images consists of establishing a scale of scores, ranging from zero or one to a maximum set by the researcher, and assigning a score to each image that represents some predefined characteristic of the IHC staining, such as its intensity. However, manual scoring depends on the judgment of an observer and therefore exposes the assessment to a certain level of bias. In this work, we present a fully automatic and unsupervised method for comparative biomarker quantification in histopathological brightfield images. The method relies on a color separation method that robustly discriminates between two chromogens, expressed as brown and blue colors, independently of color variation or biomarker expression level. For this purpose, we have adopted a two-stage stain separation approach in the optical density space. First, a preliminary separation is performed using a deconvolution method in which the color vectors of the stains are determined by an eigendecomposition of the data. Then, we refine the separation using the non-negative matrix factorization method with beta divergences, initializing the algorithm with the matrices resulting from the previous step. After that, a feature vector for each image, based on the intensity of the two chromogens, is determined. Finally, the images are annotated using a systematically initialized k-means clustering algorithm with beta divergences. The method clearly defines the initial boundaries of the categories, although some flexibility is added. Experiments for the semi-quantitative scoring of images in five categories have been carried out by comparing the results with the scores of four expert researchers, yielding accuracies ranging between 76.60% and 94.58%. These results show that the proposed automatic scoring system, which is definable and reproducible, produces consistent results.
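A minimal sketch of the NMF refinement step described above, assuming the preliminary deconvolution has already produced non-negative matrices W0 (per-pixel stain concentrations) and H0 (stain color vectors in optical-density space); the particular beta divergence chosen here is an assumption.

```python
# Sketch: refine a preliminary stain separation with beta-divergence NMF.
import numpy as np
from sklearn.decomposition import NMF

def refine_separation(od_pixels, W0, H0):
    """od_pixels: (n_pixels, 3) non-negative optical densities.
    W0: (n_pixels, 2) initial concentrations, H0: (2, 3) initial stain vectors."""
    model = NMF(n_components=2, init='custom', solver='mu',
                beta_loss='kullback-leibler',  # beta = 1; other beta_loss values also work
                max_iter=500)
    W = model.fit_transform(od_pixels, W=W0, H=H0)  # refined per-pixel stain amounts
    return W, model.components_                     # refined stain color vectors
```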

3.
Entropy (Basel) ; 21(2), 2019 Feb 19.
Article in English | MEDLINE | ID: mdl-33266911

ABSTRACT

Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure in use. In recent years, most studies have focused on including several divergence measures in the traditional hard k-means algorithm. In this article, we consider the problem of centroid-based clustering using the family of αβ-divergences, which is governed by two parameters, α and β. We propose a new iterative algorithm, αβ-k-means, giving closed-form solutions for the computation of the sided centroids. The algorithm can be fine-tuned by means of this pair of values, yielding a wide range of the most frequently used divergences. Moreover, it is guaranteed to converge to local minima for a wide range of values of the pair (α, β). Our theoretical contribution has been validated by several experiments performed with synthetic and real data, exploring the (α, β) plane. The numerical results obtained confirm the quality of the algorithm and its suitability for use in several practical applications.
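A minimal sketch of centroid-based clustering with an αβ-divergence, restricted to α, β > 0 and strictly positive data. The α-power-mean centroid update is one known closed form for the right-sided centroid under this common parametrization of the divergence; the sketch is illustrative only and is not the authors' exact αβ-k-means algorithm.

```python
# Sketch: hard clustering with an alpha-beta divergence (alpha, beta > 0, data > 0).
import numpy as np

def ab_divergence(p, q, a, b):
    """Alpha-beta divergence D(p || q) for positive vectors p, q and a, b > 0."""
    return -np.sum(p**a * q**b - a/(a+b) * p**(a+b) - b/(a+b) * q**(a+b)) / (a * b)

def ab_kmeans(X, k, a=1.0, b=1.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]            # initial centroids
    for _ in range(n_iter):
        # Assignment step: nearest centroid under the right-sided divergence.
        labels = np.array([np.argmin([ab_divergence(x, c, a, b) for c in C])
                           for x in X])
        # Update step: alpha-power mean of the members of each cluster
        # (empty clusters are not handled in this sketch).
        C = np.array([np.mean(X[labels == j] ** a, axis=0) ** (1 / a)
                      for j in range(k)])
    return labels, C
```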

4.
Comput Biol Med ; 96: 41-51, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29544146

ABSTRACT

Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. The system outputs one of four malignancy levels: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures, considering both local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as the input to a Support Vector Machine (SVM) classifier with a quadratic kernel. The method has been rigorously evaluated: first with 5-fold cross-validation on an initial set of 120 images, second with an external set of 30 different images, and third with images containing artefacts. Accuracy is 75.8% under 5-fold cross-validation, 75% on the external set of new images, and 61.11% when the extremely difficult images are added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer.
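A minimal sketch of the classification and evaluation stage described above: a quadratic-kernel SVM with 5-fold cross-validation. The nuclei, colour-region and texture feature extraction is assumed to be done elsewhere; feature scaling is added here as a common default and is not stated in the abstract.

```python
# Sketch: quadratic-kernel SVM over precomputed histology features, 5-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def evaluate(features, labels):
    """features: (n_images, n_features); labels: 0=normal, 1=benign, 2=in situ, 3=invasive."""
    clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=2))
    scores = cross_val_score(clf, features, labels, cv=5, scoring='accuracy')
    return scores.mean()
```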


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Histocytochemistry/methods , Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology , Female , Humans , Pattern Recognition, Automated , Support Vector Machine
5.
Med Biol Eng Comput ; 55(11): 1959-1974, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28353133

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Recognition of the severity level is required by ophthalmologists to detect and diagnose DR early. However, this is a challenging task for both medical experts and computer-aided diagnosis systems because it requires extensive domain knowledge. In this article, a novel automatic recognition system for the five severity levels of diabetic retinopathy (SLDR) is developed, without any pre- or post-processing of the retinal fundus images, through the learning of deep visual features (DVFs). These features are extracted from each image using color-dense scale-invariant and gradient location-orientation histogram techniques. To learn the DVF features, a semi-supervised multilayer deep-learning algorithm is utilized along with a new compressed layer and fine-tuning steps. The SLDR system was evaluated and compared with state-of-the-art techniques using sensitivity (SE), specificity (SP) and the area under the receiver operating characteristic curve (AUC). On 750 fundus images (150 per category), average values of 92.18% SE, 94.50% SP and 0.924 AUC were obtained. These results demonstrate that the SLDR system is appropriate for the early detection of DR and can support its effective treatment.
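A minimal sketch of the evaluation protocol described above, computing one-vs-rest sensitivity, specificity and AUC for the five severity levels; the deep-visual-feature pipeline itself is not reproduced, and the array layout is an assumption.

```python
# Sketch: per-class SE, SP and AUC for a five-level severity task (one-vs-rest).
import numpy as np
from sklearn.metrics import roc_auc_score

def per_class_metrics(y_true, y_pred, y_score, n_classes=5):
    """y_true, y_pred: (n_samples,) integer labels; y_score: (n_samples, n_classes) scores."""
    out = []
    for c in range(n_classes):
        pos, neg = (y_true == c), (y_true != c)
        se = np.mean(y_pred[pos] == c)                       # sensitivity for class c
        sp = np.mean(y_pred[neg] != c)                       # specificity for class c
        auc = roc_auc_score(pos.astype(int), y_score[:, c])  # one-vs-rest AUC
        out.append((se, sp, auc))
    return out
```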


Subject(s)
Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/pathology , Pattern Recognition, Automated/methods , Algorithms , Diagnosis, Computer-Assisted/methods , Fundus Oculi , Humans , Image Interpretation, Computer-Assisted/methods , Sensitivity and Specificity
6.
IEEE Trans Med Imaging ; 32(6): 1111-20, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23542950

ABSTRACT

In this paper, a psychophysical experiment and a multidimensional scaling (MDS) analysis are conducted to determine the physical characteristics that physicians employ to diagnose burn depth. Subsequently, these physical characteristics are translated into correlated mathematical features. Finally, a study is performed to verify the ability of these mathematical features to classify burns. In this study, a space with axes correlated with the MDS axes has been developed. Seventy-four images have been represented in this space and classified with a k-nearest neighbor classifier. A success rate of 66.2% was obtained when classifying burns into three burn depths, and a success rate of 83.8% was obtained when burns were classified into those that needed grafts and those that did not. Additional studies have been performed comparing our system with a principal component analysis and a support vector machine classifier. The results validate the ability of the mathematical features extracted from the psychophysical experiment to classify burns by depth. In addition, the method has been compared with another state-of-the-art method on the same database.
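A minimal sketch of the two analysis components described above: an MDS embedding of the pairwise dissimilarity judgments from the psychophysical experiment, and a k-nearest-neighbor classifier over features aligned with the MDS axes. The number of dimensions, the value of k and the function names are assumptions.

```python
# Sketch: MDS embedding of expert dissimilarity judgments + k-NN burn classification.
from sklearn.manifold import MDS
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def mds_embedding(dissimilarity_matrix, n_dims=3):
    """dissimilarity_matrix: (n_images, n_images) pairwise judgments from the experiment."""
    mds = MDS(n_components=n_dims, dissimilarity='precomputed', random_state=0)
    return mds.fit_transform(dissimilarity_matrix)

def knn_depth_accuracy(features, depth_labels, k=5):
    """features: (n_images, n_features) mathematical features correlated with MDS axes."""
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, features, depth_labels, cv=5).mean()
```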


Subject(s)
Burns/diagnosis , Burns/pathology , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Psychophysics/methods , Color , Humans , Principal Component Analysis , Skin/anatomy & histology , Skin/pathology , Support Vector Machine
7.
Skin Res Technol ; 18(3): 278-89, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22093020

ABSTRACT

BACKGROUND: Computer-aided pattern classification of melanoma and other pigmented skin lesions is one of the most important tasks for clinical diagnosis. To differentiate between benign and malignant lesions, the extraction of color, architectural order, symmetry of pattern and homogeneity (CASH) is a challenging task. METHODS: In this article, a novel pattern classification system (PCS) based on the clinical CASH rule is presented to classify among six pattern classes. The PCS system consists of the following five steps: transformation to the CIE L*a*b* color space, pre-processing to enhance the tumor region and remove hairs, tumor-area segmentation, color and texture feature extraction, and finally, classification with a multiclass support vector machine. RESULTS: The PCS system is tested on a total of 180 dermoscopic images. To assess the performance of the PCS diagnostic classifier, the area under the receiver operating characteristic curve (AUC) is utilized. The proposed classifier achieved a sensitivity of 91.64%, a specificity of 94.14%, and an AUC of 0.948. CONCLUSION: The experimental results demonstrate that the proposed pattern classifier is highly accurate and can, to some extent, distinguish between benign and malignant lesions. The PCS method is fully automatic and can accurately detect different patterns in dermoscopy images using color and texture properties. Additional pattern features can be included to investigate their impact on pattern classification based on the CASH rule.
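A minimal sketch of two of the five PCS steps described above: conversion to the CIE L*a*b* color space with simple color statistics over the segmented tumor region, and a multiclass SVM. The kernel choice and the particular statistics are assumptions; hair removal, segmentation and texture features are not reproduced here.

```python
# Sketch: CIE L*a*b* color features over a lesion mask, fed to a multiclass SVM.
import numpy as np
from skimage.color import rgb2lab
from sklearn.svm import SVC

def lab_color_features(rgb_image, mask):
    """Mean and std of L*, a*, b* inside the segmented tumor region (boolean mask)."""
    lab = rgb2lab(rgb_image)
    region = lab[mask]                                   # (n_pixels, 3)
    return np.concatenate([region.mean(axis=0), region.std(axis=0)])

# Multiclass SVM over the six CASH pattern classes (one-vs-one decision scheme).
pattern_classifier = SVC(kernel='rbf', C=1.0, decision_function_shape='ovo')
```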


Subject(s)
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Skin Neoplasms/pathology , Adult , Aged, 80 and over , Artificial Intelligence , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
8.
Burns ; 37(7): 1233-40, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21703768

ABSTRACT

In this paper, a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of the patient, adapted to age, weight, gender and constitution, is created. On this 3D model, physicians mark both the burns and their depth, allowing the burnt surface area to be calculated automatically by the system. Each patient's model, together with photographs and burn area estimates, can be stored, so these data can be included in the patient's clinical records for further review. The system was validated in two experiments. In the first experiment, artificial paper patches of known size were attached to different parts of the body in 37 volunteers. A panel of 5 experts estimated the extent of the patches using the Rule of Nines, and our system estimated the area of the "artificial burn". Student's t-test was applied to the collected data to test the null hypothesis of no difference between the methods. In addition, the intraclass correlation coefficient (ICC) was calculated, and a value of 0.9918 was obtained, demonstrating that the reliability of the program in calculating the area is 99%. In the second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines. The two measuring methods were compared via Student's t-test and the ICC. The hypothesis of no difference between the two measures holds only for deep dermal burns, and the ICC is significantly different, indicating that area estimation based on classical techniques can result in a wrong assessment of the burnt surface.
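A minimal sketch of the statistical validation described above: a paired Student's t-test between the two measurement methods and an intraclass correlation coefficient. ICC(2,1) (two-way random effects, single measure) is used here as an assumption, since the abstract does not state which ICC variant was applied.

```python
# Sketch: paired t-test plus ICC(2,1) agreement between two area-measurement methods.
import numpy as np
from scipy.stats import ttest_rel

def paired_ttest(bai_areas, rule_of_nines_areas):
    """H0: no mean difference between the BAI system and the Rule of Nines."""
    return ttest_rel(bai_areas, rule_of_nines_areas)

def icc_2_1(ratings):
    """ratings: (n_subjects, k_methods) matrix of area estimates."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_c = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)   # between methods
    ss_e = np.sum((ratings - grand) ** 2) - ms_r * (n - 1) - ms_c * (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))                                  # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```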


Subject(s)
Body Surface Area , Burns/pathology , Computer Graphics , Imaging, Three-Dimensional/methods , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Humans , Infant , Male , Middle Aged , Young Adult
9.
Comput Methods Programs Biomed ; 104(3): e1-15, 2011 Dec.
Article in English | MEDLINE | ID: mdl-20663582

ABSTRACT

Dermoscopy is a helpful tool for dermatologists in the analysis of skin cancer, and it can increase the sensitivity and specificity of the classification of melanoma and of carcinomas such as basal cell, squamous cell and Merkel cell carcinoma. Automated border detection is an important step for the correctness of subsequent phases in computerized melanoma recognition systems. Dermoscopic images also contain artifacts such as dermoscopy gel, specular reflections and outlines (skin lines, blood vessels, and hair or ruler markings). In this paper, we present an unsupervised approach for multiple lesion segmentation, comprising a modification of region-based active contours (RACs) as well as artifact reduction steps. Iterative thresholding is applied to initialize the level set automatically; the stability of the curves is enforced by maximum smoothing constraints based on the Courant-Friedrichs-Lewy (CFL) condition. The work has been tested on a dermoscopic database of 320 images. The border detection error is quantified by five distinct statistical metrics, using borders manually determined by a dermatologist as the ground truth. The segmentation results were compared with other state-of-the-art methods under the same evaluation criteria. The unsupervised border detection system increased the true detection rate (TDR) by 4.31% and reduced the false positive rate (FPR) by 5.28%.
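A minimal sketch of the segmentation idea described above: the level set is initialized automatically by iterative (isodata) thresholding and then evolved with a region-based active contour. morphological_chan_vese is used here only as a readily available region-based model; it is not the authors' modified RAC with CFL-based smoothing constraints.

```python
# Sketch: threshold-initialized region-based active contour for lesion segmentation.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_isodata
from skimage.segmentation import morphological_chan_vese

def segment_lesion(rgb_image, n_iter=200):
    gray = rgb2gray(rgb_image)
    init = gray < threshold_isodata(gray)   # iterative threshold; lesions assumed darker
    return morphological_chan_vese(gray, n_iter, init_level_set=init, smoothing=3)
```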


Subject(s)
Carcinoma, Basal Cell/diagnosis , Carcinoma, Merkel Cell/diagnosis , Carcinoma, Squamous Cell/diagnosis , Melanoma/diagnosis , Skin Neoplasms/diagnosis , Carcinoma, Basal Cell/pathology , Carcinoma, Merkel Cell/pathology , Carcinoma, Squamous Cell/pathology , Humans , Melanoma/pathology , Sensitivity and Specificity , Skin Neoplasms/pathology