Results 1 - 6 of 6
1.
PLoS One ; 19(6): e0304716, 2024.
Article in English | MEDLINE | ID: mdl-38829872

ABSTRACT

Optical microscopy videos enable experts to analyze the motion of several biological elements. In particular, in blood samples infected with Trypanosoma cruzi (T. cruzi), microscopy videos reveal a dynamic scenario in which the parasites' motion is conspicuous. While parasites are self-propelled, cells are inert and may undergo some displacement under dynamic events, such as fluid flow and microscope focus adjustments. This paper analyzes the trajectories of T. cruzi and blood cells to discriminate between these elements by identifying the following motion patterns: collateral, fluctuating, and pan-tilt-zoom (PTZ). We consider two approaches: i) classification experiments to discriminate between parasites and cells; and ii) clustering experiments to identify the cell motion. We propose the trajectory step dispersion (TSD) descriptor, based on the standard deviation, to characterize these elements, outperforming state-of-the-art descriptors. Our results confirm that motion is valuable in discriminating T. cruzi from the cells. Since the parasites perform collateral motion, their trajectory steps tend toward randomness. The cells may assume fluctuating motion, following a homogeneous and directional path, or PTZ motion, with trajectory steps confined to a restricted area. Thus, our findings may contribute to the development of new computational tools focused on trajectory analysis, which can advance the study and medical diagnosis of Chagas disease.
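The abstract describes the TSD descriptor only as "based on standard deviation"; a minimal sketch of one plausible reading, assuming a trajectory given as per-frame (x, y) positions and taking the standard deviation of the step lengths (the function name and exact formula are illustrative assumptions, not the paper's implementation):

```python
import math

def trajectory_step_dispersion(points):
    """Hypothetical TSD-style descriptor: the population standard
    deviation of the displacement lengths between consecutive frames."""
    lengths = [math.dist(a, b) for a, b in zip(points, points[1:])]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((l - mean) ** 2 for l in lengths) / len(lengths))

# An erratic, parasite-like path disperses more than a steady drift.
erratic = [(0, 0), (3, 1), (2, 5), (7, 2), (1, 6)]
drift = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Under this reading, a cell carried by a uniform drift produces near-identical steps and a score near zero, while the irregular steps of a self-propelled parasite produce a high score, matching the discrimination the abstract describes.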


Subject(s)
Microscopy, Video , Trypanosoma cruzi , Trypanosoma cruzi/physiology , Microscopy, Video/methods , Chagas Disease/parasitology , Humans , Image Processing, Computer-Assisted/methods
2.
Sensors (Basel) ; 21(15)2021 Jul 27.
Article in English | MEDLINE | ID: mdl-34372319

ABSTRACT

Research on ecological environments helps to assess impacts on forests and to manage them. Novel software and hardware technologies enable the solution of tasks related to this problem. In addition, the lack of connectivity for large data throughput raises the demand for edge-computing-based solutions toward this goal. Therefore, in this work, we evaluate the opportunity of using a wearable edge AI concept in a forest environment. To this end, we propose a new approach to the hardware/software co-design process. We also address the possibility of creating wearable edge AI, where wireless personal and body area networks serve as platforms for building applications using edge AI. Finally, we evaluate a case study to test the possibility of performing an edge AI task in a wearable-based environment. Thus, in this work, we evaluate the system's ability to achieve the desired task, the hardware resources and performance, and the network latency associated with each part of the process. Through this work, we validated both the design pattern review and the case study. In the case study, the developed algorithms classified diseased leaves with roughly 90% accuracy using the proposed technique in the field. These results can be revisited in the laboratory with more modern models that reached up to 96% global accuracy. The system could also perform the desired tasks with a quality factor of 0.95, considering the use of three devices. Finally, it detected a disease epicenter with an offset of roughly 0.5 m in a 6 m × 6 m × 12 m space. These results support the use of the proposed methods in the targeted environment and the proposed changes to the co-design pattern.


Subject(s)
Algorithms , Wearable Electronic Devices , Artificial Intelligence , Equipment Design , Humans , Software
3.
Sci Data ; 8(1): 151, 2021 06 10.
Article in English | MEDLINE | ID: mdl-34112812

ABSTRACT

Amidst the current health crisis and social distancing, telemedicine has become an important part of mainstream healthcare, and building and deploying computational tools to support more efficient screening is an increasing medical priority. The early identification of cervical cancer precursor lesions by the Pap smear test can identify candidates for subsequent treatment. However, one of the main challenges is the accuracy of the conventional method, which is often subject to high rates of false negatives. While machine learning has been highlighted as a way to reduce the limitations of the test, the absence of high-quality curated datasets has prevented the development of strategies to improve cervical cancer screening. The Center for Recognition and Inspection of Cells (CRIC) platform enables the creation of the CRIC Cervix collection, currently with 400 images (1,376 × 1,020 pixels) curated from conventional Pap smears, with manual classification of 11,534 cells. This collection has the potential to advance current efforts in training and testing machine learning algorithms for the automation of tasks in the routine cytopathological analysis of laboratories.


Subject(s)
Cervix Uteri/pathology , Internet Use , Papanicolaou Test , Uterine Cervical Neoplasms/pathology , Early Detection of Cancer , Female , Humans , Machine Learning
4.
Diagn Cytopathol ; 49(4): 559-574, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33548162

ABSTRACT

BACKGROUND: Cervical cancer progresses slowly, increasing the chance of early detection of pre-neoplastic lesions via the Pap test and subsequently preventing deaths. However, the exam produces both false-negative and false-positive results. Therefore, automated methods (AMs) of reading the Pap test have been used to improve the quality control of the exam. We performed a literature review to evaluate the feasibility of implementing AMs in laboratories. METHODS: This work reviewed scientific publications on automated cytology from the last 15 years. The search terms were "Papanicolaou test" and "Automated cytology screening", in Portuguese, English, and Spanish, in three scientific databases (SCIELO, PUBMED, MEDLINE). RESULTS: Of the resulting 787 articles, 34 were selected for complete review, covering three AMs: ThinPrep Imaging System, FocalPoint GS Imaging System, and CytoProcessor. In total, 1 317 148 cytopathological slides were evaluated automatically: 1 308 028 (99.3%) liquid-based cytology slides and 9120 (0.7%) conventional cytology smears. The AMs' diagnostic performance was statistically equal to or better than that of the manual method. AM use increased the detection of cellular abnormalities and reduced false-negatives. The average sample rejection rate was ≤3.5%. CONCLUSION: AMs are relevant to quality control during the analytical phase of cervical cancer screening. This technology eliminates slide-handling steps and reduces the sample space, allowing professionals to focus on diagnostic interpretation while maintaining high-level care, which can reduce false-negatives. Further studies with conventional cytology are needed. The use of AMs is still not widespread in cytopathology laboratories.
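The reported slide counts and percentages are internally consistent; a quick arithmetic check of the split:

```python
# Totals reported in the review: liquid-based vs conventional slides.
total, lbc, conventional = 1_317_148, 1_308_028, 9_120
assert lbc + conventional == total
print(f"liquid-based: {lbc / total:.1%}, conventional: {conventional / total:.1%}")
# prints: liquid-based: 99.3%, conventional: 0.7%
```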


Subject(s)
Automation, Laboratory/methods , Papanicolaou Test/methods , Uterine Cervical Neoplasms/pathology , Automation, Laboratory/standards , Female , Humans , Papanicolaou Test/standards
5.
Comput Methods Programs Biomed ; 182: 105053, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31521047

ABSTRACT

BACKGROUND AND OBJECTIVES: Saliency refers to the visual perception quality that makes objects in a scene stand out from others and attract attention. While computational saliency models can simulate the expert's visual attention, there is little evidence about how these models perform when used to predict the cytopathologist's eye fixations. Saliency models may be the key to enabling fast object detection on large Pap smear slides under real conditions of noise, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design a computer-aided diagnosis system that supports cytopathologists. METHOD: We record eye fixation maps from cytopathologists at work and compare them with 13 different saliency prediction algorithms, including deep learning. We develop cell-specific convolutional neural networks (CNNs) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye-tracking data from pathologists with computed saliency models, we assess the algorithms' reliability in identifying clinically relevant cells. RESULTS: The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which are among the three top salient regions, with accuracy above 98% for all diseases, except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for other pathologies.
CONCLUSIONS: ROI extraction using our saliency prediction methods enabled ranking the most relevant clinical areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors for saliency prediction on cell images increase the accuracy of the estimated maps, while bottom-up algorithms proved useful for predicting the cytopathologist's eye fixations, depending on parameters such as the number of false positives and negatives. Our contributions are: a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention, and a method that associates the most conspicuous regions with clinically relevant cells.
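The abstract does not name the metric used to compare computed saliency maps with recorded eye fixations; one standard choice for this kind of comparison is normalized scanpath saliency (NSS). A minimal sketch, purely illustrative and not the paper's evaluation code:

```python
import math

def normalized_scanpath_saliency(saliency, fixations):
    """NSS: mean value of the z-normalized saliency map at fixated pixels.
    `saliency` is a 2-D list of floats; `fixations` is a list of (row, col).
    Higher values mean the map concentrates mass where the expert looked."""
    vals = [v for row in saliency for v in row]
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return sum((saliency[r][c] - mean) / std
               for r, c in fixations) / len(fixations)
```

A map that puts its mass exactly on the fixated cells scores well above zero; a uniform or misplaced map scores near or below zero, which is the behavior a fixation-agreement metric needs for ranking the 13 models.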


Subject(s)
Cervix Uteri/pathology , Deep Learning , Neural Networks, Computer , Female , Humans , Papanicolaou Test
6.
IEEE J Biomed Health Inform ; 21(2): 441-450, 2017 03.
Article in English | MEDLINE | ID: mdl-26800556

ABSTRACT

In this paper, we introduce and evaluate the systems submitted to the first Overlapping Cervical Cytology Image Segmentation Challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging 2014. This challenge was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images, which is a prerequisite for the development of the next generation of computer-aided diagnosis systems for cervical cancer. In particular, these automated systems must detect and accurately segment both the nucleus and cytoplasm of each cell, even when cells are clumped together and, hence, partially occluded. However, this remains an unsolved problem due to the poor contrast of cytoplasm boundaries, the large variation in the size and shape of cells, the presence of debris, and the large degree of cellular overlap. The challenge initially utilized a database of 16 high-resolution (×40 magnification) images of complex cellular fields of view, in which the isolated real cells were used to construct a database of 945 synthesized cervical cytology images with a varying number of cells and degrees of overlap, in order to provide full access to the segmentation ground truth. These synthetic images provided a reliable and comprehensive framework for quantitative evaluation of this segmentation problem. Results from the submitted methods demonstrate that all the methods are effective in the segmentation of clumps containing at most three cells, with overlap coefficients up to 0.3. This highlights the intrinsic difficulty of this challenge and provides motivation for significant future improvement.
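The "overlap coefficient" between clumped cells is not defined in the abstract; a common definition for two regions is the Szymkiewicz-Simpson coefficient, |A ∩ B| / min(|A|, |B|). A hedged sketch over binary masks represented as pixel sets (the challenge's exact definition may differ):

```python
def overlap_coefficient(mask_a, mask_b):
    """Overlap of two cell masks given as sets of (row, col) pixels:
    intersection size divided by the size of the smaller mask."""
    if not mask_a or not mask_b:
        return 0.0
    return len(mask_a & mask_b) / min(len(mask_a), len(mask_b))

# Two cells sharing 1 of the smaller cell's 2 pixels overlap at 0.5.
cell_a = {(0, 0), (0, 1), (1, 0), (1, 1)}
cell_b = {(1, 1), (1, 2)}
```

Under this reading, "overlap coefficients up to 0.3" would mean the smaller cell of each evaluated pair shares at most 30% of its area with a neighbor.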


Subject(s)
Algorithms , Cervix Uteri/cytology , Image Processing, Computer-Assisted/methods , Microscopy/methods , Cervix Uteri/diagnostic imaging , Female , Humans , Papanicolaou Test/methods , Uterine Cervical Neoplasms