Results 1 - 6 of 6
1.
Comput Biol Med ; 170: 108055, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38295480

ABSTRACT

In the domain of medical image analysis, deep learning models are heralding a revolution, especially in detecting complex and nuanced features characteristic of diseases like tumors and cancers. However, the robustness and adaptability of these models across varied imaging conditions and magnifications remain a formidable challenge. This paper introduces the Fourier Adaptive Recognition System (FARS), a pioneering model primarily engineered to address adaptability in malarial parasite recognition. Yet, the foundational principles guiding FARS lend themselves seamlessly to broader applications, including tumor and cancer diagnostics. FARS capitalizes on the untapped potential of transitioning from bounding box labels to richer semantic segmentation labels, enabling a more refined examination of microscopy slides. With the integration of adversarial training and the Color Domain Aware Fourier Domain Adaptation (F2DA), the model ensures consistent feature extraction across diverse microscopy configurations. The further inclusion of category-dependent context attention amplifies FARS's cross-domain versatility. Evidenced by a substantial elevation in cross-magnification performance from 31.3% mAP to 55.19% mAP and a 15.68% boost in cross-domain adaptability, FARS positions itself as a significant advancement in malarial parasite recognition. Furthermore, the core methodologies of FARS can serve as a blueprint for enhancing precision in other realms of medical image analysis, especially in the complex terrains of tumor and cancer imaging. The code is available at: https://github.com/Mr-TalhaIlyas/FARS.
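The abstract does not spell out how F2DA works internally, but Fourier-based domain adaptation generally proceeds by swapping the low-frequency amplitude spectrum of a source image with that of a target-domain image while keeping the source phase. A minimal NumPy sketch of that generic operation (the `beta` band-size parameter and single-channel input are assumptions, not the paper's implementation):

```python
import numpy as np

def fourier_domain_swap(source, target, beta=0.01):
    """Replace the low-frequency amplitude of a source image with that of a
    target-domain image, keeping the source phase (generic Fourier-style
    domain adaptation)."""
    fft_src = np.fft.fft2(source, axes=(0, 1))
    fft_tgt = np.fft.fft2(target, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Swap a centred low-frequency square of side 2*b.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_tgt = np.fft.fftshift(amp_tgt, axes=(0, 1))
    h, w = source.shape[:2]
    b = max(1, int(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))

    # Recombine swapped amplitude with the original phase.
    adapted = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(adapted)
```

Training the segmentation network on such target-styled source images is what makes feature extraction consistent across microscopy configurations.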


Subjects
Microscopy, Neoplasms, Humans, Semantics, Image Processing, Computer-Assisted
2.
Front Plant Sci ; 14: 1234616, 2023.
Article in English | MEDLINE | ID: mdl-37621880

ABSTRACT

Recent developments in deep learning-based automatic weeding systems have shown promise for unmanned weed eradication. However, accurately distinguishing between crops and weeds in varying field conditions remains a challenge for these systems, as performance deteriorates when applied to new or different fields, where seemingly insignificant changes in low-level statistics open a significant gap between the training and test data distributions. In this study, we propose an approach based on unsupervised domain adaptation to improve crop-weed recognition in new, unseen fields. Our system addresses this issue by learning to ignore insignificant changes in low-level statistics that cause a decline in performance when applied to new data. The proposed network includes a segmentation module that produces segmentation maps using labeled (training field) data while simultaneously minimizing entropy using unlabeled (test field) data, and a discriminator module that maximizes the confusion between features extracted from the training and test farm samples. This module uses adversarial optimization to make the segmentation network invariant to changes in the field environment. We evaluated the proposed approach on four different unseen (test) fields and found consistent improvements in performance. These results suggest that the proposed approach can effectively handle changes in new field environments during real field inference.
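The entropy-minimization term the abstract mentions can be written directly from softmax outputs; minimizing the mean per-pixel entropy on unlabeled target-field images pushes the network toward confident predictions. A minimal sketch, assuming per-image logits of shape (classes, height, width):

```python
import numpy as np

def entropy_loss(logits):
    """Mean per-pixel Shannon entropy of the softmax class probabilities.
    Low values mean confident predictions; minimizing this on unlabeled
    target-domain images is the entropy term described in the abstract."""
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=0)   # (H, W) entropy map
    return ent.mean()
```

Uniform predictions give the maximum value log(C) for C classes, while near one-hot predictions give a value near zero.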

3.
Front Plant Sci ; 13: 983625, 2022.
Article in English | MEDLINE | ID: mdl-36275542

ABSTRACT

The emergence of deep neural networks has allowed the development of fully automated and efficient diagnostic systems for plant disease and pest phenotyping. Although previous approaches have proven promising, they remain limited, especially in real-life scenarios, in their ability to properly diagnose and characterize the problem. In this work, we propose a framework which, besides recognizing and localizing various plant abnormalities, also informs the user about the severity of the diseases infecting the plant. By taking a single image as input, our algorithm is able to generate detailed descriptive phrases (user-defined) that display the location, severity stage, and visual attributes of all the abnormalities present in the image. Our framework is composed of three main components. The first is a detector that accurately and efficiently recognizes and localizes the abnormalities in plants by extracting region-based anomaly features using a deep neural network-based feature extractor. The second is an encoder-decoder network that performs pixel-level analysis to generate abnormality-specific severity levels. The last is an integration unit which aggregates the information from these units and assigns unique IDs to all the detected anomaly instances, thus generating descriptive sentences describing the location, severity, and class of the anomalies infecting the plant. We discuss two possible ways of utilizing the abovementioned units in a single framework. We evaluate and analyze the efficacy of both approaches on newly constructed, diverse paprika disease and pest recognition datasets comprising six anomaly categories along with 11 different severity levels. Our algorithm achieves a mean average precision of 91.7% for the abnormality detection task and a mean panoptic quality score of 70.78% for severity level prediction. Our algorithm provides a practical and cost-efficient solution to farmers that facilitates proper handling of crops.
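The integration unit's behaviour can be illustrated with a toy aggregation step that pairs detections with severities and emits ID-tagged phrases. Every interface name here (`detections`, `severity_of`) is hypothetical, not the paper's API:

```python
def describe_anomalies(detections, severity_of):
    """Toy integration step: assign each detected anomaly a unique instance
    ID, look up its pixel-level severity, and emit a descriptive phrase.
    `detections` is a list of (class_name, (x, y, w, h)) pairs and
    `severity_of` maps an instance ID to a severity label."""
    phrases = []
    for inst_id, (cls, box) in enumerate(detections, start=1):
        x, y, w, h = box
        sev = severity_of(inst_id)
        phrases.append(f"[{inst_id}] {cls} ({sev}) at region ({x}, {y}, {w}, {h})")
    return phrases
```

In the actual framework the detector and the encoder-decoder network would supply these inputs; the sketch only shows the aggregation logic.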

4.
Sci Rep ; 12(1): 8672, 2022 05 23.
Article in English | MEDLINE | ID: mdl-35606487

ABSTRACT

Fine segmentation labelling tasks are time consuming and typically require a great deal of manual labor. This paper presents a novel method for efficiently creating pixel-level fine segmentation labels that significantly reduces the amount of necessary human labor. The proposed method utilizes easily produced, multiple and complementary coarse labels to build a complete fine label via supervised learning. The primary label among the coarse labels is the manual label, which is produced with simple contours or bounding boxes that roughly encompass an object. All other coarse labels are complementary and are generated automatically using existing algorithms. Fine labels can be rapidly created during the supervised learning of such coarse labels. In the experimental study, the proposed technique achieved a fine-label IoU (intersection over union) of 92% in segmenting our newly constructed bean field dataset. The proposed method also achieved 95% and 92% mean IoU when tested on the publicly available agricultural CVPPP and CWFID datasets, respectively. Our proposed method also achieved a mean IoU of 81% when tested on our newly constructed paprika disease dataset, which includes multiple categories.
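The reported numbers all use mean intersection over union between predicted and ground-truth label maps; a minimal sketch of that standard metric for integer label maps:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes, skipping classes
    absent from both the prediction and the ground truth."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```

A perfect match scores 1.0 (the paper's 92% corresponds to 0.92 on this scale).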


Subjects
Biological Phenomena, Image Processing, Computer-Assisted, Algorithms, Humans, Image Processing, Computer-Assisted/methods
5.
Neural Netw ; 151: 1-15, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35367734

ABSTRACT

Nuclei segmentation and classification of hematoxylin and eosin-stained histology images is a challenging task due to a variety of issues, such as color inconsistency resulting from non-uniform manual staining operations, clustering of nuclei, and blurry and overlapping nuclei boundaries. Existing approaches involve segmenting nuclei by drawing their polygon representations or by measuring the distances between nuclei centroids. In contrast, we leverage the fact that the morphological features (appearance, shape, and texture) of nuclei in a tissue vary greatly depending upon the tissue type. We exploit this information by extracting tissue-specific (TS) features from raw histopathology images using the proposed tissue-specific feature distillation (TSFD) backbone. The bi-directional feature pyramid network (BiFPN) within TSFD-Net generates a robust hierarchical feature pyramid utilizing TS features, and the interlinked decoders jointly optimize and fuse these features to generate final predictions. We also propose a novel combinational loss function for joint optimization and faster convergence of our proposed network. Extensive ablation studies are performed to validate the effectiveness of each component of TSFD-Net. The proposed network outperforms state-of-the-art networks such as StarDist, Micro-Net, Mask-RCNN, Hover-Net, and CPP-Net on the PanNuke dataset, which contains 19 different tissue types and 5 clinically important tumor classes, achieving 50.4% and 63.77% mean and binary panoptic quality, respectively. The code is available at: https://github.com/Mr-TalhaIlyas/TSFD.
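The abstract names a combinational loss without giving its terms. A common combination for nuclei segmentation is cross-entropy plus a Dice term, sketched here purely as an illustration; the weighting `alpha` and the choice of terms are assumptions, not the TSFD-Net loss itself:

```python
import numpy as np

def combined_seg_loss(probs, onehot, alpha=0.5, eps=1e-7):
    """Illustrative joint segmentation loss: weighted sum of pixel-wise
    cross-entropy and (1 - Dice coefficient). probs and onehot both have
    shape (C, H, W); probs are softmax outputs."""
    ce = -(onehot * np.log(probs + eps)).sum(axis=0).mean()
    inter = (probs * onehot).sum()
    dice = (2 * inter + eps) / (probs.sum() + onehot.sum() + eps)
    return alpha * ce + (1 - alpha) * (1 - dice)
```

Combining a pixel-wise term with a region-overlap term is a standard way to stabilize convergence on small, clustered objects such as nuclei.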


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Cell Nucleus, Cluster Analysis, Distillation, Image Processing, Computer-Assisted/methods
6.
Front Plant Sci ; 12: 591333, 2021.
Article in English | MEDLINE | ID: mdl-33692814

ABSTRACT

Autonomous harvesters can be used for the timely cultivation of high-value crops such as strawberries, where the robots have the capability to identify ripe and unripe crops. However, the real-time segmentation of strawberries in an unbridled farming environment is a challenging task due to fruit occlusion by multiple trusses, stems, and leaves. In this work, we propose a possible solution by constructing a dynamic feature selection mechanism for convolutional neural networks (CNNs). The proposed building block, namely a dense attention module (DAM), controls the flow of information between the convolutional encoder and decoder. DAM enables hierarchical adaptive feature fusion by exploiting both inter-channel and intra-channel relationships and can be easily integrated into any existing CNN to obtain category-specific feature maps. We validate our attention module through extensive ablation experiments. In addition, a dataset is collected from different strawberry farms and divided into five classes: four corresponding to different fruit maturity levels and one devoted to the background. Quantitative analysis of the proposed method showed a 4.1% and 2.32% increase in mean intersection over union over existing state-of-the-art semantic segmentation models and other attention modules, respectively, while retaining a processing speed of 53 frames per second.
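DAM itself is not specified in the abstract; its inter-channel reweighting resembles squeeze-and-excitation-style gating, which can be sketched as follows. This is a stand-in, not the paper's module, and it omits the intra-channel (spatial) relations DAM also models:

```python
import numpy as np

def channel_attention(features):
    """SE-style inter-channel reweighting: global-average-pool each channel,
    pass the result through a sigmoid gate, and rescale the channels.
    features has shape (C, H, W)."""
    c = features.shape[0]
    squeeze = features.reshape(c, -1).mean(axis=1)   # per-channel descriptor
    weights = 1.0 / (1.0 + np.exp(-squeeze))         # sigmoid gate in (0, 1)
    return features * weights[:, None, None]
```

Placed between encoder and decoder, such a gate lets the network emphasize channels that respond to the fruit class currently being segmented.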
