Results 1 - 2 of 2
1.
Animals (Basel); 14(14), 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39061551

ABSTRACT

Optimizing breeding techniques and increasing the hatching rate of Andrias davidianus offspring require a thorough understanding of its parental care behaviors. However, the species' nocturnal and cave-dwelling habits pose significant challenges for direct observation. To address this problem, this study constructed a dataset of A. davidianus parental care behavior, applied object detection to this behavior for the first time, and proposed a detection model based on the YOLOv8s algorithm. First, a multi-scale feature fusion convolution (MSConv) is proposed and combined with the C2f module, significantly enhancing the model's feature extraction capability. Second, large separable kernel attention (LSKA) is introduced into the spatial pyramid pooling fast (SPPF) layer to reduce interference from the complex environment. Third, to address the low quality of captured images, Wise-IoU (WIoU) replaces CIoU as the bounding-box loss in the original YOLOv8, improving the model's robustness. Experimental results show that the model achieves 85.7% mAP50-95, surpassing the baseline YOLOv8s by 2.1%. Compared with other mainstream models, the overall performance of our model is superior, and it can effectively detect the parental care behavior of A. davidianus. Our method not only offers a reference for behavior recognition of A. davidianus and other amphibians but also provides a new strategy for the smart breeding of A. davidianus.
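The WIoU substitution described above can be illustrated with a minimal sketch. This assumes the WIoU v1 formulation (an IoU loss scaled by a distance-based focusing factor computed from the smallest enclosing box); the abstract does not state which WIoU version the authors used, and a real implementation would operate on framework tensors rather than plain Python floats.

```python
import math

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def wiou_v1_loss(pred, target):
    """Wise-IoU v1 sketch: the plain IoU loss is multiplied by an
    exponential focusing factor based on the distance between box
    centres, normalised by the smallest enclosing box. (In the original
    formulation the enclosing-box term is detached from the gradient;
    with plain floats that distinction does not apply.)"""
    l_iou = 1.0 - iou(pred, target)
    # centres of the predicted and target boxes
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tcx, tcy = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    # dimensions of the smallest box enclosing both
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r_wiou = math.exp(((pcx - tcx) ** 2 + (pcy - tcy) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * l_iou
```

For a perfectly matching prediction the focusing factor is 1 and the loss is 0; for misaligned boxes the factor exceeds 1, penalising centre offset more than the plain IoU loss would.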

2.
IEEE J Biomed Health Inform; 24(5): 1405-1412, 2020 May.
Article in English | MEDLINE | ID: mdl-31647449

ABSTRACT

Despite the potential to revolutionise disease diagnosis through data-driven classification, the clinical interpretability of ConvNets remains challenging. In this paper, a novel, clinically interpretable ConvNet architecture is proposed not only for accurate glaucoma diagnosis but also for more transparent interpretation, by highlighting the distinct regions recognised by the network. To the best of our knowledge, this is the first work to provide interpretable glaucoma diagnosis with a popular deep learning model. We propose a novel scheme, which we refer to as M-LAP, for aggregating features from different scales to improve glaucoma diagnosis performance. Moreover, by modelling the correspondence from binary diagnosis information to spatial pixels, the proposed scheme generates glaucoma activations, which bridge the gap between global semantic diagnosis and precise localization. In contrast to previous works, it can discover the distinctive local regions in fundus images that serve as evidence for clinically interpretable glaucoma diagnosis. Experimental results on the challenging ORIGA dataset show that our method outperforms state-of-the-art glaucoma diagnosis methods with the highest AUC (0.88). Remarkably, further results on optic disc segmentation (Dice score of 0.9) and local disease focus localization based on the evidence map demonstrate the effectiveness of our method for clinical interpretability.
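The idea of mapping a binary diagnosis back onto spatial pixels can be sketched in the style of class activation mapping (Zhou et al., 2016): the classifier's weights for the "glaucoma" output re-weight the final convolutional feature maps to produce a spatial evidence map. This is a generic CAM sketch, not the paper's exact M-LAP scheme; the channel grids and weights below are illustrative placeholders.

```python
def class_activation_map(feature_maps, class_weights):
    """Project a class score back onto spatial locations.

    feature_maps: list of K channels, each an HxW grid (nested lists)
                  from the last convolutional layer.
    class_weights: K classifier weights for the target class.
    Returns an HxW evidence map: the weighted sum of the channels.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * fmap[i][j]
    return cam

# Toy example: two 2x2 channels, one strongly weighted for the class.
channels = [[[1.0, 0.0], [0.0, 0.0]],
            [[0.0, 0.0], [0.0, 2.0]]]
evidence = class_activation_map(channels, [0.5, 1.0])
```

High values in the resulting map mark the regions the classifier relied on, which is the kind of localisation evidence the abstract describes for clinical interpretation.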


Subject(s)
Deep Learning , Glaucoma/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Humans , Optic Disk/diagnostic imaging , ROC Curve , Semantics