1.
Sensors (Basel) ; 24(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38203152

ABSTRACT

This paper presents a novel approach to risk assessment that incorporates image captioning as a fundamental component to enhance the effectiveness of surveillance systems. The proposed surveillance system uses image captioning to generate descriptive captions that portray the relationships between objects, actions, and spatial elements within the observed scene, and then evaluates the risk level based on the content of these captions. After defining the risk levels to be detected by the surveillance system, we constructed a dataset of [Image-Caption-Danger Score] triplets. Our dataset offers caption data in a unique sentence format that departs from conventional caption styles; this format enables a comprehensive interpretation of surveillance scenes by considering elements such as objects, actions, and spatial context. We fine-tuned the BLIP-2 model on our dataset to generate captions, which were then interpreted with BERT to evaluate the risk level of each scene, categorizing it into stages ranging from 1 to 7. Multiple experiments provided empirical support for the effectiveness of the proposed system, which achieved accuracy rates of 92.3%, 89.8%, and 94.3% for the three risk levels of safety, hazard, and danger, respectively.
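The caption-to-risk step above can be sketched as a small pipeline. The paper uses a fine-tuned BERT to score captions; here a hypothetical keyword scorer stands in for it, and the keyword weights and the stage-to-level mapping are assumptions made only to show the shape of the pipeline.

```python
# Hypothetical keyword weights; the paper's fine-tuned BERT learns these
# associations from the [Image-Caption-Danger Score] dataset instead.
RISK_KEYWORDS = {
    "knife": 3, "fire": 3, "fighting": 2, "running": 1, "climbing": 1,
}

def caption_risk_stage(caption: str, max_stage: int = 7) -> int:
    """Map a generated caption to a risk stage in 1..max_stage."""
    score = 1  # stage 1 = safe baseline
    for word, weight in RISK_KEYWORDS.items():
        if word in caption.lower():
            score += weight
    return min(score, max_stage)

def stage_to_level(stage: int) -> str:
    """Collapse the 7 stages into the three reported levels (assumed cutoffs)."""
    if stage <= 2:
        return "safety"
    if stage <= 4:
        return "hazard"
    return "danger"

caption = "A man holding a knife is fighting near the station entrance"
stage = caption_risk_stage(caption)
print(stage, stage_to_level(stage))  # 6 danger
```

In the real system the caption itself would come from the fine-tuned BLIP-2 model; only the scoring stand-in is shown here.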

2.
Materials (Basel) ; 16(23)2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38067998

ABSTRACT

The quantification of the phase fraction is critical in materials science, bridging the gap between material composition, processing techniques, microstructure, and resultant properties. Traditional methods involving manual annotation are precise but labor-intensive and prone to human inaccuracies. We propose an automated segmentation technique for high-tensile-strength alloy steel, where the complexity of the microstructures presents considerable challenges. Our method leverages the UNet architecture, originally developed for biomedical image segmentation, and optimizes its performance via careful hyper-parameter selection and data augmentation. We employ Electron Backscatter Diffraction (EBSD) imagery for complex-phase segmentation and utilize a combined loss function to capture both the textural and structural characteristics of the microstructures. Additionally, this work is the first to examine the scalability of the model across varying magnifications and steel types, achieving high Dice scores that demonstrate the adaptability and robustness of the model.
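A combined loss of the kind described above is often built from binary cross-entropy (per-pixel, textural errors) plus a Dice term (region overlap, structural errors). The exact components and weighting used in the paper are assumptions here; this is a minimal sketch of the idea on flattened masks.

```python
import math

def dice_coefficient(pred, target, eps=1e-7):
    """Soft Dice coefficient between predicted probabilities and a binary
    mask, both given as flat lists of pixel values."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

def combined_loss(pred, target, alpha=0.5):
    """alpha * BCE + (1 - alpha) * (1 - Dice); alpha is an assumed weight."""
    bce = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, 1e-7), 1 - 1e-7)  # clip to avoid log(0)
        bce += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    bce /= len(pred)
    return alpha * bce + (1 - alpha) * (1.0 - dice_coefficient(pred, target))

# A confident, correct prediction yields a loss near zero.
mask = [1.0, 1.0, 0.0, 0.0]
good = [0.99, 0.99, 0.01, 0.01]
print(round(combined_loss(good, mask), 4))
```

Averaging the two terms lets the optimizer trade off pixel-level accuracy against overlap of the whole phase region, which is why such losses suit microstructure segmentation.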

3.
Sensors (Basel) ; 23(1)2022 Dec 26.
Article in English | MEDLINE | ID: mdl-36616837

ABSTRACT

Pork production is heavily influenced by the health and breeding of pigs. Analyzing the eating patterns of pigs helps optimize supply chain management and maintain a healthy breeding environment. Monitoring the feed intake of pigs in a barn provides information about their eating habits, behavioral patterns, and surrounding environment, which can be used for further analysis to monitor growth in pigs and eventually contribute to the quality and quantity of meat production. In this paper, we present a novel method to estimate the number of pigs taking in feed by considering the pig's posture. To solve problems arising from relying on posture alone, we propose an algorithm that matches each pig's head to the corresponding body using the major and minor axes of the pig detection box. In our experiments, we present the detection performance of the YOLOv5 model according to the anchor box and then demonstrate that the proposed method outperforms previous methods. Using the proposed method, we measure the number of pigs taking in feed over a period of 24 h and the number of times pigs consume feed per day over a period of 30 days, and observe the pigs' feed intake pattern.
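The head-body matching idea rests on a geometric cue: a pig's body box is elongated, so its major axis roughly follows the spine, and the head should sit near one end of that axis. The sketch below is hypothetical — the function names, box format, and the exact matching rule are assumptions, not the paper's algorithm — but it shows how the major axis of a detection box can drive the assignment.

```python
import math

def major_axis_endpoints(box):
    """Endpoints of the major axis of an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    if (x2 - x1) >= (y2 - y1):      # wider than tall: major axis is horizontal
        return (x1, cy), (x2, cy)
    return (cx, y1), (cx, y2)       # taller than wide: major axis is vertical

def match_heads_to_bodies(head_boxes, body_boxes):
    """Assign each head box to the body whose major-axis end is nearest."""
    matches = {}
    for h_idx, head in enumerate(head_boxes):
        hx = (head[0] + head[2]) / 2
        hy = (head[1] + head[3]) / 2
        best, best_d = None, float("inf")
        for b_idx, body in enumerate(body_boxes):
            for ex, ey in major_axis_endpoints(body):
                d = math.hypot(hx - ex, hy - ey)
                if d < best_d:
                    best, best_d = b_idx, d
        matches[h_idx] = best
    return matches

# Two horizontal pigs side by side; each head sits at the left end of a body.
bodies = [(0, 0, 100, 40), (0, 60, 100, 100)]
heads = [(0, 5, 20, 35), (0, 65, 20, 95)]
print(match_heads_to_bodies(heads, bodies))  # {0: 0, 1: 1}
```

A head matched to a body whose box overlaps the feeder region can then be counted as a feeding pig, which is the quantity the paper tracks over 24 h and 30 days.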


Subject(s)
Eating , Feeding Behavior , Swine , Animals , Meat/analysis , Animal Feed/analysis
4.
Plants (Basel) ; 10(6)2021 Jun 21.
Article in English | MEDLINE | ID: mdl-34205610

ABSTRACT

Deep learning architectures are widely used in state-of-the-art image classification tasks, and deep learning has enhanced the ability to automatically detect and classify plant diseases. However, in practice, disease classification models are treated as black boxes. Thus, it is difficult to trust that the model truly identifies the diseased region in the image; it may simply use unrelated surroundings for classification. Visualization techniques can help determine which areas are important to the model by highlighting the regions responsible for the classification. In this study, we present a methodology for visualizing coffee diseases using different visualization approaches. Our goal is to visualize aspects of a coffee disease to obtain insight into what the model "sees" as it learns to classify healthy and non-healthy images. In addition, visualization helped us identify misclassifications and led us to propose a guided approach for coffee disease classification. The guided approach achieved a classification accuracy of 98%, compared to 77% for the naïve approach, on the Robusta coffee leaf image dataset. The visualization methods considered in this study were Grad-CAM, Grad-CAM++, and Score-CAM. We also provide a visual comparison of these methods.
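Grad-CAM, the first of the three methods compared above, weights each convolutional feature map by the global average of its gradients with respect to the class score, sums the weighted maps, and applies ReLU so that only regions with a positive influence remain. A minimal sketch on toy 2x2 maps (nested lists standing in for real CNN activations):

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap. Both arguments are lists of K channels,
    each channel an HxW nested list of floats."""
    heatmap = [[0.0] * len(feature_maps[0][0]) for _ in feature_maps[0]]
    for fmap, grad in zip(feature_maps, gradients):
        cells = [g for row in grad for g in row]
        weight = sum(cells) / len(cells)   # global-average-pooled gradient
        for i, row in enumerate(fmap):
            for j, a in enumerate(row):
                heatmap[i][j] += weight * a
    # ReLU: keep only regions that push the class score up.
    return [[max(v, 0.0) for v in row] for row in heatmap]

# Channel 0 supports the class (positive gradients); channel 1 opposes it.
maps  = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(maps, grads))  # [[1.0, 0.0], [0.0, 0.0]]
```

Grad-CAM++ and Score-CAM refine how the channel weights are computed (higher-order gradient terms and forward-pass activation scores, respectively), but the weighted-sum-plus-ReLU structure is the same.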
