1.
Sci Rep; 14(1): 16401, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013897

ABSTRACT

Lameness affects animal mobility, causing pain and discomfort. Early-stage lameness often goes undetected owing to a lack of observation, precision, and reliability in assessment. Automated, non-invasive systems offer precise and easy detection and may improve animal welfare. This study was conducted to create a repository of images and videos of sows with different locomotion scores. Our goal is to develop a computer vision model that automatically identifies specific points on the sow's body. The automatic identification and tracking of specific body areas will allow us to conduct kinematic studies aimed at facilitating lameness detection using deep learning. The video database was collected on a pig farm, in a scenario built to allow filming of sows in locomotion with different lameness scores. Two stereo cameras were used to record 2D video images. Thirteen locomotion experts assessed the videos using the Locomotion Score System developed by Zinpro Corporation. From this annotated repository, computational models were trained and tested using SLEAP (Social LEAP Estimates Animal Poses), an open-source deep-learning-based animal pose tracking framework. The top-performing models were built on the LEAP architecture and accurately tracked 6 (lateral view) and 10 (dorsal view) skeleton keypoints. The architecture achieved average precision values of 0.90 and 0.72, average distances of 6.83 and 11.37 pixels, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. These computational models are proposed as a Precision Livestock Farming tool and method for identifying and estimating pig postures automatically and objectively. The 2D video image repository covering different pig locomotion scores can be used as a tool for teaching and research. Based on our skeleton keypoint classification results, an automatic system could be developed, contributing to the objective assessment of locomotion scores in sows and improving their welfare.
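For readers who want to reproduce this kind of pipeline, the sketch below shows how a trained SLEAP model might be applied to new footage to extract keypoint trajectories for kinematic analysis. It assumes SLEAP's high-level Python API; the video file and model directory names are hypothetical, not from the study.

# Minimal sketch (not the authors' code): run a trained SLEAP model on new
# sow footage and derive a simple kinematic feature from the keypoints.
# "sow_walk.mp4" and "models/sow_lateral" are hypothetical paths.
import numpy as np
import sleap

video = sleap.load_video("sow_walk.mp4")             # 2D video of a walking sow
predictor = sleap.load_model("models/sow_lateral")   # LEAP model, lateral view
labels = predictor.predict(video)                    # per-frame predicted instances

# Convert to a (frames, tracks, nodes, xy) array; the lateral-view skeleton
# described above has 6 nodes.
tracks = labels.numpy(untracked=True)

# Example kinematic feature: frame-to-frame displacement of one keypoint,
# a crude speed proxy that could feed a downstream lameness classifier.
kp = tracks[:, 0, 0, :]                              # first instance, first node
speed = np.linalg.norm(np.diff(kp, axis=0), axis=1)  # pixels per frame
print(f"mean keypoint displacement: {np.nanmean(speed):.2f} px/frame")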


Subject(s)
Deep Learning; Locomotion; Video Recording; Animals; Locomotion/physiology; Swine; Video Recording/methods; Female; Lameness, Animal/diagnosis; Lameness, Animal/physiopathology; Biomechanical Phenomena; Swine Diseases/diagnosis; Swine Diseases/physiopathology
2.
Animals (Basel); 14(13), 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38998108

ABSTRACT

Infrared thermography has been investigated in recent studies as a way to monitor body surface temperature and correlate it with animal welfare and performance factors. In this context, this study proposes using the thermal signature method as a feature extractor on the temperature matrices obtained from regions of the body surface of laying hens (face, eye, wattle, comb, leg, and foot), enabling the construction of a computational model for classifying heat stress levels. In an experiment conducted in climate-controlled chambers, 192 laying hens, 34 weeks old and from two strains (Dekalb White and Dekalb Brown), were divided into groups and housed under heat stress (35 °C and 60% humidity) or thermal comfort (26 °C and 60% humidity) conditions. Each week, individual thermal images of the hens were collected with a thermographic camera, along with their respective rectal temperatures. Surface temperatures of the six featherless body areas were cropped from the images. Rectal temperature was used to label each infrared thermography record as "Danger" or "Normal", and five classifier models (Random Forest, Random Tree, Multilayer Perceptron, K-Nearest Neighbors, and Logistic Regression) for the rectal temperature class were trained on the respective thermal signatures. No differences between the strains were observed in the thermal signature of surface temperature or in rectal temperature. The results showed that both rectal temperature and the thermal signature express heat stress and comfort conditions. The Random Forest model for the face area achieved the highest performance (89.0%). For the wattle area, a Random Forest model also performed well (88.3%), indicating the significance of this area in strains where it is more developed. These findings validate the method of extracting features from infrared thermography. Combined with machine learning, the method has proven promising for building classifiers of heat stress levels in laying hen production environments.
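The paper's exact feature definition is its own, but one plausible reading of a "thermal signature" is a fixed-length statistical summary of a region's temperature matrix. The sketch below illustrates that reading with one of the five classifiers named above (Random Forest); the arrays are synthetic stand-ins, and the percentile features and labeling threshold are assumptions, not the authors' method.

# Hedged sketch of the classification pipeline. The percentile summary and
# the 37.5 °C threshold are illustrative assumptions; the study labeled
# records from measured rectal temperature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def thermal_signature(temps: np.ndarray) -> np.ndarray:
    # Summarize a region's temperature matrix as percentile features.
    return np.percentile(temps, [5, 25, 50, 75, 95])

# Synthetic stand-ins: one face-region temperature matrix per bird.
rng = np.random.default_rng(0)
regions = [rng.normal(loc=t, scale=0.5, size=(40, 40))
           for t in rng.uniform(33, 41, size=192)]
y = np.array(["Danger" if r.mean() > 37.5 else "Normal" for r in regions])

X = np.stack([thermal_signature(r) for r in regions])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy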

3.
Animals (Basel); 14(1), 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38200761

ABSTRACT

The selection of animals to be marketed is largely done by visual assessment, relying solely on the skill of the animal caretaker. Real-time monitoring of the weight of farm animals would provide important information not only for marketing but also for assessing health and well-being. The objective of this study was to develop and evaluate a 3D Convolutional Neural Network-based method to predict weight from point clouds. An Intel RealSense D435 stereo depth camera placed at a height of 2.7 m was used to capture 3D videos of single finishing pigs, weighing between 20 and 120 kg, walking freely in a holding pen. Animal weights and 3D videos were collected from 249 Landrace × Large White pigs at the farm facilities of FZEA-USP (Faculty of Animal Science and Food Engineering, University of Sao Paulo) between 5 August and 9 November 2021. Point clouds were manually extracted from the recorded 3D videos and used for modeling. A total of 1186 point clouds were used for model training and validation with the PointNet framework in Python, using a 9:1 split, and 112 randomly selected point clouds were reserved for testing. As a baseline for comparison, the volume between the body surface points and a constant plane representing the ground was calculated and correlated with weight. The PointNet regression model achieved a coefficient of determination of R2 = 0.94 on the test point clouds, compared with R2 = 0.76 for the volume of the same animals. The validation RMSE of the model was 6.79 kg, with a test RMSE of 6.88 kg. To analyze model performance by weight range, the pigs were further divided into three groups: below 55 kg, between 55 and 90 kg, and above 90 kg. Of these, pigs weighing below 55 kg were predicted best by the model. The results clearly show that 3D deep learning on point sets has good potential for accurate weight prediction even with a limited training dataset. This study therefore confirms the usability of 3D deep learning on point sets for predicting farm animal weight, although a larger dataset is needed to ensure the most accurate predictions.
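As a rough illustration of what "3D deep learning on point sets" involves, here is a skeletal PyTorch approximation of a PointNet-style regressor: a shared per-point MLP followed by a symmetric max-pool and a regression head. It omits the input and feature transform networks of the full PointNet and is not the authors' implementation; all layer sizes are illustrative.

# Hedged sketch of a PointNet-style weight regressor (not the paper's code).
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    # Shared per-point MLP -> order-invariant max pool -> weight regression.
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1),
        )

    def forward(self, pts):                    # pts: (batch, n_points, 3)
        x = self.mlp(pts.transpose(1, 2))      # per-point features (batch, 1024, n)
        x = x.max(dim=2).values                # symmetric pooling over points
        return self.head(x).squeeze(-1)        # predicted weight, kg

# Usage: weights = PointNetRegressor()(torch.randn(8, 1024, 3))

The max-pool is what makes the network invariant to point ordering, which is why point clouds can be fed in without any fixed structure.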

4.
PLoS One; 16(10): e0258672, 2021.
Article in English | MEDLINE | ID: mdl-34665834

ABSTRACT

The aim of this study was to develop and evaluate a machine vision algorithm to assess pain levels in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained by machine learning methods. Use of the Horse Grimace Scale depends on a human observer, who is often unavailable to evaluate the animal for long periods and must also be well trained to apply the evaluation system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system would be able to monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment of affected animals. This study is based on the assessment of facial expressions of 7 horses that underwent castration, collected through a video system positioned on top of the feeder station, capturing images at 4 distinct timepoints daily for two days before and four days after surgical castration. A labeling process was applied to build a database of pain facial images, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a Convolutional Neural Network (CNN), which achieved an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although some improvements are still needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain from facial expressions in images of horses extracted from video.
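The study does not publish its network architecture, so the sketch below is only a hedged illustration of the kind of small CNN that could map a cropped horse-face image to the three HGS-derived pain levels; every layer size here is an assumption.

# Illustrative three-level pain classifier (not the authors' architecture).
import torch
import torch.nn as nn

class PainCNN(nn.Module):
    # Tiny convolutional classifier: face crop -> 3 pain levels
    # (not present, moderately present, obviously present).
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, 3, H, W) face crops
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = PainCNN()(torch.randn(1, 3, 224, 224))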


Subject(s)
Automated Facial Recognition/methods; Orchiectomy/adverse effects; Pain Measurement/veterinary; Algorithms; Animals; Databases, Factual; Deep Learning; Facial Recognition; Horses; Humans; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Orchiectomy/veterinary; Video Recording