1.
Sensors (Basel) ; 23(9)2023 Apr 25.
Article in English | MEDLINE | ID: mdl-37177474

ABSTRACT

One of the most challenging problems associated with the development of accurate and reliable applications of computer vision and artificial intelligence in agriculture is that not only are massive amounts of training data usually required, but, in most cases, the images also have to be properly labeled before models can be trained. Such a labeling process tends to be time-consuming, tiresome, and expensive, often making the creation of large labeled datasets impractical. This problem is largely associated with the many steps involved in the labeling process, which require the human expert rater to perform different cognitive and motor tasks in order to correctly label each image, thus diverting brain resources that should be focused on pattern recognition itself. One possible way to tackle this challenge is to explore the phenomenon in which highly trained experts can almost reflexively recognize and accurately classify objects of interest in a fraction of a second. As techniques for recording and decoding brain activity have evolved, it has become possible to directly tap into this ability and to accurately assess the expert's level of confidence and attention during the process. As a result, labeling time can be reduced dramatically while the expert's knowledge is effectively incorporated into artificial intelligence models. This study investigates how the use of electroencephalograms from plant pathology experts can improve the accuracy and robustness of image-based artificial intelligence models dedicated to plant disease recognition. Experiments demonstrated the viability of the approach, with accuracy improving from 96% with the baseline model to 99% using brain-generated labels and an active learning approach.
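
To make the active-learning idea concrete, below is a minimal sketch, assuming EEG decoding yields a provisional label and a confidence score for each pooled image: samples are added to the training set in order of decoded confidence and the classifier is retrained after each batch. The function names, the choice of RandomForestClassifier, and the batch size are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a confidence-weighted
# active-learning loop in which EEG-derived labels are accepted in order of
# the expert's decoded confidence. All names and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def active_learning_with_brain_labels(X_seed, y_seed, X_pool, y_brain, conf_brain,
                                      X_test, y_test, batch_size=50):
    """Iteratively grow the training set with the most confidently brain-labeled samples."""
    X_train, y_train = X_seed.copy(), y_seed.copy()
    order = np.argsort(-conf_brain)           # highest decoded confidence first
    history = []
    for start in range(0, len(order), batch_size):
        batch = order[start:start + batch_size]
        X_train = np.vstack([X_train, X_pool[batch]])
        y_train = np.concatenate([y_train, y_brain[batch]])
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        history.append(accuracy_score(y_test, model.predict(X_test)))
    return history
```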


Subject(s)
Brain Waves , Plant Pathology , Humans , Artificial Intelligence , Reproducibility of Results , Electroencephalography
2.
Sensors (Basel) ; 22(6)2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35336456

ABSTRACT

Acquiring useful data from agricultural areas has always been somewhat of a challenge, as these areas are often expansive, remote, and vulnerable to weather events. Despite these challenges, as technologies evolve and prices drop, a surge of new data is being collected. Although a wealth of data is being collected at different scales (i.e., proximal, aerial, satellite, and ancillary data), this collection has been geographically unequal, leaving certain areas virtually devoid of useful data to help face their specific challenges. However, even in areas with available resources and good infrastructure, data and knowledge gaps are still prevalent, because agricultural environments are mostly uncontrolled and a vast number of factors need to be taken into account and properly measured for a full characterization of a given area. As a result, data from a single sensor type are frequently unable to provide unambiguous answers, even with very effective algorithms, and even if the problem at hand is well defined and limited in scope. Fusing the information contained in data from different sensors and of different types is one possible solution that has been explored for some decades. The idea behind data fusion is to exploit the complementarities and synergies of different kinds of data in order to extract more reliable and useful information about the areas being analyzed. While some success has been achieved, many challenges still prevent a more widespread adoption of this type of approach. This is particularly true for the highly complex environments found in agricultural areas. In this article, we provide a comprehensive overview of data fusion applied to agricultural problems; we present the main successes, highlight the remaining challenges, and suggest possible directions for future research.
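
As a rough illustration of the data fusion concept described above, the following sketch performs feature-level (early) fusion of two hypothetical sensor sources by standardizing and concatenating their feature vectors before a single classifier. The variable names and the choice of classifier are assumptions for illustration only; the article itself is a review and does not prescribe this pipeline.

```python
# Minimal sketch, not from the article: feature-level fusion of two sensor
# sources by concatenating standardized feature vectors before a single
# classifier. Variable names and the choice of classifier are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def fuse_and_classify(X_proximal, X_aerial, y):
    """Feature-level (early) fusion: scale each source, concatenate, then train one model."""
    Xp = StandardScaler().fit_transform(X_proximal)   # e.g., proximal soil-sensor features
    Xa = StandardScaler().fit_transform(X_aerial)     # e.g., aerial spectral features
    X_fused = np.hstack([Xp, Xa])                     # complementary information side by side
    return LogisticRegression(max_iter=1000).fit(X_fused, y)
```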


Subject(s)
Agriculture
3.
Sensors (Basel) ; 20(7)2020 Apr 10.
Article in English | MEDLINE | ID: mdl-32290316

ABSTRACT

The management of livestock in extensive production systems may be challenging, especially in large areas. Using Unmanned Aerial Vehicles (UAVs) to collect images of the area of interest is quickly becoming a viable alternative, but suitable algorithms for extracting relevant information from the images are still rare. This article proposes a method for counting cattle that combines a deep learning model for rough animal location, color space manipulation to increase the contrast between animals and background, mathematical morphology to isolate the animals and infer the number of individuals in clustered groups, and image matching to account for image overlap. Using the Nelore and Canchim breeds as a case study, the proposed approach yields accuracies over 90% under a wide variety of conditions and backgrounds.
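
The sketch below illustrates, under stated assumptions, how the non-deep-learning stages of such a pipeline could fit together: a color-space transform to boost animal/background contrast, morphological cleaning, and blob counting with a heuristic for clustered groups. The channel choice, kernel size, and area thresholds are placeholders, not the published parameters.

```python
# Illustrative sketch only (not the published pipeline): after a detector has
# proposed a rough region containing animals, increase animal/background
# contrast in a different color space, clean the mask with morphology, and
# count connected blobs. Thresholds and channel choice are assumptions for
# light-coated (e.g., Nelore) animals on darker pasture.
import cv2
import numpy as np

def count_animals_in_crop(crop_bgr, min_area=400):
    lab = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2LAB)
    lightness = lab[:, :, 0]                                  # light coats stand out in the L channel
    _, mask = cv2.threshold(lightness, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove small bright noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill gaps inside animals
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]                       # skip background label 0
    blobs = areas[areas >= min_area]
    if len(blobs) == 0:
        return 0
    # Clustered animals form one large blob; estimate count from the median single-animal area.
    typical = np.median(blobs)
    return int(np.sum(np.maximum(1, np.round(blobs / typical))))
```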


Subject(s)
Aircraft , Neural Networks, Computer , Animals , Cattle , Image Processing, Computer-Assisted
4.
Sensors (Basel) ; 19(24)2019 Dec 10.
Article in English | MEDLINE | ID: mdl-31835487

ABSTRACT

Unmanned aerial vehicles (UAVs) are increasingly viewed as valuable tools to aid the management of farms. This kind of technology can be particularly useful in the context of extensive cattle farming, as production areas tend to be expansive and animals tend to be more loosely monitored. With the advent of deep learning, and of convolutional neural networks (CNNs) in particular, extracting relevant information from aerial images has become more effective. Despite the advancements in drone, imaging, and machine learning technologies, the application of UAVs for cattle monitoring is far from thoroughly studied, and many research gaps remain. In this context, the objectives of this study were threefold: (1) to determine the highest possible accuracy that could be achieved in the detection of animals of the Canchim breed, which is visually similar to the Nelore breed (Bos taurus indicus); (2) to determine the ideal ground sample distance (GSD) for animal detection; and (3) to determine the most accurate CNN architecture for this specific problem. The experiments involved 1853 images containing 8629 samples of animals, and 15 different CNN architectures were tested. A total of 900 models were trained (15 CNN architectures × 3 spatial resolutions × 2 datasets × 10-fold cross-validation), allowing for a deep analysis of the several aspects that impact the detection of cattle in aerial images captured using UAVs. Results revealed that many CNN architectures are robust enough to reliably detect animals in aerial images even under far from ideal conditions, indicating the viability of using UAVs for cattle monitoring.
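
The experimental grid implied by this design can be enumerated as follows; the architecture names, ground sample distances, and dataset names below are placeholders, since the abstract does not list them, but the arithmetic (15 × 3 × 2 × 10 = 900 trained models) matches the description.

```python
# Sketch of the experimental grid only: the concrete architectures, GSDs, and
# dataset names here are placeholders, not the study's exact lists.
from itertools import product

ARCHITECTURES = [f"cnn_{i:02d}" for i in range(15)]   # placeholder for the 15 CNN architectures
GSDS_CM = [1, 2, 4]                                   # placeholder ground sample distances (cm/pixel)
DATASETS = ["dataset_A", "dataset_B"]                 # placeholder for the 2 datasets
FOLDS = range(10)                                     # 10-fold cross-validation

runs = list(product(ARCHITECTURES, GSDS_CM, DATASETS, FOLDS))
assert len(runs) == 15 * 3 * 2 * 10 == 900            # total number of trained models

for arch, gsd, dataset, fold in runs:
    pass  # a train_and_evaluate(arch, gsd, dataset, fold) call would go here
```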

5.
Vet Parasitol ; 235: 106-112, 2017 Feb 15.
Article in English | MEDLINE | ID: mdl-28215860

ABSTRACT

This paper presents a study on the use of low-resolution infrared images to detect ticks on cattle. Emphasis is given to the main factors that influence the quality of the captured images, as well as to the actions that can increase the amount of information conveyed by these images. In addition, a new automatic method for analyzing the images and counting the ticks is introduced. The proposed algorithm relies only on color transformations and simple mathematical morphology operations, making it easy to implement and computationally light. Tests were carried out using a large database containing images of the neck and hind end of the animals. The proposed algorithm proved very effective in detecting the ticks visible in the images, even when the contrast with the background was not high. On the other hand, due to both intrinsic and extrinsic factors, the thermographic images used in this study did not always provide enough contrast between the ticks and the cattle's hair coat. Although these problems can be mitigated by following some guidelines, currently only rough estimates of tick counts can be achieved using infrared images with low spatial resolution.
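
A minimal sketch of this kind of color-transform-plus-morphology detection is shown below, assuming ticks appear as small high-contrast spots against the hair coat; the morphological top-hat, kernel size, and blob-area limits are assumptions for illustration, not the paper's actual parameters.

```python
# Minimal illustrative sketch (not the paper's algorithm): find small
# high-contrast spots in a thermographic image using a grayscale transform
# plus a morphological top-hat. Kernel size and the area range for a
# "tick-sized" blob are assumptions.
import cv2

def count_tick_candidates(thermal_bgr, min_area=5, max_area=200):
    gray = cv2.cvtColor(thermal_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # Top-hat keeps only small bright structures relative to the local background.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]                       # skip background label 0
    return int(((areas >= min_area) & (areas <= max_area)).sum())
```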


Subject(s)
Algorithms , Cattle Diseases/diagnosis , Thermography/veterinary , Tick Infestations/veterinary , Ticks/physiology , Animals , Cattle , Cattle Diseases/parasitology , Female , Infrared Rays , Male , Thermography/methods , Tick Infestations/diagnosis , Tick Infestations/parasitology
6.
Plant Dis ; 98(12): 1709-1716, 2014 Dec.
Article in English | MEDLINE | ID: mdl-30703885

ABSTRACT

A method is presented to detect and quantify leaf symptoms using conventional color digital images. The method was designed to be completely automatic, eliminating the possibility of human error and reducing the time taken to measure disease severity. The program is capable of dealing with images containing multiple leaves, which further reduces processing time. Accurate results are possible even when the symptoms and leaf veins have similar color and shade characteristics. The algorithm is subject to one constraint: the background must be as close to white or black as possible. Tests showed that the method provided accurate estimates over a wide variety of conditions, being robust to variation in the size, shape, and color of leaves; symptoms; and leaf veins. The low rates of false positives and false negatives that did occur were due to extrinsic factors such as issues with image capture and the use of extreme file compression ratios.
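
A minimal sketch of this kind of segmentation-and-quantification step is given below, assuming a near-white (or near-black) background and treating non-green leaf pixels as symptomatic; the thresholds and HSV range are illustrative assumptions rather than the published method.

```python
# Illustrative sketch only (not the published method): segment leaves from a
# near-white or near-black background, then estimate symptom severity as the
# fraction of leaf pixels that are not predominantly green. Thresholds are assumptions.
import cv2

def estimate_severity(image_bgr, background_white=True):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Leaf mask: everything that is not close to the white (or black) background.
    if background_white:
        _, leaf_mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)
    else:
        _, leaf_mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    healthy = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # roughly green pixels
    leaf_pixels = cv2.countNonZero(leaf_mask)
    if leaf_pixels == 0:
        return 0.0
    healthy_pixels = cv2.countNonZero(cv2.bitwise_and(healthy, leaf_mask))
    return 1.0 - healthy_pixels / leaf_pixels                   # symptom area fraction
```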
