1.
Plant Phenomics ; 6: 0189, 2024.
Article in English | MEDLINE | ID: mdl-38817960

ABSTRACT

Deep learning and multimodal remote and proximal sensing are widely used for analyzing plant and crop traits, but many of these deep learning models are supervised and require reference datasets with image annotations. Acquiring these datasets often demands experiments that are both labor-intensive and time-consuming. Furthermore, extracting traits from remote sensing data beyond simple geometric features remains a challenge. To address these challenges, we propose a radiative transfer modeling framework, based on the Helios 3-dimensional (3D) plant modeling software, designed for simulating plant remote and proximal sensing images. The framework can simulate RGB, multi-/hyperspectral, thermal, and depth cameras and produce the associated plant images with fully resolved reference labels such as plant physical traits, leaf chemical concentrations, and leaf physiological traits. Helios offers a simulated environment that enables generation of 3D geometric models of plants and soil with random variation, and specification or simulation of their properties and function. This approach differs from traditional computer graphics rendering by explicitly modeling radiation transfer physics, which provides a critical link to the underlying plant biophysical processes. Results indicate that the framework can generate high-quality, labeled synthetic plant images under given lighting scenarios, which can lessen or remove the need for manually collected and annotated data. Two example applications demonstrate the feasibility of using the model to enable unsupervised learning: deep learning models are trained exclusively on simulated images and then perform prediction tasks on real images.
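The core workflow the abstract describes, training on simulated, automatically labeled data and predicting on real data, can be sketched in miniature. The sketch below is purely illustrative and does not use Helios: `render_simulated` is a hypothetical stand-in for a simulator that returns an image-derived measurement together with its exact ground-truth label, and the model is a least-squares fit rather than a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_simulated(n, noise=0.0):
    """Hypothetical stand-in for a simulator render: the 'image' is reduced
    to a plant-pixel count; the label (leaf area) is known exactly because
    the scene is synthetic."""
    leaf_area = rng.uniform(10.0, 100.0, n)              # ground-truth trait
    pixels = 50.0 * leaf_area + rng.normal(0, noise, n)  # sensor response
    return pixels, leaf_area

# 1. The training set comes entirely from simulation, so labels are free.
x_train, y_train = render_simulated(500, noise=5.0)

# 2. Fit a simple least-squares model (the paper uses deep nets; linear here).
A = np.vstack([x_train, np.ones_like(x_train)]).T
w, b = np.linalg.lstsq(A, y_train, rcond=None)[0]

# 3. At inference time, noisier "real" images need no annotations at all.
x_real, y_real = render_simulated(100, noise=20.0)
pred = w * x_real + b
mae = float(np.mean(np.abs(pred - y_real)))
```

The point of the exercise is step 1: because the simulator resolves every trait it renders, the reference labels cost nothing beyond compute.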

2.
Plant Phenomics ; 5: 0084, 2023.
Article in English | MEDLINE | ID: mdl-37680999

ABSTRACT

In recent years, deep learning models have become the standard for agricultural computer vision. Such models are typically fine-tuned for agricultural tasks from model weights originally fit to more general, non-agricultural datasets. This lack of agriculture-specific pretraining potentially increases training time and resource use and decreases model performance, leading to an overall decrease in data efficiency. To overcome this limitation, we collect a wide range of existing public datasets for 3 distinct tasks, standardize them, and construct standard training and evaluation pipelines, providing a set of benchmarks and pretrained models. We then conduct a number of experiments using methods that are common in deep learning but largely unexplored in agriculture-specific applications. Our experiments guide us in developing a number of approaches to improve data efficiency when training agricultural deep learning models, without large-scale modifications to existing pipelines. Our results demonstrate that even slight training modifications, such as using agricultural pretrained model weights or adopting specific spatial augmentations in data processing pipelines, can considerably boost model performance and shorten convergence time, saving training resources. Furthermore, we find that even models trained on low-quality annotations can perform comparably to their high-quality equivalents, suggesting that datasets with poor annotations can still be used for training, expanding the pool of currently available datasets. Our methods are broadly applicable throughout agricultural deep learning and present high potential for substantial data-efficiency improvements.
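Spatial augmentation, one of the "slight training modifications" the abstract mentions, can be illustrated with a minimal sketch. The function below is not from the paper's pipeline; it simply shows the idea: flips and right-angle rotations change image geometry while preserving pixel content, so they are label-preserving for whole-image tasks such as classification.

```python
import numpy as np

rng = np.random.default_rng(42)

def spatial_augment(img):
    """Apply one random label-preserving spatial augmentation:
    horizontal flip, vertical flip, 90-degree rotation, or identity."""
    choice = rng.integers(0, 4)
    if choice == 0:
        return np.fliplr(img)
    if choice == 1:
        return np.flipud(img)
    if choice == 2:
        return np.rot90(img)
    return img  # identity: no augmentation this draw

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
aug = spatial_augment(img)
```

In a real training loop this would be applied per batch, multiplying the effective variety of a fixed dataset at no annotation cost.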

4.
Appl Environ Microbiol ; 89(1): e0182822, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36533914

ABSTRACT

In assessing food microbial safety, the presence of Escherichia coli is a critical indicator of fecal contamination. However, conventional detection methods require the isolation of bacterial macrocolonies for biochemical or genetic characterization, which takes a few days and is labor-intensive. In this study, we show that the real-time object detection and classification algorithm You Only Look Once version 4 (YOLOv4) can accurately identify the presence of E. coli at the microcolony stage after a 3-h cultivation. Integrated with phase-contrast microscopic imaging, YOLOv4 discriminated E. coli from seven other common foodborne bacterial species with an average precision of 94%. This approach also enabled the rapid quantification of E. coli concentrations over 3 orders of magnitude, with an R² of 0.995. For romaine lettuce spiked with E. coli (10 to 10³ CFU/g), the trained YOLOv4 detector had a false-negative rate of less than 10%. This approach accelerates analysis and avoids manual result determination, and thus has the potential to serve as a rapid and user-friendly bacterial sensing method in the food industry. IMPORTANCE A simple, cost-effective, and rapid method to identify potential pathogen contamination in food products is needed to prevent foodborne illnesses and outbreaks. This study combined artificial intelligence (AI) and optical imaging to detect bacteria at the microcolony stage within 3 h of inoculation. This approach eliminates the need for time-consuming culture-based colony isolation and for resource-intensive molecular approaches to bacterial identification. The approach developed here is broadly applicable to the identification of diverse bacterial species. In addition, it can be implemented in resource-limited areas, as it requires neither expensive instruments nor highly trained personnel. This AI-assisted detection not only achieves high accuracy in bacterial classification but also offers the potential for automated bacterial detection, reducing labor workloads in food industries, environmental monitoring, and clinical settings.
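The quantification step downstream of the detector can be sketched as simple post-processing: count detections of the target class above a confidence threshold, then convert the count to a concentration. The detection tuples below mimic typical object-detector output (class, confidence, bounding box), and the calibration in `estimate_cfu_per_ml` (one microcolony per CFU in a hypothetical imaged volume) is illustrative, not taken from the paper.

```python
# Hypothetical detector output: (class, confidence, (x1, y1, x2, y2)).
DETECTIONS = [
    ("e_coli",   0.97, (12, 30, 40, 58)),
    ("e_coli",   0.91, (100, 110, 128, 140)),
    ("listeria", 0.88, (60, 60, 90, 92)),
    ("e_coli",   0.42, (5, 5, 20, 20)),   # below confidence threshold
]

def count_target(dets, target="e_coli", conf_thresh=0.5):
    """Count confident detections of the target species."""
    return sum(1 for cls, conf, _ in dets if cls == target and conf >= conf_thresh)

def estimate_cfu_per_ml(colony_count, field_volume_ml=1e-4):
    """Assume each detected microcolony arose from one CFU in the imaged volume."""
    return colony_count / field_volume_ml

n = count_target(DETECTIONS)
cfu = estimate_cfu_per_ml(n)
```

Because the count-to-concentration relation is linear over the dilution range, a single calibration covers several orders of magnitude, which is consistent with the reported R² of 0.995.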


Subject(s)
Artificial Intelligence , Escherichia coli , Humans , Bacteria , Food Safety , Optical Imaging , Food Microbiology , Colony Count, Microbial , Food Contamination/analysis
5.
Front Plant Sci ; 13: 758818, 2022.
Article in English | MEDLINE | ID: mdl-35498682

ABSTRACT

Plant breeders, scientists, and commercial producers commonly use growth rate as an integrated signal of crop productivity and stress. Growth monitoring is often done destructively: plants are harvested at different growth stages and weighed individually to estimate growth rate. Within plant breeding and research applications, and more recently in commercial applications, non-destructive growth monitoring uses computer vision to segment plants from the image background, in either 2D or 3D, and relates these image-based features to destructive biomass measurements. Recent advancements in machine learning have improved image-based localization and detection of plants, but such techniques are not well suited to predicting biomass under significant self-occlusion or occlusion from neighboring plants, such as is encountered in leafy green production in controlled environment agriculture. To enable prediction of plant biomass under occluded growing conditions, we develop an end-to-end deep learning approach that directly predicts lettuce plant biomass from color and depth image data provided by a low-cost, commercially available sensor. We test the performance of the proposed deep neural network for lettuce production, observing a mean prediction error of 7.3% on a comprehensive test dataset of 864 individuals and substantially outperforming previous work on plant biomass estimation. The modeling approach is robust to the busy and occluded scenes often found in commercial leafy green production and requires only measured mass values for training. We then demonstrate that this level of prediction accuracy allows rapid, non-destructive detection of changes in biomass accumulation due to experimentally induced stress in as little as 2 days. Using this method, growers may observe and react to changes in plant-environment interactions in near real time. Moreover, we expect that such a sensitive technique for non-destructive biomass estimation will enable novel research and the breeding of improved productivity and yield in response to stress.
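The end-to-end idea, pixels in, mass out, with only measured mass values as supervision and no segmentation step, can be illustrated with a toy regressor. Everything below is a hypothetical stand-in: `rgbd_sample` fabricates small RGB-D arrays whose statistics scale with mass, and plain gradient descent on a linear model replaces the paper's deep neural network.

```python
import numpy as np

rng = np.random.default_rng(7)

def rgbd_sample(mass):
    """Toy 8x8x4 RGB-D 'image' whose pixel statistics scale with plant mass;
    a stand-in for real sensor frames."""
    img = rng.normal(loc=mass / 100.0, scale=0.05, size=(8, 8, 4))
    return img.ravel()

# Training supervision is only the measured fresh mass (g) per plant.
masses = rng.uniform(5.0, 150.0, 300)
X = np.stack([rgbd_sample(m) for m in masses])

# Linear model trained end-to-end by gradient descent on squared error.
w = np.zeros(X.shape[1])
b = 0.0
lr = 1e-3
for _ in range(2000):
    err = X @ w + b - masses
    w -= lr * (X.T @ err) / len(masses)
    b -= lr * err.mean()

# Predict mass for an unseen plant directly from its pixels.
pred = float(rgbd_sample(80.0) @ w + b)
```

The design point mirrored here is that no plant/background segmentation or hand-crafted geometric feature is extracted; the model maps raw pixel values straight to mass, which is what makes the approach tolerant of occluded scenes.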
