1.
Plant Phenomics ; 6: 0188, 2024.
Article in English | MEDLINE | ID: mdl-38933805

ABSTRACT

The tassel state in maize hybridization fields reflects not only the growth stage of the maize but also the performance of the detasseling operation. Existing tassel detection models are primarily designed to identify mature tassels with obvious features, making it difficult to accurately identify small tassels or detasseled plants. This study presents a novel approach that utilizes unmanned aerial vehicles (UAVs) and deep learning techniques to accurately identify and assess tassel states before and after manual detasseling in maize hybridization fields. The proposed method shows that a specific tassel annotation and data augmentation strategy is valuable for substantially enhancing the quality of the tassel training data. This study also evaluates mainstream object detection models and proposes a series of highly accurate tassel detection models based on tassel categories with strong data adaptability. In addition, a strategy for blocking large UAV images is proposed to improve tassel detection accuracy while balancing UAV image acquisition and computational cost. The experimental results demonstrate that the proposed method can accurately identify and classify tassels at various stages of detasseling. The tassel detection model optimized with the enhanced data achieves an average precision of 94.5% across all categories. An optimal model combination that uses blocking strategies for different development stages improves tassel detection accuracy to 98%. This could help address the issue of missed tassel detections in maize hybridization fields. The data annotation and image blocking strategies may also have broad applications in object detection and recognition in other agricultural scenarios.
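The image blocking strategy described above can be illustrated with a simple tiling sketch (a minimal illustration, not the paper's implementation; the tile size and overlap values are assumptions):

```python
def tile_image(width, height, tile=1024, overlap=128):
    """Split a large UAV image into overlapping tiles.

    Returns (x0, y0, x1, y1) pixel boxes; the overlap reduces the
    chance that a small tassel is cut in half at a tile boundary.
    """
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes
```

Detections from each tile would then be mapped back to full-image coordinates and deduplicated (e.g., by non-maximum suppression) across overlaps.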

2.
Plant Phenomics ; 5: 0043, 2023.
Article in English | MEDLINE | ID: mdl-37223316

ABSTRACT

Field phenotyping platforms that can obtain high-throughput, time-series phenotypes of plant populations in 3 dimensions are crucial for plant breeding and management. However, it is difficult to align the point cloud data and extract accurate phenotypic traits of plant populations. In this study, high-throughput, time-series raw data of field maize populations were collected using a field rail-based phenotyping platform with light detection and ranging (LiDAR) and an RGB (red, green, and blue) camera. The orthorectified images and LiDAR point clouds were aligned via the direct linear transformation algorithm. On this basis, time-series point clouds were further registered under time-series image guidance. The cloth simulation filter algorithm was then used to remove the ground points. Individual plants and plant organs were segmented from the maize population by fast displacement and region-growing algorithms. The plant heights of 13 maize cultivars obtained using the multi-source fusion data were highly correlated with manual measurements (R2 = 0.98), and the accuracy was higher than that obtained using a single-source point cloud (R2 = 0.93). These results demonstrate that multi-source data fusion can effectively improve the accuracy of time-series phenotype extraction and that rail-based field phenotyping platforms can be a practical tool for observing plant growth dynamics at the individual-plant and organ scales.
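As an illustration of the final extraction step, plant height can be read off a registered, ground-removed point cloud; this is a generic sketch, not the platform's pipeline, and the percentile choices are assumptions:

```python
import numpy as np

def plant_height(points, ground_percentile=1.0, top_percentile=99.0):
    """Estimate plant height from a ground-removed point cloud (N x 3).

    Percentiles of the z coordinate are used instead of raw min/max
    to damp the effect of stray outlier points.
    """
    z = points[:, 2]
    return float(np.percentile(z, top_percentile) - np.percentile(z, ground_percentile))
```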

3.
Research (Wash D C) ; 6: 0059, 2023.
Article in English | MEDLINE | ID: mdl-36951796

ABSTRACT

The lack of efficient crop phenotypic measurement methods has become a bottleneck in breeding and precision cultivation, whereas high-throughput, accurate phenotypic measurement could accelerate breeding and improve existing cultivation management technology. This paper therefore introduces a high-throughput crop phenotype measurement platform named LQ-FieldPheno, developed by the China National Agricultural Information Engineering Technology Research Centre. The platform is a mobile, high-throughput, automatic phenotype acquisition system based on a field track, which introduces the Internet of Things (IoT) into agricultural breeding. It uses a multisensor central imaging unit for crop phenotypes as its core and integrates an automatic control system, a field track, an intelligent navigation vehicle, and environmental sensors. Furthermore, it combines an RGB camera, a 6-band multispectral camera, a thermal infrared camera, a 3-dimensional laser radar, and a depth camera. Special software was developed to control the motions and sensors and to design run lines. Using the wireless sensor networks and mobile communication networks of the IoT, the system can obtain phenotypic information about plants throughout their growth period automatically, at high throughput, and in high temporal sequence. Moreover, LQ-FieldPheno features multi-modal data acquisition, strong timeliness, remarkable expansibility, high cost-effectiveness, and flexible customization. LQ-FieldPheno was operated in the 2020 maize growing season, and the collected point cloud data were used to estimate maize plant height. Compared with traditional crop phenotypic measurement technology, LQ-FieldPheno has the advantage of continuously and synchronously obtaining multisource phenotypic data at different growth stages and extracting different plant parameters. The proposed platform could contribute to research on crop phenotypes, remote sensing, agronomy, and related disciplines.

4.
Plants (Basel) ; 12(3)2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36771568

ABSTRACT

Unmanned ground vehicles (UGVs) have attracted much attention in crop phenotype monitoring due to their light weight and flexibility. This paper describes a new UGV equipped with an electric slide rail and a high-throughput point cloud acquisition and phenotype extraction system. The designed UGV is equipped with an autopilot system, a small electric slide rail, and light detection and ranging (LiDAR) to achieve high-throughput, high-precision automatic crop point cloud acquisition and map building. The phenotype analysis system performs single-plant segmentation and pipeline extraction of plant height and maximum crown width from the crop point cloud using random sample consensus (RANSAC), Euclidean clustering, and k-means clustering algorithms. This phenotyping system was used to collect point cloud data and extract plant height and maximum crown width for 54 greenhouse-potted lettuce plants. The results showed that the coefficients of determination (R2) between the collected data and manual measurements were 0.97996 and 0.90975, respectively, while the root mean square errors (RMSE) were 1.51 cm and 4.99 cm, respectively. At less than a tenth of the cost of the PlantEye F500, the UGV achieves phenotypic data acquisition with less error and detects morphological trait differences between lettuce types. Thus, it could be suitable for practical 3D phenotypic measurement of greenhouse crops.
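The maximum crown width trait can be sketched as the largest pairwise horizontal distance within a segmented plant cloud; this is a generic illustration, not the paper's pipeline, and the brute-force approach assumes a modest point count per plant:

```python
import numpy as np

def max_crown_width(points):
    """Maximum horizontal extent of a single plant's point cloud (N x 3).

    Brute-force pairwise distances in the XY plane; adequate for the
    few thousand points of one segmented plant (subsample larger clouds).
    """
    xy = points[:, :2]
    diffs = xy[:, None, :] - xy[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).max())
```

A convex hull of the XY projection would give the same answer in O(n log n) for large clouds.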

5.
Front Plant Sci ; 14: 1109443, 2023.
Article in English | MEDLINE | ID: mdl-36814756

ABSTRACT

The gap fraction (GF) of vegetative canopies is an important property related to the contained bulk of reproductive elements and woody facets within the tree crown volume. This work was developed from the perspectives of porous media theory and computer graphics, considering the vegetative elements in the canopy as a solid matrix and treating the gaps between them as pores to guide volume-based GFvol calculations. Woody components and individual leaves were extracted from terrestrial laser scanning data. The concept of equivalent leaf thickness, describing the degree of leaf curling and drooping, was proposed to construct hexagonal prisms properly enclosing the scanned points of each leaf, and cylinder models were adopted to fit each branch segment, enabling the calculation of the equivalent leaf and branch volumes within the crown. Finally, the volume-based GFvol of the tree crown, following the definition of the void fraction in porous media theory, was calculated as one minus the ratio of the total plant leaf and branch volume to the canopy volume. This approach was tested on five tree species and a forest plot with variable canopy architecture, yielding an estimated maximum volume-based GFvol of 0.985 for a small crepe myrtle and a minimum volume-based GFvol of 0.953 for a sakura tree. The 3D morphology of each compositional element in the tree canopy was geometrically defined, and the canopy was considered a porous structure to conduct GFvol calculations based on multidisciplinary theory.
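The volume-based gap fraction definition given above translates directly into a one-line calculation (a minimal sketch; the volume inputs would come from the hexagonal-prism and cylinder fits described in the abstract):

```python
def gap_fraction_vol(leaf_volume, branch_volume, canopy_volume):
    """Volume-based gap fraction: one minus the filled fraction of the crown.

    Follows the void-fraction definition from porous media theory:
    GFvol = 1 - (V_leaf + V_branch) / V_canopy.
    """
    return 1.0 - (leaf_volume + branch_volume) / canopy_volume
```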

6.
Plant Phenomics ; 2022: 9856739, 2022.
Article in English | MEDLINE | ID: mdl-35935676

ABSTRACT

Forested environments feature a highly complex radiation regime, and solar radiation is hindered from penetrating into the forest by the 3D canopy structure; hence, canopy shortwave radiation varies spatiotemporally, seasonally, and meteorologically, making the radiant flux challenging to both measure and model. Here, we developed a synergetic method using airborne LiDAR data and computer graphics to model the forest canopy and calculate the radiant fluxes of three forest plots (conifer, broadleaf, and mixed). Directional incident solar beams were emitted according to the solar altitude and azimuth angles, and the forest canopy surface was decomposed into triangular elements. A ray tracing algorithm was utilized to simulate the propagation of reflected and transmitted beams within the forest canopy. Our method accurately modeled the solar radiant fluxes and demonstrated good agreement (R2 ≥ 0.82) with the plot-scale results of hemispherical photo-based HPEval software and pyranometer measurements. The maximum incident radiant flux appeared in the conifer plot at noon on June 15 due to the largest solar altitude angle (81.21°) and dense clustering of tree crowns; the conifer plot also received the maximum reflected radiant flux (10.91-324.65 kW) due to the higher reflectance of coniferous trees and the better absorption of reflected solar beams. However, the broadleaf plot received more transmitted radiant flux (37.7-226.71 kW) for the trees in the shaded area due to the larger transmittance of broadleaf species. Our method can directly simulate the detailed plot-scale distribution of canopy radiation and is valuable for researching light-dependent biophysiological processes.
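The directional-beam interception at the heart of such a ray tracing scheme can be sketched for a single triangular canopy facet (a simplified illustration of the first-order incident term only; occlusion, reflection, and transmission are omitted):

```python
import numpy as np

def incident_flux(vertices, sun_dir, irradiance):
    """Direct-beam flux (W) intercepted by one triangular canopy facet.

    vertices: 3x3 array of triangle corners (m); sun_dir: unit vector
    toward the sun; irradiance: beam irradiance on a plane normal to
    the beam (W/m^2). Flux = E * A * |cos(theta)|.
    """
    a, b, c = vertices
    n = np.cross(b - a, c - a)              # area vector (2 * area * unit normal)
    area = np.linalg.norm(n) / 2.0
    cos_theta = abs(np.dot(n / np.linalg.norm(n), sun_dir))
    return irradiance * area * cos_theta
```

Summing this quantity over all unshadowed triangles of a plot would give the plot's incident radiant flux for one sun position.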

7.
Front Plant Sci ; 13: 927832, 2022.
Article in English | MEDLINE | ID: mdl-35845657

ABSTRACT

Currently available methods for evaluating most biochemical traits in plant phenotyping are destructive and have extremely low throughput. Hyperspectral techniques, however, can non-destructively obtain the spectral reflectance characteristics of plants, which provide abundant biophysical and biochemical information. Therefore, plant spectra combined with machine learning algorithms can be used to predict plant phenotypic traits. However, raw spectral reflectance contains noise and redundant information, which can easily degrade the robustness of models developed via multivariate analysis methods. In this study, two end-to-end deep learning models were developed based on 2D convolutional neural networks (2DCNN) and fully connected neural networks (FCNN; Deep2D and DeepFC, respectively) to rapidly and non-destructively predict the phenotypic traits of lettuce from spectral reflectance. Three linear and two nonlinear multivariate analysis methods were used to develop models against which the performance of the deep learning models was weighed. The models based on multivariate analysis methods require a series of manual feature extraction steps, such as pretreatment and wavelength selection, whereas the proposed models automatically extract the features related to phenotypic traits. A visible near-infrared hyperspectral camera was used to image lettuce plants growing in the field, and the spectra extracted from the images were used to train the networks. The proposed models achieved good performance, with determination coefficients of prediction (Rp2) of 0.9030 and 0.8490 using Deep2D for soluble solids content and DeepFC for pH, respectively. The performance of the deep learning models was compared with the five multivariate analysis methods. The quantitative analysis showed that the deep learning models had higher Rp2 than all the multivariate analysis methods, indicating better performance. Also, wavelength selection and pretreatment methods affected different multivariate analysis methods differently, and selecting appropriate multivariate analysis and pretreatment methods added time and computational cost. Unlike multivariate analysis methods, the proposed deep learning models did not require any pretreatment or dimensionality reduction and are thus more suitable for application in high-throughput plant phenotyping platforms. These results indicate that deep learning models can better predict plant phenotypic traits from spectral reflectance.
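A fully connected spectral regressor of the kind DeepFC represents can be sketched in a few lines (the layer sizes and random weights here are purely illustrative assumptions, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_forward(spectrum, weights, biases):
    """Forward pass of a small fully connected regression network.

    spectrum: 1D reflectance vector; ReLU hidden layers, linear output
    predicting a single trait value (e.g., soluble solids content).
    """
    h = spectrum
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(w @ h + b, 0.0)      # ReLU activation
    return float((weights[-1] @ h + biases[-1])[0])

# illustrative shapes: 200 spectral bands -> 32 hidden units -> 1 output
weights = [rng.normal(0.0, 0.1, (32, 200)), rng.normal(0.0, 0.1, (1, 32))]
biases = [np.zeros(32), np.zeros(1)]
```

In practice the weights would be learned end-to-end from spectra and reference trait measurements, replacing the manual pretreatment and wavelength-selection steps of multivariate methods.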

8.
Plant Biotechnol J ; 19(1): 35-50, 2021 01.
Article in English | MEDLINE | ID: mdl-32569428

ABSTRACT

High-throughput phenotyping is increasingly becoming an important tool for rapid advancement of genetic gain in breeding programmes. Manual phenotyping of vascular bundles is tedious and time-consuming and lags behind the rapid development of functional genomics in maize. More robust, automated techniques for phenotyping vascular bundle traits at high throughput are urgently needed for large crop populations. In this study, we developed a standard process for stem micro-CT data acquisition and an automatic CT image processing pipeline to obtain vascular bundle traits of stems, including geometry-, morphology-, and distribution-related traits. Next, we analysed the phenotypic variation in stem vascular bundles among natural population subgroups (480 inbred lines) based on 48 comprehensive phenotypic traits. Also, the first database for stem micro-phenotypes, MaizeSPD, was established, storing 554 records of basic information on maize inbred lines, 523 records of experimental information, 1008 CT scanning and processed images, and 24,192 records of phenotypic data. Combined with genome-wide association studies (GWASs), a total of 1562 significant single nucleotide polymorphisms (SNPs) were identified for 30 stem micro-phenotypic traits, and 84 unique genes for 20 traits such as VBNum, VBAvArea, and PZVBDensity were detected. The candidate genes identified by GWAS mainly encode enzymes involved in cell wall metabolism, transcription factors, protein kinases, and proteins related to plant signal transduction and stress response. The results presented here will advance our knowledge of the phenotypic trait components of stem vascular bundles and provide useful information for understanding the genetic controls of vascular bundle formation and development.


Subject(s)
Plant Vascular Bundle , Zea mays , Genetic Association Studies , Phenotype , Plant Breeding , Polymorphism, Single Nucleotide , Zea mays/genetics
9.
Plant Phenomics ; 2020: 1848437, 2020.
Article in English | MEDLINE | ID: mdl-33313542

ABSTRACT

Plant phenotyping technologies play important roles in plant research and agriculture. Detailed phenotypes of individual plants can guide the optimization of shoot architecture for plant breeding and are useful for analyzing morphological differences in response to environments for crop cultivation. Accordingly, high-throughput phenotyping technologies for individual plants grown in field conditions are urgently needed, and MVS-Pheno, a portable and low-cost phenotyping platform for individual plants, was developed. The platform is composed of four major components: a semiautomatic multiview stereo (MVS) image acquisition device, a data acquisition console, data processing and phenotype extraction software for maize shoots, and a data management system. The platform's device is detachable and adjustable according to the size of the target shoot. Image sequences for each maize shoot can be captured within 60-120 seconds; 3D point clouds of the shoots are then reconstructed using commercial MVS-based software, and the phenotypic traits at the organ and individual-plant levels are extracted by the software. The coefficients of determination (R2) between the extracted and manually measured plant height, leaf width, and leaf area values are 0.99, 0.87, and 0.93, respectively. A data management system has also been developed to store and manage the acquired raw data, reconstructed point clouds, agronomic information, and resulting phenotypic traits. The platform offers an alternative solution for high-throughput phenotyping of field-grown plants and is especially useful for large populations or experiments across many different ecological regions.
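The agreement metric reported above, R2 between extracted and manually measured trait values, is the coefficient of determination; a generic sketch of the standard formula:

```python
import numpy as np

def r_squared(measured, extracted):
    """Coefficient of determination between manual and extracted traits.

    R2 = 1 - SS_res / SS_tot, with the manual measurements as reference.
    """
    measured = np.asarray(measured, dtype=float)
    extracted = np.asarray(extracted, dtype=float)
    ss_res = ((measured - extracted) ** 2).sum()
    ss_tot = ((measured - measured.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```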

10.
Front Plant Sci ; 11: 563386, 2020.
Article in English | MEDLINE | ID: mdl-33123178

ABSTRACT

The yield and quality of fresh lettuce can be determined from the growth rate and color of individual plants. Manual assessment and phenotyping of hundreds of lettuce varieties is very time-consuming and labor-intensive. In this study, we utilized a "Sensor-to-Plant" greenhouse phenotyping platform to periodically capture top-view images of lettuce; datasets of over 2000 plants from 500 lettuce varieties were thus captured at eight time points during vegetative growth. Here, we present a novel object detection-semantic segmentation-phenotyping method based on convolutional neural networks (CNNs) to conduct non-invasive, high-throughput phenotyping of the growth and development status of multiple lettuce varieties. Multistage CNN models for object detection and semantic segmentation were integrated to bridge the gap between image capture and plant phenotyping. An object detection model was used to detect and identify each pot in the image sequence with 99.82% accuracy, a semantic segmentation model was utilized to segment and identify each lettuce plant with a 97.65% F1 score, and a phenotyping pipeline was used to extract a total of 15 static traits (related to geometry and color) for each lettuce plant. Furthermore, dynamic traits (growth and accumulation rates) were calculated from the curves of the static traits across the eight growth points. The correlation and descriptive ability of these static and dynamic traits were carefully evaluated for the interpretability of traits related to the digital biomass and quality of lettuce, and the observed accumulation rates of static traits more accurately reflected the growth status of the lettuce plants. Finally, we validated the application of image-based high-throughput phenotyping through geometric measurement and color grading for a wide range of lettuce varieties. The proposed method can be extended to crops such as maize, wheat, and soybean as a non-invasive means of phenotype evaluation and identification.
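The dynamic traits described above, rates derived from static-trait curves across capture dates, can be sketched as simple per-interval differences (a generic illustration; the paper's exact rate definition may differ):

```python
def growth_rates(trait_values, times):
    """Per-interval growth rate of one static trait across capture dates.

    trait_values: trait measured at each time point (e.g., projected area);
    times: corresponding capture times (e.g., days after sowing).
    Returns one rate per consecutive interval.
    """
    return [(v1 - v0) / (t1 - t0)
            for (v0, v1), (t0, t1) in zip(zip(trait_values, trait_values[1:]),
                                          zip(times, times[1:]))]
```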

11.
Front Plant Sci ; 10: 714, 2019.
Article in English | MEDLINE | ID: mdl-31214228

ABSTRACT

Reliable, automatic, multifunctional, high-throughput phenotyping technologies are increasingly considered important tools for the rapid advancement of genetic gain in breeding programs. With the rapid development of high-throughput phenotyping technologies, research in this area is entering a new era called 'phenomics.' The crop phenotyping community not only needs to build a multi-domain, multi-level, multi-scale crop phenotyping big database but also to develop technical systems for phenotypic trait identification and bioinformatics technologies for extracting information from the overwhelming amounts of omics data. Here, we provide an overview of crop phenomics research in two parts, from phenotypic data collection through various sensors to phenomics analysis. Finally, we discuss the challenges and prospects of crop phenomics, provide suggestions for developing new methods of mining genes associated with important agronomic traits, and propose new intelligent solutions for precision breeding.
