Results 1 - 20 of 29
1.
Sensors (Basel) ; 20(11)2020 Jun 11.
Article in English | MEDLINE | ID: mdl-32545168

ABSTRACT

High-throughput plant phenotyping in controlled environments (growth chambers and glasshouses) is often delivered via large, expensive installations, leading to limited access and the increased relevance of "affordable phenotyping" solutions. We present two robot vectors for automated plant phenotyping under controlled conditions. Using 3D-printed components and readily available hardware and electronic components, these designs are inexpensive, flexible and easily adapted to multiple tasks. We present a design for a thermal imaging robot for high-precision time-lapse imaging of canopies and a Plate Imager for high-throughput phenotyping of roots and shoots of plants grown on media plates. Phenotyping in controlled conditions requires multi-position spatial and temporal monitoring of environmental conditions. We also present a low-cost sensor platform for environmental monitoring based on inexpensive sensors, microcontrollers and internet-of-things (IoT) protocols.
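
As an illustrative sketch of the kind of IoT-style reporting such a sensor platform could use, assuming an MQTT workflow and the paho-mqtt 1.x client API; the broker address, topic and read_sensors() stub below are hypothetical, not taken from the paper.

```python
import json
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API


def read_sensors():
    """Placeholder for real driver calls (e.g. a temperature/humidity chip over I2C)."""
    return {"temperature_c": 21.4, "humidity_pct": 55.2, "timestamp": time.time()}


client = mqtt.Client()
client.connect("broker.example.local", 1883)  # hypothetical broker address
client.loop_start()                           # background network loop

while True:
    # one JSON reading per minute per monitoring position
    client.publish("glasshouse/bench1/env", json.dumps(read_sensors()))
    time.sleep(60)
```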


Subject(s)
Environmental Monitoring , Plants , Phenotype
2.
Article in English | MEDLINE | ID: mdl-32406835

ABSTRACT

We address the complex problem of reliably segmenting root structure from soil in X-ray Computed Tomography (CT) images. We utilise a deep learning approach and propose a state-of-the-art multi-resolution architecture based on encoder-decoders. While previous encoder-decoder work implies the use of multiple resolutions simply by downsampling and upsampling images, we make this process explicit, with branches of the network tasked separately with obtaining local high-resolution segmentation and wider low-resolution contextual information. The complete network is a memory-efficient implementation that is still able to resolve small root detail in large volumetric images. We compare against a number of different encoder-decoder based architectures from the literature, as well as a popular existing image analysis tool designed for root CT segmentation. We show qualitatively and quantitatively that a multi-resolution approach offers substantial accuracy improvements over both a small receptive field in a deep network and a larger receptive field in a shallower network. We then further improve performance using an incremental learning approach, in which failures in the original network are used to generate harder negative training examples. Our proposed method requires no user interaction, is fully automatic, and identifies large and fine root material throughout the whole volume.
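
A toy illustration of the general idea of pairing a full-resolution local branch with a downsampled context branch; this is a minimal PyTorch sketch with arbitrary layer sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchSeg(nn.Module):
    """Toy two-branch segmenter: local detail at full resolution fused
    with context computed on a 4x-downsampled copy of the input."""

    def __init__(self, in_ch=1, feat=16, n_classes=2):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * feat, n_classes, 1)

    def forward(self, x):
        local = self.local(x)                      # full-resolution detail
        ctx = self.context(F.avg_pool2d(x, 4))     # low-resolution context
        ctx = F.interpolate(ctx, size=x.shape[2:],
                            mode="bilinear", align_corners=False)
        return self.head(torch.cat([local, ctx], dim=1))
```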

3.
Plant Methods ; 16: 29, 2020.
Article in English | MEDLINE | ID: mdl-32165909

ABSTRACT

BACKGROUND: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in the appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. RESULTS: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU. CONCLUSION: The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management due to its high-speed inference. We recommend using synthetic and empirical field images together in the training stage to improve model performance.
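
A minimal sketch of anchor-box selection by k-means over the training boxes' width-height pairs, using the common 1 − IoU distance; the distance choice and iteration count are assumptions, as the abstract does not give the exact clustering settings.

```python
import numpy as np


def iou_wh(box, clusters):
    """IoU between one (w, h) box and k cluster (w, h) centroids,
    treating all boxes as sharing the same top-left corner."""
    w = np.minimum(box[0], clusters[:, 0])
    h = np.minimum(box[1], clusters[:, 1])
    inter = w * h
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union


def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """1 - IoU k-means over an (N, 2) array of (width, height) pairs."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dists = np.stack([1 - iou_wh(b, clusters) for b in boxes])
        assign = dists.argmin(axis=1)
        new = np.array([boxes[assign == i].mean(axis=0)
                        if np.any(assign == i) else clusters[i]
                        for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters  # k anchor (width, height) pairs
```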

4.
Mach Vis Appl ; 31(1): 2, 2020.
Article in English | MEDLINE | ID: mdl-31894176

ABSTRACT

There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of in-field deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, thus making them difficult to deploy in the field on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable in the field and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the number of model parameters and the size of the weight matrices of these very deep CNN-based models. Using our combined method (separable convolution and SVD) reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
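
The SVD half of such a compression scheme can be illustrated by factorising a single dense weight matrix into two low-rank factors; this is a sketch only, and neither the separable-convolution half nor the authors' exact truncation criterion is shown.

```python
import numpy as np


def truncate_weights(W, energy=0.95):
    """Low-rank factorisation W ≈ U_r @ V_r, keeping enough singular
    values to retain `energy` of the squared-singular-value spectrum."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    U_r = U[:, :r] * s[:r]   # fold singular values into the left factor
    V_r = Vt[:r, :]
    return U_r, V_r          # an m×n layer becomes m×r plus r×n parameters
```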

5.
IEEE/ACM Trans Comput Biol Bioinform ; 17(6): 1907-1917, 2020.
Article in English | MEDLINE | ID: mdl-31027044

ABSTRACT

Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status, which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to help reduce the bottleneck associated with phenotyping of architectural traits. The pipeline provides fully automated photometric data acquisition and recovery of three-dimensional (3D) models of plants without dependency on botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC), consisting of a camera-mounted robot arm, a combined software interface and a novel surface reconstruction algorithm, is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high-throughput phenotyping, improving on current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.


Subject(s)
Imaging, Three-Dimensional/methods , Models, Biological , Plant Shoots , Algorithms , Computational Biology , Phenotype , Plant Shoots/anatomy & histology , Plant Shoots/classification , Plant Shoots/physiology , Plants/anatomy & histology , Plants/classification , Software , Surface Properties
6.
Front Plant Sci ; 10: 1516, 2019.
Article in English | MEDLINE | ID: mdl-31850020

ABSTRACT

Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. Our system first predicts age group ('young' and 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy on predicting ages of storage roots, and 86% and 71% overall percentage agreement on counting 'old' and 'young' storage roots respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.

7.
Gigascience ; 8(11)2019 11 01.
Article in English | MEDLINE | ID: mdl-31702012

ABSTRACT

BACKGROUND: In recent years quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge. Root images contain complicated structures, variations in size, background, occlusion, clutter and variation in lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds, first order and second order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
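
Transfer to a new species or assay is commonly done by freezing early layers and retraining the rest; below is a generic PyTorch-style sketch under that assumption. The parameter-name prefix "encoder" and the recipe itself are illustrative, not taken from RootNav 2.0.

```python
def prepare_for_transfer(model, frozen_prefixes=("encoder",)):
    """Freeze parameters whose names start with any given prefix and
    return the remaining (trainable) parameters for the optimiser."""
    for name, param in model.named_parameters():
        param.requires_grad = not name.startswith(frozen_prefixes)
    return [p for p in model.parameters() if p.requires_grad]

# usage, assuming `model` is a pretrained segmentation network:
# optimiser = torch.optim.Adam(prepare_for_transfer(model), lr=1e-4)
```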


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Plant Roots/anatomy & histology , Plant Roots/growth & development
8.
Plant Physiol ; 181(1): 28-42, 2019 09.
Article in English | MEDLINE | ID: mdl-31331997

ABSTRACT

Understanding the relationships between local environmental conditions and plant structure and function is critical both for fundamental science and for improving the performance of crops in field settings. Wind-induced plant motion is important in most agricultural systems, yet the complexity of the field environment means that it has remained understudied. Despite the ready availability of image sequences showing plant motion, the cultivation of crop plants in dense field stands makes it difficult to detect features and characterize their general movement traits. Here, we present a robust method for characterizing motion in field-grown wheat plants (Triticum aestivum) from time-ordered sequences of red, green, and blue images. A series of crops and augmentations was applied to a dataset of 290 collected and annotated images of ear tips to increase variation and resolution when training a convolutional neural network. This approach enables wheat ears to be detected in the field without the need for camera calibration or a fixed imaging position. Videos of wheat plants moving in the wind were also collected and split into their component frames. Ear tips were detected using the trained network, then tracked between frames using a probabilistic tracking algorithm to approximate movement. These data can be used to characterize key movement traits, such as periodicity, and obtain more detailed static plant properties to assess plant structure and function in the field. Automated data extraction may be possible for informing lodging models, breeding programs, and linking movement properties to canopy light distributions and dynamic light fluctuation.
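
Once an ear tip has been tracked across frames, a periodicity estimate can be obtained from the spectrum of its displacement; a minimal sketch, assuming a fixed frame rate (the paper's own trait definitions may differ).

```python
import numpy as np


def dominant_frequency(x, fps):
    """Estimate the dominant oscillation frequency (Hz) of a tracked
    ear-tip coordinate series x sampled at `fps` frames per second."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                         # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[1:][spectrum[1:].argmax()]  # skip the DC bin
```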


Subject(s)
Deep Learning , Triticum/physiology , Agriculture , Algorithms , Breeding , Crops, Agricultural , Environment , Motion , Phenotype , Wind
10.
Plant Cell Environ ; 41(1): 121-133, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28503782

ABSTRACT

Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system.
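
For reference, the standard mixed form of Richards' equation that such image-based models solve; the specific hydraulic parameterisation and boundary conditions come from the cited pore-scale studies and are not reproduced here.

```latex
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \left[ K(\theta)\, \nabla \big( \psi(\theta) + z \big) \right]
```

where θ is the volumetric water content, K(θ) the unsaturated hydraulic conductivity, ψ the matric potential and z the elevation head.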


Subject(s)
Imaging, Three-Dimensional , Models, Biological , Plant Roots/metabolism , Soil/chemistry , Tomography, X-Ray Computed , Water/metabolism , Plant Roots/anatomy & histology , Porosity
11.
Gigascience ; 6(10): 1-10, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29020747

ABSTRACT

In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets.


Subject(s)
Machine Learning , Plant Roots/classification , Plant Shoots/classification , Phenotype , Plant Roots/genetics , Plant Shoots/genetics , Plants , Quantitative Trait Loci , Triticum/classification , Triticum/genetics
12.
Nat Plants ; 3: 17057, 2017 May 08.
Article in English | MEDLINE | ID: mdl-28481327

ABSTRACT

Plants can acclimate by using tropisms to link the direction of growth to environmental conditions. Hydrotropism allows roots to forage for water, a process known to depend on abscisic acid (ABA) but whose molecular and cellular basis remains unclear. Here we show that hydrotropism still occurs in roots after laser ablation removed the meristem and root cap. Additionally, targeted expression studies reveal that hydrotropism depends on the ABA signalling kinase SnRK2.2 and the hydrotropism-specific MIZ1, both acting specifically in elongation zone cortical cells. Conversely, hydrotropism, but not gravitropism, is inhibited by preventing differential cell-length increases in the cortex, but not in other cell types. We conclude that root tropic responses to gravity and water are driven by distinct tissue-based mechanisms. In addition, unlike its role in root gravitropism, the elongation zone performs a dual function during a hydrotropic response, both sensing a water potential gradient and subsequently undergoing differential growth.


Subject(s)
Plant Roots/growth & development , Tropism , Abscisic Acid/metabolism , Arabidopsis/cytology , Arabidopsis/growth & development , Arabidopsis/metabolism , Arabidopsis Proteins/metabolism , Plant Roots/cytology , Signal Transduction
13.
Front Plant Sci ; 7: 1392, 2016.
Article in English | MEDLINE | ID: mdl-27708654

ABSTRACT

Physical perturbation of a plant canopy brought about by wind is a ubiquitous phenomenon and yet its biological importance has often been overlooked. This is partly due to the complexity of the issue at hand: wind-induced movement (or mechanical excitation) is a stochastic process which is difficult to measure and quantify; plant motion is dependent upon canopy architectural features which, until recently, were difficult to accurately represent and model in 3-dimensions; light patterning throughout a canopy is difficult to compute at high-resolutions, especially when confounded by other environmental variables. Recent studies have reinforced the expectation that canopy architecture is a strong determinant of productivity and yield; however, links between the architectural properties of the plant and its mechanical properties, particularly its response to wind, are relatively unknown. As a result, biologically relevant data relating canopy architecture, light dynamics, and short-scale photosynthetic responses in the canopy setting are scarce. Here, we hypothesize that wind-induced movement will have large consequences for the photosynthetic productivity of our crops due to its influence on light patterning. To address this issue, in this study we combined high resolution 3D reconstructions of a plant canopy with a simple representation of canopy perturbation as a result of wind using solid body rotation in order to explore the potential effects on light patterning, interception, and photosynthetic productivity. We looked at two different scenarios: firstly a constant distortion where a rice canopy was subject to a permanent distortion throughout the whole day; and secondly, a dynamic distortion, where the canopy was distorted in incremental steps between two extremes at set time points in the day. We find that mechanical canopy excitation substantially alters light dynamics, light distribution, and modeled canopy carbon gain. We then discuss methods required for accurate modeling of mechanical canopy excitation (here coined the 4-dimensional plant) and some associated biological and applied implications of such techniques. We hypothesize that biomechanical plant properties are a specific adaptation to achieve wind-induced photosynthetic enhancement and we outline how traits facilitating canopy excitation could be used as a route for improving crop yield.
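
Solid-body rotation of a reconstructed canopy reduces to applying a single rotation matrix to every point; a minimal sketch, assuming the canopy is available as an (N, 3) point array (axis choice and angles are illustrative).

```python
import numpy as np


def rotate_canopy(points, angle_deg, axis="x"):
    """Solid-body rotation of an (N, 3) canopy point cloud about one
    horizontal axis, as a crude stand-in for wind-induced distortion."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    if axis == "x":
        R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    else:  # rotate about the y axis instead
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return points @ R.T
```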

14.
Plant J ; 84(5): 1034-43, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26461469

ABSTRACT

Root system interactions and competition for resources are active areas of research that contribute to our understanding of how roots perceive and react to environmental conditions. Recent research has shown this complex suite of processes can now be observed in a natural environment (i.e. soil) through the use of X-ray microcomputed tomography (µCT), which allows non-destructive analysis of plant root systems. Due to their similar X-ray attenuation coefficients and densities, the roots of different plants appear as similar greyscale intensity values in µCT image data. Unless they are manually and carefully traced, it has not previously been possible to automatically label and separate different root systems grown in the same soil environment. We present a technique, based on a visual tracking approach, which exploits knowledge of the shape of root cross-sections to automatically recover from X-ray µCT data three-dimensional descriptions of multiple, interacting root architectures growing in soil. The method was evaluated on both simulated root data and real images of two interacting winter wheat Cordiale (Triticum aestivum L.) plants grown in a single soil column, demonstrating that it is possible to automatically segment different root systems from within the same soil sample. This work supports the automatic exploration of supportive and competitive foraging behaviour of plant root systems in natural soil environments.


Subject(s)
Plant Roots/anatomy & histology , X-Ray Microtomography/methods , Computer Simulation , Image Processing, Computer-Assisted , Plant Roots/growth & development , Software , Triticum/anatomy & histology , Triticum/growth & development
15.
Plant Physiol ; 169(2): 1192-204, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26282240

ABSTRACT

Photoinhibition reduces photosynthetic productivity; however, it is difficult to quantify accurately in complex canopies partly because of a lack of high-resolution structural data on plant canopy architecture, which determines complex fluctuations of light in space and time. Here, we evaluate the effects of photoinhibition on long-term carbon gain (over 1 d) in three different wheat (Triticum aestivum) lines, which are architecturally diverse. We use a unique method for accurate digital three-dimensional reconstruction of canopies growing in the field. The reconstruction method captures unique architectural differences between lines, such as leaf angle, curvature, and leaf density, thus providing a sensitive method of evaluating the productivity of actual canopy structures that previously were difficult or impossible to obtain. We show that complex data on light distribution can be automatically obtained without conventional manual measurements. We use a mathematical model of photosynthesis parameterized by field data consisting of chlorophyll fluorescence, light response curves of carbon dioxide assimilation, and manual confirmation of canopy architecture and light attenuation. Model simulations show that photoinhibition alone can result in substantial reduction in carbon gain, but this is highly dependent on exact canopy architecture and the diurnal dynamics of photoinhibition. The use of such highly realistic canopy reconstructions also allows us to conclude that even a moderate change in leaf angle in upper layers of the wheat canopy led to a large increase in the number of leaves in a severely light-limited state.


Subject(s)
Carbon/metabolism , Imaging, Three-Dimensional/methods , Models, Biological , Triticum/physiology , Fluorescence , Light , Photosynthesis , Plant Leaves/metabolism , Plant Leaves/physiology
16.
Plant Physiol ; 167(3): 617-27, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25614065

ABSTRACT

The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows.
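
Because RSML files are plain XML, extracting root geometries needs only a standard parser; a minimal sketch, assuming the plant/root/geometry/polyline/point element layout and x/y point attributes used in published RSML examples.

```python
import xml.etree.ElementTree as ET


def root_polylines(path):
    """Collect per-root polylines (lists of (x, y) tuples) from an RSML file,
    assuming unnamespaced 'polyline' elements containing 'point' elements."""
    tree = ET.parse(path)
    polylines = []
    for poly in tree.getroot().iter("polyline"):
        pts = [(float(p.get("x")), float(p.get("y")))
               for p in poly.iter("point")]
        if pts:
            polylines.append(pts)
    return polylines
```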


Subject(s)
Plant Roots/anatomy & histology , Programming Languages , Software , Imaging, Three-Dimensional , Models, Biological , Plant Roots/growth & development , Plant Roots/physiology , Workflow
17.
Funct Plant Biol ; 42(5): 460-470, 2015 May.
Article in English | MEDLINE | ID: mdl-32480692

ABSTRACT

X-ray microcomputed tomography (µCT) allows nondestructive visualisation of plant root systems within their soil environment and thus offers an alternative to the commonly used destructive methodologies for the examination of plant roots and their interaction with the surrounding soil. Various methods for the recovery of root system information from X-ray computed tomography (CT) image data have been presented in the literature. Detailed, ideally quantitative, evaluation is essential, in order to determine the accuracy and limitations of the proposed methods, and to allow potential users to make informed choices among them. This, however, is a complicated task. Three-dimensional ground truth data are expensive to produce and the complexity of X-ray CT data means that manually generated ground truth may not be definitive. Similarly, artificially generated data are not entirely representative of real samples. The aims of this work are to raise awareness of the evaluation problem and to propose experimental approaches that allow the performance of root extraction methods to be assessed, ultimately improving the techniques available. To illustrate the issues, tests are conducted using both artificially generated images and real data samples.
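
When three-dimensional ground truth is available, a simple overlap metric is one way to score an extraction method; below is a sketch using the Dice coefficient, which is one common choice rather than necessarily the measure proposed in the paper.

```python
import numpy as np


def dice_score(pred, truth):
    """Dice coefficient between two binary root/background volumes."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```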

18.
Plant Physiol ; 166(4): 1688-98, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25332504

ABSTRACT

Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported into the majority of 3D graphics and software packages.


Subject(s)
Imaging, Three-Dimensional/methods , Oryza/cytology , Triticum/cytology , Algorithms , Models, Theoretical , Oryza/growth & development , Plant Leaves/cytology , Plant Leaves/growth & development , Plant Shoots/cytology , Plant Shoots/growth & development , Software , Triticum/growth & development
19.
New Phytol ; 202(4): 1212-1222, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24641449

ABSTRACT

Root elongation and bending require the coordinated expansion of multiple cells of different types. These processes are regulated by the action of hormones that can target distinct cell layers. We use a mathematical model to characterise the influence of the biomechanical properties of individual cell walls on the properties of the whole tissue. Taking a simple constitutive model at the cell scale which characterises cell walls via yield and extensibility parameters, we derive the analogous tissue-level model to describe elongation and bending. To accurately parameterise the model, we take detailed measurements of cell turgor, cell geometries and wall thicknesses. The model demonstrates how cell properties and shapes contribute to tissue-level extensibility and yield. Exploiting the highly organised structure of the elongation zone (EZ) of the Arabidopsis root, we quantify the contributions of different cell layers, using the measured parameters. We show how distributions of material and geometric properties across the root cross-section contribute to the generation of curvature, and relate the angle of a gravitropic bend to the magnitude and duration of asymmetric wall softening. We quantify the geometric factors which lead to the predominant contribution of the outer cell files in driving root elongation and bending.
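
The yield/extensibility constitutive law referred to here is of the classical Lockhart form; for reference (the paper's exact cell- and tissue-scale formulation is not reproduced in the abstract):

```latex
\frac{1}{L}\frac{\mathrm{d}L}{\mathrm{d}t} \;=\;
\begin{cases}
  \phi\,(P - Y), & P > Y,\\[4pt]
  0, & P \le Y,
\end{cases}
```

where φ is the wall extensibility, Y the yield threshold and P the turgor pressure.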


Subject(s)
Arabidopsis/physiology , Gravitropism , Plant Roots/physiology , Arabidopsis/cytology , Arabidopsis/growth & development , Cell Wall/metabolism , Mechanical Phenomena , Microscopy, Electron, Transmission , Models, Theoretical , Organ Specificity , Plant Roots/cytology , Plant Roots/growth & development
20.
Plant Cell ; 26(3): 862-75, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24632533

ABSTRACT

Auxin is a key regulator of plant growth and development. Within the root tip, auxin distribution plays a crucial role specifying developmental zones and coordinating tropic responses. Determining how the organ-scale auxin pattern is regulated at the cellular scale is essential to understanding how these processes are controlled. In this study, we developed an auxin transport model based on actual root cell geometries and carrier subcellular localizations. We tested model predictions using the DII-VENUS auxin sensor in conjunction with state-of-the-art segmentation tools. Our study revealed that auxin efflux carriers alone cannot create the pattern of auxin distribution at the root tip and that AUX1/LAX influx carriers are also required. We observed that AUX1 in lateral root cap (LRC) and elongating epidermal cells greatly enhance auxin's shootward flux, with this flux being predominantly through the LRC, entering the epidermal cells only as they enter the elongation zone. We conclude that the nonpolar AUX1/LAX influx carriers control which tissues have high auxin levels, whereas the polar PIN carriers control the direction of auxin transport within these tissues.


Subject(s)
Arabidopsis/metabolism , Indoleacetic Acids/metabolism , Plant Roots/metabolism , Biological Transport , Subcellular Fractions/metabolism