Results 1 - 4 of 4
1.
Sci Rep; 13(1): 1399, 2023 Jan 25.
Article in English | MEDLINE | ID: mdl-36697423

ABSTRACT

Plant roots influence many ecological and biogeochemical processes, such as carbon, water and nutrient cycling. Because roots are difficult to access, however, knowledge of plant root growth dynamics under field conditions is fragmentary at best. Minirhizotrons, i.e., transparent tubes placed in the substrate into which specialized cameras or circular scanners are inserted, facilitate the capture of high-resolution images of root dynamics at the soil-tube interface with little to no disturbance after the initial installation. Their use, especially in field studies with multiple species and heterogeneous substrates, is nevertheless limited by the amount of work required to manually trace roots in the images. Furthermore, the reproducibility and objectivity of manual root detection are questionable. Here, we use a Convolutional Neural Network (CNN) for the automatic detection of roots in minirhizotron images and compare the performance of our RootDetector with that of human analysts with different levels of expertise. Our minirhizotron data come from various wetlands on organic soils, i.e., highly heterogeneous substrates consisting of dead plant material, often mainly roots, in various degrees of decomposition. This may be seen as one of the most challenging soil types for root segmentation in minirhizotron images. RootDetector showed a high capability to correctly segment root pixels in minirhizotron images from field observations (F1 = 0.6044; r² = 0.99 compared to a human expert). Reproducibility among humans, however, depended strongly on expertise level, with novices showing drastic variation among individual analysts and annotating on average more than 13 times more root length per cm² of image than expert analysts. CNNs such as RootDetector provide a reliable and efficient method for the detection of roots and root length in minirhizotron images even under challenging field conditions. Analyses with RootDetector thus save resources, are reproducible and objective, and are as accurate as manual analyses performed by human experts.


Subject(s)
Neural Networks, Computer; Plant Roots; Humans; Reproducibility of Results; Carbon; Soil
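The evaluation reported in this abstract compares CNN-segmented root pixels against human annotations via a pixel-wise F1 score. The sketch below illustrates that metric on two toy binary masks; it is not RootDetector's code, and the function name and example arrays are assumptions made purely for illustration.

```python
# Minimal sketch of a pixel-wise F1 evaluation for binary root masks
# (1 = root pixel, 0 = background). Illustrative only, not RootDetector's API.
import numpy as np

def segmentation_f1(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Pixel-wise F1 score between a predicted and an annotated root mask."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()    # root pixels found by the model
    fp = np.logical_and(pred, ~true).sum()   # background labelled as root
    fn = np.logical_and(~pred, true).sum()   # root pixels that were missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical 3x3 masks, purely for demonstration.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
true = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(f"F1 = {segmentation_f1(pred, true):.3f}")  # 0.667 for these toy masks
```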
2.
Front Plant Sci; 12: 767400, 2021.
Article in English | MEDLINE | ID: mdl-34804101

ABSTRACT

Recent developments in artificial intelligence have the potential to facilitate new research methods in ecology. Deep Convolutional Neural Networks (DCNNs) in particular have been shown to outperform other approaches in automatic image analyses. Here we apply a DCNN to facilitate quantitative wood anatomical (QWA) analyses, where the main challenges reside in the detection of a high number of cells, in the intrinsic variability of wood anatomical features, and in sample quality. To properly classify and interpret features within the images, DCNNs need to undergo a training stage. We performed the training with images from transversal wood anatomical sections, together with manually created optimal outputs of the target cell areas. The target species included examples of the most common wood anatomical structures: four conifer species; a diffuse-porous species, black alder (Alnus glutinosa L.); a diffuse- to semi-diffuse-porous species, European beech (Fagus sylvatica L.); and a ring-porous species, sessile oak (Quercus petraea Liebl.). The DCNN was created in Python with PyTorch and relies on a Mask-RCNN architecture. The developed algorithm detects and segments cells, and provides information on the measurement accuracy. To evaluate the performance of this tool we compared our Mask-RCNN outputs with U-Net, a model architecture employed in a similar study, and with ROXAS, a program based on traditional image analysis techniques. First, we evaluated how many target cells were correctly recognized. Next, we assessed the cell measurement accuracy by evaluating the number of pixels that were correctly assigned to each target cell. Overall, the "learning process" defining artificial intelligence plays a key role in overcoming the issues that are usually solved manually in QWA analyses. Mask-RCNN is the model that best detects the features characterizing a target cell when these issues occur. In general, U-Net did not attain the other algorithms' performance, while ROXAS performed best for conifers, and Mask-RCNN showed the highest accuracy in detecting target cells and segmenting lumen areas of angiosperms. Our research demonstrates that future software tools for QWA analyses would greatly benefit from using DCNNs, saving time during the analysis phase and providing a flexible approach that allows model retraining.
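The abstract states only that the model was implemented in Python with PyTorch and relies on a Mask-RCNN architecture. Below is a minimal sketch of such a setup, assuming the off-the-shelf torchvision Mask R-CNN with its box and mask heads replaced for a two-class problem (background vs. cell lumen); the function name, class count, and input size are illustrative assumptions, not the authors' implementation.

```python
# Sketch: a torchvision Mask R-CNN adapted to segment cell lumina in wood
# anatomical sections. Assumes torchvision >= 0.13; not the paper's own code.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_cell_maskrcnn(num_classes: int = 2):  # background + cell lumen
    # COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box classification/regression head for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Swap the mask head so it predicts one mask per class as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_cell_maskrcnn()
model.eval()
with torch.no_grad():
    image = [torch.rand(3, 512, 512)]   # one RGB section image, values in [0, 1]
    predictions = model(image)          # per-instance boxes, labels, scores, masks
```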

3.
Mar Pollut Bull; 149: 110530, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31454615

ABSTRACT

Machine learning algorithms can be trained on complex data sets to detect, predict, or model specific aspects. The aim of this study was to train an artificial neural network, in comparison to a Random Forest model, to detect induced changes in microbial communities in order to support environmental monitoring of contamination events. Models were trained on taxon count tables obtained via next-generation amplicon sequencing of water-column samples originating from a lab microcosm incubation experiment conducted over 140 days to determine the effects of glyphosate on succession within brackish-water microbial communities. Glyphosate-treated assemblages were classified correctly; a subsetting approach identified the taxa primarily responsible for this, permitting a reduction of input features. This study demonstrates the potential of artificial neural networks to predict indicator species for glyphosate contamination. The results could support the development of environmental monitoring strategies whose applications are limited neither to glyphosate nor to amplicon sequence data.


Subject(s)
Glycine/analogs & derivatives; Microbiota/drug effects; Microbiota/genetics; Neural Networks, Computer; RNA, Ribosomal, 16S/genetics; Water Pollutants, Chemical/toxicity; Algorithms; Environmental Monitoring; Glycine/toxicity; High-Throughput Nucleotide Sequencing; Machine Learning; Random Allocation; Water Microbiology; Glyphosate
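As a rough illustration of the workflow described in this abstract, the sketch below fits a Random Forest to a synthetic taxon count table and ranks taxa by feature importance as a stand-in for the paper's subsetting approach. The data are fabricated for demonstration, scikit-learn is an assumed tool, and the study's artificial neural network comparison is not reproduced here.

```python
# Sketch: Random Forest classification of glyphosate-treated vs. untreated
# samples from a taxon count table, with feature importances used to shortlist
# candidate indicator taxa. Synthetic data; not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_taxa = 60, 200
X = rng.poisson(lam=5, size=(n_samples, n_taxa)).astype(float)  # read counts per taxon
y = rng.integers(0, 2, size=n_samples)                          # 1 = glyphosate-treated
X[y == 1, :5] += 10   # let the first five "taxa" respond to the treatment

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Rank taxa by importance to shortlist candidate indicator taxa.
top_taxa = np.argsort(clf.feature_importances_)[::-1][:5]
print("top candidate indicator taxa (column indices):", top_taxa)
```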
4.
IEEE Comput Graph Appl; 36(2): 10-5, 2016.
Article in English | MEDLINE | ID: mdl-26960024

ABSTRACT

Visual computing technologies have traditionally been developed for conventional setups in which air is the surrounding medium for the user, the display, and/or the camera. However, given mankind's increasing need to rely on the oceans to solve the problems of future generations (such as offshore oil and gas, renewable energies, and marine mineral resources), there is a growing need for mixed-reality applications that can be used in water. This article highlights the research challenges that arise when changing the medium from air to water, introduces the concept of underwater mixed environments, and presents recent developments in underwater visual computing applications.
