Results 1 - 6 of 6
1.
J Imaging; 7(10), 2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34677284

ABSTRACT

This paper proposes a novel approach to semi-supervised domain adaptation for holistic regression tasks, where a DNN predicts a continuous value y ∈ ℝ given an input image x. The current literature largely lacks domain adaptation approaches specific to this task, as most existing methods focus on classification. In the context of holistic regression, most real-world datasets exhibit not only a covariate (or domain) shift but also a label gap: the target dataset may contain labels not included in the source dataset (and vice versa). We propose an approach that tackles both covariate shift and label gap in a unified training framework. Specifically, a Generative Adversarial Network (GAN) is used to reduce covariate shift, and the label gap is mitigated via label normalisation. To avoid overfitting, we propose a stopping criterion that simultaneously takes advantage of the Maximum Mean Discrepancy and the GAN global optimality condition. To restore the original label range after normalisation, a handful of annotated images from the target domain are used. Experimental results on three different datasets demonstrate that our approach substantially outperforms the state of the art across the board. Specifically, for the cell counting problem, the mean squared error (MSE) is reduced from 759 to 5.62; on the pedestrian dataset, our approach lowers the MSE from 131 to 1.47. For the last experimental setup, we borrowed a task from plant biology, counting the number of leaves in a plant, and ran two series of experiments, showing that the MSE is reduced from 2.36 to 0.88 (intra-species) and from 1.48 to 0.6 (inter-species).
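The abstract does not give implementation details for the stopping criterion, but the two ingredients it names are standard. A minimal sketch of an RBF-kernel MMD estimate and the label normalisation step (the function names, the kernel bandwidth, and the min-max normalisation scheme are assumptions of this sketch, not the authors' published code):

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel.

    x, y: (n, d) and (m, d) feature batches from source and target domains.
    The bandwidth sigma is an assumption; it is typically tuned or set by
    the median heuristic.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def normalise_labels(y):
    """Min-max normalise labels into [0, 1]; keep (min, max) so the original
    range can later be restored from a handful of annotated target images."""
    lo, hi = y.min(), y.max()
    return (y - lo) / (hi - lo), (lo, hi)
```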

2.
Plant J; 103(6): 2330-2343, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32530068

ABSTRACT

The phenotypic analysis of root system growth is important to inform efforts to enhance plant resource acquisition from soils; however, root phenotyping remains challenging because of the opacity of soil, requiring systems that facilitate root system visibility and image acquisition. Previously reported systems require costly or bespoke materials not available in most countries, where breeders need tools to select varieties best adapted to local soils and field conditions. Here, we report an affordable soil-based growth (rhizobox) and imaging system to phenotype root development in glasshouses or shelters. All components of the system are made from locally available commodity components, facilitating the adoption of this affordable technology in low-income countries. The rhizobox is large enough (approximately 6000 cm² of visible soil) to avoid restricting vertical root system growth for most, if not all, of the life cycle, yet light enough (approximately 21 kg when filled with soil) for routine handling. Support structures and an imaging station, with five cameras covering the whole soil surface, complement the rhizoboxes. Images are acquired via the Phenotiki sensor interface, collected, stitched and analysed. Root system architecture (RSA) parameters are quantified without intervention. The RSAs of a dicot species (Cicer arietinum, chickpea) and a monocot species (Hordeum vulgare, barley), exhibiting contrasting root systems, were analysed. Insights into root system dynamics during vegetative and reproductive stages of the chickpea life cycle were obtained. This affordable system is relevant for efforts in Ethiopia and other low- and middle-income countries to enhance crop yields and climate resilience sustainably.
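The actual acquisition pipeline runs through the Phenotiki sensor interface, but as a rough illustration of the "collected, stitched and analysed" step, a minimal stitching sketch using OpenCV (the file names and the choice of SCANS stitcher mode are assumptions of this sketch):

```python
import cv2

# Hypothetical file names for the five camera views of one rhizobox face.
paths = [f"rhizobox_cam{i}.jpg" for i in range(5)]
images = [cv2.imread(p) for p in paths]

# OpenCV's high-level stitcher; SCANS mode suits flat, planar scenes
# such as a rhizobox face better than the default panorama mode.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("rhizobox_mosaic.jpg", mosaic)
else:
    raise RuntimeError(f"Stitching failed with status {status}")
```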


Subject(s)
Plant Roots/anatomy & histology, Aging, Cicer/anatomy & histology, Cicer/genetics, Genotype, Hordeum/anatomy & histology, Hordeum/genetics, Phenotype, Soil
3.
Front Plant Sci; 11: 141, 2020.
Article in English | MEDLINE | ID: mdl-32256503

ABSTRACT

Image-based plant phenotyping has grown steadily, sharply increasing the need for efficient image analysis techniques capable of evaluating multiple plant traits. Deep learning has shown its potential in a multitude of visual tasks in plant phenotyping, such as segmentation and counting. Here, we show how different phenotyping traits can be extracted simultaneously from plant images using multitask learning (MTL). MTL leverages information contained in the training images of related tasks to improve overall generalization and learn models from fewer labels. We present a multitask deep learning framework for plant phenotyping, able to infer three traits simultaneously: (i) leaf count, (ii) projected leaf area (PLA), and (iii) genotype classification. We adopted a modified pretrained ResNet50 as a feature extractor, trained end-to-end to predict multiple traits. We also leverage MTL to show that, by learning from more easily obtainable annotations such as PLA and genotype, we can better predict leaf count, an annotation that is harder to obtain. We evaluate our findings on several publicly available datasets of top-view images of Arabidopsis thaliana. Experimental results show that the proposed MTL method improves the leaf count mean squared error (MSE) by more than 40% compared to a single-task network on the same dataset. We also show that our MTL framework can be trained with up to 75% fewer leaf count annotations without significantly impacting performance, whereas a single-task model declines steadily as fewer annotations are available. Code is available at https://github.com/andobrescu/Multi_task_plant_phenotyping.
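A minimal PyTorch sketch of the described setup, a shared ResNet50 feature extractor feeding three task-specific heads (the head sizes, loss weights, and number of genotype classes are assumptions of this sketch; the authors' actual code is at the linked repository):

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskPhenotyper(nn.Module):
    """ResNet50 backbone with three task heads, mirroring the multitask
    setup in the abstract. Head dimensions here are assumptions."""

    def __init__(self, num_genotypes=5):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()                 # keep the 2048-d pooled features
        self.backbone = backbone
        self.leaf_count = nn.Linear(2048, 1)        # regression
        self.leaf_area = nn.Linear(2048, 1)         # regression (PLA)
        self.genotype = nn.Linear(2048, num_genotypes)  # classification

    def forward(self, x):
        f = self.backbone(x)
        return self.leaf_count(f), self.leaf_area(f), self.genotype(f)

def multitask_loss(preds, targets, weights=(1.0, 1.0, 1.0)):
    """Hypothetical joint loss: MSE for the two regressions, cross-entropy
    for the genotype head; the weighting is an assumption."""
    count, area, geno = preds
    t_count, t_area, t_geno = targets
    return (weights[0] * nn.functional.mse_loss(count.squeeze(1), t_count)
            + weights[1] * nn.functional.mse_loss(area.squeeze(1), t_area)
            + weights[2] * nn.functional.cross_entropy(geno, t_geno))
```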

4.
IEEE Trans Image Process; 29(1): 2166-2175, 2020.
Article in English | MEDLINE | ID: mdl-31634130

ABSTRACT

Finding suitable image representations for the task at hand is critical in computer vision. Different approaches extending the original Restricted Boltzmann Machine (RBM) model have recently been proposed to offer rotation-invariant feature learning. In this paper, we present a novel extended RBM that learns rotation-invariant features by explicitly factoring out the rotation nuisance in 2D image inputs within an unsupervised framework. While the goal is to learn invariant features, our model infers an orientation per input image during training, using information related to the reconstruction error. The training process is regularised by a Kullback-Leibler divergence, offering stability and consistency. We used the γ-score, a measure that quantifies the amount of invariance, to demonstrate mathematically and experimentally that our approach indeed learns rotation-invariant features. We show that our method outperforms the current state-of-the-art RBM approaches for rotation-invariant feature learning on three different benchmark datasets, measuring performance as the test accuracy of an SVM classifier. Our implementation is available at https://bitbucket.org/tuttoweb/rotinvrbm.
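The abstract describes inferring an orientation per input from reconstruction-error information. A minimal sketch of that idea, using scikit-learn's BernoulliRBM as a stand-in model (the angle grid and the use of a single Gibbs step as the "reconstruction" are assumptions of this sketch; the authors' actual model is at the linked repository):

```python
import numpy as np
from scipy.ndimage import rotate
from sklearn.neural_network import BernoulliRBM

def infer_orientation(rbm, image, angles=range(0, 360, 15)):
    """Pick the rotation of `image` whose reconstruction error under the
    RBM is lowest, as a proxy for inferring the input's orientation."""
    best_angle, best_err = 0, np.inf
    for a in angles:
        v = rotate(image, a, reshape=False).reshape(1, -1)
        v = np.clip(v, 0.0, 1.0)      # BernoulliRBM expects values in [0, 1]
        recon = rbm.gibbs(v)          # one Gibbs step as a stochastic reconstruction
        err = np.mean((v - recon) ** 2)
        if err < best_err:
            best_angle, best_err = a, err
    return best_angle
```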

5.
Plant J; 96(4): 880-890, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30101442

ABSTRACT

Direct observation of morphological plant traits is tedious and a bottleneck for high-throughput phenotyping. Hence, interest in image-based analysis is increasing, with a corresponding need for software that can reliably extract plant traits, such as leaf count, preferably across a variety of species and growth conditions. However, current leaf counting methods do not work across species or conditions and therefore may lack broad utility. In this paper, we present Pheno-Deep Counter, a single deep network that can predict leaf count in two-dimensional (2D) images of different plant species with a rosette-shaped appearance. We demonstrate that our architecture can count leaves from multi-modal 2D images, such as visible light, fluorescence, and near-infrared. Our network design is flexible, allowing inputs to be added or removed to accommodate new modalities. Furthermore, our architecture can be used as-is, without dataset-specific customization of the network's internal structure, opening its use to new scenarios. Pheno-Deep Counter produces accurate predictions in many plant species and, once trained, can count leaves in a few seconds. Through this universal, open-source approach to deep counting, we aim to broaden the use of machine learning for leaf counting. Our implementation can be downloaded at https://bitbucket.org/tuttoweb/pheno-deep-counter.
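A minimal PyTorch sketch of the multi-modal, flexible-input idea (the encoder widths, fusion by averaging, and the modality names are assumptions of this sketch; the published network is at the linked repository):

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small CNN encoder; one instance per imaging modality."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class MultiModalCounter(nn.Module):
    """Leaf-count regressor over a dict of modalities; encoders can be
    added or dropped without touching the rest of the network, echoing
    the flexible input design described in the abstract."""
    def __init__(self, modality_channels):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: ModalityEncoder(c) for m, c in modality_channels.items()})
        self.head = nn.Linear(64, 1)

    def forward(self, inputs):
        # Average per-modality features so any subset of inputs works.
        feats = [self.encoders[m](x) for m, x in inputs.items()]
        return self.head(torch.stack(feats).mean(dim=0))

# model = MultiModalCounter({"rgb": 3, "fluo": 1, "nir": 1})
# count = model({"rgb": rgb_batch, "nir": nir_batch})   # any subset of modalities
```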


Subject(s)
Deep Learning, Phenotype, Plant Leaves/anatomy & histology, Image Processing, Computer-Assisted/methods, Machine Learning, Plants, Software
6.
IEEE Trans Med Imaging; 37(3): 803-814, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29053447

ABSTRACT

We propose a multi-input, multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. It is trained end-to-end and learns to embed all input modalities into a shared, modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that, by incorporating information from segmentation masks, the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS datasets and demonstrate statistically significant improvements over state-of-the-art methods for single-input tasks. The improvement increases further when multiple input modalities are used, demonstrating the benefit of learning a common latent space and again yielding a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is publicly available at https://github.com/agis85/multimodal_brain_synthesis.
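A minimal PyTorch sketch of the encode-fuse-decode pattern the abstract describes (layer sizes, modality names, and elementwise-max fusion as the combination operator are assumptions of this sketch; the released code is at the linked repository):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Per-modality encoder into the shared, modality-invariant latent space."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Synthesiser(nn.Module):
    """Encode each available input modality, fuse the latents into a single
    representation, and decode into the target modality. Fusing with an
    elementwise max keeps the model usable when inputs are missing, since
    any subset of encoders can contribute to the fused representation."""
    def __init__(self, modalities=("t1", "t2", "flair"), channels=16):
        super().__init__()
        self.encoders = nn.ModuleDict({m: Encoder(channels) for m in modalities})
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, inputs):
        latents = [self.encoders[m](x) for m, x in inputs.items()]
        fused = torch.stack(latents).max(dim=0).values   # single fused representation
        return self.decoder(fused)
```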


Subject(s)
Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Multimodal Imaging/methods, Algorithms, Brain/diagnostic imaging, Humans, Machine Learning, Neural Networks, Computer