Results 1 - 6 of 6
1.
Front Plant Sci ; 14: 1150956, 2023.
Article in English | MEDLINE | ID: mdl-37860262

ABSTRACT

Plant phenology plays a vital role in assessing climate change. To monitor it, individual plants are traditionally visited and observed by trained volunteers organized in national or international networks, in Germany for example by the German Weather Service (DWD). However, the number of observers is continuously decreasing. In this study, we explore the feasibility of using opportunistically captured plant observations, collected via the plant identification app Flora Incognita, to determine the onset of flowering and, based on that, to create interpolation maps comparable to those of the DWD. To this end, the opportunistic observations of 17 species collected in 2020 and 2021 were assigned to "Flora Incognita stations" based on location and altitude, mimicking the network of stations that forms the data basis for the interpolation conducted by the DWD. From the distribution of observations, the percentile representing the onset-of-flowering date was calculated using a parametric bootstrapping approach and then interpolated following the same process as applied by the DWD. Our results show that for frequently observed, herbaceous and conspicuous species, the patterns of onset of flowering were similar and comparable between both data sources. We argue that a prominent flowering stage is crucial for accurately determining the onset of flowering from opportunistic plant observations, and we discuss additional factors, such as species distribution, location bias and societal events, that contribute to the differences among species and phenology data. In conclusion, our study demonstrates that the phenological monitoring of certain species can benefit from incorporating opportunistic plant observations. Furthermore, we highlight the potential to expand the taxonomic range of monitored species for phenological stage assessment through opportunistic plant observation data.
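
To give a flavor of the percentile estimation mentioned in this abstract, the following is a minimal Python sketch of a parametric bootstrap over opportunistic day-of-year observations. The normal distribution, the chosen onset percentile and all observation values are assumptions made for illustration and are not taken from the paper.

    # Sketch: parametric bootstrap for an onset-of-flowering percentile
    # (distribution, percentile and data are illustrative assumptions).
    import numpy as np

    def onset_bootstrap(doy, percentile=5, n_boot=1000, seed=0):
        """Estimate onset of flowering as a low percentile of a fitted
        normal distribution, with a bootstrap confidence interval.

        doy        : day-of-year values of flowering observations
        percentile : percentile taken as 'onset' (assumed, not from the paper)
        """
        rng = np.random.default_rng(seed)
        doy = np.asarray(doy, dtype=float)
        mu, sigma = doy.mean(), doy.std(ddof=1)

        estimates = []
        for _ in range(n_boot):
            # draw a synthetic sample from the fitted (parametric) distribution
            sample = rng.normal(mu, sigma, size=doy.size)
            estimates.append(np.percentile(sample, percentile))
        estimates = np.asarray(estimates)
        return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

    # Example: observations clustered around mid-April (~DOY 105)
    onset, ci = onset_bootstrap([98, 101, 103, 105, 106, 108, 110, 115, 120])
    print(f"estimated onset DOY: {onset:.1f}, 95% CI: {ci}")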

2.
Front Plant Sci ; 12: 804140, 2021.
Article in English | MEDLINE | ID: mdl-35154194

ABSTRACT

Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized on images, there is a lack of studies evaluating specific approaches for uniform species groups that are considered difficult to identify because they lack obvious visual characteristics. Poaceae represent an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence, while the others show vegetative parts of the plant such as the collar region with the ligule, the adaxial and abaxial sides of the leaf, and the culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extract feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains and classify them using four state-of-the-art classification algorithms. We combine these perspectives via score-level fusion. In order to evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations either comprising inflorescences or not. We find that for a fusion of all six perspectives, using the best combination of feature extraction CNN and classifier, an accuracy of 96.1% can be achieved. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be automatically identified with high accuracy as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
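
The score-level fusion step described in this abstract can be sketched roughly as follows. The sketch assumes pre-extracted CNN features (replaced here by synthetic arrays), uses an SVM merely as a stand-in for the classifiers evaluated in the paper, and averages class probabilities across perspectives; sizes and names are illustrative only.

    # Sketch: score-level fusion of per-perspective classifiers
    # (synthetic features stand in for CNN feature vectors).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_species, n_obs, n_features, n_perspectives = 5, 40, 128, 6

    # Synthetic stand-ins for CNN features (one matrix per perspective).
    y = np.repeat(np.arange(n_species), n_obs // n_species)
    X_per_perspective = [rng.normal(loc=y[:, None] * 0.5, scale=1.0,
                                    size=(n_obs, n_features))
                         for _ in range(n_perspectives)]

    # Train one probabilistic classifier per perspective ...
    classifiers = [SVC(probability=True, random_state=0).fit(X, y)
                   for X in X_per_perspective]

    # ... and fuse at the score level by averaging class probabilities.
    def fused_predict(feature_sets):
        scores = np.mean([clf.predict_proba(X)
                          for clf, X in zip(classifiers, feature_sets)], axis=0)
        return scores.argmax(axis=1)

    pred = fused_predict(X_per_perspective)
    print("training accuracy after fusion:", (pred == y).mean())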

3.
New Phytol ; 229(1): 593-606, 2021 01.
Article in English | MEDLINE | ID: mdl-32803754

ABSTRACT

Pollen identification and quantification are crucial but challenging tasks, not only in addressing a variety of evolutionary and ecological questions (pollination, paleobotany) but also in other fields of research (e.g. allergology, honey analysis or forensics). Researchers are exploring alternative methods to automate these tasks but, for several reasons, manual microscopy is still the gold standard. In this study, we present a new method for pollen analysis using multispectral imaging flow cytometry in combination with deep learning. We demonstrate that our method allows fast measurement while delivering high-accuracy pollen identification. A dataset of 426,876 images depicting pollen from 35 plant species was used to train a convolutional neural network classifier. We found the best-performing classifier to yield a species-averaged accuracy of 96%. Even species that are difficult to differentiate using microscopy could be clearly separated. Our approach also allows a detailed determination of morphological pollen traits, such as size, symmetry or structure. Our phylogenetic analyses suggest phylogenetic conservatism in some of these traits. Given a comprehensive pollen reference database, we provide a powerful tool to be used in any pollen study with a need for rapid and accurate species identification, pollen grain quantification and trait extraction of recent pollen.


Subjects
Deep Learning; Flow Cytometry; Phylogeny; Pollen; Pollination
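
As a toy illustration of the convolutional classifier mentioned in this entry, the following PyTorch sketch classifies single-channel image crops into 35 classes. The architecture, input size and channel count are assumptions for demonstration and do not reproduce the network used in the study.

    # Sketch: small CNN classifier for pollen image crops
    # (architecture and 64x64 single-channel input are assumptions).
    import torch
    import torch.nn as nn

    class PollenCNN(nn.Module):
        def __init__(self, n_species=35):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_species)

        def forward(self, x):
            x = self.features(x)          # (N, 32, 16, 16) for 64x64 input
            return self.classifier(x.flatten(1))

    model = PollenCNN()
    dummy = torch.randn(8, 1, 64, 64)     # batch of 8 synthetic 64x64 crops
    logits = model(dummy)
    print(logits.shape)                   # torch.Size([8, 35])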
4.
BMC Bioinformatics ; 21(1): 576, 2020 Dec 14.
Article in English | MEDLINE | ID: mdl-33317442

ABSTRACT

BACKGROUND: Digital plant images are becoming increasingly important. First, given a large number of images, deep learning algorithms can be trained to identify plants automatically. Second, structured image-based observations provide information about plant morphological characteristics. Finally, in the course of digitization, digital plant collections are receiving more and more interest in schools and universities. RESULTS: We developed a freely available mobile application called Flora Capture that allows users to collect series of plant images from predefined perspectives. These images, together with accompanying metadata, are transferred to a central project server where each observation is reviewed and validated by a team of botanical experts. Currently, more than 4800 plant species naturally occurring in the Central European region are covered by the application. More than 200,000 images, depicting more than 1700 plant species, have been collected by thousands of users since the initial app release in 2016. CONCLUSION: Flora Capture allows experts, laymen and citizen scientists to build a digital herbarium and share structured multi-modal observations of plants. The collected images contribute, for example, to the training of plant identification algorithms, but they also suit educational purposes. Additionally, the presence records collected with each observation contribute to verifiable records of plant occurrences across the world.


Subjects
Plants/anatomy & histology; Software; Flowers/anatomy & histology; Image Processing, Computer-Assisted; Neural Networks, Computer
5.
BMC Bioinformatics ; 20(1): 4, 2019 Jan 03.
Article in English | MEDLINE | ID: mdl-30606100

ABSTRACT

BACKGROUND: Modern plant taxonomy reflects phylogenetic relationships among taxa based on proposed morphological and genetic similarities. However, taxonomic relatedness is not necessarily reflected by close overall resemblance, but rather by the commonality of very specific morphological characters or by similarity on the molecular level. It is an open research question to what extent phylogenetic relations within higher taxonomic levels such as genera and families are reflected by shared visual characters of the constituent species. As a consequence, it is even more questionable whether the taxonomy of plants at these levels can be identified from images using machine learning techniques. RESULTS: Whereas previous studies on automated plant identification from images focused on the species level, we investigated classification at higher taxonomic levels such as genera and families. We used images of 1000 plant species that are representative of the flora of Western Europe. We tested how accurately a visual representation of genera and families can be learned from images of their species in order to identify the taxonomy of species included in and excluded from learning. Using natural images with random content, roughly 500 images per species are required for accurate classification. The classification accuracy for 1000 species amounts to 82.2% and increases to 85.9% and 88.4% at the genus and family levels, respectively. When classifying species excluded from training, the accuracy drops significantly to 38.3% and 38.7% at the genus and family levels. Excluded species of well-represented genera and families can be classified with 67.8% and 52.8% accuracy. CONCLUSION: Our results show that shared visual characters are indeed present at higher taxonomic levels. They are most dominantly preserved in flowers and leaves, and they enable state-of-the-art classification algorithms to learn accurate visual representations of plant genera and families. Given a sufficient amount and composition of training data, we show that this allows for high classification accuracy that increases with the taxonomic level and even facilitates the taxonomic identification of species excluded from the training process.


Subjects
Phylogeny; Plants/classification
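
The following small sketch illustrates why accuracy can increase with the taxonomic level: species-level predictions are mapped to genera before scoring, so within-genus confusions no longer count as errors. The taxa, the species-to-genus mapping and the predictions are made up for demonstration and are unrelated to the study's data.

    # Sketch: evaluating species-level predictions at the genus level
    # (taxa, mapping and predictions are invented for illustration).
    import numpy as np

    species_to_genus = {0: "Acer", 1: "Acer", 2: "Quercus", 3: "Quercus", 4: "Betula"}

    def accuracy_at_genus(y_true_species, y_pred_species):
        to_genus = np.vectorize(species_to_genus.get)
        return np.mean(to_genus(y_true_species) == to_genus(y_pred_species))

    y_true = np.array([0, 1, 2, 3, 4, 0, 2])
    y_pred = np.array([1, 1, 3, 3, 4, 0, 4])   # some within-genus confusions

    print("species accuracy:", np.mean(y_true == y_pred))        # ~0.57
    print("genus accuracy:  ", accuracy_at_genus(y_true, y_pred))  # ~0.86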
6.
BMC Ecol ; 18(1): 51, 2018 12 03.
Article in English | MEDLINE | ID: mdl-30509239

ABSTRACT

BACKGROUND: Phytoplankton species identification and counting is a crucial step in water quality assessment. Drinking water reservoirs, bathing water and ballast water in particular need to be monitored regularly for harmful species. In times of multiple environmental threats such as eutrophication, climate warming and the introduction of invasive species, more intensive monitoring would be helpful for developing adequate measures. However, traditional methods such as microscopic counting by experts or high-throughput flow cytometry based on scattering and fluorescence signals are either too time-consuming or too inaccurate for species identification tasks. The combination of high-quality microscopy with high throughput and the latest developments in machine learning techniques can overcome this hurdle. RESULTS: In this study, image-based cytometry was used to collect ~47,000 brightfield and Chl a fluorescence images at 60× magnification for nine common freshwater species of nano- and micro-phytoplankton. A deep neural network trained on these images was applied to identify the species and the corresponding life cycle stage during batch cultivation. The results show the high potential of this approach: species identity and the respective life cycle stage could be predicted with a high accuracy of 97%. CONCLUSIONS: These findings could pave the way for reliable and fast determination of indicator phytoplankton species as a crucial step in water quality assessment.


Subjects
Deep Learning; Environmental Monitoring/methods; Flow Cytometry/methods; Life Cycle Stages; Phytoplankton/classification; High-Throughput Screening Assays/methods; Phytoplankton/growth & development
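
To illustrate how the two imaging modalities mentioned in this entry could feed a single network, the sketch below stacks brightfield and Chl a fluorescence crops as two input channels and uses a joint species/life-cycle-stage label. The architecture, image size and label scheme are assumptions for illustration only, not the network used in the study.

    # Sketch: two-channel CNN input (brightfield + Chl a fluorescence)
    # with a joint species/stage label (all sizes and the scheme are assumed).
    import torch
    import torch.nn as nn

    n_species, n_stages = 9, 3
    model = nn.Sequential(
        nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 32 * 32, n_species * n_stages),  # one class per (species, stage)
    )

    brightfield = torch.randn(4, 1, 64, 64)   # synthetic 64x64 brightfield crops
    chlorophyll = torch.randn(4, 1, 64, 64)   # synthetic Chl a fluorescence crops
    x = torch.cat([brightfield, chlorophyll], dim=1)   # (4, 2, 64, 64)

    logits = model(x)                          # (4, 27) joint class scores
    species = logits.argmax(1) // n_stages     # recover species index ...
    stage = logits.argmax(1) % n_stages        # ... and life-cycle stage
    print(species, stage)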