Results 1 - 4 of 4
1.
Sensors (Basel); 22(2), 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-35062457

ABSTRACT

With the availability of low-cost and efficient digital cameras, ecologists can now survey the world's biodiversity through image sensors, especially in the previously rather inaccessible marine realm. However, the data accumulate rapidly, and ecologists face a data-processing bottleneck. While computer vision has long been used as a tool to speed up image processing, it is only since the breakthrough of deep learning (DL) algorithms that the automatic assessment of biodiversity from video recordings has become a realistic prospect. However, current applications of DL models to biodiversity monitoring do not consider some universal rules of biodiversity, especially rules on the distribution of species abundance, species rarity and ecosystem openness. Yet, these rules raise three issues for deep learning applications: the imbalance of long-tailed datasets biases the training of DL models; scarce data greatly lessen the performance of DL models for classes with few examples; and the open-world setting means that objects absent from the training dataset are incorrectly classified in the application dataset. Promising solutions to these issues are discussed, including data augmentation, data generation, cross-entropy modification, few-shot learning and open set recognition. At a time when biodiversity faces the immense challenges of climate change and Anthropocene defaunation, stronger collaboration between computer scientists and ecologists is urgently needed to unlock the automatic monitoring of biodiversity.
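One of the remedies listed above, cross-entropy modification, can be illustrated with a minimal sketch: re-weighting the loss by inverse class frequency so that rare species in a long-tailed dataset contribute more to training. The class counts, batch and model outputs below are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of cross-entropy re-weighting for a long-tailed dataset.
# All numbers below are hypothetical placeholders.
import torch
import torch.nn as nn

# Hypothetical long-tailed class counts (e.g., images per species).
class_counts = torch.tensor([5000.0, 1200.0, 300.0, 45.0, 8.0])

# Inverse-frequency weights, normalised so their mean is 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# Dummy batch: logits for 4 images over 5 classes, and their true labels.
logits = torch.randn(4, 5)
labels = torch.tensor([0, 2, 3, 4])
loss = criterion(logits, labels)
print(f"weighted cross-entropy loss: {loss.item():.4f}")
```

The other remedies mentioned in the abstract (data augmentation, data generation, few-shot learning, open set recognition) intervene at other stages of the pipeline rather than in the loss itself.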


Subject(s)
Deep Learning, Ecosystem, Biodiversity, Climate Change, Video Recording
2.
Conserv Biol; 36(1): e13798, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34153121

ABSTRACT

Deep learning has become a key tool for the automated monitoring of animal populations with video surveys. However, obtaining large numbers of images to train such models is a major challenge for rare and elusive species because field video surveys provide few sightings. We designed a method that takes advantage of videos accumulated on social media for training deep-learning models to detect rare megafauna species in the field. We trained convolutional neural networks (CNNs) with social media images and tested them on images collected from field surveys. We applied our method to aerial video surveys of dugongs (Dugong dugon) in New Caledonia (southwestern Pacific). CNNs trained with 1303 social media images yielded 25% false positives and 38% false negatives when tested on independent field video surveys. Incorporating a small number of images from New Caledonia (equivalent to 12% of the social media images) in the training data set resulted in a nearly 50% decrease in false negatives. Our results highlight the extent to which images collected on social media can offer a solid basis for training deep-learning models for rare megafauna detection, and show that incorporating a few images from the study site further boosts detection accuracy. Our method provides a new generation of deep-learning models that can be used to rapidly and accurately process field video surveys for the monitoring of rare megafauna.
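As an illustrative sketch (not the authors' exact pipeline), the mixed-source training described above can be reproduced by concatenating a large web-scraped dataset with a small set of site-specific field images before fine-tuning a pretrained CNN. The folder paths, the ResNet-18 backbone and the binary present/absent labelling below are assumptions made for the example.

```python
# Illustrative sketch: fine-tune a CNN on social-media images plus a small set
# of field images from the study site. Paths and labels are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# "social_media/" holds the large web-scraped set; "field_newcaledonia/" holds
# the small site-specific set (about 12% of the social-media images in the study).
social = datasets.ImageFolder("social_media/", transform=tfm)
field = datasets.ImageFolder("field_newcaledonia/", transform=tfm)
train_loader = DataLoader(ConcatDataset([social, field]), batch_size=32, shuffle=True)

# Binary classifier: dugong present / absent.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice the small field subset can be oversampled or given a larger loss weight so that the few site-specific images are not drowned out by the web-scraped ones.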




Subject(s)
Deep Learning, Social Media, Animals, Conservation of Natural Resources, Humans, Neural Networks, Computer
3.
Sci Rep; 10(1): 14846, 2020 Sep 04.
Article in English | MEDLINE | ID: mdl-32884094

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

4.
Sci Rep; 10(1): 10972, 2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32620873

ABSTRACT

Processing data from surveys using photos or videos remains a major bottleneck in ecology. Deep Learning Algorithms (DLAs) have been increasingly used to automatically identify organisms in images. However, despite recent advances, it remains difficult to control the error rate of such methods. Here, we propose a new framework to control the error rate of DLAs. More precisely, for each species, a confidence threshold is automatically computed using a training dataset independent from the one used to train the DLAs. These species-specific thresholds are then used to post-process the DLA outputs, assigning to each image a classification score for each class, including a new class called "unsure". We applied this framework to a case study identifying 20 fish species from 13,232 underwater images taken on coral reefs. The overall rate of species misclassification decreased from 22% with the raw DLAs to 2.98% after post-processing with thresholds defined to minimize the risk of misclassification. This new framework has the potential to unclog the bottleneck of information extraction from massive digital data while ensuring a high level of accuracy in biodiversity assessment.
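The thresholding idea can be sketched as follows, under assumed data structures: for each species, scan candidate thresholds on an independent calibration set and keep the smallest softmax score whose retained predictions have an error rate below a target, then send any test prediction falling under its species' threshold to the extra "unsure" class. The function names, the 5% error target and the score format are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of species-specific confidence thresholds with an "unsure" class.
import numpy as np

def species_thresholds(cal_scores, cal_pred, cal_true, n_species, max_error=0.05):
    """cal_scores: top softmax score per calibration image;
    cal_pred / cal_true: predicted and true species indices."""
    thresholds = np.zeros(n_species)
    for s in range(n_species):
        sel = cal_pred == s
        scores, correct = cal_scores[sel], (cal_true[sel] == s)
        # Scan candidate thresholds from low to high; keep the first one whose
        # retained predictions have an error rate at or below max_error.
        for t in np.sort(scores):
            kept = scores >= t
            if kept.any() and 1.0 - correct[kept].mean() <= max_error:
                thresholds[s] = t
                break
        else:
            thresholds[s] = np.inf  # no threshold meets the target: always unsure
    return thresholds

def post_process(scores, preds, thresholds):
    """Return the predicted species index, or -1 for the extra 'unsure' class."""
    preds = preds.copy()
    preds[scores < thresholds[preds]] = -1
    return preds
```

Predictions labelled -1 here stand in for the "unsure" class; how they are handled downstream (manual review, exclusion from counts) is a separate design choice.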
