1.
Am Nat ; 203(5): 618-627, 2024 May.
Article in English | MEDLINE | ID: mdl-38635364

ABSTRACT

Autonomous sensors provide opportunities to observe organisms across spatial and temporal scales that humans cannot directly observe. By processing large data streams from autonomous sensors with deep learning methods, researchers can make novel and important natural history discoveries. In this study, we combine automated acoustic monitoring with deep learning models to observe breeding-associated activity in the endangered Sierra Nevada yellow-legged frog (Rana sierrae), a behavior that current surveys do not measure. By deploying inexpensive hydrophones and developing a deep learning model to recognize breeding-associated vocalizations, we discover three undocumented R. sierrae vocalization types and find an unexpected temporal pattern of nocturnal breeding-associated vocal activity. This study exemplifies how the combination of autonomous sensor data and deep learning can shed new light on species' natural history, especially during times or in locations where human observation is limited or impossible.
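Acoustic deep learning models like the one described above typically operate on spectrograms rather than raw waveforms. The sketch below shows the generic spectrogram preprocessing step only; it is not the authors' model or code, and the window/hop sizes and the synthetic 2 kHz test tone are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram in dB -- the typical input representation
    fed to an acoustic deep learning classifier (illustrative sketch)."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # shape: (frames, win//2 + 1)
    return 20 * np.log10(mag + 1e-10)           # avoid log(0)

# Synthetic 2 kHz tone standing in for a recorded call
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 2000 * t))
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 256   # FFT bin spacing = sr / win
```

A classifier trained on such spectrograms then scores each time window for the presence of the target vocalization.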


Subject(s)
Ranidae , Vocalization, Animal , Animals , Humans , Acoustics
2.
Sensors (Basel) ; 23(11)2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37299981

ABSTRACT

The AudioMoth is a popular autonomous recording unit (ARU) that is widely used to record vocalizing species in the field. Despite its growing use, there have been few quantitative tests on the performance of this recorder. Such information is needed to design effective field surveys and to appropriately analyze recordings made by this device. Here, we report the results of two tests designed to evaluate the performance characteristics of the AudioMoth recorder. First, we performed indoor and outdoor pink noise playback experiments to evaluate how different device settings, orientations, mounting conditions, and housing options affect frequency response patterns. We found little variation in acoustic performance between devices and relatively little effect of placing recorders in a plastic bag for weather protection. The AudioMoth has a mostly flat on-axis response with a boost above 3 kHz, with a generally omnidirectional response that suffers from attenuation behind the recorder, an effect that is accentuated when it is mounted on a tree. Second, we performed battery life tests under a variety of recording frequencies, gain settings, environmental temperatures, and battery types. We found that standard alkaline batteries last for an average of 189 h at room temperature using a 32 kHz sample rate, and that lithium batteries can last for twice as long at freezing temperatures compared to alkaline batteries. This information will aid researchers in both collecting and analyzing recordings generated by the AudioMoth recorder.
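The reported battery life figure directly constrains survey planning: at a given sample rate, battery endurance translates into a storage budget. The arithmetic below uses the 189 h / 32 kHz figure from the abstract; the 16-bit mono WAV encoding is an assumption for illustration, not a detail reported in the paper.

```python
SAMPLE_RATE = 32_000    # Hz, sample rate used in the battery-life test
BYTES_PER_SAMPLE = 2    # assumption: 16-bit mono PCM WAV
BATTERY_HOURS = 189     # mean alkaline battery life reported above

bytes_per_hour = SAMPLE_RATE * BYTES_PER_SAMPLE * 3600  # ~230 MB per hour
total_gb = BATTERY_HOURS * bytes_per_hour / 1e9         # ~43.5 GB per battery set
```

Under these assumptions, a single set of alkaline batteries outlasts a 32 GB SD card, so storage rather than power may be the limiting factor for continuous recording.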


Subject(s)
Acoustics , Noise , Temperature , Cold Temperature , Housing
3.
Conserv Biol ; 35(5): 1659-1668, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33586273

ABSTRACT

Anurans (frogs and toads) are among the most globally threatened taxonomic groups. Successful conservation of anurans will rely on improved data on the status and changes in local populations, particularly for rare and threatened species. Automated sensors, such as acoustic recorders, have the potential to provide such data by massively increasing the spatial and temporal scale of population sampling efforts. Analyzing such data sets will require robust and efficient tools that can automatically identify the presence of a species in audio recordings. Like bats and birds, many anuran species produce distinct vocalizations that can be captured by autonomous acoustic recorders and represent excellent candidates for automated recognition. However, in contrast to birds and bats, effective automated acoustic recognition tools for anurans are not yet widely available. An effective automated call-recognition method for anurans must be robust to the challenges of real-world field data and should not require extensive labeled data sets. We devised a vocalization identification tool that classifies anuran vocalizations in audio recordings based on their periodic structure: the repeat interval-based bioacoustic identification tool (RIBBIT). We applied RIBBIT to field recordings to study the boreal chorus frog (Pseudacris maculata) of temperate North American grasslands and the critically endangered variable harlequin frog (Atelopus varius) of tropical Central American rainforests. The tool accurately identified boreal chorus frogs, even when they vocalized in heavily overlapping choruses, and identified variable harlequin frog vocalizations at a field site where the species had very rarely been encountered in visual surveys. Using a few simple parameters, RIBBIT can detect any vocalization with a periodic structure, including those of many anurans, insects, birds, and mammals. We provide open-source implementations of RIBBIT in Python and R to support its use for other taxa and communities.
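The core idea of scoring a call by its repeat interval can be illustrated with amplitude-envelope autocorrelation. The sketch below is a simplified stand-in for this approach, not the published RIBBIT implementation (open-source Python and R versions exist); the pulse rate, frame rate, and search band are illustrative assumptions.

```python
import numpy as np

def repeat_rate(envelope, frame_rate, min_hz=5, max_hz=60):
    """Estimate the pulse-repetition rate of an amplitude envelope by
    autocorrelation -- a simplified illustration of periodicity-based
    call detection, not the published RIBBIT algorithm."""
    env = envelope - envelope.mean()                     # remove DC offset
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lo = int(frame_rate / max_hz)                        # shortest lag searched
    hi = int(frame_rate / min_hz)                        # longest lag searched
    lag = lo + ac[lo:hi].argmax()                        # strongest repeat interval
    return frame_rate / lag                              # pulses per second

# Synthetic call: 15 pulses/s envelope sampled at 300 frames/s
fr = 300
env = np.zeros(fr * 2)
env[::fr // 15] = 1.0          # one pulse every 20 frames
rate = repeat_rate(env, fr)
```

A detector built this way needs only the expected pulse-rate band of the target species, which is why a periodicity approach requires far less labeled training data than a learned classifier.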




Subject(s)
Conservation of Natural Resources , Vocalization, Animal , Acoustics , Animals , Anura , Birds