Results 1 - 4 of 4
1.
Sci Rep; 13(1): 11106, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37429871

ABSTRACT

Acoustic identification of vocalizing individuals opens up new and deeper insights into animal communication, such as individual- and group-specific dialects, turn-taking events, and dialogs. However, establishing an association between an individual animal and its emitted signal is usually non-trivial, especially for animals underwater. Consequently, collecting marine species-, array-, and position-specific ground-truth localization data is extremely challenging, which severely limits the ability to evaluate localization methods beforehand, or at all. This study presents ORCA-SPY, a fully automated sound source simulation, classification, and localization framework for passive killer whale (Orcinus orca) acoustic monitoring that is embedded into PAMGuard, a widely used bioacoustic software toolkit. ORCA-SPY enables array- and position-specific multichannel audio stream generation to simulate real-world ground-truth killer whale localization data, and it provides a hybrid sound source identification approach that integrates ANIMAL-SPOT, a state-of-the-art deep-learning-based orca detection network, followed by downstream Time-Difference-Of-Arrival (TDOA) localization. ORCA-SPY was evaluated on simulated multichannel underwater audio streams including various killer whale vocalization events within a large-scale experimental setup benefiting from previous real-world fieldwork experience. Across all 58,320 embedded vocalizing killer whale events, subject to various hydrophone array geometries, call types, distances, and noise conditions yielding a signal-to-noise ratio varying from [Formula: see text] dB to 3 dB, a detection rate of 94.0% was achieved with an average localization error of 7.01[Formula: see text]. ORCA-SPY was field-tested on Lake Stechlin in Brandenburg, Germany under laboratory conditions with a focus on localization. During the field test, 3889 localization events were observed with an average error of 29.19[Formula: see text] and a median error of 17.54[Formula: see text]. ORCA-SPY was also deployed successfully during the DeepAL fieldwork 2022 expedition (DLFW22) in Northern British Columbia, with a mean error of 20.01[Formula: see text] and a median error of 11.01[Formula: see text] across 503 localization events. ORCA-SPY is an open-source, publicly available software framework that can be adapted to various recording conditions as well as other animal species.


Subject(s)
Deep Learning; Whale, Killer; Animals; Sound; Computer Simulation; Software
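
For readers unfamiliar with the downstream localization step named in this abstract, the following is a minimal Python sketch of TDOA estimation for one hydrophone pair via cross-correlation, followed by a far-field bearing estimate. All function names, parameters, and the toy signal are illustrative assumptions, not ORCA-SPY's or PAMGuard's API (PAMGuard itself is Java-based).

    import numpy as np
    from scipy.signal import correlate

    def estimate_tdoa(sig_a, sig_b, fs):
        # Cross-correlate the two channels; the lag of the correlation
        # peak is the time difference of arrival between them (seconds).
        corr = correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(np.abs(corr)) - (len(sig_b) - 1)
        return lag / fs

    def tdoa_to_bearing(delta_t, spacing_m, c=1480.0):
        # Far-field bearing (degrees) for a two-element array; c is the
        # speed of sound in water (~1480 m/s, temperature/salinity dependent).
        cos_theta = np.clip(c * delta_t / spacing_m, -1.0, 1.0)
        return np.degrees(np.arccos(cos_theta))

    fs = 96_000
    t = np.arange(fs) / fs
    call = np.sin(2 * np.pi * 8_000 * t) * np.exp(-t * 50)  # toy "call"
    ch_a = call
    ch_b = np.roll(call, 48)  # 0.5 ms inter-hydrophone delay
    print(tdoa_to_bearing(estimate_tdoa(ch_a, ch_b, fs), spacing_m=2.0))

With more than two hydrophones, the TDOAs of several pairs are typically combined (e.g. by least squares) to obtain a position estimate rather than a single bearing.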
2.
R Soc Open Sci; 10(6): 221613, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37325592

ABSTRACT

Area-restricted search (ARS) behaviour is commonly used to characterize spatio-temporal variation in the foraging activity of predators, but evidence of the drivers underlying this behaviour in marine systems is sparse. Advances in underwater sound recording techniques and automated processing of acoustic data now provide opportunities to investigate these questions where species use different vocalizations when encountering prey. Here, we used passive acoustics to investigate drivers of ARS behaviour in a population of dolphins and determined whether residency in key foraging areas increased following encounters with prey. Analyses were based on two independent proxies of foraging: echolocation buzzes (widely used as foraging proxies) and bray calls (vocalizations linked to salmon predation attempts). Echolocation buzzes were extracted from echolocation data loggers, and bray calls from broadband recordings by a convolutional neural network. We found a strong positive relationship between the duration of encounters and the frequency of both foraging proxies, supporting the theory that bottlenose dolphins engage in ARS behaviour in response to higher prey encounter rates. This study provides empirical evidence for one driver of ARS behaviour and demonstrates the potential of applying passive acoustic monitoring in combination with deep-learning-based techniques to investigate the behaviour of vocal animals.
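
To make the reported relationship concrete, here is a hedged Python sketch of one plausible analysis: a Poisson regression of per-encounter foraging-proxy counts on encounter duration. The data and the model choice are hypothetical illustrations, not taken from the paper.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical per-encounter data: encounter duration (minutes) and
    # the number of foraging proxies (buzzes or bray calls) detected.
    duration_min = np.array([3.0, 7.5, 12.0, 4.2, 20.0, 9.1, 15.3, 6.4])
    proxy_count = np.array([1, 4, 9, 2, 15, 5, 11, 3])

    # Poisson regression of proxy counts on duration: a positive duration
    # coefficient indicates longer encounters carry more foraging activity.
    X = sm.add_constant(duration_min)
    fit = sm.GLM(proxy_count, X, family=sm.families.Poisson()).fit()
    print(fit.summary())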

3.
Sci Rep; 12(1): 21966, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36535999

ABSTRACT

Bioacoustic research spans a wide range of biological questions and applications, relying on the identification of target species or smaller acoustic units, such as distinct call types. However, manually identifying the signal of interest is time-intensive, error-prone, and becomes unfeasible with large data volumes. Therefore, machine-driven algorithms are increasingly applied to various bioacoustic signal identification challenges. Nevertheless, biologists still face major difficulties when trying to transfer existing animal- and/or scenario-related machine learning approaches to their specific animal datasets and scientific questions. This study presents ANIMAL-SPOT, an animal-independent, open-source deep learning framework, along with a detailed user guide. Three signal identification tasks, commonly encountered in bioacoustics research, were investigated: (1) target signal vs. background noise detection, (2) species classification, and (3) call type categorization. ANIMAL-SPOT successfully segmented human-annotated target signals in data volumes representing 10 distinct animal species and 1 additional genus, resulting in a mean test accuracy of 97.9% together with an average area under the ROC curve (AUC) of 95.9% when predicting on unseen recordings. Moreover, an average segmentation accuracy and F1-score of 95.4% was achieved on the publicly available BirdVox-Full-Night data corpus. In addition, multi-class species and call type classification resulted in 96.6% and 92.7% accuracy on unseen test data, as well as 95.2% and 88.4% on excerpts from previous animal-specific machine-based detection. Furthermore, an Unweighted Average Recall (UAR) of 89.3% outperformed the multi-species classification baseline system of the ComParE 2021 Primate Sub-Challenge. Besides animal independence, ANIMAL-SPOT does not rely on expert knowledge or special computing resources, making deep-learning-based bioacoustic signal identification accessible to a broad audience.


Subject(s)
Deep Learning; Animals; Humans; Machine Learning; Algorithms; Acoustics; Area Under Curve
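
To make detection task (1) concrete, below is a deliberately small PyTorch sketch of a binary spectrogram classifier (target signal vs. background noise). It only illustrates the general kind of pipeline; the published ANIMAL-SPOT network, its input features, and its training setup differ.

    import torch
    import torch.nn as nn

    class SpectrogramBinaryNet(nn.Module):
        # Classifies a spectrogram patch as target signal vs. noise.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)  # logits for {noise, target}

        def forward(self, x):  # x: (batch, 1, mel_bins, time_frames)
            return self.head(self.features(x).flatten(1))

    net = SpectrogramBinaryNet()
    logits = net(torch.randn(4, 1, 64, 128))  # four random toy patches
    print(logits.shape)  # torch.Size([4, 2])

In practice, a long recording is segmented by sliding such a classifier over consecutive spectrogram windows and keeping windows scored as "target".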
4.
Sci Rep; 9(1): 10997, 2019 Jul 29.
Article in English | MEDLINE | ID: mdl-31358873

ABSTRACT

Large bioacoustic archives of wild animals are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge remains that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis. This is particularly important for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository, the Orchive, comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive recordings (about 2.2 years of audio) took approximately 8 days. It achieved a time-based precision, or positive predictive value (PPV), of 93.2% and an area under the curve (AUC) of 0.9523. This approach enables an automated annotation procedure for large bioacoustic databases to extract killer whale sounds, which are essential for the subsequent identification of significant communication patterns. The code will be publicly available in October 2019 to support the application of deep learning to bioacoustic research. ORCA-SPOT can be adapted to other animal species.


Subject(s)
Vocalization, Animal; Whale, Killer/physiology; Acoustics; Animals; Deep Learning; Female; Male; Neural Networks, Computer; Sound; Sound Spectrography/methods
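
The two evaluation metrics reported above can be reproduced on toy data as follows. The segment-level labels and detector scores are hypothetical (the paper's evaluation is time-based rather than segment-based), but the metric definitions are standard.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical segment labels (1 = killer whale, 0 = noise) and scores.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    scores = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.5])

    y_pred = (scores >= 0.5).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    ppv = tp / (tp + fp)  # precision / positive predictive value
    auc = roc_auc_score(y_true, scores)  # area under the ROC curve
    print(f"PPV = {ppv:.3f}, AUC = {auc:.3f}")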