1.
J Acoust Soc Am ; 149(6): 3797, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34241455

ABSTRACT

This paper proposes a robust system for detecting North Atlantic right whales by using deep learning methods to denoise noisy recordings. Passive acoustic recordings of right whale vocalisations are subject to noise contamination from many sources, such as shipping and offshore activities. When such data are applied to uncompensated classifiers, accuracy falls substantially. To build robustness into the detection process, two separate approaches that have proved successful for image denoising are considered. Specifically, a denoising convolutional neural network and a denoising autoencoder, each of which is applied to spectrogram representations of the noisy audio signal, are developed. Performance is improved further by matching the classifier training to include the vestigial signal that remains in clean estimates after the denoising process. Evaluations are performed first by adding white, tanker, trawler, and shot noises at signal-to-noise ratios from -10 to +5 dB to clean recordings to simulate noisy conditions. Experiments show that denoising gives substantial improvements to accuracy, particularly when using the vestigial-trained classifier. A final test applies the proposed methods to previously unseen noisy right whale recordings and finds that denoising is able to improve performance over the baseline clean-trained model in this new noise environment.
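The evaluation described above mixes clean recordings with white, tanker, trawler, and shot noise at SNRs from -10 to +5 dB. A minimal sketch of that SNR-controlled mixing step (not code from the paper; `mix_at_snr` and the sine-tone stand-in for a clean recording are illustrative):

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that clean + noise has the requested SNR in dB."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power for the requested SNR: SNR_dB = 10*log10(P_clean / P_noise)
    target_p_noise = p_clean / (10.0 ** (snr_db / 10.0))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return clean + scaled, scaled

rng = np.random.default_rng(0)
fs = 16000
clean = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)  # 1 s, 200 Hz tone
noise = rng.standard_normal(fs)                        # white noise
noisy, scaled = mix_at_snr(clean, noise, -10.0)

# Verify the achieved SNR of the mixture
snr = 10 * np.log10(np.mean(clean ** 2) / np.mean(scaled ** 2))  # ≈ -10.0 dB
```

In a pipeline like the one described, `noisy` would then be converted to a spectrogram and passed through the denoising network before classification.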


Subject(s)
Deep Learning , Whales , Animals , Neural Networks, Computer , Noise/adverse effects , Signal-To-Noise Ratio
2.
J Acoust Soc Am ; 124(6): 3989-4000, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19206822

ABSTRACT

The aim of this work is to develop methods that enable acoustic speech features to be predicted from mel-frequency cepstral coefficient (MFCC) vectors as may be encountered in distributed speech recognition architectures. The work begins with a detailed analysis of the multiple correlation between acoustic speech features and MFCC vectors. This confirms the existence of correlation, which is found to be higher when measured within specific phonemes rather than globally across all speech sounds. The correlation analysis leads to the development of a statistical method of predicting acoustic speech features from MFCC vectors that utilizes a network of hidden Markov models (HMMs) to localize prediction to specific phonemes. Within each HMM, the joint density of acoustic features and MFCC vectors is modeled and used to make a maximum a posteriori prediction. Experimental results are presented across a range of conditions, such as with speaker-dependent, gender-dependent, and gender-independent constraints, and these show that acoustic speech features can be predicted from MFCC vectors with good accuracy. A comparison is also made against an alternative scheme that substitutes the higher-order MFCCs with acoustic features for transmission. This delivers accurate acoustic features but at the expense of a significant reduction in speech recognition accuracy.
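For a single joint Gaussian over the stacked feature vector, the maximum a posteriori prediction described above reduces to the conditional mean. A toy sketch under that single-Gaussian assumption (the data, dimensions, and variable names are invented for illustration; the paper models the joint density per HMM state over MFCCs and acoustic features):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
# Toy joint data: x plays the role of MFCC vectors, y an acoustic feature
x = rng.standard_normal((n, d))
y = (x @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.standard_normal(n))[:, None]

# Fit a single joint Gaussian over [x, y]
z = np.hstack([x, y])
mu = z.mean(axis=0)
cov = np.cov(z, rowvar=False)
mu_x, mu_y = mu[:d], mu[d:]
S_xx, S_yx = cov[:d, :d], cov[d:, :d]

def predict(x_new):
    """Conditional mean of y given x: mu_y + S_yx S_xx^-1 (x - mu_x)."""
    return mu_y + S_yx @ np.linalg.solve(S_xx, x_new - mu_x)

y_hat = predict(np.array([1.0, 0.0, 0.0]))  # close to 0.5 by construction
```

Localizing this prediction to specific phonemes, as the HMM network does, amounts to fitting one such joint model per state and selecting it via decoding.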


Subject(s)
Models, Biological , Pattern Recognition, Physiological , Phonetics , Recognition, Psychology , Signal Detection, Psychological , Speech Acoustics , Speech Intelligibility , Speech Perception , Female , Humans , Male , Psychoacoustics , Reproducibility of Results , Sex Factors
3.
J Acoust Soc Am ; 118(2): 1134-43, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16158667

ABSTRACT

This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction are attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
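Reconstruction from MFCC vectors rests on the fact that MFCCs are a truncated DCT of the log mel spectrum, so a smoothed spectral envelope can be recovered by transposing the orthonormal DCT basis. A sketch of that inversion step (the 24-filter / 13-coefficient sizes are typical choices, not taken from the paper):

```python
import numpy as np

def dct_matrix(n_out, n_in):
    """Orthonormal DCT-II basis (rows = coefficients), as used for MFCCs."""
    k = np.arange(n_out)[:, None]
    m = np.arange(n_in)[None, :]
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * n_in))
    basis[0] *= 1 / np.sqrt(2)
    return basis * np.sqrt(2 / n_in)

n_mels, n_ceps = 24, 13
C = dct_matrix(n_ceps, n_mels)   # truncating transform: log mel -> MFCC

rng = np.random.default_rng(2)
log_mel = rng.standard_normal(n_mels)  # stand-in for log mel filterbank energies
mfcc = C @ log_mel                      # forward: keep only 13 coefficients
log_mel_hat = C.T @ mfcc                # inverse of the truncated orthonormal DCT
```

Because coefficients 13-23 are discarded, `log_mel_hat` is a smoothed version of the original envelope; this is why the excitation parameters (fundamental frequency and voicing) must be supplied separately, which is exactly what the MAP prediction above provides.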


Subject(s)
Phonetics , Speech Acoustics , Speech Perception/physiology , Dichotic Listening Tests , Female , Humans , Male , Mathematical Computing , Models, Biological , Predictive Value of Tests , Sound Spectrography , Speech Intelligibility , Time Factors , Voice