A perceptual similarity space for speech based on self-supervised speech representations.
Chernyak, Bronya R; Bradlow, Ann R; Keshet, Joseph; Goldrick, Matthew.
Affiliation
  • Chernyak BR; Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel.
  • Bradlow AR; Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA.
  • Keshet J; Faculty of Electrical & Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200003, Israel.
  • Goldrick M; Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA.
J Acoust Soc Am ; 155(6): 3915-3929, 2024 Jun 01.
Article in En | MEDLINE | ID: mdl-38904539
ABSTRACT
Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes show remarkable robustness against signal- and noise-related degradation. Which acoustic features of speech explain this substantial variation in intelligibility? Current approaches align speech to text to extract a small set of pre-defined spectro-temporal properties from specific sounds in particular words. However, variation in these properties leaves much cross-talker variation in intelligibility unexplained. We examine an alternative approach utilizing a perceptual similarity space acquired using self-supervised learning. This approach encodes distinctions between speech samples without requiring pre-defined acoustic features or speech-to-text alignment. We show that L2 English speech samples are less tightly clustered in the space than L1 samples, reflecting variability in English proficiency among L2 talkers. Critically, distances in this similarity space are perceptually meaningful: L1 English listeners have lower recognition accuracy for L2 speakers whose speech is more distant in the space from L1 speech. These results indicate that perceptual similarity may form the basis for an entirely new speech and language analysis approach.
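The two quantities the abstract describes, cluster tightness and distance from L1 speech, can be sketched with toy embeddings. The snippet below is a minimal illustration, not the paper's actual pipeline: it assumes each talker has already been reduced to a single fixed-dimensional vector (e.g., by mean-pooling frame-level self-supervised representations), and the random vectors stand in for those embeddings.

```python
import numpy as np

def cluster_spread(embeddings: np.ndarray) -> float:
    """Mean Euclidean distance of each talker's embedding to the cluster centroid."""
    centroid = embeddings.mean(axis=0)
    return float(np.linalg.norm(embeddings - centroid, axis=1).mean())

def distance_to_l1(talker: np.ndarray, l1_embeddings: np.ndarray) -> float:
    """Distance from one talker's embedding to the centroid of the L1 cluster."""
    return float(np.linalg.norm(talker - l1_embeddings.mean(axis=0)))

rng = np.random.default_rng(0)
# Toy stand-ins for talker embeddings; L2 talkers drawn with higher variance
# to mimic the looser clustering reported for L2 speech.
l1 = rng.normal(0.0, 0.5, size=(20, 16))
l2 = rng.normal(0.0, 1.5, size=(20, 16))

print(cluster_spread(l1) < cluster_spread(l2))          # L2 cluster is more dispersed
print(distance_to_l1(l2[0], l1) > 0.0)                  # a per-talker distance score
```

Under the paper's finding, a per-talker distance score like `distance_to_l1` would negatively predict recognition accuracy by L1 listeners.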
Subjects

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Speech Acoustics / Speech Intelligibility / Speech Perception Limits: Adult / Female / Humans / Male Language: En Journal: J Acoust Soc Am Publication year: 2024 Document type: Article Country of affiliation: Israel Country of publication: United States
