Results 1 - 3 of 3
1.
Commun Biol ; 7(1): 711, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862808

ABSTRACT

Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants in accepting or rejecting person identities recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers using advanced deepfake technologies. In an identity matching task, participants show intermediate performance with deepfake voices, indicating partial deception alongside resistance to deepfake identity spoofing. At the brain level, univariate and multivariate analyses consistently reveal a central cortico-striatal network that decoded the vocal acoustic pattern and deepfake level (auditory cortex) as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity and object recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.
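The multivariate ("decoding") analyses mentioned in this abstract typically amount to training a cross-validated classifier to predict a stimulus label, such as speaker identity, from trial-wise activity patterns. The sketch below illustrates that general idea in Python with scikit-learn on randomly generated placeholder arrays X and y; it is not the authors' pipeline, and all array sizes and names are illustrative assumptions.

# Minimal sketch of a multivariate decoding analysis: a cross-validated
# linear classifier predicts speaker identity from trial-wise patterns.
# X and y are random placeholders, not the study's data or pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_speakers = 200, 500, 4
X = rng.normal(size=(n_trials, n_features))      # activity pattern per trial (placeholder)
y = rng.integers(0, n_speakers, size=n_trials)   # speaker-identity label per trial (placeholder)

decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
scores = cross_val_score(decoder, X, y, cv=5)    # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f} (chance ~ {1 / n_speakers:.2f})")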


Subjects
Speech Perception, Humans, Male, Female, Adult, Young Adult, Speech Perception/physiology, Nerve Net/physiology, Auditory Cortex/physiology, Voice/physiology, Corpus Striatum/physiology
2.
Cerebellum ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38448793

ABSTRACT

The progression of multisystem neurodegenerative diseases such as ataxia significantly impacts speech and communication, necessitating adaptive clinical care strategies. As speech deteriorates, Alternative and Augmentative Communication (AAC) can play an ever-increasing role in daily life for individuals with ataxia. This review describes the spectrum of AAC resources available, ranging from unaided gestures and sign language to high-tech solutions such as speech-generating devices (SGDs) and eye-tracking technology. Despite the availability of various AAC tools, their efficacy is often compromised by the physical limitations inherent in ataxia, including upper limb ataxia and visual disturbances. Traditional speech-to-text algorithms and eye-gaze technology face challenges in accuracy and efficiency due to the atypical speech and movement patterns associated with the disease. In addressing these challenges, maintaining existing speech abilities through rehabilitation is prioritized, complemented by advances in digital therapeutics that provide home-based treatments. Simultaneously, projects incorporating AI-driven solutions aim to enhance the intelligibility of dysarthric speech through improved speech-to-text accuracy. This review discusses the complex needs assessment for AAC in ataxia, emphasizing the dynamic nature of the disease and the importance of regular reassessment to tailor communication strategies to the changing abilities of the individual. It also highlights the necessity of multidisciplinary involvement for effective AAC assessment and intervention. The future of AAC looks promising with developments in brain-computer interfaces and the potential of voice banking, although their application in ataxia requires further exploration.

3.
J Acoust Soc Am ; 146(1): EL1, 2019 07.
Article in English | MEDLINE | ID: mdl-31370609

ABSTRACT

An unsupervised automatic clustering algorithm (k-means) classified 1282 Mel-frequency cepstral coefficient (MFCC) representations of isolated steady-state vowel utterances from eight standard German vowel categories with fundamental frequencies (fo) between 196 and 698 Hz. Experiment I determined the number of MFCCs (1-20) and the spectral bandwidth (2-20 kHz) at which performance peaked (five MFCCs at 4 kHz). In experiment II, classification performance across different fo ranges revealed that ranges with fo > 500 Hz reduced classification performance, but it remained well above chance. This shows that isolated steady-state vowels with strongly undersampled spectra contain sufficient acoustic information to be classified automatically.
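As a rough illustration of the pipeline described here (MFCC extraction followed by unsupervised k-means clustering into eight vowel categories), the Python sketch below uses librosa and scikit-learn. The file path is a placeholder, and the parameters (five MFCCs, 4 kHz upper band limit, eight clusters) simply mirror the abstract rather than reproduce the study's implementation.

# Sketch of MFCC extraction + unsupervised k-means clustering of vowel tokens.
# File locations are placeholders; parameters follow the abstract, not the paper's code.
import glob
import numpy as np
import librosa
from sklearn.cluster import KMeans

def vowel_mfcc(path, n_mfcc=5, fmax=4000):
    # Load the recording and average the MFCCs over the steady-state vowel
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, fmax=fmax)
    return mfcc.mean(axis=1)

wav_files = sorted(glob.glob("vowels/*.wav"))   # placeholder location of the vowel tokens
X = np.vstack([vowel_mfcc(p) for p in wav_files])

# Cluster into eight groups, matching the eight German vowel categories
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)
print(np.bincount(clusters))                    # how many tokens fell into each cluster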
