Results 1 - 20 of 20
1.
Cognition; 251: 105909, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39111075

ABSTRACT

Vowelless words are exceptionally rare typologically, though they are found in some languages, such as Tashlhiyt (e.g., fkt 'give it'). The current study tests whether lexicons containing tri-segmental (CCC) vowelless words are more difficult for adult English speakers to acquire from brief auditory exposure than lexicons without vowelless words. The role of acoustic-phonetic form in learning these typologically rare word forms is also explored: in Experiment 1, participants were trained on words produced in either only clear speech or only casual speech; Experiment 2 trained participants on lexical items produced in both speech styles. Listeners learned vowelless and voweled lexicons equally well when speaking style was consistent, but learning was lower for vowelless lexicons when training consisted of variable acoustic-phonetic forms. In both experiments, responses in a post-training wordlikeness rating task containing novel items revealed that exposure to a vowelless lexicon leads participants to accept new vowelless words as acceptable lexical forms. These results demonstrate that one of the typologically rarest types of lexical form - words without vowels - can be rapidly acquired by naive adult listeners. Yet acoustic-phonetic variation modulates learning.


Subject(s)
Learning, Speech Perception, Humans, Adult, Speech Perception/physiology, Male, Female, Learning/physiology, Young Adult, Phonetics, Language
2.
Sci Rep; 14(1): 15611, 2024 Jul 6.
Article in English | MEDLINE | ID: mdl-38971806

ABSTRACT

This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person and a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation to be misunderstood by the system. In support of this, we see that after a staged recognition error by the device, children increased pitch even more. Furthermore, both adults and children displayed the same degree of variation in their responses for whether "Alexa seems like a real person or not", further indicating that children's conceptualization of the system's competence shaped their register adjustments, rather than an increased anthropomorphism response. This work speaks to models on the mechanisms underlying speech production, and human-computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.


Subject(s)
Speech, Humans, Adult, Child, Male, Female, Speech/physiology, Young Adult, Adolescent, Psycholinguistics
3.
J Acoust Soc Am; 156(1): 489-502, 2024 Jul 1.
Article in English | MEDLINE | ID: mdl-39013039

ABSTRACT

Anticipatory coarticulation is a highly informative cue to upcoming linguistic information: listeners can identify that a word is ben and not bed from the vowel alone. The present study compares the performance of human listeners and a self-supervised pre-trained speech model (wav2vec 2.0) in using nasal coarticulation to classify vowels. Stimuli consisted of nasalized (from CVN words) and non-nasalized (from CVC words) American English vowels produced by 60 humans and generated in 36 TTS voices. In aggregate, wav2vec 2.0 performance is similar to human listener performance. Broken down by vowel type, both wav2vec 2.0 and listeners perform better for non-nasalized vowels produced naturally by humans; however, for TTS voices, wav2vec 2.0 classifies nasalized vowels more accurately than non-nasalized vowels. Speaker-level patterns reveal that listeners' use of coarticulation is highly variable across talkers; wav2vec 2.0 also shows cross-talker variability in performance. Analyses further reveal differences between listeners and wav2vec 2.0 in the use of multiple acoustic cues for nasalized-vowel classification. Findings have implications for understanding how coarticulatory variation is used in speech perception, and can provide insight into how neural systems learn to attend to the unique acoustic features of coarticulation.


Subject(s)
Phonetics, Speech Acoustics, Speech Perception, Humans, Female, Speech Perception/physiology, Male, Adult, Young Adult, Cues, Voice Quality
4.
J Acoust Soc Am; 155(5): 3060-3070, 2024 May 1.
Article in English | MEDLINE | ID: mdl-38717210

ABSTRACT

Speakers tailor their speech to different types of interlocutors. For example, speech directed to voice technology has different acoustic-phonetic characteristics than speech directed to a human. The present study investigates the perceptual consequences of human- and device-directed registers in English. We compare two groups of speakers: participants whose first language is English (L1) and bilingual L1 Mandarin-L2 English talkers. Participants produced short sentences in several conditions: an initial production and a repeat production after a human or device guise indicated either understanding or misunderstanding. In experiment 1, a separate group of L1 English listeners heard these sentences and transcribed the target words. In experiment 2, the same productions were transcribed by an automatic speech recognition (ASR) system. Results show that transcription accuracy was highest for L1 talkers for both human and ASR transcribers. Furthermore, there were no overall differences in transcription accuracy between human- and device-directed speech. Finally, while human listeners showed an intelligibility benefit for coda repair productions, the ASR transcriber did not benefit from these enhancements. Findings are discussed in terms of models of register adaptation, phonetic variation, and human-computer interaction.


Subject(s)
Multilingualism, Speech Intelligibility, Speech Perception, Humans, Male, Female, Adult, Young Adult, Speech Acoustics, Phonetics, Speech Recognition Software
5.
Sci Rep; 14(1): 313, 2024 Jan 3.
Article in English | MEDLINE | ID: mdl-38172277

ABSTRACT

Tashlhiyt is a low-resource language with respect to acoustic databases, language corpora, and speech technology tools such as Automatic Speech Recognition (ASR) systems. This study investigates whether cross-language re-use of ASR is viable for Tashlhiyt from an existing commercially available system built for Arabic. The source and target languages have similar phonological inventories, but Tashlhiyt permits typologically rare phonological patterns, including vowelless words, while Arabic does not. We find systematic disparities in ASR transfer performance (measured as word error rate (WER) and Levenshtein distance) for Tashlhiyt across word forms and speaking-style variation. Overall, performance was worse for casual speaking modes across the board. In clear speech, performance was lower for vowelless than for voweled words. These results highlight systematic speaking-mode and phonotactic disparities in cross-language ASR transfer. They also indicate that linguistically informed approaches to ASR re-use can offer more effective ways to adapt existing speech technology tools for low-resource languages, especially those containing typologically rare structures. The study also speaks to issues of linguistic disparity in ASR and speech technology more broadly, and can contribute to understanding the extent to which machines are similar to, or different from, humans in mapping the acoustic signal to discrete linguistic representations.
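The two transfer metrics named in this abstract, Levenshtein distance and word error rate, have standard definitions that can be sketched as follows. This is a generic illustration, not the study's actual evaluation code, and the example strings are invented.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence a into sequence b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    return levenshtein(ref, hyp) / len(ref)

# Invented reference transcript vs. hypothetical ASR output:
# one word deleted out of three gives a WER of 1/3.
print(wer("azul fkt tamazight", "azul fkt"))
```

Levenshtein distance applied to character sequences captures partial credit for near-miss transcriptions (useful for vowelless forms like fkt), while WER scores whole-word matches only.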


Subject(s)
Speech Perception, Humans, Language, Linguistics, Speech, Speech Recognition Software
6.
JASA Express Lett; 3(12), 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-38117232

ABSTRACT

This study investigates how California English speakers adjust nasal coarticulation and hyperarticulation on vowels across three speech styles: speaking slowly and clearly (imagining a hard-of-hearing addressee), casually (imagining a friend/family member addressee), and speaking quickly and clearly (imagining being an auctioneer). Results show covariation in speaking rate and vowel hyperarticulation across the styles. Additionally, results reveal that speakers produce more extensive anticipatory nasal coarticulation in the slow-clear speech style, in addition to a slower speech rate. These findings are interpreted in terms of accounts of coarticulation in which speakers selectively tune their production of nasal coarticulation based on the speaking style.


Subject(s)
Geraniaceae, Speech, Humans, Friends, Language, Nose
7.
J Acoust Soc Am; 154(4): 2290-2304, 2023 Oct 1.
Article in English | MEDLINE | ID: mdl-37843380

ABSTRACT

Prior work demonstrates that exposure to speakers of the same accent facilitates comprehension of a novel talker with that accent (accent-specific learning). Moreover, exposure to speakers of multiple different accents enhances understanding of a talker with a novel accent (accent-independent learning). Although bottom-up acoustic information about accent constrains adaptation to novel talkers, the effect of top-down social information remains unclear. The current study examined effects of apparent ethnicity on adaptation to novel L2-accented ("non-native") talkers while keeping bottom-up information constant. Native English listeners transcribed sentences in noise for three Mandarin-accented English speakers and then a fourth (novel) Mandarin-accented English speaker. Transcription accuracy for the novel talker improves when all speakers are presented with East Asian faces (ethnicity-specific learning), and when the exposure speakers are paired with different, non-East Asian ethnicities and the novel talker has an East Asian face (ethnicity-independent learning). However, accuracy does not improve when all speakers have White faces, or when the exposure speakers have White faces and the test talker has an East Asian face. This study demonstrates that apparent ethnicity affects adaptation to novel L2-accented talkers, underscoring the importance of social expectations in perceptual learning and cross-talker generalization.


Subject(s)
Speech Perception, Speech, Humans, Ethnicity, Learning, Language, Speech Intelligibility
8.
J Acoust Soc Am; 153(2): 1084, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36859167

ABSTRACT

This study investigates the impact of wearing a face mask on the production and perception of coarticulatory vowel nasalization. Speakers produced monosyllabic American English words with oral and nasal codas (i.e., CVC and CVN), with either tense or lax vowels, in face-masked and un-face-masked conditions addressed to a real human interlocutor. Acoustic analyses indicate that speakers produced greater coarticulatory vowel nasality in CVN items when wearing a face mask, particularly when the vowel was lax, suggesting targeted enhancement of the oral-nasalized contrast in this condition. This enhancement was not observed for tense vowels. In a perception study, participants heard CV syllables excised from the recorded words and performed coda identifications. For lax vowels, listeners were more accurate at identifying the coda in the face-masked condition, indicating that they benefited from the speakers' production adjustments. Overall, the results indicate that speakers adapt their speech in specific contexts when wearing a face mask, and these adjustments influence listeners' ability to identify words in the speech signal.


Subject(s)
Masks, Speech, Humans, Personal Protective Equipment, Acoustics, Perception
9.
J Speech Lang Hear Res; 66(2): 545-564, 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36729698

ABSTRACT

PURPOSE: This study investigates the debated claim that musicians have an advantage in speech-in-noise perception from years of targeted auditory training. We also consider the effect of age on any such advantage, comparing musicians and nonmusicians, all of whom had normal hearing. We manipulate the degree of fundamental frequency (f0) separation between competing talkers, and use different tasks, to probe attentional differences that might shape a musician's advantage across ages. METHOD: Participants (ranging in age from 18 to 66 years) included 29 musicians and 26 nonmusicians. They completed two tasks varying in attentional demands: (a) a selective attention task where listeners identify a target sentence presented with a one-talker interferer (Experiment 1), and (b) a divided attention task where listeners hear two vowels played simultaneously and identify both competing vowels (Experiment 2). In both paradigms, f0 separation between the two voices was manipulated (Δf0 = 0, 0.156, 0.306, 1, 2, 3 semitones). RESULTS: Results show that larger f0 separations lead to higher accuracy on both tasks. Additionally, we find evidence for a musician's advantage across the two studies. In the sentence identification task, younger adult musicians show higher accuracy overall, as well as a stronger reliance on f0 separation; this advantage declines with musicians' age. In the double vowel identification task, musicians of all ages show an across-the-board advantage in detecting two vowels - and use f0 separation more to aid in stream separation - but show no consistent difference in identifying both vowels. CONCLUSIONS: Overall, we find support for a hybrid auditory encoding-attention account of music-to-speech transfer. The musician's advantage includes f0, but the benefit also depends on the attentional demands of the task and listeners' age. Taken together, this study suggests a complex relationship between age, musical experience, and speech-in-speech paradigm in shaping a musician's advantage. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21956777.
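The Δf0 steps in this abstract are defined on a log-frequency scale: a separation of s semitones corresponds to a frequency ratio of 2^(s/12). A minimal sketch of the conversion; the 220 Hz base pitch is an invented illustration, not a value from the study.

```python
import math

def shift_semitones(f0_hz, semitones):
    """Frequency lying `semitones` above f0_hz
    (one semitone = a frequency ratio of 2**(1/12))."""
    return f0_hz * 2 ** (semitones / 12)

def separation_semitones(f1_hz, f2_hz):
    """Delta-f0 between two voices in semitones: 12 * log2(f2/f1)."""
    return 12 * math.log2(f2_hz / f1_hz)

base = 220.0  # hypothetical talker f0 in Hz
for s in (0, 0.156, 0.306, 1, 2, 3):  # the Δf0 steps used in the study
    print(f"{s:5.3f} st -> {shift_semitones(base, s):6.1f} Hz")
```

The logarithmic definition means the same semitone separation corresponds to a larger Hz difference for higher-pitched voices, which is why such studies report Δf0 in semitones rather than Hz.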


Subject(s)
Music, Speech Perception, Adult, Humans, Adolescent, Young Adult, Middle Aged, Aged, Speech, Hearing, Noise, Attention
10.
J Acoust Soc Am; 152(6): 3429, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36586870

ABSTRACT

Tashlhiyt Berber is known for having typologically unusual word-initial phonological contrasts, specifically, word-initial singleton-geminate minimal pairs (e.g., sin vs ssin) and sequences of consonants that violate the sonority sequencing principle (e.g., non-rising sonority sequences: fsin). The current study investigates the role of a listener-oriented speaking style on the perceptual enhancement of these rarer phonological contrasts. It examines the perception of word-initial singleton, geminate, and complex onsets in Tashlhiyt Berber across clear and casual speaking styles by native and naive listeners. While clear speech boosts the discriminability of pairs containing singleton-initial words for both listener groups, only native listeners performed better in discriminating between initial singleton-geminate contrasts in clear speech. Clear speech did not improve perception for lexical contrasts containing a non-rising-sonority consonant cluster for either listener group. These results are discussed in terms of how clear speech can inform phonological typology and the role of phonetic enhancement in language-universal vs language-specific speech perception.


Subject(s)
Speech Perception, Speech, Language, Phonetics, Contrast Media
11.
JASA Express Lett; 2(4): 045204, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36154231

ABSTRACT

This study examined how speaking style and guise influence the intelligibility of text-to-speech (TTS) and naturally produced human voices. Results showed that TTS voices were less intelligible overall. Although using a clear speech style improved intelligibility for both human and TTS voices (using "newscaster" neural TTS), the clear speech effect was stronger for TTS voices. Finally, a visual device guise decreased intelligibility, regardless of voice type. The results suggest that both speaking style and visual guise affect intelligibility of human and TTS voices. Findings are discussed in terms of theories about the role of social information in speech perception.


Subject(s)
Speech Perception, Text Messaging, Voice, Cognition, Humans, Speech Intelligibility
12.
J Acoust Soc Am; 151(1): 577, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35105023

ABSTRACT

Some models of speech production propose that speech variation reflects an adaptive trade-off between the needs of the listener and constraints on the speaker. The current study considers communicative load as both a situational and a lexical variable that influences phonetic variation in speech to real interlocutors, investigating whether the visual presence or absence of a target word for a real listener influences speakers' patterns of variation during a communicative task. To test how lexical difficulty also modulates intelligibility, target words varied in phonological neighborhood density (ND), a measure of lexical difficulty. Acoustic analyses reveal that speakers produced longer vowels in words that were not visually present for the listener, compared to words the listener could see. This suggests that speakers assess in real time the presence or absence of supportive visual information when gauging listener comprehension difficulty. Furthermore, the presence or absence of the word interacted with ND to predict both vowel duration and hyperarticulation patterns. These findings indicate that lexical measures of a word's difficulty and speakers' online assessment of lexical intelligibility (based on a word's visual presence or absence) interactively influence phonetic modifications during communication with a real listener.
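Phonological neighborhood density, the ND measure used in this abstract and several others in this list, is conventionally the count of words differing from a target by a single segment substitution, insertion, or deletion. A toy sketch against an invented mini-lexicon in rough phonemic spelling; real ND counts come from large phonemically transcribed corpora, not orthography.

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one segment
    substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    short, longer = (a, b) if la < lb else (b, a)
    # insertion/deletion: removing one segment of the longer form
    # must yield the shorter form
    return any(longer[:i] + longer[i + 1:] == short for i in range(len(longer)))

def neighborhood_density(word, lexicon):
    """Number of phonological neighbors of `word` in `lexicon`."""
    return sum(is_neighbor(word, w) for w in lexicon)

# Invented mini-lexicon for illustration:
lexicon = ["kat", "bat", "kab", "at", "kart", "dog"]
print(neighborhood_density("kat", lexicon))  # "bat", "kab", "at", "kart" -> 4
```

Under this definition a Hi ND word sits in a crowded region of the lexicon and faces more competitors during recognition, which is why ND serves as a proxy for lexical difficulty.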


Subject(s)
Phonetics, Speech Perception, Comprehension, Speech, Speech Intelligibility, Speech Production Measurement
13.
J Acoust Soc Am; 149(5): 3424, 2021 May.
Article in English | MEDLINE | ID: mdl-34241128

ABSTRACT

This study investigates the perception of coarticulatory vowel nasality generated using different text-to-speech (TTS) methods in American English. Experiment 1 compared concatenative and neural TTS using a 4IAX task, where listeners discriminated between a word pair containing either both oral or both nasalized vowels and a word pair containing one oral and one nasalized vowel. Vowels occurred either in identical or in alternating consonant contexts across pairs, to reveal perceptual sensitivity and compensatory behavior, respectively. In identical contexts, listeners were better at discriminating between oral and nasalized vowels in neural than in concatenative TTS for nasalized same-vowel trials, whereas better discrimination for concatenative TTS was observed for oral same-vowel trials. Meanwhile, listeners displayed less compensation for coarticulation in neural than in concatenative TTS. To determine whether the apparent roboticity of the TTS voice shapes vowel discrimination and compensation patterns, a "roboticized" version of neural TTS was generated (monotonized f0 and added echo), holding phonetic nasality constant; a ratings study (Experiment 2) confirmed that the manipulation changed apparent roboticity. Experiment 3 compared discrimination of unmodified and roboticized neural TTS: listeners displayed lower accuracy in identical contexts for roboticized relative to unmodified neural TTS, yet performance in alternating contexts was similar.


Subject(s)
Speech Perception, Voice, Language, Phonetics, Speech, Speech Acoustics
14.
Cognition; 210: 104570, 2021 May.
Article in English | MEDLINE | ID: mdl-33450446

ABSTRACT

This study investigates the impact of wearing a fabric face mask on speech comprehension, an underexplored topic that can inform theories of speech production. Speakers produced sentences in three speech styles (casual, clear, positive-emotional) while in both face-masked and non-face-masked conditions. Listeners were most accurate at word identification in multi-talker babble for sentences produced in clear speech, and less accurate for casual speech (with emotional speech accuracy numerically in between). In the clear speaking style, face-masked speech was actually more intelligible than non-face-masked speech, suggesting that speakers make clarity adjustments specifically for face masks. In contrast, in the emotional condition, face-masked speech was less intelligible than non-face-masked speech, and in the casual condition, no difference was observed, suggesting that 'emotional' and 'casual' speech are not styles produced with the explicit intent to be intelligible to listeners. These findings are discussed in terms of automatic and targeted speech adaptation accounts.


Subject(s)
Masks, Speech Perception, Adaptation, Physiological, Emotions, Humans, Speech Intelligibility
15.
J Acoust Soc Am; 147(3): EL271, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32237846

ABSTRACT

Listeners show better-than-chance discrimination of nasalized and oral vowels occurring in appropriate consonantal contexts. Yet, the methods for investigating partial perceptual compensation for nasal coarticulation often include nasal and oral vowels containing naturally different pitch contours. Listeners may therefore be discriminating between these vowels based on pitch differences and not nasalization. The current study investigates the effect of pitch variation on the discrimination of nasalized and oral vowels in C_N and C_C items. The f0 contour of vowels within paired discrimination trials was varied. The results indicate that pitch variation does not influence patterns of partial perceptual compensation for coarticulation.

16.
J Acoust Soc Am; 145(6): 3675, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31255131

ABSTRACT

Vowels are enhanced via vowel-space expansion in perceptually difficult contexts, including in words subject to greater lexical competition. Yet vowel hyperarticulation often covaries with other acoustic adjustments, such as increased nasal coarticulation, suggesting that the goal of phonetic enhancement is not strictly to produce canonical phoneme realizations. This study explores phonetic enhancement by examining how speakers realize an allophonic vowel split in lexically challenging conditions. Specifically, in US English, /æ/ is raising before nasal codas, such that pre-nasal and pre-oral /æ/ are moving apart. Speakers produced monosyllabic CæN and CæC words varying in phonological neighborhood density (ND), a measure of lexical difficulty, to a real listener interlocutor in an interactive task. Acoustic analyses reveal that speakers enhance pre-oral /æ/ by lowering it in Hi ND words; meanwhile, pre-nasal /æ/ in Hi ND words is produced with greater degrees of nasalization and increased diphthongization. These patterns indicate that ND-conditioned phonetic enhancement is realized in targeted ways for distinct allophones of /æ/. Results support views of hyperarticulation in which the goal is to make words, that is, segments in their contexts, as distinct as possible.


Subject(s)
Language, Speech Acoustics, Speech Perception/physiology, Speech/physiology, Female, Humans, Male, Phonetics, Speech Production Measurement/methods
17.
J Acoust Soc Am; 142(4): EL375, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092604

ABSTRACT

Surface-level phonetic details are used during word recognition. Yet questions remain about how these details are encoded in lexical representations and about the role of memory and attention in this process. The current study uses lexical repetition priming to examine how a delay between repetitions of a word, produced with either the same or different coarticulatory patterns, affects lexical recognition. Listeners were faster to recognize repeated words with the same patterns of coarticulatory nasality, confirming that subphonemic information is encoded in the lexicon. Furthermore, when listeners had to adapt to more than one talker, greater coarticulatory specificity in delayed priming was observed, suggesting that word-specific encoding of subphonemic details is an active cognitive process.

18.
Lang Cogn Neurosci; 32(6): 776-791, 2017.
Article in English | MEDLINE | ID: mdl-33043064

ABSTRACT

We investigated phonetic imitation of coarticulatory vowel nasality using an adapted shadowing paradigm in which participants produced a printed word (target) after hearing a different word (prime). Two versions of primes with nasal codas were used: primes with a natural degree of vowel nasality and hypernasalized primes. We varied which version of the prime participants heard (consistent or inconsistent with their past experience of nasality from the talker) and the duration of the delay between prime and target. Participants spontaneously modified coarticulatory nasality to resemble that demonstrated in the prime they were exposed to. Furthermore, this imitation also reflected the degree of nasality demonstrated across overall experience with the speaker's vowels, and the influence of past experience on imitation increased with longer delays between prime and target. Imitation of another speaker thus appears to involve tracking general articulatory properties of that speaker, not solely what was specific to the most recent experience.

19.
J Acoust Soc Am; 140(5): 3560, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908038

ABSTRACT

This study investigates the spontaneous phonetic imitation of coarticulatory vowel nasalization. Speakers produced monosyllabic words with a vowel-nasal sequence either from dense or sparse phonological neighborhoods in shadowing and word-naming tasks. During shadowing, they were exposed to target words that were modified to have either an artificially increased or decreased degree of coarticulatory vowel nasality. Increased nasality, which is communicatively more facilitative in that it provides robust predictive information about the upcoming nasal segment, was imitated more strongly during shadowing than decreased nasality. An effect of neighborhood density was also observed only in the increased nasality condition, where high neighborhood density words were imitated more robustly in early shadowing repetition. An effect of exposure to decreased nasality was observed during post-shadowing word-naming only. The observed imitation of coarticulatory nasality provides evidence that speakers and listeners are sensitive to the details of coarticulatory realization, and that imitation need not be mediated by abstract phonological representations. Neither a communicative account nor a representational account could single-handedly predict these observed patterns of imitation. As such, it is argued that these findings support both communicative and representational accounts of phonetic imitation.

20.
J Acoust Soc Am; 134(5): 3793-3807, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24180789

ABSTRACT

Speech produced in the context of real or imagined communicative difficulties is characterized by hyperarticulation. Phonological neighborhood density (ND) conditions similar patterns in production: Words with many neighbors are hyperarticulated relative to words with fewer; Hi ND words also show greater coarticulation than Lo ND words [e.g., Scarborough, R. (2012). "Lexical similarity and speech production: Neighborhoods for nonwords," Lingua 122(2), 164-176]. Coarticulatory properties of "clear speech" are more variable across studies. This study examined hyperarticulation and nasal coarticulation across five real and simulated clear speech contexts and two neighborhood conditions, and investigated consequences of these details for word perception. The data revealed a continuum of (attempted) clarity, though real listener-directed speech (Real) differed from all of the simulated styles. Like the clearest simulated-context speech (spoken "as if to someone hard-of-hearing"-HOH), Real had greater hyperarticulation than other conditions. However, Real had the greatest coarticulatory nasality while HOH had the least. Lexical decisions were faster for words from Real than from HOH, indicating that speech produced in real communicative contexts (with hyperarticulation and increased coarticulation) was perceptually better than simulated clear speech. Hi ND words patterned with Real in production, and Real Hi ND words were clear enough to overcome the dense neighborhood disadvantage.


Subject(s)
Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Speech Audiometry, Comprehension, Cues, Female, Humans, Male, Phonetics, Sound Spectrography, Speech Production Measurement, Time Factors