Results 1 - 20 of 22
1.
Cogn Sci ; 44(4): e12823, 2020 04.
Article in English | MEDLINE | ID: mdl-32274861

ABSTRACT

Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real-world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support HSR. In this brief article, we report preliminary results from a two-layer network that borrows one element from ASR, long short-term memory nodes, which provide dynamic memory for a range of temporal spans. This allows the model to learn to map real speech from multiple talkers to semantic targets with high accuracy, with a human-like timecourse of lexical access and phonological competition. Internal representations emerge that resemble phonetically organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of HSR.


Subject(s)
Computer Simulation , Models, Neurological , Neural Networks, Computer , Speech Perception , Speech , Female , Humans , Male , Phonetics , Semantics
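
As a concrete illustration of the kind of architecture this abstract describes, the following is a minimal PyTorch sketch of a network with an LSTM layer feeding a linear readout onto semantic targets. The feature dimension, hidden size, and semantic-vector size are invented placeholders, not values from the paper, and this is not the authors' implementation.

```python
# A minimal sketch (not the authors' model) of a two-layer LSTM network
# mapping frames of acoustic features to a semantic target vector.
# All dimensionalities below are illustrative assumptions.
import torch
import torch.nn as nn

class SpeechToSemantics(nn.Module):
    def __init__(self, n_acoustic=64, n_hidden=512, n_semantic=300):
        super().__init__()
        # "Two layers" here: a recurrent LSTM layer plus a linear readout.
        self.lstm = nn.LSTM(n_acoustic, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_semantic)

    def forward(self, x):
        # x: (batch, time, n_acoustic) acoustic frames from real speech
        hidden_states, _ = self.lstm(x)
        # Predict the semantic target at every time step so the timecourse
        # of lexical activation can be inspected frame by frame.
        return self.readout(hidden_states)

if __name__ == "__main__":
    model = SpeechToSemantics()
    dummy_input = torch.randn(8, 200, 64)   # 8 utterances, 200 frames each
    semantics = model(dummy_input)
    print(semantics.shape)                   # torch.Size([8, 200, 300])
```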
2.
Multisens Res ; 33(6): 569-598, 2020 10 09.
Article in English | MEDLINE | ID: mdl-32083558

ABSTRACT

Cross-modal correspondence is the tendency to systematically map stimulus features across sensory modalities. The current study explored cross-modal correspondence between speech sound and shape (Experiment 1), and whether such association can influence shape representation (Experiment 2). For the purpose of closely examining the role of the two factors - articulation and pitch - combined in speech acoustics, we generated two sets of 25 vowel stimuli - pitch-varying and pitch-constant sets. Both sets were generated by manipulating articulation - frontness and height of the tongue body's positions - but differed in terms of whether pitch varied among the sounds within the same set. In Experiment 1, participants made a forced choice between a round and a spiky shape to indicate the shape better associated with each sound. Results showed that shape choice was modulated according to both articulation and pitch, and we therefore concluded that both factors play significant roles in sound-shape correspondence. In Experiment 2, participants reported their subjective experience of shape accompanied by vowel sounds by adjusting an ambiguous shape in the response display. We found that sound-shape correspondence exerts an effect on shape representation by modulating audiovisual interaction, but only in the case of pitch-varying sounds. Therefore, pitch information within vowel acoustics plays the leading role in sound-shape correspondence influencing shape representation. Taken together, our results suggest the importance of teasing apart the roles of articulation and pitch for understanding sound-shape correspondence.


Subject(s)
Phonetics , Pitch Perception/physiology , Sound , Speech Acoustics , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Young Adult
3.
J Acoust Soc Am ; 146(1): 316, 2019 07.
Article in English | MEDLINE | ID: mdl-31370597

ABSTRACT

Speech inversion is a well-known ill-posed problem and addition of speaker differences typically makes it even harder. Normalizing the speaker differences is essential to effectively using multi-speaker articulatory data for training a speaker independent speech inversion system. This paper explores a vocal tract length normalization (VTLN) technique to transform the acoustic features of different speakers to a target speaker acoustic space such that speaker specific details are minimized. The speaker normalized features are then used to train a deep feed-forward neural network based speech inversion system. The acoustic features are parameterized as time-contextualized mel-frequency cepstral coefficients. The articulatory features are represented by six tract-variable (TV) trajectories, which are relatively speaker invariant compared to flesh point data. Experiments are performed with ten speakers from the University of Wisconsin X-ray microbeam database. Results show that the proposed speaker normalization approach provides an 8.15% relative improvement in correlation between actual and estimated TVs as compared to the system where speaker normalization was not performed. To determine the efficacy of the method across datasets, cross speaker evaluations were performed across speakers from the Multichannel Articulatory-TIMIT and EMA-IEEE datasets. Results prove that the VTLN approach provides improvement in performance even across datasets.
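
One widely used form of vocal tract length normalization is a piecewise-linear warp of the frequency axis before cepstral features are computed. The sketch below illustrates that general idea in NumPy; the warp factor, breakpoint, and test spectrum are illustrative assumptions and do not reproduce the paper's specific transformation to a target-speaker acoustic space.

```python
# A rough NumPy sketch (not the paper's code) of piecewise-linear VTLN:
# warp the frequency axis of a magnitude spectrum, then compute features
# from the warped spectrum. Alpha and the 0.85*Nyquist breakpoint are
# conventional illustrative choices.
import numpy as np

def piecewise_linear_warp(freqs, alpha, f_nyquist, breakpoint_ratio=0.85):
    """Map original frequencies to warped frequencies.

    alpha < 1 stretches the spectrum, alpha > 1 compresses it. Above the
    breakpoint the warp is bent so the Nyquist frequency maps to itself.
    """
    f_break = breakpoint_ratio * f_nyquist
    return np.where(
        freqs <= f_break,
        alpha * freqs,
        alpha * f_break + (f_nyquist - alpha * f_break)
        * (freqs - f_break) / (f_nyquist - f_break),
    )

def warp_spectrum(mag_spectrum, alpha, sample_rate):
    """Resample a magnitude spectrum onto a warped frequency axis."""
    n_bins = len(mag_spectrum)
    f_nyquist = sample_rate / 2.0
    freqs = np.linspace(0.0, f_nyquist, n_bins)
    warped_freqs = piecewise_linear_warp(freqs, alpha, f_nyquist)
    # Read the spectrum off at the warped frequency positions.
    return np.interp(freqs, warped_freqs, mag_spectrum)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spectrum = np.abs(rng.standard_normal(257))          # stand-in spectrum
    normalized = warp_spectrum(spectrum, alpha=0.92, sample_rate=16000)
    print(normalized.shape)                               # (257,)
```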

4.
Ann Rehabil Med ; 42(4): 634-638, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30180536

ABSTRACT

The application of three-dimensional (3D) printing is growing explosively in the medical field, and is especially widespread in the clinical fabrication of upper limb orthoses and prostheses. Advantages of 3D-printed orthoses compared to conventional ones include lower cost, easier modification, and faster fabrication. Hands are the body parts most commonly involved in burn injuries, and one of the main complications of hand burns is finger joint contracture. Applying orthotic devices such as finger splints is a well-established, essential element of burn care. In spite of the rapid evolution of the clinical use of 3D printing, to our knowledge, its application to hand burn patients has not yet been reported. In this study, the authors present a series of patients with hand burn injuries whose orthotic needs were fulfilled with the application of 3D-printed finger splints.

5.
J Phon ; 68: 1-14, 2018 May.
Article in English | MEDLINE | ID: mdl-30034052

ABSTRACT

Speech, though communicative, is quite variable both in articulation and acoustics, and it has often been claimed that articulation is more variable. Here we compared variability in articulation and acoustics for 32 speakers in the x-ray microbeam database (XRMB; Westbury, 1994). Variability in tongue, lip and jaw positions for nine English vowels (/u, ʊ, æ, ɑ, ʌ, ɔ, ε, ɪ, i/) was compared to that of the corresponding formant values. The domains were made comparable by creating three-dimensional spaces for each: the first three principal components from an analysis of a 14-dimensional space for articulation, and an F1xF2xF3 space for acoustics. More variability occurred in the articulation than the acoustics for half of the speakers, while the reverse was true for the other half. Individual tokens were further from the articulatory median than the acoustic median for 40-60% of tokens across speakers. A separate analysis of three non-low front vowels (/ε, ɪ, i/, for which the XRMB system provides the most direct articulatory evidence) did not differ from the omnibus analysis. Speakers tended to be either more or less variable consistently across vowels. Across speakers, there was a positive correlation between articulatory and acoustic variability, both for all vowels and for just the three non-low front vowels. Although the XRMB is an incomplete representation of articulation, it nonetheless provides data for direct comparisons between articulatory and acoustic variability that have not been reported previously. The results indicate that articulation is not more variable than acoustics, that speakers had relatively consistent variability across vowels, and that articulatory and acoustic variability were related for the vowels themselves.
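
The comparison described above can be illustrated schematically: project 14-dimensional articulatory measurements onto their first three principal components, then compare each token's distance from the articulatory median with its distance from the acoustic median in F1xF2xF3 space. The sketch below uses synthetic stand-in data, not the XRMB measurements.

```python
# A schematic sketch of the articulatory-vs-acoustic variability comparison,
# with synthetic stand-in data rather than the XRMB data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_tokens = 200
articulation = rng.standard_normal((n_tokens, 14))   # pellet x/y positions (stand-in)
formants = rng.standard_normal((n_tokens, 3))         # F1, F2, F3 (z-scored stand-in)

# Three-dimensional articulatory space: first three principal components.
artic_3d = PCA(n_components=3).fit_transform(articulation)

def distances_from_median(points):
    """Euclidean distance of each token from the per-dimension median."""
    median = np.median(points, axis=0)
    return np.linalg.norm(points - median, axis=1)

artic_dist = distances_from_median(artic_3d)
acoust_dist = distances_from_median(formants)

# Proportion of tokens lying farther from the articulatory median than from
# the acoustic median (the paper reports 40-60% across speakers).
print(np.mean(artic_dist > acoust_dist))
```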

6.
Multisens Res ; 31(5): 419-437, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-31264605

ABSTRACT

It has recently been reported in the synesthesia literature that graphemes sharing the same phonetic feature tend to induce similar synesthetic colors. In the present study, we investigated whether phonetic properties are associated with colors in a specific manner among the general population, even when other visual and linguistic features of graphemes are removed. To test this hypothesis, we presented vowel sounds synthesized by systematically manipulating the position of the tongue body's center. Participants were asked to choose a color after hearing each sound. Results from the main experiment showed that lightness and chromaticity of matched colors exhibited systematic variations along the two axes of the position of the tongue body's center. Some non-random associations between vowel sounds and colors remained effective with pitch and intensity of the sounds equalized in the control experiment, which suggests that other acoustic factors such as inherent pitch of vowels cannot solely account for the current results. Taken together, these results imply that the association between phonetic features and colors is not random, and this synesthesia-like association is shared by people in the general population.

7.
Ann Rehabil Med ; 41(4): 705-708, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28971057

ABSTRACT

Being located in the hypogastric area, the ilioinguinal nerve, together with iliohypogastric nerve, can be damaged during lower abdominal surgeries. Conventionally, the diagnosis of ilioinguinal neuropathy relies on clinical assessments, and standardized diagnostic methods have not been established as of yet. We hereby report the case of young man who presented ilioinguinal neuralgia with symptoms of burning pain in the right groin and scrotum shortly after receiving inguinal herniorrhaphy. To raise the diagnostic certainty, we used a real-time ultrasonography (US) to guide a monopolar electromyography needle to the ilioinguinal nerve, and then performed a motor conduction study. A subsequent US-guided ilioinguinal nerve block resulted in complete resolution of the patient's neuralgic symptoms.

8.
J Phon ; 65: 45-59, 2017 Nov.
Article in English | MEDLINE | ID: mdl-31346299

ABSTRACT

Studies of speech accommodation provide evidence for change in use of language structures beyond the critical/sensitive period. For example, Sancier and Fowler (1997) found changes in the voice-onset-times (VOTs) of both languages of a Portuguese-English bilingual as a function of her language context. Though accommodation has been studied widely within a monolingual context, it has received less attention in and between the languages of bilinguals. We tested whether these findings of phonetic accommodation, speech accommodation at the phonetic level, would generalize to a sample of Spanish-English bilinguals. We recorded participants reading Spanish and English sentences after 3-4 months in the US and after 2-4 weeks in a Spanish-speaking country and measured the VOTs of their voiceless plosives. Our statistical analyses show that participants' English VOTs drifted towards those of the ambient language, but their Spanish VOTs did not. We found considerable variation in the extent of individual participants' drift in English. Further analysis of our results suggested that native-likeness of L2 VOTs and extent of active language use predict the extent of drift. We provide a model based on principles of self-organizing dynamical systems to account for our Spanish-English phonetic drift findings and the Portuguese-English findings.
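
For illustration only, the sketch below shows one simple way per-speaker VOT drift between two sessions could be quantified (session means and a paired test). The VOT values are fabricated placeholders, and the paper's actual statistical analyses are not reproduced here.

```python
# An illustrative sketch (not the authors' analysis) of quantifying per-speaker
# voice-onset-time drift between two recording sessions. All numbers are
# made-up placeholders.
import numpy as np
from scipy import stats

# Mean English VOT (ms) per speaker in the US, and again after several weeks
# of immersion in a Spanish-speaking country (fabricated values).
vot_us = np.array([72.0, 65.5, 80.2, 58.9, 69.3, 74.1])
vot_abroad = np.array([66.4, 63.0, 71.8, 57.5, 62.2, 70.9])

drift = vot_abroad - vot_us            # negative = drift toward Spanish-like VOTs
t_stat, p_value = stats.ttest_rel(vot_abroad, vot_us)

print("per-speaker drift (ms):", np.round(drift, 1))
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```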

9.
J Acoust Soc Am ; 139(2): 713-27, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26936555

ABSTRACT

The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry.


Subject(s)
Signal Processing, Computer-Assisted , Speech Acoustics , Speech Production Measurement/methods , Voice Quality , Acoustics , Adult , Algorithms , Female , Fourier Analysis , Humans , Linear Models , Male , Middle Aged , Reproducibility of Results , Sound Spectrography , Young Adult
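
For context, the sketch below shows a conventional LPC(Burg)-based formant estimate of the kind used as a baseline in this study: fit LPC coefficients, take the polynomial roots, and convert pole angles to frequencies. It is not the WLP-AME or reassigned-spectrogram method, and the order, thresholds, and test signal are illustrative assumptions; it also inherits the F0 bias the paper documents.

```python
# A conventional LPC-based formant estimation sketch (Burg method via librosa).
# Analysis order, thresholds, and the stand-in frame are illustrative.
import numpy as np
import librosa

def lpc_formants(frame, sr, order=12, max_bandwidth=400.0):
    """Estimate candidate formant frequencies (Hz) from one pre-emphasized frame."""
    a = librosa.lpc(frame, order=order)          # Burg-method LPC coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]            # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    bandwidths = -(sr / np.pi) * np.log(np.abs(roots))
    keep = (freqs > 90) & (bandwidths < max_bandwidth)
    return np.sort(freqs[keep])

if __name__ == "__main__":
    # Stand-in signal just to show usage; real use would pass a voiced vowel frame.
    sr = 10000
    frame = np.random.default_rng(0).standard_normal(512)
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])   # pre-emphasis
    print(lpc_formants(frame, sr)[:3])   # nominal F1-F3 candidates
```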
10.
Ann Rehabil Med ; 40(1): 50-5, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26949669

ABSTRACT

OBJECTIVE: To examine the usefulness of the second lumbrical-interosseous (2L-INT) distal motor latency (DML) comparison test in localizing median neuropathy to the wrist in patients with absent median sensory and motor response in routine nerve conduction studies. METHODS: Electrodiagnostic results from 1,705 hands of patients with carpal tunnel syndrome (CTS) symptoms were reviewed retrospectively. All subjects were evaluated using routine nerve conduction studies: median sensory conduction recorded from digits 1 to 4, motor conduction from the abductor pollicis brevis muscle, and the 2L-INT DML comparison test. RESULTS: Four hundred and one hands from a total of 1,705 were classified as having severe CTS. Among the severe CTS group, 56 hands (14.0%) showed absent median sensory and motor response in a routine nerve conduction study, and, of those hands, 42 (75.0%) showed an abnormal 2L-INT response. CONCLUSION: The 2L-INT DML comparison test proved to be a valuable electrodiagnostic technique in localizing median mononeuropathy at the wrist, even in the most severe CTS patients.

11.
Ecol Psychol ; 28(4): 216-261, 2016 Oct 01.
Article in English | MEDLINE | ID: mdl-28367052

ABSTRACT

To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception-production relationship. We posit the more parsimonious view that both domains depend on commensurate articulatory information. Our proposed framework combines principles of the Perceptual Assimilation Model (PAM) and Articulatory Phonology (AP). According to PAM, infants attune to articulatory information in native speech and detect similarities of nonnative phones to native articulatory patterns. The AP premise that gestures of the speech organs are the basic elements of phonology offers articulatory similarity metrics while satisfying the requirement that phonological information be discrete and contrastive: (a) distinct articulatory organs produce vocal tract constrictions and (b) phonological contrasts recruit different articulators and/or constrictions of a given articulator that differ in degree or location. Various lines of research suggest young children perceive articulatory information, which guides their productions: discrimination of between- versus within-organ contrasts, simulations of attunement to language-specific articulatory distributions, multimodal speech perception, oral/vocal imitation, and perceptual effects of articulator activation or suppression. We conclude that articulatory gesture information serves as the foundation for developmental integrality of speech perception and production.

12.
J Acoust Soc Am ; 134(5): 3808-17, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24180790

ABSTRACT

Previous work has shown that velar stops are produced with a forward movement during closure, forming a forward (anterior) loop for a VCV sequence, when the preceding vowels are back or mid. Are listeners aware of this aspect of articulatory dynamics? The current study used articulatory synthesis to examine how such kinematic patterns are reflected in the acoustics, and whether those acoustic patterns elicit different goodness ratings. In Experiment I, the size and direction of loops was modulated in articulatory synthesis. The resulting stimuli were presented to listeners for a naturalness judgment. Results show that listeners rate forward loops as more natural than backward loops, in agreement with typical productions. Acoustic analysis of the synthetic stimuli shows that forward loops exhibit shorter and shallower VC transitions than CV transitions. In Experiment II, three acoustic parameters were employed incorporating F3-F2 distance, transition slope, and transition length to systematically modulate the magnitude of VC and CV transitions. Listeners rated the naturalness in accord with those of Experiment I. This study reveals that there is sufficient information in the acoustic signature of "velar loops" to affect perceptual preference. Similarity to typical productions seemed to determine preferences, not acoustic distinctiveness.


Subject(s)
Speech Acoustics , Speech Perception , Tongue/physiology , Voice Quality , Acoustic Stimulation , Audiometry, Speech , Biomechanical Phenomena , Discrimination, Psychological , Female , Humans , Male , Movement , Pattern Recognition, Physiological , Phonetics , Sound Spectrography , Time Factors
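
The three acoustic parameters named in the abstract can be illustrated with a small helper that summarizes a formant transition; the exact definitions used in the study may differ, and the formant tracks below are invented.

```python
# A hedged sketch of measuring F3-F2 distance, transition slope, and transition
# length from formant tracks around a consonant closure. Values are fabricated.
import numpy as np

def transition_measures(f2_track, f3_track, frame_dur=0.005):
    """Summarize one formant transition (e.g., the VC or the CV portion)."""
    f3_f2_distance = float(np.mean(f3_track - f2_track))          # Hz
    transition_length = len(f2_track) * frame_dur                  # s
    slope = (f2_track[-1] - f2_track[0]) / transition_length       # Hz/s
    return f3_f2_distance, slope, transition_length

# Invented VC and CV F2/F3 tracks around a velar closure in a VCV sequence.
vc_f2 = np.linspace(1100, 1900, 12); vc_f3 = np.linspace(2500, 2300, 12)
cv_f2 = np.linspace(1950, 1200, 20); cv_f3 = np.linspace(2250, 2550, 20)

print("VC:", transition_measures(vc_f2, vc_f3))
print("CV:", transition_measures(cv_f2, cv_f3))
```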
13.
J Acoust Soc Am ; 134(3): 2235-46, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23967953

ABSTRACT

While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones.


Subject(s)
Acoustics , Pattern Recognition, Automated , Phonetics , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Production Measurement , Voice Quality , Feasibility Studies , Humans , Reproducibility of Results , Software Design , Sound Spectrography
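
The agreement metric reported above (percentage of boundaries within 30 ms of the manual segmentation) is straightforward to compute; a minimal sketch follows, with made-up boundary times rather than actual hmalign or p2fa output.

```python
# A simple sketch of boundary agreement within a tolerance (here 30 ms).
import numpy as np

def boundary_agreement(auto_boundaries, manual_boundaries, tol=0.030):
    """Fraction of paired boundaries whose difference is within `tol` seconds."""
    auto = np.asarray(auto_boundaries, dtype=float)
    manual = np.asarray(manual_boundaries, dtype=float)
    assert auto.shape == manual.shape, "boundaries must be paired one-to-one"
    return float(np.mean(np.abs(auto - manual) <= tol))

manual = [0.112, 0.190, 0.255, 0.333, 0.410, 0.478]   # hand-placed (invented)
auto   = [0.100, 0.195, 0.300, 0.341, 0.402, 0.470]   # aligner output (invented)
print(boundary_agreement(auto, manual))   # 0.833 -> 83.3% within 30 ms
```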
14.
J Phon ; 41(2): 63-77, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-24496111

ABSTRACT

There is a tendency for spoken consonant-vowel (CV) syllables, in babbling in particular, to show preferred combinations: labial consonants with central vowels, alveolars with front, and velars with back. This pattern was first described by MacNeilage and Davis, who found the evidence compatible with their "frame-then-content" (F/C) model. F/C postulates that CV syllables in babbling are produced with no control of the tongue (and therefore effectively random tongue positions) but systematic oscillation of the jaw. Articulatory Phonology (AP; Browman & Goldstein) predicts that CV preferences will depend on the degree of synergy of tongue movements for the C and V. We present computational modeling of both accounts using articulatory synthesis. Simulations found better correlations between patterns in babbling and the AP account than with the F/C model. These results indicate that the underlying assumptions of the F/C model are not supported and that the AP account provides a better account with broader coverage by showing that articulatory synergies influence all CV syllables, not just the most common ones.
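
As a toy illustration of comparing babbling patterns with model predictions, the sketch below correlates observed/expected consonant-vowel co-occurrence ratios from two 3x3 grids; the counts are invented, and the actual modeling in the paper used articulatory synthesis rather than these numbers.

```python
# A toy sketch: correlate the CV co-occurrence pattern observed in babbling
# with a model-predicted pattern, each expressed as observed/expected ratios.
# All counts are invented placeholders.
import numpy as np
from scipy.stats import pearsonr

def observed_over_expected(counts):
    """Ratio of each cell count to its expectation from the marginals."""
    counts = np.asarray(counts, dtype=float)
    expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / counts.sum()
    return counts / expected

# Rows: labial, coronal, velar consonants; columns: front, central, back vowels.
babbling_counts = np.array([[20, 55, 25],
                            [60, 30, 20],
                            [15, 25, 45]])
model_counts = np.array([[25, 50, 30],
                         [55, 35, 25],
                         [20, 30, 40]])

r, p = pearsonr(observed_over_expected(babbling_counts).ravel(),
                observed_over_expected(model_counts).ravel())
print(f"r = {r:.2f}, p = {p:.3f}")
```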

15.
J Acoust Soc Am ; 132(6): 3980-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231127

ABSTRACT

Speech can be represented as a constellation of constricting vocal tract actions called gestures, whose temporal patterning with respect to one another is expressed in a gestural score. Current speech datasets do not come with gestural annotation and no formal gestural annotation procedure exists at present. This paper describes an iterative analysis-by-synthesis landmark-based time-warping architecture to perform gestural annotation of natural speech. For a given utterance, the Haskins Laboratories Task Dynamics and Application (TADA) model is employed to generate a corresponding prototype gestural score. The gestural score is temporally optimized through an iterative timing-warping process such that the acoustic distance between the original and TADA-synthesized speech is minimized. This paper demonstrates that the proposed iterative approach is superior to conventional acoustically-referenced dynamic timing-warping procedures and provides reliable gestural annotation for speech datasets.


Subject(s)
Acoustics , Gestures , Glottis/physiology , Mouth/physiology , Speech Acoustics , Voice Quality , Biomechanical Phenomena , Female , Humans , Male , Models, Theoretical , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Production Measurement/methods , Time Factors
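
One ingredient of such a procedure, aligning natural and synthesized speech by dynamic time warping over acoustic features, can be sketched as follows. This generic example uses librosa MFCCs and DTW; it is not the TADA-based iterative landmark architecture itself, and the sample rate, feature settings, and stand-in signals are assumptions.

```python
# A rough sketch of DTW between MFCCs of a natural utterance and its
# synthesized counterpart, yielding a warping path that could retime a
# prototype gestural score. Generic illustration, not the paper's system.
import numpy as np
import librosa

def acoustic_warping_path(natural, synthetic, sr=16000, n_mfcc=13):
    """Return total alignment cost and a frame-level warping path."""
    mfcc_nat = librosa.feature.mfcc(y=natural, sr=sr, n_mfcc=n_mfcc)
    mfcc_syn = librosa.feature.mfcc(y=synthetic, sr=sr, n_mfcc=n_mfcc)
    cost, path = librosa.sequence.dtw(X=mfcc_nat, Y=mfcc_syn, metric="euclidean")
    # path is (n_steps, 2): (natural_frame, synthetic_frame) index pairs,
    # returned in reverse time order; cost[-1, -1] is the total alignment cost.
    return cost[-1, -1], path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    natural = rng.standard_normal(16000)      # 1.0 s stand-in for real speech
    synthetic = rng.standard_normal(19200)    # 1.2 s stand-in for synthesized speech
    total_cost, path = acoustic_warping_path(natural, synthetic)
    print(total_cost, path.shape)
```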
16.
J Phon ; 40(3): 374-389, 2012 May 01.
Article in English | MEDLINE | ID: mdl-22773868

ABSTRACT

This study compares the time to initiate words with varying syllable structures (V, VC, CV, CVC, CCV, CCVC). In order to test the hypothesis that different syllable structures require different amounts of time to prepare their temporal controls, or plans, two delayed naming experiments were carried out. In the first of these the initiation time was determined from acoustic recordings. The results confirmed the hypothesis but also showed an interaction with the initial segment (i.e., vowel-initial words were initiated later than words beginning with consonants, but this difference was much smaller for words starting with stops compared to /l/ or /s/). Adding a coda did not affect the initiation time. In order to rule out effects of segment-specific articulatory-to-acoustic interval differences, a second experiment was performed in which speech movements of the tongue, the jaw and the lips were recorded by means of electromagnetic articulography. Results from initiation time, based on articulatory measurements, showed a significant syllable structure effect with VC words being initiated significantly later than CV(C) words. Only minor effects of the initial segment were found. These results can be partly explained by the amount of accumulated experience a speaker has in coordinating the relevant gesture combinations and triggering them appropriately in time.
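
For illustration, acoustic initiation time in a delayed-naming task might be estimated as the time from the go signal to the first frame whose energy rises well above the pre-speech noise floor; the threshold and window settings in the sketch below are assumptions, not the paper's measurement criteria.

```python
# A simple, hypothetical sketch of estimating acoustic initiation time:
# time from the go signal to the first frame whose RMS energy exceeds a
# threshold above the pre-speech noise floor.
import numpy as np

def acoustic_initiation_time(signal, sr, go_signal_time,
                             frame_len=0.010, threshold_db=15.0):
    """Return seconds from the go signal to acoustic speech onset (or None)."""
    start = int(go_signal_time * sr)
    frame = int(frame_len * sr)
    post = signal[start:]
    n_frames = len(post) // frame
    rms = np.array([np.sqrt(np.mean(post[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    noise_floor = np.median(rms[:10])               # first 100 ms as baseline
    above = np.where(20 * np.log10(rms / noise_floor) > threshold_db)[0]
    if len(above) == 0:
        return None
    return above[0] * frame_len

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(0)
    sig = 0.001 * rng.standard_normal(sr)            # 1 s of near-silence
    sig[8000:12000] += 0.5 * np.sin(2 * np.pi * 150 * np.arange(4000) / sr)
    print(acoustic_initiation_time(sig, sr, go_signal_time=0.0))  # ~0.5 s
```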

17.
J Acoust Soc Am ; 131(3): 2270-87, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22423722

ABSTRACT

Studies have shown that supplementary articulatory information can help to improve the recognition rate of automatic speech recognition systems. Unfortunately, articulatory information is not directly observable, necessitating its estimation from the speech signal. This study describes a system that recognizes articulatory gestures from speech, and uses the recognized gestures in a speech recognition system. Recognizing gestures for a given utterance involves recovering the set of underlying gestural activations and their associated dynamic parameters. This paper proposes a neural network architecture for recognizing articulatory gestures from speech and presents ways to incorporate articulatory gestures for a digit recognition task. The lack of a natural speech database containing gestural information prompted us to use three stages of evaluation. First, the proposed gestural annotation architecture was tested on a synthetic speech dataset, which showed that the use of estimated tract-variable time functions improved gesture recognition performance. In the second stage, gesture-recognition models were applied to natural speech waveforms and word recognition experiments revealed that the recognized gestures can improve the noise-robustness of a word recognition system. In the final stage, a gesture-based Dynamic Bayesian Network was trained and the results indicate that incorporating gestural information can improve word recognition performance compared to acoustic-only systems.


Subject(s)
Gestures , Speech Perception/physiology , Speech Recognition Software , Speech/physiology , Bayes Theorem , Humans , Phonetics , Speech Acoustics , Vocabulary
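
A minimal, hypothetical sketch of a frame-level gesture recognizer in the spirit of the system described above follows; the feature dimensions, layer sizes, and number of gestural outputs are invented, and the original system's architecture is not reproduced.

```python
# A hypothetical sketch: map per-frame acoustic features (optionally
# concatenated with estimated tract-variable time functions) to per-frame
# gestural activations. All dimensions are invented.
import torch
import torch.nn as nn

class GestureRecognizer(nn.Module):
    def __init__(self, n_acoustic=39, n_tract_vars=6, n_gestures=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_acoustic + n_tract_vars, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, n_gestures),
        )

    def forward(self, acoustic, tract_vars):
        # acoustic: (batch, frames, n_acoustic); tract_vars: (batch, frames, n_tract_vars)
        x = torch.cat([acoustic, tract_vars], dim=-1)
        # Sigmoid activations: each gesture can be independently active per frame.
        return torch.sigmoid(self.net(x))

if __name__ == "__main__":
    model = GestureRecognizer()
    acoustic = torch.randn(4, 120, 39)
    tract_vars = torch.randn(4, 120, 6)
    print(model(acoustic, tract_vars).shape)   # torch.Size([4, 120, 8])
```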
18.
Lang Speech ; 55(Pt 4): 503-15, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23420980

ABSTRACT

Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these "type" counts are incommensurate with the babbling measures, which are necessarily "token" counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language.


Subject(s)
Language Development , Lip/physiology , Phonetics , Speech Perception/physiology , Speech/physiology , Adult , Biomechanical Phenomena/physiology , Databases, Factual , Feedback , Humans , Infant
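
The token-count approach described above can be sketched by tallying consonant-vowel co-occurrences over running syllable tokens and comparing each cell with the count expected from the marginal C and V frequencies alone; the tiny example corpus below is invented.

```python
# A minimal sketch of token-based CV co-occurrence counting with
# observed/expected comparison. The tiny "corpus" is invented.
from collections import Counter

# Each syllable token as a (consonant_class, vowel_class) pair.
tokens = [("labial", "central"), ("coronal", "front"), ("velar", "back"),
          ("labial", "central"), ("coronal", "front"), ("labial", "front"),
          ("velar", "back"), ("coronal", "central"), ("labial", "central")]

cv_counts = Counter(tokens)
c_counts = Counter(c for c, _ in tokens)
v_counts = Counter(v for _, v in tokens)
n = len(tokens)

for (c, v), observed in sorted(cv_counts.items()):
    expected = c_counts[c] * v_counts[v] / n
    print(f"{c:7s} + {v:7s}: observed {observed}, expected {expected:.2f}, "
          f"O/E = {observed / expected:.2f}")
```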
19.
Lang Learn Dev ; 7(3): 243-249, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-23825933

ABSTRACT

The article by MacNeilage and Davis in this issue, entitled "In Defense of the 'Frames, then Content' (FC) Perspective on Speech Acquisition: A Response to Two Critiques" appears to assume that the only alternative to segment-level control is oscillation specifically of the jaw; however, other articulators could be oscillated by infants as well. This allows the preferred CV combinations to emerge without positing a level of segmental control in babbling. Their response does not address our modeling work, which, rather similarly to Davis's own modeling (Serkhane, Schwartz, Boë, Davis, & Matyear, 2007), shows little support for the Frame-then-Content (F/C) account. Our results show substantial support for the Articulatory Phonology (AP) one. A closer look at feeding in infants shows substantial control of the tongue and lips, casting further doubt on the foundation of the F/C account.

20.
Lang Learn Dev ; 7(3): 202-225, 2011.
Article in English | MEDLINE | ID: mdl-23505343

ABSTRACT

Certain consonant/vowel combinations (labial/central, coronal/front, velar/back) are more frequent in babbling as well as, to a lesser extent, in adult language, than chance would dictate. The "Frame then Content" (F/C) hypothesis (Davis & MacNeilage, 1994) attributes this pattern to biomechanical vocal-tract biases that change as infants mature. Articulatory Phonology (AP; Browman and Goldstein 1989) attributes preferences to demands placed on shared articulators. F/C implies that preferences will diminish as articulatory control increases, while AP does not. Here, babbling from children at 6, 9 and 12 months in English, French and Mandarin environments was examined. There was no developmental trend in CV preferences, although older ages exhibited greater articulatory control. A perception test showed no evidence of bias toward hearing the preferred combinations. Modeling using articulatory synthesis found limited support for F/C but more for AP, including data not originally encompassed in F/C. AP thus provides an alternative biomechanical explanation.
