Results 1 - 20 of 38
1.
J Acoust Soc Am ; 152(1): 43, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1949893

ABSTRACT

Hands-on, project-based learning was difficult to achieve in online classes during the COVID-19 pandemic. The Engineering Experimentation course at Cooper Union teaches third-year mechanical engineering students practical experimental skills to measure physical phenomena, which typically requires in-person laboratory classes. In response to COVID-19, a low-cost, at-home laboratory kit was devised to give students tools to conduct experiments. The kit included a microcontroller acting as a data-acquisition device and custom software to facilitate data transfer. A speed of sound laboratory was designed with the kit to teach skills in data collection, signal processing, and error analysis. The students derived the sound speed by placing two microphones a known distance apart and measuring the time for an impulsive signal to travel from one to the other. The students reported sound speeds from 180.7-477.8 m/s in a temperature range from 273.7-315.9 K. While these reported speeds contained a large amount of error, the exercise allowed the students to learn how to account for sources of error within experiments. This paper also presents final projects designed by the students at home (an impedance tube and two Doppler shift experiments) that exhibit successful and effective low-cost solutions for demonstrating and measuring acoustic phenomena.
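For readers who want to reproduce the core measurement, the sketch below illustrates the general time-of-flight idea: a minimal Python example using cross-correlation on synthetic signals. It is not the course's actual data-acquisition software, and the microphone distance and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_sound_speed(mic1, mic2, fs, mic_distance_m):
    """Estimate the speed of sound from the arrival-time delay of an
    impulsive signal recorded by two microphones a known distance apart."""
    # Cross-correlate the two recordings; the lag of the peak gives the
    # time the impulse took to travel from mic 1 to mic 2.
    xcorr = correlate(mic2, mic1, mode="full")
    lags = correlation_lags(len(mic2), len(mic1), mode="full")
    delay_s = lags[np.argmax(np.abs(xcorr))] / fs
    return mic_distance_m / delay_s

# Synthetic check: a click delayed by 1 ms over 0.343 m should give ~343 m/s.
fs = 48_000
impulse = np.zeros(fs // 10)
impulse[100] = 1.0
delayed = np.roll(impulse, int(0.001 * fs))
print(estimate_sound_speed(impulse, delayed, fs, 0.343))
```

With real recordings, the room temperature would be logged alongside each trial so the estimated speed can be compared against the expected value for air at that temperature.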


Subject(s)
COVID-19 , Laboratories , Acoustics , COVID-19/epidemiology , Humans , Pandemics , Students
2.
J Acoust Soc Am ; 152(1): 9, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1949892

ABSTRACT

This paper describes ongoing developments to an advanced laboratory course at Kettering University, which is targeted to students in engineering and engineering physics and emphasizes theoretical, computational, and experimental components in the context of airborne acoustics and modal testing [cf. D. A. Russell and D. O. Ludwigsen, J. Acoust. Soc. Am. 131, 2515-2524 (2012)]. These developments have included a transition to electronic laboratory notebooks and cloud-based computing resources, incorporation of updated hardware and software, and creation and testing of a multiple-choice assessment instrument for the course. When Kettering University suddenly shifted to exclusively remote teaching in March 2020 due to the COVID-19 pandemic, many of these changes proved to be essential for enabling rapid adaptation to a situation in which a laboratory was not available for the course. Laboratory activities were rewritten by crowdsourcing archived data, videos were incorporated to illustrate dynamic phenomena, and computer simulations were used to retain student interactivity. The comparison of multiple measures, including the assessment instrument, team-based grades on project papers, and individual grades on final exams, indicates that most students were successful at learning the course material and adapting to work on team-based projects in the midst of challenging remote learning conditions.


Subject(s)
COVID-19 , Acoustics , COVID-19/epidemiology , Humans , Learning , Pandemics , Students , Teaching
3.
Eur Arch Otorhinolaryngol ; 279(9): 4617-4621, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-1941601

ABSTRACT

PURPOSE: To investigate whether the Acoustic Voice Quality Index (AVQI) and the Acoustic Breathiness Index (ABI) remain valid and comparable to previous unmasked measurements when the speaker wears a surgical mask or an FFP-2 mask to reduce the risk of transmitting airborne viruses such as SARS-CoV-2. METHODS: A convenience sample of 31 subjectively healthy participants underwent AVQI and ABI voice examination four times: twice wearing no mask, once with a surgical mask, and once with an FFP-2 mask as used regularly in our hospital. The order of the four mask conditions was randomized. The difference between the two recordings made without a mask was then compared to the differences between the recordings with each mask and one recording without a mask. RESULTS: Sixty-two percent of the AVQI readings without a mask represented perfectly healthy voices; the largest AVQI value without a mask was 4.0. The mean absolute difference in AVQI was 0.45 between the measurements without masks, 0.48 between no mask and the surgical mask, and 0.51 between no mask and the FFP-2 mask. These differences were neither clinically nor statistically significant. For the ABI, the corresponding absolute differences were 0.48, 0.69, and 0.56, again neither clinically nor statistically different. CONCLUSION: Based on a convenience sample of healthy or only mildly impaired voices, wearing COVID-19 protective masks does not substantially impair either AVQI or ABI results.


Subject(s)
COVID-19 , Dysphonia , Acoustics , COVID-19/prevention & control , Dysphonia/diagnosis , Humans , Masks , Reproducibility of Results , SARS-CoV-2 , Severity of Illness Index , Speech Acoustics , Speech Production Measurement/methods , Voice Quality
4.
Nat Commun ; 13(1): 3459, 2022 06 16.
Article in English | MEDLINE | ID: covidwho-1921608

ABSTRACT

Newly developed acoustic technologies are playing a transformational role in life science and biomedical applications ranging from the activation and inactivation of mechanosensitive ion channels for fundamental physiological processes to the development of contact-free, precise biofabrication protocols for tissue engineering and large-scale manufacturing of organoids. Here, we provide our perspective on the development of future acoustic technologies and their promise in addressing critical challenges in biomedicine.


Subject(s)
Acoustics , Sound , Delivery of Health Care , Organoids , Tissue Engineering
5.
J Acoust Soc Am ; 151(4): 2672, 2022 04.
Article in English | MEDLINE | ID: covidwho-1807297

ABSTRACT

Sound & Music is an introductory musical acoustics course designed from the ground up using Physics Education Research techniques. The onset of the COVID-19 pandemic forced curriculum changes that were essentially reactive in scope. This was a universal problem that opened up discussions with other educators. Although it had existed previously, the idea of "flipping" a class became a popular concept during the pandemic. Pedagogies applied to an introductory acoustics course are examined in terms of what they meant in the context of the pandemic. This paper looks at the structure and format of the course pre-pandemic and discusses excerpts from three hands-on activities that were each designed using Physics Education Research techniques. It then examines how these were altered for use during the pandemic, as well as other challenges that were overcome during this time, summarizing which changes worked and which did not.


Subject(s)
COVID-19 , Music , Acoustics , Humans , Pandemics , Physics , Research Design
6.
J Acoust Soc Am ; 151(4): 2276, 2022 04.
Article in English | MEDLINE | ID: covidwho-1807295

ABSTRACT

In March 2020, with the advent of COVID-19, emergency plans were put in place to deliver the Master's Course in Environmental and Architectural Acoustics entirely online. Although the acoustics laboratory is large, it was deemed unsafe for face-to-face teaching due to a complete lack of ventilation in the anechoic and reverberation chambers, so an alternative had to be created for the 2020/21 delivery. In September 2020, it was decided that a "Lab in a Box" supported by online demonstrations and pre-recorded films would create the best alternative experience for the postgraduate students. The "Lab in a Box" allowed demonstrations to be replicated at home or in the garden using a calibrated, Windows-based measurement platform built from audio components. Examples of such laboratories included Fast and Slow Measurements, Noise Exposure, Noise Survey, Loudness, Reverberation Time, and Speech Intelligibility. The results showed that the students gained from more independence and increased flexibility in delivery, achieving very similar marks. This has opened up the possibility of increasing student numbers by reusing these alternative teaching strategies in the future.


Subject(s)
COVID-19 , Speech Perception , Acoustics , COVID-19/epidemiology , Humans , Laboratories , Pandemics , Speech Intelligibility , Teaching
7.
J Biomed Inform ; 130: 104078, 2022 Jun.
Article in English | MEDLINE | ID: covidwho-1804424

ABSTRACT

Scientific evidence shows that acoustic analysis could be an indicator for diagnosing COVID-19. Analysis of breath sounds recorded on smartphones shows that patients with COVID-19 have different patterns in both the time domain and the frequency domain. These patterns are used in this paper to diagnose COVID-19 infection. Statistics of the sound signals, frequency-domain analysis, and Mel-Frequency Cepstral Coefficients (MFCCs) are calculated and fed into two classifiers, k-Nearest Neighbors (kNN) and a Convolutional Neural Network (CNN), to diagnose whether a user has contracted COVID-19. Test results show that an accuracy of over 97% could be achieved with the CNN classifier and more than 85% with kNN using optimized features. Optimization methods for selecting the best features and various metrics for evaluating performance are also demonstrated in this paper. Owing to its high accuracy, the CNN model was implemented in an Android app to diagnose COVID-19 with a probability indicating the confidence level. An initial medical test shows a similar result between the method proposed in this paper and the lateral flow method, which indicates that the proposed method is feasible and effective. Because the method uses breath sounds and runs on a smartphone, it could be used by anyone regardless of the availability of other medical resources, making it a powerful tool for society to diagnose COVID-19.
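As a rough illustration of the feature-plus-classifier pipeline described above, the sketch below extracts time-domain statistics and MFCC summaries and fits a k-nearest-neighbors model on synthetic stand-in signals. The sampling rate and toy labels are assumptions; this is not the authors' code or dataset.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def breath_features(y, sr, n_mfcc=13):
    """Time-domain statistics plus MFCC summaries for one breath recording."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([
        [np.mean(y), np.std(y), np.mean(np.abs(y))],  # time-domain statistics
        mfcc.mean(axis=1), mfcc.std(axis=1),          # frequency-domain summary
    ])

# Synthetic stand-ins for recorded breath sounds (real data would be loaded
# from smartphone recordings, e.g. with librosa.load).
sr = 16_000
rng = np.random.default_rng(0)
recordings = [rng.normal(scale=s, size=sr * 2) for s in (0.1, 0.1, 0.3, 0.3)]
labels = np.array([0, 0, 1, 1])  # toy labels: 0 = negative, 1 = COVID-positive

X = np.vstack([breath_features(y, sr) for y in recordings])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict(X))
```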


Subject(s)
Artificial Intelligence , COVID-19 , Acoustics , COVID-19/diagnosis , Humans , Neural Networks, Computer , Respiratory Sounds/diagnosis , Smartphone
8.
Neuroimage ; 252: 119044, 2022 05 15.
Article in English | MEDLINE | ID: covidwho-1756286

ABSTRACT

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, congruent visual inputs promote comprehension when the acoustic signal is degraded. When this visual input is masked, speech comprehension consequently becomes more difficult. However, it remains unclear which levels of speech processing are affected, and under which circumstances, when the mouth area is occluded. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). We additionally added a distractor speaker in half of the trials in order to create an ecologically difficult listening situation. A decoding model trained on the clear AV speech was used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e., pitch and formant frequencies), while reconstruction of higher-level features of speech segmentation (phoneme and word onsets) was especially impaired by masks in difficult listening situations. As we used surgical face masks in our study, which have only mild effects on speech acoustics, we interpret our findings as the result of the missing visual input. Our findings extend previous behavioural results by demonstrating the complex contextual effects of occluding relevant visual information on speech processing.
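Stimulus-reconstruction ("backward") decoding models of this kind are commonly implemented as regularized linear regression from time-lagged sensor data to a speech feature. The sketch below shows that general idea on synthetic data, using ridge regression with an assumed lag window; it is not the authors' MEG pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(sensors, max_lag):
    """Stack time-lagged copies of each sensor channel (lags 0..max_lag)."""
    cols = [np.roll(sensors, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0  # zero out samples contaminated by wrap-around
    return X

rng = np.random.default_rng(0)
n_t, n_ch = 2000, 32                       # toy stand-ins for MEG data
envelope = rng.normal(size=n_t)            # "speech envelope" to reconstruct
sensors = np.outer(envelope, rng.normal(size=n_ch)) + 0.5 * rng.normal(size=(n_t, n_ch))

X = lagged_design(sensors, max_lag=10)
model = Ridge(alpha=1.0).fit(X[:1500], envelope[:1500])  # train on "clear" data
r = np.corrcoef(model.predict(X[1500:]), envelope[1500:])[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```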


Subject(s)
Speech Perception , Speech , Acoustic Stimulation , Acoustics , Humans , Mouth , Visual Perception
9.
Int J Environ Res Public Health ; 19(6)2022 03 12.
Article in English | MEDLINE | ID: covidwho-1760581

ABSTRACT

Headsets are increasingly used in the working environment. In addition to being frequently used by call-centre staff, they are also becoming more popular with remote workers and teleconference participants. The aim of this work was to describe and evaluate the acoustic signal parameters reproduced by headsets and to examine the factors affecting the values of these parameters. The tests were carried out in laboratory conditions using a manikin (head and torso simulator) designed for acoustic research. A total of 12 headset models were tested. The results show that the A-weighted sound pressure level of the test signal reproduced by four headsets (at 100% gain) and two headsets (at 75% gain) exceeded 85 dB. The highest equivalent A-weighted sound pressure level was 92.5 dB, which means that the headset should not be used for more than approximately 1 h and 25 min; otherwise, the criterion value will be exceeded. The analysis of the acoustic signal reproduced by the headsets confirmed that the A-weighted sound pressure level depended on the gain level in the test signal reproduction path. This value also depended on the type of connector used, the computer from which the test signal was reproduced, and the type of sound card used.
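The quoted exposure limit follows from the equal-energy rule: the permitted time halves for every 3 dB above the 85 dB(A) / 8 h criterion. A small sketch of that calculation (assuming the standard 3 dB exchange rate) reproduces the roughly 1 h 25 min figure:

```python
def allowed_exposure_hours(laeq_db, criterion_db=85.0, reference_hours=8.0,
                           exchange_rate_db=3.0):
    """Permitted daily exposure time for a given equivalent A-weighted SPL,
    using the equal-energy (3 dB exchange rate) rule relative to an
    85 dB(A) / 8 h criterion."""
    return reference_hours * 2 ** ((criterion_db - laeq_db) / exchange_rate_db)

hours = allowed_exposure_hours(92.5)
print(f"{int(hours)} h {round((hours % 1) * 60)} min")  # ~1 h 25 min
```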


Subject(s)
Hearing Loss, Noise-Induced , Noise, Occupational , Acoustics , Humans , Workplace
10.
Sensors (Basel) ; 22(5)2022 Mar 02.
Article in English | MEDLINE | ID: covidwho-1742608

ABSTRACT

Recently, the issue of sound quality inside vehicles has attracted interest from researchers and industry alike, due to health concerns and to increase the appeal of vehicles to consumers. This work extends the analysis of interior acoustic noise inside a vehicle under several conditions by comparing measured power levels against two different models for acoustic noise, namely the Gaussian and the alpha-stable distributions. Noise samples were collected in a scenario with real traffic patterns using a measurement setup composed of a Raspberry Pi board and a strategically positioned microphone. The analysis of the acquired data shows that the observed noise levels are higher when traffic conditions are good. Additionally, the interior noise presented considerable impulsiveness, which tends to be more severe when traffic is slower. Finally, our results suggest that noise sources related to the vehicle itself and its movement are the most relevant ones in the composition of the interior acoustic noise.
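The Gaussian-versus-alpha-stable comparison hinges on how heavy the tails of the noise amplitude distribution are. The sketch below illustrates a basic impulsiveness check on synthetic data (Gaussian fit, excess kurtosis, and a tail-probability comparison); the alpha-stable fit itself, available as scipy.stats.levy_stable, is only noted in a comment because its maximum-likelihood fit can be slow. This is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for interior noise samples: a Gaussian background with
# occasional impulsive events (the real data came from a Raspberry Pi setup).
noise = np.concatenate([rng.normal(0, 1, 5000), 4 * rng.standard_t(df=2, size=300)])

mu, sigma = stats.norm.fit(noise)                       # Gaussian model
print(f"Gaussian fit: mu={mu:.3f}, sigma={sigma:.3f}")
print(f"excess kurtosis: {stats.kurtosis(noise):.1f}")  # >> 0 flags impulsiveness

# Tail comparison: how often the data exceed 4 sigma vs. the Gaussian prediction.
empirical = np.mean(np.abs(noise - mu) > 4 * sigma)
gaussian = 2 * stats.norm.sf(4)
print(f"P(|x - mu| > 4 sigma): empirical {empirical:.4f} vs Gaussian {gaussian:.6f}")
# An alpha-stable model (scipy.stats.levy_stable) captures such heavy tails;
# its estimated alpha drops below 2 as the noise becomes more impulsive.
```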


Subject(s)
Acoustics , Noise , Sound
11.
J Acoust Soc Am ; 151(2): 1033, 2022 02.
Article in English | MEDLINE | ID: covidwho-1723417

ABSTRACT

Chronic obstructive pulmonary disease (COPD) is the third leading cause of death worldwide with over 3 × 10⁶ deaths in 2019. Such an alarming figure becomes frightening when combined with the number of lost lives resulting from COVID-caused respiratory failure. Because COPD exacerbations identified early can commonly be treated at home, early symptom detections may enable a major reduction of COPD patient readmission and associated healthcare costs; this is particularly important during pandemics such as COVID-19 in which healthcare facilities are overwhelmed. The standard adjuncts used to assess lung function (e.g., spirometry, plethysmography, and CT scan) are expensive, time consuming, and cannot be used in remote patient monitoring of an acute exacerbation. In this paper, a wearable multi-modal system for breathing analysis is presented, which can be used in quantifying various airflow obstructions. The wearable multi-modal electroacoustic system employs a body area sensor network with each sensor-node having a multi-modal sensing capability, such as a digital stethoscope, electrocardiogram monitor, thermometer, and goniometer. The signal-to-noise ratio (SNR) of the resulting acoustic spectrum is used as a measure of breathing intensity. The results are shown from data collected from over 35 healthy subjects and 3 COPD subjects, demonstrating a positive correlation of SNR values to the health-scale score.
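As a rough illustration of using the SNR of an acoustic spectrum as a breathing-intensity measure, the sketch below compares mean spectral power in an assumed breath-sound band against an assumed noise-floor band. The band edges are illustrative; this is not the wearable system's firmware.

```python
import numpy as np
from scipy.signal import welch

def breathing_snr_db(x, fs, signal_band=(100.0, 1000.0), noise_band=(3000.0, 3900.0)):
    """Crude SNR estimate: mean spectral power in an assumed breath-sound band
    relative to an assumed noise-floor band of the same recording."""
    f, psd = welch(x, fs=fs, nperseg=2048)
    in_sig = (f >= signal_band[0]) & (f < signal_band[1])
    in_noi = (f >= noise_band[0]) & (f < noise_band[1])
    return 10 * np.log10(psd[in_sig].mean() / psd[in_noi].mean())

fs = 8000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic stand-in for a digital-stethoscope recording: band-limited
# "breath" energy around 300 Hz over a white noise floor.
x = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.05 * rng.normal(size=t.size)
print(f"{breathing_snr_db(x, fs):.1f} dB")
```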


Subject(s)
COVID-19 , Wearable Electronic Devices , Acoustics , COVID-19/diagnosis , Humans , SARS-CoV-2 , Spirometry
12.
Chem Pharm Bull (Tokyo) ; 70(3): 199-201, 2022 Mar 01.
Article in English | MEDLINE | ID: covidwho-1714684

ABSTRACT

Mass spectrometry (MS) is a powerful methodology for chemical screening because it can directly quantify the substrates and products of enzymes, but its low throughput has been an issue. Recently, an acoustic liquid-handling apparatus (Echo®) used for rapid nano-dispensing has been coupled to a high-sensitivity mass spectrometer to create the Echo® MS system, and we applied this system to screening of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 3CL protease inhibitors. Primary screening of 32,033 chemical samples was completed in 12 h. Among the hits showing selective, dose-dependent 3CL-inhibitory activity, 8 compounds showed antiviral activity in a cell-based assay.


Subject(s)
COVID-19 , Protease Inhibitors , Acoustics , COVID-19/drug therapy , High-Throughput Screening Assays/methods , Humans , Protease Inhibitors/chemistry , Protease Inhibitors/pharmacology , SARS-CoV-2
13.
J Speech Lang Hear Res ; 65(3): 991-1000, 2022 03 08.
Article in English | MEDLINE | ID: covidwho-1692517

ABSTRACT

PURPOSE: The Test for Rating Emotions in Speech (T-RES) has been developed in order to assess the processing of emotions in spoken language. In this tool, spoken sentences, which are composed of emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody in different combinations, are rated by listeners. To date, English, German, and Hebrew versions have been developed, as well as online versions, iT-RES, to adapt to COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide cross-linguistic acoustic validation of the T-RES. METHOD: T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for different emotions. RESULTS: Significant within-language discriminability of prosodic emotions was found, for both mean F0 and speech rate. Similarly, these measures were associated with comparable patterns of prosodic emotions for each of the tested languages and emotional ratings. CONCLUSIONS: The results demonstrate the lack of dependence of prosody and semantics within the T-RES stimuli. These findings illustrate the listeners' ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
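The acoustic profile described above (mean F0, F0 range, speech rate) can be approximated with standard pitch tracking. The sketch below uses librosa's pYIN tracker and an onset-rate proxy for speech rate on a bundled example clip (downloaded on first use); the parameter choices are illustrative and this is not the T-RES analysis pipeline.

```python
import numpy as np
import librosa

def prosodic_profile(y, sr):
    """Mean F0 (Hz), F0 range (semitones), and a rough speech-rate proxy."""
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=500, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]
    mean_f0 = float(np.mean(f0))
    f0_range_st = 12 * np.log2(np.max(f0) / np.min(f0))  # range in semitones
    # Rough proxy for speech rate: acoustic onsets per second (not true syllables).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    rate = len(onsets) / (len(y) / sr)
    return mean_f0, f0_range_st, rate

# Example with a bundled librosa speech clip (real use: T-RES sentence stimuli).
y, sr = librosa.load(librosa.ex("libri1"), duration=5.0)
print(prosodic_profile(y, sr))
```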


Subject(s)
COVID-19 , Speech Perception , Acoustics , Emotions , Humans , Language , Linguistics , SARS-CoV-2 , Speech
14.
J Acoust Soc Am ; 149(6): 4377, 2021 06.
Article in English | MEDLINE | ID: covidwho-1666347

ABSTRACT

COVID-19 is a global health crisis that has been affecting our daily lives throughout the past year. The symptomatology of COVID-19 is heterogeneous with a severity continuum. Many symptoms are related to pathological changes in the vocal system, leading to the assumption that COVID-19 may also affect voice production. For the first time, the present study investigates voice acoustic correlates of a COVID-19 infection based on a comprehensive acoustic parameter set. We compare 88 acoustic features extracted from recordings of the vowels /i:/, /e:/, /u:/, /o:/, and /a:/ produced by 11 symptomatic COVID-19 positive and 11 COVID-19 negative German-speaking participants. We employ the Mann-Whitney U test and calculate effect sizes to identify features with prominent group differences. The mean voiced segment length and the number of voiced segments per second yield the most important differences across all vowels indicating discontinuities in the pulmonic airstream during phonation in COVID-19 positive participants. Group differences in front vowels are additionally reflected in fundamental frequency variation and the harmonics-to-noise ratio, group differences in back vowels in statistics of the Mel-frequency cepstral coefficients and the spectral slope. Our findings represent an important proof-of-concept contribution for a potential voice-based identification of individuals infected with COVID-19.
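A minimal sketch of the group comparison described above: a Mann-Whitney U test on one toy acoustic feature, with the rank-biserial correlation as one common U-derived effect size. The feature values are synthetic stand-ins, not the study's data, and the paper does not specify rank-biserial as its effect-size measure.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Toy stand-ins for one acoustic feature (e.g. mean voiced segment length, s)
# in the COVID-positive and COVID-negative groups (n = 11 each in the study).
positive = rng.normal(0.18, 0.05, 11)
negative = rng.normal(0.25, 0.05, 11)

u, p = mannwhitneyu(positive, negative, alternative="two-sided")
# Rank-biserial correlation: a simple effect size derived from U.
effect = 1 - 2 * u / (len(positive) * len(negative))
print(f"U = {u:.1f}, p = {p:.3f}, rank-biserial r = {effect:.2f}")
```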


Subject(s)
COVID-19 , Voice , Acoustics , Humans , Phonation , SARS-CoV-2 , Speech Acoustics , Voice Quality
15.
J Acoust Soc Am ; 150(3): 1945, 2021 09.
Article in English | MEDLINE | ID: covidwho-1621987

ABSTRACT

This study aimed to develop an artificial intelligence (AI)-based tool for screening COVID-19 patients based on the acoustic parameters of their voices. Twenty-five acoustic parameters were extracted from voice samples of 203 COVID-19 patients and 171 healthy individuals who produced a sustained vowel, i.e., /a/, for as long as they could after a deep breath. The selected acoustic parameters came from different categories, including fundamental frequency and its perturbation, harmonicity, vocal tract function, airflow sufficiency, and periodicity. After feature extraction, different machine learning methods were tested. A leave-one-subject-out validation scheme was used to tune the hyper-parameters and record the test set results. The models were then compared based on their accuracy, precision, recall, and F1-score. Based on accuracy (89.71%), recall (91.63%), and F1-score (90.62%), the best model was the feedforward neural network (FFNN). Its precision (89.63%) was slightly lower than that of logistic regression (90.17%). Based on these results and the confusion matrices, the FFNN model was employed in the software. This screening tool could be used practically at home and in public places to check the health of an individual's respiratory system. If there are any related abnormalities in the test taker's voice, the tool recommends that they seek medical consultation.
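Leave-one-subject-out validation holds out all data from one speaker per fold, which prevents a model from scoring well simply by memorizing speaker identity. A minimal scikit-learn sketch is shown below, with toy data and an assumed small feedforward network; it is not the authors' model or feature set.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 40, 25        # toy sizes (the study used 374 speakers, 25 features)
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)     # 0 = healthy, 1 = COVID-19 (toy labels)
groups = np.arange(n_subjects)         # one recording per subject in this toy setup

# Leave-one-subject-out: each fold holds out all data from one subject.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"LOSO accuracy: {scores.mean():.2f}")
```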


Subject(s)
Artificial Intelligence , COVID-19 , Acoustics , Humans , Neural Networks, Computer , SARS-CoV-2
16.
J Acoust Soc Am ; 150(6): 4474, 2021 12.
Article in English | MEDLINE | ID: covidwho-1596049

ABSTRACT

The unprecedented lockdowns resulting from COVID-19 in spring 2020 triggered changes in human activities in public spaces. A predictive modeling approach was developed to characterize the changes in the perception of the sound environment when people could not be surveyed. Building on a database of soundscape questionnaires (N = 1,136) and binaural recordings (N = 687) collected in 13 locations across London and Venice during 2019, new recordings (N = 571) were made in the same locations during the 2020 lockdowns. Using these 30-s-long recordings, linear multilevel models were developed to predict the soundscape pleasantness (R² = 0.85) and eventfulness (R² = 0.715) during the lockdown and compare the changes for each location. The performance was above average for comparable models. An online listening study also investigated the change in the sound sources within the spaces. Results indicate (1) human sounds were less dominant and natural sounds more dominant across all locations; (2) contextual information is important for predicting pleasantness but not for eventfulness; (3) perception shifted toward less eventful soundscapes and to more pleasant soundscapes for previously traffic-dominated locations but not for human- and natural-dominated locations. This study demonstrates the usefulness of predictive modeling and the importance of considering contextual information when discussing the impact of sound level reductions on the soundscape.
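Linear multilevel (mixed-effects) models of this kind treat location as a grouping level with its own random intercept. The sketch below shows that general model structure in statsmodels on synthetic data; the predictors "laeq" and "natural_ratio" are invented stand-ins, not the paper's actual model terms.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
# Toy stand-in for the soundscape dataset: one row per 30 s recording,
# with a location identifier used as the grouping (random-effect) level.
df = pd.DataFrame({
    "location": rng.integers(0, 13, n).astype(str),
    "laeq": rng.normal(65, 8, n),            # sound level descriptor
    "natural_ratio": rng.uniform(0, 1, n),   # contextual predictor
})
df["pleasantness"] = (0.9 * df["natural_ratio"] - 0.02 * df["laeq"]
                      + rng.normal(0, 0.2, n))

# Random intercept per location; fixed effects for level and context.
model = smf.mixedlm("pleasantness ~ laeq + natural_ratio", df,
                    groups=df["location"]).fit()
print(model.summary())
```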


Subject(s)
Acoustics , COVID-19 , Communicable Disease Control , Humans , SARS-CoV-2 , Sound
17.
Sci Rep ; 11(1): 20439, 2021 11 05.
Article in English | MEDLINE | ID: covidwho-1504468

ABSTRACT

Seismic ambient noise with frequencies > 1 Hz includes noise related to human activities. A reduction in seismic noise during the COVID-19 pandemic has been observed worldwide, as restrictions were imposed to control outbreaks of the SARS-CoV-2 virus. In this context, we studied the effect of changes in anthropogenic activities during COVID-19 on the seismic noise levels in the Tokyo metropolitan area, Japan, considering time of day, day of the week, and seasonal changes. The results showed the largest reduction in noise levels during the first state of emergency under most conditions. After the first state of emergency was lifted, the daytime noise reverted to previous levels immediately on weekdays and gradually on Sundays. This was likely because economic activities instantly resumed, while non-essential outings on Sundays were still mostly avoided. Furthermore, the daytime noise level on Sundays was strongly reduced regardless of changes on weekdays after the second state of emergency, which restricted activities mainly at night. Sunday noise levels gradually increased from the middle of the second state of emergency, suggesting a gradual reduction in public concern about COVID-19 following a decrease in the number of infections. Our findings demonstrate that seismic noise can be used to monitor social activities.


Subject(s)
COVID-19/epidemiology , Leisure Activities , Noise , Acoustics , Activities of Daily Living , Communicable Disease Control/methods , Disease Outbreaks , Emergency Service, Hospital , Environmental Monitoring/methods , Humans , Pandemics , SARS-CoV-2 , Tokyo/epidemiology
18.
Conserv Biol ; 35(5): 1659-1668, 2021 10.
Article in English | MEDLINE | ID: covidwho-1455530

ABSTRACT

Anurans (frogs and toads) are among the most globally threatened taxonomic groups. Successful conservation of anurans will rely on improved data on the status and changes in local populations, particularly for rare and threatened species. Automated sensors, such as acoustic recorders, have the potential to provide such data by massively increasing the spatial and temporal scale of population sampling efforts. Analyzing such data sets will require robust and efficient tools that can automatically identify the presence of a species in audio recordings. Like bats and birds, many anuran species produce distinct vocalizations that can be captured by autonomous acoustic recorders and represent excellent candidates for automated recognition. However, in contrast to birds and bats, effective automated acoustic recognition tools for anurans are not yet widely available. An effective automated call-recognition method for anurans must be robust to the challenges of real-world field data and should not require extensive labeled data sets. We devised a vocalization identification tool that classifies anuran vocalizations in audio recordings based on their periodic structure: the repeat interval-based bioacoustic identification tool (RIBBIT). We applied RIBBIT to field recordings to study the boreal chorus frog (Pseudacris maculata) of temperate North American grasslands and the critically endangered variable harlequin frog (Atelopus varius) of tropical Central American rainforests. The tool accurately identified boreal chorus frogs, even when they vocalized in heavily overlapping choruses and identified variable harlequin frog vocalizations at a field site where it had been very rarely encountered in visual surveys. Using a few simple parameters, RIBBIT can detect any vocalization with a periodic structure, including those of many anurans, insects, birds, and mammals. We provide open-source implementations of RIBBIT in Python and R to support its use for other taxa and communities.
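RIBBIT itself is released as open-source Python and R code; independently of that implementation, the general repeat-interval idea can be sketched as follows: track the energy of a frequency band over time and score how strongly it pulses at the species' expected repetition rate. All band and rate values below are illustrative, not parameters from the paper.

```python
import numpy as np
from scipy.signal import spectrogram, periodogram

def pulse_rate_score(audio, fs, band_hz=(2000, 3500), rate_hz=(15, 25)):
    """Score how strongly the energy in a frequency band pulses at a target
    repetition rate -- the general idea behind repeat-interval detection."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    band = sxx[(f >= band_hz[0]) & (f < band_hz[1])].sum(axis=0)  # band energy over time
    frame_rate = 1.0 / (t[1] - t[0])
    pr_f, pr_power = periodogram(band - band.mean(), fs=frame_rate)
    in_target = (pr_f >= rate_hz[0]) & (pr_f < rate_hz[1])
    return pr_power[in_target].max() / (pr_power[1:].mean() + 1e-12)

# Synthetic "call": a 2.8 kHz tone pulsed at 20 Hz, buried in noise.
fs = 22_050
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(0)
pulses = (np.sin(2 * np.pi * 20 * t) > 0).astype(float)
audio = 0.3 * pulses * np.sin(2 * np.pi * 2800 * t) + 0.2 * rng.normal(size=t.size)
print(pulse_rate_score(audio, fs))
```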




Subject(s)
Conservation of Natural Resources , Vocalization, Animal , Acoustics , Animals , Anura , Birds
19.
JMIR Mhealth Uhealth ; 9(9): e24352, 2021 09 17.
Article in English | MEDLINE | ID: covidwho-1443933

ABSTRACT

BACKGROUND: Mood disorders are commonly underrecognized and undertreated, as diagnosis relies on self-reporting and clinical assessments that are often not timely. The speech characteristics of those with mood disorders differ from those of healthy individuals. With the widespread use of smartphones and the emergence of machine learning approaches, smartphones can be used to monitor speech patterns to help in the diagnosis and monitoring of mood disorders. OBJECTIVE: The aim of this review is to synthesize research on using speech patterns from smartphones to diagnose and monitor mood disorders. METHODS: Literature searches of major databases, Medline, PsycInfo, EMBASE, and CINAHL, initially identified 832 relevant articles using the search terms "mood disorders", "smartphone", "voice analysis", and their variants. Only 13 studies met the inclusion criteria: use of a smartphone for capturing voice data, focus on diagnosing or monitoring a mood disorder(s), clinical populations recruited prospectively, and publication in English. Articles were assessed by 2 reviewers, and the data extracted included data type, classifiers used, methods of capture, and study results. Studies were analyzed using a narrative synthesis approach. RESULTS: Studies showed that voice data alone had reasonable accuracy in predicting mood states and mood fluctuations based on objectively monitored speech patterns. While a fusion of different sensor modalities revealed the highest accuracy (97.4%), nearly 80% of the included studies were pilot trials or feasibility studies without control groups and had small sample sizes ranging from 1 to 73 participants. Studies were also carried out over short or varying timeframes and showed significant heterogeneity of methods in terms of the types of audio data captured, environmental contexts, classifiers, and measures to control for privacy and ambient noise. CONCLUSIONS: Approaches that allow smartphone-based monitoring of speech patterns in mood disorders are growing rapidly. The current body of evidence supports the value of speech patterns for monitoring, classifying, and predicting mood states in real time. However, many challenges remain around the robustness, cost-effectiveness, and acceptability of such an approach, and further work is required to build on current research, reduce the heterogeneity of methodologies, and clinically evaluate the benefits and risks of such approaches.


Subject(s)
Smartphone , Speech , Acoustics , Humans , Monitoring, Physiologic , Mood Disorders/diagnosis
20.
Eur Arch Otorhinolaryngol ; 279(4): 1701-1708, 2022 Apr.
Article in English | MEDLINE | ID: covidwho-1431684

ABSTRACT

PURPOSE: The authors aim to review the available reports on the potential effects of masks on voice and speech parameters. METHODS: A literature search was conducted using the MEDLINE and Google Scholar databases through July 2021. Several target populations, mask scenarios, and methodologies were covered. The assessed voice parameters were divided into self-reported, acoustic, and aerodynamic. RESULTS: Wearing a face mask was shown to induce several changes in voice parameters: (1) self-reported: significantly increased vocal effort and fatigue, increased vocal tract discomfort, and increased values of the voice handicap index (VHI); (2) acoustic: increased voice intensity, altered formant frequencies (F2 and F3) with no change in fundamental frequency, increased harmonics-to-noise ratio (HNR), and increased mean spectral values at high frequencies (1000-8000 Hz), especially with the KN95 mask; (3) aerodynamic: maximum phonation time was assessed in only two reports and showed no alterations. CONCLUSION: Despite the different populations, mask-type scenarios, and methodologies described by each study, the results of this review outline significant changes in voice characteristics with the use of face masks. Wearing a mask is shown to increase perceived vocal effort and to alter the vocal tract length and speech articulatory movements, leading to spectral sound changes and impaired communication and perception. Studies analyzing the effect of masks on voice aerodynamics are lacking. Further research is required to study the long-term effects of face masks on the potential development of voice pathology.
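Formant shifts such as the reported F2/F3 changes are typically estimated by linear-predictive-coding (LPC) analysis of short voiced frames. The sketch below shows a rough LPC-based formant estimate on a synthetic frame; the LPC order rule and thresholds are common conventions, not values from the reviewed studies.

```python
import numpy as np
import librosa

def estimate_formants(y, sr, order=None):
    """Rough formant estimates (Hz) from LPC root-finding on one voiced frame."""
    if order is None:
        order = 2 + sr // 1000                     # common rule of thumb for LPC order
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])     # pre-emphasis
    a = librosa.lpc(y * np.hamming(len(y)), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    return [f for f in freqs if f > 90][:3]        # keep F1-F3 candidates

# Synthetic vowel-like frame with two resonance-like components; real use would
# compare frames of masked vs. unmasked speech to look for F2/F3 shifts.
sr = 16_000
rng = np.random.default_rng(0)
t = np.arange(0, 0.04, 1 / sr)
frame = (np.sin(2 * np.pi * 700 * t) + 0.6 * np.sin(2 * np.pi * 1300 * t)
         + 0.01 * rng.normal(size=t.size))
print(estimate_formants(frame, sr))
```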


Subject(s)
Voice Disorders , Voice , Acoustics , Humans , Phonation , Speech , Speech Acoustics , Voice Disorders/etiology , Voice Disorders/prevention & control , Voice Quality