2.
Sci Rep ; 14(1): 12731, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830946

ABSTRACT

Conversational Agents (CAs) have made their way into providing interactive assistance to users. However, current dialogue modelling techniques for CAs are predominantly based on hard-coded rules and rigid interaction flows, which negatively affects their flexibility and scalability. Large Language Models (LLMs) can be used as an alternative, but unfortunately they do not always provide good levels of privacy protection for end users, since most of them run on cloud services. To address these problems, we leverage the potential of transfer learning and study how to best fine-tune lightweight pre-trained LLMs to predict the intent of user queries. Importantly, our LLMs allow for on-device deployment, making them suitable for personalised, ubiquitous, and privacy-preserving scenarios. Our experiments suggest that RoBERTa and XLNet offer the best trade-off considering these constraints. We also show that, after fine-tuning, these models perform on par with ChatGPT. We further discuss the implications of this research for relevant stakeholders, including researchers and practitioners. Taken together, this paper provides insights into LLM suitability for on-device CAs, highlighting the middle ground between LLM performance and memory footprint while also considering privacy implications.
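
As an illustration of the fine-tuning approach described above, a minimal sketch using the Hugging Face transformers library might look as follows; the dataset files, label count, and hyperparameters are assumptions, not the paper's settings.

```python
# Minimal sketch: fine-tuning RoBERTa for intent classification.
# Assumed inputs: CSV files with a "text" column and an integer
# "intent" column (class IDs); hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=10)  # 10 intent classes is an assumption

dataset = load_dataset("csv", data_files={"train": "intents_train.csv",
                                          "test": "intents_test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)
dataset = dataset.rename_column("intent", "labels")

args = TrainingArguments(output_dir="intent-roberta",
                         per_device_train_batch_size=32,
                         num_train_epochs=3,
                         learning_rate=2e-5)

Trainer(model=model, args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"]).train()
```

Swapping "roberta-base" for "xlnet-base-cased" would exercise the other model family the abstract singles out.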

3.
Biomed Eng Lett ; 14(1): 103-113, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38186953

ABSTRACT

Brain-Computer Interfacing (BCI) has shown promise in Machine Learning (ML) for emotion recognition. Unfortunately, how data are partitioned into training/test splits is often overlooked, which makes it difficult to attribute research findings to actual modeling improvements or to partitioning issues. We introduce the "data transfer rate" construct (i.e., how much data from the test samples are seen during training) and use it to examine data partitioning effects under several conditions. As a use case, we consider emotion recognition in videos using electroencephalogram (EEG) signals. Three data splits are considered, each representing a relevant BCI task: subject-independent (affective decoding), video-independent (affective annotation), and time-based (feature extraction). Model performance, measured as classification accuracy, may change significantly (e.g., from 50% to 90%) depending on how data are partitioned. This was evidenced in all experimental conditions tested. Our results show that (1) for affective decoding, it is hard to achieve performance above the baseline case (random classification) unless some data from the test subjects are considered in the training partition; (2) for affective annotation, having data from the same subject in the training and test partitions, even if they correspond to different videos, also increases performance; and (3) later signal segments are generally more discriminative, but it is the number of segments (data points) that matters most. Our findings have implications not only for how brain data are managed but also for how experimental conditions and results are reported.
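
The three data splits described above can be made concrete with a short sketch; the arrays below are synthetic stand-ins for a real EEG dataset, and only the grouping logic is the point.

```python
# Sketch of the three partitioning schemes: subject-independent,
# video-independent, and time-based. Data here are synthetic.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 1200
X = rng.normal(size=(n, 32))           # per-segment EEG features
y = rng.integers(0, 2, size=n)         # emotion label per segment
subject = rng.integers(0, 30, size=n)  # which subject produced it
video = rng.integers(0, 40, size=n)    # which video was watched
t = np.arange(n)                       # temporal order of segments

gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)

# (1) Subject-independent: no subject appears in both partitions.
train_idx, test_idx = next(gss.split(X, y, groups=subject))

# (2) Video-independent: no video appears in both partitions.
train_idx, test_idx = next(gss.split(X, y, groups=video))

# (3) Time-based: earlier segments train, later segments test.
cut = int(0.8 * n)
train_idx, test_idx = t[:cut], t[cut:]
```

Any leakage of a test subject's (or test video's) segments into training inflates the "data transfer rate" the abstract defines, which is exactly what these grouped splits prevent.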

4.
J Prosthet Dent ; 128(3): 382-389, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33597078

ABSTRACT

STATEMENT OF PROBLEM: Studies determining the main predictors of masticatory performance by using mixing ability tests are sparse. PURPOSE: The purpose of this clinical study was to identify potential determinants of masticatory performance, assessed by analyzing a patient's masticatory ability using bicolored chewing gum and visual, quantitative, and interactive methods. MATERIAL AND METHODS: Nondental participants attending healthcare centers were consecutively recruited in Granada, Spain. The inclusion criteria were being older than 18 years and having resided in the coverage area of the reference healthcare centers for at least the previous 6 months. Participants were excluded if they had received dental treatment in the previous 6 months or were unable to communicate. Masticatory performance was determined by using 2-colored chewing gum (Kiss 3 white and blue; Smint) that was masticated for a total of 20 strokes. The masticated gum was crushed between 2 transparent glass slides, creating a 1-mm-thick specimen that was subsequently scanned. The mixed-color area was calculated as a percentage by using Photoshop as described by Schimmel et al and designated as the standard method. In addition, all images were analyzed by using the Chewing Performance Calculator Web application. The masticated bolus was also inspected visually, and masticatory performance was classified as poor, moderate, or good. Sociodemographic data, as well as data on behaviors, medical and nutritional status, health-related quality of life, saliva, and general oral health, were collected for all participants to identify the main determinants of masticatory performance. RESULTS: One hundred thirty-seven participants were enrolled. The masticatory performance values obtained with both methods (standard method and Chewing Performance Calculator) increased significantly (P<.001) across the visual classification categories: poorly masticated (69.1% for the standard method and 43.5% for the Chewing Performance Calculator), moderately masticated (89.7% and 67.3%, respectively), and well masticated (97.3% and 80.3%, respectively). The bivariate analyses revealed that masticatory performance was significantly higher in younger people (<65 years) (P=.008), in those with a higher basal salivary flow rate (P<.001), in nondenture users (P=.002), and in those with more standing teeth and occlusal units (P<.001). However, the multiple regression analyses showed that the number of occlusal units was the only significant predictor of masticatory performance. In addition, the mean masticatory performance (95% confidence interval: 47.7% to 56.8%) improved with each additional occlusal unit by 1.2% to 2.2% according to the Chewing Performance Calculator and by 0.8% to 1.8% according to the standard method; the basal masticatory performance was calculated as 72.1% to 81.2% (95% confidence interval). CONCLUSIONS: The number of occlusal units is one of the main predictors of masticatory performance when a 2-color bolus is used to test mixing ability.
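
As a rough illustration of the quantitative step (the study itself used Photoshop following Schimmel et al), the mixed-color fraction of a scanned specimen could be estimated as follows; the HSV thresholds are assumptions that would need calibration to the actual gum colors.

```python
# Illustrative sketch, not the study's Photoshop workflow: estimate
# the mixed-color percentage of a scanned blue/white gum specimen.
# All thresholds below are assumptions requiring calibration.
import numpy as np
from PIL import Image

img = Image.open("gum_scan.png").convert("HSV")
h, s, v = [np.asarray(c, dtype=float) for c in img.split()]

blue = (h > 140) & (h < 200) & (s > 100)       # saturated blue pixels
white = (s < 40) & (v > 180)                   # unmixed white pixels
bolus = blue | white | ((s >= 40) & (v > 60))  # rough bolus mask
mixed = bolus & ~blue & ~white                 # neither pure color

mixing_pct = 100.0 * mixed.sum() / max(bolus.sum(), 1)
print(f"Mixed-color area: {mixing_pct:.1f}%")
```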


Subject(s)
Chewing Gum , Quality of Life , Color , Humans , Mastication , Oral Health , Software
5.
Sensors (Basel) ; 21(17)2021 Aug 27.
Article in English | MEDLINE | ID: mdl-34502662

ABSTRACT

Physical objects are usually not designed with interaction capabilities to control digital content. Nevertheless, they provide an untapped source for interactions, since every object could be used to control our digital lives. We call this the missing interface problem: instead of embedding computational capacity into objects, we can simply detect users' gestures on them. However, gesture detection on such unmodified objects has to date been limited in spatial resolution and detection fidelity. To address this gap, we conducted research on micro-gesture detection on physical objects based on Google's Soli radar sensor. We introduced two novel deep learning architectures to process range-Doppler images, namely a three-dimensional convolutional neural network (Conv3D) and a spectrogram-based ConvNet. The results show that our architectures enable robust on-object gesture detection, achieving an accuracy of approximately 94% for a five-gesture set and surpassing previous state-of-the-art results by up to 39%. We also showed that the decibel (dB) Doppler range setting has a significant effect on system performance, as accuracy can vary by up to 20% across the dB range. As a result, we provide guidelines on how to best calibrate the radar sensor.
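
A minimal sketch of a 3D convolutional network over stacked range-Doppler frames, in the spirit of the Conv3D architecture mentioned above, might look like this; all shapes and layer sizes are illustrative assumptions rather than the paper's exact model.

```python
# Sketch: 3D-CNN over a clip of range-Doppler frames (PyTorch).
import torch
import torch.nn as nn

class RangeDopplerConv3D(nn.Module):
    def __init__(self, n_gestures: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # time x range x Doppler
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling over all 3 axes
        )
        self.classifier = nn.Linear(32, n_gestures)

    def forward(self, x):  # x: (batch, 1, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

# A batch of 8 clips, each 16 frames of 32x32 range-Doppler images.
logits = RangeDopplerConv3D()(torch.randn(8, 1, 16, 32, 32))
print(logits.shape)  # torch.Size([8, 5])
```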


Subject(s)
Gestures , Radar , Algorithms , Neural Networks, Computer , Recognition, Psychology
6.
Sensors (Basel) ; 21(9)2021 Apr 29.
Article in English | MEDLINE | ID: mdl-33946830

ABSTRACT

Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance for understanding the temporal evolution of attention on dynamic content. To address this gap, we proposed Glimpse, a novel measure to compute temporal salience based on the observer-spatio-temporal consistency of raw gaze data. The measure is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. Glimpse's software and data are publicly available.
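
The exact formulation is given in the paper; as a simplified illustration of the underlying idea, temporal salience can be sketched as how tightly observers' gaze points cluster at each moment in time.

```python
# Simplified illustration of temporal salience as inter-observer gaze
# consistency: frames where gaze points cluster tightly score high.
# This is a sketch of the idea, not Glimpse's exact formula.
import numpy as np

def temporal_salience(gaze: np.ndarray) -> np.ndarray:
    """gaze: (n_observers, n_frames, 2) normalized (x, y) gaze points."""
    centroid = gaze.mean(axis=0)  # (n_frames, 2) mean gaze per frame
    dispersion = np.linalg.norm(gaze - centroid, axis=2).mean(axis=0)
    return 1.0 / (1.0 + dispersion)  # high when observers agree

# 12 observers watching a 300-frame clip (synthetic data).
rng = np.random.default_rng(0)
scores = temporal_salience(rng.random((12, 300, 2)))
print(scores.shape)  # (300,)
```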

7.
J Prosthet Dent ; 125(1): 82-88, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31987585

ABSTRACT

STATEMENT OF PROBLEM: There is a need to quantitatively differentiate between impaired and normal mastication by using straightforward and reliable methods, because currently available methods are expensive, complex, and time-consuming. PURPOSE: The purpose of this clinical study was to assess the reliability, validity, and clinical utility of a new Web-based software program, the Chewing Performance Calculator (CPC), designed to measure masticatory performance (MP) by analyzing the mixed area of bicolored chewing gum. MATERIAL AND METHODS: One hundred and ten participants were consecutively recruited from the School of Dentistry of the University of Salamanca. MP was determined by using 2-colored chewing gum that was masticated for a total of 20 strokes. The masticated gum was then flattened between 2 transparent glass tiles, generating a 1-mm-thick specimen that was scanned to calculate the percentage of the area where the 2 colors were mixed. This area was calculated by using a photo-editing software program as described by Schimmel et al (the standard method, GSM). In addition, all the images were analyzed by using the CPC Web application, which took as input the image of the masticated bolus enclosed in a custom plastic platen and allowed 3 parts of the image to be selected interactively: the platen, the bolus background, and the mixed-color fraction of the bolus. The application then computed MP as a percentage. Additionally, an oral examination was carried out to record the number of occlusal units. These data were used to assess the validity of the CPC by using the Pearson correlation coefficient. Construct validity was assessed by using ANOVA, comparing the MP scores obtained for masticated gums classified upon inspection as poorly, moderately, or highly mixed. The time spent evaluating the specimens with the GSM and CPC methods was also recorded and used as an indicator of the usefulness of each procedure. RESULTS: MP ranged between 5.2% and 100% (95% CI: 80.8% to 88.8%) with the GSM and between 9.2% and 96.4% (95% CI: 60.0% to 67.6%) with the CPC. The time needed to calculate MP was significantly longer with the GSM (235.2 to 260.5 seconds) than with the CPC (42.3 to 48.6 seconds). Both methods were significantly intercorrelated (r=0.65; P<.001) and correlated with the number of occlusal units, with a stronger correlation for the CPC (r=0.54; P<.001) than for the GSM (r=0.40; P<.001). Moreover, both methods showed adequate construct validity because the calculated MP values increased significantly as the mixing of the masticated gums, subjectively classified as poor, moderate, or high, also increased. CONCLUSIONS: The CPC software program allowed MP to be determined in a valid and easy-to-use manner by using 2-colored chewing gum.
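
The validity analyses described above (Pearson correlations and an ANOVA across visual mixing categories) can be sketched as follows on synthetic data; the variable names and values are illustrative stand-ins for the study's measurements.

```python
# Sketch of the validity analyses on synthetic data: inter-method
# correlation, criterion validity against occlusal units, and a
# one-way ANOVA of MP across visual mixing categories.
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(0)
occlusal_units = rng.integers(0, 15, size=110)
mp_gsm = np.clip(60 + 2.0 * occlusal_units + rng.normal(0, 15, 110), 0, 100)
mp_cpc = np.clip(30 + 2.5 * occlusal_units + rng.normal(0, 15, 110), 0, 100)

r, p = pearsonr(mp_gsm, mp_cpc)              # inter-method correlation
r_cpc, _ = pearsonr(mp_cpc, occlusal_units)  # criterion validity
print(f"GSM vs CPC: r={r:.2f} (p={p:.3g}); CPC vs units: r={r_cpc:.2f}")

# Construct validity: MP should rise across poor/moderate/high mixing.
poor, moderate, high = mp_cpc[:30], mp_cpc[30:70], mp_cpc[70:]
F, p_anova = f_oneway(poor, moderate, high)
print(f"ANOVA across mixing categories: F={F:.2f}, p={p_anova:.3g}")
```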


Subject(s)
Chewing Gum , Software , Color , Humans , Mastication , Physical Examination , Reproducibility of Results
8.
Front Hum Neurosci ; 14: 565664, 2020.
Article in English | MEDLINE | ID: mdl-33304250