Results 1 - 5 of 5
1.
Contemp Clin Trials; 139: 107464, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38307224

ABSTRACT

Dental disease continues to be one of the most prevalent chronic diseases in the United States. Although oral self-care behaviors (OSCB), involving systematic twice-a-day tooth brushing, can prevent dental disease, this basic behavior is not sufficiently practiced. Recent advances in digital technology offer tremendous potential for promoting OSCB by delivering Just-In-Time Adaptive Interventions (JITAIs), interventions that leverage dynamic information about the person's state and context to effectively prompt them to engage in a desired behavior in real-time, real-world settings. However, limited research attention has been given to systematically investigating how best to prompt individuals to engage in OSCB in daily life, and under what conditions prompting would be most beneficial. This paper describes the protocol for a Micro-Randomized Trial (MRT) to inform the development of a JITAI for promoting ideal OSCB, namely, brushing twice daily, for two minutes each time, in all four dental quadrants (i.e., 2x2x4). Sensors within an electric toothbrush (eBrush) will be used to track OSCB, and a matching mobile app (Oralytics) will deliver on-demand feedback and educational information. The MRT will micro-randomize participants twice daily (morning and evening) to either (a) a prompt (push notification) containing one of several theoretically grounded engagement strategies or (b) no prompt. The goal is to investigate whether, what type of, and under what conditions prompting increases engagement in ideal OSCB. The results will build the empirical foundation necessary to develop an optimized JITAI that will be evaluated relative to a suitable control in a future randomized controlled trial.


Subject(s)
Mobile Applications, Stomatognathic Diseases, Humans, Oral Health, Self Care, Randomized Controlled Trials as Topic
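As a rough illustration of the micro-randomization scheme this protocol describes, the sketch below randomizes each of the two daily decision points to either a push-notification prompt built around an engagement strategy or no prompt. The strategy labels and the 0.5 randomization probability are assumptions made for illustration, not values taken from the trial protocol.

```python
import random

# Hypothetical engagement-strategy labels; the actual strategy set is
# specified in the trial protocol, so these are placeholders.
STRATEGIES = ["gain_framing", "loss_framing", "curiosity", "reciprocity"]

def micro_randomize(prompt_prob=0.5):
    """One micro-randomization at a single decision point (morning or evening).

    With probability prompt_prob, deliver a push-notification prompt built
    around a randomly selected engagement strategy; otherwise deliver no
    prompt. The 0.5 probability is an illustrative assumption.
    """
    if random.random() < prompt_prob:
        return {"action": "prompt", "strategy": random.choice(STRATEGIES)}
    return {"action": "no_prompt", "strategy": None}

# Two decision points per participant per day, as in the trial design.
daily_assignments = {slot: micro_randomize() for slot in ("morning", "evening")}
print(daily_assignments)
```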
2.
Proc Innov Appl Artif Intell Conf; 37(13): 15724-15730, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37637073

ABSTRACT

While dental disease is largely preventable, professional advice on optimal oral hygiene practices is often forgotten or abandoned by patients. Therefore, patients may benefit from timely and personalized encouragement to engage in oral self-care behaviors. In this paper, we develop an online reinforcement learning (RL) algorithm for optimizing the delivery of mobile-based prompts that encourage oral hygiene behaviors. One of the main challenges in developing such an algorithm is ensuring that it accounts for the impact of current actions on the effectiveness of future actions (i.e., delayed effects), especially when the algorithm must run stably and autonomously in a constrained, real-world setting characterized by highly noisy, sparse data. We address this challenge by designing a quality reward that maximizes the desired health outcome (i.e., high-quality brushing) while minimizing user burden. We also describe a procedure for optimizing the hyperparameters of the reward by building a simulation environment test bed and evaluating candidates within it. The RL algorithm discussed in this paper will be deployed in Oralytics. To the best of our knowledge, Oralytics is the first mobile health study to utilize an RL algorithm designed to prevent dental disease by optimizing the delivery of motivational messages supporting oral self-care behaviors.
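A minimal sketch of the burden-penalized reward idea, assuming a simple functional form: credit sensed brushing quality, subtract a cost that grows with recent prompting. The quality cap, the recent_prompt_count proxy for burden, and the burden_weight hyperparameter are all illustrative; the paper selects such hyperparameters with its simulation test bed rather than by hand.

```python
def quality_reward(brushing_quality: float,
                   prompt_sent: bool,
                   recent_prompt_count: int,
                   burden_weight: float = 0.5) -> float:
    """Illustrative burden-penalized quality reward (not the paper's exact form).

    brushing_quality    -- e.g., sensed brushing duration in seconds.
    prompt_sent         -- whether a prompt was delivered at this decision point.
    recent_prompt_count -- simple proxy for user burden: prompts received over
                           a recent window.
    burden_weight       -- hypothetical hyperparameter trading off the health
                           outcome against user burden.
    """
    capped_quality = min(brushing_quality, 180.0)  # 180 s cap is an assumption
    penalty = burden_weight * recent_prompt_count if prompt_sent else 0.0
    return capped_quality - penalty

# Example: a prompted user who brushed for 120 s after 3 recent prompts.
print(quality_reward(120.0, prompt_sent=True, recent_prompt_count=3))
```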

3.
Algorithms; 15(8), 2022 Aug.
Article in English | MEDLINE | ID: mdl-36713810

ABSTRACT

Online reinforcement learning (RL) algorithms are increasingly used to personalize digital interventions in the fields of mobile health and online education. Common challenges in designing and testing an RL algorithm in these settings include ensuring that the algorithm can learn and run stably under real-time constraints and accounting for the complexity of the environment, e.g., a lack of accurate mechanistic models of user dynamics. To guide how one can tackle these challenges, we extend the PCS (predictability, computability, stability) framework, a data science framework that incorporates best practices from machine learning and statistics in supervised learning, to the design of RL algorithms for the digital intervention setting. Furthermore, we provide guidelines on how to design simulation environments, a crucial tool for evaluating candidate RL algorithms under the PCS framework. We show how we used the PCS framework to design an RL algorithm for Oralytics, a mobile health study aiming to improve users' tooth-brushing behaviors through the personalized delivery of intervention messages. Oralytics will go into the field in late 2022.
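To make the simulation-environment guidance concrete, here is a small evaluation harness of the kind such a test bed needs: each candidate algorithm is run against a bank of simulated users and scored on average total reward. The user-model interface (state, respond), the learner interface (act, update), and the 70-day horizon are assumptions for this sketch; the paper's test bed builds its user models from previously collected data.

```python
import numpy as np

def evaluate_candidate(make_algorithm, user_models, n_days=70, seed=0):
    """Score one candidate RL algorithm on a bank of simulated users.

    make_algorithm -- zero-argument factory returning a fresh learner exposing
                      act(state, rng) and update(state, action, reward).
    user_models    -- simulated users exposing state(t) and
                      respond(state, action, rng).
    Both interfaces are assumptions made for this sketch.
    """
    rng = np.random.default_rng(seed)
    totals = []
    for user in user_models:
        algo = make_algorithm()        # fresh learner per simulated user
        total = 0.0
        for t in range(2 * n_days):    # two decision points per day
            state = user.state(t)
            action = algo.act(state, rng)
            reward = user.respond(state, action, rng)
            algo.update(state, action, reward)
            total += reward
        totals.append(total)
    return float(np.mean(totals))      # compare candidates on this score
```

Running several candidates through the same harness with shared seeds gives a like-for-like comparison and a basic stability check in the PCS sense.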

4.
Adv Neural Inf Process Syst; 34: 7460-7471, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35757490

ABSTRACT

Bandit algorithms are increasingly used in real-world sequential decision-making problems, and with this comes an increased desire to use the resulting datasets to answer scientific questions: Did one type of ad lead to more purchases? In which contexts is a mobile health intervention effective? However, classical statistical approaches fail to provide valid confidence intervals when applied to data collected with bandit algorithms. Alternative methods have recently been developed for simple models (e.g., comparison of means), yet general methods are lacking for conducting statistical inference with more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot deliver valid inference on the parameters of a logistic regression model for a binary reward. In this work, we develop theory justifying the use of M-estimators, a class that includes estimators based on empirical risk minimization as well as maximum likelihood, on data collected with adaptive algorithms, including (contextual) bandit algorithms. Specifically, we show that M-estimators, modified with particular adaptive weights, can be used to construct asymptotically valid confidence regions for a variety of inferential targets.
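In symbols, with notation assumed for this sketch rather than quoted from the paper: write X_t for the context, A_t for the action, Y_t for the reward, m_theta for the loss defining the M-estimator, pi_t for the action probabilities used by the (contextual) bandit algorithm, and pi^sta for a pre-specified stabilizing policy. One form of adaptive weighting uses square-root importance ratios:

```latex
% Adaptively weighted M-estimation (notation assumed for illustration).
% H_{t-1} is the history available to the bandit algorithm at time t.
\hat{\theta}_T \in \arg\min_{\theta} \sum_{t=1}^{T} W_t \, m_\theta(X_t, A_t, Y_t),
\qquad
W_t = \sqrt{\frac{\pi^{\mathrm{sta}}(A_t \mid X_t)}{\pi_t(A_t \mid \mathcal{H}_{t-1}, X_t)}}.
```

Intuitively, the weights counteract the dependence that adaptive sampling induces across observations, restoring the asymptotic normality needed for valid confidence regions.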

5.
Adv Neural Inf Process Syst; 33: 9818-9829, 2020 Dec.
Article in English | MEDLINE | ID: mdl-35002190

ABSTRACT

As bandit algorithms are increasingly utilized in scientific studies and industrial applications, there is a growing need for reliable inference methods based on the resulting adaptively collected data. In this work, we develop methods for inference on data collected in batches using a bandit algorithm. We first prove that the ordinary least squares (OLS) estimator, which is asymptotically normal on independently sampled data, is not asymptotically normal on data collected using standard bandit algorithms when there is no unique optimal arm. This asymptotic non-normality implies that naively treating the OLS estimator as approximately normal can lead to Type I error inflation and confidence intervals with below-nominal coverage probabilities. Second, we introduce the Batched OLS (BOLS) estimator, which we prove is (1) asymptotically normal on data collected from both multi-arm and contextual bandits and (2) robust to non-stationarity in the baseline reward.
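A simplified two-arm illustration of the batched construction: estimate the arm-mean difference within each batch, studentize it with that batch's standard error, and aggregate the per-batch z-scores. Because the randomization probabilities are fixed within a batch, each z-score is approximately standard normal even though the algorithm adapts between batches. This is a sketch of the idea, not the paper's exact estimator.

```python
import numpy as np

def bols_statistic(batches):
    """Batched-OLS-style test statistic for a two-arm bandit (sketch).

    batches -- iterable of (rewards_arm1, rewards_arm0) array pairs, one pair
               per batch, with at least two observations per arm per batch.
    Returns a statistic that is approximately N(0, 1) under the null of equal
    arm means, even though arm assignment adapted between batches.
    """
    z_scores = []
    for rewards1, rewards0 in batches:
        diff = np.mean(rewards1) - np.mean(rewards0)
        se = np.sqrt(np.var(rewards1, ddof=1) / len(rewards1)
                     + np.var(rewards0, ddof=1) / len(rewards0))
        z_scores.append(diff / se)
    # Sum of approximately independent standard normals, renormalized.
    return float(np.sum(z_scores) / np.sqrt(len(z_scores)))
```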
