Results 1 - 20 of 23
1.
Sensors (Basel) ; 24(3)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339479

ABSTRACT

BACKGROUND: Combination devices to monitor heart rate/rhythms and physical activity are becoming increasingly popular in research and clinical settings. The Zio XT Patch (iRhythm Technologies, San Francisco, CA, USA) is US Food and Drug Administration (FDA)-approved for monitoring heart rhythms, but the validity of its accelerometer for assessing physical activity is unknown. OBJECTIVE: To validate the accelerometer in the Zio XT Patch for measuring physical activity against the widely used ActiGraph GT3X. METHODS: The Zio XT and ActiGraph wGT3X-BT (ActiGraph, Pensacola, FL, USA) were worn simultaneously in two separately funded ancillary studies to Visit 6 of the Atherosclerosis Risk in Communities (ARIC) Study (2016-2017). The Zio XT was worn on the chest and the ActiGraph on the hip. Raw accelerometer data were summarized using mean absolute deviation (MAD) for six different epoch lengths (1-min, 5-min, 10-min, 30-min, 1-h, and 2-h). Participants who had ≥3 days of at least 10 h of valid data between 7 a.m. and 11 p.m. were included. Agreement of epoch-level MAD between the two devices was evaluated using correlation and mean squared error (MSE). RESULTS: Among 257 participants (average age: 78.5 ± 4.7 years; 59.1% female), there were strong correlations between MAD values from the Zio XT and ActiGraph (average r: 1-min: 0.66, 5-min: 0.90, 10-min: 0.93, 30-min: 0.93, 1-h: 0.89, 2-h: 0.82), with relatively low error values (average MSE × 10⁶: 1-min: 349.37 g, 5-min: 86.25 g, 10-min: 56.80 g, 30-min: 45.46 g, 1-h: 52.56 g, 2-h: 54.58 g). CONCLUSIONS: These findings suggest that Zio XT accelerometry is valid for measuring duration, frequency, and intensity of physical activity within time epochs of 5 min to 2 h.
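The epoch-level MAD summary and the two agreement statistics described above are straightforward to reproduce. The sketch below assumes a raw vector-magnitude acceleration signal `vm` at a fixed sampling rate; function names are illustrative, not taken from the study's code.

```python
import numpy as np

def epoch_mad(vm, samples_per_epoch):
    """Mean absolute deviation (MAD) of a vector-magnitude signal
    within consecutive, non-overlapping epochs."""
    n = len(vm) // samples_per_epoch
    epochs = vm[: n * samples_per_epoch].reshape(n, samples_per_epoch)
    return np.abs(epochs - epochs.mean(axis=1, keepdims=True)).mean(axis=1)

def agreement(mad_a, mad_b):
    """Pearson correlation and mean squared error between the
    epoch-level MAD series of two devices."""
    r = np.corrcoef(mad_a, mad_b)[0, 1]
    mse = np.mean((mad_a - mad_b) ** 2)
    return r, mse
```

With 50 Hz data, a 1-min epoch corresponds to `samples_per_epoch=3000`; longer epochs simply scale that count.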


Subject(s)
Atherosclerosis , Exercise , Humans , Female , Aged , Aged, 80 and over , Male , Accelerometry , Atherosclerosis/diagnosis
2.
Inj Epidemiol ; 10(1): 12, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36859384

ABSTRACT

BACKGROUND: Firearm injuries are a long-running yet preventable public health concern in the USA. We analyzed national inpatient data to determine the burden of firearm injuries on the US hospital system. For each year from 2000-2014 and 2016-2020, we calculated the annual frequency of firearm hospitalization in the USA overall and by the intent of the shooter. We also calculated the rate of firearm hospitalizations per 100,000 inpatient encounters. For each outcome, we used regression analysis to estimate the average year-over-year change. Finally, we explored the types of firearms responsible for firearm hospitalizations. FINDINGS: Each year during 2000-2020 (excluding 2015), there were an average of 30,428 firearm hospitalizations in the USA. On average, firearm hospitalizations represented 84 out of every 100,000 inpatient encounters each year. There was not a statistically significant year-over-year increase in firearm hospitalizations for either the period 2000-2014 or 2016-2020. However, firearm hospitalizations were noticeably higher in 2020 than in other years. Until 2019, the most frequent intent among firearm hospitalizations was assault. Beginning in 2019, assaults were outnumbered by unintentional firearm hospitalizations. According to diagnosis codes, handguns were used more often than rifles/shotguns/larger firearms in firearm injuries that resulted in hospitalization for assault (27.93% handguns; 5.87% rifles/shotguns/larger firearms), unintentional (23.94% handguns; 10.48% rifles/shotguns/larger firearms), self-harm (46.63% handguns; 14.35% rifles/shotguns/larger firearms), and undetermined (17.82% handguns; 6.21% rifles/shotguns/larger firearms) intents. Frequently, the type of firearm responsible for the hospitalization was not recorded in the patient's diagnosis code. CONCLUSION: Firearm injuries inflict a significant burden on the hospital system in the USA.
While firearm hospitalizations were unusually high in 2020, there is not strong evidence that the burden of firearm injuries on the hospital system is changing over time. The frequent non-identification of the type of firearm responsible for the injury in hospital patients' diagnosis code complicates injury surveillance efforts.

3.
Front Aging Neurosci ; 14: 868500, 2022.
Article in English | MEDLINE | ID: mdl-36204547

ABSTRACT

We examined the construct of mental planning by quantifying digital clock drawing digit placement accuracy in command and copy conditions, and by investigating its underlying neuropsychological correlates and functional connectivity. We hypothesized greater digit misplacement would associate with attention, abstract reasoning, and visuospatial function, as well as functional connectivity from a major source of acetylcholine throughout the brain: the basal nucleus of Meynert (BNM). Participants (n = 201) included non-demented older adults who completed all metrics within 24 h of one another. A participant subset met research criteria for mild cognitive impairment (MCI; n = 28) and was compared to non-MCI participants on digit misplacement accuracy and expected functional connectivity differences. Digit misplacement and a comparison dissociate variable of total completion time were acquired for command and copy conditions. A priori fMRI seeds were the bilateral BNM. Command digit misplacement was negatively associated with semantics, visuospatial, visuoconstructional, and reasoning (p's < 0.01) and negatively associated with connectivity from the BNM to the anterior cingulate cortex (ACC; p = 0.001). Individuals with MCI had more misplacement and less BNM-ACC connectivity (p = 0.007). Total completion time involved posterior and cerebellar associations only. Findings suggest clock drawing digit placement accuracy may be a unique metric of mental planning and provide insight into neurodegenerative disease.

4.
Front Digit Health ; 4: 773387, 2022.
Article in English | MEDLINE | ID: mdl-35656333

ABSTRACT

Patients in critical care settings often require continuous and multifaceted monitoring. However, current clinical monitoring practices fail to capture important functional and behavioral indices such as mobility or agitation. Recent advances in non-invasive sensing technology, high throughput computing, and deep learning techniques are expected to transform the existing patient monitoring paradigm by enabling and streamlining granular and continuous monitoring of these crucial critical care measures. In this review, we highlight current approaches to pervasive sensing in critical care and identify limitations, future challenges, and opportunities in this emerging field.

5.
Front Digit Health ; 4: 970281, 2022.
Article in English | MEDLINE | ID: mdl-36714611

ABSTRACT

Introduction: Overall performance of machine learning-based prediction models is promising; however, their generalizability and fairness must be vigorously investigated to ensure they perform sufficiently well for all patients. Objective: This study aimed to evaluate prediction bias in machine learning models used for predicting acute postoperative pain. Method: We conducted a retrospective review of electronic health records for patients undergoing orthopedic surgery from June 1, 2011, to June 30, 2019, at the University of Florida Health system/Shands Hospital. CatBoost machine learning models were trained for predicting the binary outcome of low (≤4) versus high (>4) pain. Model biases were assessed against seven protected attributes of age, sex, race, area deprivation index (ADI), spoken language, health literacy, and insurance type. Reweighing of protected attributes was investigated for reducing model bias compared with base models. Fairness metrics of equal opportunity, predictive parity, predictive equality, statistical parity, and overall accuracy equality were examined. Results: The final dataset included 14,263 patients [age: 60.72 (16.03) years, 53.87% female, 39.13% low acute postoperative pain]. The machine learning model (area under the curve, 0.71) was biased in terms of age, race, ADI, and insurance type, but not in terms of sex, language, and health literacy. Despite promising overall performance in predicting acute postoperative pain, machine learning-based prediction models may be biased with respect to protected attributes. Conclusion: These findings show the need to evaluate fairness in machine learning models involved in perioperative pain before they are implemented as clinical decision support tools.
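The fairness metrics named above have standard definitions over a protected attribute. A minimal sketch of two of them, statistical parity and equal opportunity, assuming binary predictions and a two-group attribute (function and variable names are illustrative):

```python
def statistical_parity_diff(y_pred, group):
    """Difference in positive-prediction rates,
    P(pred=1 | group=a) - P(pred=1 | group=b); 0 means parity."""
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = sorted(rates)
    return rates[a] - rates[b]

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups;
    0 means truly high-pain patients are flagged equally often."""
    tpr = {}
    for g in set(group):
        pos = [p for p, t, gg in zip(y_pred, y_true, group) if gg == g and t == 1]
        tpr[g] = sum(pos) / len(pos)
    a, b = sorted(tpr)
    return tpr[a] - tpr[b]
```

Reweighing, as investigated in the study, adjusts instance weights during training so that these gaps shrink; the metrics themselves are computed the same way before and after.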

6.
Explor Med ; 2: 110-121, 2021.
Article in English | MEDLINE | ID: mdl-34263257

ABSTRACT

AIMS: Reduced pre-operative cognitive functioning in older adults is a risk factor for postoperative complications, but it is unknown if preoperative digitally-acquired clock drawing test (CDT) cognitive screening variables, which allow for more nuanced examination of patient performance, may predict lengthier hospital stay and greater cost of hospital care. This issue is particularly relevant for older adults undergoing transcatheter aortic valve replacement (TAVR), as this surgical procedure is chosen for intermediate-risk older adults needing aortic replacement. This proof of concept research explored if specific latency and graphomotor variables indicative of planning from digitally-acquired command and copy clock drawing would predict post-TAVR duration and cost of hospitalization, over and above age, education, American Society of Anesthesiologists (ASA) physical status classification score, and frailty. METHODS: From January 2018 to December 2019, 162 out of 190 individuals electing TAVR completed digital clock drawing as part of a hospital-wide cognitive screening program. Separate hierarchical regressions were computed for the command and copy conditions of the CDT and assessed how a priori-selected clock drawing metrics (total time to completion, ideal digit placement difference, and hour hand distance from center; included within the same block) incrementally predicted outcome, as measured by R2 change significance values. RESULTS: Above and beyond age, education, ASA physical status classification score, and frailty, only digitally-acquired CDT copy performance explained significant variance for length of hospital stay (9.5%) and cost of care (8.9%). CONCLUSIONS: Digital variables from the clock copy condition provided predictive value over common demographic and comorbidity variables. We hypothesize this is due to the sensitivity of the copy condition to executive dysfunction, as has been shown in previous studies for subtypes of cognitive impairment.
Individuals undergoing TAVR procedures are often frail and executively compromised due to their cerebrovascular disease. We encourage additional research on the value of digitally-acquired clock drawing within different surgery types. Type of cognitive impairment and the value of digitally-acquired CDT command and copy parameters in other surgeries remain unknown.

7.
J Alzheimers Dis ; 82(1): 47-57, 2021.
Article in English | MEDLINE | ID: mdl-34219737

ABSTRACT

BACKGROUND: Advantages of digital clock drawing metrics for dementia subtype classification need examination. OBJECTIVE: To assess how well kinematic, time-based, and visuospatial features extracted from the digital Clock Drawing Test (dCDT) can classify a combined group of Alzheimer's disease/Vascular Dementia patients versus healthy controls (HC), and classify dementia patients with Alzheimer's disease (AD) versus vascular dementia (VaD). METHODS: Healthy, community-dwelling control participants (n = 175), patients diagnosed clinically with Alzheimer's disease (n = 29), and vascular dementia (n = 27) completed the dCDT to command and copy clock drawing conditions. Thirty-seven dCDT command and 37 copy dCDT features were extracted and used with Random Forest classification models. RESULTS: When HC participants were compared to participants with dementia, optimal area under the curve was achieved using models that combined both command and copy dCDT features (AUC = 91.52%). Similarly, when AD versus VaD participants were compared, optimal area under the curve was achieved with models that combined both command and copy features (AUC = 76.94%). Subsequent follow-up analyses of a corpus of 10 variables of interest, determined using a Gini Index, found that groups could be dissociated based on kinematic, time-based, and visuospatial features. CONCLUSION: The dCDT is able to operationally define graphomotor output that cannot be measured using traditional paper and pencil test administration in older healthy controls and participants with dementia. These data suggest that kinematic, time-based, and visuospatial behavior obtained using the dCDT may provide additional neurocognitive biomarkers that may be able to identify and track dementia syndromes.
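Variable selection by Gini index, as in the follow-up analysis above, ranks features by the impurity decrease their splits achieve across the forest. A minimal illustration of the underlying quantity (not the study's actual pipeline):

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared
    class proportions (0 for a pure node, 0.5 for a 50/50 binary split)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_decrease(parent, left, right):
    """Weighted decrease in Gini impurity from splitting `parent` into
    `left` and `right`; random forests average this over all splits on
    a feature to rank variable importance."""
    n = len(parent)
    return gini_impurity(parent) - (
        len(left) / n * gini_impurity(left)
        + len(right) / n * gini_impurity(right)
    )
```

A dCDT feature whose thresholds repeatedly produce large impurity decreases (e.g., separating AD from VaD drawings) accumulates high importance and survives into the 10-variable corpus.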


Subject(s)
Alzheimer Disease/classification , Cognitive Dysfunction/classification , Dementia, Vascular/classification , Digital Technology , Neuropsychological Tests , Visual Perception , Aged , Biomechanical Phenomena , Female , Humans , Male , Middle Aged
8.
J Alzheimers Dis ; 82(1): 59-70, 2021.
Article in English | MEDLINE | ID: mdl-34219739

ABSTRACT

BACKGROUND: Relative to the abundance of publications on dementia and clock drawing, there is limited literature operationalizing 'normal' clock production. OBJECTIVE: To operationalize subtle behavioral patterns seen in normal digital clock drawing to command and copy conditions. METHODS: From two research cohorts of cognitively-well participants age 55 plus who completed digital clock drawing to command and copy conditions (n = 430), we examined variables operationalizing clock face construction, digit placement, clock hand construction, and a variety of time-based, latency measures. Data are stratified by age, education, handedness, and number anchoring. RESULTS: Normative data are provided in supplementary tables. Typical errors reported in clock research with dementia were largely absent. Adults age 55 plus produce symmetric clock faces with one stroke, with minimal overshoot and digit misplacement, and hands with expected hour hand to minute hand ratio. Data suggest digitally acquired graphomotor and latency differences based on handedness, age, education, and anchoring. CONCLUSION: Data provide useful benchmarks from which to assess digital clock drawing performance in Alzheimer's disease and related dementias.


Subject(s)
Benchmarking , Neuropsychological Tests , Aged , Cognition , Female , Humans , Male , Reaction Time , Writing
9.
JMIR Mhealth Uhealth ; 9(5): e23681, 2021 05 03.
Article in English | MEDLINE | ID: mdl-33938809

ABSTRACT

BACKGROUND: Research has shown the feasibility of human activity recognition using wearable accelerometer devices. Different studies have used varying numbers and placements for data collection using sensors. OBJECTIVE: This study aims to compare accuracy performance between multiple and variable placements of accelerometer devices in categorizing the type of physical activity and corresponding energy expenditure in older adults. METHODS: In total, 93 participants (mean age 72.2 years, SD 7.1) completed a total of 32 activities of daily life in a laboratory setting. Activities were classified as sedentary versus nonsedentary, locomotion versus nonlocomotion, and lifestyle versus nonlifestyle activities (eg, leisure walk vs computer work). A portable metabolic unit was worn during each activity to measure metabolic equivalents (METs). Accelerometers were placed on 5 different body positions: wrist, hip, ankle, upper arm, and thigh. Accelerometer data from each body position and combinations of positions were used to develop random forest models to assess activity category recognition accuracy and MET estimation. RESULTS: Model performance for both MET estimation and activity category recognition was strengthened with the use of additional accelerometer devices. However, a single accelerometer on the ankle, upper arm, hip, thigh, or wrist had only a 0.03-0.09 MET increase in prediction error compared with wearing all 5 devices. Balanced accuracy showed similar trends with slight decreases in balanced accuracy for the detection of locomotion (balanced accuracy decrease range 0-0.01), sedentary (balanced accuracy decrease range 0.05-0.13), and lifestyle activities (balanced accuracy decrease range 0.04-0.08) compared with all 5 placements. The accuracy of recognizing activity categories increased with additional placements (accuracy decrease range 0.15-0.29). Notably, the hip was the best single body position for MET estimation and activity category recognition.
CONCLUSIONS: Additional accelerometer devices slightly enhance activity recognition accuracy and MET estimation in older adults. However, given the extra burden of wearing additional devices, single accelerometers with appropriate placement appear to be sufficient for estimating energy expenditure and activity category recognition in older adults.
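Balanced accuracy, the metric reported above, averages sensitivity and specificity so that rare activity categories (e.g., locomotion minutes in a mostly sedentary protocol) are not swamped by common ones. A minimal sketch for binary labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (TPR) and specificity (TNR); unlike raw
    accuracy, it is not inflated by class imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

Comparing this value for a single-placement model against the all-5-placements model yields the "balanced accuracy decrease" ranges quoted in the results.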


Subject(s)
Accelerometry , Exercise , Aged , Energy Metabolism , Human Activities , Humans , Wrist
10.
Article in English | MEDLINE | ID: mdl-33718920

ABSTRACT

Accurate prediction and monitoring of patient health in the intensive care unit can inform shared decisions regarding appropriateness of care delivery, risk-reduction strategies, and intensive care resource use. Traditionally, algorithmic solutions for patient outcome prediction rely solely on data available from electronic health records (EHR). In this pilot study, we explore the benefits of augmenting existing EHR data with novel measurements from wrist-worn activity sensors as part of a clinical environment known as the Intelligent ICU. We implemented temporal deep learning models based on two distinct sources of patient data: (1) routinely measured vital signs from electronic health records, and (2) activity data collected from wearable sensors. As a proxy for illness severity, our models predicted whether patients leaving the intensive care unit would be successfully or unsuccessfully discharged from the hospital. We overcome the challenge of small sample size in our prospective cohort by applying deep transfer learning using EHR data from a much larger cohort of traditional ICU patients. Our experiments quantify added utility of non-traditional measurements for predicting patient health, especially when applying a transfer learning procedure to small novel Intelligent ICU cohorts of critically ill patients.
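The transfer-learning idea above (pretrain on a large traditional-ICU cohort, then fine-tune on the small Intelligent ICU cohort) can be illustrated with a deliberately simplified stand-in: a NumPy logistic regression warm-started from source-cohort weights, on synthetic data. The study itself used temporal deep models; everything below is an illustrative sketch, not their implementation.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Logistic regression by gradient descent; pass `w` to warm-start
    from weights pretrained on a larger cohort (transfer learning)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

rng = np.random.default_rng(0)
# Large "source" cohort (stand-in for traditional ICU EHR data).
Xs = rng.normal(size=(2000, 5))
ys = (Xs @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
# Small "target" cohort with a related labeling rule
# (stand-in for the small Intelligent ICU pilot cohort).
Xt = rng.normal(size=(40, 5))
yt = (Xt @ np.array([1.0, -1.0, 0.4, 0.1, 0.0]) > 0).astype(float)

w_src = train_logreg(Xs, ys)                             # pretrain
w_ft = train_logreg(Xt, yt, w=w_src.copy(), epochs=50)   # fine-tune
```

The warm start matters precisely because the target cohort is too small to estimate the weights from scratch, which mirrors the small-sample challenge the pilot describes.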

11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5657-5660, 2020 07.
Article in English | MEDLINE | ID: mdl-33019260

ABSTRACT

The Clock Drawing Test, where the participant is asked to draw a clock from memory and copy a model clock, is widely used for screening of cognitive impairment. The digital version of the clock test, the digital clock drawing test (dCDT), employs accelerometer and pressure sensors of a digital pen to capture time and pressure information from a participant's performance in a granular digital format. While visual features of the clock drawing test have previously been studied, little is known about the relationship between demographic and cognitive impairment characteristics with dCDT latency and graphomotor features. Here, we examine dCDT feature clusters with respect to sociodemographic and cognitive impairment outcomes. Our results show that the clusters were not significantly different in terms of age and gender, but did significantly differ in terms of education, Mini-Mental State Exam scores, and cognitive impairment diagnoses. This study shows that features extracted from digital clock drawings can provide important information regarding cognitive reserve and cognitive impairments.


Subject(s)
Cognitive Dysfunction , Cognitive Reserve , Cognitive Dysfunction/diagnosis , Humans , Mass Screening , Memory , Neuropsychological Tests
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5696-5699, 2020 07.
Article in English | MEDLINE | ID: mdl-33019268

ABSTRACT

Critical care patients experience varying levels of pain during their stay in the intensive care unit, often requiring administration of analgesics and sedation. Such medications generally exacerbate the already sedentary physical activity profiles of critical care patients, contributing to delayed recovery. Thus, it is important not only to minimize pain levels, but also to optimize analgesic strategies in order to maximize mobility and activity of ICU patients. Currently, we lack an understanding of the relation between pain and physical activity on a granular level. In this study, we examined the relationship between nurse assessed pain scores and physical activity as measured using a wearable accelerometer device. We found that average, standard deviation, and maximum physical activity counts are significantly higher before high pain reports compared to before low pain reports during both daytime and nighttime, while percentage of time spent immobile was not significantly different between the two pain report groups. Clusters detected among patients using extracted physical activity features were significant in adjusted logistic regression analysis for prediction of pain report group.
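The window-level activity features compared above (mean, standard deviation, and maximum counts, plus percent time immobile before a pain report) might be computed along these lines; the immobility threshold and function name are assumptions for illustration, not values from the study:

```python
import numpy as np

def preceding_activity_features(counts, immobile_threshold=0.0):
    """Summary features of accelerometer counts in the window
    preceding a pain report: mean, SD, max, and percent of epochs
    spent immobile (counts at or below `immobile_threshold`)."""
    c = np.asarray(counts, dtype=float)
    return {
        "mean": c.mean(),
        "sd": c.std(),
        "max": c.max(),
        "pct_immobile": 100.0 * np.mean(c <= immobile_threshold),
    }
```

Comparing these feature vectors between windows preceding high- versus low-pain reports (and clustering patients on them) is the kind of analysis the abstract describes.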


Subject(s)
Critical Illness , Pain , Analgesics/therapeutic use , Exercise , Humans , Intensive Care Units , Pain/drug therapy
13.
Sci Rep ; 9(1): 8020, 2019 05 29.
Article in English | MEDLINE | ID: mdl-31142754

ABSTRACT

Currently, many critical care indices are not captured automatically at a granular level, but rather are repetitively assessed by overburdened nurses. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring in the Intensive Care Unit (ICU). As an exemplary prevalent condition, we characterized delirious patients and their environment. We used wearable sensors, light and sound sensors, and a camera to collect data on patients and their environment. We analyzed the collected data to detect and recognize patients' faces, postures, facial action units and expressions, head pose variation, extremity movements, sound pressure levels, light intensity level, and visitation frequency. We found that facial expressions, functional status entailing extremity movement and postures, and environmental factors including the visitation frequency, light and sound pressure levels at night were significantly different between the delirious and non-delirious patients. Our results showed that granular and autonomous monitoring of critically ill patients and their environment is feasible using a noninvasive system, and we demonstrated its potential for characterizing critical care patients and environmental factors.


Subject(s)
Critical Care/methods , Deep Learning , Delirium/diagnosis , Intensive Care Units , Remote Sensing Technology/methods , Accelerometry/instrumentation , Accelerometry/methods , Adult , Aged , Critical Illness/psychology , Delirium/psychology , Feasibility Studies , Female , Humans , Male , Middle Aged , Pilot Projects , Prospective Studies , Remote Sensing Technology/instrumentation , Video Recording , Wearable Electronic Devices
14.
JMIR Mhealth Uhealth ; 7(3): e10044, 2019 03 26.
Article in English | MEDLINE | ID: mdl-30912756

ABSTRACT

BACKGROUND: Chronic pain, including arthritis, affects about 100 million adults in the United States. Complexity and diversity of the pain experience across time and people and its fluctuations across and within days show the need for valid pain reports that do not rely on patients' long-term recall capability. Smartwatches can be used as digital ecological momentary assessment (EMA) tools for real-time collection of pain scores. Smartwatches are generally less expensive than smartphones, are highly portable, and have a simpler user interface, providing an excellent medium for continuous data collection and enabling a higher compliance rate. OBJECTIVE: The aim of this study was to explore the attitudes and perceptions of older adults towards design and technological aspects of a smartwatch framework for measuring patient-reported outcomes (PRO) as an EMA tool. METHODS: A focus group session was conducted to explore the perception of participants towards smartwatch technology and its utility for PRO assessment. Participants included older adults (age 65+), with unilateral or bilateral symptomatic knee osteoarthritis. A preliminary user interface with server communication capability was developed and deployed on 10 Samsung Gear S3 smartwatches and provided to the users during the focus group. Pain was designated as the main PRO, while fatigue, mood, and sleep quality were included as auxiliary PROs. Pre-planned topics included participants' attitude towards the smartwatch technology, usability of the custom-designed app interface, and suitability of the smartwatch technology for PRO assessment. Discussions were transcribed, and content analysis with theme characterization was performed to identify and code the major themes. RESULTS: We recruited 19 participants (age 65+) who consented to take part in the focus group study. The overall attitude of the participants toward the smartwatch technology was positive.
They showed interest in the direct phone-call capability, the availability of extra apps such as weather apps, and sensors for tracking health and wellness such as the accelerometer and heart rate sensor. Nearly three-quarters of participants showed willingness to participate in a one-year study in which they would wear the watch daily. Concerns were raised regarding usability, including accessibility (larger icons), notification customization, and intuitive interface design (unambiguous icons and assessment scales). Participants expressed interest in using smartwatch technology for PRO assessment and the availability of methods for sharing data with health care providers. CONCLUSIONS: All participants had overall positive views of the smartwatch technology for measuring PROs to facilitate patient-provider communications and to provide more targeted treatments and interventions in the future. Usability concerns were the major issues that will require special consideration in future smartwatch PRO user interface designs, especially accessibility issues, notification design, and use of intuitive assessment scales.


Subject(s)
Mobile Applications/standards , Pain Measurement/methods , Perception , Aged , Aged, 80 and over , Female , Focus Groups/methods , Humans , Male , Mobile Applications/statistics & numerical data , Pain Measurement/standards , Patient Reported Outcome Measures , Pilot Projects , Qualitative Research , Technology Assessment, Biomedical/methods
15.
JMIR Mhealth Uhealth ; 7(2): e11270, 2019 02 06.
Article in English | MEDLINE | ID: mdl-30724739

ABSTRACT

BACKGROUND: Wearable accelerometers have greatly improved measurement of physical activity, and the increasing popularity of smartwatches with inherent acceleration data collection suggest their potential use in the physical activity research domain; however, their use needs to be validated. OBJECTIVE: This study aimed to assess the validity of accelerometer data collected from a Samsung Gear S smartwatch (SGS) compared with an ActiGraph GT3X+ (GT3X+) activity monitor. The study aims were to (1) assess SGS validity using a mechanical shaker; (2) assess SGS validity using a treadmill running test; and (3) compare individual activity recognition, location of major body movement detection, activity intensity detection, locomotion recognition, and metabolic equivalent scores (METs) estimation between the SGS and GT3X+. METHODS: To validate and compare the SGS accelerometer data with GT3X+ data, we collected data simultaneously from both devices during highly controlled, mechanically simulated, and less-controlled natural wear conditions. First, SGS and GT3X+ data were simultaneously collected from a mechanical shaker and an individual ambulating on a treadmill. Pearson correlation was calculated for mechanical shaker and treadmill experiments. Finally, SGS and GT3X+ data were simultaneously collected during 15 common daily activities performed by 40 participants (n=12 males, mean age 55.15 [SD 17.8] years). A total of 15 frequency- and time-domain features were extracted from SGS and GT3X+ data. We used these features for training machine learning models on 6 tasks: (1) individual activity recognition, (2) activity intensity detection, (3) locomotion recognition, (4) sedentary activity detection, (5) major body movement location detection, and (6) METs estimation. The classification models included random forest, support vector machines, neural networks, and decision trees. The results were compared between devices. 
We evaluated the effect of different feature extraction window lengths on model accuracy as defined by the percentage of correct classifications. In addition to these classification tasks, we also used the extracted features for METs estimation. RESULTS: Accelerometer data from SGS were highly correlated with the accelerometer data from GT3X+ for all 3 axes, with a correlation ≥.89 for both the shaker test and treadmill test and ≥.70 for all daily activities, except for computer work. Our results for the classification of activity intensity levels, locomotion, sedentary, major body movement location, and individual activity recognition showed overall accuracies of 0.87, 1.00, 0.98, 0.85, and 0.64, respectively. The results were not significantly different between the SGS and GT3X+. The random forest model was the best model for METs estimation (root mean squared error of .71 and r-squared value of .50). CONCLUSIONS: Our results suggest that a commercial brand smartwatch can be used in lieu of validated research grade activity monitors for individual activity recognition, major body movement location detection, activity intensity detection, and locomotion detection tasks.
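The two METs-estimation error metrics reported above, root mean squared error and r-squared, have standard definitions; a minimal sketch of both, computed over observed and predicted MET values:

```python
def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted METs."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def r_squared(y_true, y_pred):
    """Proportion of variance in observed METs explained by the model:
    1 - SS_residual / SS_total."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An RMSE of 0.71 METs means the model's predictions are typically within about three-quarters of one MET of the metabolic-unit reading.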


Subject(s)
Human Activities/psychology , Machine Learning/standards , Recognition, Psychology , Smartphone/standards , Accelerometry/instrumentation , Actigraphy/instrumentation , Female , Human Activities/statistics & numerical data , Humans , Machine Learning/statistics & numerical data , Male , Middle Aged , Smartphone/statistics & numerical data , Validation Studies as Topic
16.
Crit Care Explor ; 1(9): e0027, 2019 Sep.
Article in English | MEDLINE | ID: mdl-32166280

ABSTRACT

We sought to determine the feasibility of using wearable accelerometer devices for determining delirium effects on patients' physical activity patterns and detecting delirium and delirium subtype. DATA SOURCES: PubMed, Embase, and Web of Science. STUDY SELECTION: Screening was performed using predefined search terms to identify original research studies using accelerometer devices for studying physical activity in relation to delirium. DATA EXTRACTION: Key data were extracted from the selected articles. DATA SYNTHESIS: Among the 14 studies identified, there were a total of 315 patients who wore accelerometer devices to record movements related to delirium. Eight studies (57.1%) used accelerometer devices to compare the activity of delirious and nondelirious patients. Delirious patients had lower activity levels, lower restlessness index, higher number of daytime immobility minutes, lower mean activity levels during the day, and higher mean activity levels at night. Delirious patients also had lower actual sleep time, lower sleep efficiency, fewer nighttime minutes resting, fewer minutes resting over 24 hours, and smaller change in activity from day to night. Six studies (42.9%) evaluated the feasibility of using accelerometer devices for detection of delirium and its subtype. Variables including the number of postural changes during daytime and the frequency of ultrashort, short, and continuous movements were significantly different among the nondelirium group and the three delirium subtypes. CONCLUSIONS: The results from the studies using accelerometer devices in studying delirium demonstrate that accelerometer devices can potentially detect the differences between delirious and nondelirious patients, detect delirium, and determine delirium subtype.
We suggest the following directions as the next steps for future studies using accelerometer devices for predicting delirium: benchmark studies with longer data collection, larger and more diverse population size, incorporating related factors (e.g., medications), and evaluating delirium subtype and severity.

17.
J Biomed Inform ; 89: 29-40, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30414474

ABSTRACT

Smartphone and smartwatch technology is changing the transmission and monitoring landscape, allowing patients and research participants to communicate their healthcare information in real time. Flexible, bidirectional, real-time control of communication enables a rich set of healthcare applications that can interact with the participant and adapt dynamically to their changing environment. Additionally, smartwatches have a variety of sensors suitable for collecting physical activity and location data. The combination of these features makes it possible to transmit the collected data to a remote server and thus to monitor physical activity, and potentially social activity, in real time. As smartwatches exhibit high user acceptability and increasing popularity, they are ideal devices for monitoring activities over extended periods to investigate physical activity patterns in free-living conditions and their relationship with seemingly randomly occurring illnesses, which has remained a challenge in the current literature. Therefore, the purpose of this study was to develop a smartwatch-based framework for real-time and online assessment and mobility monitoring (ROAMM). The proposed ROAMM framework includes a smartwatch application and a server: the application collects and preprocesses data, while the server stores and retrieves data and supports remote monitoring and other administrative purposes. By integrating sensor-based and user-reported data collection, the ROAMM framework allows for data visualization and summary statistics in real time.
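The collect, preprocess, and upload design described for ROAMM can be sketched as follows. This is a minimal illustration only: the epoch-summary fields, participant identifier, and JSON payload shape are assumptions for the sketch, not the actual ROAMM schema or API.

```python
import json
from statistics import mean

def summarize_epoch(samples):
    """Collapse raw (x, y, z) accelerometer samples into one epoch record.

    Hypothetical preprocessing step on the watch, mirroring the
    collect-then-transmit design described for the smartwatch application.
    """
    # Vector magnitude of each triaxial sample
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    return {"n_samples": len(samples), "mean_magnitude": mean(mags)}

def build_upload_payload(participant_id, epochs):
    """Package epoch summaries as JSON for transmission to the server."""
    return json.dumps({"participant": participant_id, "epochs": epochs})

# Three samples, each with vector magnitude 1.0 g
samples = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.6, 0.8, 0.0)]
epoch = summarize_epoch(samples)
payload = build_upload_payload("P001", [epoch])
```

Summarizing on the device before transmission keeps the upload small, which matters for battery life and intermittent connectivity in free-living monitoring.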


Subject(s)
Exercise , Mobile Applications , Monitoring, Physiologic/instrumentation , Smartphone , Accelerometry/instrumentation , Humans
18.
Article in English | MEDLINE | ID: mdl-30411088

ABSTRACT

Early mobilization of critically ill patients in the Intensive Care Unit (ICU) can prevent adverse outcomes such as delirium and post-discharge physical impairment. To date, no studies have characterized the activity of sepsis patients in the ICU using granular actigraphy data. This study characterizes the activity of sepsis patients in the ICU to aid in future mobility interventions. We compared the actigraphy features of 24 patients in four groups: Chronic Critical Illness (CCI) sepsis patients in the ICU, Rapid Recovery (RR) sepsis patients in the ICU, non-sepsis ICU patients (control-ICU), and healthy subjects. We used a total of 15 statistical and circadian rhythm features extracted from the patients' actigraphy data collected over a five-day period. Our results show that the four groups differ significantly in terms of activity features. In addition, we observed that the CCI and control-ICU patients show less regularity in their circadian rhythm compared to the RR patients. These results show the potential of using actigraphy data for guiding mobilization practices, classifying sepsis recovery subtype, and tracking patients' recovery.
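The kind of feature extraction described above can be sketched with a couple of simple statistical features plus interdaily stability, a common circadian-regularity metric. The abstract does not list the study's 15 features, so the selection here is an illustrative assumption, not the authors' feature set.

```python
import numpy as np

def actigraphy_features(counts, epochs_per_hour=60):
    """Extract simple statistical and circadian-regularity features
    from an activity-count series (one value per epoch).

    Illustrative sketch only; interdaily stability is the ratio of the
    variance of the average 24-h profile to the total hourly variance
    (1.0 = perfectly repeated daily pattern).
    """
    counts = np.asarray(counts, dtype=float)
    # Aggregate epochs into hourly means, dropping a trailing partial hour
    n_hours = len(counts) // epochs_per_hour
    hourly = counts[: n_hours * epochs_per_hour].reshape(-1, epochs_per_hour).mean(axis=1)
    # Average 24-h profile across complete days
    n_days = len(hourly) // 24
    hourly = hourly[: n_days * 24]
    profile = hourly.reshape(n_days, 24).mean(axis=0)
    grand_mean = hourly.mean()
    interdaily_stability = (
        ((profile - grand_mean) ** 2).mean() / ((hourly - grand_mean) ** 2).mean()
    )
    return {
        "mean_activity": counts.mean(),
        "activity_sd": counts.std(),
        "interdaily_stability": interdaily_stability,
    }
```

A series that repeats the same 24-hour pattern every day scores an interdaily stability of 1.0; the less regular circadian rhythm reported for the CCI and control-ICU groups would correspond to lower values.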

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 4106-4109, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441259

ABSTRACT

Physiological time series such as vital signs contain important information about a patient and are used in many clinical applications; however, they suffer from missing values and sampling irregularity. In recent years, Gaussian Processes have been used as sophisticated nonlinear imputation methods for time series, but comparisons with simpler methods are lacking. This paper compares five methods for imputing missing data in physiological time series: linear interpolation as the baseline, cubic spline interpolation, and three nonlinear methods: Single-Task Gaussian Processes, Multi-Task Gaussian Processes, and Multivariate Imputation by Chained Equations (MICE). We used seven intraoperative physiological time series from 27,481 patients. Piecewise aggregate approximation was employed as a dimensionality-reduction and resampling strategy. Linear interpolation and cubic spline interpolation showed overall superiority in predicting the missing values compared to the more complex models. The performance of the kernel-based methods suggests that they are highly sensitive to the kernel width and require incorporation of domain knowledge for fine-tuning.
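The two simple baselines from this comparison can be sketched directly; the synthetic sine trace below stands in for a vital-sign signal (the GP and MICE methods would need additional libraries and are omitted).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def impute(t, y, mask, method="linear"):
    """Fill samples flagged missing (mask == True) from the observed ones.

    Minimal sketch of the two simple baselines compared in the paper:
    linear interpolation and cubic spline interpolation.
    """
    t_obs, y_obs = t[~mask], y[~mask]
    if method == "linear":
        return np.interp(t[mask], t_obs, y_obs)
    if method == "cubic":
        return CubicSpline(t_obs, y_obs)(t[mask])
    raise ValueError(f"unknown method: {method}")

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
y = np.sin(t)                          # stand-in for a smooth vital-sign trace
mask = np.zeros_like(t, dtype=bool)
mask[rng.choice(200, size=40, replace=False)] = True
mask[[0, -1]] = False                  # keep endpoints observed

# Mean squared error of each method on the held-out (masked) points
mses = {
    m: float(np.mean((impute(t, y, mask, m) - y[mask]) ** 2))
    for m in ("linear", "cubic")
}
```

Masking known values and scoring the reconstruction, as done here, is the standard way to compare imputation methods when the true signal is available.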


Subject(s)
Normal Distribution , Humans , Linear Models
20.
Food Funct ; 9(8): 4469-4479, 2018 Aug 15.
Article in English | MEDLINE | ID: mdl-30073224

ABSTRACT

Without appropriate interventions, prediabetes typically progresses to type II diabetes. Eggs are a rich source of important nutrients including protein, vitamins, minerals, carotenoids, and lecithin. This 12-week, parallel, randomized controlled trial included 42 overweight or obese individuals between the ages of 40 and 75 years with prediabetes or type II diabetes. Participants were randomly assigned to receive either one large egg per day or an equivalent amount of egg substitute for 12 weeks. Blood samples were obtained at all time points to analyze lipid profile and biomarkers associated with glycemic control. Regular egg consumption resulted in improvements in fasting blood glucose, which was significantly (P = 0.05) reduced by 4.4% at the final visit in the egg group. Participants in the egg group had significantly (P = 0.01) lower levels of homeostatic model assessment of insulin resistance (HOMA-IR) at all visits. In the egg group, ATP-binding cassette protein family A1 (ABCA1) was significantly higher at the 6-week visit (0.78 ± 0.21 vs. 0.28 ± 0.05 mg dL-1, P < 0.001) and tended to be higher at the final visit (0.62 ± 0.11 vs. 0.55 ± 0.18 mg dL-1, P = 0.1). The mean apolipoprotein A1 (apo A1) level was also significantly higher at the final visit in the egg group compared to the control (147.43 ± 5.34 vs. 142.81 ± 5.09 mg dL-1, P = 0.01). There were no significant changes in total cholesterol and low-density lipoprotein cholesterol (LDL-C) levels. Daily consumption of one large egg may reduce the risk of diabetes without any adverse effects on lipid profiles in individuals with prediabetes or type II diabetes.


Subject(s)
Diabetes Mellitus, Type 2/diet therapy , Eggs/analysis , Insulin/blood , Prediabetic State/drug therapy , ATP Binding Cassette Transporter 1/genetics , ATP Binding Cassette Transporter 1/metabolism , Adult , Aged , Apolipoprotein A-I/blood , Blood Glucose/metabolism , Cholesterol, LDL/blood , Diabetes Mellitus, Type 2/metabolism , Female , Glycemic Index , Humans , Insulin Resistance , Male , Middle Aged , Prediabetic State/metabolism