Results 1 - 11 of 11
1.
medRxiv ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38798457

ABSTRACT

Importance: Randomized clinical trials (RCTs) are the standard for defining an evidence-based approach to managing disease, but their generalizability to real-world patients remains challenging to quantify. Objective: To develop a multidimensional patient variable mapping algorithm to quantify the similarity and representation of electronic health record (EHR) patients corresponding to an RCT and to estimate the putative treatment effects in real-world settings based on individual treatment effects observed in an RCT. Design: A retrospective analysis of the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist Trial (TOPCAT; 2006-2012) and a multi-hospital patient cohort from the electronic health record (EHR) in the Yale New Haven Hospital System (YNHHS; 2015-2023). Setting: A multicenter international RCT (TOPCAT) and a multi-hospital patient cohort (YNHHS). Participants: All TOPCAT participants and patients with heart failure with preserved ejection fraction (HFpEF) and ≥1 hospitalization within YNHHS. Exposures: 63 pre-randomization characteristics measured across the TOPCAT and YNHHS cohorts. Main Outcomes and Measures: Real-world generalizability of the TOPCAT RCT using a multidimensional phenotypic distance metric between the TOPCAT and YNHHS cohorts. Estimation of the individualized treatment effect of spironolactone use on all-cause mortality within the YNHHS cohort based on phenotypic distance from the TOPCAT cohort. Results: There were 3,445 patients in TOPCAT and 11,712 HFpEF patients across five hospital sites. Across the 63 TOPCAT variables mapped by clinicians to the EHR, there were larger differences between TOPCAT and each of the 5 EHR sites (median standardized mean difference [SMD] 0.200, IQR 0.037-0.410) than between the 5 EHR sites (median SMD 0.062, IQR 0.010-0.130). The synthesis of these differences across covariates using our multidimensional similarity score also suggested substantial phenotypic dissimilarity between the TOPCAT and EHR cohorts.
By phenotypic distance, a majority (55%) of TOPCAT participants were closer to each other than to any individual EHR patient. Using a TOPCAT-derived model of individualized treatment benefit from spironolactone, those predicted to derive benefit and receiving spironolactone in the EHR cohorts had substantially better outcomes than those predicted to derive benefit but not receiving the medication (HR 0.74, 95% CI 0.62-0.89). Conclusions and Relevance: We propose a novel approach to evaluating the real-world representativeness of RCT participants against corresponding patients in the EHR across the full multidimensional spectrum of the represented phenotypes. This enables the evaluation of the implications of RCTs for real-world patients. KEY POINTS: Question: How can we examine the multidimensional generalizability of randomized clinical trials (RCTs) to real-world patient populations? Findings: We demonstrate a novel phenotypic distance metric comparing an RCT to real-world populations in a large multicenter RCT of heart failure patients and the corresponding patients in multisite electronic health records (EHRs). Across 63 pre-randomization characteristics, pairwise assessments of members of the RCT and EHR cohorts were more discordant from each other than were members of the EHR cohort (median standardized mean difference 0.200 [0.037-0.410] vs 0.062 [0.010-0.130]), with a majority (55%) of RCT participants closer to each other than to any individual EHR patient. The approach also enabled the quantification of expected real-world outcomes based on effects observed in the RCT. Meaning: A multidimensional phenotypic distance metric quantifies the generalizability of RCTs to a given population while also offering an avenue to examine expected real-world patient outcomes based on treatment effects observed in the RCT.
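The covariate-level comparison above rests on the standardized mean difference (SMD): the absolute difference in cohort means over the pooled standard deviation. A minimal sketch, not the study's code; the cohort arrays below are made-up values for a single hypothetical continuous covariate:

```python
import numpy as np

def standardized_mean_difference(a, b):
    """SMD between two cohorts for one continuous covariate,
    using the pooled (sample) standard deviation."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / pooled_sd

# Hypothetical covariate values for an RCT arm vs. an EHR cohort
rct_values = np.array([62.0, 70.0, 58.0, 65.0, 61.0])
ehr_values = np.array([74.0, 80.0, 69.0, 77.0, 72.0])
smd = standardized_mean_difference(rct_values, ehr_values)
```

In practice one such SMD is computed per covariate (here, across each of the 63 mapped variables), and the distribution of SMDs is then summarized by its median and IQR.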

2.
Eur Heart J Digit Health ; 5(3): 303-313, 2024 May.
Article in English | MEDLINE | ID: mdl-38774380

ABSTRACT

Aims: An algorithmic strategy for anatomical vs. functional testing in suspected coronary artery disease (CAD) (Anatomical vs. Stress teSting decIsion Support Tool; ASSIST) is associated with better outcomes than random selection. However, in the real world, this decision is rarely random. We explored the agreement between a provider-driven vs. simulated algorithmic approach to cardiac testing and its association with outcomes across multinational cohorts. Methods and results: In two cohorts of functional vs. anatomical testing in a US hospital health system [Yale; 2013-2023; n = 130 196 (97.0%) vs. n = 4020 (3.0%), respectively], and the UK Biobank [n = 3320 (85.1%) vs. n = 581 (14.9%), respectively], we examined outcomes stratified by agreement between the real-world and ASSIST-recommended strategies. Younger age, female sex, Black race, and diabetes history were independently associated with lower odds of ASSIST-aligned testing. Over a median of 4.9 (interquartile range [IQR]: 2.4-7.1) and 5.4 (IQR: 2.6-8.8) years, referral to the ASSIST-recommended strategy was associated with a lower risk of acute myocardial infarction or death (adjusted hazard ratio: 0.81, 95% confidence interval [CI] 0.77-0.85, P < 0.001 and 0.74 [95% CI 0.60-0.90], P = 0.003, respectively), an effect that remained significant across years, test types, and risk profiles. In post hoc analyses of anatomical-first testing in the Prospective Multicentre Imaging Study for Evaluation of Chest Pain (PROMISE) trial, alignment with ASSIST was independently associated with a 17% and 30% higher risk of detecting CAD in any vessel or the left main artery/proximal left anterior descending coronary artery, respectively. Conclusion: In cohorts where historical practices largely favour functional testing, alignment with an algorithmic approach to cardiac testing defined by ASSIST was associated with a lower risk of adverse outcomes.
This highlights the potential utility of a data-driven approach in the diagnostic management of CAD.

3.
medRxiv ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38633808

ABSTRACT

Background: Current risk stratification strategies for heart failure (HF) require either specific blood-based biomarkers or comprehensive clinical evaluation. In this study, we evaluated the use of artificial intelligence (AI) applied to images of electrocardiograms (ECGs) to predict HF risk. Methods: Across multinational longitudinal cohorts in the integrated Yale New Haven Health System (YNHHS) and in the population-based UK Biobank (UKB) and Brazilian Longitudinal Study of Adult Health (ELSA-Brasil), we identified individuals without HF at baseline. Incident HF was defined based on the first occurrence of an HF hospitalization. We evaluated an AI-ECG model that defines the cross-sectional probability of left ventricular dysfunction from a single image of a 12-lead ECG and its association with incident HF. We accounted for the competing risk of death using the Fine-Gray subdistribution model and evaluated discrimination using Harrell's c-statistic. The pooled cohort equations to prevent HF (PCP-HF) were used as a comparator for estimating incident HF risk. Results: Among 231,285 individuals at YNHHS, 4472 had a primary HF hospitalization over a median follow-up of 4.5 (IQR 2.5-6.6) years. In UKB and ELSA-Brasil, among 42,741 and 13,454 people, 46 and 31 developed HF over a follow-up of 3.1 (2.1-4.5) and 4.2 (3.7-4.5) years, respectively. A positive AI-ECG screen portended a 4-fold higher risk of incident HF among YNHHS patients (age- and sex-adjusted HR [aHR] 3.88 [95% CI, 3.63-4.14]). In UKB and ELSA-Brasil, a positive screen portended 13- and 24-fold higher hazards of incident HF, respectively (aHR: UKB, 12.85 [6.87-24.02]; ELSA-Brasil, 23.50 [11.09-49.81]). The association was consistent after accounting for comorbidities and the competing risk of death. Higher model output probabilities were progressively associated with a higher risk for HF. The model's discrimination for incident HF was 0.718 in YNHHS, 0.769 in UKB, and 0.810 in ELSA-Brasil.
Across cohorts, incorporating model probability with PCP-HF yielded a significant improvement in discrimination over PCP-HF alone. Conclusions: An AI model applied to images of 12-lead ECGs can identify those at elevated risk of HF across multinational cohorts. As a digital biomarker of HF risk that requires just an ECG image, this AI-ECG approach can enable scalable and efficient screening for HF risk.
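The discrimination measure reported above, Harrell's c-statistic, reduces to a pairwise concordance count: among pairs where one subject is known to fail first, the fraction in which that subject also carries the higher risk score. A minimal sketch with entirely hypothetical survival data; ties in event times and the censoring subtleties handled by production implementations are glossed over:

```python
def harrell_c(times, events, scores):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i has an observed event strictly before subject j's time;
    it is concordant when i also has the higher risk score
    (score ties count as half-concordant)."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical follow-up times (years), event indicators, and risk scores
times = [2.0, 4.0, 3.0, 5.0]
events = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.2]
c_index = harrell_c(times, events, scores)  # perfectly concordant here
```

A c-statistic of 0.5 indicates chance-level discrimination and 1.0 perfect ranking, which is how values such as 0.718-0.810 above should be read.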

4.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562897

ABSTRACT

Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. Objectives: To examine an artificial intelligence (AI)-enhanced electrocardiographic (AI-ECG) surrogate for imaging risk biomarkers and its association with CTRCD. Methods: Across a five-hospital U.S.-based health system (2013-2023), we identified patients with breast cancer or non-Hodgkin lymphoma (NHL) who received anthracyclines (AC) and/or trastuzumab (TZM), and a control cohort receiving immune checkpoint inhibitors (ICI). We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to ECG images (≥0.1, positive screen) and explored its association with (i) global longitudinal strain (GLS) measured within 15 days (n=7,271 pairs); (ii) future CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF] <50%); and (iii) LVEF <40%. In the ICI cohort, we correlated baseline AI-ECG-LVSD predictions with downstream myocarditis. Results: Higher AI-ECG LVSD predictions were associated with worse GLS (-18% [IQR: -20 to -17%] for predictions <0.1 to -12% [IQR: -15 to -9%] for predictions ≥0.5; p<0.001). In 1,308 patients receiving AC/TZM (age 59 [IQR: 49-67] years, 999 [76.4%] women, follow-up 80 [IQR: 42-115] months), a positive baseline AI-ECG LVSD screen was associated with ~2-fold and ~4.8-fold increases in the incidence of the composite CTRCD endpoint (adj. HR 2.22 [95% CI: 1.63-3.02]) and LVEF <40% (adj. HR 4.76 [95% CI: 2.62-8.66]), respectively. Among 2,056 patients receiving ICI (age 65 [IQR: 57-73] years, 913 [44.4%] women, follow-up 63 [IQR: 28-99] months), AI-ECG predictions were not associated with ICI myocarditis (adj. HR 1.36 [95% CI: 0.47-3.93]). Conclusion: AI applied to baseline ECG images can stratify the risk of CTRCD associated with anthracycline or trastuzumab exposure.

5.
medRxiv ; 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38405776

ABSTRACT

Timely and accurate assessment of electrocardiograms (ECGs) is crucial for diagnosing, triaging, and clinically managing patients. Current workflows rely on computerized ECG interpretation from rule-based tools built into signal acquisition systems, which have limited accuracy and flexibility. In low-resource settings where these computerized interpretations are unavailable, specialists must review every single ECG. High-quality automated interpretation is even more essential in such settings, where limited access to experts places a higher burden of accuracy on automated reads. Artificial intelligence (AI)-based systems offer the prospect of greater accuracy yet are frequently limited to a narrow range of conditions and do not replicate the full diagnostic range. Moreover, these models often require raw signal data, which are unavailable to physicians and necessitate costly technical integrations that are currently limited. To overcome these challenges, we developed and validated a format-independent vision encoder-decoder model, ECG-GPT, that can generate free-text, expert-level diagnosis statements directly from ECG images. The model shows robust performance, validated on 2.6 million ECGs across 6 geographically distinct health settings: (1) two large and diverse US health systems, the Yale New Haven and Mount Sinai Health Systems; (2) a consecutive ECG dataset from a central ECG repository in Minas Gerais, Brazil; (3) the prospective cohort study UK Biobank; (4) a Germany-based, publicly available repository, PTB-XL; and (5) a community hospital in Missouri. The model demonstrated consistently high performance (AUROC ≥0.81) across a wide range of rhythm and conduction disorders.
This can be easily accessed via a web-based application capable of receiving ECG images and represents a scalable and accessible strategy for generating accurate, expert-level reports from images of ECGs, enabling accurate triage of patients globally, especially in low-resource settings.

6.
medRxiv ; 2024 Mar 03.
Article in English | MEDLINE | ID: mdl-38293023

ABSTRACT

Background: Artificial intelligence-enhanced electrocardiography (AI-ECG) can identify hypertrophic cardiomyopathy (HCM) on 12-lead ECGs and offers a novel way to monitor treatment response. While surgical or percutaneous septal reduction therapy (SRT) represented the initial HCM therapies, mavacamten offers an oral alternative. Objective: To evaluate the biological response to SRT and mavacamten. Methods: We applied an AI-ECG model for HCM detection to ECG images from patients who underwent SRT across three sites: Yale New Haven Health System (YNHHS), Cleveland Clinic Foundation (CCF), and Atlantic Health System (AHS); and to ECG images from patients receiving mavacamten at YNHHS. Results: A total of 70 patients underwent SRT at YNHHS, 100 at CCF, and 145 at AHS. At YNHHS, there was no significant change in the AI-ECG HCM score before versus after SRT (pre-SRT: median 0.55 [IQR 0.24-0.77] vs post-SRT: 0.59 [0.40-0.75]). The AI-ECG HCM scores also did not improve post-SRT at CCF (0.61 [0.32-0.79] vs 0.69 [0.52-0.79]) or AHS (0.52 [0.35-0.69] vs 0.61 [0.49-0.70]). Among 36 YNHHS patients on mavacamten therapy, the median AI-ECG score before starting mavacamten was 0.41 (0.22-0.77), which decreased significantly to 0.28 (0.11-0.50; p<0.001 by Wilcoxon signed-rank test) at the end of a median follow-up period of 237 days. Conclusions: The lack of improvement in the AI-based HCM score with SRT, in contrast to a significant decrease with mavacamten, suggests a potential role for AI-ECG in the serial monitoring of pathophysiological improvement in HCM at the point of care using ECG images.

7.
JACC Adv ; 2(7)2023 Sep.
Article in English | MEDLINE | ID: mdl-38094515

ABSTRACT

BACKGROUND: Smartphone-based health applications are increasingly popular, but their real-world use for cardiovascular risk management remains poorly understood. OBJECTIVES: The purpose of this study was to investigate the patterns of tracking health goals using smart devices, including smartphones and/or tablets, in the United States. METHODS: Using the nationally representative Health Information National Trends Survey for 2017 to 2020, we examined self-reported tracking of health-related goals (optimizing body weight, increasing physical activity, and/or quitting smoking) using smart devices among those with cardiovascular disease (CVD) or the cardiovascular risk factors of hypertension, diabetes, obesity, and/or smoking. Survey analyses were used to obtain national estimates of use patterns and to identify features associated with the use of these devices for tracking health goals. RESULTS: Of 16,092 Health Information National Trends Survey participants, 10,660 had CVD or cardiovascular risk factors, representing 154.2 million (95% CI: 149.2-159.3 million) U.S. adults. Among the general U.S. adult population, 46% (95% CI: 44%-47%) tracked their health goals using their smart devices, compared with 42% (95% CI: 40%-43%) of those with or at risk of CVD. Younger age, female sex, Black race, higher educational attainment, and greater income were independently associated with tracking of health goals using smart devices. CONCLUSIONS: Two in 5 U.S. adults with or at risk of CVD use their smart devices to track health goals. While smart devices represent a potential avenue to improve care, their lower use among older and low-income individuals, who are at higher risk of adverse cardiovascular outcomes, underscores the need to design digital health interventions so that they do not exacerbate existing disparities.

8.
medRxiv ; 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37790355

ABSTRACT

Importance: Elevated lipoprotein(a) [Lp(a)] is associated with atherosclerotic cardiovascular disease (ASCVD) and major adverse cardiovascular events (MACE). However, fewer than 0.5% of patients undergo Lp(a) testing, limiting the evaluation and use of novel targeted therapeutics currently under development. Objective: We developed and validated a machine learning model to enable targeted screening for elevated Lp(a). Design: Cross-sectional. Setting: 4 multinational population-based cohorts. Participants: We included 456,815 participants from the UK Biobank (UKB), the largest cohort with protocolized Lp(a) testing for model development. The model's external validity was assessed in Atherosclerosis Risk in Communities (ARIC) (N=14,484), Coronary Artery Risk Development in Young Adults (CARDIA) (N=4,124), and Multi-Ethnic Study of Atherosclerosis (MESA) (N=4,672) cohorts. Exposures: Demographics, medications, diagnoses, procedures, vitals, and laboratory measurements from UKB and linked electronic health records (EHR) were candidate input features to predict high Lp(a). We used the pooled cohort equations (PCE), an ASCVD risk marker, as a comparator to identify elevated Lp(a). Main Outcomes and Measures: The main outcome was elevated Lp(a) (≥150 nmol/L), and the number-needed-to-test (NNT) to find one case with elevated Lp(a). We explored the association of the model's prediction probabilities with all-cause and cardiovascular mortality, and MACE. Results: The Algorithmic Risk Inspection for Screening Elevated Lp(a) (ARISE) used low-density lipoprotein cholesterol, statin use, triglycerides, high-density lipoprotein cholesterol, history of ASCVD, and anti-hypertensive medication use as input features. ARISE outperformed cardiovascular risk stratification through PCE for predicting elevated Lp(a) with a significantly lower NNT (4.0 versus 8.0 [with or without PCE], P<0.001). 
ARISE performed comparably across external validation cohorts and subgroups, reducing the NNT by up to 67.3%, depending on the probability threshold. Over a median follow-up of 4.2 years, a high ARISE probability was also associated with a greater hazard of all-cause death and MACE (age/sex-adjusted hazard ratio [aHR], 1.35 and 1.38, respectively, P<0.001), with a greater increase in cardiovascular mortality (aHR, 2.17, P<0.001). Conclusions and Relevance: ARISE optimizes screening for elevated Lp(a) using commonly available clinical features. ARISE can be deployed in EHR and other settings to encourage greater Lp(a) testing and to improve the identification of individuals eligible for novel targeted therapeutics in trials. KEY POINTS: Question: How can we optimize the identification of individuals with elevated lipoprotein(a) [Lp(a)] who may be eligible for novel targeted therapeutics? Findings: Using 4 multinational population-based cohorts, we developed and validated a machine learning model, Algorithmic Risk Inspection for Screening Elevated Lp(a) (ARISE), to enable targeted screening for elevated Lp(a). In contrast to the pooled cohort equations, which do not identify those with elevated Lp(a), ARISE reduces the "number-needed-to-test" to find one case with elevated Lp(a) by up to 67.3%. Meaning: ARISE can be deployed in electronic health records and other settings to enable a greater yield of Lp(a) testing, thereby improving the identification of individuals with elevated Lp(a).
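The number-needed-to-test (NNT) used above is simply the reciprocal of the screening yield: the average number of flagged individuals who must be tested to find one case with elevated Lp(a). A toy sketch, with counts invented purely to echo the 8.0-vs-4.0 comparison reported in the abstract:

```python
def number_needed_to_test(n_tested, n_cases_found):
    """Average number of tests needed to find one case: the
    reciprocal of the yield (cases found per person tested)."""
    if n_cases_found == 0:
        return float("inf")
    return n_tested / n_cases_found

# Hypothetical yields: untargeted screening finds 1 case per 8 tests,
# while model-targeted screening finds 1 case per 4 tests.
nnt_untargeted = number_needed_to_test(800, 100)  # 8.0
nnt_targeted = number_needed_to_test(400, 100)    # 4.0
relative_reduction = 1 - nnt_targeted / nnt_untargeted  # 0.5, i.e. 50%
```

Halving the NNT in this hypothetical corresponds to a 50% relative reduction; the up-to-67.3% figure reported above is the same quantity at the study's chosen probability thresholds.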

9.
Circulation ; 148(9): 765-777, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37489538

ABSTRACT

BACKGROUND: Left ventricular (LV) systolic dysfunction is associated with a >8-fold increased risk of heart failure and a 2-fold risk of premature death. The use of ECG signals in screening for LV systolic dysfunction is limited by their availability to clinicians. We developed a novel deep learning-based approach that can use ECG images for the screening of LV systolic dysfunction. METHODS: Using 12-lead ECGs plotted in multiple different formats, and corresponding echocardiographic data recorded within 15 days from the Yale New Haven Hospital between 2015 and 2021, we developed a convolutional neural network algorithm to detect an LV ejection fraction <40%. The model was validated within clinical settings at Yale New Haven Hospital and externally on ECG images from Cedars Sinai Medical Center in Los Angeles, CA; Lake Regional Hospital in Osage Beach, MO; Memorial Hermann Southeast Hospital in Houston, TX; and Methodist Cardiology Clinic of San Antonio, TX. In addition, it was validated in the prospective Brazilian Longitudinal Study of Adult Health. Gradient-weighted class activation mapping was used to localize class-discriminating signals on ECG images. RESULTS: Overall, 385 601 ECGs with paired echocardiograms were used for model development. The model demonstrated high discrimination across various ECG image formats and calibrations in internal validation (area under the receiver operating characteristic curve [AUROC], 0.91; area under the precision-recall curve [AUPRC], 0.55) and in external sets of ECG images from Cedars Sinai (AUROC, 0.90 and AUPRC, 0.53), outpatient Yale New Haven Hospital clinics (AUROC, 0.94 and AUPRC, 0.77), Lake Regional Hospital (AUROC, 0.90 and AUPRC, 0.88), Memorial Hermann Southeast Hospital (AUROC, 0.91 and AUPRC, 0.88), Methodist Cardiology Clinic (AUROC, 0.90 and AUPRC, 0.74), and the Brazilian Longitudinal Study of Adult Health cohort (AUROC, 0.95 and AUPRC, 0.45).
An ECG suggestive of LV systolic dysfunction portended >27-fold higher odds of LV systolic dysfunction on transthoracic echocardiogram (odds ratio, 27.5 [95% CI, 22.3-33.9] in the held-out set). Class-discriminative patterns localized to the anterior and anteroseptal leads (V2 and V3), corresponding to the left ventricle regardless of the ECG layout. A positive ECG screen in individuals with an LV ejection fraction ≥40% at the time of initial assessment was associated with a 3.9-fold increased risk of developing incident LV systolic dysfunction in the future (hazard ratio, 3.9 [95% CI, 3.3-4.7]; median follow-up, 3.2 years). CONCLUSIONS: We developed and externally validated a deep learning model that identifies LV systolic dysfunction from ECG images. This approach represents an automated and accessible screening strategy for LV systolic dysfunction, particularly in low-resource settings.
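The >27-fold figure above is an odds ratio from a 2x2 table of screen result versus echocardiographic finding. A minimal sketch of that computation with entirely hypothetical counts (not the study's data):

```python
def odds_ratio(screen_pos_dz, screen_pos_no_dz, screen_neg_dz, screen_neg_no_dz):
    """Odds ratio from a 2x2 table: odds of disease among
    screen-positives divided by odds among screen-negatives."""
    return (screen_pos_dz / screen_pos_no_dz) / (screen_neg_dz / screen_neg_no_dz)

# Hypothetical table: 55 of 100 screen-positive patients and 10 of 100
# screen-negative patients have LV systolic dysfunction on echo.
or_estimate = odds_ratio(55, 45, 10, 90)  # ≈ 11.0
```

In this made-up table a positive screen carries about 11-fold higher odds of dysfunction; the study's reported 27.5 reflects its actual held-out counts.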


Subject(s)
Electrocardiography , Ventricular Dysfunction, Left , Adult , Humans , Prospective Studies , Longitudinal Studies , Ventricular Dysfunction, Left/diagnostic imaging , Ventricular Function, Left/physiology
10.
NPJ Digit Med ; 6(1): 124, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37433874

ABSTRACT

Artificial intelligence (AI) can detect left ventricular systolic dysfunction (LVSD) from electrocardiograms (ECGs). Wearable devices could allow for broad AI-based screening but frequently obtain noisy ECGs. We report a novel strategy that automates the detection of hidden cardiovascular diseases, such as LVSD, adapted for noisy single-lead ECGs obtained on wearable and portable devices. We use 385,601 ECGs for the development of a standard and a noise-adapted model. For the noise-adapted model, ECGs are augmented during training with random Gaussian noise within four distinct frequency ranges, each emulating real-world noise sources. Both models perform comparably on standard ECGs, with an AUROC of 0.90. The noise-adapted model performs significantly better on the same test set augmented with four distinct real-world noise recordings at multiple signal-to-noise ratios (SNRs), including noise isolated from a portable ECG device recording. The standard and noise-adapted models have AUROCs of 0.72 and 0.87, respectively, when evaluated on ECGs augmented with portable ECG device noise at an SNR of 0.5. This approach represents a novel strategy for the development of wearable-adapted tools from clinical ECG repositories.
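The augmentation described above, Gaussian noise restricted to a distinct frequency range and scaled to a controlled signal-to-noise ratio, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy sine wave, sampling rate, and band edges are assumptions:

```python
import numpy as np

def add_band_limited_noise(ecg, fs, band, snr, rng):
    """Add Gaussian noise restricted to a frequency band, scaled so
    that signal power divided by noise power equals the target SNR."""
    noise = rng.standard_normal(ecg.shape)
    # Zero out spectral components outside [band[0], band[1]] Hz.
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(ecg.size, d=1.0 / fs)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    noise = np.fft.irfft(spectrum, n=ecg.size)
    # Rescale the noise to hit the requested signal-to-noise ratio.
    scale = np.sqrt(np.mean(ecg ** 2) / (snr * np.mean(noise ** 2)))
    return ecg + scale * noise

rng = np.random.default_rng(0)
fs = 500  # Hz, an assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)  # toy stand-in for an ECG trace
noisy = add_band_limited_noise(ecg, fs, band=(50.0, 100.0), snr=0.5, rng=rng)
```

Repeating this with different `band` arguments would emulate the four frequency ranges described above, and sweeping `snr` reproduces evaluation at multiple signal-to-noise ratios.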

11.
JAMA Netw Open ; 6(6): e2316634, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37285157

ABSTRACT

Importance: Wearable devices may be able to improve cardiovascular health, but the current adoption of these devices could be skewed in ways that could exacerbate disparities. Objective: To assess sociodemographic patterns of use of wearable devices among adults with or at risk for cardiovascular disease (CVD) in the US population in 2019 to 2020. Design, Setting, and Participants: This population-based cross-sectional study included a nationally representative sample of US adults from the Health Information National Trends Survey (HINTS). Data were analyzed from June 1 to November 15, 2022. Exposures: Self-reported CVD (history of heart attack, angina, or congestive heart failure) and CVD risk factors (≥1 risk factor among hypertension, diabetes, obesity, or cigarette smoking). Main Outcomes and Measures: Self-reported access to wearable devices, frequency of use, and willingness to share health data with clinicians (referred to as health care providers in the survey). Results: Of the overall 9303 HINTS participants representing 247.3 million US adults (mean [SD] age, 48.8 [17.9] years; 51% [95% CI, 49%-53%] women), 933 (10.0%) representing 20.3 million US adults had CVD (mean [SD] age, 62.2 [17.0] years; 43% [95% CI, 37%-49%] women), and 5185 (55.7%) representing 134.9 million US adults were at risk for CVD (mean [SD] age, 51.4 [16.9] years; 43% [95% CI, 37%-49%] women). In nationally weighted assessments, an estimated 3.6 million US adults with CVD (18% [95% CI, 14%-23%]) and 34.5 million at risk for CVD (26% [95% CI, 24%-28%]) used wearable devices, compared with an estimated 29% (95% CI, 27%-30%) of the overall US adult population.
After accounting for differences in demographic characteristics, cardiovascular risk factor profile, and socioeconomic features, older age (odds ratio [OR], 0.35 [95% CI, 0.26-0.48]), lower educational attainment (OR, 0.35 [95% CI, 0.24-0.52]), and lower household income (OR, 0.42 [95% CI, 0.29-0.60]) were independently associated with lower use of wearable devices in US adults at risk for CVD. Among wearable device users, a smaller proportion of adults with CVD reported using wearable devices every day (38% [95% CI, 26%-50%]) compared with overall (49% [95% CI, 45%-53%]) and at-risk (48% [95% CI, 43%-53%]) populations. Among wearable device users, an estimated 83% (95% CI, 70%-92%) of US adults with CVD and 81% (95% CI, 76%-85%) at risk for CVD favored sharing wearable device data with their clinicians to improve care. Conclusions and Relevance: Among individuals with or at risk for CVD, fewer than 1 in 4 use wearable devices, with only half of those reporting consistent daily use. As wearable devices emerge as tools that can improve cardiovascular health, the current use patterns could exacerbate disparities unless there are strategies to ensure equitable adoption.


Subject(s)
Cardiovascular Diseases , Hypertension , Adult , Humans , Female , Middle Aged , Male , Cardiovascular Diseases/epidemiology , Cross-Sectional Studies , Hypertension/epidemiology , Risk Factors , Obesity/epidemiology