Results 1 - 20 of 125
1.
Article in English | MEDLINE | ID: mdl-38827063

ABSTRACT

Large Language Models (LLMs) have demonstrated immense potential in artificial intelligence across various domains, including healthcare. However, their efficacy is hindered by the need for high-quality labeled data, which is often expensive and time-consuming to create, particularly in low-resource domains like healthcare. To address these challenges, we propose a crowdsourcing (CS) framework enriched with quality control measures at the pre-, real-time-, and post-data gathering stages. We evaluated the effect of these data-quality enhancements on an LLM (Bio-BERT) used to predict autism-related symptoms. The results show that real-time quality control improves data quality by 19% compared to pre-quality control. Fine-tuning Bio-BERT using crowdsourced data generally increased recall compared to the Bio-BERT baseline but lowered precision. Our findings highlight the potential of crowdsourcing and quality control in resource-constrained environments and offer insights into optimizing healthcare LLMs for informed decision-making and improved patient care.

2.
Blood Transfus ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38557324

ABSTRACT

BACKGROUND: Pediatric patient blood management (PBM) programs require continuous surveillance of errors and near misses. However, most PBM programs rely on passive surveillance methods. Our objective was to develop and evaluate a set of automated trigger tools for active surveillance of pediatric PBM errors. MATERIALS AND METHODS: We used the Rand-UCLA method with an expert panel of pediatric transfusion medicine specialists to identify and prioritize candidate trigger tools for all transfused blood products. We then iteratively developed automated queries of electronic health record (EHR) data for the highest priority triggers. Two physicians manually reviewed a subset of cases meeting trigger tool criteria and estimated each trigger tool's positive predictive value (PPV). We then estimated the rate of PBM errors, whether they reached the patient, and adverse events for each trigger tool across four years in a single pediatric health system. RESULTS: We identified 28 potential triggers for pediatric PBM errors and developed 5 automated trigger tools (positive patient identification, missing irradiation, unwashed products despite prior anaphylaxis, transfusion lasting >4 hours, over-transfusion by volume). The PPV for ordering errors ranged from 38-100%. The most frequently detected near miss event reaching patients was first transfusions without positive patient identification (estimate 303, 95% CI: 288-318 per year). The only adverse events detected were from over-transfusions by volume, including 4 adverse events detected on manual review that had not been reported in passive surveillance systems. DISCUSSION: It is feasible to automatically detect pediatric PBM errors using existing data captured in the EHR that enable active surveillance systems. Over-transfusions may be one of the most frequent causes of harm in the pediatric environment.
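The per-tool positive predictive value reported above is the fraction of trigger-flagged cases that manual review confirms as true errors. A minimal sketch (the counts below are hypothetical, not taken from the study):

```python
def ppv(true_positives: int, flagged: int) -> float:
    """Positive predictive value: fraction of trigger-flagged cases
    confirmed as true errors on manual review."""
    if flagged == 0:
        raise ValueError("no flagged cases")
    return true_positives / flagged

# Hypothetical counts for one trigger tool: 19 of 50 reviewed
# flags confirmed as ordering errors gives a PPV of 38%.
print(f"{ppv(19, 50):.0%}")
```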

3.
J Am Med Inform Assoc ; 31(6): 1313-1321, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38626184

ABSTRACT

OBJECTIVE: Machine learning (ML) is increasingly employed to diagnose medical conditions, with algorithms trained to assign a single label using a black-box approach. We created an ML approach using deep learning that generates outcomes that are transparent and in line with clinical diagnostic rules. We demonstrate our approach for autism spectrum disorders (ASD), a neurodevelopmental condition with increasing prevalence. METHODS: We used unstructured data from the Centers for Disease Control and Prevention (CDC) surveillance records, labeled by a CDC-trained clinician with ASD A1-3 and B1-4 criterion labels per sentence and with ASD case labels per record using Diagnostic and Statistical Manual of Mental Disorders (DSM-5) rules. One rule-based and three deep ML algorithms and six ensembles were compared and evaluated using a test set of 6773 sentences (N = 35 cases) set aside in advance. Criterion and case labeling were evaluated for each ML algorithm and ensemble. Case labeling outcomes were also compared with seven traditional tests. RESULTS: Performance for criterion labeling was highest for the hybrid BiLSTM ML model. The best case labeling was achieved by an ensemble of two BiLSTM ML models using a majority vote. It achieved 100% precision (or PPV), 83% recall (or sensitivity), 100% specificity, 91% accuracy, and a 0.91 F-measure. A comparison with existing diagnostic tests shows that our best ensemble was more accurate overall. CONCLUSIONS: Transparent ML is achievable even with small datasets. By focusing on intermediate steps, deep ML can provide transparent decisions. By leveraging data redundancies, ML errors at the intermediate level have a low impact on final outcomes.
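The case-labeling metrics quoted above all derive from a 2x2 confusion matrix; note that 100% precision with 83% recall implies an F-measure of about 0.91, matching the reported value. A sketch with hypothetical counts (not the study's):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard case-labeling metrics from a 2x2 confusion matrix."""
    precision = tp / (tp + fp)                 # PPV
    recall = tp / (tp + fn)                    # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, accuracy, f1

# Hypothetical counts consistent with 100% precision and ~83% recall:
p, r, sp, acc, f1 = diagnostic_metrics(tp=5, fp=0, fn=1, tn=4)
```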


Subject(s)
Algorithms , Autism Spectrum Disorder , Deep Learning , Electronic Health Records , Humans , Autism Spectrum Disorder/diagnosis , Child , United States , Natural Language Processing
4.
J Am Med Inform Assoc ; 31(4): 968-974, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38383050

ABSTRACT

OBJECTIVE: To develop and evaluate a data-driven process to generate suggestions for improving alert criteria using explainable artificial intelligence (XAI) approaches. METHODS: We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts. We applied XAI techniques to generate global explanations and local explanations. We evaluated the generated suggestions by comparing with alert's historical change logs and stakeholder interviews. Suggestions that either matched (or partially matched) changes already made to the alert or were considered clinically correct were classified as helpful. RESULTS: The final dataset included 2 991 823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest Area under the ROC Curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions. A total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues. CONCLUSION: We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach could identify improvements regarding clinical decision support (CDS) that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for the XAI: to improve quality by discovering scenarios where CDS alerts are not accepted due to workflow, education, or staffing issues.
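The area under the ROC curve reported for the LightGBM model is equivalent to the probability that a randomly chosen accepted-alert firing is scored above a randomly chosen dismissed one (the Mann-Whitney interpretation). A small rank-based sketch of that equivalence; the study itself presumably used a library routine:

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as the probability a positive is scored above a negative
    (Mann-Whitney U divided by n_pos * n_neg); ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```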


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Humans , Machine Learning , Academic Medical Centers , Educational Status
5.
Vaccine ; 41(42): 6221-6226, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37666694

ABSTRACT

BACKGROUND: Vaccinations against SARS-CoV-2 have consistently been shown to reduce the risk of severe COVID-19 disease. However, uptake of boosters has stalled in the United States at less than 20% of the eligible population. The objective of this study was to assess the reasons for not having obtained a bivalent booster within an existing COVID-19 cohort. METHODS: A total of 2196 adult participants from the Arizona CoVHORT, a population-based cohort in the United States established in May 2020, who had received at least one dose of the COVID-19 vaccine, responded to surveys administered between February 13 and March 29, 2023, querying receipt of a bivalent booster and, if not, the reasons for not receiving it. Descriptive statistics were employed, including frequencies of responses by participant characteristics, and multivariable logistic regression was used to assess the association between participant characteristics and selected themes for not having received the bivalent booster. RESULTS: The most commonly reported reason for not having been boosted was a prior SARS-CoV-2 infection (39.5%), followed by concern about vaccine side effects (31.5%), believing that the booster would not provide additional protection over the vaccines already received (28.6%), and concern about booster safety (23.4%) or that it would not protect from SARS-CoV-2 infection (23.1%). For themes related to reasons for not having been boosted, those 60 years of age or older were less likely to select items related to knowledge (OR: 0.24; 95% CI: 0.11-0.55) or logistical concerns (OR: 0.09; 95% CI: 0.03-0.30) about the vaccine; while those reporting Hispanic ethnicity were more likely to convey concerns about logistics than those reporting non-Hispanic ethnicity (OR: 2.15; 95% CI: 1.08-4.30).
Finally, compared to college graduates, those with some college or technical school were significantly more likely to select items related to the risks and benefits of the bivalent vaccine not being clear as reasons for not having been boosted (OR: 2.41; 95% CI: 1.69-3.43). CONCLUSIONS: Improvement in booster uptake is necessary for optimal public health in the United States. The development of vaccines against SARS-CoV-2 occurred at an unprecedented speed, but vaccine uptake remains among the greatest current public health challenges as updated boosters continue to be developed and made available to the public. Interventions to improve vaccination rates require a variety of approaches.
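The odds ratios and confidence intervals above come from exponentiating logistic-regression coefficients. A minimal Wald-interval sketch; the coefficient and standard error below are hypothetical, chosen only to roughly match the reported logistics OR of 2.15 (1.08-4.30):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient (not from the study):
or_, lo, hi = odds_ratio_ci(beta=0.766, se=0.353)
```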

6.
J Clin Anesth ; 91: 111272, 2023 12.
Article in English | MEDLINE | ID: mdl-37774648

ABSTRACT

STUDY OBJECTIVE: To develop an algorithm to predict intraoperative Red Blood Cell (RBC) transfusion from preoperative variables contained in the electronic medical record of our institution, with the goal of guiding type and screen ordering. DESIGN: Machine learning model development on retrospective single-center hospital data. SETTING: Preoperative period and operating room. PATIENTS: The study included patients ≥18 years old who underwent surgery during 2019-2022 and excluded those who refused transfusion, underwent emergency surgery, or underwent surgery for organ donation after cardiac or brain death. INTERVENTION: Prediction of intraoperative transfusion vs. no intraoperative transfusion. MEASUREMENTS: The outcome variable was intraoperative transfusion of RBCs. Predictive variables were surgery, surgeon, anesthesiologist, age, sex, body mass index, race or ethnicity, preoperative hemoglobin (g/dL), partial thromboplastin time (s), platelet count (×10⁹ per liter), and prothrombin time. We compared the performances of seven machine learning algorithms. After training and optimization on the 2019-2021 dataset, model thresholds were set to the current institutional performance level of sensitivity (93%). To qualify for comparison, models had to maintain clinically relevant sensitivity (>90%) when predicting on 2022 data; overall accuracy was the comparative metric. MAIN RESULTS: Of the 100,813 cases that met study criteria from 2019 to 2021, intraoperative transfusion occurred in 5488 (5.4%). The LightGBM model was the highest performing algorithm in external temporal validity experiments, with an overall accuracy of 76.1% [95% confidence interval (CI), 75.6-76.5] while maintaining clinically relevant sensitivity of 91.2% [95% CI, 89.8-92.5]. If type and screens were ordered based upon the LightGBM model, the predicted type and screen to transfusion ratio would improve from 8.4 to 5.1.
CONCLUSIONS: Machine learning approaches are feasible in predicting intraoperative transfusion from preoperative variables and may improve preoperative type and screen ordering practices when incorporated into the electronic health record.
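Setting a model threshold to a target sensitivity, as described above, amounts to choosing the largest probability cutoff that still captures the required fraction of true transfusion cases on labeled data. A sketch of that idea (not the study's code; names and data are illustrative):

```python
import math

def threshold_for_sensitivity(probs, labels, target=0.93):
    """Return the largest probability cutoff such that predicting
    'transfusion' when prob >= cutoff still meets the target sensitivity."""
    positives = sorted((p for p, y in zip(probs, labels) if y == 1),
                       reverse=True)
    if not positives:
        raise ValueError("no positive cases")
    # Need at least ceil(target * n_pos) positives at or above the cutoff.
    k = math.ceil(target * len(positives))
    return positives[k - 1]
```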


Subject(s)
Blood Transfusion , Erythrocyte Transfusion , Humans , Adolescent , Retrospective Studies , Prothrombin Time , Machine Learning
7.
Epilepsia ; 64(12): 3365-3376, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37585367

ABSTRACT

OBJECTIVE: Genetic variants in the SCN8A gene underlie a wide spectrum of neurodevelopmental phenotypes including several distinct seizure types and a host of comorbidities. One of the major challenges facing clinicians and researchers alike is to identify genotype-phenotype (G-P) correlations that may improve prognosis, guide treatment decisions, and lead to precision medicine approaches. METHODS: We investigated G-P correlations among 270 participants harboring gain-of-function (GOF) variants enrolled in the International SCN8A Registry, a patient-driven online database. We performed correlation analyses stratifying the cohort by clinical phenotypes to identify diagnostic features that differ among patients with varying levels of clinical severity, and that differ among patients with distinct GOF variants. RESULTS: Our analyses confirm positive correlations between age at seizure onset and developmental skills acquisition (developmental quotient), rate of seizure freedom, and percentage of cohort with developmental delays, and identify negative correlations with number of current and weaned antiseizure medications. This set of features is more detrimentally affected in individuals with a priori expectations of more severe clinical phenotypes. Our analyses also reveal a significant correlation between a severity index combining clinical features of individuals with a particular highly recurrent variant and an independent electrophysiological score assigned to each variant based on in vitro testing. SIGNIFICANCE: This is one of the first studies to identify statistically significant G-P correlations for individual SCN8A variants with GOF properties. The results suggest that individual GOF variants (1) are predictive of clinical severity for individuals carrying those variants and (2) may underlie distinct clinical phenotypes of SCN8A disease, thus helping to explain the wide SCN8A-related epilepsy disease spectrum. 
These results also suggest that certain features present at initial diagnosis are predictive of clinical severity, and with more informed treatment plans, may serve to improve prognosis for patients with SCN8A GOF variants.
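The correlation between the clinical severity index and the in vitro electrophysiological score described above is a standard paired-observation calculation. A plain Pearson-r sketch (the study's exact correlation method is not specified here, so this is illustrative):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired observations, e.g. a clinical
    severity index vs. an in vitro electrophysiological score per variant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```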


Subject(s)
Epilepsy , Gain of Function Mutation , Humans , Epilepsy/diagnosis , Epilepsy/genetics , Epilepsy/drug therapy , Seizures/genetics , Seizures/drug therapy , Phenotype , NAV1.6 Voltage-Gated Sodium Channel/genetics
9.
Birth Defects Res ; 115(17): 1608-1618, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37578352

ABSTRACT

BACKGROUND: Research on the association between neighborhood social deprivation and health among adults with congenital heart defects (CHD) is sparse. METHODS: We evaluated the associations between neighborhood social deprivation and health care utilization, disability, and comorbidities using the population-based 2016-2019 Congenital Heart Survey To Recognize Outcomes, Needs, and well-beinG (CH STRONG) of young adults. Participants were identified from active birth defect surveillance systems in three U.S. sites and born with CHD between 1980 and 1997. We linked census tract-level 2017 American Community Survey information on median household income, percent of ≥25-year-olds with more than a high school degree, percent of ≥16-year-olds who are unemployed, and percent of families with children <18 years old living in poverty to survey data and used these variables to calculate a summary neighborhood social deprivation z-score, divided into tertiles. Adjusted prevalence ratios (aPR) and 95% confidence intervals (CI) derived from a log-linear regression model with a Poisson distribution estimated the association between tertile of neighborhood social deprivation and healthcare utilization in the previous year (no encounters, 1 and ≥2 emergency room [ER] visits, and hospital admission), ≥1 disability, and ≥1 comorbidities. We accounted for age, place of birth, sex at birth, presence of chromosomal anomalies, and CHD severity in all models, and, additionally, educational attainment and work status in all models except disability. RESULTS: Of the 1435 adults with CHD, 43.8% were 19-24 years old, 54.4% were female, 69.8% were non-Hispanic White, and 33.7% had a severe CHD.
Compared to the least deprived tertile, respondents in the most deprived tertile were more likely to have no healthcare visit (aPR: 1.5 [95% CI: 1.1, 2.1]), ≥2 ER visits (1.6 [1.1, 2.3]), or hospitalization (1.6 [1.1, 2.3]) in the previous 12 months, a disability (1.2 [1.0, 1.5]), and ≥1 cardiac comorbidities (1.8 [1.2, 2.7]). CONCLUSIONS: Neighborhood social deprivation may be a useful metric to identify patients needing additional resources and referrals.
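Dividing a summary deprivation z-score into tertiles, as in the methods above, can be done by rank. A simple illustrative sketch (the study's exact cut-point rules are not given here):

```python
def deprivation_tertiles(index_values):
    """Assign tertiles (1 = least deprived, 3 = most deprived) to a list
    of summary deprivation z-scores by rank, as a simple illustration."""
    order = sorted(range(len(index_values)), key=lambda i: index_values[i])
    n = len(index_values)
    tertile = [0] * n
    for rank, i in enumerate(order):
        tertile[i] = min(3, rank * 3 // n + 1)
    return tertile
```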


Subject(s)
Heart Defects, Congenital , Child , Infant, Newborn , Humans , Female , Young Adult , Adolescent , Adult , Male , Heart Defects, Congenital/epidemiology , Comorbidity , Surveys and Questionnaires , Patient Acceptance of Health Care , Social Deprivation
10.
J Registry Manag ; 50(1): 4-10, 2023.
Article in English | MEDLINE | ID: mdl-37577282

ABSTRACT

Genetic variants in the SCN8A gene underlie a wide spectrum of neurodevelopmental phenotypes that range from severe epileptic encephalopathy to benign familial infantile epilepsy to neurodevelopmental delays with or without seizures. A host of additional comorbidities also contribute to the phenotypic spectrum. As a result of the recent identification of the genetic etiology and the length of time it often takes to diagnose patients, little data are available on the natural history of these conditions. The International SCN8A Patient Registry was developed in 2015 to fill gaps in understanding the spectrum of the disease and its natural history, as well as the lived experiences of individuals with SCN8A syndrome. Another goal of the registry is to collect longitudinal data from participants on a regular basis. In this article, we describe the construction and structure of the International SCN8A Patient Registry, present the type of information available, and highlight particular analyses that demonstrate how registry data can provide insights into the clinical management of SCN8A syndrome.


Subject(s)
Epilepsy, Generalized , Epilepsy , Registries , Humans , Epilepsy/epidemiology , Epilepsy/genetics , Epilepsy/therapy , NAV1.6 Voltage-Gated Sodium Channel/genetics , Phenotype , Seizures/genetics , Syndrome
11.
J Am Assoc Nurse Pract ; 35(10): 620-628, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37471528

ABSTRACT

BACKGROUND: An estimated 1.4 million adults in the United States have congenital heart disease (CHD). As this population grows and many pursue postsecondary education, these adults' health care needs and concerns should be at the forefront for providers, particularly nurse practitioners, at college health centers. PURPOSE: To understand how college health centers and providers identify and manage the care of students with chronic conditions to further support their health care transition, with a focus on students with CHD. METHODOLOGY: Qualitative key informant interviews were performed with providers at five college health centers to understand the processes in place and the challenges health care providers on college campuses face when caring for students with CHD. RESULTS: Most of the college health centers did not have formalized processes in place to care for these students. Although many felt that they had the capabilities in their health centers to manage these students' maintenance/preventive care needs, fewer felt comfortable with their urgent or emergent care needs. The onus was often on students or parents/guardians to initiate these transitions. CONCLUSIONS: This study highlights some challenges to providing care to students with chronic conditions like CHD. More collaborative relationships with specialists may be critical to ensuring that all the care needs of students with chronic conditions are met on college campuses. IMPLICATIONS: Nurse practitioners, who often staff these clinics, are well positioned to support this transition onto campuses and lead the development of processes that identify these students, ease care management transitions, and ensure easy provider communication, allowing students with chronic diseases to thrive on campus.


Subject(s)
Heart Defects, Congenital , Transition to Adult Care , Humans , Young Adult , United States , Students , Universities , Heart Defects, Congenital/therapy , Chronic Disease
12.
Circulation ; 148(7): 575-588, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37401461

ABSTRACT

BACKGROUND: Limited population-based information is available on long-term survival of US individuals with congenital heart defects (CHDs). Therefore, we assessed patterns in survival from birth until young adulthood (ie, 35 years of age) and associated factors among a population-based sample of US individuals with CHDs. METHODS: Individuals born between 1980 and 1997 with CHDs identified in 3 US birth defect surveillance systems were linked to death records through 2015 to identify those deceased and the year of their death. Kaplan-Meier survival curves, adjusted risk ratios (aRRs) for infant mortality (ie, death during the first year of life), and Cox proportional hazard ratios for survival after the first year of life (aHRs) were used to estimate the probability of survival and associated factors. Standardized mortality ratios compared infant mortality, >1-year mortality, >10-year mortality, and >20-year mortality among individuals with CHDs with general population estimates. RESULTS: Among 11 695 individuals with CHDs, the probability of survival to 35 years of age was 81.4% overall, 86.5% among those without co-occurring noncardiac anomalies, and 92.8% among those who survived the first year of life. Characteristics associated with both infant mortality and reduced survival after the first year of life, respectively, included severe CHDs (aRR=4.08; aHR=3.18), genetic syndromes (aRR=1.83; aHR=3.06) or other noncardiac anomalies (aRR=1.54; aHR=2.53), low birth weight (aRR=1.70; aHR=1.29), and Hispanic (aRR=1.27; aHR=1.42) or non-Hispanic Black (aRR=1.43; aHR=1.80) maternal race and ethnicity. 
Individuals with CHDs had higher infant mortality (standardized mortality ratio=10.17), >1-year mortality (standardized mortality ratio=3.29), and >10-year and >20-year mortality (both standardized mortality ratios ≈1.5) than the general population; however, after excluding those with noncardiac anomalies, >1-year mortality for those with nonsevere CHDs and >10-year and >20-year mortality for those with any CHD were similar to the general population. CONCLUSIONS: Eight in 10 individuals with CHDs born between 1980 and 1997 survived to 35 years of age, with disparities by CHD severity, noncardiac anomalies, birth weight, and maternal race and ethnicity. Among individuals without noncardiac anomalies, those with nonsevere CHDs experienced similar mortality between 1 and 35 years of age as in the general population, and those with any CHD experienced similar mortality between 10 and 35 years of age as in the general population.
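A standardized mortality ratio, as reported above, is observed deaths in the cohort divided by the deaths expected if general-population rates applied. A minimal sketch (the counts in the comment are hypothetical, not taken from the study):

```python
def smr(observed_deaths: float, expected_deaths: float) -> float:
    """Standardized mortality ratio: observed deaths in the CHD cohort
    divided by deaths expected from general-population rates."""
    if expected_deaths <= 0:
        raise ValueError("expected deaths must be positive")
    return observed_deaths / expected_deaths

# Hypothetical illustration: 61 observed vs 6 expected infant deaths
# would give an SMR of about 10.2.
```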


Subject(s)
Heart Defects, Congenital , Infant , Humans , Young Adult , Adult , Child , Adolescent , Retrospective Studies , Heart Defects, Congenital/epidemiology , Infant Mortality , Ethnicity , Hispanic or Latino
13.
J Clin Neuromuscul Dis ; 24(4): 171-187, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37219861

ABSTRACT

The diagnosis of Duchenne and Becker muscular dystrophy (DBMD) is made by genetic testing in approximately 95% of cases. Although specific mutations can be associated with skeletal muscle phenotype, pulmonary and cardiac comorbidities (leading causes of death in Duchenne) have not been associated with Duchenne muscular dystrophy mutation type or location and vary within families. Therefore, identifying predictors for phenotype severity beyond frameshift prediction is important clinically. We performed a systematic review assessing research related to genotype-phenotype correlations in DBMD. While there are severity differences across the spectrum and within mild and severe forms of DBMD, few protective or exacerbating mutations within the dystrophin gene were reported. Except for intellectual disability, clinical test results reporting genotypic information are insufficient for clinical prediction of severity and comorbidities, and the predictive validity is too low to be useful when advising families. Including expanded information coupled with proposed severity predictions in clinical genetic reports for DBMD is critical for improving anticipatory guidance.


Subject(s)
Genetic Testing , Muscular Dystrophy, Duchenne , Humans , Mutation , Phenotype , Muscle, Skeletal
14.
Obstet Med ; 16(1): 17-22, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37139503

ABSTRACT

Background: Women with congenital heart disease (CHD) are surviving into adulthood, with more undergoing pregnancy. Methods: Retrospective review of the Vizient database from 2017-2019 for women 15-44 years old with moderate, severe, or no CHD and vaginal delivery or caesarean section. Demographics, hospital outcomes, and costs were compared. Results: There were 2,469,117 admissions: 2,467,589 with no CHD, 1277 with moderate CHD, and 251 with severe CHD. Both CHD groups were younger than the no-CHD group; the no-CHD group included fewer women of white race/ethnicity, and both CHD groups included more women with Medicare than the no-CHD group. With increasing CHD severity there was an increase in length of stay, ICU admission rates, and costs. There were also higher rates of complications, mortality, and caesarean section in the CHD groups. Conclusion: Pregnant women with CHD have more problematic pregnancies, and understanding this impact is important to improve management and decrease healthcare utilization.

15.
Am J Cardiol ; 197: 42-45, 2023 06 15.
Article in English | MEDLINE | ID: mdl-37148718

ABSTRACT

Many of the estimated 1.4 million adults with congenital heart defects (CHDs) in the United States are lost to follow-up (LTF) despite recommendations for ongoing cardiology care. Using 2016 to 2019 CH STRONG (Congenital Heart Survey To Recognize Outcomes, Needs, and well-beinG) data, we describe cardiac care among community-based adults with CHD, born in 1980 to 1997, identified through state birth defects registries. Our estimates of LTF were standardized to the CH STRONG eligible population and likely more generalizable to adults with CHD than clinic-based data. Half of our sample were LTF and more than 45% had not received cardiology care in over 5 years. Of those who received care, only 1 in 3 saw an adult CHD physician at their last encounter. Not knowing they needed to see a cardiologist, being told they no longer needed cardiology care, and feeling "well" were the top reasons for LTF, and only half of respondents report doctors discussing the need for cardiac follow-up.


Subject(s)
Cardiology , Heart Defects, Congenital , Humans , Adult , United States/epidemiology , Follow-Up Studies , Heart Defects, Congenital/epidemiology , Heart Defects, Congenital/therapy , Surveys and Questionnaires , Registries
18.
Ann Epidemiol ; 79: 39-43, 2023 03.
Article in English | MEDLINE | ID: mdl-36669598

ABSTRACT

PURPOSE: Autism spectrum disorder (ASD) prevalence information is necessary for identifying community needs such as addressing disparities in identification and services. METHODS: Seven Autism and Developmental Disabilities Monitoring (ADDM) Network sites participated in a pilot project to link statewide health and education data to generate statewide and county-level prevalence estimates for a broader age range for their states for the first time. RESULTS: Statewide prevalence of ASD for ages 3-21 years in 2018 ranged from 1.5% in Tennessee and Wisconsin to 2.3% in Arizona. The median county-level prevalence of ASD was 1.4% of residents ages 3-21 years. More boys than girls had ASD at all sites, and prevalence was lower among non-Hispanic Black, Hispanic, Asian/Pacific Islander, and American Indian/Alaska Native residents compared to non-Hispanic White residents at most sites. ASD prevalence estimates for children aged 8 years were similar to 2018 ADDM Network estimates that used record review to provide more in-depth information, but showed greater variation for children aged 4 years. CONCLUSIONS: Linkage of statewide data sets provides less detailed but actionable local information when more resource-intensive methods are not possible.


Subject(s)
Autism Spectrum Disorder , Male , Child , Female , Humans , United States/epidemiology , Autism Spectrum Disorder/epidemiology , Prevalence , Pilot Projects , Population Surveillance/methods , Ethnicity
19.
PLoS One ; 18(1): e0277971, 2023.
Article in English | MEDLINE | ID: mdl-36649238

ABSTRACT

BACKGROUND: In-shoe pressure measurement systems are used in research and clinical practice to quantify areas and levels of pressure underfoot whilst shod. Their validity and reliability across different pressures, durations of load, and contact areas determine their appropriateness to address different research questions or clinical assessments. XSENSOR is a relatively new pressure measurement device and warrants assessment. RESEARCH QUESTION: Does the XSENSOR in-shoe pressure measurement device have sufficient validity and reliability for clinical assessments in diabetes? METHODS: Two XSENSOR insoles were examined across two days with two lab-based protocols to assess regional and whole insole loading. The whole insole protocol applied 50-600 kPa of pressure across the insole surface for 30 seconds, with measurements at 0, 2, 10 and 30 seconds. The regional protocol used two cylinders (3.14 and 15.9 cm2 surface area) to apply pressures of 50, 110 and 200 kPa to each insole. Three trials of all conditions were averaged. The validity (% difference and Root Mean Square Error: RMSE) and repeatability (Bland-Altman, Intra-Class Correlation Coefficient: ICC) of the target pressures (whole insole) and contact area (regional) were outcome variables. RESULTS: Regional results demonstrated mean contact area errors of less than 1 cm2 for both insoles and high repeatability (≥0.939). Whole insole measurement error was higher at higher pressures, but average peak and mean pressure errors remained below 10%. Reliability error was 3-10% for peak pressure, within the 15% defined as an analytical goal. SIGNIFICANCE: Errors associated with the quantification of pressure are low enough that they are unlikely to influence the assessment of interventions or screening of the at-risk foot considering clinically relevant thresholds.
Contact area is accurate due to a high spatial resolution and the repeatability of the XSENSOR system likely makes it appropriate for clinical applications that require multiple assessments.
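The RMSE validity metric above compares measured insole pressures against the applied reference pressures. A minimal sketch (the readings in the test are illustrative, not the study's data):

```python
import math

def rmse(measured, reference):
    """Root mean square error between in-shoe pressure readings (kPa)
    and the applied reference pressures (kPa)."""
    n = len(measured)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)
```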


Subject(s)
Diabetes Mellitus , Shoes , Humans , Reproducibility of Results , Pressure , Foot , Equipment Design
20.
J Pediatr Intensive Care ; 11(4): 341-348, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36388079

ABSTRACT

We augmented our standard extracorporeal membrane oxygenation laboratory protocol to include antifactor Xa assays, thromboelastography, and antithrombin measurements. We performed a retrospective chart review to determine outcomes for patients placed on extracorporeal membrane oxygenation (ECMO) prior to and after the initiation of our anticoagulation laboratory protocol. A total of 663 consecutive ECMO runs were evaluated from January 1, 2007 to June 30, 2018. Of these patients, 252 were on ECMO prior to initiation of the anticoagulation laboratory protocol on September 1, 2011, and 411 patients were on ECMO after initiation of the protocol. There were no major changes to our ECMO circuit or to our transfusion threshold during this continuous study period. Transfusion utilization data revealed statistically significant decreases in the use of almost all blood components and a 31% reduction in inflation-adjusted blood component acquisition costs, for total blood product cost savings of $309,905 per year. In addition, there was an increase in survival to hospital discharge from 45 to 56% associated with the initiation of the protocol (p = 0.004). Our data indicate that implementation of a standardized ECMO anticoagulation protocol, which titrates unfractionated heparin infusions based on antifactor Xa assays, is associated with reduced blood product utilization, significant blood product cost savings, and increased patient survival. Future prospective evaluation is needed to establish an antifactor Xa assay-driven ECMO anticoagulation strategy as both clinically superior and cost-effective.
