Results 1 - 20 of 40
1.
AJPM Focus ; 3(1): 100156, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38149079

ABSTRACT

Introduction: Diabetes is a leading risk factor for COVID-19, disproportionately impacting marginalized populations. We analyzed racial/ethnic differences in glycemic control among patients who tested positive for SARS-CoV-2 in the Baltimore-Washington, District of Columbia region. Methods: Glycemic control measured by HbA1c was compared by race and ethnicity among patients with a positive SARS-CoV-2 test at the Johns Hopkins Health System between March 1, 2020, and March 31, 2022. Risk factors associated with poor glycemic control (HbA1c≥8%) were identified using logistic regression. Results: Black, Latino, and Asian patients had higher rates of prediabetes (HbA1c=5.7%-6.49%) and diabetes (HbA1c≥6.5%) than non-Hispanic White patients. Among patients with diabetes, poor glycemic control (HbA1c≥8%) was significantly more common among young adults (aged ≤44 years), Latino patients (AOR=1.5; 95% CI=1.1, 1.9), Black patients (AOR=1.2; 95% CI=1.0, 1.5), uninsured patients (AOR=1.5; 95% CI=1.2, 1.9), and those with limited English proficiency (AOR=1.3; 95% CI=1.0, 1.6) or without a primary care physician (AOR=1.6; 95% CI=1.3, 2.1). Conclusions: Disparities in glycemic control among patients who tested positive for SARS-CoV-2 were associated with underlying structural factors such as access to care, health insurance, and language proficiency. There is a need to implement accessible, culturally and linguistically appropriate preventive and primary care programs to engage socioeconomically disadvantaged populations in diabetes screening and care.
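A minimal sketch of the kind of logistic-regression analysis this abstract describes, assuming a hypothetical patient-level extract; the file name, column names, and covariate set below are illustrative, not the study's actual variable definitions.

```python
# Hedged sketch: adjusted odds ratios (AORs) for poor glycemic control
# (HbA1c >= 8%) via logistic regression. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("patients.csv")  # hypothetical patient-level extract
df["poor_control"] = (df["hba1c"] >= 8.0).astype(int)

model = smf.logit(
    "poor_control ~ C(race_ethnicity) + C(age_group) + C(insured)"
    " + C(limited_english) + C(has_pcp)",
    data=df,
).fit()

aor = np.exp(model.params)      # AOR = exp(coefficient)
ci = np.exp(model.conf_int())   # 95% CI on the odds-ratio scale
print(pd.DataFrame({"AOR": aor, "2.5%": ci[0], "97.5%": ci[1]}))
```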

2.
Nanomaterials (Basel) ; 13(8)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37110920

ABSTRACT

A numerical simulation is a valuable tool since it allows the optimization of both the time and cost of experimental processes. In addition, it enables the interpretation of measurements in complex structures, the design and optimization of solar cells, and the prediction of the optimal parameters that contribute to manufacturing a device with the best performance. In this sense, a detailed simulation study was carried out in this work using the Solar Cell Capacitance Simulator (SCAPS). In particular, we evaluate the influence of absorber and buffer thickness, absorber defect density, work function in back contact, Rs, Rsh, and carrier concentration on a CdTe/CdS cell to maximize its performance. Furthermore, the incorporation effect of ZnO:Al (TCO) and CuSCN (HTL) nanolayers was studied for the first time. As a result, the efficiency of the solar cell was increased from 16.04% to 17.74% by increasing the Jsc and Voc. This work will play an essential role in enhancing the performance of CdTe-based devices.
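SCAPS itself is a standalone simulator driven through its own interface, so its runs are not reproduced here; as a small illustration of the quantity being maximized, the sketch below applies the standard single-junction efficiency relation η = Jsc·Voc·FF/Pin with placeholder values, not the paper's simulated outputs.

```python
# Standard power-conversion-efficiency relation for a solar cell.
# Input values are illustrative placeholders, not SCAPS results.
def efficiency(jsc_mA_cm2: float, voc_V: float, ff: float,
               pin_mW_cm2: float = 100.0) -> float:
    """Efficiency (%) under AM1.5G illumination (Pin = 100 mW/cm^2)."""
    return jsc_mA_cm2 * voc_V * ff / pin_mW_cm2 * 100.0

print(efficiency(jsc_mA_cm2=26.0, voc_V=0.85, ff=0.80))  # ~17.7%
```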

3.
BMC Public Health ; 23(1): 735, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085801

ABSTRACT

BACKGROUND: This study examines the relationship between universal health coverage (UHC) and the burden of emergency diseases at a global level. METHODS: Data on Disability-Adjusted Life Years (DALYs) from emergency conditions were extracted from the Institute for Health Metrics and Evaluation (IHME) database for the years 2015 and 2019. Data on UHC, measured using two variables, 1) coverage of essential health services and 2) proportion of the population spending more than 10% of household income on out-of-pocket health care expenditure, were extracted from the World Bank Database for years preceding our outcome of interest. A linear regression was used to analyze the association between UHC variables and DALYs for emergency diseases, controlling for other variables. RESULTS: A total of 132 countries were included. The median national coverage of essential health services index was 67.5/100, while the median national prevalence of catastrophic spending in the sample was 6.74% of households. There was a strong significant relationship between health service coverage and the burden of emergency diseases, with an 11.5-point reduction in DALYs from emergency medical diseases (95% CI -14.8, -9.5) for every point increase in the coverage of essential health services index. There was no statistically significant relationship between catastrophic expenditures and the burden of emergency diseases, which may be indicative of inelastic demand in seeking services for health emergencies. CONCLUSION: Increasing the coverage of essential health services, as measured by the essential health services index, is strongly correlated with a reduction in the burden of emergency conditions. In addition, the data affirm that financial protection remains inadequate in many parts of the globe, with large numbers of households experiencing significant economic duress related to seeking healthcare. This evidence supports a strategy of strengthening UHC as a means of combating death and disability from health emergencies, as well as extending protection against impoverishment related to healthcare expenses.
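A hedged sketch of the cross-country regression described in the Methods; the file, column names, and choice of controls are assumptions, not the paper's exact specification.

```python
# Sketch: DALYs from emergency conditions regressed on UHC measures.
# All file and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

countries = pd.read_csv("uhc_dalys.csv")  # one row per country

model = smf.ols(
    "emergency_dalys ~ uhc_service_index + catastrophic_spending_pct"
    " + gdp_per_capita",  # this control variable is an assumption
    data=countries,
).fit()
# Coefficient on uhc_service_index: change in DALYs per index point
print(model.summary())
```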


Subject(s)
Emergencies , Universal Health Insurance , Humans , Acute Disease , Delivery of Health Care , Health Expenditures , Cost of Illness
4.
Am J Public Health ; 112(S9): S913-S917, 2022 11.
Article in English | MEDLINE | ID: mdl-36446060

ABSTRACT

The disproportionate impact of COVID-19 on low-income Latinos with limited access to health care services prompted the expansion of community-based COVID-19 services. From June 25, 2020, to May 20, 2021, we established a coalition of faith leaders, community organizations, and governmental organizations to implement a Spanish-language hotline and social media campaign that linked people to a COVID-19 testing site at a local church in a high-density Latino neighborhood in Baltimore, Maryland. This retrospective analysis compared the characteristics of Latinos accessing testing in community versus health care facility-based settings. (Am J Public Health. 2022;112(S9):S913-S917. https://doi.org/10.2105/AJPH.2022.307074).


Subject(s)
COVID-19 Testing , COVID-19 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Baltimore , Retrospective Studies , Hispanic or Latino
5.
Animals (Basel) ; 12(19)2022 Oct 08.
Article in English | MEDLINE | ID: mdl-36230447

ABSTRACT

This study aims to model the relationship among performance, whole body composition, and processing yield through meta-regression. Scientific papers found in Scopus and Google Scholar were included if they reported results and variability values of an actual experiment in the three mentioned groups of variables using a single broiler genetic line. Weighted mean effect sizes were determined with a random-effects model, the risk of bias was assessed, and heterogeneity was considered an indicator of usefulness. Meta-regressions considered the effect sizes of the response variable and the percent change in one or more variables as predictors. A 78-row database was built from 14 papers, including nine factors tested on 22,256 broilers. No influencing bias was found, and the data were deemed useful. Meta-regressions showed that changes in body weight gain (BWG) are inversely related to effects on feed conversion ratio (FCR) (p < 0.001) and that changes in FCR and effects on protein-to-fat gain (PFG) are directly related (p < 0.001). Changes in PFG and effects on carcass conformation or the market value of birds are directly related (p < 0.001). In conclusion, body composition predicts carcass conformation and its market value, supporting its use to predict the economic value of broilers.
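A minimal sketch of an inverse-variance-weighted meta-regression of the kind this abstract reports; the data layout and column names are hypothetical, and the paper's exact random-effects model is not reproduced.

```python
# Sketch: effect sizes weighted by inverse variance, regressed on the
# percent change in a predictor (e.g., BWG vs. FCR). Names hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

rows = pd.read_csv("effect_sizes.csv")  # one row per extracted comparison
rows["w"] = 1.0 / rows["variance"]      # inverse-variance weights

meta = smf.wls("fcr_effect ~ bwg_pct_change", data=rows,
               weights=rows["w"]).fit()
print(meta.params, meta.pvalues)  # a negative slope would mirror the
                                  # inverse BWG-FCR relation reported above
```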

6.
Article in English | MEDLINE | ID: mdl-36300152

ABSTRACT

Background: Gastrointestinal procedures generally require pre-procedural fasting to optimize sedation safety. While the American Society of Anesthesiologists (ASA) recommends no intake of clear liquids for 2-4 hours and no solid food for 6-8 hours prior to endoscopic procedures, the actual nil per os (NPO) duration for these procedures in practice is unknown. Our objective was to analyze NPO duration for patients undergoing these procedures and to determine its association with clinical and administrative variables. Methods: Inpatient data from 2016-2018 for the three procedures were extracted from electronic medical records and administrative data at a single-center tertiary academic medical center. Various statistical tests (Kruskal-Wallis, Wilcoxon, Pearson) were employed depending on the outcome type and data distribution. Results: One thousand three hundred and twenty-five esophagogastroduodenoscopies (EGDs), 753 colonoscopies, and 550 endoscopic retrograde cholangiopancreatographies (ERCPs) were included. The median NPO time for all procedures was 12.6 hours (IQR, 9.6-16.1 hours). The median NPO times were 12.6, 11.9, and 13.1 hours for EGD, colonoscopy, and ERCP, respectively. NPO duration was greater for Hispanic than non-Hispanic patients (median 13.9 vs. 12.4 hours, P=0.018). NPO duration was also associated with increased age (r=0.041, P=0.027) and inversely related to hospital occupancy (r=-0.08, P<0.0001). There were no statistically significant associations with provider type, hospital location or service, length of stay, or total number of comorbidities. Conclusions: NPO times for common inpatient gastroenterology (GI) procedures generally exceeded 12 hours, suggesting there is an opportunity to adopt changes that decrease NPO duration for low-risk patients while maintaining adherence to guidelines and best practice.
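A sketch of the three named tests on a hypothetical NPO extract; the file and column names are illustrative, not the study's.

```python
# Kruskal-Wallis, Wilcoxon rank-sum, and Pearson tests as named in the
# Methods, applied to a hypothetical inpatient NPO dataset.
import pandas as pd
from scipy import stats

npo = pd.read_csv("npo_times.csv")  # hypothetical columns used below

# NPO hours across the three procedure types (EGD/colonoscopy/ERCP)
groups = [g["npo_hours"].values for _, g in npo.groupby("procedure")]
print(stats.kruskal(*groups))

# Hispanic vs. non-Hispanic patients
h = npo.loc[npo["hispanic"] == 1, "npo_hours"]
nh = npo.loc[npo["hispanic"] == 0, "npo_hours"]
print(stats.ranksums(h, nh))

# Correlations with age and hospital occupancy
print(stats.pearsonr(npo["npo_hours"], npo["age"]))
print(stats.pearsonr(npo["npo_hours"], npo["occupancy"]))
```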

7.
J Public Health Manag Pract ; 28(6): E789-E794, 2022.
Article in English | MEDLINE | ID: mdl-36074797

ABSTRACT

BACKGROUND: Despite the disproportionate impact of COVID-19 on Latinos, there were disparities in vaccination, especially during the early phase of the COVID-19 immunization rollout. METHODS: Leveraging a community-academic partnership established to expand access to SARS-CoV-2 testing, we implemented community vaccination clinics with multifaceted outreach strategies and flexible appointments for limited English proficiency Latinos. RESULTS: Between February 26 and May 7, 2021, 2250 individuals received the first dose of COVID-19 vaccination during 18 free community events. Among them, 92.4% (95% confidence interval [CI], 91.2%-93.4%) self-identified as Hispanic, 88.7% (95% CI, 87.2%-89.9%) were limited English proficiency Spanish speakers, 23.1% (95% CI, 20.9%-25.2%) reported prior COVID-19 infection, 19.4% (95% CI, 16.9%-22.25%) had a body mass index of more than 35, 35.0% (95% CI, 32.2%-37.8%) had cardiovascular disease, and 21.6% (95% CI, 19.2%-24.0%) had diabetes. The timely second-dose completion rate was high (98.7%; 95% CI, 97.6%-99.2%) and did not vary by outreach method. CONCLUSION: A free community-based vaccination initiative expanded access for Latinos with limited English proficiency at high risk for COVID-19 during the early phase of the immunization program in the US.
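The percentages above are proportions with 95% confidence intervals; a minimal sketch of that computation follows, with illustrative counts rather than the study's data.

```python
# Proportion with a 95% Wilson confidence interval; counts are
# illustrative stand-ins, not the study's actual numbers.
from statsmodels.stats.proportion import proportion_confint

def pct_with_ci(k: int, n: int):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    return 100 * k / n, 100 * lo, 100 * hi

# e.g., timely second-dose completion among first-dose recipients
print(pct_with_ci(2221, 2250))  # hypothetical count giving ~98.7%
```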


Subject(s)
AIDS Vaccines , COVID-19 , Influenza Vaccines , Limited English Proficiency , Papillomavirus Vaccines , Respiratory Syncytial Virus Vaccines , SAIDS Vaccines , BCG Vaccine , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19 Vaccines/therapeutic use , Diphtheria-Tetanus-Pertussis Vaccine , Hispanic or Latino , Humans , Measles-Mumps-Rubella Vaccine , RNA, Viral , SARS-CoV-2 , Vaccination
8.
West J Emerg Med ; 23(2): 115-123, 2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35302441

ABSTRACT

INTRODUCTION: Electronic influenza surveillance systems aid in health surveillance and clinical decision-making within the emergency department (ED). While major advances have been made in integrating clinical decision-making tools within the electronic health record (EHR), tools for sharing surveillance data are often piecemeal, requiring data downloads and manual uploads to shared servers and delaying the time from data acquisition to end-user. Real-time surveillance can help both clinicians and public health professionals recognize circulating influenza earlier in the season and provide ongoing situational awareness. METHODS: We created a prototype, cloud-based, real-time reporting system in two large, academically affiliated EDs that streamed continuous data to a web-based dashboard within hours of specimen collection during the influenza season. Data included influenza test results (positive or negative) coupled with test date, test instrument geolocation, and basic patient demographics. The system provided immediate reporting to frontline clinicians and to local, state, and federal health department partners. RESULTS: We describe the process, infrastructure requirements, and challenges of developing and implementing the prototype system. Key process-related requirements for system development included merging data from the molecular test (GeneXpert) with the hospitals' EHRs, securing data, authorizing/authenticating users, providing permissions for data access, and refining visualizations for end-users. CONCLUSION: In this case study, we effectively integrated multiple data systems at four distinct hospital EDs, relaying data in near real time to hospital-based staff and local and national public health entities, to provide laboratory-confirmed influenza test results during the 2014-2015 influenza season. Future innovations need to focus on integrating the dashboard within the EHR and clinical decision tools.


Subject(s)
Influenza, Human , Cloud Computing , Emergency Service, Hospital , Humans , Influenza, Human/diagnosis , Influenza, Human/epidemiology , Population Surveillance/methods , Seasons
9.
Poult Sci ; 101(3): 101671, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35066383

ABSTRACT

The study of neurochemical-based interkingdom signaling and its impact on host-microbe interaction is called microbial endocrinology. Neurochemicals play a recognized role in determining bacterial colonization and interaction with the gut epithelium. While much attention has been devoted to the determination of neurochemical concentrations in the mammalian gut to better understand tissue and region-specific microbial endocrinology-based mechanisms of host-microbe interaction, little is known regarding the biogeography of neurochemicals in the avian gut. Greater resolution of avian gut neurochemical concentrations is needed especially as recent microbial endocrinology-based investigations into bacterial foodborne pathogen colonization of the chicken gut have demonstrated neurochemicals to affect Campylobacter jejuni and Salmonella spp. in vivo and in vitro. The aim of the present study was to determine the concentrations of stress-related neurochemicals in the tissue and luminal content of the duodenum, jejunum, ileum, cecum, and colon of the broiler intestinal tract, and to investigate if this biogeography changes with age of the bird. While all neurochemicals measured were detected in the intestinal tract, many displayed differences in regional concentrations. Whereas the catecholamine norepinephrine was detected in each region of the intestinal tract, epinephrine was present only in the cecum and colon. Likewise, dopamine, and its metabolite 3,4-dihydroxyphenylacetic acid were found in the greatest quantities in the cecum and colon. Serotonin and histamine were identified in each gut region. Region-specific age-related changes were observed (P < 0.05) for serotonin, its metabolite 5-hydroxyindole acetic acid as well as for histamine. Several neurochemicals, including norepinephrine, were found in the contents of each gut region. Epinephrine was not detected in the gut content of any region. Salsolinol, a microbial-produced neuroactive compound was detected in the gut content but not in tissue. Together, our data establish a neurochemical biogeography of the broiler chicken intestinal tract. By providing researchers with a region-by-region map of in vivo gut neurochemical concentrations of a modern broiler chicken breed, this neurochemical map is expected to inform future investigations that seek to utilize avian enteric neurochemistry.


Subject(s)
Campylobacter Infections , Campylobacter jejuni , Gastrointestinal Microbiome , Animals , Campylobacter Infections/microbiology , Campylobacter Infections/veterinary , Cecum/microbiology , Chickens/microbiology
10.
Health Care Manag Sci ; 25(1): 89-99, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34559339

ABSTRACT

Proactive and objective regulatory risk management of ongoing clinical trials is limited, especially when it involves the safety of the trial. We seek to prospectively evaluate the risk of facing adverse outcomes from standardized and routinely collected protocol data. We conducted a retrospective cohort study of 2860 Phase 2 and Phase 3 trials that were started and completed between 1993 and 2017 and documented in ClinicalTrials.gov. Adverse outcomes considered in our work include Serious and Non-Serious adverse events as per the ClinicalTrials.gov definitions. Random-forest-based prediction models were created to determine a trial's risk of adverse outcomes based on protocol data that is available before the start of trial enrollment. A trial's risk is defined by dichotomous (classification) and continuous (log-odds) risk scores. The classification-based prediction models had an area under the curve (AUC) ranging from 0.865 to 0.971, and the continuous-score-based models indicate a rank correlation of 0.6-0.66 (with p-values < 0.001), thereby demonstrating improved identification of the risk of adverse outcomes. Whereas related frameworks highlight the prediction benefits of incorporating data that is highly context-specific, our results indicate that Adverse Event (AE) risks can be reliably predicted through a framework with mild data requirements. We propose three potential applications in leading regulatory remits, highlighting opportunities to support regulatory oversight and informed consent decisions.
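A hedged sketch of the modeling setup this abstract describes: a random forest fit on pre-enrollment protocol fields, reporting both a classification AUC and a continuous log-odds risk score. The file, feature set, and split are hypothetical.

```python
# Sketch: random-forest risk prediction with AUC and a log-odds score.
# File and column names are hypothetical; features assumed numeric.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

trials = pd.read_csv("trial_protocols.csv")
X = trials.drop(columns=["serious_ae"])  # pre-enrollment protocol fields
y = trials["serious_ae"]                 # 1 if a serious AE was reported

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

p = rf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, p))              # dichotomous score
log_odds = np.log((p + 1e-12) / (1 - p + 1e-12))   # continuous score
```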


Subject(s)
Models, Statistical , Humans , Prospective Studies , Retrospective Studies , Risk Factors , Treatment Outcome
11.
Clin Infect Dis ; 74(9): 1675-1677, 2022 05 03.
Article in English | MEDLINE | ID: mdl-34463697

ABSTRACT

We assessed temporal changes in the household secondary attack rate of severe acute respiratory syndrome coronavirus 2 and identified risk factors for transmission in vulnerable Latino households of Baltimore, Maryland. The household secondary attack rate was 45.8%, and it appeared to increase as the alpha variant spread, highlighting the magnified risk of spread in unvaccinated populations.
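For reference, a household secondary attack rate of this kind is the share of susceptible household contacts of an index case who become infected; a minimal sketch with illustrative counts follows.

```python
# Secondary attack rate (SAR) with a 95% Wilson CI.
# Counts below are illustrative, not the study's data.
from statsmodels.stats.proportion import proportion_confint

secondary_cases, household_contacts = 110, 240  # hypothetical counts
sar = secondary_cases / household_contacts
lo, hi = proportion_confint(secondary_cases, household_contacts,
                            method="wilson")
print(f"SAR = {sar:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```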


Subject(s)
COVID-19 , SARS-CoV-2 , Family Characteristics , Hispanic or Latino , Humans
12.
Health Care Manag Sci ; 25(1): 100-125, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34401992

ABSTRACT

Prolonged waiting to access health care is a primary concern for nations aiming for comprehensive effective care, due to its adverse effects on mortality, quality of life, and government approval. Here, we propose two novel bargaining frameworks to reduce waiting lists in two-tier health care systems with local and regional actors. In particular, we assess the impact on system performance of 1) trading patients on waiting lists among hospitals, 2) introducing private hospitals to capture unfulfilled demand, and 3) hospitals' willingness to share capacity. We calibrated our models with 2008-2018 Chilean waiting list data. If hospitals trade unattended patients, our game-theoretic models indicate a potential reduction of waiting lists of up to 37%. However, when private hospitals are introduced into the system, we found a possible reduction of waiting lists of up to 60%. Further analyses revealed a trade-off between diagnosing unserved demand and the additional expense of using private hospitals as a back-up system. In summary, our game-theoretic frameworks of waiting list management in two-tier health systems suggest that public-private cooperation can be an effective mechanism to reduce waiting lists. Further empirical and prospective evaluations are needed.


Subject(s)
Quality of Life , Waiting Lists , Chile , Hospitals, Private , Hospitals, Public , Humans
13.
Infect Control Hosp Epidemiol ; 43(9): 1162-1170, 2022 09.
Article in English | MEDLINE | ID: mdl-34674791

ABSTRACT

OBJECTIVE: We analyzed the efficacy, cost, and cost-effectiveness of predictive decision-support systems based on surveillance interventions to reduce the spread of carbapenem-resistant Enterobacteriaceae (CRE). DESIGN: We developed a computational model that included patient movement between acute-care hospitals (ACHs), long-term care facilities (LTCFs), and communities to simulate the transmission and epidemiology of CRE. A comparative cost-effectiveness analysis was conducted on several surveillance strategies to detect asymptomatic CRE colonization, which included screening in ICUs at select or all hospitals, a statewide registry, or a combination of hospital screening and a statewide registry. SETTING: We investigated 51 ACHs, 222 LTCFs, and skilled nursing facilities, and 464 ZIP codes in the state of Maryland. PATIENTS OR PARTICIPANTS: The model was informed using 2013-2016 patient-mix data from the Maryland Health Services Cost Review Commission. This model included all patients that were admitted to an ACH. RESULTS: On average, the implementation of a statewide CRE registry reduced annual CRE infections by 6.3% (18.8 cases). Policies of screening in select or all ICUs without a statewide registry had no significant impact on the incidence of CRE infections. Predictive algorithms, which identified any high-risk patient, reduced colonization incidence by an average of 1.2% (3.7 cases) without a registry and 7.0% (20.9 cases) with a registry. Implementation of the registry was estimated to save $572,000 statewide in averted infections per year. CONCLUSIONS: Although hospital-level surveillance provided minimal reductions in CRE infections, regional coordination with a statewide registry of CRE patients reduced infections and was cost-effective.


Subject(s)
Carbapenem-Resistant Enterobacteriaceae , Cross Infection , Enterobacteriaceae Infections , Anti-Bacterial Agents/therapeutic use , Cost-Benefit Analysis , Cross Infection/drug therapy , Cross Infection/epidemiology , Cross Infection/prevention & control , Enterobacteriaceae Infections/drug therapy , Enterobacteriaceae Infections/epidemiology , Enterobacteriaceae Infections/prevention & control , Humans , Maryland/epidemiology
14.
Poult Sci ; 100(3): 100944, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33652538

ABSTRACT

Microbial endocrinology, which is the study of neurochemical-based host-microbe interaction, has demonstrated that neurochemicals affect bacterial pathogenicity. A variety of neurochemicals, including norepinephrine, were shown to enhance intestinal epithelial colonization by Campylobacter jejuni. Yet, little is known about whether serotonin, an abundant neurochemical produced in the gut, affects the physiology of C. jejuni and its interaction with the host gut epithelium. Considering the avian gut produces serotonin and serves as a major reservoir of C. jejuni, we sought to investigate whether serotonin can affect C. jejuni physiology and gut epithelial colonization in vitro. We first determined the biogeographical distribution of serotonin concentrations in the serosa, mucosa, and luminal contents of the broiler chicken ileum, cecum, and colon. Serotonin concentrations were greater (P < 0.05) in the mucosa and serosa compared to the luminal content in each gut region examined. Among the ileum, colon, and cecum, the colon was found to contain the greatest concentrations of serotonin. We then investigated whether serotonin may effect changes in C. jejuni growth and motility in vitro. The C. jejuni used in this study was previously isolated from the broiler chicken ceca. Serotonin at concentrations of 1 mM or below did not elicit changes in growth (P > 0.05) or motility (P > 0.05) of C. jejuni. Next, we utilized liquid chromatography-tandem mass spectrometry to investigate whether serotonin affected the proteome of C. jejuni. Serotonin caused (P < 0.05) the downregulation of a protein (CJJ81176_1037) previously identified to be essential in C. jejuni colonization. Based on our findings, we evaluated whether serotonin would cause a functional change in C. jejuni adhesion and invasion of the HT29MTX-E12 colonic epithelial cell line. Serotonin was found to cause a reduction in adhesion (P < 0.05) but not invasion (P > 0.05). Together, we have identified a potential role for serotonin in modulating C. jejuni colonization in the gut in vitro. Further studies are required to understand the practical implications of these findings for the control of C. jejuni enteric colonization in vivo.


Subject(s)
Campylobacter Infections , Campylobacter jejuni , Gastrointestinal Microbiome , Poultry Diseases , Animals , Campylobacter Infections/veterinary , Cecum , Chickens , Epithelium , Serotonin
15.
BMC Gastroenterol ; 21(1): 89, 2021 Feb 27.
Article in English | MEDLINE | ID: mdl-33639850

ABSTRACT

BACKGROUND: Inpatient colonoscopy bowel preparation (ICBP) is frequently inadequate and can lead to adverse events, delayed or repeated procedures, and negative patient outcomes. Guidelines to overcome the complex factors in this setting are not well established. Our aims were to use health systems engineering principles to comprehensively evaluate the ICBP process, create an ICBP protocol, increase adequate ICBP, and decrease length of stay. Our goal was to provide adaptable tools for other institutions and procedural specialties. METHODS: Patients admitted to our tertiary care academic hospital who underwent inpatient colonoscopy between July 3, 2017 and June 8, 2018 were included. Our multidisciplinary team created a protocol employing health systems engineering techniques (i.e., process mapping, cause-effect diagrams, and plan-do-study-act cycles). We collected demographic and colonoscopy data. Our outcome measures were adequate preparation and length of stay. We compared pre-intervention (120 ICBP) vs. post-intervention (129 ICBP) outcomes using generalized linear regression models. Our new ICBP protocol included: split-dose 6-L polyethylene glycol-electrolyte solution, a gastroenterology electronic note template, and an education plan for patients, nurses, and physicians. RESULTS: The percentage of adequate ICBPs significantly increased with the intervention, from 61% pre-intervention to 74% post-intervention (adjusted odds ratio of 1.87, p value = 0.023). The median length of stay decreased by approximately 25%, from 4 days pre-intervention to 3 days post-intervention (p value = 0.11). CONCLUSIONS: By addressing issues at the patient, provider, and system levels with health systems engineering principles, we improved patient safety and the quality of care provided by increasing rates of adequate ICBP.


Subject(s)
Gastroenterology , Inpatients , Cathartics , Colonoscopy , Humans , Patient-Centered Care , Polyethylene Glycols
17.
Poult Sci ; 100(3): 100885, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33516475

ABSTRACT

Two meat-type broiler lines, line A and line B, were fed experimental diets from 22-42 d with the objective of determining the effects of dietary metabolizable energy (ME) levels on feed intake (FI), performance, body composition, and processing yield as affected by environmental grow-out temperatures. A total of 2,050 male chicks from line A and 2,050 male chicks from line B were reared in 90 floor pens, 45 chicks per pen, utilizing primary breeder nutrition and husbandry guidelines for the starter (1-10 d) and grower (11-21 d) phases. Experimental finisher diets consisted of 5 increasing levels of apparent nitrogen-corrected ME (2,800, 2,925, 3,050, 3,175, and 3,300 kcal/kg, set at 19.5% crude protein and 1.0% dLys at each level) to represent 80, 90, 100, 110, and 120% of the Evonik AminoChick energy level, giving a 2 × 5 factorial design, and were fed from 22-42 d. All other amino acid levels in the diets were formulated to a fixed ratio of the dLys level. There were nine replicate pens for each diet and each line. The experiment was conducted twice: once in the hot season (barn averages: 77.55 °F and 86.04% RH) and once in the cool season (barn averages: 69.91 °F and 63.98% RH). Results showed that FI and feed conversion ratio (FCR) decreased (P < 0.05) linearly (R2 = 0.9) by 61.25 g and 0.073 units, respectively, for every 10% increase in dietary ME in the combined analysis of lines and seasons. The % fat mass of total body mass increased by 0.57%, whereas % protein mass decreased by 0.21% across ME levels (R2 > 0.9). However, there was no difference (P > 0.05) in % weights (of live weight) for wings, breast filet, tenders, or leg quarters across ME levels for either line, except % fat pad, which increased (P < 0.05) by 0.20% for each 10% increment in dietary ME level. Line B had higher cumulative FI, BW gain, % lean, and protein mass of body mass than line A in the hot season (P < 0.05). Feed intake was not different between lines in the cool season (P > 0.05), whereas higher BW and improved FCR were observed for line A. Line A had higher % fat mass in both seasons. In summary, performance and yield results as affected by dietary ME levels were line specific and were affected by grow-out season. Identifying the optimal dietary ME level within the range studied (2,800-3,300 kcal/kg) at a constant recommended amino acid level requires weighing performance and profitability indices that take into account grow-out production inputs and processing yield outputs.


Subject(s)
Animal Nutritional Physiological Phenomena , Chickens , Diet , Energy Metabolism , Housing, Animal , Meat , Temperature , Animal Feed/analysis , Animals , Chickens/growth & development , Chickens/metabolism , Diet/veterinary , Housing, Animal/standards , Male , Meat/standards , Weight Gain/physiology
18.
Article in English | MEDLINE | ID: mdl-36168500

ABSTRACT

Artificial intelligence (AI) refers to the performance of tasks by machines ordinarily associated with human intelligence. Machine learning (ML) is a subtype of AI; it refers to the ability of computers to draw conclusions (ie, learn) from data without being directly programmed. ML builds from traditional statistical methods and has drawn significant interest in healthcare epidemiology due to its potential for improving disease prediction and patient care. This review provides an overview of ML in healthcare epidemiology and practical examples of ML tools used to support healthcare decision making at 4 stages of hospital-based care: triage, diagnosis, treatment, and discharge. Examples include model-building efforts to assist emergency department triage, predicting time before septic shock onset, detecting community-acquired pneumonia, and classifying COVID-19 disposition risk level. Increasing availability and quality of electronic health record (EHR) data as well as computing power provides opportunities for ML to increase patient safety, improve the efficiency of clinical management, and reduce healthcare costs.

19.
J Am Med Inform Assoc ; 28(1): 62-70, 2021 01 15.
Article in English | MEDLINE | ID: mdl-33164100

ABSTRACT

OBJECTIVE: Clinical trials ensure that pharmaceutical treatments are safe, efficacious, and effective for public consumption, but are extremely complex, taking up to 10 years and $2.6 billion to complete. One main source of complexity arises from the collaboration between actors, and network science methodologies can be leveraged to explore that complexity. We aim to characterize collaborations between actors in the clinical trials context and investigate trends of successful actors. MATERIALS AND METHODS: We constructed a temporal network of clinical trial collaborations between large and small-size pharmaceutical companies, academic institutions, nonprofit organizations, hospital systems, and government agencies from public and proprietary data and introduced metrics to quantify actors' collaboration network structure, organizational behavior, and partnership characteristics. A multivariable regression analysis was conducted to determine the metrics' relationship with success. RESULTS: We found a positive correlation between the number of successful approved trials and interdisciplinary collaborations measured by a collaboration diversity metric (P < .01). Our results also showed a negative effect of the local clustering coefficient (P < .01) on the success of clinical trials. Large pharmaceutical companies have the lowest local clustering coefficient and more diversity in partnerships across biomedical specializations. CONCLUSIONS: Large pharmaceutical companies are more likely to collaborate with a wider range of actors from other specialties, especially smaller industry actors who are newcomers in clinical research, resulting in exclusive access to smaller actors. Future investigations are needed to show how concentrations of influence and resources might result in diminished gains in treatment development.
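A toy sketch of two of the network metrics named above: the local clustering coefficient (a networkx built-in) and a simple partner-diversity score computed as Shannon entropy over partner types. The graph and the diversity formula are illustrative; the paper's exact metric definitions are not reproduced here.

```python
# Toy collaboration network: clustering coefficient and a diversity score.
import math
import networkx as nx

G = nx.Graph()
G.add_edges_from([("pharmaA", "univX"), ("pharmaA", "hospY"),
                  ("pharmaA", "nonprofitZ"), ("univX", "hospY")])
org_type = {"pharmaA": "industry", "univX": "academic",
            "hospY": "hospital", "nonprofitZ": "nonprofit"}

print(nx.clustering(G, "pharmaA"))  # local clustering coefficient

def partner_diversity(g, node):
    """Shannon entropy over the organization types of a node's partners."""
    partners = [org_type[n] for n in g.neighbors(node)]
    probs = [partners.count(t) / len(partners) for t in set(partners)]
    return -sum(p * math.log(p) for p in probs)

print(partner_diversity(G, "pharmaA"))
```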


Subject(s)
Clinical Trials as Topic/organization & administration , Drug Approval/organization & administration , Drug Industry/organization & administration , Pharmaceutical Preparations , Cooperative Behavior , Humans , Multivariate Analysis , Regression Analysis
20.
Ann Emerg Med ; 76(4): 501-514, 2020 10.
Article in English | MEDLINE | ID: mdl-32713624

ABSTRACT

STUDY OBJECTIVE: Acute kidney injury occurs commonly and is a leading cause of prolonged hospitalization, development and progression of chronic kidney disease, and death. Early acute kidney injury treatment can improve outcomes. However, current decision support is not able to detect patients at the highest risk of developing acute kidney injury. We analyzed routinely collected emergency department (ED) data and developed prediction models with capacity for early identification of ED patients at high risk for acute kidney injury. METHODS: A multisite, retrospective, cross-sectional study was performed at 3 EDs between January 2014 and July 2017. All adult ED visits in which patients were hospitalized and serum creatinine level was measured both on arrival and again within 72 hours were included. We built machine-learning-based classifiers that rely on vital signs, chief complaints, medical history and active medical visits, and laboratory results to predict the development of acute kidney injury stage 1 and 2 in the next 24 to 72 hours, according to creatinine-based international consensus criteria. Predictive performance was evaluated out of sample by Monte Carlo cross-validation. RESULTS: The final cohort included 91,258 visits by 59,792 unique patients. The 72-hour incidence of acute kidney injury was 7.9% for stages greater than or equal to 1 and 1.0% for stages greater than or equal to 2. The area under the receiver operating characteristic curve for acute kidney injury prediction ranged from 0.81 (95% confidence interval 0.80 to 0.82) to 0.74 (95% confidence interval 0.74 to 0.75), with a median time from ED arrival to prediction of 1.7 hours (interquartile range 1.3 to 2.5 hours). CONCLUSION: Machine learning applied to routinely collected ED data identified ED patients at high risk for acute kidney injury up to 72 hours before they met diagnostic criteria. Further prospective evaluation is necessary.
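A sketch of Monte Carlo cross-validation as used here for out-of-sample AUC: repeated random train/test splits, with the AUC distribution summarized across splits. The data file, feature layout, and classifier choice are assumptions; the abstract does not specify the learner.

```python
# Monte Carlo cross-validation: repeated random splits, AUC per split.
# File, columns, and the gradient-boosting learner are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit

visits = pd.read_csv("ed_visits.csv")        # one row per ED visit
X = visits.drop(columns=["aki_within_72h"]).values
y = visits["aki_within_72h"].values          # 1 if AKI criteria met

aucs = []
splitter = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)
for tr, te in splitter.split(X):
    clf = GradientBoostingClassifier().fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))

print(np.mean(aucs), np.percentile(aucs, [2.5, 97.5]))
```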


Subject(s)
Acute Kidney Injury/diagnosis , Electronic Health Records/statistics & numerical data , Machine Learning/standards , Adolescent , Adult , Aged , Aged, 80 and over , Clinical Decision Rules , Creatinine/analysis , Creatinine/blood , Cross-Sectional Studies , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Humans , Machine Learning/statistics & numerical data , Male , Middle Aged , Retrospective Studies