Results 1 - 8 of 8
1.
Small ; 18(26): e2107559, 2022 07.
Article in English | MEDLINE | ID: mdl-35606684

ABSTRACT

Decades of research into oral nanoparticle (NP) delivery have still not produced a clear consensus on which properties make an effective oral drug delivery system. Surface properties (charge and bioadhesiveness), together with in vitro and in vivo correlation, generate the greatest number of disagreements within the field. Herein, a mechanism underlying the in vivo behavior of NPs is proposed that bridges the gaps between these disagreements. The mechanism relies on the idea of biocoating, the coating of NPs with mucus, which alters their surface properties and ultimately their systemic uptake. Using this mechanism, several coated NPs are tested in vitro, ex vivo, and in vivo, and biocoating is found to affect NP size, zeta potential, mucosal diffusion coefficient, extent of aggregation, and in vivo/in vitro/ex vivo correlation. Based on these results, low-molecular-weight polylactic acid exhibits a 21-fold increase in mucosal diffusion coefficient after precoating compared with uncoated particles, as well as 20% less aggregation and about 30% uptake to the blood in vivo. These findings suggest that biocoating reduces negative NP charge, which results in an enhanced mucosal diffusion rate, increased gastrointestinal retention time, and high systemic uptake.
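
For context on the diffusion coefficients discussed above, the free-solution baseline for a particle of known size can be approximated with the Stokes-Einstein relation; mucus slows diffusion well below this bound, which is what makes the reported 21-fold coating-driven increase meaningful. A minimal sketch, assuming a hypothetical 200 nm particle in water at 37 °C (illustrative values, not from the study):

```python
import math

def stokes_einstein_diffusion(radius_m: float, temp_k: float = 310.15,
                              viscosity_pa_s: float = 6.9e-4) -> float:
    """Free-solution diffusion coefficient (m^2/s) via Stokes-Einstein:
    D = kB * T / (6 * pi * eta * r), with water viscosity at 37 degC."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6 * math.pi * viscosity_pa_s * radius_m)

# Hypothetical NP of 200 nm diameter (100 nm radius)
d_free = stokes_einstein_diffusion(100e-9)
print(f"Free-solution D ~ {d_free:.2e} m^2/s")
```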


Subject(s)
Drug Carriers , Nanoparticles , Administration, Oral , Drug Delivery Systems/methods , Mucus , Polymers
2.
Clin Gastroenterol Hepatol ; 20(5): e1050-e1060, 2022 05.
Article in English | MEDLINE | ID: mdl-34216826

ABSTRACT

BACKGROUND & AIMS: Older adults with colorectal polyps undergo frequent surveillance colonoscopy. There is no specific guidance regarding when to stop surveillance. We aimed to characterize endoscopist recommendations regarding surveillance colonoscopy in older adults and identify patient, procedure, and endoscopist characteristics associated with recommendations to stop. METHODS: This was a retrospective cohort study, conducted at a single academic medical center, of adults aged ≥75 years who underwent colonoscopy for polyp surveillance or screening during which polyps were found. The primary outcome was a recommendation to stop surveillance. Predictors examined included patient age, sex, family history of colorectal cancer, polyp findings, and endoscopist sex and years in practice. Associations were evaluated using multilevel logistic regression. RESULTS: Among 1426 colonoscopies performed by 17 endoscopists, 34.6% contained a recommendation to stop and 52.3% a recommendation to continue. Older patients were more likely to receive a recommendation to stop, including those 80-84 years (odds ratio [OR], 7.7; 95% confidence interval [CI], 4.8-12.3) and ≥85 years (OR, 9.0; 95% CI, 3.3-24.6), compared with those 75-79 years. Family history of colorectal cancer (OR, 0.42; 95% CI, 0.24-0.74) and a history of low-risk (OR, 0.17; 95% CI, 0.11-0.24) or high-risk (OR, 0.02; 95% CI, 0.01-0.04) polyps were inversely associated with recommendations to stop. The likelihood of a recommendation to stop varied significantly across endoscopists. CONCLUSIONS: Only 35% of adults ≥75 years of age are recommended to stop surveillance colonoscopy. The presence of polyps was strongly associated with fewer recommendations to stop. The variation in endoscopist recommendations highlights an opportunity to better standardize recommendations following colonoscopy in older adults.
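
A sketch of the kind of model this abstract describes: logistic regression of "recommend stopping surveillance" on patient covariates. The study fit a multilevel (random-intercept) model; for simplicity this sketch absorbs between-endoscopist variation with fixed effects instead, and all data below are simulated, loosely shaped like the reported associations:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1426
df = pd.DataFrame({
    "age_group": rng.choice(["75-79", "80-84", ">=85"], size=n, p=[0.6, 0.3, 0.1]),
    "family_hx": rng.integers(0, 2, size=n),
    "polyp_hx": rng.choice(["none", "low_risk", "high_risk"], size=n),
    "endoscopist": rng.integers(0, 17, size=n),
})
# Simulated outcome: older age raises, polyp/family history lowers, the odds of "stop"
logit = (-1.0 + 2.0 * (df.age_group == "80-84") + 2.2 * (df.age_group == ">=85")
         - 0.9 * df.family_hx - 1.8 * (df.polyp_hx == "low_risk")
         - 3.9 * (df.polyp_hx == "high_risk"))
df["stop"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("stop ~ C(age_group, Treatment('75-79')) + family_hx"
                " + C(polyp_hx, Treatment('none')) + C(endoscopist)",
                data=df).fit(disp=False)
print(np.exp(fit.params.filter(like="age_group")))  # odds ratios by age group
```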


Subject(s)
Colonic Polyps , Colorectal Neoplasms , Aged , Colonic Polyps/diagnosis , Colonic Polyps/epidemiology , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Humans , Mass Screening , Retrospective Studies
3.
Head Neck ; 44(1): 275-285, 2022 01.
Article in English | MEDLINE | ID: mdl-34729845

ABSTRACT

The present study aims to estimate a pooled hazard ratio (HR) comparing overall survival (OS) between salvage surgery and nonsurgical management of recurrent head and neck squamous cell carcinoma (HNSCC). PubMed/MEDLINE and Embase-Ovid were searched on March 5, 2020, for English-language articles reporting survival for salvage surgery and nonsurgical management of recurrent HNSCC. A meta-analysis of HR estimates using a random-effects model was performed. Fifteen studies reported survival for salvage surgery and nonsurgical management of recurrence. Five-year OS ranged from 26% to 67% in the salvage surgery groups, compared with 0% to 32% in the nonsurgical management groups. Six studies reported HRs comparing salvage surgery with nonsurgical management; the pooled HR was 0.25 (95% CI [0.16, 0.38]; p < 0.0001). Selection for salvage surgery was associated with one quarter the mortality hazard of nonsurgical management, although this estimate must be interpreted in light of confounding factors, including subsite and treatment intent.
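
A sketch of random-effects pooling of hazard ratios on the log scale (the DerSimonian-Laird approach is one common random-effects estimator; the abstract does not specify which was used). The six study-level HRs and upper CI bounds below are placeholders, not the actual study data:

```python
import numpy as np

hr = np.array([0.20, 0.28, 0.22, 0.35, 0.18, 0.30])     # hypothetical per-study HRs
ci_hi = np.array([0.45, 0.60, 0.40, 0.70, 0.38, 0.55])  # hypothetical upper 95% bounds

log_hr = np.log(hr)
se = (np.log(ci_hi) - log_hr) / 1.96         # SE recovered from CI half-width

w = 1 / se**2                                # fixed-effect (inverse-variance) weights
fixed = np.sum(w * log_hr) / np.sum(w)
q = np.sum(w * (log_hr - fixed) ** 2)        # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(hr) - 1)) / c)     # between-study variance (DL estimator)

w_re = 1 / (se**2 + tau2)                    # random-effects weights
pooled = np.sum(w_re * log_hr) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"Pooled HR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```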


Subject(s)
Carcinoma, Squamous Cell , Head and Neck Neoplasms , Carcinoma, Squamous Cell/surgery , Head and Neck Neoplasms/surgery , Humans , Neoplasm Recurrence, Local/surgery , Salvage Therapy , Squamous Cell Carcinoma of Head and Neck/surgery
4.
Curr HIV Res ; 15(5): 307-317, 2017 Nov 23.
Article in English | MEDLINE | ID: mdl-28933280

ABSTRACT

BACKGROUND: Although HIV and injury contribute substantially to disease burdens in low- and middle-income countries (LMIC), their intersection is poorly characterized. OBJECTIVE: This systematic review assessed the prevalence and associated mortality impact of HIV-seropositivity among injured patients in LMIC. METHODS: A systematic search of PubMed, EMBASE, Global Health, CINAHL, POPLINE and Cochrane databases through August 2016 was performed. Prospective and cross-sectional reports of injured patients from LMIC that evaluated HIV-serostatus were included. Two reviewers identified eligible records (kappa=0.83); quality was assessed using GRADE criteria. HIV-seroprevalence and mortality risks were summarized, and pooled estimates were calculated using random-effects models with heterogeneity assessed. RESULTS: Of 472 retrieved records, sixteen met inclusion criteria. All reports were of low or very low quality and derived from sub-Saharan Africa. HIV-serostatus was available for 3,994 patients. Individual report and pooled HIV-seroprevalence estimates were uniformly greater than temporally matched national statistics (range: 4.5-35.0%). The pooled estimate from South African reports was three-fold greater than the matched national prevalence (32.0%; 95% CI, 28.0-37.0%). Mortality data were available for 1,398 patients. Heterogeneity precluded pooled mortality analysis. Among individual reports, 66.7% demonstrated significantly increased relative risks (RR) of death; none found a reduced risk of death among HIV-seropositive patients. RRs of death among HIV-seropositive patients ranged from 1.86 (95% CI, 1.11-3.09) in Malawi to 10.7 (95% CI, 1.32-86.1) in South Africa. CONCLUSION: The available data indicate that HIV-seropositivity among the injured is high relative to national rates and may increase mortality, suggesting that integrated HIV-injury programming could be beneficial. Given the data limitations, further study of the HIV-injury intersection is crucially needed.
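
For readers unfamiliar with the relative-risk statistics quoted above, a minimal sketch of how an RR and its 95% CI are computed from a 2x2 table (deaths among HIV-seropositive versus HIV-seronegative injured patients). The counts are invented for illustration:

```python
import math

def relative_risk(d_exp: int, n_exp: int, d_unexp: int, n_unexp: int):
    """Relative risk with 95% CI via the standard error of log(RR)."""
    rr = (d_exp / n_exp) / (d_unexp / n_unexp)
    se = math.sqrt(1/d_exp - 1/n_exp + 1/d_unexp - 1/n_unexp)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 18/120 seropositive vs 25/310 seronegative patients died
rr, lo, hi = relative_risk(18, 120, 25, 310)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```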


Subject(s)
HIV Seropositivity/epidemiology , HIV Seropositivity/mortality , Wounds and Injuries/complications , Africa South of the Sahara/epidemiology , Developing Countries , Humans , Risk Assessment , Seroepidemiologic Studies , Survival Analysis , Wounds and Injuries/diagnosis
5.
Lancet Infect Dis ; 17(6): 654-660, 2017 06.
Article in English | MEDLINE | ID: mdl-28258817

ABSTRACT

BACKGROUND: The 2014-15 Ebola virus disease (EVD) epidemic strained health systems in west Africa already overburdened with other diseases, including malaria. Because EVD and malaria can be difficult to distinguish clinically, and rapid testing was not available in many Ebola Treatment Units (ETUs), guidelines recommended empirical malaria treatment. Little is known, however, about the prevalence and characteristics of patients entering an ETU who were infected with malaria parasites, either alone or concurrently with Ebola virus. METHODS: Data for sociodemographics, disease characteristics, and mortality were analysed for patients with suspected EVD admitted to three ETUs in Sierra Leone using a retrospective cohort design. Testing for Ebola virus was done by real-time PCR and for malaria by a rapid diagnostic test. Characteristics of patients were compared and survival analyses were done to evaluate the effect of infection status on mortality. FINDINGS: Between Dec 1, 2014, and Oct 15, 2015, 1524 cases were treated at the three ETUs for suspected EVD, of whom 1368 (90%) had diagnostic data for malaria and EVD. Median age of patients was 29 years (IQR 20-44) and 715 (52%) were men. 1114 patients were EVD negative, of whom 365 (33%) tested positive for malaria. Of 254 EVD positive patients, 53 (21%) also tested positive for malaria. Mortality risk was highest in patients diagnosed with both EVD and malaria (35 [66%] of 53 died) and patients diagnosed with EVD alone (105 [52%] of 201 died). Compared with patients presenting to ETUs without malaria or EVD, mortality was increased in the malaria positive and EVD positive group (adjusted hazard ratio 9·36, 95% CI 6·18-14·18, p<0·0001), and the malaria negative and EVD positive group (5·97, 4·44-8·02, p<0·0001), but reduced in the malaria positive and EVD negative group (0·37, 0·20-1·23, p=0·0010). INTERPRETATION: Malaria parasite co-infection was common in patients presenting to ETUs and conferred an increased mortality risk in patients infected with Ebola virus, supporting empirical malaria treatment in ETUs. The high mortality among patients without EVD or malaria suggests expanded testing and treatment might improve care in future EVD epidemics. FUNDING: International Medical Corps.
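
A sketch of the survival comparison this abstract describes: a Cox proportional-hazards model of time to death by EVD/malaria status, adjusted for age and sex. The data are simulated and the lifelines package is assumed to be available; this is an illustration of the method, not a reconstruction of the study's model:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1368
df = pd.DataFrame({
    "evd": rng.integers(0, 2, size=n),
    "malaria": rng.integers(0, 2, size=n),
    "age": rng.integers(1, 80, size=n),
    "male": rng.integers(0, 2, size=n),
})
df["evd_x_malaria"] = df.evd * df.malaria  # co-infection interaction term

# Simulated exponential survival times: higher hazard with EVD, higher still with both
rate = 0.02 * np.exp(1.6 * df.evd + 0.5 * df.evd_x_malaria)
df["days"] = rng.exponential(1 / rate)
df["died"] = (df.days <= 21).astype(int)   # administrative censoring at 21 days
df["days"] = df.days.clip(upper=21)

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="died")  # remaining columns are covariates
cph.print_summary()  # hazard ratios = exp(coef)
```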


Subject(s)
Coinfection/epidemiology , Ebolavirus/isolation & purification , Hemorrhagic Fever, Ebola/epidemiology , Malaria/epidemiology , Plasmodium falciparum/isolation & purification , Adult , Disease Outbreaks , Ebolavirus/pathogenicity , Female , Hemorrhagic Fever, Ebola/mortality , Humans , Malaria/mortality , Male , Plasmodium falciparum/pathogenicity , Prevalence , Real-Time Polymerase Chain Reaction , Retrospective Studies , Sierra Leone/epidemiology , Survivors
6.
Lancet Glob Health ; 4(10): e744-51, 2016 10.
Article in English | MEDLINE | ID: mdl-27567350

ABSTRACT

BACKGROUND: Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. METHODS: DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operating characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. FINDINGS: Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. INTERPRETATION: The DHAKA score is the first clinical tool for assessing dehydration in children with acute diarrhoea to be externally validated in a low-income country. Further validation studies in a diverse range of settings and paediatric populations are warranted. FUNDING: National Institutes of Health Fogarty International Center.
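
A sketch of the two validation metrics named above: the area under the ROC curve for discriminating severe dehydration, and a weighted kappa for inter-rater agreement on the ordinal dehydration class. All scores and labels below are simulated stand-ins, sized roughly like the cohort:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(2)
n = 496
severe = rng.random(n) < 0.14                       # ~14% severe, as reported
dhaka_score = rng.normal(2, 1.5, n) + 2.5 * severe  # higher score if truly severe

print("AUC:", round(roc_auc_score(severe, dhaka_score), 2))

# Reliability: two raters assign ordinal classes 0/1/2 (none/some/severe)
rater1 = rng.integers(0, 3, n)
agree = rng.random(n) < 0.8                          # ~80% raw agreement
rater2 = np.where(agree, rater1, rng.integers(0, 3, n))
print("Weighted kappa:", round(cohen_kappa_score(rater1, rater2, weights="linear"), 2))
```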


Subject(s)
Dehydration/diagnosis , Diarrhea/complications , Severity of Illness Index , Acute Disease , Algorithms , Bangladesh , Body Weight , Child, Preschool , Dehydration/epidemiology , Dehydration/etiology , Developing Countries , Female , Fluid Therapy , Humans , Infant , Male , Prevalence , Prospective Studies , ROC Curve , Regression Analysis , Reproducibility of Results
7.
PLoS One ; 11(1): e0146859, 2016.
Article in English | MEDLINE | ID: mdl-26766306

ABSTRACT

INTRODUCTION: Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. OBJECTIVE: To assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. METHODS: A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having "some dehydration" with weight change 3-9% or "severe dehydration" with weight change > 9%. Logistic regression analysis and receiver operating characteristic (ROC) curves were used to evaluate the accuracy of the aorta-to-IVC ratio as a predictor of dehydration severity. RESULTS: 850 children were enrolled, of whom 771 were included in the final analysis. The aorta-to-IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta-to-IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%) for predicting severe dehydration were all poor. CONCLUSIONS: Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting.
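
A sketch of the two analyses reported above: a regression relating aorta-to-IVC ratio to percent dehydration, and the sensitivity/specificity of a ratio cutoff for severe dehydration. Everything below is simulated, including the cutoff of 1.4, which is a hypothetical value for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 771
ratio = rng.normal(1.2, 0.4, n)
pct_dehydration = 2.0 + 1.1 * ratio + rng.normal(0, 3, n)  # weak signal, much noise

# Regression: slope ~ percent dehydration gained per 1-point ratio increase
ols = sm.OLS(pct_dehydration, sm.add_constant(ratio)).fit()
print(f"Slope: {ols.params[1]:.2f}% dehydration per 1-point ratio increase")

# Screening performance of a hypothetical cutoff
severe = pct_dehydration > 9
test_pos = ratio > 1.4
sens = (test_pos & severe).sum() / severe.sum()
spec = (~test_pos & ~severe).sum() / (~severe).sum()
print(f"Sensitivity {sens:.0%}, specificity {spec:.0%}")
```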


Subject(s)
Dehydration/diagnostic imaging , Dehydration/etiology , Diarrhea/complications , Vena Cava, Inferior/diagnostic imaging , Acute Disease , Aorta/diagnostic imaging , Child, Preschool , Diarrhea/diagnosis , Female , Humans , Infant , Male , Point-of-Care Systems , Prognosis , Prospective Studies , ROC Curve , Reproducibility of Results , Risk Factors , Ultrasonography
8.
Glob Health Sci Pract ; 3(3): 405-18, 2015 Aug 18.
Article in English | MEDLINE | ID: mdl-26374802

ABSTRACT

INTRODUCTION: Diarrhea remains one of the most common and most deadly conditions affecting children worldwide. Accurately assessing dehydration status is critical to determining treatment course, yet no clinical diagnostic models for dehydration have been empirically derived and validated for use in resource-limited settings. METHODS: In the Dehydration: Assessing Kids Accurately (DHAKA) prospective cohort study, a random sample of children under 5 years with acute diarrhea was enrolled between February and June 2014 in Bangladesh. Local nurses assessed children for clinical signs of dehydration on arrival, and then serial weights were obtained as subjects were rehydrated. For each child, the percent weight change with rehydration was used to classify subjects with severe dehydration (>9% weight change), some dehydration (3-9%), or no dehydration (<3%). Clinical variables were then entered into logistic regression and recursive partitioning models to develop the DHAKA Dehydration Score and DHAKA Dehydration Tree, respectively. Models were assessed for their accuracy using the area under their receiver operating characteristic curve (AUC) and for their reliability through repeat clinical exams. Bootstrapping was used to internally validate the models. RESULTS: A total of 850 children were enrolled, with 771 included in the final analysis; of these, 11% were classified with severe dehydration, 45% with some dehydration, and 44% with no dehydration. Both the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant AUCs of 0.79 (95% CI = 0.74, 0.84) and 0.76 (95% CI = 0.71, 0.80), respectively, for the diagnosis of severe dehydration. Additionally, the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant positive likelihood ratios of 2.0 (95% CI = 1.8, 2.3) and 2.5 (95% CI = 2.1, 2.8), respectively, and significant negative likelihood ratios of 0.23 (95% CI = 0.13, 0.40) and 0.28 (95% CI = 0.18, 0.44), respectively, for the diagnosis of severe dehydration. Both models demonstrated 90% agreement between independent raters and good reproducibility using bootstrapping. CONCLUSION: This study is the first to empirically derive and internally validate accurate and reliable clinical diagnostic models for dehydration in a resource-limited setting. After external validation, frontline providers may use these new tools to better manage acute diarrhea in children.
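
A sketch of how the likelihood ratios quoted above relate to sensitivity and specificity (LR+ = sens / (1 - spec); LR- = (1 - sens) / spec), followed by a toy version of the bootstrap internal validation the study mentions. All numbers and labels below are illustrative, not the study's data:

```python
import numpy as np

def likelihood_ratios(sens: float, spec: float):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

# Illustrative operating point: sens 0.85, spec 0.57 gives LR+ ~2.0, LR- ~0.26,
# in the neighborhood of the DHAKA Score values reported above
print(likelihood_ratios(0.85, 0.57))

# Bootstrap sketch: resample cases with replacement, recompute LR+ each time
rng = np.random.default_rng(4)
n = 771
severe = rng.random(n) < 0.11
test_pos = np.where(rng.random(n) < 0.8, severe, ~severe)  # imperfect test
lr_pos = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                 # resampled case indices
    s, t = severe[idx], test_pos[idx]
    sens = (t & s).sum() / s.sum()
    spec = (~t & ~s).sum() / (~s).sum()
    lr_pos.append(sens / (1 - spec))
print("LR+ 95% CI:", np.percentile(lr_pos, [2.5, 97.5]).round(2))
```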


Subject(s)
Decision Trees , Dehydration/diagnosis , Diarrhea/complications , Bangladesh , Child, Preschool , Cohort Studies , Dehydration/etiology , Dehydration/therapy , Female , Fluid Therapy/methods , Humans , Infant , Male , Prospective Studies , ROC Curve , Reproducibility of Results , Severity of Illness Index