1.
PLoS One ; 18(8): e0287697, 2023.
Article in English | MEDLINE | ID: mdl-37616195

ABSTRACT

BACKGROUND: Opioids are commonly prescribed for postoperative pain but may lead to prolonged use and addiction. Diabetes impairs nerve function, complicates pain management, and makes opioid prescribing particularly challenging. METHODS: This retrospective observational study included a cohort of postoperative patients from a multisite academic health system to assess the relationship between diabetes, pain, and prolonged opioid use (POU), 2008-2019. POU was defined as a new opioid prescription 3-6 months after discharge. The odds that a patient had POU were assessed using multivariate logistic regression controlling for patient factors (e.g., demographic and clinical factors, as well as prior pain and opioid use). FINDINGS: A total of 43,654 patients were included, 12.4% with diabetes. Patients with diabetes had higher preoperative pain scores (2.1 vs 1.9, p<0.001) and lower opioid-naïve rates (58.7% vs 68.6%, p<0.001). Following surgery, patients with diabetes had higher rates of POU (17.7% vs 12.7%, p<0.001) despite receiving similar opioid prescriptions at discharge. Patients with Type I diabetes were more likely to have POU compared to other patients (Odds Ratio [OR]: 2.22; 95% Confidence Interval [CI]: 1.69-2.90 and OR: 1.44, CI: 1.33-1.56, respectively). INTERPRETATION: Surgical patients with diabetes are at increased risk for POU even after controlling for likely covariates, yet they receive similar postoperative opioid therapy. The results suggest that a more tailored approach to postoperative pain management in patients with diabetes is warranted.
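
As an illustration of the modeling step described in this abstract, the sketch below fits a multivariate logistic regression for POU and reports adjusted odds ratios with confidence intervals. It is a minimal, hedged example on synthetic data; column names such as "pou", "diabetes_type", and "preop_pain" are assumptions, not the study's variables.

    # Illustrative only: multivariate logistic regression for prolonged opioid
    # use (POU), adjusting for patient covariates. All column names and data
    # are synthetic stand-ins, not the study's dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "pou": rng.binomial(1, 0.15, 500),
        "diabetes_type": rng.choice(["none", "type1", "type2"], 500),
        "preop_pain": rng.uniform(0, 10, 500),
        "opioid_naive": rng.binomial(1, 0.65, 500),
        "age": rng.integers(18, 90, 500),
    })

    model = smf.logit(
        "pou ~ C(diabetes_type, Treatment('none')) + preop_pain + opioid_naive + age",
        data=df,
    ).fit(disp=0)

    # Adjusted odds ratios with 95% confidence intervals, as reported above.
    or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
    or_table.columns = ["OR", "2.5%", "97.5%"]
    print(or_table)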


Subject(s)
Diabetes Mellitus , Opiate Alkaloids , Opioid-Related Disorders , Humans , Analgesics, Opioid/adverse effects , Pain Management , Practice Patterns, Physicians' , Pain, Postoperative/drug therapy , Diabetes Mellitus/drug therapy
2.
Front Digit Health ; 4: 995497, 2022.
Article in English | MEDLINE | ID: mdl-36561925

ABSTRACT

Objective: The opioid crisis brought scrutiny to opioid prescribing. Understanding how opioid prescribing patterns and corresponding patient outcomes changed during the epidemic is essential for future targeted policies. Many studies attempt to model trends in opioid prescriptions; therefore, understanding the temporal shift in opioid prescribing patterns across populations is necessary. This study characterized postoperative opioid prescribing patterns across different populations, 2010-2020. Data Source: Administrative data from the Veterans Health Administration (VHA), six state Medicaid programs, and an Academic Medical Center (AMC). Data Extraction: Surgeries were identified using the Clinical Classifications Software. Study Design: Trends in average daily discharge Morphine Milligram Equivalent (MME), postoperative pain, and subsequent opioid prescription were compared using regression and likelihood ratio test statistics. Principal Findings: The cohorts included 595,106 patients, with populations that varied considerably in demographics. Over the study period, MME decreased significantly at VHA (37.5-30.1; p = 0.002) and Medicaid (41.6-31.3; p = 0.019), and increased at the AMC (36.9-41.7; p < 0.001). Persistent opioid users decreased after 2015 in VHA (p < 0.001) and Medicaid (p = 0.002) and increased at the AMC (p = 0.003), although a low rate was maintained. Average postoperative pain scores remained constant over the study period. Conclusions: VHA and Medicaid programs decreased opioid prescribing over the past decade, with differing response times and rates. In 2020, these systems achieved comparable opioid prescribing patterns and outcomes despite having very different populations. Acknowledging and incorporating these temporal distribution shifts into learning models is essential for robust and generalizable models.
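
The trend comparison described here (regression with likelihood ratio tests across health systems) can be sketched as below. This is a hedged, synthetic-data example of testing whether the discharge MME trend differs by system; it is not the authors' code, and the column names are placeholders.

    # Likelihood ratio test for differing MME trends across health systems,
    # using nested ordinary least squares models on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "year": rng.integers(2010, 2021, 2000),
        "system": rng.choice(["VHA", "Medicaid", "AMC"], 2000),
    })
    df["mme"] = 40 - 0.5 * (df["year"] - 2010) + rng.normal(0, 8, 2000)

    reduced = smf.ols("mme ~ year + C(system)", data=df).fit()
    full = smf.ols("mme ~ year * C(system)", data=df).fit()

    # Test the year-by-system interaction, i.e., whether trends differ.
    lr_stat = 2 * (full.llf - reduced.llf)
    df_diff = full.df_model - reduced.df_model
    print("LR =", round(lr_stat, 2), "p =", round(stats.chi2.sf(lr_stat, df_diff), 4))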

3.
Commun Med (Lond) ; 2: 88, 2022.
Article in English | MEDLINE | ID: mdl-35856080

ABSTRACT

Background: Statins conclusively decrease mortality in atherosclerotic cardiovascular disease (ASCVD), the leading cause of death worldwide, and are strongly recommended by guidelines. However, real-world statin utilization and persistence are low, resulting in excess mortality. Identifying reasons for statin nonuse at scale across health systems is crucial to developing targeted interventions to improve statin use. Methods: We developed and validated deep learning-based natural language processing (NLP) approaches (Clinical Bidirectional Encoder Representations from Transformers [BERT]) to classify statin nonuse and reasons for statin nonuse using unstructured electronic health records (EHRs) from a diverse healthcare system. Results: We present data from a cohort of 56,530 ASCVD patients, among whom 21,508 (38%) lack guideline-directed statin prescriptions and have no statins listed as allergies in the structured portions of the EHR. Of these 21,508 patients without prescriptions, only 3,929 (18%) have any discussion of statin use or nonuse in EHR documentation. The NLP classifiers identify statin nonuse with an area under the curve (AUC) of 0.94 (95% CI 0.93-0.96) and reasons for nonuse with a weighted-average AUC of 0.88 (95% CI 0.86-0.91) when evaluated against manual expert chart review in a held-out test set. Clinical BERT identifies key patient-level reasons (side effects, patient preference) and clinician-level reasons (guideline-discordant practices) for statin nonuse, including differences by type of ASCVD and patient race/ethnicity. Conclusions: Our deep learning NLP classifiers can identify crucial gaps in statin nonuse and reasons for nonuse in high-risk populations to support education, clinical decision support, and potential pathways for health systems to address ASCVD treatment gaps.
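
The classification approach described (a clinical BERT sequence classifier over note text) might be sketched as follows. This is a rough illustration, not the authors' pipeline; the checkpoint name "emilyalsentzer/Bio_ClinicalBERT" and the two-label setup are assumptions, and real use would require fine-tuning on labeled notes.

    # Sketch: a clinical BERT sequence classifier for statin nonuse.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    checkpoint = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical BERT weights
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=2  # e.g., statin nonuse vs. use (untrained head)
    )

    notes = [
        "Patient declined statin therapy due to prior myalgias.",
        "Continue atorvastatin 40 mg nightly.",
    ]
    batch = tokenizer(notes, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        probs = torch.softmax(model(**batch).logits, dim=-1)
    print(probs)  # class probabilities; fine-tuning on labeled notes is required first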

4.
PLoS Comput Biol ; 18(6): e1010175, 2022 06.
Article in English | MEDLINE | ID: mdl-35696426

ABSTRACT

Most biological processes are orchestrated by large-scale molecular networks that are described in large-scale model repositories and whose dynamics are extremely complex. An observed phenotype is a state of this system that results from control mechanisms whose identification is key to its understanding. The Biological Pathway Exchange (BioPAX) format is widely used to standardize the biological information relative to regulatory processes. However, few modeling approaches developed so far enable computing the events that control a phenotype in large-scale networks. Here we developed an integrated approach to build large-scale dynamic networks from BioPAX knowledge databases in order to analyse trajectories and to identify sets of biological entities that control a phenotype. The Cadbiom approach relies on the guarded transitions formalism, a discrete modeling approach that models a system's dynamics by taking into account competition and cooperation events in chains of reactions. The method can be applied to any (large-scale) BioPAX model thanks to a dedicated package that automatically generates Cadbiom models from BioPAX files. The Cadbiom framework was applied to the BioPAX versions of two resources (PID, KEGG) of the Pathway Commons database and to the Atlas of Cancer Signalling Network (ACSN). As a case study, it was used to characterize sets of biological entities implicated in the epithelial-mesenchymal transition. Our results highlight the similarities between the PID and ACSN resources in terms of biological content, and underline the heterogeneity in the usage of BioPAX semantics, which limits the fusion of models and requires curation. Causality analyses demonstrate the complementarity of the databases in terms of the combinatorics of controllers that explain a phenotype. From a biological perspective, our results show the specificity of controllers for epithelial and mesenchymal phenotypes that are consistent with the literature and identify a novel signature for intermediate states.
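
To make the guarded-transition idea concrete, here is a deliberately tiny, didactic sketch of boolean entities whose state changes fire only when a guard condition over the rest of the system holds. It is not the Cadbiom package's API or semantics, and the EMT-flavored entity names are invented for illustration.

    # Toy guarded-transition step over boolean entities (illustrative only).
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    State = Dict[str, bool]

    @dataclass
    class GuardedTransition:
        source: str                      # entity deactivated when firing
        target: str                      # entity activated when firing
        guard: Callable[[State], bool]   # condition on the rest of the system

    def step(state: State, transitions: List[GuardedTransition]) -> State:
        new_state = dict(state)
        for t in transitions:
            if state.get(t.source) and t.guard(state):
                new_state[t.source] = False
                new_state[t.target] = True
        return new_state

    # E-cadherin switch-off requires an active transcriptional repressor.
    transitions = [
        GuardedTransition("Ecadherin_on", "Ecadherin_off",
                          guard=lambda s: s.get("SNAIL_active", False)),
    ]
    state = {"Ecadherin_on": True, "Ecadherin_off": False, "SNAIL_active": True}
    print(step(state, transitions))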


Subject(s)
Biological Phenomena , Models, Biological , Databases, Factual , Semantics , Signal Transduction
5.
Public Health Rep ; 136(5): 543-547, 2021.
Article in English | MEDLINE | ID: mdl-34161176

ABSTRACT

Racial/ethnic minority groups are disproportionately affected by the COVID-19 pandemic. We examined ethnic differences in SARS-CoV-2 testing patterns and positivity rates in a large health care system in Northern California. The study population included patients tested for SARS-CoV-2 from March 4, 2020, through January 12, 2021, at Stanford Health Care. We used adjusted hierarchical logistic regression models to identify factors associated with receiving a positive test result. During the study period, 282,916 SARS-CoV-2 tests were administered to 179,032 unique patients, 32,766 (18.3%) of whom were Hispanic. Hispanic patients were 3 times more likely to receive a positive test result than patients in other racial/ethnic groups (odds ratio = 3.16; 95% CI, 3.00-3.32). The rate of receiving a positive test result for SARS-CoV-2 among Hispanic patients increased from 5.4% in mid-March to 15.7% in mid-July, decreased to 3.9% in mid-October, and increased to 21.2% toward the end of December. Hispanic patients were more likely than non-Hispanic patients to receive a positive test result for SARS-CoV-2, with increasing trends during regional surges. The disproportionate and growing overrepresentation of Hispanic people receiving a positive test result for SARS-CoV-2 demonstrates the need to focus public health prevention efforts on these communities.
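
The adjusted odds ratio reported here comes from regression modeling of test positivity. Below is a hedged, simplified sketch (plain logistic regression on synthetic data, omitting the hierarchical random effects the study used) showing how such an adjusted OR and confidence interval can be obtained; the column names are illustrative only.

    # Adjusted odds ratio for a positive SARS-CoV-2 test by ethnicity,
    # from a (non-hierarchical) logistic regression on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "positive": rng.binomial(1, 0.07, n),
        "hispanic": rng.binomial(1, 0.18, n),
        "age": rng.integers(18, 90, n),
        "female": rng.binomial(1, 0.5, n),
    })

    fit = smf.logit("positive ~ hispanic + age + female", data=df).fit(disp=0)
    or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_ci.columns = ["OR", "2.5%", "97.5%"]
    print(or_ci.loc["hispanic"])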


Subject(s)
COVID-19 Testing/statistics & numerical data , COVID-19/diagnosis , COVID-19/ethnology , Hispanic or Latino/statistics & numerical data , Adult , Aged , California/epidemiology , Electronic Health Records , Female , Humans , Male , Middle Aged , Pandemics , SARS-CoV-2 , Socioeconomic Factors
6.
J Biomed Inform ; 119: 103802, 2021 07.
Article in English | MEDLINE | ID: mdl-33965640

ABSTRACT

BACKGROUND: Unlike well-established diseases in which clinical care is based on randomized trials, past experience, and training, prognosis in COVID-19 relies on a weaker foundation. Knowledge from other respiratory failure diseases may inform clinical decisions in this novel disease. The objective was to predict invasive mechanical ventilation (IMV) within 48 hours in patients hospitalized with COVID-19 using models trained on COVID-like diseases (CLD). METHODS: This retrospective multicenter study trained machine learning (ML) models on patients hospitalized with CLD to predict IMV within 48 h in COVID-19 patients. CLD patients were identified using diagnosis codes for bacterial pneumonia, viral pneumonia, influenza, unspecified pneumonia, and acute respiratory distress syndrome (ARDS), 2008-2019. A total of 16 cohorts were constructed, including any combination of the four diseases plus an exploratory ARDS cohort, to determine the most appropriate cohort to use. Candidate predictors included demographic and clinical parameters previously associated with poor COVID-19 outcomes. Model development included the implementation of logistic regression and three tree-based algorithms: decision tree, AdaBoost, and XGBoost. ML models were trained on CLD patients at Stanford Hospital Alliance (SHA) and validated on hospitalized COVID-19 patients at both SHA and Intermountain Healthcare, March 2020-July 2020. RESULTS: CLD training data were obtained from SHA (n = 14,030), and validation data included 444 adult COVID-19 hospitalized patients from SHA (n = 185) and Intermountain (n = 259). XGBoost was the top-performing ML model, and among the 16 CLD training cohorts, the best model achieved an area under the curve (AUC) of 0.883 in the validation set. In COVID-19 patients, the prediction models exhibited moderate discrimination performance, with the best models achieving an AUC of 0.77 at SHA and 0.65 at Intermountain. The model trained on all pneumonia and influenza cohorts had the best overall performance (SHA: positive predictive value (PPV) 0.29, negative predictive value (NPV) 0.97, positive likelihood ratio (PLR) 10.7; Intermountain: PPV 0.23, NPV 0.97, PLR 10.3). We identified important factors associated with IMV that are not traditionally considered for respiratory diseases. CONCLUSIONS: Prediction models derived from CLD for 48-hour IMV in patients hospitalized with COVID-19 demonstrate high specificity and can be used as a triage tool at the point of care. Novel predictors of IMV identified in COVID-19 are often overlooked in clinical practice. Lessons learned from our approach may assist other research institutions seeking to build artificial intelligence technologies for novel or rare diseases with limited data for training and validation.
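
The study design (train on proxy cohorts, validate externally on the target disease) can be sketched in a few lines. The example below is a hedged illustration on synthetic arrays, not the study's pipeline or features.

    # Train an XGBoost classifier on COVID-like-disease (CLD) encounters and
    # evaluate discrimination (AUC) on a separate COVID-19 cohort.
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X_cld, y_cld = rng.normal(size=(14000, 20)), rng.binomial(1, 0.2, 14000)
    X_covid, y_covid = rng.normal(size=(444, 20)), rng.binomial(1, 0.2, 444)

    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_cld, y_cld)  # trained only on the proxy (CLD) cohort

    auc = roc_auc_score(y_covid, model.predict_proba(X_covid)[:, 1])
    print(f"External COVID-19 validation AUC: {auc:.3f}")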


Subject(s)
COVID-19 , Respiratory Insufficiency , Adult , Artificial Intelligence , Hospitalization , Humans , Respiratory Insufficiency/diagnosis , Respiratory Insufficiency/therapy , Retrospective Studies , SARS-CoV-2 , Triage , Ventilators, Mechanical
7.
J Med Internet Res ; 23(2): e23026, 2021 02 22.
Article in English | MEDLINE | ID: mdl-33534724

ABSTRACT

BACKGROUND: For the clinical care of patients with well-established diseases, randomized trials, literature, and research are supplemented with clinical judgment to understand disease prognosis and inform treatment choices. In the void created by a lack of clinical experience with COVID-19, artificial intelligence (AI) may be an important tool to bolster clinical judgment and decision making. However, a lack of clinical data restricts the design and development of such AI tools, particularly in preparation for an impending crisis or pandemic. OBJECTIVE: This study aimed to develop and test the feasibility of a "patients-like-me" framework to predict the deterioration of patients with COVID-19 using a retrospective cohort of patients with similar respiratory diseases. METHODS: Our framework used COVID-19-like cohorts to design and train AI models that were then validated on the COVID-19 population. The COVID-19-like cohorts included patients diagnosed with bacterial pneumonia, viral pneumonia, unspecified pneumonia, influenza, and acute respiratory distress syndrome (ARDS) at an academic medical center from 2008 to 2019. In total, 15 training cohorts were created using different combinations of the COVID-19-like cohorts with the ARDS cohort for exploratory purposes. In this study, two machine learning models were developed: one to predict invasive mechanical ventilation (IMV) within 48 hours for each hospitalized day, and one to predict all-cause mortality at the time of admission. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, positive predictive value, and negative predictive value. We established model interpretability by calculating SHapley Additive exPlanations (SHAP) scores to identify important features. RESULTS: Compared to the COVID-19-like cohorts (n=16,509), the patients hospitalized with COVID-19 (n=159) were significantly younger, with a higher proportion of patients of Hispanic ethnicity, a lower proportion of patients with a smoking history, and fewer patients with comorbidities (P<.001). Patients with COVID-19 had a lower IMV rate (15.1 versus 23.2, P=.02) and a shorter time to IMV (2.9 versus 4.1 days, P<.001) compared to the COVID-19-like patients. In the COVID-19-like training data, the top models achieved excellent performance (AUROC>0.90). Validating in the COVID-19 cohort, the top-performing model for predicting IMV was the XGBoost model (AUROC=0.826) trained on the viral pneumonia cohort. Similarly, the XGBoost model trained on all 4 COVID-19-like cohorts without ARDS achieved the best performance (AUROC=0.928) in predicting mortality. Important predictors included demographic information (age), vital signs (oxygen saturation), and laboratory values (white blood cell count, cardiac troponin, albumin, etc.). Our data were class-imbalanced, which resulted in high negative predictive values and low positive predictive values. CONCLUSIONS: We provided a feasible framework for modeling patient deterioration using existing data and AI technology to address data limitations during the onset of a novel, rapidly changing pandemic.
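
As a companion to the interpretability step mentioned above (SHAP scores for feature importance), here is a minimal sketch of SHAP attribution for a tree model. Synthetic data; the feature names (age, spo2, wbc, albumin) are assumptions echoing the kinds of predictors listed in the abstract.

    # Global feature importance via mean absolute SHAP values for an XGBoost model.
    import numpy as np
    import pandas as pd
    import shap
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(1000, 4)),
                     columns=["age", "spo2", "wbc", "albumin"])
    y = rng.binomial(1, 0.2, 1000)

    model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    mean_abs = np.abs(shap_values).mean(axis=0)
    print(dict(zip(X.columns, np.round(mean_abs, 3))))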


Subject(s)
COVID-19/diagnosis , COVID-19/mortality , Machine Learning , Pneumonia, Viral/diagnosis , Aged , Area Under Curve , Cohort Studies , Comorbidity , Female , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Pandemics , Pneumonia, Viral/mortality , Predictive Value of Tests , Prognosis , ROC Curve , Respiration, Artificial/statistics & numerical data , Retrospective Studies , SARS-CoV-2 , Treatment Outcome
8.
JAMA Netw Open ; 4(1): e2031730, 2021 01 04.
Article in English | MEDLINE | ID: mdl-33481032

ABSTRACT

Importance: Randomized clinical trials (RCTs) are considered the criterion standard for clinical evidence. Despite their many benefits, RCTs have limitations, such as costliness, that may reduce the generalizability of their findings among diverse populations and routine care settings. Objective: To assess the performance of an RCT-derived prognostic model that predicts survival among patients with metastatic castration-resistant prostate cancer (CRPC) when the model is applied to real-world data from electronic health records (EHRs). Design, Setting, and Participants: The RCT-trained model and patient data from the RCTs were obtained from the Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge for prostate cancer, which occurred from March 16 to July 27, 2015. This challenge included 4 phase 3 clinical trials of patients with metastatic CRPC. Real-world data were obtained from the EHRs of a tertiary care academic medical center that includes a comprehensive cancer center. In this study, the DREAM challenge RCT-trained model was applied to real-world data from January 1, 2008, to December 31, 2019; the model was then retrained using EHR data with optimized feature selection. Patients with metastatic CRPC were divided into RCT and EHR cohorts based on data source. Data were analyzed from March 23, 2018, to October 22, 2020. Exposures: Patients who received treatment for metastatic CRPC. Main Outcomes and Measures: The primary outcome was the performance of an RCT-derived prognostic model that predicts survival among patients with metastatic CRPC when the model is applied to real-world data. Model performance was compared using 10-fold cross-validation according to time-dependent integrated area under the curve (iAUC) statistics. Results: Among 2113 participants with metastatic CRPC, 1600 participants were included in the RCT cohort, and 513 participants were included in the EHR cohort. The RCT cohort comprised a larger proportion of White participants (1390 patients [86.9%] vs 337 patients [65.7%]) and a smaller proportion of Hispanic participants (14 patients [0.9%] vs 42 patients [8.2%]), Asian participants (41 patients [2.6%] vs 88 patients [17.2%]), and participants older than 75 years (388 patients [24.3%] vs 191 patients [37.2%]) compared with the EHR cohort. Participants in the RCT cohort also had fewer comorbidities (mean [SD], 1.6 [1.8] comorbidities vs 2.5 [2.6] comorbidities, respectively) compared with those in the EHR cohort. Of the 101 variables used in the RCT-derived model, 10 were not available in the EHR data set, 3 of which were among the top 10 features in the DREAM challenge RCT model. The best-performing EHR-trained model included only 25 of the 101 variables included in the RCT-trained model. The performance of the RCT-trained and EHR-trained models was adequate in the EHR cohort (mean [SD] iAUC, 0.722 [0.118] and 0.762 [0.106], respectively); model optimization was associated with improved performance of the best-performing EHR model (mean [SD] iAUC, 0.792 [0.097]). The EHR-trained model classified 256 patients as having a high risk of mortality and 256 patients as having a low risk of mortality (hazard ratio, 2.7; 95% CI, 2.0-3.7; log-rank P < .001). Conclusions and Relevance: In this study, although the RCT-trained models did not perform well when applied to real-world EHR data, retraining the models using real-world EHR data and optimizing variable selection was beneficial for model performance. As clinical evidence evolves to include more real-world data, both industry and academia will likely search for ways to balance model optimization with generalizability. This study provides a pragmatic approach to applying RCT-trained models to real-world data.
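
The retraining-and-stratification workflow summarized above (fit a survival model on EHR-derived features, then split patients by predicted risk and compare survival) can be illustrated with a short, hedged sketch on synthetic data using a Cox proportional hazards model and a log-rank test; the feature names are placeholders, and the study's actual models and iAUC evaluation are more involved.

    # Cox model on synthetic EHR-style features, with median-risk stratification.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(7)
    n = 500
    df = pd.DataFrame({
        "age": rng.integers(50, 90, n),
        "comorbidities": rng.poisson(2.5, n),
        "psa": rng.lognormal(2, 1, n),
        "time": rng.exponential(24, n),    # months of follow-up
        "event": rng.binomial(1, 0.6, n),  # death observed
    })

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df).squeeze()
    high = risk > risk.median()

    result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                          df.loc[high, "event"], df.loc[~high, "event"])
    print(cph.hazard_ratios_)
    print("log-rank p =", result.p_value)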


Subject(s)
Decision Making, Computer-Assisted , Models, Statistical , Prostatic Neoplasms, Castration-Resistant/mortality , Adolescent , Adult , Aged , Electronic Health Records , Humans , Machine Learning , Male , Middle Aged , Prognosis , Prostatic Neoplasms, Castration-Resistant/diagnosis , Prostatic Neoplasms, Castration-Resistant/epidemiology , Randomized Controlled Trials as Topic , Survival Analysis , Young Adult
9.
Brachytherapy ; 20(1): 50-57, 2021.
Article in English | MEDLINE | ID: mdl-32891570

ABSTRACT

PURPOSE: Brachytherapy (BrT) is a standard treatment for low-risk to favorable-intermediate-risk prostate cancer but is relatively contraindicated in patients with obstructive symptoms. We aimed to assess the feasibility and urinary toxicity of a minimal photovaporization (mPVP) before implantation. MATERIALS AND METHODS: Between 04/2009 and 08/2016, 50 patients who were candidates for BrT but had an International Prostate Symptom Score (IPSS) > 15, uroflowmetry < 15 mL/s, an obstructive prostate, or a large median lobe underwent an mPVP (GreenLight laser) at least 6 weeks (median 8.5) before permanent seed implantation (loose seeds, ¹²⁵I, 160 Gy). RESULTS: Two patients (4%) did not have sufficient improvement and did not undergo BrT, although it would have been possible at 3 months. For the 48 (96%) other patients, mean IPSS at baseline was 15.5 (±5.3) vs. 8.6 (±4.4) after mPVP (p = 1 × 10⁻⁶), and uroflowmetry 11.7 mL/s (±4) vs. 17.4 (±5.4) (p = 1.4 × 10⁻⁵). We did not encounter any difficulty during BrT. Mean IPSS did not significantly increase 1, 3, or 6 months after BrT. With a median followup of 60 months [30-120] (92% assessed at last followup), only 4 patients (4/48 = 8.3%) experienced urinary retention and 5 (10.4%) needed surgery for urinary toxicity. In addition, only 2 patients (4%) needed medical treatment at last followup. Of the 8 patients with de novo incontinence at 1 year, only 2 (4%) had persistent mild symptoms (ICS 1-2) at last followup (36 months). CONCLUSIONS: These results suggest that a two-step approach with an mPVP at least 6 weeks before BrT is feasible, with no excessive urinary toxicity, and may be a good strategy for obstructive patients seeking BrT.


Subject(s)
Brachytherapy , Prostatic Neoplasms , Urinary Incontinence , Brachytherapy/methods , Humans , Male , Prostate-Specific Antigen , Prostatic Neoplasms/radiotherapy
10.
Learn Health Syst ; 4(4): e10237, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33083539

ABSTRACT

INTRODUCTION: A learning health system (LHS) must improve care in ways that are meaningful to patients, integrating patient-centered outcomes (PCOs) into core infrastructure. PCOs are common following cancer treatment, such as urinary incontinence (UI) following prostatectomy. However, PCOs are not systematically recorded because they can only be described by the patient, are subjective, and are captured as unstructured text in the electronic health record (EHR). Therefore, PCOs pose significant challenges for phenotyping patients. Here, we present a natural language processing (NLP) approach for phenotyping patients with UI to classify their disease into severity subtypes, which can increase opportunities to provide precision-based therapy and promote a value-based delivery system. METHODS: Patients undergoing prostate cancer treatment from 2008 to 2018 were identified at an academic medical center. Using a hybrid NLP pipeline that combines rule-based and deep learning methodologies, we classified positive UI cases as mild, moderate, or severe by mining clinical notes. RESULTS: The rule-based model accurately classified UI into disease severity categories (accuracy: 0.86), which outperformed the deep learning model (accuracy: 0.73). In the deep learning model, the recall rates for the mild and moderate groups were higher than the precision rates (0.78 and 0.79, respectively). A hybrid model that combined both methods did not improve the accuracy of the rule-based model but did outperform the deep learning model (accuracy: 0.75). CONCLUSION: Phenotyping patients based on the indication and severity of PCOs is essential to advance a patient-centered LHS. EHRs contain valuable information on PCOs, and with NLP methods it is feasible to accurately and efficiently phenotype PCO severity. Phenotyping must extend beyond the identification of disease to provide classification of disease severity that can be used to guide treatment and inform shared decision-making. Our methods demonstrate a path to a patient-centered LHS that could advance precision medicine.
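
To give a flavor of the rule-based arm of such a hybrid pipeline, the sketch below grades UI severity from note text with a few keyword rules, leaving unmatched notes for a learned model. The keyword lists are illustrative assumptions, not the study's lexicon, and the real pipeline is considerably richer.

    # Simplified rule-based pass for urinary incontinence (UI) severity.
    import re

    SEVERITY_RULES = [
        ("severe",   r"\b(severe|total|complete)\s+(urinary\s+)?incontinence|\b\d+\s*pads?\s+per\s+day"),
        ("moderate", r"\bmoderate\s+(urinary\s+)?incontinence|occasional\s+pad"),
        ("mild",     r"\bmild\s+(urinary\s+)?incontinence|rare\s+leak(age)?"),
    ]

    def classify_ui_severity(note: str) -> str:
        text = note.lower()
        for label, pattern in SEVERITY_RULES:
            if re.search(pattern, text):
                return label
        return "unclassified"  # left for the deep learning component

    print(classify_ui_severity("Patient reports mild incontinence with rare leakage."))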

11.
Rev Sci Instrum ; 91(8): 085114, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32872921

ABSTRACT

We have developed a new internally heated diamond anvil cell (DAC) system for in situ high-pressure and high-temperature x-ray and optical experiments. We have adopted a self-heating W/Re gasket design allowing for both sample confinement and heating. This solution has seldom been used in the past but proved very effective at reducing the size of the heated spot near the sample region, improving heating and cooling rates compared to other resistive heating strategies. The system has been extensively tested under high-temperature conditions by performing several thermal emission measurements. A robust relationship between electric power and average sample temperature inside the DAC has been established up to about 1500 K by a measurement campaign on different simple substances. A micro-Raman spectrometer was used for various in situ optical measurements and allowed us to map the temperature distribution of the sample. The distribution was found to be uniform within the typical uncertainty of these measurements (5% at 1000 K). The high-temperature performance of the DAC was also verified in a series of XAS (x-ray absorption spectroscopy) experiments using both nano-polycrystalline and single-crystal diamond anvils. XAS measurements of germanium at 3.5 GPa were obtained in the 300 K-1300 K range, studying the melting transition and nucleation to the crystal phase. The achievable heating and cooling rates of the DAC were studied using a dispersive XAS setup, collecting series of near-edge XAS spectra with sub-second time resolution. An original XAS-based dynamical temperature calibration procedure was developed and used to monitor the sample and diamond temperatures during the application of constant power cycles, indicating that heating and cooling rates in the 100 K/s range can be easily achieved using this device.
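
The power-to-temperature relationship mentioned above is, in practice, a calibration curve fitted to measured points. The sketch below fits such a curve with a simple quadratic; the data points and the quadratic form are invented for illustration, and only the fitting procedure is the point.

    # Illustrative electric-power vs. sample-temperature calibration fit.
    import numpy as np

    power_w = np.array([5, 10, 15, 20, 25, 30])           # applied power (W)
    temp_k = np.array([450, 700, 920, 1120, 1310, 1490])  # measured temperature (K)

    calibration = np.poly1d(np.polyfit(power_w, temp_k, deg=2))
    print(calibration(18))  # predicted sample temperature (K) at 18 W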

12.
Cancer Med ; 9(22): 8552-8561, 2020 11.
Article in English | MEDLINE | ID: mdl-32986931

ABSTRACT

PURPOSE: Prior studies suggest email communication between patients and providers may improve patient engagement and health outcomes. The purpose of this study was to determine whether patient-initiated emails are associated with overall survival benefits among cancer patients undergoing chemotherapy. PATIENTS AND METHODS: We identified patient-initiated emails through the patient portal in electronic health records (EHR) among 9900 cancer patients receiving chemotherapy between 2013 and 2018. Email users were defined as patients who sent at least one email from 12 months before to 2 months after the start of chemotherapy. A propensity score-matched cohort analysis was carried out to reduce bias due to confounding (age, primary cancer type, gender, insurance payor, ethnicity, race, stage, income, Charlson score, county of residence). The cohort included 3223 email users and 3223 non-email users. The primary outcome was overall 2-year survival stratified by email use. Secondary outcomes included the number of face-to-face visits, prescriptions, and telephone calls. The healthcare teams' responses to emails and other forms of communication were also investigated. Finally, a quality measure related to chemotherapy-related inpatient and emergency department visits was evaluated. RESULTS: Overall 2-year survival was higher in email users, with an adjusted hazard ratio of 0.80 (95% CI 0.72-0.90; p < 0.001). Email users had higher rates of healthcare utilization, including face-to-face visits (63 vs. 50; p < 0.001), drug prescriptions (28 vs. 21; p < 0.001), and phone calls (18 vs. 16; p < 0.001). The clinical quality outcome measure of inpatient use was better among email users (p = 0.015). CONCLUSION: Patient-initiated emails are associated with a survival benefit among cancer patients receiving chemotherapy and may be a proxy for patient engagement. As value-based payment models emphasize incorporating the patients' voice into their care, email communications could serve as a novel source of patient-generated data.
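
The propensity-matched design described here can be sketched as 1:1 nearest-neighbor matching on the logit of an estimated propensity score. The example below is a hedged illustration on synthetic data with placeholder covariates; the study's matching specification may differ.

    # 1:1 nearest-neighbor propensity score matching (illustrative).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(3)
    n = 2000
    df = pd.DataFrame({
        "email_user": rng.binomial(1, 0.4, n),
        "age": rng.integers(20, 90, n),
        "charlson": rng.poisson(2, n),
        "stage": rng.integers(1, 5, n),
    })

    X = df[["age", "charlson", "stage"]]
    ps = LogisticRegression(max_iter=1000).fit(X, df["email_user"]).predict_proba(X)[:, 1]
    df["logit_ps"] = np.log(ps / (1 - ps))

    treated = df[df["email_user"] == 1]
    control = df[df["email_user"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["logit_ps"]])
    _, idx = nn.kneighbors(treated[["logit_ps"]])
    matched_controls = control.iloc[idx.ravel()]
    print(len(treated), "treated matched to", len(matched_controls), "controls")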


Subject(s)
Antineoplastic Agents/therapeutic use , Electronic Mail , Neoplasms/drug therapy , Patient Participation , Adult , Aged , Electronic Health Records , Female , Humans , Male , Middle Aged , Neoplasms/mortality , Office Visits , Patient Acceptance of Health Care , Telemedicine , Telephone , Time Factors , Treatment Outcome
13.
J Contemp Brachytherapy ; 11(3): 195-200, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31435425

ABSTRACT

PURPOSE: Prostate brachytherapy (BT) is a validated treatment for localized prostate cancer (CaP) and an attractive therapy option for patients seeking to preserve erectile function (EF). The aim of this paper is to prospectively assess EF evolution during the 4 years after BT. MATERIAL AND METHODS: Between February 2007 and July 2012, 179 patients underwent exclusive iodine-125 BT for low-risk or favorable intermediate-risk CaP, of whom 102 had an initial International Index of Erectile Function-5 (IIEF-5) score > 16 and were included in the study. Of those, 12.7% received neoadjuvant hormone therapy (HT) to decrease the prostate volume. Post-BT intake of phosphodiesterase type 5 inhibitors (PDE5i) was not an exclusion criterion. Erectile function was prospectively assessed using the validated IIEF-5 questionnaire before treatment and annually for 4 years. RESULTS: At 1-year follow-up, 54% of patients preserved an IIEF-5 > 16 and only 8% suffered from severe erectile dysfunction (ED). During the next 3 years, the results were not statistically different. The mean IIEF-5 score decreased by 4 points during the first year (17 vs. 21) and remained stable during the following 3 years. We did not find any significant differences in the proportion of patients treated with PDE5i (18-20%). As for patients with a normal preoperative IIEF-5 (> 21) (n = 52), 35-42% preserved a normal EF and 71-77% maintained an IIEF-5 > 16, including 13-19% of patients who needed PDE5i. Those results remained stable over the 4 years. CONCLUSIONS: During the first 4 years after BT, more than half of patients maintained an IIEF-5 > 16, and EF results remained stable. Severe ED was very rare.

14.
J Biomed Inform ; 94: 103184, 2019 06.
Article in English | MEDLINE | ID: mdl-31014980

ABSTRACT

OBJECTIVE: Clinical care guidelines recommend that newly diagnosed prostate cancer patients at high risk for metastatic spread receive a bone scan prior to treatment and that low-risk patients not receive it. The objective was to develop an automated pipeline to interrogate heterogeneous data to evaluate the use of bone scans using two different natural language processing (NLP) approaches. MATERIALS AND METHODS: Our cohort was divided into risk groups based on electronic health record (EHR) data. Information on bone scan utilization was identified in both structured data and free text from clinical notes. Our pipeline annotated sentences with a combination of a rule-based method using the ConText algorithm (a generalization of NegEx) and a convolutional neural network (CNN) method using word2vec to produce word embeddings. RESULTS: A total of 5,500 patients and 369,764 notes were included in the study. A total of 39% of patients were high risk, and 73% of these received a bone scan; of the 18% of patients who were low risk, 10% received one. The CNN model outperformed the rule-based model (F-measure = 0.918 vs. 0.897, respectively). We demonstrate that a combination of both models can maximize either precision or recall, depending on the study question. CONCLUSION: Using structured data, we accurately classified patients' cancer risk group, identified bone scan documentation with two NLP methods, and evaluated guideline adherence. Our pipeline can be used to provide concrete feedback to clinicians and guide treatment decisions.
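
The ConText/NegEx component decides whether a bone scan mention is affirmed or negated by looking for trigger terms near the mention. The toy sketch below captures that idea only; the trigger list and window are invented stand-ins, not the published algorithm's lexicon or rules.

    # Toy ConText/NegEx-style check for negated bone scan mentions.
    NEGATION_TRIGGERS = ["no", "denies", "without", "not indicated", "declined"]

    def bone_scan_status(sentence: str, window: int = 6) -> str:
        tokens = sentence.lower().replace(",", " ").split()
        try:
            i = next(k for k, t in enumerate(tokens) if t.startswith("scan"))
        except StopIteration:
            return "no_mention"
        left_context = " ".join(tokens[max(0, i - window):i])
        if any(trigger in left_context for trigger in NEGATION_TRIGGERS):
            return "negated"
        return "affirmed"

    print(bone_scan_status("Patient declined bone scan prior to therapy."))  # negated
    print(bone_scan_status("Bone scan ordered for staging."))                # affirmed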


Subject(s)
Bone Neoplasms/secondary , Natural Language Processing , Phenotype , Prostatic Neoplasms/diagnostic imaging , Bone Neoplasms/diagnostic imaging , Electronic Health Records , Guideline Adherence , Humans , Male , Prostatic Neoplasms/pathology , Risk Factors
15.
Brachytherapy ; 17(5): 782-787, 2018.
Article in English | MEDLINE | ID: mdl-29936129

ABSTRACT

PURPOSE: "Quadrella" index has been recently developed to assess oncological and functional outcomes after prostate brachytherapy (PB). We aimed to evaluate this index at 1, 2, and 3 years, using validated questionnaires, assessed prospectively. METHODS AND MATERIALS: From 08/2007 to 01/2013, 193 patients underwent 125Iodine PB for low-risk or favorable intermediate-risk prostate adenocarcinoma. Inclusion criteria were as follows: no incontinence (International Continence Society Index initial score = 0) and good erectile function (International Index of Erectile Function-5 items: >16). One hundred patients were included (mean age: 64 y). Postimplantation intake of phosphodiesterase inhibitors was not considered as failure. The "Quadrella" index was defined by the absence of biochemical recurrence (Phoenix criteria), significant erectile dysfunction (ED) (Index of Erectile Function-5 items: >16), urinary toxicity (UT) (International Prostate Score Symptom [IPSS] <15 or IPSS> 15 with ΔIPSS <5), and rectal toxicity (RT) (Radiation Therapy Oncology Group = 0). RESULTS: At 12 months, 90 patients were evaluable: 42/90 (46.7%) achieved Quadrella. The main criteria for failure were as follows: ED in 77.1% (37/48) of cases, RT in 20.8% (10/48) of cases, and UT in 12.5% (9/57) of cases. At 24 and 36 months, 59.3% (48/81) and 61.1% (44/72) of patients achieved Quadrella, respectively. The main cause of failure was ED in 69.7% (23/33) and 85.7% (24/28) of cases, while RT was involved in 21.2% (7/33) and in 3.6% (1/28) of cases, and UT in 9.1% (3/33) and 3.6% (1/28) of cases. Only one case of biochemical recurrence was observed (i.e., 1/28 = 3.6% at 3 y). CONCLUSIONS: The Quadrella can be used at 1, 2, and 3 years after PB. It allows to take into account the urinary and RT specific to PB. ED was the main cause of failure. This index will be useful to assess midterm and long-term results.


Subject(s)
Adenocarcinoma/radiotherapy , Brachytherapy/methods , Erectile Dysfunction/physiopathology , Penile Erection/physiology , Prostatic Neoplasms/radiotherapy , Adenocarcinoma/complications , Adenocarcinoma/diagnosis , Aged , Erectile Dysfunction/blood , Erectile Dysfunction/etiology , Follow-Up Studies , Humans , Male , Middle Aged , Neoplasm Grading , Penile Erection/radiation effects , Prostate-Specific Antigen/blood , Prostatic Neoplasms/complications , Prostatic Neoplasms/diagnosis , Rectum , Surveys and Questionnaires , Time Factors
16.
Scand J Urol ; 52(3): 174-179, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29463177

ABSTRACT

OBJECTIVE: Compared with standard systematic transrectal ultrasound (TRUS)-guided biopsies (SBx), targeted biopsies (TBx) using magnetic resonance imaging (MRI)/TRUS fusion could increase the detection of clinically significant prostate cancer (PCa-s) and reduce the detection of non-significant PCa (PCa-ns). This study aimed to compare the performance of the two approaches. MATERIALS AND METHODS: A prospective, single-center study was conducted on all consecutive patients with suspicion of PCa who underwent prebiopsy multiparametric MRI (mpMRI) using the Prostate Imaging Reporting and Data System (PI-RADS). All patients underwent mpMRI/TRUS fusion TBx (two to four cores/target) using UroStation™ (Koelis, Grenoble, France) and SBx (10-12 cores) during the same session. PCa-s was defined as a maximal positive core length ≥4 mm or Gleason score ≥7. RESULTS: The study included 191 patients (all with at least one suspicious lesion, PI-RADS ≥3). PCa was detected in 55.5% (106/191) of the cases. The overall PCa detection rate and the PCa-s detection rate did not differ significantly between TBx alone and SBx (44.5% vs 46.1%, p = .7, and 38.2% vs 33.5%, p = .2, respectively). Combined TBx and SBx diagnosed significantly more PCa-s than SBx alone (45% vs 33.5%, p = .02). PCa-s was detected only by TBx in 12% of cases (23/191) and only by SBx in 7.3% (14/191). The Gleason score was upgraded by TBx in 16.8% (32/191) and by SBx in 13.6% (26/191) of patients (p = .4). CONCLUSIONS: The combination of TBx and SBx achieved the best results for the detection and prognosis of PCa-s. The use of SBx alone would have missed PCa-s in 12% of patients.
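
Because TBx and SBx were performed in the same patients, their detection results are paired. The abstract does not state which statistical test was used; as one standard option, McNemar's test on the discordant pairs is sketched below, using the discordant counts reported above (23 cancers detected only by TBx, 14 only by SBx). The concordant cells are placeholders chosen only to sum to 191 patients and do not affect the test.

    # McNemar's test on paired PCa-s detection (TBx rows, SBx columns).
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    table = np.array([[50, 23],    # detected by both / only by TBx
                      [14, 104]])  # only by SBx / by neither
    result = mcnemar(table, exact=True)
    print(result.statistic, result.pvalue)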


Subject(s)
Biopsy, Large-Core Needle/methods , Endosonography , Magnetic Resonance Imaging , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Aged, 80 and over , False Negative Reactions , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Multimodal Imaging , Neoplasm Grading , Prospective Studies