Results 1 - 17 of 17
2.
Value Health ; 25(3): 359-367, 2022 03.
Article in English | MEDLINE | ID: mdl-35227446

ABSTRACT

OBJECTIVES: The machine learning prediction model Pacmed Critical (PC), currently under development, may guide intensivists in their decision-making process on the most appropriate time to discharge a patient from the intensive care unit (ICU). Given the financial pressure on healthcare budgets, this study assessed, from a societal perspective, whether PC has the potential to be cost-effective for Dutch patients in the ICU compared with standard care without the use of PC. METHODS: A 1-year, 7-state Markov model reflecting the ICU care pathway and incorporating the PC decision tool was developed. A hypothetical cohort of 1000 adult Dutch patients admitted to the ICU was entered into the model. We used the literature, expert opinion, and data from Amsterdam University Medical Center for model parameters. The uncertainty surrounding the incremental cost-effectiveness ratio was assessed using deterministic and probabilistic sensitivity analyses and scenario analyses. RESULTS: PC was a cost-effective strategy with an incremental cost-effectiveness ratio of €18 507 per quality-adjusted life-year. PC remained cost-effective over standard care in multiple scenarios and sensitivity analyses. The likelihood that PC will be cost-effective was 71% at a willingness-to-pay threshold of €30 000 per quality-adjusted life-year. The key driver of the results was the parameter "reduction in ICU length of stay." CONCLUSIONS: We showed that PC has the potential to be cost-effective for Dutch ICUs over a time horizon of 1 year. This study is one of the first cost-effectiveness analyses of a machine learning device. Further research is needed to validate the effectiveness of PC, focusing on the key parameter "reduction in ICU length of stay" and on potential spillover effects.
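To illustrate how an incremental cost-effectiveness ratio (ICER) emerges from a Markov cohort model of this kind, the following is a minimal Python sketch of a deliberately simplified two-strategy cohort simulation. All transition probabilities, costs, utilities, and the tool cost are hypothetical placeholders, not the parameters of the published 7-state Pacmed Critical model.

```python
import numpy as np

# Minimal 3-state Markov cohort sketch (states: ICU, discharged alive, dead).
# All numbers below are illustrative placeholders, NOT the published model parameters.
def run_cohort(p_icu_to_discharge, tool_cost_per_patient=0.0, cycles=12, cohort=1000):
    P = np.array([
        [1 - p_icu_to_discharge - 0.02, p_icu_to_discharge, 0.02],  # from ICU
        [0.01, 0.97, 0.02],                                         # from discharged
        [0.00, 0.00, 1.00],                                         # dead is absorbing
    ])
    cost_per_cycle = np.array([45_000.0, 1_500.0, 0.0])   # EUR per state per monthly cycle
    qaly_per_cycle = np.array([0.30, 0.75, 0.0]) / 12     # utility accrued per monthly cycle

    state = np.array([float(cohort), 0.0, 0.0])
    total_cost, total_qaly = tool_cost_per_patient * cohort, 0.0
    for _ in range(cycles):
        total_cost += state @ cost_per_cycle
        total_qaly += state @ qaly_per_cycle
        state = state @ P
    return total_cost, total_qaly

cost_std, qaly_std = run_cohort(p_icu_to_discharge=0.50)                              # standard care
cost_new, qaly_new = run_cohort(p_icu_to_discharge=0.60, tool_cost_per_patient=500)   # with decision tool

d_cost, d_qaly = cost_new - cost_std, qaly_new - qaly_std
# A positive ratio is the ICER in EUR per QALY; a negative ratio means the new strategy
# dominates (cheaper and more effective), so no trade-off price per QALY is paid.
print(f"Incremental cost: EUR {d_cost:,.0f}, incremental QALYs: {d_qaly:,.1f}, ratio: {d_cost / d_qaly:,.0f}")
```

In the published analysis the comparison is richer (seven states, probabilistic sensitivity analysis, scenario analyses), but the mechanics are the same: run each strategy's cohort forward, accumulate costs and QALYs, and divide the incremental cost by the incremental QALYs.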


Subject(s)
Intensive Care Units/organization & administration , Machine Learning/economics , Patient Discharge/statistics & numerical data , Cost-Benefit Analysis , Decision Making , Humans , Intensive Care Units/economics , Markov Chains , Models, Economic , Netherlands , Patient Readmission/economics , Quality-Adjusted Life Years
3.
PLoS One ; 16(9): e0257086, 2021.
Article in English | MEDLINE | ID: mdl-34516562

ABSTRACT

Patent valuation is required to revitalize patent transactions, but calculating a reasonable value that both consumers and suppliers can accept is difficult. When machine learning is used, a quantitative evaluation based on a large volume of data is possible, and evaluation can be conducted quickly and inexpensively, contributing to the revitalization of patent transactions. However, owing to the characteristics of patents, securing the necessary training data is challenging because most patents are traded privately to prevent leaks of technical information. In this study, the marketable value of a patent derived through an event study is used for patent value evaluation and is matched with the semantic information of the patent calculated using latent Dirichlet allocation (LDA)-based topic modeling. In addition, an ensemble learning methodology that combines the predicted values of multiple predictive models was used to improve prediction stability. The base learners with the highest predictive power differed across folds, but the ensemble model trained on the base learners' predicted values exceeded the predictive power of the individual models. A Wilcoxon rank-sum test indicated that the ensemble model's advantage in accuracy was statistically significant at the 95% confidence level.
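The pipeline described (LDA topic features feeding a stacked ensemble of heterogeneous regressors) can be sketched with scikit-learn. The toy corpus and log-normal "value" targets below are fabricated stand-ins; a real run would use full patent texts matched to event-study valuations.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Toy patent-abstract corpus and illustrative market-value targets (placeholders).
patent_texts = [
    "battery electrode coating improves charge cycles",
    "wireless charging coil alignment for electric vehicles",
    "image sensor pixel layout reduces readout noise",
    "convolutional network accelerator for edge devices",
] * 25
values = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.5, size=len(patent_texts))

# Step 1: LDA topic proportions as semantic features of each patent.
counts = CountVectorizer().fit_transform(patent_texts)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

# Step 2: stacking ensemble over heterogeneous base learners; the meta-learner is
# trained on the base learners' out-of-fold predictions.
ensemble = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("svr", SVR()),
                ("ridge", Ridge())],
    final_estimator=Ridge(),
)
ensemble.fit(topics, values)
print("In-sample R^2 of stacked model:", round(ensemble.score(topics, values), 3))
```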


Subject(s)
Electricity , Machine Learning/economics , Marketing/economics , Patents as Topic , Algorithms , Data Mining , Humans , Neural Networks, Computer , Regression Analysis , United States
4.
J Neurotrauma ; 38(7): 928-939, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33054545

ABSTRACT

Traumatic brain injury (TBI) disproportionately affects low- and middle-income countries (LMICs). In these low-resource settings, effective triage of patients with TBI, including the decision of whether or not to perform neurosurgery, is critical in optimizing patient outcomes and healthcare resource utilization. Machine learning may allow for effective prediction of patient outcomes both with and without surgery. Data from patients with TBI were collected prospectively at Mulago National Referral Hospital in Kampala, Uganda, from 2016 to 2019. One linear and six non-linear machine learning models were designed to predict good versus poor outcome near hospital discharge and were internally validated using nested five-fold cross-validation. The 13 predictors included clinical variables easily acquired on admission and whether or not the patient received surgery. Using an elastic-net regularized logistic regression model (GLMnet), with predictions calibrated using Platt scaling, the probability of poor outcome was calculated for each patient both with and without surgery (the difference quantifying the "individual treatment effect," ITE). Relative ITE represents the percentage reduction in the chance of a poor outcome, computed as the ITE divided by the probability of poor outcome without surgery. Ultimately, 1766 patients were included. Areas under the receiver operating characteristic curve (AUROCs) ranged from 83.1% (single C5.0 ruleset) to 88.5% (random forest), with the GLMnet at 87.5%. The two variables promoting good outcomes in the GLMnet model were a high Glasgow Coma Scale score and receiving surgery. For the subgroup not receiving surgery, the median relative ITE was 42.9% (interquartile range [IQR], 32.7% to 53.5%); similarly, in those receiving surgery, it was 43.2% (IQR, 32.9% to 54.3%). We provide the first machine learning-based model to predict TBI outcomes with and without surgery in LMICs, thus enabling more effective surgical decision making in resource-limited settings. The similarity of predicted ITEs between the surgical and non-surgical groups suggests that, currently, patients are not being chosen optimally for neurosurgical intervention. Our clinical decision aid has the potential to improve outcomes.
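The ITE construction, scoring each patient twice with the treatment indicator flipped, is straightforward to express in scikit-learn. The sketch below uses an elastic-net logistic regression with sigmoid (Platt) calibration on synthetic features and labels; the feature set, labels, and hyperparameters are placeholders, not the study's data or tuned model.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical admission features; the last column is the surgery indicator (0/1).
X = np.column_stack([rng.normal(size=(n, 3)), rng.integers(0, 2, size=n)])
y = rng.integers(0, 2, size=n)  # 1 = poor outcome (placeholder labels)

# Elastic-net regularized logistic regression, Platt-scaled via sigmoid calibration.
base = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000)
model = CalibratedClassifierCV(base, method="sigmoid", cv=5).fit(X, y)

# Individual treatment effect: predicted risk without surgery minus risk with surgery.
X_no_surg, X_surg = X.copy(), X.copy()
X_no_surg[:, -1], X_surg[:, -1] = 0, 1
p_no = model.predict_proba(X_no_surg)[:, 1]
p_yes = model.predict_proba(X_surg)[:, 1]
ite = p_no - p_yes
relative_ite = ite / p_no  # fractional reduction in the chance of a poor outcome
print("Median relative ITE:", round(float(np.median(relative_ite)), 3))
```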


Subject(s)
Brain Injuries, Traumatic/economics , Brain Injuries, Traumatic/surgery , Health Resources/economics , Machine Learning/economics , Neurosurgical Procedures/economics , Adolescent , Adult , Brain Injuries, Traumatic/epidemiology , Child , Female , Glasgow Coma Scale/economics , Glasgow Coma Scale/trends , Health Resources/trends , Humans , Machine Learning/trends , Male , Middle Aged , Neurosurgical Procedures/trends , Predictive Value of Tests , Treatment Outcome , Uganda/epidemiology , Young Adult
5.
Sci Rep ; 10(1): 16581, 2020 10 06.
Article in English | MEDLINE | ID: mdl-33024236

ABSTRACT

Reducing hurdles to clinical trials without compromising the therapeutic promise of peptide candidates is an essential step in peptide-based drug design. Machine-learning models are cost-effective and time-saving strategies used to predict biological activities from primary sequences. Their limitations lie in the diversity of peptide sequences and the biological information contained within these models. Additional outlier detection methods are needed to set the boundaries for reliable predictions, that is, the applicability domain. Antimicrobial peptides (AMPs) constitute an extensive library of peptides offering promising avenues against antibiotic-resistant infections. Most AMPs in clinical trials are administered topically because of their hemolytic toxicity. Here, we developed machine learning models and outlier detection methods that ensure robust predictions for the discovery of AMPs and the design of novel peptides with reduced hemolytic activity. Our best models, gradient boosting classifiers, predicted the hemolytic nature of any peptide sequence with 95-97% accuracy. Nearly 70% of AMPs were predicted to be hemolytic peptides. Applying multivariate outlier detection models, we found that 273 AMPs (~9%) could not be predicted reliably. Our combined approach led to the discovery of 34 high-confidence non-hemolytic natural AMPs, the de novo design of 507 non-hemolytic peptides, and guidelines for non-hemolytic peptide design.
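The combination of a sequence-based classifier with an applicability-domain check can be sketched as below. Amino-acid composition is used as a deliberately simple stand-in for the study's descriptors, the labels are random placeholders, and IsolationForest stands in for the (unspecified here) multivariate outlier detection models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, IsolationForest

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Amino-acid composition: a simple stand-in for real sequence descriptors."""
    return np.array([seq.count(a) / len(seq) for a in AA])

# Toy training set; real work would use curated hemolytic / non-hemolytic peptides.
rng = np.random.default_rng(0)
train_seqs = ["".join(rng.choice(list(AA), size=20)) for _ in range(200)]
X = np.array([composition(s) for s in train_seqs])
y = rng.integers(0, 2, size=len(train_seqs))  # 1 = hemolytic (placeholder labels)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
# Applicability domain: flag query peptides that lie far from the training distribution,
# so their predictions are reported as unreliable rather than trusted blindly.
domain = IsolationForest(contamination=0.05, random_state=0).fit(X)

query = composition("GIGKFLHSAKKFGKAFVGEIMNS")  # magainin-like example sequence
inside_domain = domain.predict(query.reshape(1, -1))[0] == 1
p_hemolytic = clf.predict_proba(query.reshape(1, -1))[0, 1]
print(f"Within applicability domain: {inside_domain}, predicted hemolytic probability: {p_hemolytic:.2f}")
```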


Subject(s)
Drug Design , Machine Learning , Pore Forming Cytotoxic Proteins/chemistry , Amino Acid Sequence , Cost-Benefit Analysis , Hemolysis/drug effects , Machine Learning/economics , Pore Forming Cytotoxic Proteins/toxicity
6.
BMC Public Health ; 20(1): 608, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32357871

ABSTRACT

BACKGROUND: Risk adjustment models are employed to prevent adverse selection, anticipate budgetary reserve needs, and offer care management services to high-risk individuals. We aimed to address two unknowns about risk adjustment: whether machine learning (ML) and whether the inclusion of social determinants of health (SDH) indicators improve prospective risk adjustment for health plan payments. METHODS: We employed a 2-by-2 factorial design comparing (i) linear regression versus ML (gradient boosting) and (ii) demographics and diagnostic codes alone versus these plus ZIP code-level SDH indicators. Healthcare claims from privately insured US adults (2016-2017) and Census data were used for analysis. Data from 1.02 million adults were used for derivation, and data from 0.26 million adults were used to assess performance. Model performance was measured using the coefficient of determination (R2), discrimination (C-statistic), and mean absolute error (MAE) for the overall population, and the predictive ratio and net compensation for vulnerable subgroups. We provide 95% confidence intervals (CI) around each performance measure. RESULTS: Linear regression without SDH indicators achieved moderate determination (R2 0.327; 95% CI: 0.300, 0.353), error ($6992; 95% CI: $6889, $7094), and discrimination (C-statistic 0.703; 95% CI: 0.701, 0.705). ML without SDH indicators improved all metrics (R2 0.388; 95% CI: 0.357, 0.420; error $6637; 95% CI: $6539, $6735; C-statistic 0.717; 95% CI: 0.715, 0.718), reducing misestimation of cost by $3.5M per 10,000 members. Among people living in areas with high poverty, high wealth inequality, or a high prevalence of the uninsured, SDH indicators reduced underestimation of cost, improving the predictive ratio by 3% (~$200/person/year). CONCLUSIONS: ML improved risk adjustment models, and the incorporation of SDH indicators reduced underpayment in several vulnerable populations.
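The 2-by-2 factorial comparison (model family crossed with feature set) reduces to a small loop over estimators and feature matrices. The sketch below runs it on synthetic claims-like data with scikit-learn; all variables, cost formula, and coefficients are fabricated for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(18, 65, size=n)
n_diagnoses = rng.poisson(2, size=n)
zip_poverty_rate = rng.uniform(0, 0.4, size=n)  # ZIP-level SDH indicator (synthetic)
cost = 2000 + 150 * n_diagnoses + 40 * age + 8000 * zip_poverty_rate + rng.gamma(2, 1500, size=n)

def evaluate(features, model):
    X_tr, X_te, y_tr, y_te = train_test_split(features, cost, random_state=0)
    pred = model.fit(X_tr, y_tr).predict(X_te)
    return r2_score(y_te, pred), mean_absolute_error(y_te, pred)

base = np.column_stack([age, n_diagnoses])                       # demographics + diagnoses
with_sdh = np.column_stack([age, n_diagnoses, zip_poverty_rate]) # plus SDH indicator

for feat_name, feats in [("base", base), ("base + SDH", with_sdh)]:
    for model_name, model in [("OLS", LinearRegression()),
                              ("GBM", GradientBoostingRegressor(random_state=0))]:
        r2, mae = evaluate(feats, model)
        print(f"{model_name:>3} | {feat_name:<10} R2={r2:.3f}  MAE=${mae:,.0f}")
```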


Subject(s)
Health Promotion/economics , Health Promotion/statistics & numerical data , Insurance, Health/economics , Insurance, Health/statistics & numerical data , Machine Learning/economics , Machine Learning/statistics & numerical data , Social Determinants of Health/economics , Social Determinants of Health/statistics & numerical data , Adult , Cost-Benefit Analysis , Female , Humans , Male , Middle Aged , Prospective Studies , Risk Adjustment
8.
J Am Coll Radiol ; 16(6): 840-844, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30833164

ABSTRACT

OBJECTIVE: Radiology is a finite health care resource in high demand at most health centers. However, anticipating fluctuations in demand is a challenge because of the inherent uncertainty in disease prognosis. The aim of this study was to explore the potential of natural language processing (NLP) to predict downstream radiology resource utilization in patients undergoing surveillance for hepatocellular carcinoma (HCC). MATERIALS AND METHODS: All HCC surveillance CT examinations performed at our institution from January 1, 2010, to October 31, 2017, were selected from our departmental radiology information system. We used open-source NLP and machine learning software to parse radiology report text into bag-of-words and term frequency-inverse document frequency (TF-IDF) representations. Three machine learning models (logistic regression, support vector machine [SVM], and random forest) were used to predict future utilization of radiology department resources. A test data set was used to calculate accuracy, sensitivity, and specificity in addition to the area under the curve (AUC). RESULTS: As a group, the bag-of-words models were slightly inferior to the TF-IDF feature extraction approach. The TF-IDF + SVM model outperformed all other models with an accuracy of 92%, a sensitivity of 83%, and a specificity of 96%, with an AUC of 0.971. CONCLUSIONS: NLP-based models can accurately predict downstream radiology resource utilization from narrative HCC surveillance reports and have potential for translation to health care management, where they may improve decision making, reduce costs, and broaden access to care.
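A TF-IDF + SVM text-classification pipeline of the kind described is a few lines in scikit-learn. The report snippets and labels below are invented stand-ins for narrative HCC surveillance reports and their downstream-utilization outcomes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy surveillance-report snippets; real inputs would be full narrative reports.
reports = [
    "stable 8 mm arterial-enhancing lesion, continue surveillance",
    "new 2.4 cm LI-RADS 5 lesion, recommend multiphase MRI and referral",
    "no suspicious lesion, routine follow-up CT in six months",
    "growing lesion with washout, biopsy and clinic review advised",
] * 50
labels = [0, 1, 0, 1] * 50  # 1 = further downstream imaging expected (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(reports, labels, random_state=0, stratify=labels)
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", probability=True))
model.fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("Test AUC:", roc_auc_score(y_te, scores))
```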


Subject(s)
Carcinoma, Hepatocellular/diagnostic imaging , Liver Neoplasms/diagnostic imaging , Machine Learning/economics , Natural Language Processing , Tomography, X-Ray Computed/economics , Aged , Area Under Curve , Databases, Factual , Female , Health Resources/statistics & numerical data , Humans , Machine Learning/statistics & numerical data , Male , Middle Aged , Ontario , Predictive Value of Tests , ROC Curve , Radiology Department, Hospital , Radiology Information Systems , Research Report , Retrospective Studies , Sensitivity and Specificity , Tomography, X-Ray Computed/methods
9.
J Gen Intern Med ; 34(2): 211-217, 2019 02.
Article in English | MEDLINE | ID: mdl-30543022

ABSTRACT

BACKGROUND: Efforts to improve the value of care for high-cost patients may benefit from care management strategies targeted at clinically distinct subgroups of patients. OBJECTIVE: To evaluate the performance of three different machine learning algorithms for identifying subgroups of high-cost patients. DESIGN: We applied three different clustering algorithms-connectivity-based clustering using agglomerative hierarchical clustering, centroid-based clustering with the k-medoids algorithm, and density-based clustering with the OPTICS algorithm-to a clinical and administrative dataset. We then examined the extent to which each algorithm identified subgroups of patients that were (1) clinically distinct and (2) associated with meaningful differences in relevant utilization metrics. PARTICIPANTS: Patients enrolled in a national Medicare Advantage plan, categorized in the top decile of spending (n = 6154). MAIN MEASURES: Post hoc discriminative models comparing the importance of variables for distinguishing observations in one cluster from the rest. Variance in utilization and spending measures. KEY RESULTS: Connectivity-based, centroid-based, and density-based clustering identified eight, five, and ten subgroups of high-cost patients, respectively. Post hoc discriminative models indicated that density-based clustering subgroups were the most clinically distinct. The variance of utilization and spending measures was the greatest among the subgroups identified through density-based clustering. CONCLUSIONS: Machine learning algorithms can be used to segment a high-cost patient population into subgroups of patients that are clinically distinct and associated with meaningful differences in utilization and spending measures. For these purposes, density-based clustering with the OPTICS algorithm outperformed connectivity-based and centroid-based clustering algorithms.
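The three clustering families named in the abstract can be compared side by side with standard libraries. The sketch below uses synthetic blob data in place of the standardized clinical and utilization features; note that k-medoids comes from the separate scikit-learn-extra package rather than core scikit-learn.

```python
from sklearn.cluster import AgglomerativeClustering, OPTICS
from sklearn.datasets import make_blobs
from sklearn_extra.cluster import KMedoids  # requires: pip install scikit-learn-extra

# Synthetic stand-in for standardized features of high-cost patients.
X, _ = make_blobs(n_samples=600, centers=5, n_features=8, random_state=0)

results = {
    "connectivity (agglomerative)": AgglomerativeClustering(n_clusters=5).fit_predict(X),
    "centroid (k-medoids)": KMedoids(n_clusters=5, random_state=0).fit_predict(X),
    "density (OPTICS)": OPTICS(min_samples=10).fit_predict(X),  # label -1 marks noise points
}
for name, labels in results.items():
    n_clusters = len(set(labels) - {-1})
    print(f"{name}: {n_clusters} clusters")
```

In practice the choice of the number of clusters for the first two methods, and of density parameters for OPTICS, would itself be tuned; the post hoc discriminative models in the study are a separate step used to judge how clinically distinct the resulting subgroups are.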


Subject(s)
Algorithms , Health Care Costs , Machine Learning/economics , Medicare Part C/economics , Aged , Aged, 80 and over , Cluster Analysis , Female , Health Care Costs/trends , Humans , Machine Learning/trends , Male , Medicare Part C/trends , United States/epidemiology
10.
Sci Eng Ethics ; 25(5): 1389-1407, 2019 10.
Article in English | MEDLINE | ID: mdl-30357558

ABSTRACT

This paper argues that even though massive technological unemployment will likely be one of the results of automation, we will not need to institute mass-scale redistribution of wealth (such as would be involved in, e.g., instituting universal basic income) to deal with its consequences. Instead, reasons are given for cautious optimism about the standards of living the newly unemployed workers may expect in the (almost) fully-automated future. It is not claimed that these predictions will certainly bear out. Rather, they are no less likely to come to fruition than the predictions of those authors who predict that massive technological unemployment will lead to the suffering of the masses on such a scale that significant redistributive policies will have to be instituted to alleviate it. Additionally, the paper challenges the idea that the existence of a moral obligation to help the victims of massive unemployment justifies the coercive taking of anyone else's property.


Subject(s)
Income/trends , Moral Obligations , Technology/economics , Technology/ethics , Technology/trends , Unemployment/trends , Ethical Analysis , Forecasting , Humans , Machine Learning/economics , Machine Learning/ethics , Machine Learning/trends , Social Change , Social Conditions
13.
Big Data ; 5(3): 246-255, 2017 09.
Article in English | MEDLINE | ID: mdl-28933947

ABSTRACT

Machine learning algorithms increasingly influence our decisions and interact with us in all parts of our daily lives. Therefore, just as we consider the safety of power plants, highways, and a variety of other engineered socio-technical systems, we must also take into account the safety of systems involving machine learning. Heretofore, the definition of safety has not been formalized in a machine learning context. In this article, we do so by defining machine learning safety in terms of risk, epistemic uncertainty, and the harm incurred by unwanted outcomes. We then use this definition to examine safety in all sorts of applications in cyber-physical systems, decision sciences, and data products. We find that the foundational principle of modern statistical machine learning, empirical risk minimization, is not always a sufficient objective. We discuss how four different categories of strategies for achieving safety in engineering, including inherently safe design, safety reserves, safe fail, and procedural safeguards can be mapped to a machine learning context. We then discuss example techniques that can be adopted in each category, such as considering interpretability and causality of predictive models, objective functions beyond expected prediction accuracy, human involvement for labeling difficult or rare examples, and user experience design of software and open data.
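For reference, the standard statistical-learning quantities the article argues are insufficient on their own can be written as follows (textbook definitions and notation, not the article's exact formalization):

```latex
% Population risk of a hypothesis h under loss \ell and data distribution D:
R(h) \;=\; \mathbb{E}_{(x,y)\sim D}\bigl[\ell\bigl(h(x),\,y\bigr)\bigr]
% Empirical risk minimization chooses h from a hypothesis class \mathcal{H}
% using a sample of size n:
\hat{h} \;=\; \arg\min_{h \in \mathcal{H}} \; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(h(x_i),\,y_i\bigr)
```

The article's point is that minimizing expected loss in this sense does not by itself account for epistemic uncertainty about the data distribution or for the severity of rare, harmful outcomes, which is where the engineering-safety strategies it surveys come in.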


Subject(s)
Decision Making , Machine Learning , Safety , Algorithms , Machine Learning/economics
14.
AJR Am J Roentgenol ; 208(6): 1244-1248, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28753031

ABSTRACT

OBJECTIVE: We assessed the initial clinical performance and third-party reimbursement rates of supplementary computer-aided detection (CAD) at CT colonography (CTC) for detecting colorectal polyps 6 mm or larger in routine clinical practice. MATERIALS AND METHODS: We retrospectively assessed the prospective clinical performance of a U.S. Food and Drug Administration-approved CAD system in second-reader mode in 347 consecutive adults (mean age, 57.6 years; 205 women, 142 men) undergoing CTC evaluation over a 5-month period. The reference standard consisted of the prospective interpretation by experienced CTC radiologists combined with subsequent optical colonoscopy (OC), if performed. We also assessed third-party reimbursement for CAD for studies performed over an 18-month period. RESULTS: In all, 69 patients (mean [± SD] age, 59.0 ± 7.7 years; 32 men, 37 women) had 129 polyps ≥ 6 mm. Per-patient CAD sensitivity was 91.3% (63 of 69). Per-polyp CAD-alone sensitivity was 88.4% (114 of 129), including 88.3% (83 of 94) for 6- to 9-mm polyps and 88.6% (31 of 35) for polyps 10 mm or larger. On retrospective review, three additional polyps 6 mm or larger were seen at OC and marked by CAD but dismissed as CAD false-positives at CTC. The mean number of false-positive CAD marks was 4.4 ± 3.1 per series. Of 1225 CTC cases reviewed for reimbursement, 31.0% of the total charges for CAD interpretation had been recovered from a variety of third-party payers. CONCLUSION: In our routine clinical practice, CAD showed good sensitivity for detecting colorectal polyps 6 mm or larger, with an acceptable number of false-positive marks. Importantly, CAD is already being reimbursed by some third-party payers in our clinical CTC practice.


Subject(s)
Colonography, Computed Tomographic/economics , Colorectal Neoplasms/diagnostic imaging , Colorectal Neoplasms/economics , Insurance, Health, Reimbursement/economics , Intestinal Polyps/diagnostic imaging , Intestinal Polyps/economics , Colonography, Computed Tomographic/statistics & numerical data , Female , Humans , Insurance, Health, Reimbursement/statistics & numerical data , Machine Learning/economics , Machine Learning/statistics & numerical data , Male , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Reproducibility of Results , Sensitivity and Specificity , United States/epidemiology
15.
ACS Nano ; 11(2): 2266-2274, 2017 02 28.
Article in English | MEDLINE | ID: mdl-28128933

ABSTRACT

Plasmonic sensors have been used for a wide range of biological and chemical sensing applications. Emerging nanofabrication techniques have enabled these sensors to be cost-effectively mass manufactured onto various types of substrates. To accompany these advances, major improvements in sensor read-out devices must also be achieved to fully realize the broad impact of plasmonic nanosensors. Here, we propose a machine learning framework which can be used to design low-cost and mobile multispectral plasmonic readers that do not use traditionally employed bulky and expensive stabilized light sources or high-resolution spectrometers. By training a feature selection model over a large set of fabricated plasmonic nanosensors, we select the optimal set of illumination light-emitting diodes needed to create a minimum-error refractive index prediction model, which statistically takes into account the varied spectral responses and fabrication-induced variability of a given sensor design. This computational sensing approach was experimentally validated using a modular mobile plasmonic reader. We tested different plasmonic sensors with hexagonal and square periodicity nanohole arrays and revealed that the optimal illumination bands differ from those that are "intuitively" selected based on the spectral features of the sensor, e.g., transmission peaks or valleys. This framework provides a universal tool for the plasmonics community to design low-cost and mobile multispectral readers, helping the translation of nanosensing technologies to various emerging applications such as wearable sensing, personalized medicine, and point-of-care diagnostics. Beyond plasmonics, other types of sensors that operate based on spectral changes can broadly benefit from this approach, including e.g., aptamer-enabled nanoparticle assays and graphene-based sensors, among others.
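The core computational step, choosing a small subset of illumination bands that minimizes refractive-index prediction error, can be sketched with a generic feature-selection routine. The synthetic spectra, number of candidate bands, and linear response model below are assumptions for illustration; the study's actual framework is trained on measured spectra from fabricated nanosensors.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sensors, n_bands = 300, 12  # 12 candidate LED bands (hypothetical)
refractive_index = rng.uniform(1.33, 1.39, size=n_sensors)
# Synthetic multispectral readings: each band responds differently to refractive
# index, with fabrication-induced variability modeled as additive noise.
band_sensitivity = rng.normal(0, 1, size=n_bands)
spectra = refractive_index[:, None] * band_sensitivity + rng.normal(0, 0.02, size=(n_sensors, n_bands))

# Choose the 4 LED bands that best support refractive-index prediction.
selector = SequentialFeatureSelector(Ridge(), n_features_to_select=4, cv=5)
selector.fit(spectra, refractive_index)
chosen = np.flatnonzero(selector.get_support())
score = cross_val_score(Ridge(), spectra[:, chosen], refractive_index, cv=5).mean()
print("Selected band indices:", chosen, "cross-validated R^2:", round(score, 3))
```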


Subject(s)
Biosensing Techniques/instrumentation , Machine Learning , Nanostructures/chemistry , Nanotechnology/instrumentation , Surface Plasmon Resonance/instrumentation , Biosensing Techniques/economics , Equipment Design , Machine Learning/economics , Nanostructures/economics , Nanotechnology/economics , Surface Plasmon Resonance/economics
16.
Sci Rep ; 5: 12215, 2015 Jul 27.
Article in English | MEDLINE | ID: mdl-26212560

ABSTRACT

Molecular tests hold great potential for tuberculosis (TB) diagnosis, but they are costly and time consuming, and HIV-infected patients are often sputum scarce. Therefore, alternative approaches are needed. We evaluated automated digital chest radiography (ACR) as a rapid and cheap pre-screening test prior to Xpert MTB/RIF (Xpert). In total, 388 subjects with suspected TB underwent chest radiography, Xpert, and sputum culture testing. Radiographs were analysed by computer software (CAD4TB) and by specialist readers, and abnormality scores were allocated. A triage algorithm was simulated in which subjects with a score above a threshold underwent Xpert. We computed sensitivity, specificity, cost per screened subject (CSS), cost per notified TB case (CNTBC), and throughput for different diagnostic thresholds. Overall, 18.3% of subjects had culture-positive TB. For Xpert alone, sensitivity was 78.9%, specificity 98.1%, CSS $13.09, and CNTBC $90.70. In a pre-screening setting in which 40% of subjects would undergo Xpert, CSS decreased to $6.72 and CNTBC to $54.34, with eight TB cases missed and throughput increased from 45 to 113 patients per day. Specialists, on average, read 57% of radiographs as abnormal, reducing CSS ($8.95) and CNTBC ($64.84). ACR pre-screening could substantially reduce costs and increase daily throughput, with few TB cases missed. These data inform public health policy in resource-constrained settings.
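The triage simulation and its cost metrics are simple to reproduce in outline. In the sketch below, the abnormality-score distributions and the per-test unit costs are invented placeholders (only the prevalence and Xpert sensitivity figures are taken from the abstract), so the printed CSS and CNTBC values will not match the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 388
has_tb = rng.random(n) < 0.183  # culture-positive prevalence in the cohort (from the abstract)
# Hypothetical CAD abnormality scores: higher on average for TB cases.
score = np.where(has_tb, rng.normal(70, 15, n), rng.normal(40, 15, n))

COST_CXR, COST_XPERT = 2.0, 12.0  # assumed unit costs in US$, placeholders
XPERT_SENS = 0.789                # Xpert sensitivity vs. culture (from the abstract)

def triage(threshold):
    referred = score >= threshold                          # only these subjects get Xpert
    notified = has_tb & referred & (rng.random(n) < XPERT_SENS)
    total_cost = n * COST_CXR + referred.sum() * COST_XPERT
    css = total_cost / n                                   # cost per screened subject
    cntbc = total_cost / max(notified.sum(), 1)            # cost per notified TB case
    return referred.mean(), css, cntbc

for thr in (0, 50, 60):
    frac, css, cntbc = triage(thr)
    print(f"threshold {thr}: {frac:.0%} referred, CSS ${css:.2f}, CNTBC ${cntbc:.2f}")
```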


Subject(s)
Health Care Costs/statistics & numerical data , Pattern Recognition, Automated/economics , Radiography, Thoracic/economics , Triage/economics , Tuberculosis, Pulmonary/diagnosis , Tuberculosis, Pulmonary/economics , Adult , Female , Humans , Machine Learning/economics , Machine Learning/statistics & numerical data , Male , Molecular Diagnostic Techniques/economics , Netherlands/epidemiology , Pattern Recognition, Automated/methods , Prevalence , Prospective Studies , Radiography, Thoracic/statistics & numerical data , Reproducibility of Results , Resource Allocation/economics , Sensitivity and Specificity , Triage/statistics & numerical data , Tuberculosis, Pulmonary/epidemiology , Utilization Review
17.
J Public Health Manag Pract ; 20(5): 523-9, 2014.
Article in English | MEDLINE | ID: mdl-24084391

ABSTRACT

CONTEXT: Most local public health departments serve limited English proficiency groups but lack sufficient resources to translate the health promotion materials that they produce into different languages. Machine translation (MT) with human postediting could fill this gap and work toward decreasing health disparities among non-English speakers. OBJECTIVES: (1) To identify the time and costs associated with human translation (HT) of public health documents, (2) to determine the time necessary for human postediting of MT, and (3) to compare the quality of postedited MT and HT. DESIGN: A quality comparison of 25 MT and HT documents was performed with public health translators. The public health professionals involved were queried about the workflow, costs, and time for HT of 11 English public health documents over a 20-month period. Three recently translated documents of similar size and topic were then machine translated, the time for human postediting was recorded, and a blind quality analysis was performed. SETTING: Seattle/King County, Washington. PARTICIPANTS: Public health professionals. MAIN OUTCOME MEASURES: (1) Estimated times for various HT tasks; (2) observed postediting times for MT documents; (3) actual costs for HT; and (4) comparison of quality ratings for HT and MT. RESULTS: Human translation via local health department methods took 17 hours to 6 days. HT proceeded at 1.58 to 5.88 words per minute, whereas MT plus human postediting achieved 10 to 30 words per minute. The cost of HT ranged from $130 to $1220; MT required no additional costs. A quality comparison by bilingual public health professionals showed that MT and HT were equivalently preferred. CONCLUSIONS: MT with human postediting can reduce the time and costs of translating public health materials while maintaining quality similar to that of HT. In conjunction with postediting, MT could greatly improve the availability of multilingual public health materials.
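As a back-of-envelope illustration of what those throughput figures imply per document, the snippet below converts the reported words-per-minute ranges into hours for a hypothetical 1,500-word document; the document length is an assumption, not a figure from the study.

```python
# Convert the reported words-per-minute ranges into per-document hours.
doc_words = 1500                  # hypothetical document length, not from the study
ht_wpm = (1.58, 5.88)             # human translation
mt_pe_wpm = (10, 30)              # machine translation plus human postediting

for label, (lo, hi) in [("Human translation", ht_wpm), ("MT + postediting", mt_pe_wpm)]:
    print(f"{label}: {doc_words / hi / 60:.1f} to {doc_words / lo / 60:.1f} hours")
```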


Subject(s)
Electronic Data Processing , Health Promotion , Public Health Informatics , Public Health Practice , Quality Control , Translating , Access to Information , Electronic Data Processing/economics , Humans , Language , Machine Learning/economics , Public Health Informatics/economics , Time Factors