Results 1 - 10 of 10
1.
Sci Rep ; 14(1): 8719, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38622207

ABSTRACT

Occult hemorrhages after trauma can present insidiously and, if not detected early enough, can result in patient death. This study evaluated a hemorrhage model in 18 human subjects, comparing the performance of traditional vital signs with multiple off-the-shelf non-invasive biomarkers. A validated lower body negative pressure (LBNP) model was used to induce progression toward hypovolemic cardiovascular instability. Traditional vital signs included mean arterial pressure (MAP), electrocardiography (ECG), and plethysmography (Pleth), while the test systems measured electrical impedance via commercial electrical impedance tomography (EIT) and multifrequency electrical impedance spectroscopy (EIS) devices. Absolute and relative metrics were used to evaluate performance, in addition to machine learning-based modeling. Relative EIT-based metrics measured on the thorax outperformed vital sign metrics (MAP, ECG, and Pleth), achieving an area-under-the-curve (AUC) of 0.99 (CI 0.95-1.00, 100% sensitivity, 87.5% specificity) at the smallest LBNP change (0-15 mmHg). The best vital sign metric (MAP) at this LBNP change yielded an AUC of 0.6 (CI 0.38-0.79, 100% sensitivity, 25% specificity). Out-of-sample predictive performance from machine learning models was strong, especially when signals from multiple technologies were combined simultaneously. EIT, alone or in machine learning-based combination, appears promising as a technology for early detection of progression toward hemodynamic instability.
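The headline numbers above are threshold metrics derived from an ROC analysis. As a hypothetical illustration (synthetic scores, not the study's data), an AUC together with a Youden-optimal sensitivity/specificity operating point might be computed like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical relative-change metric: higher values in simulated-hemorrhage epochs.
control = rng.normal(0.0, 1.0, 16)   # e.g., baseline (0 mmHg LBNP) epochs
bleed = rng.normal(2.5, 1.0, 16)     # e.g., 15 mmHg LBNP epochs
y_true = np.r_[np.zeros(16), np.ones(16)]
y_score = np.r_[control, bleed]

auc = roc_auc_score(y_true, y_score)

# Operating point: Youden's J statistic picks the threshold that
# maximizes sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(round(auc, 2), round(sensitivity, 2), round(specificity, 2))
```

The reported confidence intervals would come from an additional bootstrap over epochs, omitted here for brevity.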


Subject(s)
Cardiovascular System , Hypovolemia , Humans , Hypovolemia/diagnosis , Lower Body Negative Pressure , Vital Signs , Biomarkers
2.
Mil Med ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38537150

ABSTRACT

INTRODUCTION: Detection of occult hemorrhage (OH) before progression to clinically apparent changes in vital signs remains an important clinical problem in managing trauma patients. The resource-intensiveness of continuous clinical patient monitoring and rescue from frank shock makes accurate early detection and prediction with noninvasive measurement technology a desirable innovation. Despite significant efforts directed toward the development of innovative noninvasive diagnostics, the implementation and performance of the newest bedside technologies remain inadequate. This poor performance may reflect the limitations of univariate systems based on one sensor in one anatomic location. It is possible that, when signals are measured with multiple modalities in multiple locations, the resulting multivariate anatomic and temporal patterns may provide additional discriminative power over single-technology univariate measurements. We evaluated the potential superiority of multivariate methods over univariate methods. Additionally, we utilized machine learning-based models to compare the performance of noninvasive-only versus noninvasive-plus-invasive measurements in predicting the onset of OH. MATERIALS AND METHODS: We applied machine learning methods to preexisting datasets derived using the lower body negative pressure human model of simulated hemorrhage. Employing multivariate measured physiological signals, we investigated the extent to which machine learning methods can effectively predict the onset of OH. In particular, we applied 2 ensemble learning methods, namely random forest and gradient boosting. RESULTS: Analysis of precision, recall, and area under the receiver operating characteristic curve showed superior performance of the multivariate approach relative to the univariate ones. In addition, when using both invasive and noninvasive features, the random forest classifier had a recall 95% confidence interval (CI) of 0.81 to 0.86 with a precision 95% CI of 0.65 to 0.72. Interestingly, when only noninvasive features were employed, the results worsened only slightly, to a recall 95% CI of 0.80 to 0.85 and a precision 95% CI of 0.61 to 0.73. CONCLUSIONS: Multivariate ensemble machine learning-based approaches to predicting hemodynamic instability appear to hold promise for the development of effective solutions. In the lower body negative pressure multivariate hemorrhage model, predictions based only on noninvasive measurements performed comparably to those using both invasive and noninvasive measurements.
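Both ensemble methods named in the abstract are available off the shelf in scikit-learn. A minimal sketch of cross-validated precision and recall, on synthetic stand-in features rather than the study's LBNP data, might look like:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for multivariate physiological features.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)

results = {}
for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    scores = cross_validate(clf, X, y, cv=5, scoring=("precision", "recall"))
    results[name] = (scores["test_precision"].mean(),
                     scores["test_recall"].mean())
    print(name, "precision=%.2f recall=%.2f" % results[name])
```

Confidence intervals like those reported would follow from repeating the cross-validation over bootstrap resamples.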

3.
Indian J Microbiol ; 62(4): 602-609, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36090200

ABSTRACT

To analyze the clinico-pathological profile of patients presenting with mucormycosis infection to a tertiary care center during the second wave of the COVID-19 pandemic in India. This descriptive cross-sectional study was conducted in a hospital setting from April 2021 to July 2021, and analysis was carried out to find associations between the stratified data and the extent of disease involvement based on radiological findings. Statistical measures and tests (percentages, averages, the chi-square test, etc.) were applied wherever relevant using Minitab 13. All 51 patients had involvement of at least one paranasal sinus. The incidence of previously established risk factors was: diabetes (66.67%), history of severe COVID-19 disease (5.88%), raised serum iron levels (1.96%), acidosis (3.92%), steroid administration (62.75%), and oxygen administration (25.49%). Elevated serum urea levels (76.47%), alkalosis (50.98%), and hyperglycemia on multiple occasions (41.17%) were also observed. The mean interval between the start of treatment for COVID-19 and the appearance of the first symptom suggesting mucormycosis was 27.59 days. In only 5.88% of participants did mucormycosis precede the detection of COVID-19 infection. The current work finds traditional risk factors and associations present at significantly lower frequencies than in the reviewed literature. However, blood urea was elevated in three-fourths of the participants. Larger-scale studies in mucormycosis patients are warranted to establish the role of other risk factors, including a possible role of elevated blood urea and hyperglycemia in the present era.

4.
Biomed Opt Express ; 13(6): 3171-3186, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-35781962

ABSTRACT

Dynamic contrast-enhanced fluorescence imaging (DCE-FI) classification of tissue viability in twelve adult patients undergoing below-knee leg amputation is presented. During amputation, with the distal bone exposed, indocyanine green contrast-enhanced images were acquired sequentially at baseline, following transverse osteotomy, and following periosteal stripping, offering a uniquely well-controlled fluorescence dataset. An unsupervised classification machine leveraging 21 different spatiotemporal features was trained and evaluated by cross-validation on 3.5 million regions of interest obtained from 9 patients, demonstrating accurate stratification into normal, suspicious, and compromised regions. The machine learning (ML) approach also achieved a two-fold increase in accuracy over the standard method of evaluating tissue perfusion from fluorescence intensity alone. The generalizability of the machine was evaluated on image series acquired from an additional three patients, confirming the stability of the model and its ability to sort future patient image sets into viability categories.
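The abstract does not specify the unsupervised classifier used. As one hedged illustration of how regions of interest might be stratified into three viability groups from spatiotemporal features, k-means clustering on entirely synthetic features (the feature names and values below are invented) could be sketched as:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical per-ROI spatiotemporal features, e.g.,
# (ingress slope, time-to-peak, peak intensity) for three perfusion states.
normal = rng.normal([1.0, 0.3, 0.9], 0.1, (300, 3))
suspicious = rng.normal([0.5, 0.6, 0.5], 0.1, (300, 3))
compromised = rng.normal([0.1, 1.0, 0.1], 0.1, (300, 3))

# Standardize so no single feature dominates the Euclidean distance.
X = StandardScaler().fit_transform(np.vstack([normal, suspicious, compromised]))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("clusters found:", len(set(labels)))
```

Cluster labels would still need to be mapped to clinical categories (normal/suspicious/compromised) using reference regions, as unsupervised methods do not name their clusters.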

5.
J Oral Maxillofac Pathol ; 26(1): 129, 2022.
Article in English | MEDLINE | ID: mdl-35571324

ABSTRACT

Proliferative fasciitis (PF) is a rare pseudosarcomatous myofibroblastic benign tumor, the subcutaneous counterpart of proliferative myositis. It is usually seen in the upper extremities; no case has yet been documented in the tongue or any other subsite of the oral cavity. The present case is thus the first to be reported at this site, as well as the first case of synchronous PF coexistent with squamous cell carcinoma (SCC) of the tongue. The patient was a 50-year-old male with a polypoidal swelling at the right lateral border of the tongue and an ulcer adjacent to it. Histopathologically, the swelling was diagnosed as PF and the ulcer as SCC; both diagnoses were confirmed by immunohistochemistry. The polypoidal lesion was immunopositive for smooth muscle actin and calponin and immunonegative for pan-cytokeratin, cytokeratins (5/6), P40, and P63, proving it to be a non-SCC lesion distinct from the adjacent ulcerative one.

6.
Physiol Meas ; 43(5)2022 05 25.
Article in English | MEDLINE | ID: mdl-35508144

ABSTRACT

Objective. Analyze the performance of electrical impedance tomography (EIT) in an innovative porcine model of subclinical hemorrhage and investigate associations between EIT and hemodynamic trends. Approach. Twenty-five swine were bled at slow rates to create an extended period of subclinical hemorrhage during which the animal's heart rate (HR) and blood pressure (BP) remained stable prior to hemodynamic deterioration, where stable was defined as <15% decrease in BP and <20% increase in HR; i.e., the hemorrhages were hidden from the standard vital signs of HR and BP. Continuous vital signs, photoplethysmography, and continuous non-invasive EIT data were recorded and analyzed with the objective of developing an improved means of detecting subclinical hemorrhage, ideally as early as possible. Main results. The best area-under-the-curve (AUC) values from comparing bleed to no-bleed epochs were 0.96 at an 80 ml bleed (∼15.4 min) using an EIT-data-based metric and 0.79 at a 120 ml bleed (∼23.1 min) from invasively measured BP; i.e., the EIT-data-based metric achieved higher AUCs at earlier time points than standard clinical metrics, without requiring image reconstructions. Significance. In this clinically relevant porcine model of subclinical hemorrhage, EIT appears to be superior to standard clinical metrics for early detection of hemorrhage.
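The bleed-vs-no-bleed epoch comparison underlying these AUC values can be mimicked on synthetic data. The separations below are invented solely to show how the AUC of a detection metric grows with bleed volume; they are not the study's measurements:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
no_bleed = rng.normal(0.0, 1.0, 25)  # one baseline epoch per animal

# Hypothetical metric whose separation from baseline grows with bleed volume.
aucs = {}
for volume_ml, shift in [(40, 0.5), (80, 1.5), (120, 2.5)]:
    bleed = rng.normal(shift, 1.0, 25)
    y = np.r_[np.zeros(25), np.ones(25)]
    s = np.r_[no_bleed, bleed]
    aucs[volume_ml] = roc_auc_score(y, s)
    print(volume_ml, "ml: AUC=%.2f" % aucs[volume_ml])
```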


Subject(s)
Hemorrhage , Tomography , Animals , Electric Impedance , Hemorrhage/diagnostic imaging , Image Processing, Computer-Assisted , Swine , Tomography/methods , Tomography, X-Ray Computed
7.
Health Care Manag Sci ; 24(3): 460-481, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33394213

ABSTRACT

This study is concerned with determining an optimal appointment schedule in an outpatient-inpatient hospital system where inpatient exams can be cancelled based on certain rules while outpatient exams cannot. Stochastic programming models were formulated and solved to handle the stochasticity in procedure durations and patient arrival patterns. The first model, a two-stage stochastic programming model, optimizes the slot size. The second model further optimizes the inpatient block (IPB) placement and slot size simultaneously. A computational method was developed to solve the second optimization problem. A case study was conducted using data from the Magnetic Resonance Imaging (MRI) centers of Lahey Hospital and Medical Center (LHMC). The current schedule and the schedules obtained from the optimization models were evaluated and compared using simulation in FlexSim Healthcare. Results indicate that the overall weighted cost can be reduced by 11.6% by optimizing the slot size, and by an additional 12.6% by optimizing slot size and IPB placement simultaneously. Three commonly used sequencing rules (IPBEG, OPBEG, and a variant of the ALTER rule) were also evaluated. The results showed that when optimization tools are not available, the ALTER variant, which evenly distributes the IPBs across the day, has the best performance. Sensitivity analysis of the weights for patient waiting time, machine idle time, and exam cancellations further supports the superiority of the ALTER-variant sequencing rule over the other sequencing methods. A Pareto frontier between patient waiting time and machine idle time was also developed and presented, enabling medical centers with different priorities to obtain solutions that accurately reflect their respective optimal tradeoffs. An extended optimization model was also developed to incorporate emergency patient arrivals. The optimal schedules from the extended model show only minor differences from those of the original model, demonstrating the robustness of the scheduling solutions against the impact of emergency patient arrivals.
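In its simplest form, the slot-size decision in the first model is a newsvendor-style tradeoff: a slot that is too short makes the next patient wait (overtime), while a slot that is too long leaves the scanner idle. A sample-average-approximation sketch with invented durations and cost weights (not the paper's formulation or data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Scenario samples of procedure durations in minutes (hypothetical distribution).
durations = rng.lognormal(mean=3.4, sigma=0.3, size=1000)  # ~30 min median

W_WAIT, W_IDLE = 1.0, 0.5  # hypothetical cost weights

def expected_cost(slot):
    # If the exam runs past the slot, the next patient waits (overtime);
    # if it finishes early, the scanner sits idle.
    wait = np.maximum(durations - slot, 0.0)
    idle = np.maximum(slot - durations, 0.0)
    return W_WAIT * wait.mean() + W_IDLE * idle.mean()

candidates = np.arange(20, 61)  # candidate slot sizes in minutes
best = min(candidates, key=expected_cost)
print("optimal slot size:", best, "min")
```

The paper's two-stage model additionally handles arrival patterns, cancellations, and IPB placement; this sketch only isolates the duration-uncertainty tradeoff.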


HIGHLIGHTS: Timestamped operational data was analyzed to identify sources of uncertainty and delays. Stochastic programming models were developed to optimize slot size and inpatient block placement. A case study showed that the optimized schedules can reduce overall costs by 23%. Distributing inpatient and outpatient slots evenly throughout the day provides the best performance. A Pareto frontier was developed to allow practitioners to choose their own best tradeoffs between multiple objectives.


Subject(s)
Inpatients , Outpatients , Appointments and Schedules , Computer Simulation , Humans , Time Factors
8.
Mil Med ; 186(Suppl 1): 440-444, 2021 01 25.
Article in English | MEDLINE | ID: mdl-33499451

ABSTRACT

INTRODUCTION: The ability to accurately detect hypotension in trauma patients at the earliest possible time is important for improving trauma outcomes. The earlier an accurate detection can be made, the more time is available to take corrective action. Currently, there is limited research on combining multiple physiological signals for early detection of hemorrhagic shock. We studied the viability of early detection of hypotension based on multiple physiologic signals and machine learning methods, exploring proof of concept with a small (5-minute) prediction window. MATERIALS AND METHODS: Multivariate physiological signals from a preexisting dataset generated by an experimental hemorrhage model were employed. These experiments were conducted previously by another research group, and the data were made publicly available through a web portal. This dataset is among the few publicly available ones that incorporate measurements of multiple physiological signals from large animals during experimental hemorrhage. The data included two hemorrhage studies involving eight sheep. Supervised machine learning experiments were conducted to develop deep learning (viz., long short-term memory, or LSTM), ensemble learning (viz., random forest), and classical learning (viz., support vector machine, or SVM) models for identifying physiological signals that can detect whether overall blood loss exceeds a predefined threshold 5 minutes ahead of time. To evaluate the performance of the machine learning technologies, 3-fold cross-validation was conducted, and precision (also called positive predictive value) and recall (also called sensitivity) values were compared. As a first step in this development process, 5-minute prediction windows were utilized.
RESULTS: The results showed that SVM and random forest outperform LSTM neural networks, likely because LSTMs tend to overfit small datasets. Random forest had the highest recall (84%) with 56% precision, while SVM had 62% recall with 82% precision. Analysis of feature importance showed that the electrocardiogram had the highest importance while arterial blood pressure had the least among all signals. CONCLUSION: In this research, we explored the viability of early detection of hypotension based on multiple signals in a preexisting animal hemorrhage dataset. The results show that a multivariate approach may be more effective than univariate approaches for this detection task.
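Feature importances of the kind referenced in the results can be read directly from a fitted random forest in scikit-learn. The signal names and data below are synthetic stand-ins, not the sheep dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: 4 "signals" of unequal informativeness.
# With shuffle=False, the informative features come first.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
names = ["ECG", "PPG", "CVP", "ABP"]  # hypothetical signal names

clf = RandomForestClassifier(random_state=0).fit(X, y)
for name, imp in sorted(zip(names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(name, round(imp, 3))
```

Impurity-based importances can be biased toward high-cardinality features; permutation importance is a common cross-check.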


Subject(s)
Hypotension , Machine Learning , Animals , Hypotension/diagnosis , Models, Theoretical , Neural Networks, Computer , Sheep , Support Vector Machine
9.
Mil Med ; 186(Suppl 1): 445-451, 2021 01 25.
Article in English | MEDLINE | ID: mdl-33499528

ABSTRACT

INTRODUCTION: Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC III Physionet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performance. MATERIALS AND METHODS: Five classification methods, including K-nearest neighbor, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory, are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict continuous MAP values over the next 60 minutes. RESULTS: Support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict MAP 60 minutes into the future with a root mean square error (a frequently used measure of the differences between predicted and observed values) of 10 mmHg. After converting the continuous MAP predictions into binary AHE predictions, we achieved 91% recall and 68% precision. In addition to predicting the AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence. CONCLUSION: We were able to predict AHE 30 minutes in advance with precision and recall above 80% on this large real-world dataset. The regression model's predictions can provide a more fine-grained, interpretable signal to practitioners. Model performance is improved by including invasive features, compared to predicting the AHE from only the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
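Converting continuous MAP predictions into binary AHE predictions amounts to thresholding both the true and predicted series. The sketch below is synthetic: the 65 mmHg cutoff is a commonly used MAP hypotension threshold but is an assumption here (the abstract does not state the AHE definition), and the error model simply mimics the reported ~10 mmHg RMSE:

```python
import numpy as np

rng = np.random.default_rng(4)
AHE_THRESHOLD = 65.0  # hypothetical MAP cutoff, mmHg

true_map = rng.normal(80, 15, 500)              # synthetic observed MAP
pred_map = true_map + rng.normal(0, 10, 500)    # ~10 mmHg prediction error

rmse = np.sqrt(np.mean((pred_map - true_map) ** 2))

# Threshold both series, then score the binary predictions.
y_true = true_map < AHE_THRESHOLD
y_pred = pred_map < AHE_THRESHOLD
tp = np.sum(y_true & y_pred)
recall = tp / y_true.sum()
precision = tp / y_pred.sum()
print("RMSE=%.1f recall=%.2f precision=%.2f" % (rmse, recall, precision))
```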


Subject(s)
Hypotension , Algorithms , Critical Illness , Humans , Hypotension/diagnosis , Intensive Care Units , Machine Learning
10.
MDM Policy Pract ; 4(1): 2381468319856306, 2019.
Article in English | MEDLINE | ID: mdl-31259251

ABSTRACT

Background. In response to demand for fast and efficient clinical testing, the use of point-of-care testing (POCT) has become increasingly common in the United States. However, studies of POCT implementation have found that adopting POCT may not always be advantageous relative to centralized laboratory testing. Methods. We construct a simulation model of patient flow in an outpatient care setting to evaluate the tradeoffs involved in POCT implementation across multiple dimensions, comparing measures of patient outcomes across varying clinical scenarios, testing regimes, and patient conditions. Results. We find that POCT can significantly reduce clinical time for patients, compared to traditional testing regimes, in settings where clinic and central testing areas are far apart. However, as the distance from clinic to central testing area decreased, the POCT advantage over central laboratory testing also decreased, in terms of time in the clinical system and estimated subsequent productivity loss. For example, testing for pneumonia resulted in an estimated average of 27.80 (central lab) versus 15.50 (POCT) total lost productive hours in a rural scenario, and an average of 14.92 (central lab) versus 15.50 (POCT) hours in a hospital-based scenario. Conclusions. Our results show that POCT can effectively reduce the average time a patient spends in the system across varying condition profiles and clinical scenarios. However, the number of total lost productive hours, a more holistic measure, is greatly affected by testing quality, where POCT is often at a disadvantage. Thus, it is important to consider factors such as clinical setting, target condition, testing costs, and test quality when selecting an appropriate testing regime.
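The time component of the tradeoff can be illustrated with a toy Monte Carlo model. All durations below are invented, and test quality and productivity loss, which drove the hospital-based result in favor of the central lab, are deliberately not modeled:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10_000  # simulated patient visits

visit = rng.normal(30, 5, N)                # clinical encounter, minutes
poct_turnaround = rng.normal(15, 3, N)      # on-site test, minutes
central_turnaround = rng.normal(45, 10, N)  # assay at central lab, minutes

def total_time(transport_min):
    # Central lab incurs round-trip sample transport; POCT does not.
    central = visit + 2 * transport_min + central_turnaround
    poct = visit + poct_turnaround
    return central.mean(), poct.mean()

results = {}
for scenario, transport in [("rural", 120), ("hospital-based", 5)]:
    results[scenario] = total_time(transport)
    print(scenario, "central=%.0f min, POCT=%.0f min" % results[scenario])
```

As in the study, the POCT time advantage shrinks as the transport distance to the central lab shrinks.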
