Results 1 - 20 of 5,266
1.
Front Public Health ; 12: 1417429, 2024.
Article in English | MEDLINE | ID: mdl-38939564

ABSTRACT

The concept of race is prevalent in medical, nursing, and public health literature. Clinicians often incorporate race into diagnostics, prognostic tools, and treatment guidelines. An example is the recently heavily debated use of race and ethnicity in the Vaginal Birth After Cesarean (VBAC) calculator. In this case, the critics argued that the use of race in this calculator implied that race confers immutable characteristics that affect the ability of women to give birth vaginally after a c-section. This debate is co-occurring as research continues to highlight the racial disparities in health outcomes, such as high maternal mortality among Black women compared to other racial groups in the United States. As the healthcare system contemplates the necessity of utilizing race, a social and political construct, to monitor health outcomes, more questions have arisen about incorporating race into clinical algorithms, including pulmonary tests, kidney function tests, pharmacotherapies, and genetic testing. This paper critically examines the argument against the race-based Vaginal Birth After Cesarean (VBAC) calculator, shedding light on its implications. Moreover, it delves into the detrimental effects of normalizing race as a biological variable, which hinders progress in improving health outcomes and equity.


Subjects
Algorithms, Humans, Female, Pregnancy, United States, Maternal Health/statistics & numerical data, Maternal Health/ethnology, Racial Groups/statistics & numerical data, Cesarean Section/statistics & numerical data
2.
Curr Drug Deliv ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38939987

ABSTRACT

Nanoliposomal formulations, utilizing lipid bilayers to encapsulate therapeutic agents, hold promise for targeted drug delivery. Recent studies have explored the application of machine learning (ML) techniques in this field. This study aims to elucidate the motivations behind integrating ML into liposomal formulations, providing a nuanced understanding of its applications and highlighting potential advantages. The review begins with an overview of liposomal formulations and their role in targeted drug delivery. It then systematically progresses through current research on ML in this area, discussing the principles guiding ML adaptation for liposomal preparation and characterization. Additionally, the review proposes a conceptual model for effective ML incorporation. The review explores popular ML techniques, including ensemble learning, decision trees, instance-based learning, and neural networks. It discusses feature extraction and selection, emphasizing the influence of dataset nature and ML method choice on technique relevance. The review underscores the importance of supervised learning models for structured liposomal formulations, where labeled data is essential. It acknowledges the merits of K-fold cross-validation but notes the prevalent use of single train/test splits in liposomal formulation studies. This practice facilitates the visualization of results through 3D plots for practical interpretation. While highlighting the mean absolute error as a crucial metric, the review emphasizes consistency between predicted and actual values. It clearly demonstrates ML techniques' effectiveness in optimizing critical formulation parameters such as encapsulation efficiency, particle size, drug loading efficiency, polydispersity index, and liposomal flux. In conclusion, the review navigates the nuances of various ML algorithms, illustrating ML's role as a decision support system for liposomal formulation development. It proposes a structured framework involving experimentation, physicochemical analysis, and iterative ML model refinement through human-centered evaluation, guiding future studies. Emphasizing meticulous experimentation, interdisciplinary collaboration, and continuous validation, the review advocates seamless ML integration into liposomal drug delivery research for robust advancements. Future endeavors are encouraged to uphold these principles.
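As an illustration of the single train/test split with mean absolute error that the review reports as common practice, a minimal Python sketch follows. The features (lipid ratio, drug:lipid ratio, sonication time) and the synthetic encapsulation-efficiency target are placeholders, not data from any of the cited studies.

```python
# Illustrative sketch only: single train/test split and MAE evaluation for a
# regression model predicting encapsulation efficiency from hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Placeholder dataset: columns = lipid ratio, drug:lipid ratio, sonication time (min)
X = rng.uniform([0.1, 0.05, 1], [0.9, 0.5, 30], size=(200, 3))
y = 40 + 50 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 3, 200)  # synthetic EE (%)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```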

3.
J Med Internet Res ; 26: e54571, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38935937

ABSTRACT

BACKGROUND: Artificial intelligence, particularly chatbot systems, is becoming an instrumental tool in health care, aiding clinical decision-making and patient engagement. OBJECTIVE: This study aims to analyze the performance of ChatGPT-3.5 and ChatGPT-4 in addressing complex clinical and ethical dilemmas, and to illustrate their potential role in health care decision-making while comparing seniors' and residents' ratings, and specific question types. METHODS: A total of 4 specialized physicians formulated 176 real-world clinical questions. A total of 8 senior physicians and residents assessed responses from GPT-3.5 and GPT-4 on a 1-5 scale across 5 categories: accuracy, relevance, clarity, utility, and comprehensiveness. Evaluations were conducted within internal medicine, emergency medicine, and ethics. Comparisons were made globally, between seniors and residents, and across classifications. RESULTS: Both GPT models received high mean scores (4.4, SD 0.8 for GPT-4 and 4.1, SD 1.0 for GPT-3.5). GPT-4 outperformed GPT-3.5 across all rating dimensions, with seniors consistently rating responses higher than residents for both models. Specifically, seniors rated GPT-4 as more beneficial and complete (mean 4.6 vs 4.0 and 4.6 vs 4.1, respectively; P<.001), and GPT-3.5 similarly (mean 4.1 vs 3.7 and 3.9 vs 3.5, respectively; P<.001). Ethical queries received the highest ratings for both models, with mean scores reflecting consistency across accuracy and completeness criteria. Distinctions among question types were significant, particularly for the GPT-4 mean scores in completeness across emergency, internal, and ethical questions (4.2, SD 1.0; 4.3, SD 0.8; and 4.5, SD 0.7, respectively; P<.001), and for GPT-3.5's accuracy, beneficial, and completeness dimensions. CONCLUSIONS: ChatGPT's potential to assist physicians with medical issues is promising, with prospects to enhance diagnostics, treatments, and ethics. While integration into clinical workflows may be valuable, it must complement, not replace, human expertise. Continued research is essential to ensure safe and effective implementation in clinical environments.


Subjects
Clinical Decision-Making, Humans, Artificial Intelligence
4.
Entropy (Basel) ; 26(6)2024 May 28.
Article in English | MEDLINE | ID: mdl-38920470

ABSTRACT

Quantum computing (QC) has opened the door to advancements in machine learning (ML) tasks that are currently implemented in the classical domain. Convolutional neural networks (CNNs) are classical ML architectures that exploit data locality and possess a simpler structure than fully connected multi-layer perceptrons (MLPs) without compromising the accuracy of classification. However, the concept of preserving data locality is usually overlooked in the existing quantum counterparts of CNNs, particularly for extracting multifeatures in multidimensional data. In this paper, we present a multidimensional quantum convolutional classifier (MQCC) that performs multidimensional and multifeature quantum convolution with average and Euclidean pooling, thus adapting the CNN structure to a variational quantum algorithm (VQA). The experimental work was conducted using multidimensional data to validate the correctness and demonstrate the scalability of the proposed method utilizing both noisy and noise-free quantum simulations. We evaluated the MQCC model with reference to reported work on state-of-the-art quantum simulators from IBM Quantum and Xanadu using a variety of standard ML datasets. The experimental results show the favorable characteristics of our proposed techniques compared with existing work with respect to a number of quantitative metrics, such as the number of training parameters, cross-entropy loss, classification accuracy, circuit depth, and quantum gate count.
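The abstract contrasts average and Euclidean pooling. The following minimal classical sketch illustrates the two pooling operations on a 2D feature map under the assumption that Euclidean pooling is the root-mean-square over each window; the paper's quantum-circuit implementation is not reproduced here.

```python
# Minimal classical illustration of the two pooling operations named in the abstract.
# Assumption: "Euclidean pooling" is taken as the root-mean-square over each window.
import numpy as np

def pool2d(x, k=2, mode="average"):
    h, w = x.shape[0] // k, x.shape[1] // k
    blocks = x[:h * k, :w * k].reshape(h, k, w, k)
    if mode == "average":
        return blocks.mean(axis=(1, 3))
    if mode == "euclidean":
        return np.sqrt((blocks ** 2).mean(axis=(1, 3)))
    raise ValueError(mode)

feature_map = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(feature_map, mode="average"))
print(pool2d(feature_map, mode="euclidean"))
```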

5.
Curr Oncol ; 31(6): 3253-3268, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38920730

ABSTRACT

BACKGROUND: Abdominoperineal resection (APR), the standard surgical procedure for low-lying rectal cancer (LRC), leads to significant perineal defects, posing considerable reconstruction challenges that, in selected cases, necessitate the use of plastic surgery techniques (flaps). PURPOSE: To develop valuable decision algorithms for choosing the appropriate surgical plan for the reconstruction of perineal defects. METHODS: Our study included 245 LRC cases treated using APR. Guided by the few available publications in the field, we have designed several personalized decisional algorithms for managing perineal defects considering the following factors: preoperative radiotherapy, intraoperative position, surgical technique, perineal defect volume, and quality of tissues and perforators. The algorithms have been improved continuously during the entire period of our study based on the immediate and remote outcomes. RESULTS: Direct closure was performed in 239 patients following APR, versus 6 cases in which various types of flaps were used for perineal reconstruction. Perineal incisional hernia occurred in 12 patients (5.02%) with direct perineal wound closure versus none of those reconstructed using flaps. CONCLUSION: The reduced rate of postoperative complications suggests the efficiency of the proposed decisional algorithms; however, more extended studies are required to establish them as evidence-based management tools.


Subjects
Algorithms, Plastic Surgery Procedures, Rectal Neoplasms, Humans, Rectal Neoplasms/surgery, Plastic Surgery Procedures/methods, Male, Female, Middle Aged, Aged, Perineum/surgery, Adult, Aged, 80 and over, Proctectomy/methods, Surgical Flaps
6.
Biomimetics (Basel) ; 9(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38921251

ABSTRACT

This paper describes a novel bionic eye binocular vision system designed to mimic the natural movements of the human eye. The system provides a broader field of view and enhances visual perception in complex environments. Compared with similar bionic binocular cameras, the JEWXON BC200 bionic binocular camera developed in this study is more compact. It consumes only 2.8 W of power, which makes it ideal for mobile robots. Combining axis and camera rotation enables more seamless panoramic image synthesis and is therefore suitable for self-rotating bionic binocular cameras. In addition, combined with the YOLO-V8 model, the camera can accurately recognize objects such as clocks and keyboards. This research provides new ideas for the development of robotic vision systems.
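A minimal sketch of the object-recognition step the abstract pairs with the camera, using the Ultralytics YOLOv8 API on a single captured frame. The camera index and weights file are placeholders, and the Ultralytics and OpenCV packages are assumed to be installed; this is not the authors' integration code.

```python
# Illustrative sketch: running YOLOv8 detection on a frame from a (binocular) camera stream.
# Camera index and model weights are placeholders; ultralytics and opencv-python are assumed.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # pretrained COCO model (classes include clock, keyboard)
cap = cv2.VideoCapture(0)      # placeholder for the binocular camera stream

ret, frame = cap.read()
if ret:
    results = model(frame)     # run detection on one frame
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        print(cls_name, float(box.conf))
cap.release()
```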

7.
Tomography ; 10(6): 912-921, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38921946

ABSTRACT

Deep learning image reconstruction (DLIR) algorithms employ convolutional neural networks (CNNs) for CT image reconstruction to produce CT images with a very low noise level, even at a low radiation dose. The aim of this study was to assess whether the DLIR algorithm reduces the CT effective dose (ED) and improves CT image quality in comparison with filtered back projection (FBP) and iterative reconstruction (IR) algorithms in intensive care unit (ICU) patients. We identified all consecutive patients referred to the ICU of a single hospital who underwent at least two consecutive chest and/or abdominal contrast-enhanced CT scans within a time period of 30 days using DLIR and subsequently the FBP or IR algorithm (Advanced Modeled Iterative Reconstruction [ADMIRE] model-based algorithm or Adaptive Iterative Dose Reduction 3D [AIDR 3D] hybrid algorithm) for CT image reconstruction. The radiation ED, noise level, and signal-to-noise ratio (SNR) were compared between the different CT scanners. The non-parametric Wilcoxon test was used for statistical comparison. Statistical significance was set at p < 0.05. A total of 83 patients (mean age, 59 ± 15 years [standard deviation]; 56 men) were included. DLIR vs. FBP reduced the ED (18.45 ± 13.16 mSv vs. 22.06 ± 9.55 mSv, p < 0.05), while DLIR vs. FBP and vs. ADMIRE and AIDR 3D IR algorithms reduced image noise (8.45 ± 3.24 vs. 14.85 ± 2.73 vs. 14.77 ± 32.77 and 11.17 ± 32.77, p < 0.05) and increased the SNR (11.53 ± 9.28 vs. 3.99 ± 1.23 vs. 5.84 ± 2.74 and 3.58 ± 2.74, p < 0.05). CT scanners employing DLIR improved the SNR compared to CT scanners using FBP or IR algorithms in ICU patients despite maintaining a reduced ED.
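A small sketch of the paired non-parametric comparison described in the methods: per-patient SNR under DLIR versus FBP compared with the Wilcoxon signed-rank test. The arrays are simulated placeholders, not the study data.

```python
# Sketch of the paired non-parametric comparison described in the abstract:
# per-patient SNR under DLIR vs. FBP, compared with the Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
snr_dlir = rng.normal(11.5, 3.0, 83)   # hypothetical per-patient SNR, DLIR
snr_fbp = rng.normal(4.0, 1.2, 83)     # hypothetical per-patient SNR, FBP

stat, p = wilcoxon(snr_dlir, snr_fbp)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4g}")
```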


Subjects
Algorithms, Deep Learning, Radiation Dosage, Computer-Assisted Radiographic Image Interpretation, X-Ray Computed Tomography, Humans, Male, Female, X-Ray Computed Tomography/methods, Middle Aged, Aged, Computer-Assisted Radiographic Image Interpretation/methods, Critical Care/methods, Signal-to-Noise Ratio, Intensive Care Units, Retrospective Studies, Computer-Assisted Image Processing/methods, Adult
8.
JMIR Ment Health ; 11: e56529, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38861302

ABSTRACT

Recent breakthroughs in artificial intelligence (AI) language models have elevated the vision of using conversational AI support for mental health, with a growing body of literature indicating varying degrees of efficacy. In this paper, we ask when, in therapy, it will be easier to replace humans and, conversely, in what instances, human connection will still be more valued. We suggest that empathy lies at the heart of the answer to this question. First, we define different aspects of empathy and outline the potential empathic capabilities of humans versus AI. Next, we consider what determines when these aspects are needed most in therapy, both from the perspective of therapeutic methodology and from the perspective of patient objectives. Ultimately, our goal is to prompt further investigation and dialogue, urging both practitioners and scholars engaged in AI-mediated therapy to keep these questions and considerations in mind when investigating AI implementation in mental health.


Subjects
Artificial Intelligence, Empathy, Humans, Psychotherapy/methods, Mental Disorders/therapy, Mental Disorders/psychology
9.
Life (Basel) ; 14(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38929638

ABSTRACT

Artificial intelligence models represented in machine learning algorithms are promising tools for risk assessment used to guide clinical and other health care decisions. Machine learning algorithms, however, may house biases that propagate stereotypes, inequities, and discrimination that contribute to socioeconomic health care disparities. The biases include those related to some sociodemographic characteristics such as race, ethnicity, gender, age, insurance, and socioeconomic status from the use of erroneous electronic health record data. Additionally, there is concern that training data and algorithmic biases in large language models pose potential drawbacks. These biases affect the lives and livelihoods of a significant percentage of the population in the United States and globally. The social and economic consequences of the associated backlash should not be underestimated. Here, we outline some of the sociodemographic, training data, and algorithmic biases that undermine sound health care risk assessment and medical decision-making and that should be addressed in the health care system. We present a perspective and overview of these biases by gender, race, ethnicity, age, and historically marginalized communities, as well as algorithmic bias, biased evaluations, implicit bias, selection/sampling bias, socioeconomic status bias, biased data distributions, cultural bias, insurance status bias, confirmation bias, information bias, and anchoring bias. We make recommendations to improve large language model training data, including de-biasing techniques such as counterfactual role-reversed sentences during knowledge distillation, fine-tuning, prefix attachment at training time, the use of toxicity classifiers, retrieval-augmented generation, and algorithmic modification, to mitigate these biases moving forward.

10.
Stroke ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38920051

ABSTRACT

BACKGROUND: A recent review of randomization methods used in large multicenter clinical trials within the National Institutes of Health Stroke Trials Network identified preservation of treatment allocation randomness, achievement of the desired group size balance between treatment groups, achievement of baseline covariate balance, and ease of implementation in practice as critical properties required for optimal randomization designs. Common-scale minimal sufficient balance (CS-MSB) adaptive randomization effectively controls for covariate imbalance between treatment groups while preserving allocation randomness but does not balance group sizes. This study extends the CS-MSB adaptive randomization method to achieve both group size and covariate balance while preserving allocation randomness in hyperacute stroke trials. METHODS: A full factorial in silico simulation study evaluated the performance of the proposed new CSSize-MSB adaptive randomization method in achieving group size balance, covariate balance, and allocation randomness compared with the original CS-MSB method. Data from 4 existing hyperacute stroke trials were used to investigate the performance of CSSize-MSB for a range of sample sizes and covariate numbers and types. A discrete-event simulation model created with AnyLogic was used to dynamically visualize the decision logic of the CSSize-MSB randomization process for communication with clinicians. RESULTS: The proposed new CSSize-MSB algorithm uniformly outperformed the CS-MSB algorithm in controlling for group size imbalance while maintaining comparable levels of covariate balance and allocation randomness in hyperacute stroke trials. This improvement was consistent across a distribution of simulated trials with varying levels of imbalance but was increasingly pronounced for trials with extreme cases of imbalance. The results were consistent across a range of trial data sets of different sizes and covariate numbers and types. CONCLUSIONS: The proposed adaptive CSSize-MSB algorithm successfully controls for group size imbalance in hyperacute stroke trials under various settings, and its logic can be readily explained to clinicians using dynamic visualization.
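To make the general idea concrete, here is a conceptual sketch of a biased-coin allocation that only departs from pure 1:1 randomization when group-size imbalance exceeds a threshold. It illustrates the spirit of minimal-sufficient-balance randomization; it is not the authors' CSSize-MSB algorithm, and the threshold and biasing probability are arbitrary placeholders.

```python
# Conceptual sketch only: biased-coin allocation that preserves allocation randomness
# unless group-size imbalance exceeds a threshold. NOT the authors' CSSize-MSB algorithm.
import random

def allocate(n_treat, n_control, imbalance_threshold=4, biased_p=0.7):
    """Return 'T' or 'C' for the next participant."""
    diff = n_treat - n_control
    if abs(diff) < imbalance_threshold:
        p_treat = 0.5                                      # pure 1:1 randomization
    else:
        p_treat = biased_p if diff < 0 else 1 - biased_p   # nudge toward the smaller group
    return "T" if random.random() < p_treat else "C"

counts = {"T": 0, "C": 0}
for _ in range(200):
    arm = allocate(counts["T"], counts["C"])
    counts[arm] += 1
print(counts)
```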

11.
Bioengineering (Basel) ; 11(6)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38927813

ABSTRACT

BACKGROUND: Recent advancements in deep learning have significantly impacted ophthalmology, especially in glaucoma, a leading cause of irreversible blindness worldwide. In this study, we developed a reliable predictive model for glaucoma detection using deep learning models based on clinical data, social and behavioral risk factors, and demographic data from 1652 participants, split evenly between 826 control subjects and 826 glaucoma patients. METHODS: We extracted structured data from control and glaucoma patients' electronic health records (EHR). Three distinct machine learning classifiers, namely the Random Forest and Gradient Boosting algorithms and the Sequential model from the Keras library of TensorFlow, were employed to conduct predictive analyses across our dataset. Key performance metrics such as accuracy, F1 score, precision, recall, and the area under the receiver operating characteristics curve (AUC) were computed to both train and optimize these models. RESULTS: The Random Forest model achieved an accuracy of 67.5%, with a ROC AUC of 0.67, outperforming the Gradient Boosting and Sequential models, which registered accuracies of 66.3% and 64.5%, respectively. Our results highlighted key predictive factors such as intraocular pressure, family history, and body mass index, substantiating their roles in glaucoma risk assessment. CONCLUSIONS: This study demonstrates the potential of utilizing readily available clinical, lifestyle, and demographic data from EHRs for glaucoma detection through deep learning models. While our model, using EHR data alone, has a lower accuracy compared to those incorporating imaging data, it still offers a promising avenue for early glaucoma risk assessment in primary care settings. The observed disparities in model performance and feature significance underscore the importance of tailoring detection strategies to individual patient characteristics, potentially leading to more effective and personalized glaucoma screening and intervention.
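A minimal sketch of the kind of workflow reported here: a Random Forest classifier on EHR-derived features evaluated with accuracy and ROC AUC. The feature names and simulated labels are placeholders, not the study's EHR data.

```python
# Sketch of the reported workflow: Random Forest on EHR-derived features, evaluated
# with accuracy and ROC AUC. Feature names and data below are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(7)
n = 1652
df = pd.DataFrame({
    "intraocular_pressure": rng.normal(17, 4, n),
    "family_history": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 5, n),
    "age": rng.integers(40, 85, n),
})
y = rng.integers(0, 2, n)  # placeholder labels (the study had 826 glaucoma / 826 control)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("ROC AUC:", roc_auc_score(y_te, proba))
```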

12.
JMIR Form Res ; 8: e53806, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38857078

ABSTRACT

BACKGROUND: Sedentary behavior (SB) is one of the largest contributing factors increasing the risk of developing noncommunicable diseases, including cardiovascular disease and type 2 diabetes. Guidelines from the World Health Organization for physical activity suggest the substitution of SB with light physical activity. The Apple Watch contains a health metric known as the stand hour (SH). The SH is intended to record standing with movement for at least 1 minute per hour; however, the activity measured during the determination of the SH is unclear. OBJECTIVE: In this cross-sectional study, we analyzed the algorithm used to determine time spent standing per hour. To do this, we investigated activity measurements also recorded on Apple Watches that influence the recording of an SH. We also aimed to estimate the values of any significant predictors in the recording of an SH. METHODS: The cross-sectional study used anonymized data obtained in August 2022 from 20 healthy individuals gathered via convenience sampling. Apple Watch data were extracted from the Apple Health app through the use of a third-party app. Appropriate statistical models were fitted to analyze SH predictors. RESULTS: Our findings show that active energy (AE) and step count (SC) measurements influence the recording of an SH. Comparing when an SH is recorded with when an SH is not recorded, we found a significant difference in the mean and median AE and SC. Above a threshold of 97.5 steps or 100 kJ of energy, it became much more likely that an SH would be recorded when each predictor was analyzed as a separate entity. CONCLUSIONS: The findings of this study reveal the pivotal role of AE and SC measurements in the algorithm underlying the SH recording; however, our findings also suggest that the recording of an SH is influenced by more than one factor. Irrespective of the internal validity of the SH metric, it is representative of light physical activity and might, therefore, be useful for encouraging individuals, for example through notifications, to reduce their levels of SB.
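A sketch of the type of analysis described: a logistic regression of whether a stand hour was recorded on active energy and step count. The data are simulated placeholders; the approximate 97.5-step and 100 kJ thresholds quoted in the abstract are used only to generate them, not taken from the study dataset.

```python
# Sketch: logistic regression of SH recording on active energy (kJ) and step count.
# Data are simulated placeholders built around the thresholds quoted in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
steps = rng.uniform(0, 300, 1000)
energy_kj = rng.uniform(0, 250, 1000)
p_sh = 1 / (1 + np.exp(-(0.05 * (steps - 97.5) + 0.04 * (energy_kj - 100))))
sh_recorded = rng.binomial(1, p_sh)

X = np.column_stack([steps, energy_kj])
model = LogisticRegression().fit(X, sh_recorded)
print("coefficients (steps, energy):", model.coef_[0])
print("P(SH | 97.5 steps, 100 kJ):", model.predict_proba([[97.5, 100.0]])[0, 1])
```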

13.
Heliyon ; 10(11): e31631, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38828319

ABSTRACT

In this paper, a novel study on the way inter-individual information interacts in meta-heuristic algorithms (MHAs) is carried out using a scheme known as population interaction networks (PIN). Specifically, three representative MHAs, namely the differential evolution algorithm (DE), the particle swarm optimization algorithm (PSO), and the gravitational search algorithm (GSA), together with four classical variants of the GSA, are analyzed in terms of inter-individual information interactions and the differences in the performance of each algorithm on the IEEE Congress on Evolutionary Computation 2017 benchmark functions. The cumulative distribution function (CDF) of the node degree obtained by each algorithm on the benchmark functions is fitted to seven distribution models by using PIN. The results show that among the seven compared algorithms, the more powerful DE is more skewed towards the Poisson distribution, while the weaker PSO, GSA, and GSA variants are more skewed towards the Logistic distribution. The more a GSA variant deviates from the Logistic distribution, the stronger its performance. From the point of view of the CDF, deviating from the Logistic distribution facilitates the improvement of the GSA. Our findings suggest that the population interaction network is a powerful tool for characterizing and comparing the performance of different MHAs in a more comprehensive and meaningful way.
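A small sketch of the distribution-fitting step described: comparing how closely a node-degree sample from a population interaction network matches a Poisson versus a logistic distribution. The degree sample is a placeholder, and the Kolmogorov-Smirnov statistic is used here only as a simple goodness-of-fit comparison, not as the paper's exact procedure.

```python
# Sketch: compare Poisson vs. logistic fits to a node-degree sample via KS statistics.
# The degree sample is a placeholder, not data from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
degrees = rng.poisson(6, 500)  # placeholder node degrees

lam = degrees.mean()                                   # Poisson fit: lambda = sample mean
ks_poisson = stats.kstest(degrees, stats.poisson(lam).cdf).statistic

loc, scale = stats.logistic.fit(degrees)               # logistic fit by maximum likelihood
ks_logistic = stats.kstest(degrees, stats.logistic(loc, scale).cdf).statistic

print(f"KS vs Poisson:  {ks_poisson:.3f}")
print(f"KS vs logistic: {ks_logistic:.3f}  (smaller = closer fit)")
```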

14.
Int J Environ Health Res ; : 1-14, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38832892

ABSTRACT

Tuberculosis remains a global health challenge, and predicting its incidence is crucial for effective planning and intervention strategies. This study combines AutoRegressive Integrated Moving Average (ARIMA) and Nonlinear AutoRegressive with exogenous input (NARX) models as an innovative approach to TB incidence rate prediction. The performance of the proposed model (ARIMA-NARX) was evaluated using standard metrics (MSE, RMSE, MAE, and MAPE), and it was refined to achieve the best average predictive accuracy (MSE: 0.0622, RMSE: 0.0851, MAE: 0.07520, MAPE: 0.05535), followed by the NARX model (0.1597, 0.3189, 0.2724, and 0.3366) and the ARIMA (2,0,0) model (0.7781, 0.5959, 0.6524, and 0.6080). These findings are expected to shed light on the TB incidence rate, providing valuable information to policymakers such as the World Health Organization (WHO) and health professionals. The developed model can potentially serve as a predictive tool for proactive TB control and intervention strategies in the region and the world at large.
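A hedged sketch of a hybrid in the spirit of ARIMA-NARX: ARIMA captures the linear component, and a small neural network models the ARIMA residuals from lagged values plus an exogenous input. This is a generic illustration with simulated data and an arbitrary lag order, not the authors' exact model.

```python
# Hedged sketch of an ARIMA + neural-network hybrid (ARIMA-NARX spirit).
# ARIMA models the linear part; an MLP models the residuals from lags and an exogenous input.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(200)
exog = np.sin(t / 12)                                                # placeholder exogenous series
y = 10 + 0.5 * np.sin(t / 6) + 2 * exog + rng.normal(0, 0.3, 200)    # placeholder incidence series

arima = ARIMA(y, order=(2, 0, 0)).fit()
resid = arima.resid

lag = 3
X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)] + [exog[lag:]])
target = resid[lag:]
narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, target)

hybrid_fit = arima.fittedvalues[lag:] + narx.predict(X)
print("in-sample MSE:", np.mean((y[lag:] - hybrid_fit) ** 2))
```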

15.
Ann Med Surg (Lond) ; 86(6): 3233-3241, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38846869

ABSTRACT

Background: Hypothyroidism is one of the most common endocrine diseases. It is, however, usually challenging for physicians to diagnose due to nonspecific symptoms. The usual procedure for diagnosis of hypothyroidism is a blood test. In recent years, machine learning algorithms have proved to be powerful tools in medicine due to their diagnostic accuracy. In this study, the authors aim to predict and identify the most important symptoms of hypothyroidism using machine learning algorithms. Method: In this cross-sectional, single-center study, 1296 individuals who visited an endocrinologist for the first time with symptoms of hypothyroidism were studied, 676 of whom were identified as patients through thyroid-stimulating hormone testing. The outcome was binary (with hypothyroidism/without hypothyroidism). In a comparative analysis, random forest, decision tree, and logistic regression methods were used to diagnose primary hypothyroidism. Results: Symptoms such as tiredness, unusual cold feeling, yellow skin (jaundice), cold hands and feet, numbness of hands, loss of appetite, and weight gain were recognized as the most important symptoms in identifying hypothyroidism. Among the studied algorithms, random forest had the best performance in identifying these symptoms (accuracy=0.83, kappa=0.46, sensitivity=0.88, specificity=0.88). Conclusions: The findings suggest that machine learning methods can identify hypothyroidism patients who show relatively simple symptoms with acceptable accuracy without the need for a blood test. Greater familiarity and utilization of such methods by physicians may, therefore, reduce the expense and stress burden of clinical testing.
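For reference, the metrics reported here (accuracy, Cohen's kappa, sensitivity, specificity) follow directly from a binary confusion matrix. A short sketch with placeholder predictions:

```python
# Sketch: deriving accuracy, Cohen's kappa, sensitivity, and specificity from a
# binary confusion matrix. Predictions below are placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 300)                                   # 1 = hypothyroidism
y_pred = np.where(rng.random(300) < 0.85, y_true, 1 - y_true)      # mostly correct predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("kappa:      ", cohen_kappa_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```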

16.
Spectrochim Acta A Mol Biomol Spectrosc ; 320: 124595, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38850828

ABSTRACT

The abuse of antibiotics has caused a gradual increase in drug-resistant bacterial strains that pose health risks. Herein, a sensitive SERS sensor coupled with multivariate calibration was proposed for quantification of antibiotics in milk. Initially, octahedral gold-silver nanocages (Au@Ag MCs) were synthesized by a Cu2O template etching method as SERS substrates, which enhanced the plasmonic effect through sharp edges and hollow nanostructures. Afterwards, five chemometric algorithms, namely partial least squares (PLS), uninformative variable elimination-PLS (UVE-PLS), competitive adaptive reweighted sampling-PLS (CARS-PLS), random frog-PLS (RF-PLS), and a convolutional neural network (CNN), were applied to model TTC and CAP. RF-PLS performed optimally for TTC and CAP (Rc = 0.9686, Rp = 0.9648, RPD = 3.79 for TTC and Rc = 0.9893, Rp = 0.9878, RPD = 5.88 for CAP). Furthermore, a detection limit of 0.0001 µg/mL was obtained for both TTC and CAP. Finally, results were in satisfactory agreement (p > 0.05) with the standard HPLC method. Therefore, SERS combined with RF-PLS could be applied for fast, nondestructive sensing of TTC and CAP in milk.
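A minimal sketch of the core chemometric step: PLS regression of antibiotic concentration on SERS spectra, reporting the prediction correlation (Rp) and RPD, taken here as the standard deviation of the reference values divided by the RMSEP. The spectra and concentrations are simulated placeholders, and plain PLS is used rather than the paper's variable-selection variants.

```python
# Sketch: PLS regression of concentration on spectra, with Rp and RPD = SD(y) / RMSEP.
# Spectra and concentrations are simulated placeholders, not the study data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
conc = rng.uniform(0.0001, 1.0, 120)                        # placeholder concentrations (ug/mL)
spectra = np.outer(conc, rng.normal(1, 0.2, 600)) + rng.normal(0, 0.05, (120, 600))

X_cal, X_pred, y_cal, y_ref = train_test_split(spectra, conc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()

rmsep = np.sqrt(np.mean((y_ref - y_hat) ** 2))
rp = np.corrcoef(y_ref, y_hat)[0, 1]
print(f"Rp = {rp:.4f}, RMSEP = {rmsep:.4f}, RPD = {np.std(y_ref) / rmsep:.2f}")
```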

17.
J Emerg Med ; 2024 Apr 07.
Article in English | MEDLINE | ID: mdl-38839452

ABSTRACT

BACKGROUND: The Shock Index (SI) is emerging as a potentially useful measure among children with injury or suspected sepsis. OBJECTIVE: The aim of this study was to evaluate the distribution of the SI and evaluate its association with clinical outcomes among all children presenting to the emergency department (ED). METHODS: A complex survey of nonfederal U.S. ED encounters from 2016 through 2021 was analyzed. Among children, the Pediatric Age-Adjusted Shock Index (SIPA), Pediatric Shock Index (PSI), and the Temperature- and Age-Adjusted Shock Index (TAMSI) were analyzed. The association of these criteria with disposition, acuity, medication administration, diagnoses and procedures was analyzed. RESULTS: A survey-weighted 81.5 million ED visits were included for children aged 4-16 years and 117.2 million visits were included for children aged 1-12 years. SI could be calculated for 78.6% of patients aged 4-16 years and 57.9% of patients aged 1-12 years. An abnormal SI was present in 15.9%, 11.1%, and 31.7% when using the SIPA, PSI, and TAMSI, respectively. With all criteria, an elevated SI was associated with greater hospitalization. The SIPA and PSI were associated with triage acuity. All criteria were associated with medical interventions, including provision of IV fluids and acquisition of blood cultures. CONCLUSIONS: An elevated SI is indicative of greater resource utilization needs among children in the ED. When using any criteria, an elevated SI was associated with clinically important outcomes. Further research is required to evaluate the distribution of the SI in children and to investigate its potential role within existing triage algorithms for children in the ED.
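The underlying computation is simple: shock index = heart rate divided by systolic blood pressure, flagged against age-banded cutoffs. The sketch below uses commonly cited SIPA-style cutoffs as illustrative assumptions; they are not taken from this study's methods.

```python
# Sketch: shock index (SI) = heart rate / systolic blood pressure, flagged against
# age-banded cutoffs. The cutoffs are commonly cited SIPA-style values used here only
# as illustrative assumptions, not values reported by this study.
def shock_index(heart_rate, systolic_bp):
    return heart_rate / systolic_bp

def sipa_elevated(age_years, heart_rate, systolic_bp):
    if not 4 <= age_years <= 16:
        raise ValueError("SIPA is applied to ages 4-16 in this sketch")
    si = shock_index(heart_rate, systolic_bp)
    if age_years <= 6:
        cutoff = 1.22      # assumed cutoff, ages 4-6
    elif age_years <= 12:
        cutoff = 1.0       # assumed cutoff, ages 7-12
    else:
        cutoff = 0.9       # assumed cutoff, ages 13-16
    return si, si > cutoff

print(sipa_elevated(age_years=5, heart_rate=140, systolic_bp=100))   # (1.4, True)
print(sipa_elevated(age_years=14, heart_rate=95, systolic_bp=115))   # (~0.83, False)
```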

18.
J Diabetes Sci Technol ; : 19322968241256475, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840523

ABSTRACT

BACKGROUND: Hybrid Closed-Loop Systems (HCLs) may not perform optimally on postprandial glucose control. We evaluated how first-generation and advanced HCLs manage meals varying in carbohydrates, fat, and protein. METHOD: According to a cross-sectional design, seven-day food records and HCLs reports from 120 adults with type 1 diabetes (MiniMed670G: n = 40, MiniMed780G: n = 49, Control-IQ [C-IQ]: n = 31) were analyzed. Breakfasts (n = 570), lunches (n = 658), and dinners (n = 619) were divided according to the median of their carbohydrate (g)/fat (g) plus protein (g) ratio (C/FP). After breakfast (4-hour), lunch (6-hour), and dinner (6-hour), continuous glucose monitoring (CGM) metrics, early and late glucose incremental area under the curves (iAUCs), and delivered insulin doses were evaluated. The association of C/FP and HCLs with postprandial glucose and insulin patterns was analyzed by univariate analysis of variance (ANOVA) with a two-factor design. RESULTS: Postprandial glucose time-in-range 70 to 180 mg/dL was optimal after breakfast (78.3 ± 26.9%), lunch (72.7 ± 26.1%), and dinner (70.8 ± 27.3%), with no significant differences between HCLs. Independent of C/FP, late glucose-iAUC after lunch was significantly lower in C-IQ users than in 670G and 780G users (P < .05), with no significant differences at breakfast and dinner. The postprandial insulin pattern (Ins3-6h minus Ins0-3h) differed by type of HCL at lunch (P = .026) and dinner (P < .001), with the early insulin dose (Ins0-3h) being higher than the late dose (Ins3-6h) in 670G and 780G users and the opposite pattern in C-IQ users. CONCLUSIONS: Independent of different proportions of dietary carbohydrates, fat, and protein, the postprandial glucose response was similar in users of different HCLs, although obtained through different automatic insulin delivery patterns.
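A short sketch of two quantities defined in the methods: the meal carbohydrate/(fat + protein) ratio (C/FP), split at its median, and an incremental glucose AUC computed from CGM readings by trapezoidal integration above the pre-meal value. The meal compositions and CGM trace are placeholders.

```python
# Sketch: C/FP ratio split at the median, and incremental glucose AUC (iAUC) by
# trapezoidal integration above the pre-meal value. Data are placeholders.
import numpy as np

meals = np.array([[60, 20, 25], [45, 35, 30], [90, 10, 15], [30, 40, 35]])  # carbs, fat, protein (g)
cfp = meals[:, 0] / (meals[:, 1] + meals[:, 2])
high_cfp = cfp > np.median(cfp)
print("C/FP:", cfp.round(2), "high-C/FP meals:", high_cfp)

t = np.arange(0, 241, 5)                                # minutes after the meal (4 h)
glucose = 120 + 60 * np.exp(-((t - 60) / 50) ** 2)      # placeholder CGM trace (mg/dL)
baseline = glucose[0]
iauc = np.trapz(np.clip(glucose - baseline, 0, None), t)
print(f"4-hour glucose iAUC: {iauc:.0f} mg/dL*min")
```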

19.
IUCrJ ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38842120

ABSTRACT

Crystallography is a quintessential method for determining the atomic structure of crystals. The most common implementation of crystallography uses single crystals that must be of sufficient size, typically tens of micrometres or larger, depending on the complexity of the crystal structure. The emergence of serial data-collection methods in crystallography, particularly for time-resolved experiments, opens up opportunities to develop new routes to structure determination for nanocrystals and ensembles of crystals. Fluctuation X-ray scattering is a correlation-based approach for single-particle imaging from ensembles of identical particles, but has yet to be applied to crystal structure determination. Here, an iterative algorithm is presented that recovers crystal structure-factor intensities from fluctuation X-ray scattering correlations. The capabilities of this algorithm are demonstrated by recovering the structure of three small-molecule crystals and a protein crystal from simulated fluctuation X-ray scattering correlations. This method could facilitate the recovery of structure-factor intensities from crystals in serial crystallography experiments and relax sample requirements for crystallography experiments.

20.
Sci Rep ; 14(1): 12690, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830916

ABSTRACT

A random initialization of the search particles is a strong argument in favor of the deployment of nature-inspired metaheuristic algorithms when knowledge of a good initial guess is lacking. This article analyses the impact of the type of randomization on the working of algorithms and the acquired solutions. In this study, five different types of randomization are applied to the Accelerated Particle Swarm Optimization (APSO) and Squirrel Search Algorithm (SSA) during the initialization and proceedings of the search particles for selective harmonics elimination (SHE). The types of randomization include exponential, normal, Rayleigh, uniform, and Weibull characteristics. The statistical analysis shows that the type of randomization does impact the working of optimization algorithms and the fittest value of the objective function.
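A minimal sketch of the initialization step being compared: drawing the initial swarm positions from different distributions and clipping them to the search bounds. This shows the generic idea only; the distribution parameters are arbitrary placeholders and the selective-harmonics-elimination objective from the paper is not reproduced.

```python
# Sketch: initializing a swarm from different random distributions, clipped to bounds.
# Distribution parameters are placeholders; the SHE objective is not reproduced.
import numpy as np

def init_swarm(n_particles, dim, low, high, kind="uniform", seed=0):
    rng = np.random.default_rng(seed)
    if kind == "uniform":
        pos = rng.uniform(low, high, (n_particles, dim))
    elif kind == "normal":
        pos = rng.normal((low + high) / 2, (high - low) / 6, (n_particles, dim))
    elif kind == "exponential":
        pos = low + rng.exponential((high - low) / 3, (n_particles, dim))
    elif kind == "rayleigh":
        pos = low + rng.rayleigh((high - low) / 3, (n_particles, dim))
    elif kind == "weibull":
        pos = low + (high - low) * rng.weibull(1.5, (n_particles, dim))
    else:
        raise ValueError(kind)
    return np.clip(pos, low, high)

for kind in ["uniform", "normal", "exponential", "rayleigh", "weibull"]:
    swarm = init_swarm(30, 5, low=0.0, high=np.pi / 2, kind=kind)
    print(kind, swarm.mean().round(3))
```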
