1.
Article in English | MEDLINE | ID: mdl-38868706

ABSTRACT

Background and Aim: Endoscopic ultrasound shear wave elastography (EUS-SWE) can facilitate an objective evaluation of pancreatic fibrosis. Although it is primarily applied in evaluating chronic pancreatitis, its efficacy in assessing early chronic pancreatitis (ECP) remains underinvestigated. This study evaluated the diagnostic accuracy of EUS-SWE for assessing ECP diagnosed using the Japanese diagnostic criteria 2019. Methods: In total, 657 patients underwent EUS-SWE. Propensity score matching was used, and the participants were classified into the ECP and normal groups. ECP was diagnosed using the Japanese diagnostic criteria 2019. Pancreatic stiffness was assessed based on velocity (Vs) on EUS-SWE, and the optimal Vs cutoff value for ECP diagnosis was determined. A practical shear wave Vs value of ≥50% was considered significant. Results: Each group included 22 patients. The ECP group had higher pancreatic stiffness than the normal group (2.31 ± 0.67 m/s vs. 1.59 ± 0.40 m/s, p < 0.001). The Vs cutoff value for the diagnostic accuracy of ECP, as determined using the receiver operating characteristic curve, was 2.24 m/s, with an area under the curve of 0.82 (95% confidence interval: 0.69-0.94). A high Vs was strongly correlated with the number of EUS findings (rs = 0.626, p < 0.001). Multiple regression analysis revealed that a history of acute pancreatitis and ≥2 EUS findings were independent predictors of a high Vs. Conclusions: There is a strong correlation between EUS-SWE findings and the Japanese diagnostic criteria 2019 for ECP. Hence, EUS-SWE can be an objective and invaluable diagnostic tool for ECP diagnosis.
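The cutoff-selection step described above — choosing the Vs value that best separates ECP from normal on a receiver operating characteristic curve — can be sketched with scikit-learn. The data below are synthetic draws using the group means and standard deviations reported in the abstract, not the study's data, and Youden's J statistic is one common (assumed) criterion for the optimal cutoff.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Synthetic shear-wave velocities (m/s): normal vs. early chronic pancreatitis
vs_normal = rng.normal(1.59, 0.40, 22)
vs_ecp = rng.normal(2.31, 0.67, 22)
y = np.r_[np.zeros(22), np.ones(22)]
vs = np.r_[vs_normal, vs_ecp]

fpr, tpr, thresholds = roc_curve(y, vs)
roc_auc = auc(fpr, tpr)
# Youden's J picks the cutoff maximizing sensitivity + specificity - 1
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {roc_auc:.2f}, optimal Vs cutoff = {cutoff:.2f} m/s")
```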

2.
Clin Transl Oncol ; 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965192

ABSTRACT

BACKGROUND: To develop and validate a serum protein nomogram for colorectal cancer (CRC) screening. METHODS: The serum protein characteristics were extracted from an independent sample containing 30 colorectal cancer and 12 polyp tissues along with their paired samples, and differential serum protein expression profiles were validated using RNA microarrays. The prediction model was developed in a training cohort that included 1345 patients with clinicopathologically confirmed CRC and 518 normal participants, with data gathered from November 2011 to January 2017. The lasso logistic regression model was employed for feature selection and serum nomogram building. An internal validation cohort containing 576 CRC patients and 222 normal participants was assessed. RESULTS: Serum signatures containing 27 secreted proteins were significantly differentially expressed in polyps and CRC compared to paired normal tissue, and REG family proteins were selected as potential predictors. The C-index of nomogram1 (based on the lasso logistic regression model), which contains REG1A, REG3A, CEA, and age, was 0.913 (95% CI, 0.899 to 0.928), and the model was well calibrated. Addition of CA19-9 to the nomogram failed to show incremental prognostic value, as shown in nomogram2 (based on the logistic regression model). Application of nomogram1 in the independent validation cohort showed similar discrimination (C-index, 0.912 [95% CI, 0.890 to 0.934]) and good calibration. Decision curve analysis (DCA) and clinical impact curve (ICI) analyses demonstrated that nomogram1 was clinically useful. CONCLUSIONS: This study presents a serum nomogram including REG1A, REG3A, CEA, and age, which may be convenient for colorectal cancer screening.
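A lasso (L1-penalized) logistic regression of the kind used to build nomogram1 can be sketched as follows; the markers and data here are simulated stand-ins, not the study's REG1A/REG3A/CEA measurements, and the penalty strength is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
# Hypothetical serum markers; columns 0-2 carry signal, the rest are noise
X = rng.normal(size=(n, 10))
logit = 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs != 0)       # markers the lasso kept
auc_in_sample = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("selected feature indices:", selected)
print(f"in-sample C-index (AUC): {auc_in_sample:.3f}")
```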

3.
Sci Rep ; 14(1): 15254, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956185

ABSTRACT

Maritime objects frequently exhibit low-quality and insufficient feature information, particularly in complex maritime environments characterized by challenges such as small objects, waves, and reflections. This situation poses significant challenges to reliable object detection, including the loss-function strategies and feature-understanding capabilities of common YOLOv8 (You Only Look Once) detectors. Furthermore, the widespread adoption and unmanned operation of intelligent ships have placed increasing demands on the computational efficiency and cost of object detection hardware, necessitating the development of more lightweight network architectures. This study proposes the EL-YOLO (Efficient Lightweight You Only Look Once) algorithm based on YOLOv8, designed specifically for intelligent ship object detection. EL-YOLO incorporates novel features, including adequate wise IoU (AWIoU) for improved bounding box regression, a shortcut multi-fuse neck (SMFN) for comprehensive analysis of features, and greedy-driven filter pruning (GDFP) to achieve a streamlined and lightweight network design. The findings of this study demonstrate notable advancements in both detection accuracy and lightweight characteristics across diverse maritime scenarios. EL-YOLO exhibits superior performance in intelligent ship object detection using RGB cameras, showing a significant improvement over standard YOLOv8 models.
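The greedy-driven filter pruning (GDFP) idea — scoring convolutional filters and discarding the least important to obtain a lighter network — can be illustrated framework-agnostically. The L1-norm saliency below is a common pruning criterion and an assumption here, not necessarily the paper's exact score.

```python
import numpy as np

rng = np.random.default_rng(2)
# A mock conv layer: 32 filters of shape (in_channels=16, kH=3, kW=3)
weights = rng.normal(size=(32, 16, 3, 3))

def prune_filters(w, keep_ratio):
    """Rank filters by L1 norm and keep only the strongest fraction."""
    saliency = np.abs(w).sum(axis=(1, 2, 3))          # one score per filter
    n_keep = max(1, int(round(keep_ratio * w.shape[0])))
    keep = np.sort(np.argsort(saliency)[::-1][:n_keep])
    return w[keep], keep

pruned, kept = prune_filters(weights, keep_ratio=0.5)
print(pruned.shape)  # (16, 16, 3, 3)
```

In a real network, pruning a filter also removes the corresponding input channel of the next layer, followed by fine-tuning to recover accuracy.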

4.
Cancer Radiother ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38981746

ABSTRACT

PURPOSE: This study aimed to develop nomograms that combine clinical factors and MRI tumour regression grade to predict the pathological response of mid-low locally advanced rectal cancer to neoadjuvant chemoradiotherapy. METHODS: The retrospective study included 204 patients who underwent neoadjuvant chemoradiotherapy and surgery between January 2013 and December 2021. Based on pathological tumour regression grade, patients were categorized into four groups: complete pathological response (pCR, n=45), non-complete pathological response (non-pCR, n=159), good pathological response (pGR, n=119), and non-good pathological response (non-pGR, n=85). The patients were divided into a training set and a validation set in a 7:3 ratio. Based on the results of univariate and multivariate analyses in the training set, two nomograms were constructed to predict complete and good pathological responses, respectively. Subsequently, these predictive models underwent validation in the independent validation set. The prognostic performances of the models were evaluated using the area under the curve (AUC). RESULTS: The nomogram predicting complete pathological response incorporates tumour length, post-treatment mesorectal fascia involvement, white blood cell count, and MRI tumour regression grade. It yielded an AUC of 0.787 in the training set and 0.716 in the validation set, surpassing the performance of the model relying solely on MRI tumour regression grade (AUCs of 0.649 and 0.530, respectively). Similarly, the nomogram predicting good pathological response includes the distance of the tumour's lower border from the anal verge, post-treatment mesorectal fascia involvement, platelet/lymphocyte ratio, and MRI tumour regression grade. It achieved an AUC of 0.754 in the training set and 0.719 in the validation set, outperforming the model using MRI tumour regression grade alone (AUCs of 0.629 and 0.638, respectively).
CONCLUSIONS: Nomograms combining MRI tumour regression grade with clinical factors may be useful for predicting pathological response of mid-low locally advanced rectal cancer to neoadjuvant chemoradiotherapy. The proposed models could be applied in clinical practice after validation in large samples.

5.
Genet Epidemiol ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982682

ABSTRACT

The prediction of the susceptibility of an individual to a certain disease is an important and timely research area. An established technique is to estimate the risk of an individual with the help of an integrated risk model, that is, a polygenic risk score with added epidemiological covariates. However, integrated risk models do not capture any time dependence and provide only a point estimate of the relative risk with respect to a reference population. The aim of this work is twofold. First, we explore and advocate the idea of predicting the time-dependent hazard and survival (defined as disease-free time) of an individual for the onset of a disease. This provides a practitioner with a much more differentiated view of absolute survival as a function of time. Second, to compute the time-dependent risk of an individual, we use published methodology to fit a Cox proportional hazards model to data from a genetic SNP study of time to Alzheimer's disease (AD) onset, using the lasso to incorporate further epidemiological variables such as sex, APOE (apolipoprotein E, a genetic risk factor for AD) status, 10 leading principal components, and selected genomic loci. We apply the lasso for Cox proportional hazards to a data set of 6792 subjects (4102 AD cases and 2690 controls) and 87 covariates. We demonstrate that fitting a lasso model for Cox proportional hazards allows one to obtain more accurate survival curves than with state-of-the-art (likelihood-based) methods. Moreover, the methodology allows one to obtain personalized survival curves for a patient, thus giving a much more differentiated view of the expected progression of a disease than the view offered by integrated risk models. The runtime to compute personalized survival curves is under a minute for the entire AD data set, thus enabling the method to handle datasets with 60,000-100,000 subjects in less than 1 h.
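Once lasso-penalized Cox coefficients are in hand, a personalized survival curve follows from the baseline hazard. A minimal sketch using the Breslow estimator on simulated data (the coefficients are placeholders, not fitted AD-study values):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([0.8, -0.5, 0.0, 0.3, 0.0])    # pretend these came from a lasso-Cox fit
times = rng.exponential(1.0 / np.exp(X @ beta))
events = rng.random(n) < 0.8                   # ~20% right-censoring

def breslow_survival(x_new, X, beta, times, events):
    """Breslow baseline hazard, then S(t|x) = S0(t) ** exp(x . beta)."""
    risk = np.exp(X @ beta)
    event_times = np.sort(times[events])
    # increment of baseline cumulative hazard at each event time
    h0 = np.array([1.0 / risk[times >= t].sum() for t in event_times])
    H0 = np.cumsum(h0)
    return event_times, np.exp(-H0) ** np.exp(x_new @ beta)

t_grid, surv = breslow_survival(np.zeros(p), X, beta, times, events)
print(f"S at first/last event time: {surv[0]:.3f} / {surv[-1]:.3f}")
```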

6.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000970

ABSTRACT

Machine learning (ML) methods are widely used in particulate matter prediction modelling, especially through the use of air quality sensor data. Despite their advantages, these methods' black-box nature obscures the understanding of how a prediction has been made. Major issues with these types of models include data quality and computational intensity. In this study, we employed feature selection methods using recursive feature elimination and global sensitivity analysis for a random-forest (RF)-based land-use regression model developed for the city of Berlin, Germany. Land-use-based predictors, including local climate zones, leaf area index, daily traffic volume, population density, building types, building heights, and street types, were used to create a baseline RF model. Five additional models, three using the recursive feature elimination method and two using a Sobol-based global sensitivity analysis (GSA), were implemented, and their performance was compared against that of the baseline RF model. The predictors that had a large effect on the prediction, as determined using both methods, are discussed. Through feature elimination, the number of predictors was reduced from 220 in the baseline model to eight in the parsimonious models without sacrificing model performance. The model metrics were compared, which showed that the GSA-based parsimonious model performs better than the baseline model, reducing the mean absolute error (MAE) from 8.69 µg/m3 to 3.6 µg/m3 and the root mean squared error (RMSE) from 9.86 µg/m3 to 4.23 µg/m3 when applying the trained model to reference station data. The better performance of the GSA-based parsimonious model is made possible by curtailing the uncertainties propagated through the model via the reduction of multicollinear and redundant predictors. The parsimonious model validated against reference stations was able to predict PM2.5 concentrations with an MAE of less than 5 µg/m3 for 10 out of 12 locations.
The GSA-based parsimonious model performed best on all model metrics and improved the R2 from 3% in the baseline model to 17%. However, the predictions exhibited a degree of uncertainty, making the model unreliable for regional-scale modelling. It can nevertheless be adapted to local scales to highlight the land-use parameters that are indicative of PM2.5 concentrations in Berlin. Overall, population density, leaf area index, and traffic volume are the major predictors of PM2.5, while building type and local climate zones are less significant predictors. Feature selection based on sensitivity analysis has a large impact on model performance. Optimising models through sensitivity analysis can enhance the interpretability of model dynamics and potentially reduce computational costs and time when modelling is performed for larger areas.
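The recursive feature elimination step — repeatedly dropping the least important predictors of a random-forest model — can be sketched with scikit-learn; the synthetic data stand in for the 220 land-use predictors, and the target of eight features mirrors the study's parsimonious models.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Synthetic stand-in for the land-use predictors (220 -> 8 in the study)
X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
# step=0.25: remove the weakest 25% of remaining features per iteration
selector = RFE(rf, n_features_to_select=8, step=0.25).fit(X, y)
kept = np.flatnonzero(selector.support_)
print("kept predictors:", kept)
```

A Sobol-based GSA alternative would rank predictors by their contribution to output variance instead of impurity-based importance.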

7.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001041

ABSTRACT

Hyperspectral imaging was used to predict the total polyphenol content in low-temperature stressed tomato seedlings for the development of a multispectral image sensor. The spectral data with a full width at half maximum (FWHM) of 5 nm were merged to obtain FWHMs of 10 nm, 25 nm, and 50 nm using a commercialized bandpass filter. Using the permutation importance method and regression coefficients, we developed the least absolute shrinkage and selection operator (Lasso) regression models by setting the band number to ≥11, ≤10, and ≤5 for each FWHM. The regression model using 56 bands with an FWHM of 5 nm resulted in an R2 of 0.71, an RMSE of 3.99 mg/g, and an RE of 9.04%, whereas the model developed using the spectral data of only 5 bands with an FWHM of 25 nm (at 519.5 nm, 620.1 nm, 660.3 nm, 719.8 nm, and 980.3 nm) provided an R2 of 0.62, an RMSE of 4.54 mg/g, and an RE of 10.3%. These results show that a multispectral image sensor can be developed to predict the total polyphenol content of tomato seedlings subjected to low-temperature stress, paving the way for energy saving and low-temperature stress damage prevention in vegetable seedling production.
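The band-merging and Lasso-selection workflow can be sketched as follows; the spectra are simulated, and averaging adjacent 5 nm bands is an assumed approximation of a wider-FWHM bandpass filter.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_samples, n_bands = 120, 56              # 56 bands at 5 nm FWHM, as in the study
X5 = rng.normal(size=(n_samples, n_bands))
# Hypothetical polyphenol signal carried by a handful of bands
y = X5[:, [5, 20, 28, 40, 55]].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

def merge_bands(X, factor):
    """Average groups of adjacent narrow bands to emulate a wider-FWHM filter."""
    n = (X.shape[1] // factor) * factor
    return X[:, :n].reshape(X.shape[0], -1, factor).mean(axis=2)

X25 = merge_bands(X5, factor=5)           # ~25 nm FWHM from 5 nm data
model = Lasso(alpha=0.05).fit(X25, y)
print("non-zero merged bands:", np.flatnonzero(model.coef_))
```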


Subjects
Hyperspectral Imaging; Polyphenols; Seedlings; Solanum lycopersicum; Solanum lycopersicum/chemistry; Solanum lycopersicum/growth & development; Polyphenols/analysis; Seedlings/chemistry; Hyperspectral Imaging/methods; Cold Temperature
8.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001115

ABSTRACT

In the field of autofocus for optical systems, although passive focusing methods are widely used due to their cost-effectiveness, fixed focusing windows and evaluation functions in certain scenarios can still lead to focusing failures. Additionally, the lack of datasets limits extensive research on deep learning methods. In this work, we propose a neural network autofocus method with the capability of dynamically selecting the region of interest (ROI). Our main work is as follows: first, we construct a dataset for automatic focusing of grayscale images; second, we transform the autofocus issue into an ordinal regression problem and propose two focusing strategies: full-stack search and single-frame prediction; and third, we construct a MobileViT network with a linear self-attention mechanism to achieve automatic focusing on dynamic regions of interest. The effectiveness of the proposed focusing method is verified through experiments, and the results show that the focusing MAE of the full-stack search can be as low as 0.094, with a focusing time of 27.8 ms, and the focusing MAE of the single-frame prediction can be as low as 0.142, with a focusing time of 27.5 ms.

9.
Sensors (Basel) ; 24(13)2024 Jul 07.
Article in English | MEDLINE | ID: mdl-39001177

ABSTRACT

The cognitive state of a person can be categorized using the circumplex model of emotional states, a continuous model of two dimensions: arousal and valence. The purpose of this research is to select a machine learning model(s) to be integrated into a virtual reality (VR) system that runs cognitive remediation exercises for people with mental health disorders. As such, the prediction of emotional states is essential to customize treatments for those individuals. We exploit the Remote Collaborative and Affective Interactions (RECOLA) database to predict arousal and valence values using machine learning techniques. RECOLA includes audio, video, and physiological recordings of interactions between human participants. To allow learners to focus on the most relevant data, features are extracted from raw data. Such features can be predesigned, learned, or extracted implicitly using deep learners. Our previous work on video recordings focused on predesigned and learned visual features. In this paper, we extend our work onto deep visual features. Our deep visual features are extracted using the MobileNet-v2 convolutional neural network (CNN) that we previously trained on RECOLA's video frames of full/half faces. As the final purpose of our work is to integrate our solution into a practical VR application using head-mounted displays, we experimented with half faces as a proof of concept. The extracted deep features were then used to predict arousal and valence values via optimizable ensemble regression. We also fused the extracted visual features with the predesigned visual features and predicted arousal and valence values using the combined feature set. In an attempt to enhance our prediction performance, we further fused the predictions of the optimizable ensemble model with the predictions of the MobileNet-v2 model. 
After decision fusion, we achieved a root mean squared error (RMSE) of 0.1140, a Pearson's correlation coefficient (PCC) of 0.8000, and a concordance correlation coefficient (CCC) of 0.7868 on arousal predictions. We achieved an RMSE of 0.0790, a PCC of 0.7904, and a CCC of 0.7645 on valence predictions.
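The reported metrics — RMSE, Pearson's correlation coefficient (PCC), and the concordance correlation coefficient (CCC) — and a simple averaging form of decision fusion can be computed as below; the predictions are simulated, and plain averaging is an assumption, not necessarily the paper's fusion rule.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

rng = np.random.default_rng(5)
truth = rng.normal(size=200)                  # e.g. continuous arousal annotations
pred_a = truth + 0.3 * rng.normal(size=200)   # e.g. ensemble-regression predictions
pred_b = truth + 0.4 * rng.normal(size=200)   # e.g. CNN predictions
fused = (pred_a + pred_b) / 2                 # decision-level fusion by averaging

rmse = np.sqrt(np.mean((truth - fused) ** 2))
pcc = np.corrcoef(truth, fused)[0, 1]
print(f"RMSE={rmse:.3f}, PCC={pcc:.3f}, CCC={ccc(truth, fused):.3f}")
```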


Subjects
Arousal; Emotions; Neural Networks, Computer; Humans; Emotions/physiology; Arousal/physiology; Machine Learning; Virtual Reality; Female; Male; Deep Learning; Adult
10.
Diagnostics (Basel) ; 14(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001287

ABSTRACT

BACKGROUND: Audiological diagnosis and rehabilitation often involve the assessment of whether the maximum speech identification score (PBmax) is poorer than expected from the pure-tone average (PTA) threshold. This requires the estimation of the lower boundary of the PBmax values expected for a given PTA (one-tailed 95% confidence limit, CL). This study compares the accuracy and consistency of three methods for estimating the 95% CL. METHOD: The 95% CL values were estimated using a simulation method, the Harrell-Davis (HD) estimator, and non-linear quantile regression (nQR); the latter two are both distribution-free methods. The first two methods require the formation of sub-groups with different PTAs. Accuracy and consistency in the estimation of the 95% CL were assessed by applying each method to many random samples of 50% of the available data and using the fitted parameters to predict the data for the remaining 50%. STUDY SAMPLE: A total of 642 participants aged 17 to 84 years with sensorineural hearing loss were recruited from audiology clinics. Pure-tone audiograms were obtained and PBmax scores were measured using monosyllables at 40 dB above the speech recognition threshold or at the most comfortable level. RESULTS: For the simulation method, 6.7 to 8.2% of the PBmax values fell below the 95% CL for both ears, exceeding the target value of 5%. For the HD and nQR methods, the PBmax values fell below the estimated 95% CL for approximately 5% of the ears, indicating good accuracy. Consistency, estimated from the standard deviation of the deviations from the target value of 5%, was similar for all the methods. CONCLUSIONS: The nQR method is recommended because it has good accuracy and consistency, and it does not require the formation of arbitrary PTA sub-groups.
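A distribution-free estimate of the lower 95% CL amounts to modeling the 5th percentile of PBmax as a function of PTA. The sketch below uses gradient-boosted quantile regression on simulated audiometric data; it illustrates the idea rather than the study's specific HD or nQR estimators.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n = 1000
pta = rng.uniform(10, 80, n)                  # pure-tone average, dB HL (simulated)
# Synthetic PBmax: declines with PTA, with heteroscedastic noise
pbmax = np.clip(100 - 0.6 * pta + rng.normal(0, 5 + 0.1 * pta, n), 0, 100)

# Model the 5th percentile of PBmax given PTA (one-tailed 95% CL)
q05 = GradientBoostingRegressor(loss="quantile", alpha=0.05,
                                n_estimators=200, max_depth=2)
q05.fit(pta.reshape(-1, 1), pbmax)
lower_cl = q05.predict(pta.reshape(-1, 1))
below = (pbmax < lower_cl).mean()
print(f"fraction below estimated 95% CL: {below:.3f}")
```

A well-calibrated lower boundary should leave roughly 5% of the PBmax values below it, which is the accuracy criterion the study applies.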

11.
Cancers (Basel) ; 16(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39001444

ABSTRACT

Selenoprotein P (SELENOP) acts as a crucial mediator, distributing selenium from the liver to other tissues within the body. Despite its established role in selenium metabolism, the specific functions of SELENOP in the development of liver cancer remain enigmatic. This study aims to unravel SELENOP's associations in hepatocellular carcinoma (HCC) by scrutinizing its expression in correlation with disease characteristics and investigating links to hormonal and lipid/triglyceride metabolism biomarkers as well as its potential as a prognosticator for overall survival and predictor of hypoxia. SELENOP mRNA expression was analyzed in 372 HCC patients sourced from The Cancer Genome Atlas (TCGA), utilizing statistical methodologies in R programming and machine learning techniques in Python. SELENOP expression significantly varied across HCC grades (p < 0.000001) and among racial groups (p = 0.0246), with lower levels in higher grades and Asian individuals, respectively. Gender significantly influenced SELENOP expression (p < 0.000001), with females showing lower altered expression compared to males. Notably, the Spearman correlation revealed strong positive connections of SELENOP with hormonal markers (AR, ESR1, THRB) and key lipid/triglyceride metabolism markers (PPARA, APOC3, APOA5). Regarding prognosis, SELENOP showed a significant association with overall survival (p = 0.0142) but explained only a limited proportion of variability (~10%). Machine learning suggested its potential as a predictive biomarker for hypoxia, explaining approximately 18.89% of the variance in hypoxia scores. Future directions include validating SELENOP's prognostic and diagnostic value in serum for personalized HCC treatment. Large-scale prospective studies correlating serum SELENOP levels with patient outcomes are essential, along with integrating them with clinical parameters for enhanced prognostic accuracy and tailored therapeutic strategies.

12.
Front Aging Neurosci ; 16: 1421656, 2024.
Article in English | MEDLINE | ID: mdl-38974906

ABSTRACT

Background: This study aimed to assess whether integrating handgrip strength (HGS) into the concept of motoric cognitive risk (MCR) would enhance its predictive validity for incident dementia and all-cause mortality. Methods: A cohort of 5,899 adults from the Health and Retirement Study underwent assessments of gait speed, subjective cognitive complaints, and HGS. Over a 10-year follow-up, biennial cognitive tests and mortality data were collected. Cox proportional hazard analyses assessed the predictive power of MCR alone and MCR plus HGS for incident dementia and all-cause mortality. Results: Patients with MCR and impaired HGS (MCR-HGS) showed the highest adjusted hazard ratios (AHR) for dementia (2.33; 95% CI, 1.49-3.65) and mortality (1.52; 95% CI, 1.07-2.17). Even patients with MCR and normal HGS (MCR-non-HGS) experienced a 1.77-fold increased risk of incident dementia; however, this association was not significant when adjusted for socioeconomic status, lifestyle factors, and medical conditions. Nevertheless, all MCR groups demonstrated increased risks of all-cause mortality. The inclusion of HGS in the MCR models significantly improved predictive discrimination for both incident dementia and all-cause mortality, as indicated by improvements in the C-statistic, integrated discrimination improvement (IDI), and net reclassification index (NRI). Conclusion: Our study underscores the incremental predictive value of adding HGS to the MCR concept for estimating risks of adverse health outcomes among older adults. A modified MCR incorporating HGS could serve as an effective screening tool during national health examinations for identifying individuals at risk of dementia and mortality.

13.
Front Chem ; 12: 1395359, 2024.
Article in English | MEDLINE | ID: mdl-38974990

ABSTRACT

This paper presents a thorough examination of drug release from a polymeric matrix to improve understanding of drug release behavior for tissue regeneration. A comprehensive model was developed utilizing mass transfer and machine learning (ML). In the machine learning section, three distinct regression models, namely Decision Tree Regression (DTR), Passive Aggressive Regression (PAR), and Quadratic Polynomial Regression (QPR), were applied to a comprehensive dataset of drug release. The dataset includes r(m) and z(m) inputs, with the corresponding concentration of solute in the matrix (C) as the response. The primary objective is to assess and compare the predictive performance of these models in capturing the correlation between input parameters and chemical concentrations. The hyper-parameter optimization process is executed using Sequential Model-Based Optimization (SMBO), ensuring the robustness of the models in handling the complexity of controlled drug release. The Decision Tree Regression model exhibits outstanding predictive accuracy, with an R2 score of 0.99887, an RMSE of 9.0092E-06, an MAE of 3.51486E-06, and a Max Error of 6.87000E-05. This exceptional performance underscores the model's capability to discern intricate patterns within the drug release dataset. The Passive Aggressive Regression model, while displaying a slightly lower R2 score of 0.94652, demonstrates commendable predictive capabilities with an RMSE of 6.0438E-05, an MAE of 4.82782E-05, and a Max Error of 2.36600E-04. The model's effectiveness in capturing non-linear relationships within the dataset is evident. The Quadratic Polynomial Regression model, designed to accommodate quadratic relationships, yields a noteworthy R2 score of 0.95382, along with an RMSE of 5.6655E-05, an MAE of 4.49198E-05, and a Max Error of 1.86375E-04. These results affirm the model's proficiency in capturing the inherent complexities of the drug release system.
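Fitting and comparing the three regressors on an (r, z) → concentration dataset can be sketched with scikit-learn; the data below are a synthetic stand-in for the drug-release dataset, and the hyperparameters are illustrative rather than SMBO-optimized.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import PassiveAggressiveRegressor, LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
# Hypothetical stand-in for the (r, z) -> concentration dataset
rz = rng.uniform(0, 1, size=(400, 2))
c = np.exp(-3 * rz[:, 0]) * (1 - rz[:, 1]) + 0.01 * rng.normal(size=400)

models = {
    "DTR": DecisionTreeRegressor(max_depth=8, random_state=0),
    "PAR": PassiveAggressiveRegressor(max_iter=2000, random_state=0),
    "QPR": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
}
scores = {name: r2_score(c, m.fit(rz, c).predict(rz)) for name, m in models.items()}
print(scores)
```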

14.
Heliyon ; 10(12): e32397, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975153

ABSTRACT

Topological indices play an essential role in characterizing a chemical compound numerically and are widely used in QSPR/QSAR analysis. Using this analysis, the relationships between the physicochemical properties of the compounds and their topological indices are studied. Quinolones are synthetic antibiotics employed for treating diseases caused by bacteria. Over the years, quinolones have shifted from minor drugs to very significant drugs for treating bacterial infections, including those of the urinary tract. A study is carried out on various quinolone antibiotic drugs by computing topological indices through QSPR analysis. Curvilinear regression models (linear, quadratic, and cubic) are determined for all topological indices. These regression models are depicted graphically and extended to fourth-degree and fifth-degree models for significant topological indices with their corresponding physical properties, showing the variation between each model. Various studies have been carried out using linear regression models, while this work extends to curvilinear regression models using the concept of minimizing RMSE. RMSE is a significant measure for identifying the potential predictive index that best fits a QSAR/QSPR analysis; the minimal-RMSE model is then used to predict a given property of a chemical compound from its molecular structure.
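Comparing curvilinear models of increasing degree by RMSE can be sketched with NumPy. Note that training RMSE can only decrease as the degree grows, which is why held-out validation (or an information criterion) is needed before declaring a higher-degree model better; the index-property data here are synthetic, not computed quinolone indices.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical: a topological index x vs. a physical property y
x = np.linspace(1, 10, 30)
y = 2.0 + 1.5 * x - 0.08 * x**2 + 0.004 * x**3 + rng.normal(0, 0.2, x.size)

def fit_rmse(degree):
    """Least-squares polynomial fit; return training RMSE."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(resid**2)))

rmse_by_degree = {d: fit_rmse(d) for d in (1, 2, 3, 4, 5)}
print(rmse_by_degree)
```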

15.
J Chromatogr A ; 1730: 465109, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38968662

ABSTRACT

The predictive modeling of liquid chromatography methods can be an invaluable asset, potentially saving countless hours of labor while also reducing solvent consumption and waste. Tasks such as physicochemical screening and preliminary method screening, where large amounts of chromatography data are collected from fast and routine operations, are particularly well suited for both leveraging large datasets and benefiting from predictive models. Therefore, the generation of predictive models for retention time is an active area of development. However, for these predictive models to gain acceptance, researchers first must have confidence in model performance, and the computational cost of building them should be minimal. In this study, a simple and cost-effective workflow for the development of machine learning models to predict retention time using only Molecular Operating Environment 2D descriptors as input for support vector regression is developed. Furthermore, we investigated the relative performance of models based on molecular descriptor space by utilizing uniform manifold approximation and projection and clustering with Gaussian mixture models to identify chemically distinct clusters. Results outlined herein demonstrate that local models trained on clusters in chemical space perform equivalently when compared to models trained on all data. Through 10-fold cross-validation on a comprehensive set containing 67,950 of our company's proprietary analytes, these models achieved a coefficient of determination of 0.84 and 3% error in terms of retention time. This promising statistical significance is found to translate from cross-validation to prospective prediction on an external test set of pharmaceutically relevant analytes. The observed equivalency of global and local modeling of large datasets is retained with METLIN's SMRT dataset, thereby confirming the wider applicability of the developed machine learning workflows for global models.
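The global-versus-local comparison — one support vector regression on all data versus one per Gaussian-mixture cluster in descriptor space — can be sketched as follows; the descriptors and retention times are simulated, not MOE 2D descriptors or the proprietary analyte set.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Synthetic stand-in for 2D molecular descriptors -> retention time
X, y = make_regression(n_samples=600, n_features=20, n_informative=10,
                       noise=10.0, random_state=1)
y = (y - y.mean()) / y.std()              # scale the target for the RBF SVR

global_model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X, y)
r2_global = r2_score(y, global_model.predict(X))

# Local models: one SVR per Gaussian-mixture cluster in descriptor space
labels = GaussianMixture(n_components=3, random_state=1).fit_predict(X)
pred_local = np.empty_like(y)
for k in np.unique(labels):
    idx = labels == k
    local = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X[idx], y[idx])
    pred_local[idx] = local.predict(X[idx])
r2_local = r2_score(y, pred_local)
print(f"global R2 = {r2_global:.2f}, local R2 = {r2_local:.2f}")
```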

16.
New Phytol ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014516

ABSTRACT

Through enviromics, precision breeding leverages innovative geotechnologies to customize crop varieties to specific environments, potentially improving both crop yield and genetic selection gains. In Brazil's four southernmost states, data from 183 distinct geographic field trials (spanning 2017-2021) covered information on 164 genotypes: 79 phenotyped maize hybrid genotypes for grain yield and their 85 nonphenotyped parents. Additionally, 1342 envirotypic covariates from weather, soil, sensor-based, and satellite sources were collected to engineer 10 K synthetic enviromic markers via machine learning. Soil, radiation light, and surface temperature variations markedly affect differential genotype yield, hinting at ecophysiological adjustments including evapotranspiration and photosynthesis. The enviromic ensemble-based random regression model showcases superior predictive performance and efficiency compared to the baseline and kernel models, matching the best genotypes to specific geographic coordinates. Clustering analysis has identified regions that minimize genotype-environment (G × E) interactions. These findings underscore the potential of enviromics in crafting specific parental combinations to breed new, higher-yielding hybrid crops. The adequate use of envirotypic information can enhance the precision and efficiency of maize breeding by providing important inputs about the environmental factors that affect average crop performance. Generating enviromic markers associated with grain yield can enable a better selection of hybrids for specific environments.

17.
Ecol Evol ; 14(7): e11387, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38994210

ABSTRACT

Generalized linear models (GLMs) are an integral tool in ecology. Like general linear models, GLMs assume linearity, which entails a linear relationship between independent and dependent variables. However, because this assumption acts on the link rather than the natural scale in GLMs, it is more easily overlooked. We reviewed recent ecological literature to quantify how often linearity is tested. We then used two case studies to confront the linearity assumption via two GLMs fit to empirical data. In the first case study we compared GLMs to generalized additive models (GAMs) fit to mammal relative abundance data. In the second case study we tested for linearity in occupancy models using passerine point-count data. We reviewed 162 studies published in the last 5 years in five leading ecology journals and found that less than 15% reported testing for linearity. These studies used transformations and GAMs more often than they reported a linearity test. In the first case study, GAMs strongly outperformed GLMs as measured by AIC in modeling relative abundance, and GAMs helped uncover nonlinear responses of carnivore species to landscape development. In the second case study, 14% of species-specific models failed a formal statistical test for linearity. We also found that differences between linear and nonlinear (i.e., those with a transformed independent variable) model predictions were similar for some species but not for others, with implications for inference and conservation decision-making. Our review suggests that tests for linearity are rarely reported in recent studies employing GLMs. Our case studies show how formally comparing models that allow for nonlinear relationships between the dependent and independent variables has the potential to impact inference, generate new hypotheses, and alter conservation implications. We conclude by suggesting that ecological studies report tests for linearity and use formal methods to address linearity assumption violations in GLMs.
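One formal way to confront the linearity assumption is to compare a linear fit against a model with an added nonlinear term using an information criterion. The sketch below does this for a Gaussian response with a quadratic term; it is a simplified stand-in for the paper's GLM-versus-GAM comparisons, and the data are simulated with a truly nonlinear response.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(-2, 2, 300)
y = 1.0 + 0.5 * x - 0.6 * x**2 + rng.normal(0, 0.5, x.size)  # nonlinear truth

def gaussian_aic(x_design, y):
    """AIC for an ordinary least squares fit (Gaussian likelihood)."""
    X = np.column_stack([np.ones_like(y)] + x_design)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = y.size, X.shape[1] + 1          # +1 for the error variance
    return n * np.log(rss / n) + 2 * k

aic_linear = gaussian_aic([x], y)
aic_quadratic = gaussian_aic([x, x**2], y)
print(f"AIC linear = {aic_linear:.1f}, quadratic = {aic_quadratic:.1f}")
```

A lower AIC for the model with the nonlinear term is evidence against the linearity assumption, mirroring the GLM-versus-GAM AIC comparison in the first case study.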

18.
Drug Dev Ind Pharm ; : 1-9, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38980706

ABSTRACT

OBJECTIVE: To develop a Raman spectroscopy-based analytical model for quantification of solid dosage forms of the active pharmaceutical ingredient (API) atenolol. SIGNIFICANCE: Raman spectroscopy is a reliable and fast detection method for the quantitative analysis of pharmaceutical drugs. In this study, Raman spectroscopy was explored for the quantitative analysis of different concentrations of atenolol. METHODS: Various solid-dosage forms of atenolol were prepared by mixing the API with excipients. Multivariate data analysis techniques, principal component analysis (PCA) and partial least squares regression (PLSR), were used for the qualitative and quantitative analysis, respectively. RESULTS: As the concentration of the drug in the formulation increased, the peak intensities of the distinctive Raman spectral features associated with the API (atenolol) increased accordingly. Raman spectral data sets were classified using PCA on the basis of their distinctive spectral characteristics. Additionally, a prediction model was built using PLSR analysis to assess the quantitative relationship between API (atenolol) concentrations and spectral features. With a goodness-of-fit value of 0.99, the root mean square errors of calibration (RMSEC) and prediction (RMSEP) were determined to be 1.0036 and 2.83 mg, respectively. The API content in a blind/unknown atenolol formulation was also determined using the PLSR model. CONCLUSIONS: Based on these results, Raman spectroscopy may be used to quickly and accurately analyze pharmaceutical samples and quantitatively determine their API content.

19.
Front Hum Neurosci ; 18: 1305446, 2024.
Article in English | MEDLINE | ID: mdl-39015825

ABSTRACT

Introduction: Transcranial direct current stimulation (tDCS) administers low-intensity direct current electrical stimulation to brain regions via electrodes arranged on the surface of the scalp. The core promise of tDCS is its ability to modulate brain activity and affect performance on diverse cognitive functions (affording causal inferences regarding regional brain activity and behavior), but the optimal methodological parameters for maximizing behavioral effects remain to be elucidated. Here we sought to examine the effects of 10 stimulation and experimental design factors across five cognitive domains: motor performance, visual search, working memory, vigilance, and response inhibition. The objective was to identify a set of optimal parameter settings that consistently and reliably maximized the behavioral effects of tDCS within each cognitive domain. Methods: We surveyed tDCS effects on these cognitive functions in healthy young adults, ultimately resulting in 721 effects across 106 published reports. Hierarchical Bayesian meta-regression models were fit to characterize how (and to what extent) these design parameters differentially predict the likelihood of positive/negative behavioral outcomes. Results: Consistent with many previous meta-analyses of tDCS effects, extensive variability was observed across tasks and measured outcomes. Consequently, most design parameters did not confer consistent advantages or disadvantages to behavioral effects: a domain-general model suggested an advantage of within-subjects designs (versus between-subjects designs) and a tendency for cathodal stimulation (relative to anodal stimulation) to produce reduced behavioral effects, but these associations were scarcely evident in domain-specific models.
Discussion: These findings highlight the urgent need for tDCS studies to more systematically probe the effects of these parameters on behavior to fulfill the promise of identifying causal links between brain function and cognition.

20.
Nutr Rev ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39018497

ABSTRACT

CONTEXT: Several studies have demonstrated that dietary patterns identified by a posteriori and hybrid methods are associated with gastrointestinal (GI) cancer risk and mortality. These studies applied different methods for analyzing dietary data and reported inconsistent findings. OBJECTIVE: This systematic review and meta-analysis aimed to determine the association between dietary patterns, derived using principal component analysis (PCA) and reduced rank regression (RRR), and GI cancer risk and GI cancer-caused mortality. DATA SOURCE: Articles published in English up to June 2023 were eligible for inclusion. The MEDLINE, Scopus, Cochrane Library, CINAHL, PsycINFO, ProQuest, and Web of Science databases were used to identify prospective studies. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline was used to report results. DATA EXTRACTION: A total of 28 studies were eligible for inclusion. Varied approaches to deriving dietary patterns were used, including PCA (n = 22), RRR (n = 2), combined PCA and RRR (n = 1), cluster analysis (CA; n = 2), and combined PCA and CA (n = 1). DATA ANALYSIS: Two dietary patterns, "healthy" and "unhealthy," were derived using PCA and RRR. The healthy dietary pattern was characterized by a higher intake of fruits, whole grains, legumes, vegetables, milk, and other dairy products, whereas the unhealthy dietary pattern was characterized by a higher intake of red and processed meat, alcohol, and both refined and sugar-sweetened beverages. The findings indicated that the PCA-derived healthy dietary pattern was associated with an 8% reduced risk (relative risk [RR], 0.92; 95% CI, 0.87-0.98) of GI cancers, and the unhealthy dietary pattern was associated with a 14% increased risk (RR, 1.14; 95% CI, 1.07-1.22). Similarly, the RRR-derived healthy dietary pattern (RR, 0.83; 95% CI, 0.61-1.12) may be associated with a reduced risk of GI cancers. In contrast, the RRR-derived unhealthy dietary pattern (RR, 0.93; 95% CI, 0.57-1.52) showed no association with GI cancer risk. Similarly, evidence suggested that PCA-derived healthy dietary patterns may reduce the risk of death from GI cancers, whereas PCA-derived unhealthy dietary patterns may increase it. CONCLUSION: Findings from prospective studies on PCA-derived dietary patterns support healthy and unhealthy dietary patterns as protective and risk-increasing factors, respectively, for GI cancer risk and survivorship. The findings also suggest that the RRR-derived healthy dietary pattern reduces the risk of GI cancers (albeit with low precision), whereas no association was found for the RRR-derived unhealthy dietary pattern. Prospective studies are required to further clarify disparities in the association between PCA- and RRR-derived dietary patterns and the risk of GI cancers. Systematic review registration: PROSPERO registration no. CRD42022321644.
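Pooled relative risks like those reported in this abstract come from inverse-variance meta-analysis on the log scale. Below is a minimal random-effects (DerSimonian-Laird) sketch with made-up per-study RRs and confidence intervals, not the review's actual data:

```python
import numpy as np

# Hypothetical per-study relative risks and 95% CIs (illustrative only)
rr = np.array([0.90, 0.85, 0.95, 0.88])
lo = np.array([0.80, 0.72, 0.83, 0.75])
hi = np.array([1.01, 1.00, 1.09, 1.03])

# Work on the log scale; each SE is recovered from the CI width
log_rr = np.log(rr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)

# DerSimonian-Laird between-study variance (tau^2)
w = 1 / se**2
mu_fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - mu_fixed) ** 2)
tau2 = max(0.0, (q - (len(rr) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and 95% CI, back-transformed to the RR scale
w_star = 1 / (se**2 + tau2)
pooled = np.sum(w_star * log_rr) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
pooled_rr = np.exp(pooled)
ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
```

Wide per-study CIs inflate `se` and (via heterogeneity) `tau2`, which is why the RRR-derived estimates above carry CIs that cross 1.0 despite point estimates below it.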
