Results 1 - 20 of 15,797
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980374

ABSTRACT

Gene-environment (GE) interactions are essential in understanding human complex traits, and identifying them is necessary for deciphering the biological basis of such traits. In this study, we review state-of-the-art methods for estimating the proportion of phenotypic variance explained by genome-wide GE interactions and introduce a novel statistical method, Linkage-Disequilibrium Eigenvalue Regression for Gene-Environment interactions (LDER-GE). LDER-GE improves the accuracy of estimating the phenotypic variance component explained by genome-wide GE interactions using large-scale biobank association summary statistics. LDER-GE leverages the complete Linkage Disequilibrium (LD) matrix, as opposed to only the diagonal squared LD matrix utilized by LDSC (Linkage Disequilibrium Score)-based methods. Our extensive simulation studies demonstrate that LDER-GE performs better than LDSC-based approaches, enhancing statistical efficiency by ~23%, equivalent to a sample size increase of around 51%. Additionally, LDER-GE effectively controls the type-I error rate and produces unbiased results. We conducted an analysis using UK Biobank data, comprising 307,259 unrelated European-ancestry subjects and 966,766 variants, across 217 environmental covariate-phenotype (E-Y) pairs. LDER-GE identified 34 significant E-Y pairs, while the LDSC-based method identified only 23, 22 of which overlapped with LDER-GE. Furthermore, we employed LDER-GE to estimate the aggregated variance component attributed to multiple GE interactions, which increased the explained phenotypic variance compared with considering main genetic effects only. Our results underscore the importance of GE interactions for human complex traits.


Subjects
Gene-Environment Interaction, Linkage Disequilibrium, Phenotype, Humans, Multifactorial Inheritance, Genome-Wide Association Study/methods, Single Nucleotide Polymorphism, Genetic Models
2.
Environ Monit Assess ; 196(8): 716, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980517

ABSTRACT

Low-cost sensors integrated with the Internet of Things can enable real-time environmental monitoring networks and provide valuable water quality information to the public. However, the accuracy and precision of the values measured by the sensors are critical for widespread adoption. In this study, 19 different low-cost sensors commonly found in the literature, from four different manufacturers, were tested for measuring five water quality parameters: pH, dissolved oxygen, oxidation-reduction potential, turbidity, and temperature. The low-cost sensors were evaluated for each parameter by calculating the error and precision relative to a typical multiparameter probe taken as a reference. The comparison was performed in a controlled environment with simultaneous measurements of real water samples. The relative error ranged from -0.33% to 33.77%, and most errors were ≤5%. pH and temperature yielded the most accurate results. In conclusion, low-cost sensors are a complementary alternative for quickly detecting changes in water quality parameters. Further studies are necessary to establish guidelines for the operation and maintenance of low-cost sensors.
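The per-parameter error and precision evaluation described above can be sketched in a few lines; the readings and helper names below are hypothetical, not taken from the study:

```python
def relative_error(measured, reference):
    """Relative error (%) of each sensor reading against the reference probe."""
    return [100.0 * (m - r) / r for m, r in zip(measured, reference)]

def precision_sd(values):
    """Sample standard deviation of repeated readings, used as a precision proxy."""
    n = len(values)
    mean = sum(values) / n
    return (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5

# Hypothetical paired pH measurements (low-cost sensor vs. reference probe).
sensor_ph = [7.1, 7.0, 7.2, 7.1]
reference_ph = [7.0, 7.0, 7.0, 7.0]

errors = relative_error(sensor_ph, reference_ph)
print([round(e, 2) for e in errors])      # per-sample relative error (%)
print(round(precision_sd(sensor_ph), 3))  # repeatability of the sensor
```

All of these errors fall under the ≤5% band the study reports for most sensors.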


Subjects
Environmental Monitoring, Water Quality, Environmental Monitoring/methods, Environmental Monitoring/instrumentation, Hydrogen-Ion Concentration, Temperature, Chemical Water Pollutants/analysis, Oxygen/analysis
3.
Sci Rep ; 14(1): 15467, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969702

ABSTRACT

In this article we address two related issues in the learning of probabilistic sequences of events: first, which features make the sequence of events generated by a stochastic chain more difficult to predict; second, how to model the procedures employed by different learners to identify the structure of sequences of events. Playing the role of a goalkeeper in a video game, participants were told to predict, step by step, the successive directions (left, center, or right) to which the penalty kicker would send the ball. The sequence of kicks was driven by a stochastic chain with memory of variable length. Results showed that at least three features play a role in the first issue: (1) the shape of the context tree summarizing the dependencies between present and past directions; (2) the entropy of the stochastic chain used to generate the sequences of events; (3) the existence or not of a deterministic periodic sequence underlying the sequences of events. Moreover, evidence suggests that the best learners rely less on their own past choices to identify the structure of the sequences of events.
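A stochastic chain with memory of variable length can be illustrated with a small context tree; the tree below is a made-up example for the goalkeeper task, not the one used in the experiment:

```python
import math
import random

# Hypothetical context tree: the key is the shortest relevant suffix of past
# kicks (most recent last); the value is the distribution of the next kick.
CONTEXT_TREE = {
    ("L",): {"L": 0.1, "C": 0.3, "R": 0.6},
    ("C",): {"L": 0.4, "C": 0.2, "R": 0.4},
    ("L", "R"): {"L": 0.2, "C": 0.2, "R": 0.6},
    ("C", "R"): {"L": 0.5, "C": 0.3, "R": 0.2},
    ("R", "R"): {"L": 0.8, "C": 0.1, "R": 0.1},
}

def next_dist(history):
    # The longest suffix of the past that appears in the tree decides the
    # distribution of the next direction (variable-length memory).
    for k in (2, 1):
        ctx = tuple(history[-k:])
        if ctx in CONTEXT_TREE:
            return CONTEXT_TREE[ctx]
    raise KeyError(f"no context covers {history[-2:]}")

def entropy_bits(dist):
    # Shannon entropy of one conditional distribution.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def simulate(n, seed=0):
    rng = random.Random(seed)
    hist = ["C", "C"]  # arbitrary seed context
    for _ in range(n):
        d = next_dist(hist)
        hist.append(rng.choices(list(d), weights=list(d.values()))[0])
    return hist[2:]
```

Chains whose trees have deeper contexts, or whose conditional distributions have higher entropy, are harder to predict, matching features (1) and (2) above.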


Subjects
Video Games, Humans, Male, Female, Adult, Learning, Probability, Young Adult, Stochastic Processes
4.
Sci Rep ; 14(1): 15526, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969712

ABSTRACT

The study explores the relationship between topological indices and the heat of formation in the benzyl sulfamoyl network. Topological indices of benzyl sulfamoyl networks are computed and their properties characterized statistically. Benzyl sulfamoyl has unique properties owing to its crystalline structure and is used as a synthetic substance. We analyze the distributions and correlations of the benzyl sulfamoyl network against others using statistical methods and also build a computational analysis of its topological indices. The findings show a strong association between the variables, indicating that topological indices can be used to accurately predict thermodynamic characteristics and to improve the effectiveness of molecular modelling and simulation procedures.
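As an illustration of correlating a topological index with a thermodynamic property, the sketch below computes the first Zagreb index (one common degree-based index; the abstract does not specify which indices were used) and a Pearson correlation on invented data:

```python
def first_zagreb(edges):
    """First Zagreb index: sum of squared vertex degrees of a molecular graph."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(d * d for d in deg.values())

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Path graph on 4 vertices: degrees 1, 2, 2, 1 -> index 1 + 4 + 4 + 1 = 10.
print(first_zagreb([(1, 2), (2, 3), (3, 4)]))

# Hypothetical index values vs. heats of formation for a few structures.
indices = [10, 14, 18, 24]
heats = [-45.0, -52.1, -60.3, -71.8]
print(round(pearson(indices, heats), 3))  # strongly negative correlation
```

A strong (here negative) correlation is what would justify using the index as a cheap predictor of the thermodynamic quantity.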

5.
Heliyon ; 10(12): e32720, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975113

ABSTRACT

There is an evident need for a rapid, efficient, and simple method to screen the authenticity of milk products on the market, and Fourier transform infrared (FTIR) spectroscopy stands out as a promising solution. This work employed FTIR spectroscopy and modern statistical machine learning algorithms for the identification and quantification of pasteurized milk adulteration. Comparative results demonstrate that modern statistical machine learning algorithms improve the ability of FTIR spectroscopy to predict milk adulteration relative to partial least squares (PLS). To discern the types of substances used in milk adulteration, a top-performing multiclassification model was established using a multi-layer perceptron (MLP) algorithm, delivering a prediction accuracy of 97.4%. For quantification purposes, Bayesian regularized neural networks (BRNN) provided the best results for determining melamine, urea, and milk powder adulteration, while extreme gradient boosting (XGB) and projection pursuit regression (PPR) gave better results in predicting sucrose and water adulteration levels, respectively. The regression models provided suitable predictive accuracy, with ratio of performance to deviation (RPD) values higher than 3. The proposed methodology proved to be a cost-effective and fast tool for screening the authenticity of pasteurized milk on the market.
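The RPD criterion cited above is the ratio of the standard deviation of the reference values to the RMSE of prediction; a minimal sketch with illustrative numbers (not the study's data):

```python
def rmse(pred, obs):
    """Root-mean-square error of predictions."""
    n = len(obs)
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5

def rpd(pred, obs):
    """Ratio of performance to deviation: SD of observed values / RMSE."""
    n = len(obs)
    mean = sum(obs) / n
    sd = (sum((o - mean) ** 2 for o in obs) / (n - 1)) ** 0.5
    return sd / rmse(pred, obs)

# Hypothetical adulterant levels (%): observed vs. model predictions.
observed = [1.0, 2.0, 3.0, 4.0, 5.0]
predicted = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(rpd(predicted, observed), 2))  # values above 3 count as suitable here
```

Because RPD scales the error by the spread of the data, it is comparable across adulterants measured on different concentration ranges.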

6.
Cogn Sci ; 48(7): e13455, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38980958

ABSTRACT

Previous research has described different cognitive processes concerning how individuals process distributional information. Building on these processes, the current research uncovered a novel phenomenon in distribution perception: the Endpoint Leverage Effect. Subjective endpoints influence distribution estimations not only locally around the endpoint but across the whole value range of the distribution; the influence is largest close to the respective endpoint and decreases in size toward the opposite end of the value range. Three experiments investigate this phenomenon: Experiment 1 provides correlational evidence for the Endpoint Leverage Effect after presenting participants with a numerical distribution. Experiment 2 demonstrates the effect by manipulating the subjective endpoints of a numerical distribution directly. Experiment 3 generalizes the phenomenon by investigating a general population sample and estimations regarding a real-world income distribution. In addition, quantitative model analysis examines the cognitive processes underlying the effect. Overall, the novel Endpoint Leverage Effect is found in all three experiments, inspiring further research across a wide range of contexts.


Subjects
Cognition, Humans, Male, Female, Adult, Young Adult, Perception
7.
Ann Biomed Eng ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955891

ABSTRACT

In dynamic impact events, thoracic injuries often involve rib fractures, which are closely related to injury severity. Previous studies have investigated the behavior of isolated ribs under impact loading conditions but often neglected the variability in anatomical shape and tissue material properties. In this study, we used probabilistic finite element analysis and statistical shape modeling to investigate the effect of population-wide variability in rib cortical bone tissue mechanical properties and rib shape on the biomechanical response of the rib to impact loading. Using the probabilistic finite element analysis results, a response surface model was generated to rapidly investigate the biomechanical response of an isolated rib under dynamic anterior-posterior loading given the variability in rib morphometry and tissue material properties. The response surface was used to generate pre-fracture force-displacement computational corridors for the overall population and for a population sub-group of older mid-sized males. When compared to the experimental data, the computational mean response had an RMSE of 4.28 N (peak force 94 N) and 6.11 N (peak force 116 N) for the overall population and the sub-group, respectively, whereas the normalized area metric comparing the experimental and computational corridors ranged from 3.32% to 22.65% for the population and from 10.90% to 32.81% for the sub-group. Furthermore, probabilistic sensitivities were computed, quantifying the contribution of uncertainty and variability in the parameters of interest. The study found that rib cortical bone elastic modulus, rib morphometry, and cortical thickness are the random variables that produce the largest variability in the predicted force-displacement response. The proposed framework offers a novel approach for accounting for biological variability in a representative population and has the potential to improve the generalizability of findings in biomechanical studies.
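The normalized area metric for comparing corridors is not fully specified in the abstract; one plausible reading (area between mean curves, normalized by the area under the experimental curve) can be sketched as follows, with invented force-displacement data:

```python
def area_between(x, f1, f2):
    """Trapezoidal area between two curves sampled at shared x locations."""
    total = 0.0
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        total += 0.5 * (abs(f1[i] - f2[i]) + abs(f1[i - 1] - f2[i - 1])) * dx
    return total

def normalized_area_pct(disp, exp_force, comp_force):
    """Area between curves as a % of the area under the experimental curve."""
    ref = area_between(disp, exp_force, [0.0] * len(exp_force))
    return 100.0 * area_between(disp, exp_force, comp_force) / ref

# Hypothetical force-displacement responses (displacement in mm, force in N).
disp = [0.0, 1.0, 2.0]
experimental = [0.0, 50.0, 100.0]
computational = [0.0, 45.0, 90.0]
print(normalized_area_pct(disp, experimental, computational))
```

The study's actual metric may normalize differently (e.g., by corridor width), so treat this only as a sketch of the idea of an area-based curve discrepancy.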

8.
J Oral Rehabil ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956893

ABSTRACT

BACKGROUND: The proper interpretation of a study's results requires both excellent understanding of good methodological practices and deep knowledge of prior results, aided by the availability of effect sizes. METHODS: This review takes the form of an expository essay exploring the complex and nuanced relationships among statistical significance, clinical importance, and effect sizes. RESULTS: Careful attention to study design and methodology will increase the likelihood of obtaining statistical significance and may enhance the ability of investigators/readers to accurately interpret results. Measures of effect size show how well the variables used in a study account for/explain the variability in the data. Studies reporting strong effects may have greater practical value/utility than studies reporting weak effects. Effect sizes need to be interpreted in context. Verbal summary characterizations of effect sizes (e.g., "weak", "strong") are fundamentally flawed and can lead to inappropriate characterization of results. Common language effect size (CLES) indicators are a relatively new approach to effect sizes that may offer a more accessible interpretation of results that can benefit providers, patients, and the public at large. CONCLUSIONS: It is important to convey research findings in ways that are clear to both the research community and to the public. At a minimum, this requires inclusion of standard effect size data in research reports. Proper selection of measures and careful design of studies are foundational to the interpretation of a study's results. The ability to draw useful conclusions from a study is increased when investigators enhance the methodological quality of their work.
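The standardized and common-language effect sizes discussed above can be computed as follows; the group data are invented, and the CLES shown is the nonparametric version (the probability that a random score from one group exceeds one from the other):

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation for two independent groups."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

def cles(x, y):
    """Common language effect size: P(random x > random y), ties count half."""
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    return wins / (len(x) * len(y))

treatment = [5.1, 6.0, 5.8, 6.4]  # hypothetical outcome scores
control = [4.9, 5.2, 5.0, 5.6]
print(round(cohens_d(treatment, control), 2))
print(round(cles(treatment, control), 2))
```

A CLES of, say, 0.80 reads as "a randomly chosen treated patient beats a randomly chosen control 80% of the time", which is the kind of accessible interpretation the essay advocates.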

9.
Am J Sports Med ; 52(8): 1915-1917, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946456
10.
Article in English | MEDLINE | ID: mdl-38949732

ABSTRACT

The presence of phenazopyridine in water is an environmental problem that can damage human health and the environment. However, few studies have reported the adsorption of this emerging contaminant from aqueous matrices, and existing research has explored only conventional modeling to describe the adsorption phenomenon, without understanding the behavior at the molecular level. Herein, statistical physics modeling of phenazopyridine adsorption onto graphene oxide is reported. Steric, energetic, and thermodynamic interpretations were used to describe the phenomena that control drug adsorption. The equilibrium data were fitted by mono-, double-, and multilayer models, considering factors such as the number of phenazopyridine molecules per adsorption site, the density of receptor sites, and the half-saturation concentration. The statistical physics approach also yielded the thermodynamic parameters (free enthalpy, internal energy, Gibbs free energy, and entropy). The maximum adsorption capacity at equilibrium was reached at 298 K (510.94 mg g-1). The results showed the physical meaning of adsorption, indicating that it occurs in multiple layers. Temperature affected the density of receptor sites and the half-saturation concentration, while the adsorbed species assume different positions on the adsorbent surface as temperature increases. Meanwhile, the thermodynamic functions revealed entropy increasing with temperature and equilibrium concentration.
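The simplest member of the statistical-physics isotherm family used above is the one-energy monolayer model, Q(c) = n·Dm / (1 + (c½/c)^n); the parameter values below are illustrative only, and the study's best description was a multilayer variant:

```python
def monolayer_q(c, n, dm, c_half):
    """One-energy monolayer statistical-physics isotherm.

    c      -- equilibrium concentration (mg/L)
    n      -- number of adsorbate molecules per receptor site (steric factor)
    dm     -- density of receptor sites (mg/g)
    c_half -- half-saturation concentration (mg/L)
    """
    return n * dm / (1.0 + (c_half / c) ** n)

# At c = c_half the model returns exactly half its saturation value n * dm.
print(monolayer_q(10.0, 2.0, 100.0, 10.0))  # 100.0
print(monolayer_q(1e6, 2.0, 100.0, 10.0))   # approaches n * dm = 200
```

Fitting n, dm, and c_half to equilibrium data at several temperatures is what yields the steric interpretations (molecules per site, site density) described in the abstract.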

11.
Biochem Genet ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951354

ABSTRACT

The genomic evaluation process relies on the assumption of linkage disequilibrium between dense single-nucleotide polymorphism (SNP) markers at the genome level and quantitative trait loci (QTL). The present study was conducted to evaluate four frequentist methods, including Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, and Genomic Best Linear Unbiased Prediction (GBLUP), and five Bayesian methods, including Bayes Ridge Regression (BRR), Bayes A, Bayesian LASSO, Bayes C, and Bayes B, in genomic selection using simulated data. Differences in prediction accuracy were assessed pairwise based on statistical significance (p-values from the t test and Mann-Whitney U test) and practical significance (Cohen's d effect size). For this purpose, the data were simulated based on two scenarios with different marker densities (4000 and 8000 across the whole genome). The simulated data comprised a genome with four chromosomes of 1 Morgan each, carrying 100 randomly distributed QTL and two different densities of evenly distributed SNPs (1000 and 2000), at a heritability level of 0.4. For the frequentist methods except GBLUP, the regularization parameter λ was calculated using a five-fold cross-validation approach. For both scenarios, among the frequentist methods, the highest prediction accuracy was observed for Ridge Regression and GBLUP, while the lowest and highest bias were shown by Ridge Regression and GBLUP, respectively. Among the Bayesian methods, Bayes B and BRR showed the highest and lowest prediction accuracy, respectively. The lowest bias in both scenarios was registered by Bayesian LASSO, and the highest bias in the first and second scenarios was shown by BRR and Bayes B, respectively. Across all the studied methods in both scenarios, the highest accuracy was shown by Bayes B and the lowest by LASSO and Elastic Net.
As expected, the greatest similarity in performance was observed between GBLUP and BRR (d = 0.007 in the first scenario and d = 0.003 in the second). The results obtained from the parametric t test and the non-parametric Mann-Whitney U test were similar. In the first and second scenarios, out of 36 pairwise t tests between the performance of the studied methods in each scenario, 14 (P < .001) and 2 (P < .05) comparisons were significant, respectively, indicating that as the number of predictors increases, the difference in the performance of different methods decreases. This was confirmed by the Cohen's d effect size: as model complexity increased, the effect sizes were no longer very large. The regularization parameters in frequentist methods should be optimized by a cross-validation approach before these methods are used in genomic evaluation.
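The cross-validation used to tune the regularization parameter λ can be sketched for the simplest case, ridge regression with a single centered predictor; real genomic models would have thousands of SNP effects, and the data here are synthetic:

```python
def ridge_beta(x, y, lam):
    """Closed-form ridge estimate for one centered predictor: Sxy / (Sxx + lam)."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

def cv_select_lambda(x, y, lambdas, k=5):
    """Pick the lambda with the smallest k-fold cross-validated squared error."""
    n = len(x)
    folds = [list(range(i, n, k)) for i in range(k)]  # interleaved folds
    best_lam, best_err = None, float("inf")
    for lam in lambdas:
        err = 0.0
        for held_out in folds:
            train = [i for i in range(n) if i not in held_out]
            b = ridge_beta([x[i] for i in train], [y[i] for i in train], lam)
            err += sum((y[i] - b * x[i]) ** 2 for i in held_out)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Noiseless y = 2x: cross-validation should prefer no shrinkage at all.
x = [-2.0, -1.0, 0.0, 1.0, 2.0, -2.0, -1.0, 0.0, 1.0, 2.0]
y = [2.0 * v for v in x]
print(cv_select_lambda(x, y, [0.0, 1.0, 10.0]))  # 0.0
```

With noisy, high-dimensional genotype data the selected λ would instead be positive, trading a little bias for lower prediction error.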

13.
J Hazard Mater ; 476: 135073, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38968826

ABSTRACT

This study conducted a comprehensive analysis of trace element concentrations in the Upper Indus River Basin (UIRB), a glacier-fed region in the Western Himalayas (WH), aiming to discern their environmental and anthropogenic sources and implications. Despite limited prior data, 69 samples were collected in 2019 from diverse sources within the UIRB, including the mainstream, tributaries, and groundwater, to assess trace element concentrations. Enrichment factor (EF) results and comparisons with regional and global averages suggest that rising levels of Zn, Cd, and As may pose safety concerns for drinking water quality. Advanced multivariate statistical techniques, such as principal component analysis (PCA), absolute principal component scores (APCS-MLR), and Monte Carlo simulation (MCS), were applied to estimate the associated human health hazards and to identify key sources of trace elements. The 95th percentile of the MCS results indicates that the estimated total cancer risk for children is more than 1000 times greater than the USEPA's acceptable risk threshold of 1.0 × 10-6. The results classified most of the trace elements into two distinct groups: Group A (Li, Rb, Sr, U, Cs, V, Ni, Tl, Sb, Mo, Ge), linked to geogenic sources, showed lower concentrations in the lower-middle river reaches, including tributaries and downstream regions. Group B (Pb, Nb, Cr, Zn, Be, Al, Th, Ga, Cu, Co), influenced by both geogenic and anthropogenic activities, exhibited higher concentrations near urban centers and midstream areas, aligning with increased municipal waste and agricultural activities. Furthermore, APCS-MLR source apportionment indicated that trace elements originated from natural geogenic processes, including rock-water interactions and mineral dissolution, as well as from anthropogenic activities. These findings underscore the need for targeted measures to mitigate anthropogenic impacts and safeguard water resources for communities along the IRB and WH.
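The enrichment factor and the Monte Carlo risk screening above follow standard formulas; the concentrations, slope factor, and intake factor below are placeholders, not values from the study:

```python
import random

def enrichment_factor(c_elem, c_norm, bg_elem, bg_norm):
    """EF = (C_element / C_normalizer)_sample / (C_element / C_normalizer)_background."""
    return (c_elem / c_norm) / (bg_elem / bg_norm)

def cancer_risk_p95(conc_mean, conc_sd, intake_factor, slope_factor,
                    n=20000, seed=1):
    """95th percentile of simulated risk = concentration * intake * slope.

    Concentration variability is modeled as a normal truncated at zero,
    purely for illustration; real assessments fit site-specific distributions.
    """
    rng = random.Random(seed)
    risks = sorted(max(0.0, rng.gauss(conc_mean, conc_sd)) * intake_factor * slope_factor
                   for _ in range(n))
    return risks[int(0.95 * n)]

print(enrichment_factor(50.0, 10.0, 5.0, 10.0))  # 10x enriched over background
print(cancer_risk_p95(2e-3, 5e-4, 4.0e-2, 1.5))  # compare against 1.0e-6
```

Reporting the 95th percentile rather than the mean is what makes the Monte Carlo screen conservative for sensitive groups such as children.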

14.
Environ Pollut ; 358: 124468, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38950847

ABSTRACT

Urban aquifers are at risk of contamination from persistent and mobile organic compounds (PMOCs), especially per- and polyfluoroalkyl substances (PFAS), artificial organic substances widely used across various industrial sectors. PFAS are considered toxic, mobile, and persistent, and have therefore gained significant attention in environmental chemistry. Moreover, precursors can transform into more recalcitrant products under natural conditions. However, there is limited information about the processes which affect their behaviour in groundwater at the field scale. In this context, the aim of this study is to assess the presence of PFAS in an urban aquifer in Barcelona and to identify the processes that control their evolution along the groundwater flow. 21 groundwater and 6 river samples were collected, revealing the presence of 16 PFAS products and 3 novel PFAS. Short- and ultra-short-chain PFAS were found to be ubiquitous, with the highest concentrations detected for perfluorobutanesulfonic acid (PFBS), trifluoroacetic acid (TFA) and trifluoromethanesulfonic acid (TFSA). Long-chain PFAS and novel PFAS were present in very low concentrations (<50 ng/L). It was observed that redox conditions influence the behaviour of a number of PFAS, controlling their attenuation or recalcitrance. Most substances showed accumulation, possibly explained by sorption/desorption or transformation processes, highlighting the challenges associated with PFAS remediation. In addition, removal processes of differing intensity were revealed for three PFAS. Our results help to establish the principles of the evolution of PFAS along the groundwater flow, which are important for the development of conceptual models used to plan and adopt site-specific groundwater management activities (e.g., Managed Aquifer Recharge).

15.
Environ Geochem Health ; 46(8): 263, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38954066

ABSTRACT

Sustainable management of river systems is a serious concern, requiring vigilant monitoring of water contamination levels that could potentially threaten the ecological community. This study focused on the evaluation of water quality in the Jhelum River (JR), Azad Jammu and Kashmir, and northern Punjab, Pakistan. To achieve this, 60 water samples were collected from various points within the JR Basin (JRB) and subjected to a comprehensive analysis of their physicochemical parameters. The study findings indicated that the concentrations of physicochemical parameters in the JRB water remained within safety thresholds for both drinking and irrigation water, as established by the World Health Organization and Pakistan Environmental Protection Agency. These physicochemical parameters refer to various chemical and physical characteristics of the water that can have implications for both human health (drinking water) and agricultural practices (irrigation water). The spatial variations throughout the river course distinguished between the upstream, midstream, and downstream sections. Specifically, the downstream section exhibited significantly higher values for physicochemical parameters and a broader range, highlighting a substantial decline in its quality. Significant disparities in mean values and ranges were evident, particularly in the case of nitrates and total dissolved solids, when the downstream section was compared with its upstream and midstream counterparts. These variations indicated a deteriorating downstream water quality profile, which is likely attributable to a combination of geological and anthropogenic influences. Despite the observed deterioration in the downstream water quality, this study underscores that the JRB within the upper Indus Basin remains safe and suitable for domestic and agricultural purposes. The JRB was evaluated for various irrigation water quality indices. 
The principal component analysis conducted in this study revealed distinct covariance patterns among water quality variables, with the first five components explaining approximately 79% of the total variance. Given these findings, we recommend the continued utilization of the JRB for irrigation and advocate for the preservation and enhancement of water quality in the downstream regions.
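The "first five components explain ~79%" statement is simply the share of the leading eigenvalues of the covariance (or correlation) matrix in their total; with hypothetical eigenvalues:

```python
def explained_variance_pct(eigenvalues, k):
    """Percentage of total variance captured by the k largest components."""
    vals = sorted(eigenvalues, reverse=True)
    return 100.0 * sum(vals[:k]) / sum(vals)

# Hypothetical eigenvalues for a correlation matrix of 10 water quality variables.
eig = [3.4, 1.8, 1.1, 0.9, 0.7, 0.6, 0.5, 0.4, 0.35, 0.25]
print(round(explained_variance_pct(eig, 5), 1))  # 79.0, matching the ~79% reported
```

The eigenvalues here are made up to reproduce the reported percentage; the study's loadings and component count would come from its own data.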


Subjects
Agricultural Irrigation, Spatial Analysis, Water Resources Conservation, Rivers/chemistry, Water Supply, Water Quality/standards
16.
BMJ Ment Health ; 27(1): 1-7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38960412

ABSTRACT

BACKGROUND: Circadian rhythms influence cognitive performance, which peaks in the morning for early chronotypes and in the evening for late chronotypes. It is unknown whether cognitive interventions are susceptible to such synchrony effects and could be optimised at certain times of day. OBJECTIVE: A pilot study testing whether the effectiveness of cognitive bias modification (CBM) for facial emotion processing was improved when delivered at a time of day synchronised to chronotype. METHODS: 173 healthy young adults (aged 18-25) with an early or late chronotype completed one online session of CBM training in either the morning (06:00 to 10:00 hours) or evening (18:00 to 22:00 hours). FINDINGS: There was moderate evidence that participants learnt better (higher post-training balance point) when they completed CBM training in the synchronous condition (evening for late chronotypes, morning for early chronotypes) compared with the asynchronous condition (morning for late chronotypes, evening for early chronotypes), controlling for pre-training balance point, sleep quality, and negative affect. There was also a group × condition interaction whereby late chronotypes learnt faster and more effectively in synchronous versus asynchronous conditions. CONCLUSIONS: This provides preliminary evidence that synchrony effects apply to this psychological intervention. Tailoring the delivery timing of CBM training to chronotype may optimise its effectiveness; this may be particularly important for late chronotypes, who were less able to adapt to non-optimal times of day, possibly because they experience more social jetlag. CLINICAL IMPLICATIONS: Delivery timing of CBM training should be considered when administering it to early and late chronotypes. This may generalise to other psychological interventions and is relevant for online interventions where the timing can be flexible.


Subjects
Circadian Rhythm, Cognitive Behavioral Therapy, Emotions, Humans, Pilot Projects, Male, Female, Young Adult, Adult, Adolescent, Circadian Rhythm/physiology, Cognitive Behavioral Therapy/methods, Emotions/physiology, Time Factors, Facial Expression, Chronotype
17.
Scand J Med Sci Sports ; 34(7): e14691, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38970442

ABSTRACT

Quantifying movement coordination in cross-country (XC) skiing, specifically the technique with its elemental forms, is challenging. Particularly, this applies when trying to establish a bidirectional transfer between scientific theory and practical experts' knowledge as expressed, for example, in ski instruction curricula. The objective of this study was to translate 14 curricula-informed distinct elements of the V2 ski-skating technique (horizontal and vertical posture, lateral tilt, head position, upper body rotation, arm swing, shoulder abduction, elbow flexion, hand and leg distance, plantar flexion, ski set-down, leg push-off, and gliding phase) into plausible, valid and applicable measures to make the technique training process more quantifiable and scientifically grounded. Inertial measurement unit (IMU) data of 10 highly experienced XC skiers who demonstrated the technique elements by two extreme forms each (e.g., anterior versus posterior positioning for the horizontal posture) were recorded. Element-specific principal component analyses (PCAs), driven by the variance produced by the technique extremes, resulted in movement components that express quantifiable measures of the underlying technique elements. Ten measures were found to be sensitive in distinguishing between the inputted extreme variations using statistical parametric mapping (SPM), whereas for four elements the SPM did not detect differences (lateral tilt, plantar flexion, ski set-down, and leg push-off). Applicability of the established technique measures was determined based on quantifying individual techniques through them. The study introduces a novel approach to quantitatively assess V2 ski-skating technique, which might help to enhance technique feedback and bridge the communication gap that often exists between practitioners and scientists.


Subjects
Posture, Principal Component Analysis, Skiing, Skiing/physiology, Humans, Male, Posture/physiology, Biomechanical Phenomena, Adult, Movement/physiology, Female, Young Adult, Arm/physiology, Shoulder/physiology, Rotation
18.
Gait Posture ; 113: 272-279, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38970929

ABSTRACT

BACKGROUND: Total ankle arthroplasty (TAA) is used to treat symptomatic end-stage ankle arthritis (AA). However, little is known about TAA's effects on gait symmetry. RESEARCH QUESTION: Determine whether symmetry changes from before surgery through two years following TAA, utilizing the normalized symmetry index (NSI) and statistical parametric mapping (SPM). METHODS: 141 patients with end-stage unilateral AA were evaluated from a previously collected prospective database, where each participant was tested within two weeks of surgery (Pre-Op) and at one and two years following TAA. Walking speed, hip extension angle and moment, hip flexion angle, ankle plantarflexion angle and moment, ankle dorsiflexion angle, weight acceptance (GRF1), and propulsive (GRF2) vertical ground reaction forces were calculated for each limb. Gait symmetry was assessed using the NSI. A linear mixed effects model with a single response for each gait symmetry variable was used to examine the fixed effect of follow-up time (Pre-Op, Post-1 yr, Post-2 yr) and the random effect of participant, with gait speed as a covariate in the model. A one-dimensional repeated measures analysis of variance (ANOVA) statistical parametric mapping (SPM) was completed to examine differences in the time-series NSI and determine regions of significant differences between follow-up times. RESULTS: Relative to Pre-Op values, GRF1 and GRF2 showed increased symmetry for discrete metrics and the time-series NSI across sessions. Hip extension moment had the largest symmetry improvement. Ankle plantarflexion angle was different between Pre-Op and Post-2 yr (p=0.010), and plantarflexion moment was different between Pre-Op and each post-operative session (p<0.001). The time-series ankle angle NSI was greater during the early stance phase in the Pre-Op session compared to Post-2 yr.
SIGNIFICANCE: Symmetry across most of the stance phase improved following TAA, indicating that TAA successfully improves gait symmetry; future work should determine whether these improvements restore symmetry to levels equivalent to healthy age-matched controls.
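Limb symmetry indices of the general form used here divide the inter-limb difference by a normalizing magnitude; the exact normalization in the study's NSI may differ, so the helper below (with invented ground-reaction-force values) is only a sketch:

```python
def symmetry_index(surgical, contralateral, norm=None):
    """Percent asymmetry between limbs; 0 means perfect symmetry.

    By default the inter-limb mean is the normalizer; pass `norm` (e.g. a
    peak magnitude across the gait cycle) to mimic a normalized variant.
    """
    denom = norm if norm is not None else 0.5 * (surgical + contralateral)
    return 100.0 * (surgical - contralateral) / denom

# Hypothetical peak vertical GRF (in body weights) before and after surgery.
print(round(symmetry_index(0.95, 1.10), 1))  # pre-op: clear asymmetry
print(round(symmetry_index(1.04, 1.07), 1))  # post-op: closer to 0
```

Computing this index at every point of the stance phase yields the time-series curve that the SPM analysis then compares across follow-up sessions.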

19.
Phys Med Biol ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981595

ABSTRACT

Head and neck cancer patients experience systematic anatomical changes as well as random day-to-day anatomical changes during fractionated radiotherapy treatment. Modelling the expected systematic anatomical changes could aid in creating treatment plans that are more robust against such changes. A patient-specific model (SM) and a population-average model (AM) are presented which are able to capture the systematic anatomical changes of some head and neck cancer patients over the course of radiotherapy treatment. Inter-patient correspondence aligned all patients to a model space. Intra-patient correspondence between each planning CT scan and on-treatment cone beam CT scans was obtained using diffeomorphic deformable image registration. The stationary velocity fields were then used to develop B-spline-based SMs and AMs. The models were evaluated geometrically and dosimetrically, and a leave-one-out method was used to compare their training and testing accuracy. Both SMs and AMs were able to capture systematic changes. The average surface distance between the registration-propagated contours and the contours generated by the SM was less than 2 mm, showing that the SMs are able to capture the anatomical changes which a patient experiences during the course of radiotherapy. The testing accuracy was lower than the training accuracy of the SM, suggesting that the model overfits to the limited data available and therefore also captures some of the random day-to-day changes. For most patients the AMs were a better estimate of the anatomical changes than assuming there were no changes, but the AMs could not capture the variability in the anatomical changes seen across all patients. No difference was seen between the training and testing accuracy of the AMs. These observations were highlighted in both the geometric and dosimetric evaluations and comparisons. The large patient variability highlights the need for more complex, more capable population models.

20.
J Proteome Res ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981598

ABSTRACT

Single-cell analysis is an active area of research in many fields of biology. Measurements at single-cell resolution allow researchers to study diverse populations without losing biologically meaningful information to sample averages. Many technologies have been used to study single cells, including mass spectrometry-based single-cell proteomics (SCP). SCP has seen a lot of growth over the past couple of years through improvements in data acquisition and analysis, leading to greater proteomic depth. Because method development has been the main focus in SCP, biological applications have been sprinkled in only as proof-of-concept. However, SCP methods now provide significant coverage of the proteome and have been implemented in many laboratories. Thus, a primary question to address in our community is whether the current state of technology is ready for widespread adoption for biological inquiry. In this Perspective, we examine the potential for SCP in three thematic areas of biological investigation: cell annotation, developmental trajectories, and spatial mapping. We identify that the primary limitation of SCP is sample throughput. As proteome depth has been the primary target for method development to date, we advocate for a change in focus to facilitate measuring tens of thousands of single-cell proteomes to enable biological applications beyond proof-of-concept.
