2.
Article in English | MEDLINE | ID: mdl-38717719

ABSTRACT

Traumatic brain injury is a major cause of morbidity in civilian as well as military populations. Computational simulations of injurious events are an important tool for understanding the biomechanics of brain injury and evaluating injury criteria and safety measures. However, these computational models are highly dependent on the material parameters used to represent the brain tissue. Reported material properties of tissue from the cerebrum and cerebellum remain poorly defined at high rates and with respect to anisotropy. In this work, brain tissue from the cerebrum and cerebellum of male Göttingen minipigs was tested in one of three directions relative to axon fibers in oscillatory simple shear over a large range of strain rates, from 0.025 to 250 s⁻¹. Brain tissue showed significant direction dependence in both regions, each with a single preferred loading direction. The tissue also showed strong rate dependence over the full range of rates considered. Transversely isotropic hyper-viscoelastic constitutive models were fit to experimental data using dynamic inverse finite element models to account for the wave propagation observed at high strain rates. The fit constitutive models predicted the response in all directions well at rates below 100 s⁻¹, after which they adequately predicted the initial two loading cycles, with the exception of the 250 s⁻¹ rate, where the models performed poorly. These constitutive models can be readily implemented in finite element packages and are suitable for simulation of both conventional and blast injury in porcine models, especially the Göttingen minipig.

3.
J Sleep Res ; : e14220, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38634269

ABSTRACT

It is well established that individuals differ in their response to sleep loss. However, existing methods to predict an individual's sleep-loss phenotype are not scalable or involve effort-dependent neurobehavioural tests. To overcome these limitations, we sought to predict an individual's level of resilience or vulnerability to sleep loss using electroencephalographic (EEG) features obtained from routine night sleep. To this end, we retrospectively analysed five studies in which 96 healthy young adults (41 women) completed a laboratory baseline-sleep phase followed by a sleep-loss challenge. After classifying subjects into sleep-loss phenotypic groups, we extracted two EEG features, slow-wave activity (SWA) power and SWA rise rate, from four channels during the first sleep cycle (median duration: 1.6 h) of the baseline nights. Using these data, we developed two sets of logistic regression classifiers (resilient versus not-resilient and vulnerable versus not-vulnerable) to predict the probability of sleep-loss resilience or vulnerability, respectively, and evaluated model performance using test datasets not used in model development. Consistently, the most predictive features came from the left cerebral hemisphere. For the resilient versus not-resilient classifiers, we obtained an average testing performance of 0.68 for the area under the receiver operating characteristic curve, 0.72 for accuracy, 0.50 for sensitivity, 0.84 for specificity, 0.61 for positive predictive value, and 3.59 for likelihood ratio. We obtained similar performance for the vulnerable versus not-vulnerable classifiers. These results indicate that logistic regression classifiers based on SWA power and SWA rise rate from routine night sleep can largely predict an individual's sleep-loss phenotype.
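The classifier type described above can be sketched as a minimal logistic regression fit by gradient descent; the SWA-like feature values, group means, and labels below are synthetic placeholders, not data from the study.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit a two-feature logistic-regression classifier by gradient descent.
    X: (n_samples, 2) matrix, e.g. SWA power and SWA rise rate per subject.
    y: (n_samples,) binary labels, e.g. 1 = resilient, 0 = not resilient."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of mean log-loss
        b -= lr * np.mean(p - y)
    return w, b

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Synthetic, well-separated groups (placeholder feature values)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2.0, 1.0], 0.3, size=(50, 2)),   # not resilient
               rng.normal([1.0, 0.5], 0.3, size=(50, 2))])  # resilient
y = np.array([0] * 50 + [1] * 50)
w, b = fit_logistic(X, y)
accuracy = np.mean((predict_proba(X, w, b) > 0.5) == y)
```

On real data one would of course evaluate on held-out subjects, as the study does, rather than on the training set.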

4.
Front Physiol ; 15: 1327948, 2024.
Article in English | MEDLINE | ID: mdl-38332989

ABSTRACT

A deep neural network-based artificial intelligence (AI) model was assessed for its utility in predicting the vital signs of hemorrhage patients and optimizing the management of fluid resuscitation in mass casualties. A cardio-respiratory computational model was used to generate synthetic data on hemorrhage casualties, and an application was created in which a limited data stream (the initial 10 min of vital-sign monitoring) could be used to predict the outcomes of different fluid-resuscitation allocations 60 min into the future. The predicted outcomes were then used to select the optimal resuscitation allocation for various simulated mass-casualty scenarios. This allowed the assessment of the potential benefits of using an allocation method based on personalized predictions of future vital signs versus a static population-based method that only uses currently available vital-sign information. The theoretical benefits of this approach included up to 46% additional casualties restored to healthy vital signs and a 119% increase in fluid-utilization efficiency. Although the study is subject to the limitations of synthetic data generated under specific assumptions, the work demonstrated the potential for incorporating neural network-based AI technologies in hemorrhage detection and treatment. The simulated injury and treatment scenarios delineated possible benefits and opportunities for using AI in pre-hospital trauma care. The greatest benefit of this technology lies in its ability to provide personalized interventions that optimize clinical outcomes under resource-limited conditions, such as civilian or military mass-casualty events involving moderate and severe hemorrhage.

5.
Sleep ; 47(3)2024 Mar 11.
Article in English | MEDLINE | ID: mdl-37947051

ABSTRACT

STUDY OBJECTIVES: Wearable sleep-tracker devices are ubiquitously used to measure sleep; however, the estimated sleep parameters often differ from those of the gold-standard polysomnography (PSG). It is unclear to what extent we can tolerate these errors within the context of a particular clinical or operational application. Here, we sought to develop a method to quantitatively determine whether a sleep tracker yields acceptable sleep-parameter estimates for assessing alertness impairment. METHODS: Using literature data, we characterized the sleep-measurement errors of 18 unique sleep-tracker devices with respect to PSG. Then, using predictions based on the unified model of performance, we compared the temporal variation of alertness, in terms of the psychomotor vigilance test mean response time, for simulations with and without added PSG-device sleep-measurement errors, for nominal schedules of 5, 8, or 9 hours of sleep per night, or an irregular sleep schedule, over 30 consecutive days. Finally, we deemed a device error acceptable when the predicted differences were smaller than the within-subject variability of 30 milliseconds. We also established the capability to estimate the extent to which a specific sleep-tracker device meets this acceptance criterion. RESULTS: On average, the 18 sleep-tracker devices overestimated sleep duration by 19 (standard deviation = 44) minutes. Using these errors for 30 consecutive days, we found that, regardless of sleep schedule, nearly 80% of the time the resulting predicted alertness differences were smaller than 30 milliseconds. CONCLUSIONS: We provide a method to quantitatively determine whether a sleep-tracker device produces sleep measurements that are operationally acceptable for fatigue management.
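The acceptance criterion described in METHODS can be illustrated with a short sketch; the predicted response-time traces below are hypothetical, and the 30-ms tolerance is the within-subject variability quoted above.

```python
def fraction_within_tolerance(rt_psg, rt_device, tol_ms=30.0):
    """Fraction of time points at which alertness predictions driven by
    device-measured sleep differ from PSG-driven predictions by less
    than the within-subject variability (30 ms)."""
    within = [abs(a - b) < tol_ms for a, b in zip(rt_psg, rt_device)]
    return sum(within) / len(within)

# Hypothetical predicted mean response times (ms) at six time points
psg_based    = [250, 260, 275, 290, 280, 265]
device_based = [255, 262, 310, 295, 285, 300]
frac = fraction_within_tolerance(psg_based, device_based)  # 4 of 6 points
```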


Subject(s)
Sleep , Wearable Electronic Devices , Humans , Reproducibility of Results , Sleep/physiology , Polysomnography/methods , Fatigue/therapy
6.
Front Neurosci ; 17: 1254154, 2023.
Article in English | MEDLINE | ID: mdl-37942142

ABSTRACT

Hyperalgesic priming, a form of neuroplasticity in peripheral nociceptors induced by inflammatory mediators, enhances the magnitude and duration of action potential (AP) firing in response to future inflammatory events and can potentially lead to pain chronification. The mechanisms underlying the development of hyperalgesic priming are not well understood, limiting the identification of novel therapeutic strategies to combat chronic pain. In this study, we used a computational model to identify key proteins whose modifications caused priming of muscle nociceptors and made them hyperexcitable to a subsequent inflammatory event. First, we extended a previously validated model of mouse muscle nociceptor sensitization to incorporate Epac-mediated interaction between two G protein-coupled receptor signaling pathways commonly activated by inflammatory mediators. Next, we calibrated and validated the model simulations of the nociceptor's AP response to both innocuous and noxious levels of mechanical force after two subsequent inflammatory events using literature data. Then, by performing global sensitivity analyses that simulated thousands of nociceptor-priming scenarios, we identified five ion channels and two molecular processes (from the 18 modeled transmembrane proteins and 29 intracellular signaling components) as potential regulators of the increase in AP firing in response to mechanical forces. Finally, when we simulated specific neuroplastic modifications in Kv1.1 and Nav1.7 alone as well as with simultaneous modifications in Nav1.7, Nav1.8, TRPA1, and Kv7.2, we observed a considerable increase in the fold change in the number of triggered APs in primed nociceptors. These results suggest that altering the expression of Kv1.1 and Nav1.7 might regulate the neuronal hyperexcitability in primed mechanosensitive muscle nociceptors.

7.
J Sleep Res ; : e14060, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37800178

ABSTRACT

Sleep loss impairs cognition; however, individuals differ in their response to sleep loss. Current methods to identify an individual's vulnerability to sleep loss involve time-consuming sleep-loss challenges and neurobehavioural tests. Here, we sought to identify electroencephalographic markers of sleep-loss vulnerability obtained from routine night sleep. We retrospectively analysed four studies in which 50 healthy young adults (21 women) completed a laboratory baseline-sleep phase followed by a sleep-loss challenge. After classifying subjects as resilient or vulnerable to sleep loss, we extracted three electroencephalographic features from four channels during the baseline nights, evaluated the discriminatory power of these features using the first two studies (discovery), and assessed reproducibility of the results using the remaining two studies (reproducibility). In the discovery analysis, we found that, compared to resilient subjects, vulnerable subjects exhibited: (1) higher slow-wave activity power in channel O1 (p < 0.0042, corrected for multiple comparisons) and in channels O2 and C3 (p < 0.05, uncorrected); (2) higher slow-wave activity rise rate in channels O1 and O2 (p < 0.05, uncorrected); and (3) lower sleep spindle frequency in channels C3 and C4 (p < 0.05, uncorrected). Our reproducibility analysis confirmed the discovery results on slow-wave activity power and slow-wave activity rise rate, and for these two electroencephalographic features we observed consistent group-difference trends across all four channels in both analyses. The higher slow-wave activity power and slow-wave activity rise rate in vulnerable individuals suggest that they have a persistently higher sleep pressure under normal rested conditions.
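The corrected threshold of p < 0.0042 quoted above is consistent with a Bonferroni correction over the 12 comparisons (3 EEG features × 4 channels); assuming that procedure, the arithmetic is:

```python
def bonferroni_threshold(alpha, n_features, n_channels):
    """Per-comparison significance threshold after Bonferroni correction
    over n_features x n_channels tests."""
    return alpha / (n_features * n_channels)

# 3 EEG features x 4 channels at a family-wise alpha of 0.05
threshold = bonferroni_threshold(0.05, 3, 4)  # ~0.0042
```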

8.
Front Bioeng Biotechnol ; 11: 1250937, 2023.
Article in English | MEDLINE | ID: mdl-37854880

ABSTRACT

During U.S. Army basic combat training (BCT), women are more prone to lower-extremity musculoskeletal injuries, including stress fracture (SF) of the tibia, with injury rates two to four times higher than those in men. There is evidence to suggest that the different injury rates are, in part, due to sex-specific differences in running biomechanics, including lower-extremity joint kinematics and kinetics, which are not fully understood, particularly when running with external load. To address this knowledge gap, we collected computed tomography images and motion-capture data from 41 young, healthy adults (20 women and 21 men) running on an instrumented treadmill at 3.0 m/s with loads of 0.0 kg, 11.3 kg, or 22.7 kg. Using individualized computational models, we quantified the running biomechanics and estimated tibial SF risk over 10 weeks of BCT, for each load condition. Across all load conditions, compared to men, women had a significantly smaller flexion angle at the trunk (16.9%-24.6%) but larger flexion angles at the ankle (14.0%-14.7%). Under load-carriage conditions, women had a larger flexion angle at the hip (17.7%-23.5%). In addition, women had a significantly smaller hip extension moment (11.8%-20.0%) and ankle plantarflexion moment (10.2%-14.3%), but larger joint reaction forces (JRFs) at the hip (16.1%-22.0%), knee (9.1%-14.2%), and ankle (8.2%-12.9%). Consequently, we found that women had a greater increase in tibial strain and SF risk than men as load increased, indicating a higher susceptibility to injury. When load carriage increased from 0.0 kg to 22.7 kg, SF risk increased by about 250% in women but only 133% in men. These results provide quantitative evidence to support the Army's new training and testing doctrine as it shifts to a more personalized approach that accounts for sex and individual differences.

9.
BMC Musculoskelet Disord ; 24(1): 604, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37488528

ABSTRACT

BACKGROUND: Tibial stress fracture is a debilitating musculoskeletal injury that diminishes the physical performance of individuals who engage in high-volume running, including Service members during basic combat training (BCT) and recreational athletes. While several studies have shown that reducing stride length decreases musculoskeletal loads and the potential risk of tibial injury, we do not know whether stride-length reduction affects individuals of varying stature differently. METHODS: We investigated the effects of reducing the running stride length on the biomechanics of the lower extremity of young, healthy women of different statures. Using individualized musculoskeletal and finite-element models of women of short (N = 6), medium (N = 7), and tall (N = 7) statures, we computed the joint kinematics and kinetics at the lower extremity and tibial strain for each participant as they ran on a treadmill at 3.0 m/s with their preferred stride length and with a stride length reduced by 10%. Using a probabilistic model, we estimated the stress-fracture risk for running regimens representative of U.S. Army Soldiers during BCT and recreational athletes training for a marathon. RESULTS: When study participants reduced their stride length by 10%, the joint kinetics, kinematics, tibial strain, and stress-fracture risk were not significantly different among the three stature groups. Compared to the preferred stride length, a 10% reduction in stride length significantly decreased peak hip (p = 0.002) and knee (p < 0.001) flexion angles during the stance phase. In addition, it significantly decreased the peak hip adduction (p = 0.013), hip internal rotation (p = 0.004), knee extension (p = 0.012), and ankle plantar flexion (p = 0.026) moments, as well as the hip, knee, and ankle joint reaction forces (p < 0.001) and tibial strain (p < 0.001). Finally, for the simulated regimens, reducing the stride length decreased the relative risk of stress fracture by as much as 96%. 
CONCLUSIONS: Our results show that reducing stride length by 10% decreases musculoskeletal loads, tibial strain, and stress-fracture risk, regardless of stature. We also observed large between-subject variability, which supports the development of individualized training strategies to decrease the incidence of stress fracture.
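One common way a probabilistic model of this kind can turn per-week injury probabilities into a regimen-level risk is to assume independent weekly failures; the weekly risk value below is hypothetical, for illustration only.

```python
def cumulative_fracture_risk(weekly_risks):
    """Probability of at least one stress fracture over a regimen,
    assuming independent per-week failure probabilities."""
    survival = 1.0
    for p in weekly_risks:
        survival *= 1.0 - p
    return 1.0 - survival

# Hypothetical constant per-week risk over a 10-week BCT regimen
risk = cumulative_fracture_risk([0.005] * 10)  # ~4.9% over the regimen
```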


Subject(s)
Fractures, Stress , Humans , Female , Biomechanical Phenomena , Lower Extremity , Tibia , Knee Joint
10.
Shock ; 60(2): 199-205, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37335312

ABSTRACT

Background: Hemorrhage remains the leading cause of death on the battlefield. This study aims to assess the ability of an artificial intelligence triage algorithm to automatically analyze vital-sign data and stratify hemorrhage risk in trauma patients. Methods: Here, we developed the APPRAISE-Hemorrhage Risk Index (HRI) algorithm, which uses three routinely measured vital signs (heart rate and diastolic and systolic blood pressures) to identify trauma patients at greatest risk of hemorrhage. The algorithm preprocesses the vital signs to discard unreliable data, analyzes reliable data using an artificial intelligence-based linear regression model, and stratifies hemorrhage risk into low (HRI:I), average (HRI:II), and high (HRI:III). Results: To train and test the algorithm, we used 540 h of continuous vital-sign data collected from 1,659 trauma patients in prehospital and hospital (i.e., emergency department) settings. We defined hemorrhage cases (n = 198) as those patients who received ≥1 unit of packed red blood cells within 24 h of hospital admission and had documented hemorrhagic injuries. The APPRAISE-HRI stratification yielded a hemorrhage likelihood ratio (95% confidence interval) of 0.28 (0.13-0.43) for HRI:I, 1.00 (0.85-1.15) for HRI:II, and 5.75 (3.57-7.93) for HRI:III, suggesting that patients categorized in the low-risk (high-risk) category were at least 3-fold less (more) likely to have hemorrhage than those in the average trauma population. We obtained similar results in a cross-validation analysis. Conclusions: The APPRAISE-HRI algorithm provides a new capability to evaluate routine vital signs and alert medics to specific casualties who have the highest risk of hemorrhage, to optimize decision-making for triage, treatment, and evacuation.
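A stratum likelihood ratio of the kind reported above can be computed as P(stratum | hemorrhage) / P(stratum | no hemorrhage). A sketch using the study's overall counts (198 hemorrhage cases among 1,659 patients) with hypothetical within-stratum counts:

```python
def stratum_likelihood_ratio(hem_in_stratum, hem_total,
                             nonhem_in_stratum, nonhem_total):
    """Likelihood ratio of a risk stratum:
    P(stratum | hemorrhage) / P(stratum | no hemorrhage)."""
    return (hem_in_stratum / hem_total) / (nonhem_in_stratum / nonhem_total)

# 198 hemorrhage cases and 1659 - 198 = 1461 non-hemorrhage patients;
# the within-stratum counts (60 and 77) are hypothetical
lr_high = stratum_likelihood_ratio(60, 198, 77, 1461)
```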


Subject(s)
Artificial Intelligence , Triage , Humans , Triage/methods , Hemorrhage/diagnosis , Hemorrhage/therapy , Algorithms , Emergency Service, Hospital
11.
Front Neurosci ; 17: 1147437, 2023.
Article in English | MEDLINE | ID: mdl-37250415

ABSTRACT

Sensory neurons embedded in muscle tissue that initiate pain sensations, i.e., nociceptors, are temporarily sensitized by inflammatory mediators during musculoskeletal trauma. These neurons transduce peripheral noxious stimuli into an electrical signal [i.e., an action potential (AP)] and, when sensitized, demonstrate lower activation thresholds and a heightened AP response. We still do not understand the relative contributions of the various transmembrane proteins and intracellular signaling processes that drive the inflammation-induced hyperexcitability of nociceptors. In this study, we used computational analysis to identify key proteins that could regulate the inflammation-induced increase in the magnitude of AP firing in mechanosensitive muscle nociceptors. First, we extended a previously validated model of a mechanosensitive mouse muscle nociceptor to incorporate two inflammation-activated G protein-coupled receptor (GPCR) signaling pathways and validated the model simulations of inflammation-induced nociceptor sensitization using literature data. Then, by performing global sensitivity analyses that simulated thousands of inflammation-induced nociceptor sensitization scenarios, we identified three ion channels and four molecular processes (from the 17 modeled transmembrane proteins and 28 intracellular signaling components) as potential regulators of the inflammation-induced increase in AP firing in response to mechanical forces. Moreover, we found that simulating single knockouts of transient receptor potential ankyrin 1 (TRPA1) and reducing the rates of Gαq-coupled receptor phosphorylation and Gαq subunit activation considerably altered the excitability of nociceptors (i.e., each modification increased or decreased the inflammation-induced fold change in the number of triggered APs compared to when all channels were present). 
These results suggest that altering the expression of TRPA1 or the concentration of intracellular Gαq might regulate the inflammation-induced increase in AP response of mechanosensitive muscle nociceptors.
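A sampling-based global sensitivity analysis of the kind described can be sketched as follows; the toy output model, the parameter names used as dictionary keys, and the scaling range are illustrative assumptions, not the paper's nociceptor model.

```python
import random

def sensitivity_ranking(model, param_names, n_samples=1000, seed=0):
    """Crude sampling-based global sensitivity analysis: randomly scale
    each parameter, record the model output, and rank parameters by the
    absolute correlation between scale factor and output."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_samples):
        scales = {p: rng.uniform(0.5, 2.0) for p in param_names}
        runs.append((scales, model(scales)))
    ys = [y for _, y in runs]
    my = sum(ys) / len(ys)
    def abs_corr(p):
        xs = [s[p] for s, _ in runs]
        mx = sum(xs) / len(xs)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return abs(cov / (sx * sy))
    return sorted(param_names, key=abs_corr, reverse=True)

# Toy stand-in for the nociceptor model: AP count dominated by "Nav1.7"
toy_ap_count = lambda s: 10 * s["Nav1.7"] + 2 * s["TRPA1"] + 0.5 * s["Kv1.1"]
ranking = sensitivity_ranking(toy_ap_count, ["Kv1.1", "TRPA1", "Nav1.7"])
```

More rigorous variance-based methods (e.g., Sobol indices) are typically used in practice, but the ranking idea is the same.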

12.
IEEE Trans Biomed Eng ; 70(8): 2445-2453, 2023 08.
Article in English | MEDLINE | ID: mdl-37027627

ABSTRACT

OBJECTIVE: Overuse musculoskeletal injuries, often precipitated by walking or running with heavy loads, are the leading cause of lost-duty days or discharge during basic combat training (BCT) in the U.S. military. The present study investigates the impact of stature and load carriage on the running biomechanics of men during BCT. METHODS: We collected computed tomography images and motion-capture data for 21 young, healthy men of short, medium, and tall stature (n = 7 in each group) running with no load, an 11.3-kg load, and a 22.7-kg load. We then developed individualized musculoskeletal finite-element models to determine the running biomechanics for each participant under each condition, and used a probabilistic model to estimate the risk of tibial stress fracture during a 10-week BCT regimen. RESULTS: Under all load conditions, we found that the running biomechanics were not significantly different among the three stature groups. However, compared to no load, a 22.7-kg load significantly decreased the stride length, while significantly increasing the joint forces and moments at the lower extremities, as well as the tibial strain and stress-fracture risk. CONCLUSION: Load carriage but not stature significantly affected the running biomechanics of healthy men. SIGNIFICANCE: We expect that the quantitative analysis reported here may help guide training regimens and reduce the risk of stress fracture.


Subject(s)
Fractures, Stress , Male , Humans , Fractures, Stress/diagnostic imaging , Biomechanical Phenomena , Weight-Bearing , Lower Extremity , Walking
13.
Sleep ; 46(7)2023 07 11.
Article in English | MEDLINE | ID: mdl-36987747

ABSTRACT

STUDY OBJECTIVES: If properly consumed, caffeine can safely and effectively mitigate the effects of sleep loss on alertness. However, there are no tools to determine the amount and time to consume caffeine to maximize its effectiveness. Here, we extended the capabilities of the 2B-Alert app, a unique smartphone application that learns an individual's trait-like response to sleep loss, to provide personalized caffeine recommendations to optimize alertness. METHODS: We prospectively validated 2B-Alert's capabilities in a 62-hour total sleep deprivation study in which 21 participants used the app to measure their alertness throughout the study via the psychomotor vigilance test (PVT). Using PVT data collected during the first 36 hours of the sleep challenge, the app learned the participant's sleep-loss response and provided personalized caffeine recommendations so that each participant would sustain alertness at a pre-specified target level (mean response time of 270 milliseconds) during a 6-hour period starting at 44 hours of wakefulness, using the least amount of caffeine possible. Starting at 42 hours, participants consumed 0 to 800 mg of caffeine, per the app recommendation. RESULTS: 2B-Alert recommended no caffeine to five participants, 100-400 mg to 11 participants, and 500-800 mg to five participants. Regardless of the consumed amount, participants sustained the target alertness level ~80% of the time. CONCLUSIONS: 2B-Alert automatically learns an individual's phenotype and provides personalized caffeine recommendations in real time so that individuals achieve a desired alertness level regardless of their sleep-loss susceptibility.
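The dose-selection logic described, finding the least caffeine that keeps predicted alertness at or below the 270-ms target, can be sketched with a toy individualized model; the linear dose-response used here is an assumption for illustration, not the app's learned model.

```python
def min_effective_dose(doses_mg, predict_mean_rt, target_ms=270.0):
    """Smallest caffeine dose whose predicted mean response time meets
    the target alertness level; None if no dose suffices."""
    for dose in sorted(doses_mg):
        if predict_mean_rt(dose) <= target_ms:
            return dose
    return None

# Toy individualized response: each 100 mg lowers predicted RT by 10 ms
def toy_model(dose_mg, baseline_rt_ms=300.0):
    return baseline_rt_ms - 0.1 * dose_mg

dose = min_effective_dose(range(0, 900, 100), toy_model)  # -> 300
```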


Subject(s)
Caffeine , Mobile Applications , Humans , Caffeine/pharmacology , Psychomotor Performance/physiology , Attention/physiology , Wakefulness/physiology , Reaction Time/physiology , Sleep Deprivation
14.
Malar J ; 22(1): 56, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36788578

ABSTRACT

BACKGROUND: Spiroindolone and pyrazoleamide antimalarial compounds target Plasmodium falciparum P-type ATPase (PfATP4) and induce disruption of intracellular Na+ homeostasis. Recently, a PfATP4 mutation was discovered that confers resistance to a pyrazoleamide while increasing sensitivity to a spiroindolone. Transcriptomic and metabolic adaptations that underlie this seemingly contradictory response of P. falciparum to sublethal concentrations of each compound were examined to understand how the parasite differently accommodates each type of PfATP4 disruption. METHODS: A genetically engineered P. falciparum Dd2 strain (Dd2A211V) carrying an Ala211Val (A211V) mutation in PfATP4 was used to identify metabolic adaptations associated with the mutation that results in decreased sensitivity to PA21A092 (a pyrazoleamide) and increased sensitivity to KAE609 (a spiroindolone). First, sublethal doses of PA21A092 and KAE609 causing substantial reduction (30-70%) in Dd2A211V parasite replication were identified. Then, at this sublethal dose of PA21A092 (or KAE609), metabolomic and transcriptomic data were collected during the first intraerythrocytic developmental cycle. Finally, the time-resolved data were integrated with a whole-genome metabolic network model of P. falciparum to characterize antimalarial-induced physiological adaptations. RESULTS: Sublethal treatment with PA21A092 caused significant (p < 0.001) alterations in the abundances of 91 Plasmodium gene transcripts, whereas only 21 transcripts were significantly altered due to sublethal treatment with KAE609. In the metabolomic data, a substantial alteration (≥ fourfold) in the abundances of carbohydrate metabolites in the presence of either compound was found. The estimated rates of macromolecule syntheses between the two antimalarial-treated conditions were also comparable, except for the rate of lipid synthesis.
A closer examination of parasite metabolism in the presence of either compound indicated statistically significant differences in enzymatic activities associated with synthesis of phosphatidylcholine, phosphatidylserine, and phosphatidylinositol. CONCLUSION: The results of this study suggest that malaria parasites activate protein kinases via phospholipid-dependent signalling in response to the ionic perturbation induced by the Na+ homeostasis disruptor PA21A092. Therefore, targeted disruption of phospholipid signalling in PA21A092-resistant parasites could be a means to block the emergence of resistance to PA21A092.


Subject(s)
Antimalarials , Malaria, Falciparum , Malaria , Parasites , Animals , Antimalarials/therapeutic use , Malaria/drug therapy , Malaria, Falciparum/parasitology , Plasmodium falciparum , Phospholipids/metabolism , Phospholipids/therapeutic use
15.
Med Sci Sports Exerc ; 55(4): 751-764, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36730025

ABSTRACT

INTRODUCTION: An uncontrollably rising core body temperature (TC) is an indicator of an impending exertional heat illness. However, measuring TC invasively in field settings is challenging. By contrast, wearable sensors combined with machine-learning algorithms can continuously monitor TC nonintrusively. Here, we prospectively validated 2B-Cool, a hardware/software system that automatically learns how individuals respond to heat stress and provides individualized estimates of TC, 20-min-ahead predictions, and early warning of a rising TC. METHODS: We performed a crossover heat stress study in an environmental chamber, involving 11 men and 11 women (mean ± SD age = 20 ± 2 yr) who performed three bouts of varying physical activities on a treadmill over a 7.5-h trial, each under four different clothing and environmental conditions. Subjects wore the 2B-Cool system, consisting of a smartwatch, which collected vital signs, and a paired smartphone, which housed machine-learning algorithms and used the vital-sign data to make individualized real-time forecasts. Subjects also wore a chest strap heart rate sensor and a rectal probe for comparison purposes. RESULTS: We observed very good agreement between the 2B-Cool forecasts and the measured TC, with a mean bias of 0.16°C for TC estimates and nearly 75% of measurements falling within the 95% prediction intervals of ±0.62°C for the 20-min predictions. The early-warning system results for a 38.50°C threshold yielded a 98% sensitivity, an 81% specificity, a prediction horizon of 35 min, and a false alarm rate of 0.12 events per hour. We observed no sex differences in the measured or predicted peak TC. CONCLUSION: 2B-Cool provides early warning of a rising TC with a sufficient lead time to enable clinical interventions and to help reduce the risk of exertional heat illness.
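A threshold-crossing early-warning rule of the kind evaluated above can be sketched as follows; the forecast trace is hypothetical, while the 38.50 °C threshold is the one quoted in RESULTS.

```python
def early_warnings(forecast_tc, threshold=38.50):
    """Indices at which the 20-min-ahead core-temperature forecast
    first crosses the warning threshold (rising edges only)."""
    alarms, above = [], False
    for i, t in enumerate(forecast_tc):
        if t >= threshold and not above:
            alarms.append(i)
        above = t >= threshold
    return alarms

# Hypothetical forecast trace (deg C), one sample per minute
trace = [37.8, 38.1, 38.4, 38.6, 38.7, 38.4, 38.3, 38.55]
alarm_minutes = early_warnings(trace)  # alarms at minutes 3 and 7
```

Counting alarms that are not followed by a true crossing, divided by trial hours, would give the false-alarm rate reported in the study.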


Subject(s)
Heat Stress Disorders , Wearable Electronic Devices , Male , Humans , Female , Adolescent , Young Adult , Adult , Body Temperature/physiology , Cold Temperature , Exercise/physiology , Heat Stress Disorders/diagnosis , Heat Stress Disorders/prevention & control , Hot Temperature
16.
Eur J Appl Physiol ; 123(5): 1125-1134, 2023 May.
Article in English | MEDLINE | ID: mdl-36651993

ABSTRACT

INTRODUCTION: Personal protective equipment (PPE) inhibits heat dissipation and elevates heat strain. Impaired cooling with PPE warrants investigation into practical strategies to improve work capacity and mitigate exertional heat illness. PURPOSE: Examine the physiological and subjective effects of forearm immersion (FC), fan mist (MC), and passive cooling (PC) following three intermittent treadmill bouts while wearing PPE. METHODS: Twelve males (27 ± 6 years; 57.6 ± 6.2 ml/kg/min; 78.3 ± 8.1 kg; 183.1 ± 7.2 cm) performed three 50-min (10 min each of 40%, 70%, 40%, 60%, 50% vVO2max) treadmill bouts in the heat (36 °C, 30% relative humidity). Thirty minutes of cooling followed each bout, using one of the three strategies per trial. Rectal temperature (Tcore), skin temperature (Tsk), heart rate (HR), heart rate recovery (HRR), rating of perceived exertion (RPE), thirst, thermal sensation (TS), and fatigue were obtained. Repeated-measures analysis of variance (condition × time) detected differences between interventions. RESULTS: Final Tcore was similar between trials (P > .05). Cooling rates were larger in FC and MC vs PC following bout one (P < .05). HRR was greatest in FC following bouts two (P = .013) and three (P < .001). Tsk, fluid consumption, and sweat rate were similar between all trials (P > .05). TS and fatigue during bout three were lower in MC, despite similar Tcore and HR. CONCLUSION: Utilizing FC and MC during intermittent work in the heat with PPE yields some thermoregulatory and cardiovascular benefit, but military health and safety personnel should explore new and novel strategies to mitigate risk and maximize performance under hot conditions while wearing PPE.


Subject(s)
Body Temperature Regulation , Hot Temperature , Male , Humans , Body Temperature Regulation/physiology , Skin Temperature , Personal Protective Equipment , Fatigue , Heart Rate/physiology , Body Temperature , Protective Clothing
17.
J Sleep Res ; 32(2): e13626, 2023 04.
Article in English | MEDLINE | ID: mdl-35521938

ABSTRACT

To be effective as a key component of fatigue-management systems, biomathematical models that predict alertness impairment as a function of time of day, sleep history, and caffeine consumption must demonstrate the ability to make accurate predictions across a range of sleep-loss and caffeine schedules. Here, we assessed the ability of the previously reported unified model of performance (UMP) to predict alertness impairment at the group-average and individualised levels in a comprehensive set of 12 studies, including 22 sleep and caffeine conditions, for a total of 301 unique subjects. Given sleep and caffeine schedules, the UMP predicted alertness impairment based on the psychomotor vigilance test (PVT) for the duration of the schedule. To quantify prediction performance, we computed the root mean square error (RMSE) between model predictions and PVT data, and the fraction of measured PVTs that fell within the models' prediction intervals (PIs). For the group-average model predictions, the overall RMSE was 43 ms (range 15-74 ms) and the fraction of PVTs within the PIs was 80% (range 41%-100%). At the individualised level, the UMP could predict alertness for 81% of the subjects, with an overall average RMSE of 64 ms (range 32-147 ms) and fraction of PVTs within the PIs conservatively estimated as 71% (range 41%-100%). Altogether, these results suggest that, for the group-average model and 81% of the individualised models, in three out of four PVT measurements we cannot distinguish between study data and model predictions.
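The two performance metrics used above, RMSE against PVT data and the fraction of measurements falling within the prediction intervals, can be computed as in this sketch; the response times and the ±25-ms intervals are hypothetical values.

```python
import math

def rmse(pred, obs):
    """Root mean square error between model predictions and PVT data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pi_coverage(obs, lower, upper):
    """Fraction of observations falling inside the prediction intervals."""
    inside = [lo <= o <= hi for o, lo, hi in zip(obs, lower, upper)]
    return sum(inside) / len(inside)

# Hypothetical mean response times (ms) with assumed +/- 25 ms intervals
pred = [250, 270, 300, 320]
obs = [260, 265, 330, 310]
error = rmse(pred, obs)
coverage = pi_coverage(obs, [p - 25 for p in pred], [p + 25 for p in pred])
```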


Subject(s)
Caffeine, Sleep Deprivation, Humans, Attention, Caffeine/pharmacology, Psychomotor Performance
18.
J Biomech Eng ; 145(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-36524865

ABSTRACT

Traumatic brain injury (TBI), particularly from explosive blasts, is a major cause of casualties in modern military conflicts. Computational models are an important tool in understanding the underlying biomechanics of TBI but are highly dependent on the mechanical properties of soft tissue to produce accurate results. Reported material properties of brain tissue can vary by several orders of magnitude between studies, and no published set of material parameters exists for porcine brain tissue at strain rates relevant to blast. In this work, brain tissue from the brainstem, cerebellum, and cerebrum of freshly euthanized adolescent male Göttingen minipigs was tested in simple shear and unconfined compression at strain rates ranging from quasi-static (QS) to 300 s⁻¹. Brain tissue showed significant strain-rate stiffening in both shear and compression. Minimal differences were seen between different regions of the brain. Both hyperelastic and hyper-viscoelastic constitutive models were fit to experimental stress data, considering either a single loading mode (unidirectional) or two loading modes together (bidirectional). The unidirectional hyper-viscoelastic models with an Ogden hyperelastic representation and a one-term Prony series best captured the response of brain tissue in all regions and at all rates. The bidirectional models were generally able to capture the response of the tissue in high-rate shear and all compression modes, but not in QS shear. Our constitutive models provide the first set of material parameters for porcine brain tissue at loading modes and rates relevant to blast injury.
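The best-fitting model form, an Ogden hyperelastic representation with a one-term Prony series, can be sketched for the incompressible uniaxial case; the parameter values below are hypothetical placeholders, not the fitted values from the study:

```python
import math

def ogden_uniaxial_stress(stretch, mu, alpha):
    """Principal Cauchy stress for an incompressible one-term Ogden model in
    uniaxial loading: sigma = (2*mu/alpha) * (stretch**alpha - stretch**(-alpha/2))."""
    return (2.0 * mu / alpha) * (stretch ** alpha - stretch ** (-alpha / 2.0))

def prony_relaxation(t, g_inf, g1, tau1):
    """One-term Prony series reduced relaxation function: g(t) = g_inf + g1*exp(-t/tau1),
    with g_inf + g1 = 1 so that g(0) = 1."""
    return g_inf + g1 * math.exp(-t / tau1)

# Hypothetical parameters (Pa, dimensionless, seconds)
sigma0 = ogden_uniaxial_stress(0.9, mu=1000.0, alpha=-4.0)  # 10% compression
g = prony_relaxation(0.05, g_inf=0.4, g1=0.6, tau1=0.01)
```

In a hyper-viscoelastic fit, the instantaneous Ogden stress is convolved with g(t) to capture the rate dependence seen in the experiments.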


Subject(s)
Brain Injuries, Traumatic, Brain, Swine, Animals, Male, Swine, Miniature, Stress, Mechanical, Biomechanical Phenomena, Elasticity, Viscosity
19.
Int J Numer Method Biomed Eng ; 39(1): e3662, 2023 01.
Article in English | MEDLINE | ID: mdl-36385572

ABSTRACT

Mathematical models of human cardiovascular and respiratory systems provide a viable alternative to generate synthetic data to train artificial intelligence (AI) clinical decision-support systems and assess closed-loop control technologies for military medical applications. However, existing models are either complex, standalone systems that lack the interface to other applications or fail to capture the essential features of the physiological responses to the major causes of battlefield trauma (i.e., hemorrhage and airway compromise). To address these limitations, we developed the cardio-respiratory (CR) model by expanding and integrating two previously published models of the cardiovascular and respiratory systems. We compared the vital signs predicted by the CR model with those from three other models, using experimental data from 27 subjects in five studies, involving hemorrhage, fluid resuscitation, and respiratory perturbations. Overall, the CR model yielded relatively small root mean square errors (RMSEs) for mean arterial pressure (MAP; 20.88 mm Hg), end-tidal CO2 (ETCO2; 3.50 mm Hg), O2 saturation (SpO2; 3.40%), and arterial O2 pressure (PaO2; 10.06 mm Hg), but a relatively large RMSE for heart rate (HR; 70.23 beats/min). In addition, the RMSEs for the CR model were 3% to 10% smaller than those of the three other models for HR, 11% to 15% smaller for ETCO2, 0% to 33% smaller for SpO2, and 10% to 64% smaller for PaO2, while they were similar for MAP. In conclusion, the CR model balances simplicity and accuracy, while qualitatively and quantitatively capturing human physiological responses to battlefield trauma, supporting its use to train and assess emerging AI and control systems.
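The percentage comparisons quoted here are relative RMSE reductions against the competing models. A minimal sketch with hypothetical RMSE values (not the reported ones):

```python
def rmse_reduction_pct(rmse_ref, rmse_model):
    """Percent by which rmse_model is smaller than a reference model's RMSE."""
    return 100.0 * (rmse_ref - rmse_model) / rmse_ref

# Hypothetical ETCO2 RMSEs (mm Hg): reference model vs. the CR model
pct = rmse_reduction_pct(4.0, 3.5)  # CR model 12.5% smaller in this toy case
```

Computed per vital sign against each competing model, this yields ranges like the 11% to 15% reported for ETCO2.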


Subject(s)
Artificial Intelligence, Lung, Humans, Hemorrhage, Arterial Pressure/physiology, Models, Theoretical
20.
J Comput Aided Mol Des ; 36(12): 867-878, 2022 12.
Article in English | MEDLINE | ID: mdl-36272041

ABSTRACT

The main limitation in developing deep neural network (DNN) models to predict bioactivity properties of chemicals is the lack of sufficient assay data to train the network's classification layers. Focusing on feedforward DNNs that use atom- and bond-based structural fingerprints as input, we examined whether layers of a fully trained DNN based on large amounts of data to predict one property could be used to develop DNNs to predict other related or unrelated properties based on limited amounts of data. Hence, we assessed whether and under what conditions the dense layers of a pre-trained DNN could be transferred and used for the development of another DNN associated with limited training data. We carried out a quantitative study employing more than 400 pairs of assay datasets, where we used fully trained layers from a large dataset to augment the training of a small dataset. We found that the higher the correlation r between two assay datasets, the more effectively transfer learning reduced the prediction errors of the smaller-dataset DNN. The reduction in mean squared prediction errors ranged from 10% to 20% for every 0.1 increase in r² between the datasets, with the bulk of the error reductions associated with transfers of the first dense layer. Transfer of other dense layers did not result in additional benefits, suggesting that deeper dense layers conveyed more specialized and assay-specific information. Importantly, depending on the dataset correlation, training sample size could be reduced by up to tenfold without any loss of prediction accuracy.
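The core idea of transferring the first dense layer can be sketched as freezing a pre-trained layer as a fixed feature extractor and fitting only a new output head on the small target-assay dataset. The weights, dimensions, and data below are random placeholders, and a ridge-regression head stands in for the retrained layers; this is a conceptual sketch, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pre-trained" first dense layer (fingerprint dim 64 -> hidden dim 32)
W1 = rng.standard_normal((64, 32))
b1 = np.zeros(32)

def frozen_features(x):
    """Apply the transferred first dense layer (frozen weights) with ReLU."""
    return np.maximum(x @ W1 + b1, 0.0)

# Small target-assay dataset: fit only a new output head (ridge regression)
X_small = rng.standard_normal((40, 64))
y_small = rng.standard_normal(40)
H = frozen_features(X_small)
lam = 1e-2  # ridge penalty
w_head = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y_small)

def predict(x):
    """Predict with frozen transferred layer + newly fitted head."""
    return frozen_features(x) @ w_head
```

Because only the head is fitted, the small dataset trains far fewer parameters, which is why higher inter-assay correlation (more reusable first-layer features) translates into lower prediction error.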


Subject(s)
Machine Learning, Neural Networks, Computer