Results 1 - 9 of 9
1.
Biostatistics ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103178

ABSTRACT

The under-5 mortality rate (U5MR), a critical health indicator, is typically estimated from household surveys in low- and middle-income countries. Spatio-temporal disaggregation of household survey data can lead to highly variable estimates of U5MR, necessitating the use of smoothing models which borrow information across space and time. The assumptions of common smoothing models may be unrealistic when certain time periods or regions are expected to have shocks in mortality relative to their neighbors, which can lead to oversmoothing of U5MR estimates. In this paper, we develop a spatial and temporal smoothing approach based on Gaussian Markov random field models which incorporates knowledge of these expected shocks in mortality. In a simulation study, we demonstrate the potential for these models to improve upon alternatives that do not incorporate knowledge of expected shocks. We apply these models to estimate U5MR in Rwanda at the national level from 1985 to 2019, a time period which includes the Rwandan civil war and genocide.
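The core idea, penalized temporal smoothing that deliberately breaks the smoothness penalty at a known shock, can be sketched in a few lines. This is a minimal illustration of the general principle, not the authors' model: the function name, the first-order (RW1-style) difference penalty, and the fixed smoothing weight `lam` are all assumptions made for the sketch.

```python
import numpy as np

def smooth_with_shocks(y, lam=10.0, shock_years=()):
    """Penalized temporal smoother: minimize
    ||y - x||^2 + lam * sum_t (x_t - x_{t-1})^2,
    dropping the difference penalty across known shock boundaries."""
    T = len(y)
    D = np.zeros((T - 1, T))
    for t in range(T - 1):
        if t + 1 in shock_years:
            continue  # no smoothing across a shock boundary
        D[t, t], D[t, t + 1] = -1.0, 1.0
    Q = np.eye(T) + lam * D.T @ D
    return np.linalg.solve(Q, y)

# noisy series with a sharp level shift at index 10
rng = np.random.default_rng(0)
truth = np.concatenate([np.full(10, 1.0), np.full(10, 3.0)])
y = truth + 0.1 * rng.standard_normal(20)
x_naive = smooth_with_shocks(y, lam=50.0)               # oversmooths the jump
x_shock = smooth_with_shocks(y, lam=50.0, shock_years={10})  # preserves it
```

With the shock boundary encoded, the smoother borrows strength within each segment but not across the shock, so the level shift survives heavy smoothing.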

2.
J Biomed Inform ; 137: 104255, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36462600

ABSTRACT

The analysis of registry data has important implications for cancer monitoring, control, and treatment. In such analysis, (semi)parametric models, such as the Cox proportional hazards model, have been routinely adopted. In recent years, deep neural networks (DNNs) have been shown to excel in many fields thanks to their flexibility and superior prediction performance, and they have been applied to the analysis of cancer survival data. Cancer registry data usually have broad spatial and temporal coverage, leading to significant heterogeneity. Published studies have suggested that it is not sensible to fit one model for all spatial and temporal locations combined; on the other hand, it is inefficient to fit one model for each spatial/temporal location separately. Motivated by such considerations, in this study, we develop a spatio-temporally smoothed DNN approach for the analysis of cancer registry data with a (censored) survival outcome. This approach can accommodate the significant differences across time and space, while recognizing that the spatial and temporal changes are smooth, and is realized via modern optimization techniques. To draw more definitive conclusions, we also develop an approach for assessing the importance of each individual input variable. Data on head and neck cancer (HNC) and pancreatic cancer from the Surveillance, Epidemiology, and End Results (SEER) database are analyzed. Compared to direct competitors, the proposed approach leads to network architectures that are smoother and, evaluated using the time-dependent concordance index, to better prediction performance. The important variables it identifies are also biomedically sensible. Overall, this study delivers a new and effective tool for deciphering cancer survival at the population level.


Subject(s)
Neural Networks, Computer , Pancreatic Neoplasms , Humans , Proportional Hazards Models , Registries , Pancreatic Neoplasms/therapy , Databases, Factual
3.
Stat Methods Med Res ; 31(8): 1566-1578, 2022 08.
Article in English | MEDLINE | ID: mdl-35585712

ABSTRACT

Bayesian disease mapping, while undeniably useful for describing variation in risk over time and space, comes with the hurdle of prior elicitation on hard-to-interpret random-effect precision parameters. We introduce a reparametrized version of the popular spatio-temporal interaction models, based on Kronecker-product intrinsic Gaussian Markov random fields, which we name the variance partitioning model. The variance partitioning model includes a mixing parameter that balances the contribution of the main and interaction effects to the total (generalized) variance, enhancing interpretability. The use of a penalized complexity prior on the mixing parameter aids in coding prior information in an intuitive way. We illustrate the advantages of the variance partitioning model in two case studies.
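The role of the mixing parameter can be illustrated numerically. In this sketch the names `phi` and `sigma2_total` are ours, not the paper's notation: the total random-effect variance is split between a main effect and an interaction effect, and simulation confirms the two components add back up to the total.

```python
import numpy as np

def partition_variance(sigma2_total, phi):
    """Split a total random-effect variance into main and interaction
    components via a mixing parameter phi in [0, 1]."""
    return phi * sigma2_total, (1.0 - phi) * sigma2_total

rng = np.random.default_rng(1)
s_main, s_int = partition_variance(4.0, 0.75)  # 3.0 to main, 1.0 to interaction
# sum of independent main and interaction draws
u = rng.normal(0, np.sqrt(s_main), 100_000) + rng.normal(0, np.sqrt(s_int), 100_000)
# the empirical variance of u is close to the total, 4.0
```

Eliciting a prior on a single bounded `phi` (plus one total variance) is the interpretability gain over eliciting several unbounded precisions separately.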


Subject(s)
Models, Statistical , Bayes Theorem
4.
Int J Comput Assist Radiol Surg ; 17(8): 1445-1452, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35362848

ABSTRACT

PURPOSE: Workflow recognition can aid surgeons before an operation when used as a training tool, during an operation by increasing operating room efficiency, and after an operation in the completion of operation notes. Although several methods have been applied to this task, they have been tested on few surgical datasets. Their generalisability is therefore not well established, particularly for surgical approaches utilising smaller working spaces, which are susceptible to occlusion and necessitate frequent withdrawal of the endoscope. This leads to rapidly changing predictions, which reduces the clinical confidence of the methods and hence limits their suitability for clinical translation. METHODS: Firstly, the optimal neural network is found using established methods, with endoscopic pituitary surgery as an exemplar. Then, prediction volatility is formally defined as a new evaluation metric serving as a proxy for uncertainty, and two temporal smoothing functions are created. The first (modal, [Formula: see text]) mode-averages over the previous n predictions, and the second (threshold, [Formula: see text]) ensures a class is only changed after being continuously predicted for n predictions. Both functions are independently applied to the predictions of the optimal network. RESULTS: The methods are evaluated on a 50-video dataset using fivefold cross-validation, with the weighted-[Formula: see text] score as the optimised evaluation metric. The optimal model is ResNet-50+LSTM, achieving 0.84 in 3-phase classification and 0.74 in 7-step classification. Applying threshold smoothing further improves these results, achieving 0.86 in 3-phase classification and 0.75 in 7-step classification, while also drastically reducing the prediction volatility. CONCLUSION: The results confirm that the established methods generalise to endoscopic pituitary surgery, and show that simple temporal smoothing not only reduces prediction volatility but actively improves performance.
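The two smoothing functions described in the abstract are simple to implement. The sketch below is our reading of the stated definitions (the function names and the tie-breaking behaviour inside the mode window are assumptions), not the authors' code:

```python
from collections import Counter

def modal_smooth(preds, n):
    """Modal smoothing: each output is the most frequent class among
    the previous n predictions (including the current one)."""
    out = []
    for i in range(len(preds)):
        window = preds[max(0, i - n + 1):i + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out

def threshold_smooth(preds, n):
    """Threshold smoothing: only switch to a new class after it has
    been predicted n times in a row."""
    out, current, candidate, streak = [], preds[0], preds[0], 0
    for p in preds:
        if p == candidate:
            streak += 1
        else:
            candidate, streak = p, 1
        if candidate != current and streak >= n:
            current = candidate
        out.append(current)
    return out

# a volatile prediction stream with brief flickers around a phase change
preds = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1]
modal = modal_smooth(preds, 3)       # flickers voted away by the mode window
thresh = threshold_smooth(preds, 3)  # phase change delayed until confirmed
```

Both functions suppress the isolated flickers; threshold smoothing additionally delays each class change by up to n frames, trading latency for stability.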


Subject(s)
Endoscopy , Neural Networks, Computer , Humans , Workflow
5.
Neuroimage ; 238: 118235, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34091032

ABSTRACT

Acceleration methods in fMRI aim to reconstruct high fidelity images from under-sampled k-space, allowing fMRI datasets to achieve higher temporal resolution, reduced physiological noise aliasing, and increased statistical degrees of freedom. While low levels of acceleration are typically part of standard fMRI protocols through parallel imaging, there exists the potential for approaches that allow much greater acceleration. One such existing approach is k-t FASTER, which exploits the inherent low-rank nature of fMRI. In this paper, we present a reformulated version of k-t FASTER which includes additional L2 constraints within a low-rank framework. We evaluated the effect of three different constraints against existing low-rank approaches to fMRI reconstruction: Tikhonov constraints, low-resolution priors, and temporal subspace smoothness. The different approaches are separately tested for robustness to under-sampling and thermal noise levels, in both retrospectively- and prospectively-undersampled finger-tapping task fMRI data. Reconstruction quality is evaluated by accurate reconstruction of low-rank subspaces and activation maps. The use of L2 constraints was found to achieve consistently improved results, producing high fidelity reconstructions of statistical parameter maps at higher acceleration factors and lower SNR values than existing methods, at the cost of longer computation time. In particular, the Tikhonov constraint proved very robust across all tested datasets, and the temporal subspace smoothness constraint provided the best reconstruction scores in the prospectively-undersampled dataset. These results demonstrate that regularized low-rank reconstruction of fMRI data can recover functional information at high acceleration factors without the use of any model-based spatial constraints.
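Of the three constraints compared, the Tikhonov (L2) penalty is the easiest to illustrate. The toy problem below is not fMRI reconstruction; the operator `A` is a random matrix standing in for an under-sampled encoding, but it shows the regularized solve at the heart of such approaches:

```python
import numpy as np

def tikhonov_reconstruct(A, y, lam):
    """L2 (Tikhonov) regularized least squares:
    argmin_x ||A x - y||^2 + lam * ||x||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
x_true = rng.standard_normal(50)
A = rng.standard_normal((30, 50))        # under-sampled: 30 measurements, 50 unknowns
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = tikhonov_reconstruct(A, y, lam=0.1)  # regularization picks a stable solution
```

Without the `lam * I` term the normal equations here are singular (more unknowns than measurements); the L2 penalty is what makes the under-determined solve well-posed.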


Subject(s)
Functional Neuroimaging/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Acceleration , Datasets as Topic , Humans , Nonlinear Dynamics , Prospective Studies , Retrospective Studies
6.
Cancers (Basel) ; 11(11)2019 Nov 05.
Article in English | MEDLINE | ID: mdl-31694302

ABSTRACT

In cancer research, population-based survival analysis has played an important role. In this article, we conduct survival analysis on patients with brain tumors using the SEER (Surveillance, Epidemiology, and End Results) database from the NCI (National Cancer Institute). It has been recognized that cancer survival models have spatial and temporal variations caused by multiple factors, but such variations are usually not "abrupt" (that is, they should be smooth). As such, pooling all data spatially and temporally, or analyzing each spatial/temporal point separately, is either inappropriate or ineffective. In this article, we develop and implement a spatial- and temporal-smoothing technique which can effectively accommodate spatial/temporal variations and realize information borrowing across spatial/temporal points. Simulation demonstrates the effectiveness of the proposed approach in improving estimation. Data on a total of 123,571 patients with brain tumors diagnosed between 1911 and 2010 from 16 SEER sites are analyzed. Findings different from separate estimation and simple pooling are made. Overall, this study may provide a practically useful way of modeling the survival of brain tumors (and other cancers) using population data.

7.
Article in English | MEDLINE | ID: mdl-30931146

ABSTRACT

OBJECTIVE: High-frequency band (HFB) activity, measured using implanted sensors over the cortex, is increasingly considered as a feature for the study of brain function and the design of neural implants, such as Brain-Computer Interfaces (BCIs). One common way of extracting these power signals is using a wavelet dictionary, which involves the selection of different temporal sampling and temporal smoothing parameters, such that the resulting HFB signal best represents the temporal features of the neuronal event of interest. Typically, the use of neuro-electrical signals for closed-loop BCI control requires a certain level of signal downsampling and smoothing in order to remove uncorrelated noise, optimize performance, and provide fast feedback. However, a fixed setting of the sampling and smoothing parameters may lead to a suboptimal representation of the underlying neural responses and poor BCI control. This problem can be resolved with a systematic assessment of parameter settings. APPROACH: With classification of HFB power responses as the performance measure, different combinations of temporal sampling and temporal smoothing values were applied to data from sensory and motor tasks recorded with high-density and standard clinical electrocorticography (ECoG) grids in 12 epilepsy patients. MAIN RESULTS: The results suggest that classification of HFB ECoG responses performs best with high sampling rates followed by smoothing. For the paradigms used in this study, optimal temporal sampling ranged from 29 Hz to 50 Hz. Regarding optimal smoothing, values were similar between tasks (0.1-0.9 s), except for executed complex hand gestures, for which two possible optimal smoothing windows were found (0.4-0.6 s and 0.9-2.7 s). SIGNIFICANCE: The range of optimal values indicates that parameter optimization depends on the functional paradigm and may be subject-specific. Our results advocate a methodical assessment of parameter settings for optimal decodability of ECoG signals.
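The two-stage pipeline the abstract describes, downsampling the power envelope and then smoothing it, can be sketched as below. The block-averaging decimation and moving-average window are our simplifications; the study selects these parameters by classification performance, which is not reproduced here.

```python
import numpy as np

def downsample_and_smooth(power, fs, target_fs, smooth_s):
    """Decimate a power envelope to target_fs by block averaging,
    then apply a moving-average window of smooth_s seconds."""
    step = int(round(fs / target_fs))
    ds = power[:len(power) // step * step].reshape(-1, step).mean(axis=1)
    win = max(1, int(round(smooth_s * target_fs)))
    kernel = np.ones(win) / win
    return np.convolve(ds, kernel, mode="same")

rng = np.random.default_rng(3)
fs = 500                                        # assumed raw envelope rate (Hz)
power = 1.0 + 0.5 * rng.standard_normal(5000)   # 10 s of noisy power envelope
# 50 Hz sampling with a 0.5 s window, within the reported optimal ranges
hfb = downsample_and_smooth(power, fs, target_fs=50, smooth_s=0.5)
```

A grid search over `target_fs` and `smooth_s`, scored by downstream classification accuracy, would mirror the systematic assessment the authors advocate.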

8.
Spat Spatiotemporal Epidemiol ; 11: 89-107, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25457599

ABSTRACT

Birth history data-the primary source of data on under-5 mortality in developing countries-are infrequently used for subnational estimates due to concerns over small sample sizes. In this study we consider different methods for analyzing birth history data in combination with various small area models. We construct a simulation environment to assess the performance of different combinations of birth history methods and small area models in terms of bias, efficiency, and coverage. We find that performance is highly dependent on the birth history method applied and how temporal trends are accounted for. We estimated trends in district-level under-5 mortality in Zambia from 1980 to 2010 using the best-performing model. We find that under-5 mortality is highly variable within Zambia: there was a 1.8-fold difference between the lowest and highest levels in 2010, and declines over the period 1980 to 2010 ranged from less than 5% to more than 50%.


Subject(s)
Child Mortality/trends , Health Surveys/methods , Spatial Analysis , Child, Preschool , Developing Countries/statistics & numerical data , Female , Health Surveys/statistics & numerical data , Humans , Infant , Infant, Newborn , Male , Risk Factors , Zambia/epidemiology
9.
Front Comput Neurosci ; 7: 185, 2013.
Article in English | MEDLINE | ID: mdl-24391580

ABSTRACT

A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives, and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground-truth data, showing that it performs at least as well as traditional model selection criteria [the Bayesian information criterion, BIC (Schwarz, 1978), and the Akaike information criterion, AIC (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources best accounts for the data.
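For comparison, the traditional criteria the authors benchmark against are straightforward to compute. Below is a minimal BIC example on synthetic data, with a Gaussian likelihood and polynomial models; the parameter count `k = degree + 2` (coefficients plus intercept plus noise variance) is our bookkeeping choice for the sketch.

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k*ln(n) - 2*ln(L). Lower is better."""
    return k * np.log(n) - 2.0 * log_likelihood

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood given model residuals."""
    n = len(residuals)
    s2 = residuals.var()  # MLE of the noise variance
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
y = 2.0 * t + 0.1 * rng.standard_normal(200)   # truly linear data

scores = {}
for degree in (1, 5):
    coef = np.polyfit(t, y, degree)
    resid = y - np.polyval(coef, t)
    scores[degree] = bic(gaussian_loglik(resid), k=degree + 2, n=200)
# BIC penalizes the extra parameters of the degree-5 fit, preferring degree 1
```

The paper's Laplace-approximation criterion generalizes this trade-off between fit and complexity to blind source separation models, where counting effective parameters is less clear-cut.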
