1.
J Pathol Inform ; 13: 100110, 2022.
Article in English | MEDLINE | ID: mdl-36268074

ABSTRACT

Improvements to patient care through the development of automated image analysis in pathology are restricted by the small image patch size that convolutional neural networks (CNNs) can process relative to the size of the whole-slide image (WSI). Tile-by-tile processing across the entire WSI is slow and inefficient. While this may improve with future computing power, the technique remains vulnerable to noise from uninformative image areas. We propose a novel attention-inspired algorithm that selects image patches from informative parts of the WSI, first using a sparse randomised grid pattern, then iteratively re-sampling at higher density in regions where a CNN classifies patches as tumour. Subsequent uniform sampling across the enclosing region of interest (ROI) is used to mitigate sampling bias. Benchmarking tests informed the adoption of VGG19 as the main CNN architecture, with 79% classification accuracy. A further CNN was trained to separate false-positive normal epithelium from tumour epithelium, in a novel adaptation of a two-stage model used in brain imaging. These subsystems were combined in a processing pipeline to generate spatial distributions of classified patches from unseen WSIs. The ROI was predicted with a mean F1 (Dice) score of 86.6% over 100 evaluation WSIs. Several algorithms for evaluating tumour-stroma ratio (TSR) within the ROI were compared, with the best giving a root mean square (RMS) error of 11.3% relative to pathologists' annotations, against 13.5% for an equivalent tile-by-tile pipeline. Our pipeline processed WSIs between 3.3x and 6.3x faster than tile-by-tile processing. We propose our attention-based sampling pipeline as a useful tool for pathology researchers, with the further potential for incorporating additional diagnostic calculations.
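As an informal illustration of the coarse-to-fine sampling loop described above (not the authors' implementation), the sketch below assumes a hypothetical classify_patch function standing in for the trained VGG19 classifier and a NumPy array wsi holding the slide image. It shows only the attention step: a sparse jittered grid, followed by denser re-sampling around patches classified as tumour.

```python
import numpy as np

def attention_sample(wsi, classify_patch, patch=256, coarse_step=2048, fine_step=512,
                     n_rounds=3, rng=np.random.default_rng(0)):
    """Iteratively concentrate patch sampling where the classifier reports tumour.

    `classify_patch(img) -> p_tumour` is a hypothetical stand-in for the CNN.
    """
    h, w = wsi.shape[:2]
    # Round 0: sparse, randomly jittered grid across the whole slide.
    coords = [(y + rng.integers(0, patch), x + rng.integers(0, patch))
              for y in range(0, h - patch, coarse_step)
              for x in range(0, w - patch, coarse_step)]
    tumour_hits = []
    step = coarse_step
    for _ in range(n_rounds):
        scored = [(y, x, classify_patch(wsi[y:y + patch, x:x + patch]))
                  for y, x in coords]
        tumour_hits += [(y, x) for y, x, p in scored if p > 0.5]
        # Next round: re-sample at higher density around tumour-positive patches.
        step = max(step // 2, fine_step)
        coords = [(min(max(y + dy, 0), h - patch), min(max(x + dx, 0), w - patch))
                  for y, x in tumour_hits
                  for dy in (-step, 0, step) for dx in (-step, 0, step)]
    return tumour_hits  # duplicates are not removed, for brevity
```

The published pipeline additionally applies a second-stage CNN to reject false-positive normal epithelium and re-samples uniformly over the enclosing ROI; those stages are omitted here.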

2.
Cancers (Basel) ; 14(20)2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36291807

ABSTRACT

Oesophago-gastric cancer is difficult to diagnose in the early stages given its typically non-specific initial manifestation. We hypothesise that machine learning can improve upon the diagnostic performance of current primary care risk-assessment tools by using advanced analytical techniques to exploit the wealth of evidence available in the electronic health record. We used a primary care electronic health record dataset derived from the UK General Practice Research Database (7471 cases; 32,877 controls) and developed five probabilistic machine learning classifiers: Support Vector Machine, Random Forest, Logistic Regression, Naïve Bayes, and Extreme Gradient Boosted Decision Trees. Features included basic demographics, symptoms, and laboratory test results. The Logistic Regression, Support Vector Machine, and Extreme Gradient Boosted Decision Tree models achieved the highest performance in terms of accuracy and AUROC (0.89 accuracy, 0.87 AUROC), outperforming a current UK oesophago-gastric cancer risk-assessment tool (ogRAT). Machine learning also identified more cancer patients than the ogRAT: 11.0% more with little to no effect on false positives, or up to 25.0% more with a slight increase in false positives (results for Logistic Regression; figures depend on the chosen decision threshold). Feature contribution estimates and individual prediction explanations indicated clinical relevance. We conclude that machine learning could improve primary care cancer risk-assessment tools, potentially helping clinicians to identify additional cancer cases earlier. This could, in turn, improve survival outcomes.
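The comparison of probabilistic classifiers could be reproduced in outline with scikit-learn, as in the hedged sketch below. The feature matrix is synthetic stand-in data; the real study used demographics, symptoms, and laboratory results from the UK General Practice Research Database, and its exact preprocessing, hyperparameters, and threshold analysis are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Stand-in for features extracted from primary care records, with a rare positive class.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True),                  # probability=True yields risk scores
    "random_forest": RandomForestClassifier(n_estimators=300),
    "naive_bayes": GaussianNB(),
    # xgboost.XGBClassifier could be compared in the same way
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auroc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC {auroc.mean():.2f} +/- {auroc.std():.2f}")
```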

3.
Front Neuroinform ; 16: 883796, 2022.
Article in English | MEDLINE | ID: mdl-35935536

ABSTRACT

Population density techniques can be used to simulate the behavior of a population of neurons which adhere to a common underlying neuron model. They have previously been used for analyzing models of orientation tuning and decision making tasks. They produce a fully deterministic solution to neural simulations which often involve a non-deterministic or noise component. Until now, numerical population density techniques have been limited to one- and two-dimensional models. For the first time, we demonstrate a method to take an N-dimensional underlying neuron model and simulate the behavior of a population. The technique enables so-called graceful degradation of the dynamics, allowing a balance between accuracy and simulation speed while maintaining important behavioral features such as rate curves and bifurcations. It is an extension of the numerical population density technique implemented in the MIIND software framework that simulates networks of populations of neurons. Here, we describe the extension to N dimensions, simulate populations of leaky integrate-and-fire neurons with excitatory and inhibitory synaptic conductances, and demonstrate the effect of degrading the accuracy on the solution. We also simulate two separate populations in an E-I configuration to demonstrate the technique's ability to capture complex behaviors of interacting populations. Finally, we simulate a population of four-dimensional Hodgkin-Huxley neurons under the influence of noise. Though the MIIND software has been used only for neural modeling up to this point, the technique can be used to simulate the behavior of a population of agents adhering to any system of ordinary differential equations under the influence of shot noise. MIIND has been modified to render a visualization of any three dimensions of an N-dimensional population state space, which encourages fast model prototyping and debugging and could prove a useful educational tool for understanding dynamical systems.
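To make the "N-dimensional underlying neuron model driven by shot noise" concrete, here is a plain Monte Carlo sketch of a three-dimensional conductance-based leaky integrate-and-fire neuron (membrane potential plus excitatory and inhibitory conductances) receiving Poisson synaptic jumps. This is not MIIND code and the parameters are illustrative assumptions; it only shows the kind of model whose population behaviour the density technique computes deterministically.

```python
import numpy as np

def simulate_lif_cond(t_end=1.0, dt=1e-4, rate_e=800.0, rate_i=200.0,
                      w_e=0.5, w_i=1.0, seed=0):
    """3-D state (v, g_e, g_i): deterministic flow plus Poisson conductance jumps."""
    rng = np.random.default_rng(seed)
    tau_m, tau_e, tau_i = 20e-3, 5e-3, 10e-3
    v_rest, v_th, v_reset, E_e, E_i = -70.0, -50.0, -65.0, 0.0, -80.0
    v, g_e, g_i = v_rest, 0.0, 0.0
    spikes = []
    for step in range(int(t_end / dt)):
        # Deterministic part of the underlying neuron model.
        dv = (-(v - v_rest) - g_e * (v - E_e) - g_i * (v - E_i)) / tau_m
        v += dt * dv
        g_e += dt * (-g_e / tau_e)
        g_i += dt * (-g_i / tau_i)
        # Shot noise: Poisson synaptic arrivals cause discrete conductance jumps.
        g_e += w_e * rng.poisson(rate_e * dt)
        g_i += w_i * rng.poisson(rate_i * dt)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return np.array(spikes)
```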

4.
Front Neurorobot ; 16: 856797, 2022.
Article in English | MEDLINE | ID: mdl-35903555

ABSTRACT

Although we can measure muscle activity and analyze activation patterns, we understand little about how individual muscles affect the joint torque generated. Muscles are known to be controlled by circuits in the spinal cord, a system much less well understood than the cortex. Knowing the contribution of the muscles toward a joint torque would improve our understanding of human limb control. We present a novel framework to examine the control of biomechanics using physics simulations informed by electromyography (EMG) data. These signals drive a virtual musculoskeletal model in the Neurorobotics Platform (NRP), which we then use to evaluate the resulting joint torques. We use our framework to analyze raw EMG data collected during an isometric knee extension study to identify synergies that drive a musculoskeletal lower limb model. The resulting knee torques are used as a reference for genetic algorithms (GA) to generate new simulated activation patterns. On the platform, the GA finds solutions that generate torques matching those observed. Possible solutions include synergies that are similar to those extracted from the human study. In addition, the GA finds activation patterns that are different from the biological ones while still producing the same knee torque. The NRP forms a highly modular integrated simulation platform allowing these in silico experiments. We argue that our framework allows for research into the neurobiomechanical control of muscles during tasks, which would otherwise not be possible.
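A toy version of the genetic-algorithm search for activation patterns might look like the following, where torque_model is a hypothetical callable standing in for the Neurorobotics Platform musculoskeletal simulation, and the GA operators (truncation selection, averaging crossover, Gaussian mutation) are illustrative choices rather than those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def torque_error(activations, torque_model, target_torque):
    """Fitness: how closely the simulated knee torque matches the reference."""
    return np.abs(torque_model(activations) - target_torque)

def evolve_activations(torque_model, target_torque, n_muscles=8,
                       pop_size=60, n_gen=200, mut_sigma=0.05):
    pop = rng.random((pop_size, n_muscles))               # activations in [0, 1]
    for _ in range(n_gen):
        fitness = np.array([torque_error(ind, torque_model, target_torque) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        # Crossover: average random pairs of parents, then mutate.
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[pairs].mean(axis=1)
        pop = np.clip(children + rng.normal(0, mut_sigma, children.shape), 0, 1)
    return min(pop, key=lambda ind: torque_error(ind, torque_model, target_torque))
```

Because the fitness only constrains the resulting torque, several different activation vectors can score equally well, which is the degeneracy the abstract describes.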

5.
Adv Exp Med Biol ; 1359: 159-178, 2022.
Article in English | MEDLINE | ID: mdl-35471539

ABSTRACT

The problem of how to create efficient multi-scale models of large networks of neurons is a pressing one. It requires balancing computational efficiency, and a reduction in the number of parameters involved, against biological realism. Simulations of point-model neurons show very realistic features of neural dynamics but are very hard to configure and to analyse. Instead of using hundreds or thousands of point-model neurons, a population can often be modeled by a single density function in a way that accurately reproduces quantities aggregated over the population, such as population firing rate or average membrane potential. These techniques have been widely applied in neuroscience, mainly to populations composed of one-dimensional point-model neurons, such as leaky-integrate-and-fire neurons. Here, we present very general density methods that can be applied to point-model neurons of higher dimensionality, which can represent biological features not present in simpler models, such as adaptation and bursting. The methods are geometrical in nature and lend themselves to immediate visualisation of the population state. By decoupling the neural dynamics from the stochastic processes that model inter-neuron communication, an efficient GPGPU implementation is possible that makes the study of such high-dimensional models feasible. This decoupling also allows the study of different noise models, such as Poisson, white noise, and gamma-distributed interspike intervals, which broadens the application domain considerably compared to the Fokker-Planck equations that have traditionally dominated this approach. We will present several examples based on high-dimensional neural models. We will use dynamical systems that represent point-model neurons, but inherently there is nothing to restrict the approach presented here to neuroscience. MIIND is an open-source simulator that contains an implementation of these techniques.
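As a small illustration of the noise models mentioned above, the sketch below generates input spike trains with either Poisson (exponential interspike interval) or gamma-distributed interspike intervals, with the gamma scale chosen to preserve the mean rate. It is not part of MIIND; it only shows the stochastic side that the density methods treat separately from the deterministic dynamics.

```python
import numpy as np

def spike_train(rate, t_end, isi="poisson", shape=3.0, seed=0):
    """Generate input spike times with Poisson or gamma-distributed ISIs."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while t < t_end:
        if isi == "poisson":
            t += rng.exponential(1.0 / rate)          # exponential ISI
        else:
            t += rng.gamma(shape, 1.0 / (rate * shape))  # gamma ISI, same mean rate
        times.append(t)
    return np.array(times[:-1])

poisson_input = spike_train(500.0, 2.0, isi="poisson")
gamma_input = spike_train(500.0, 2.0, isi="gamma")
print(len(poisson_input), len(gamma_input))   # similar counts, different regularity
```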


Subject(s)
Models, Neurological; Neurosciences; Neurons/physiology; Neurosciences/methods; Noise; Population Density
6.
J Neurophysiol ; 127(1): 173-187, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34879209

ABSTRACT

The influence of proprioceptive feedback on muscle activity during isometric tasks is the subject of conflicting studies. We performed an isometric knee extension task experiment based on two common clinical tests for mobility and flexibility. The task was carried out at four preset angles of the knee, and we recorded from five muscles for two different hip positions. We applied muscle synergy analysis using nonnegative matrix factorization on surface electromyograph recordings to identify patterns in the data that changed with internal knee angle, suggesting a link between proprioception and muscle activity. We hypothesized that such patterns arise from the way proprioceptive and cortical signals are integrated in neural circuits of the spinal cord. Using the MIIND neural simulation platform, we developed a computational model based on current understanding of spinal circuits with an adjustable afferent input. The model produces the same synergy trends as observed in the data, driven by changes in the afferent input. To match the activation patterns from each knee angle and position of the experiment, the model predicts the need for three distinct inputs: two to control a nonlinear bias toward the extensors and against the flexors, and a further input to control additional inhibition of rectus femoris. The results show that proprioception may be involved in modulating muscle synergies encoded in cortical or spinal neural circuits.

NEW & NOTEWORTHY
The role of sensory feedback in motor control when limbs are held in a fixed position is disputed. We performed a novel experiment involving fixed position tasks based on two common clinical tests. We identified patterns of muscle activity during the tasks that changed with different leg positions and then inferred how sensory feedback might influence the observations. We developed a computational model that required three distinct inputs to reproduce the activity patterns observed experimentally. The model provides a neural explanation for how the activity patterns can be changed by sensory feedback.
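The synergy extraction step can be sketched with scikit-learn's NMF, as below. The EMG envelope matrix here is random stand-in data, and the choice of two synergies and the VAF check are illustrative assumptions, not the exact analysis pipeline of the study.

```python
import numpy as np
from sklearn.decomposition import NMF

# emg_env: non-negative EMG envelopes, shape (n_samples, n_muscles) -- here 5 muscles,
# standing in for rectified, low-pass-filtered recordings from one knee-extension trial.
emg_env = np.abs(np.random.default_rng(0).normal(size=(5000, 5)))

nmf = NMF(n_components=2, init="nndsvd", max_iter=1000)
activations = nmf.fit_transform(emg_env)   # synergy activation coefficients over time
synergies = nmf.components_                # (n_synergies, n_muscles) weight vectors

# Variance accounted for (VAF), a common check that two synergies suffice.
recon = activations @ synergies
vaf = 1.0 - np.sum((emg_env - recon) ** 2) / np.sum(emg_env ** 2)
print(f"VAF with 2 synergies: {vaf:.3f}")
```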


Subject(s)
Feedback, Sensory/physiology; Motor Activity/physiology; Muscle, Skeletal/physiology; Nerve Net/physiology; Proprioception/physiology; Spinal Cord/physiology; Adolescent; Adult; Electromyography; Female; Humans; Knee/physiology; Male; Models, Biological; Young Adult
7.
Front Neuroinform ; 15: 614881, 2021.
Article in English | MEDLINE | ID: mdl-34295233

ABSTRACT

MIIND is a software platform for easily and efficiently simulating the behaviour of interacting populations of point neurons governed by any 1D or 2D dynamical system. The simulator is entirely agnostic to the underlying neuron model of each population and provides an intuitive method for controlling the amount of noise, which can significantly affect the overall behaviour. A network of populations can be set up quickly and easily using MIIND's XML-style simulation file format, which describes simulation parameters such as how populations interact, transmission delays, post-synaptic potentials, and what output to record. During simulation, a visual display of each population's state is provided for immediate feedback on the behaviour, and population activity can be output to a file or passed to a Python script for further processing. The Python support also means that MIIND can be integrated into other software such as The Virtual Brain. MIIND's population density technique is a geometric and visual method for describing the activity of each neuron population which encourages a deep consideration of the dynamics of the neuron model and provides insight into how the behaviour of each population is affected by the behaviour of its neighbours in the network. For 1D neuron models, MIIND performs far better than direct simulation solutions for large populations. For 2D models, the performance comparison is more nuanced, but the population density approach still confers certain advantages over direct simulation. MIIND can be used to build neural systems that bridge the scales between an individual neuron model and a population network. This allows researchers to maintain a plausible path back from mesoscopic to microscopic scales while minimising the complexity of managing large numbers of interconnected neurons. In this paper, we introduce the MIIND system and its usage, and provide implementation details where appropriate.

8.
Int J Epidemiol ; 49(6): 2074-2082, 2021 01 23.
Article in English | MEDLINE | ID: mdl-32380551

ABSTRACT

Prediction and causal explanation are fundamentally distinct tasks of data analysis. In health applications, this difference can be understood as the difference between prognosis (prediction) and prevention or treatment (causal explanation). Nevertheless, these two concepts are often conflated in practice. We use the framework of generalized linear models (GLMs) to illustrate that predictive and causal queries require distinct processes for their application and for the subsequent interpretation of results. In particular, we identify five primary ways in which GLMs for prediction differ from GLMs for causal inference: (i) the covariates that should be considered for inclusion in (and possibly exclusion from) the model; (ii) how a suitable set of covariates to include in the model is determined; (iii) which covariates are ultimately selected and what functional form (i.e. parameterization) they take; (iv) how the model is evaluated; and (v) how the model is interpreted. We outline some of the potential consequences of failing to acknowledge and respect these differences, and additionally consider the implications for machine learning (ML) methods. We then conclude with three recommendations that we hope will help ensure that both prediction and causal modelling are used appropriately and to greatest effect in health research.
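A minimal worked example of the distinction, assuming simulated data and hypothetical variable names: the same logistic GLM is fitted twice, once to answer a causal question (adjusting for a confounder and interpreting a single coefficient) and once to answer a predictive question (including a downstream disease marker that improves prediction but would bias the causal estimate).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)
exposure = (confounder + rng.normal(size=n) > 0).astype(float)
logit = -1.0 + 1.2 * exposure + 0.8 * confounder
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
marker = outcome + rng.normal(scale=0.5, size=n)   # downstream marker: predictive, not a confounder

df = pd.DataFrame(dict(outcome=outcome, exposure=exposure,
                       confounder=confounder, marker=marker))

# Causal question: estimate the effect of `exposure`, so adjust for the
# confounder and interpret only the exposure coefficient.
causal = sm.Logit(df["outcome"], sm.add_constant(df[["exposure", "confounder"]])).fit(disp=0)
print("adjusted exposure log-odds:", round(causal.params["exposure"], 2))

# Predictive question: include whatever improves out-of-sample accuracy,
# e.g. the downstream marker; coefficients are no longer causal effects.
pred = sm.Logit(df["outcome"], sm.add_constant(df[["exposure", "confounder", "marker"]])).fit(disp=0)
print("example predicted risks:", pred.predict()[:3])
```

Including the marker is perfectly sensible for prognosis, but its presence changes the meaning of the exposure coefficient, which is exactly the kind of conflation the paper warns against.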


Subject(s)
Machine Learning; Causality; Humans; Linear Models; Prognosis
9.
Lancet Digit Health ; 2(12): e677-e680, 2020 12.
Article in English | MEDLINE | ID: mdl-33328030

ABSTRACT

Machine learning methods, combined with large electronic health databases, could enable a personalised approach to medicine through improved diagnosis and prediction of individual responses to therapies. If successful, this strategy would represent a revolution in clinical research and practice. However, although the vision of individually tailored medicine is alluring, there is a need to distinguish genuine potential from hype. We argue that the goal of personalised medical care faces serious challenges, many of which cannot be addressed through algorithmic complexity, and call for collaboration between traditional methodologists and experts in medical machine learning to avoid extensive research waste.


Subject(s)
Delivery of Health Care/methods; Machine Learning; Precision Medicine/methods; Humans
10.
Neuron ; 102(4): 735-744, 2019 05 22.
Article in English | MEDLINE | ID: mdl-31121126

ABSTRACT

A key element of the European Union's Human Brain Project (HBP) and other large-scale brain research projects is the simulation of large-scale model networks of neurons. Here, we argue why such simulations will likely be indispensable for bridging the scales between the neuron and system levels in the brain, and why a set of brain simulators based on neuron models at different levels of biological detail should therefore be developed. To allow for systematic refinement of candidate network models by comparison with experiments, the simulations should be multimodal in the sense that they should predict not only action potentials, but also electric, magnetic, and optical signals measured at the population and system levels.


Subject(s)
Brain/physiology; Computer Simulation; Models, Neurological; Neurons/physiology; Humans; Neural Networks, Computer; Neurosciences
11.
PLoS Comput Biol ; 15(3): e1006729, 2019 03.
Article in English | MEDLINE | ID: mdl-30830903

ABSTRACT

The importance of a mesoscopic description level of the brain has now been well established. Rate-based models are widely used, but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond the characterization of a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs from these in that we do not make the diffusion approximation, nor do we reduce the state space to a single dimension (1D). We do not hard-code the neural model, but read in a grid describing its state space in the relevant simulation region. Novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and the stochastic process can be investigated independently. Currently, there is a trend to reduce complex high-dimensional neuron models to 2D ones, as they offer a rich dynamical repertoire that is not available in 1D, such as limit cycles. We will demonstrate that our method is ideally suited to investigating noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we will argue that the study of 2D systems subject to noise is an important complement to the study of 1D systems.
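A direct Monte Carlo reference for the kind of two-dimensional model and noise regimes discussed above might look like the sketch below: a population of simple (v, w) adaptive neurons driven either by finite Poisson jumps or by a diffusion (Gaussian) approximation with matched mean and variance. The neuron model and parameters are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def adaptive_population_rate(n=2000, t_end=0.5, dt=1e-4, rate=500.0, h=1.0,
                             diffusion=False, seed=0):
    """Population firing rate of a 2-D (v, w) model under jump or diffusion input."""
    rng = np.random.default_rng(seed)
    tau_v, tau_w, a, b, v_th, v_reset = 10e-3, 100e-3, 0.5, 2.0, 10.0, 0.0
    v = np.zeros(n)
    w = np.zeros(n)
    spike_count = 0
    for _ in range(int(t_end / dt)):
        v += dt * (-(v + w) / tau_v)                 # deterministic 2-D flow
        w += dt * ((a * v - w) / tau_w)
        if diffusion:
            mu, var = rate * h * dt, rate * h**2 * dt
            v += mu + np.sqrt(var) * rng.normal(size=n)   # diffusion approximation
        else:
            v += h * rng.poisson(rate * dt, size=n)       # finite synaptic jumps
        fired = v >= v_th
        spike_count += fired.sum()
        v[fired] = v_reset
        w[fired] += b                                 # spike-triggered adaptation
    return spike_count / (n * t_end)                  # mean rate in Hz

print(adaptive_population_rate(diffusion=False), adaptive_population_rate(diffusion=True))
```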


Subject(s)
Computer Simulation; Models, Neurological; Neurons/cytology; Action Potentials; Neurons/physiology; Stochastic Processes; Synapses/physiology
12.
Phys Rev E ; 95(6-1): 062125, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28709222

ABSTRACT

We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, cleanly separating the deterministic and stochastic processes. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory that describes a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses accurately for both models, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
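As a point of comparison only (a spiking Monte Carlo check, not the population density method itself), the sketch below drives a leaky integrate-and-fire neuron with a renewal input process whose interspike intervals come from a user-supplied sampler, so Poisson and gamma-distributed input can be swapped in directly. Parameter values are assumptions.

```python
import numpy as np

def lif_with_renewal_input(isi_sampler, w=0.05, t_end=10.0, dt=1e-4,
                           tau=20e-3, v_th=1.0, v_reset=0.0):
    """LIF neuron receiving finite jumps `w` at renewal-process arrival times."""
    v, t, next_in = 0.0, 0.0, isi_sampler()
    out_spikes = []
    while t < t_end:
        v += dt * (-v / tau)                 # deterministic leak
        t += dt
        while next_in <= t:                  # deliver all input jumps due by now
            v += w
            next_in += isi_sampler()
        if v >= v_th:
            out_spikes.append(t)
            v = v_reset
    return np.array(out_spikes)

rng = np.random.default_rng(0)
rate, shape = 800.0, 3.0
poisson_rate = lif_with_renewal_input(lambda: rng.exponential(1 / rate)).size / 10.0
gamma_rate = lif_with_renewal_input(lambda: rng.gamma(shape, 1 / (rate * shape))).size / 10.0
print(poisson_rate, gamma_rate)   # same mean input rate, different output rates
```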

13.
Cogn Neurodyn ; 9(4): 359-70, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26157510

ABSTRACT

In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. One type uses labels, such as phase synchrony of activation; the other ('connectivity-based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity-based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity-based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We will show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity-based models. Feldman's claim that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in the wrong frame of reference: an outside frame of reference instead of the required inside one. Connectivity-based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We illustrate binding with this type of model using episode binding and the binding of words, including novel words, in sentence structures.

14.
Front Neurosci ; 8: 30, 2014.
Article in English | MEDLINE | ID: mdl-24600342

ABSTRACT

Computational models of learning have proved largely successful in characterizing potential mechanisms which allow humans to make decisions in uncertain and volatile contexts. We report here findings that extend existing knowledge and show that a modified reinforcement learning model, which has separate parameters according to whether the previous trial gave a reward or a punishment, can provide the best fit to human behavior in decision making under uncertainty. More specifically, we examined the fit of our modified reinforcement learning model to human behavioral data in a probabilistic two-alternative decision making task with rule reversals. Our results demonstrate that this model predicted human behavior better than a series of other models based on reinforcement learning or Bayesian reasoning. Unlike the Bayesian models, our modified reinforcement learning model does not include any representation of rule switches. When our task is considered purely as a machine learning task, to gain as many rewards as possible without trying to describe human behavior, the performance of modified reinforcement learning and Bayesian methods is similar. Others have used various computational models to describe human behavior in similar tasks; however, we are not aware of any that have compared Bayesian reasoning with reinforcement learning modified to differentiate rewards and punishments.
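The outcome-dependent learning rule can be illustrated with a small Q-learning agent on a two-alternative probabilistic task with reversals, as below; the parameter values and softmax choice rule are assumptions for illustration and may differ from the paper's exact model.

```python
import numpy as np

def simulate_modified_rl(alpha_reward=0.4, alpha_punish=0.1, beta=5.0,
                         p_correct=0.8, n_trials=400, reverse_every=100, seed=0):
    """Two-armed probabilistic task with rule reversals, learned by a Q-learning
    agent whose learning rate depends on whether the trial was rewarded or punished."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    good = 0
    choices, outcomes = [], []
    for trial in range(n_trials):
        if trial > 0 and trial % reverse_every == 0:
            good = 1 - good                                    # rule reversal
        p_choose = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice
        choice = rng.choice(2, p=p_choose)
        p_win = p_correct if choice == good else 1 - p_correct
        reward = 1.0 if rng.random() < p_win else -1.0
        alpha = alpha_reward if reward > 0 else alpha_punish   # outcome-dependent rate
        q[choice] += alpha * (reward - q[choice])
        choices.append(choice)
        outcomes.append(reward)
    return np.array(choices), np.array(outcomes)

choices, outcomes = simulate_modified_rl()
print("overall reward rate:", (outcomes > 0).mean())
```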

15.
Article in English | MEDLINE | ID: mdl-23162457

ABSTRACT

Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions, including sensory encoding and cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse-grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure.
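For orientation, the sketch below implements the standard pair-based STDP kernel, not the specific previously reported rule or the activity-dependent excitability used in the study; it only shows how a pre/post spike-time difference maps to a weight change.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP weight change for dt = t_post - t_pre.

    Potentiation when the presynaptic spike precedes the postsynaptic one
    (dt > 0), depression otherwise; amplitudes and time constants are illustrative.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# Accumulate the rule over all spike pairs of one synapse.
pre_spikes = np.array([0.010, 0.050, 0.090])
post_spikes = np.array([0.012, 0.055, 0.085])
dw = sum(stdp_dw(t_post - t_pre) for t_pre in pre_spikes for t_post in post_spikes)
print("net weight change:", float(dw))
```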

16.
Neural Netw ; 21(8): 1164-81, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18783918

ABSTRACT

MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson-Cowan and Ornstein-Uhlenbeck types, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form where more biological detail is included. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
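One of the network process types listed above, the Wilson-Cowan rate model, can be written down in a few lines of plain NumPy. The sketch below is an illustration of that dynamics, not MIIND code, with coupling and sigmoid parameters near the classic oscillatory regime; all values are assumptions.

```python
import numpy as np

def wilson_cowan(t_end=0.5, dt=1e-4, p_e=1.25, p_i=0.0):
    """Excitatory-inhibitory Wilson-Cowan rate equations (illustrative parameters)."""
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 10e-3, 10e-3
    s_e = lambda x: 1.0 / (1.0 + np.exp(-1.3 * (x - 4.0)))   # excitatory gain
    s_i = lambda x: 1.0 / (1.0 + np.exp(-2.0 * (x - 3.7)))   # inhibitory gain
    e, i = 0.1, 0.05
    trace = []
    for _ in range(int(t_end / dt)):
        de = (-e + s_e(w_ee * e - w_ei * i + p_e)) / tau_e
        di = (-i + s_i(w_ie * e - w_ii * i + p_i)) / tau_i
        e += dt * de
        i += dt * di
        trace.append((e, i))
    return np.array(trace)

activity = wilson_cowan()
print(activity[-5:])   # final excitatory/inhibitory rates
```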


Subject(s)
Database Management Systems; Models, Neurological; Neurons/physiology; Neurosciences/instrumentation; Nonlinear Dynamics; Animals; Computer Simulation; Humans; Neurosciences/methods
18.
Behav Brain Sci ; 29(1): 37-70; discussion 70-108, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16542539

ABSTRACT

Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural "blackboard" architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception. Perspectives and potential developments of the architectures are discussed.


Subject(s)
Cognition/physiology; Language; Models, Neurological; Neural Pathways/physiology; Semantics; Humans
19.
J Neural Eng ; 3(1): R1-12, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16510935

ABSTRACT

In this paper, we will first introduce the notions of systematicity and combinatorial productivity and we will argue that these notions are essential for human cognition and probably for every agent that needs to be able to deal with novel, unexpected situations in a complex environment. Agents that use compositional representations are faced with the so-called binding problem and the question of how to create neural network architectures that can deal with it is essential for understanding higher level cognition. Moreover, an architecture that can solve this problem is likely to scale better with problem size than other neural network architectures. Then, we will discuss object-based attention. The influence of spatial attention is well known, but there is solid evidence for object-based attention as well. We will discuss experiments that demonstrate object-based attention and will discuss a model that can explain the data of these experiments very well. The model strongly suggests that this mode of attention provides a neural basis for parallel search. Next, we will show a model for binding in visual cortex. This model is based on a so-called neural blackboard architecture, where higher cortical areas act as processors, specialized for specific features of a visual stimulus, and lower visual areas act as a blackboard for communication between these processors. This implies that lower visual areas are involved in more than bottom-up visual processing, something which already was apparent from the large number of recurrent connections from higher to lower visual areas. This model identifies a specific role for these feedback connections. Finally, we will discuss the experimental evidence that exists for this architecture.


Subject(s)
Attention/physiology; Fixation, Ocular/physiology; Models, Neurological; Nerve Net/physiology; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Animals; Humans