Welcome to a Scintilla of Playful Musings

Welcome to my new blog, noos anakainisis, which translates literally as "mind renewal." Its primary obsessions are neuroscience, computation, information, structure, form, art, and the history of science. Some environmental, political, and technological developments will also be included.

I hope your neurons are sufficiently stimulated...

Saturday, October 2, 2010

Computational and dynamic models in neuroimaging


Much of our understanding of the brain is modular. Investigation has necessarily focused on its individual parts at different levels of analysis (e.g. individual neurons and brain areas), partly because understanding the parts is a prerequisite to understanding the whole, and partly because of historical limitations inherent in our tools of investigation. But recent years have seen a rise in approaches designed to gain a more integrative understanding of the brain as interacting networks of neurons, areas, and systems.

Functional neuroimaging has provided big pictures of activity throughout the human brain. This permits direct comparisons of patterns of activation across many brain areas simultaneously and, by examining coherent fluctuations in blood flow, identifies putative large-scale, brain-wide networks. There has also been the rise of large-scale multiple-electrode neurophysiology: the implantation of up to 100 or more electrodes, often in multiple brain structures. This allows comparisons of neuron populations in different brain areas that are not confounded by extraneous factors (differences in level of experience, ongoing behavior, etc.), as well as measurements of the relative timing of activity between neurons that give insight into network properties.

This growth in integrative approaches is both technically and conceptually driven. The statistical and computational expertise required to design and analyze neuroimaging experiments means that most practitioners in functional magnetic resonance imaging (fMRI) and electrophysiology (single unit, EEG, or MEG) could call themselves computational neuroscientists. I will briefly review two aspects of this trend: models of brain function (that try to account for perception, action, and cognition) and biophysical models of neuronal dynamics.

Computational Models of Brain Function Implied by fMRI/EEG/MEG

Techniques adopted from computational neuroscience, machine learning, and optimal decision and game theory provide a mechanistic formulation and also allow one to make quantitative predictions that can be operationalized in terms of explanatory variables (such as regressors in an fMRI design matrix). Current trends in fMRI/EEG/MEG studies include: autonomous brain dynamics as measured with resting state fMRI; neuroeconomics and game theory; and optimal control theory and information theory, used to ask how the brain makes optimal decisions and actions under uncertainty. For perception, top-down and bottom-up effects are increasingly described in terms of Bayesian inference and network communications.

Instead of simply modeling observed brain signals in terms of experimental factors (e.g. as in conventional ANOVA models), researchers have begun to explain their data in terms of quantities the brain must encode, under simplifying assumptions about how the brain works. Most computational formulations of brain function assume it can be cast as an optimization of some function of sensory input, with respect to internal brain states and the actions it emits. For Karl Friston and colleagues the quantity being optimized is free energy, which, under certain simplifying assumptions, is prediction error.

Perception

For perception, the brain is thus maximizing mutual information between sensory inputs and internal representations of their causes, or minimizing prediction error. Optimization in perception appears as a principle of maximum efficiency or minimum redundancy, the infomax principle, predictive coding, the Bayesian brain hypothesis, and Friston's free-energy principle, which unifies all these approaches.
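
To make the predictive coding idea concrete, here is a minimal sketch (all numbers are invented for illustration, and this is a caricature, not Friston's full hierarchical scheme): a single hidden cause generates a noisy sensory sample, and perception is cast as gradient descent on precision-weighted prediction errors.

    import numpy as np

    # A caricature of predictive coding: one hidden cause v generates a
    # sensory sample u = g * v + noise. Perception is gradient descent on
    # precision-weighted squared prediction errors (a free-energy-like
    # objective). All numbers here are illustrative.

    g = 2.0                        # generative (forward) mapping
    v_prior, sigma_v = 0.5, 1.0    # prior belief about the cause
    sigma_u = 0.5                  # sensory noise level

    rng = np.random.default_rng(0)
    v_true = 1.5
    u = g * v_true + rng.normal(0.0, sigma_u)   # the observed input

    v = v_prior                                 # initial estimate of the cause
    for _ in range(200):
        eps_u = (u - g * v) / sigma_u**2        # sensory prediction error
        eps_v = (v - v_prior) / sigma_v**2      # prior prediction error
        v += 0.05 * (g * eps_u - eps_v)         # descend the error gradient

    print(f"true cause {v_true:.2f}, inferred cause {v:.2f}")

The inferred cause settles on the Bayesian posterior mean, which is what makes "minimizing prediction error" and "Bayesian inference" two descriptions of the same computation in this simple linear-Gaussian case.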

Decision and Action

In terms of motor control, many different cost functions have been proposed that the brain may be trying to minimize during action (usually expressed in terms of a prediction error). Optimal decision theory (game theory) and reinforcement learning assume that choices and behavior try to maximize expected utility or reward, where this optimization rests upon learning the value or quality of sensory contingencies and actions. This learning may ultimately rely on the assumption that animals extremize expected utility or cost functions (or minimize a reward-related prediction error), which links perceptual (Bayesian) inference on hidden states of the world to behavior and choice. Action and optimal game-theoretic accounts of brain function manifest as Bayes optimality and bounded rationality (where bounds place constraints on optimization). Bounded optimality provides a useful, principled method of specifying the mapping between sensory inputs and observed behavior, and suggests candidate latent variables (represented by brain states) that mediate this mapping. Researchers can thus work out what an ideal Bayesian observer or rational agent would do in response to cues, under a particular model of cue generation and cue-outcome associations. The model is then optimized to account for the observed behavior, with its latent variables used as explanatory variables to identify regionally specific neurophysiological correlates.
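
As a toy illustration of reward-based value learning, the sketch below implements a Rescorla-Wagner delta rule with a softmax choice rule (the reward probabilities and parameters are made up). The reward prediction error computed on each trial is exactly the kind of latent variable that later serves as an explanatory regressor.

    import numpy as np

    # Toy Rescorla-Wagner value learning with softmax choices: the reward
    # prediction error delta = r - V(a) drives learning and is the kind of
    # latent variable used later as a regressor. Parameters are illustrative.

    rng = np.random.default_rng(1)
    p_reward = np.array([0.8, 0.2])   # true reward probabilities of two options
    V = np.zeros(2)                   # learned values
    alpha, beta = 0.2, 3.0            # learning rate, softmax inverse temperature

    for trial in range(500):
        p_choice = np.exp(beta * V) / np.exp(beta * V).sum()   # softmax
        a = rng.choice(2, p=p_choice)                          # choose an option
        r = float(rng.random() < p_reward[a])                  # sample a reward
        delta = r - V[a]                                       # prediction error
        V[a] += alpha * delta                                  # delta-rule update

    print("learned values:", np.round(V, 2))    # should approach [0.8, 0.2]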

A typical experimental paradigm invokes inference (learning) and decisions (responses). The latent variables (such as prediction error, value, uncertainty, risk, surprise, etc.) entailed by the paradigm are then evaluated under the assumption that the subject is Bayes optimal. The subject's behavior is used to resolve uncertainty about which model or model parameters a particular subject is actually using, by adjusting the parameters of the Bayes-optimal scheme so that its responses match the subject's choices in a maximum likelihood sense. Once a match is attained, the latent variables underlying the Bayes-optimal responses are used to explain the observed brain responses: they are convolved with a hemodynamic response function to form regressors in conventional linear convolution models of the fMRI data. Significant regions of the ensuing statistical parametric map, or a priori regions of interest of the functional anatomy, can then be associated with processing or encoding these idealized computational quantities.
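
A schematic of that final convolution step, assuming a canonical double-gamma HRF shape (the onsets and prediction-error values here are invented for illustration):

    import numpy as np
    from scipy.stats import gamma

    # Trial-wise latent variables (e.g. prediction errors) are placed at
    # event onsets and convolved with a canonical double-gamma HRF to form
    # a regressor for a linear convolution (GLM) analysis.

    dt = 0.1                                      # time resolution (s)
    t = np.arange(0, 30, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6  # canonical-style double gamma
    hrf /= hrf.max()

    onsets = np.array([5.0, 20.0, 35.0, 50.0])    # event times (s)
    values = np.array([0.8, -0.3, 0.5, -0.1])     # latent variable per trial

    n = int(60 / dt)
    stick = np.zeros(n)
    stick[(onsets / dt).astype(int)] = values     # parametric stick function

    regressor = np.convolve(stick, hrf)[:n]       # predicted BOLD time course
    print("regressor length:", regressor.shape[0])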
Biophysical Models of Neuronal Dynamics

Current fMRI/EEG/MEG studies are also moving away from simple descriptive models and towards biophysically informed forward models of the data, using electrophysiological source modeling, which allows the informed interrogation of evoked and induced responses at their sources in the brain rather than at the sensors. For fMRI, this has meant the replacement of simple linear convolution models with state-space models containing hidden neuronal and hemodynamic states that can explain multiple modalities. The key to these dynamic causal models of the data is model comparison: each model embodies a mechanistic hypothesis about how the data were generated (a generative model), and the behavior of different models can then be compared against each other and against the observed data. The spectral properties and spatial deployment of self-organized dynamics in the brain place constraints on the anatomical and functional architectures that could support them.
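
As a rough sketch of the neuronal half of such a state-space model, the following simulates the bilinear neuronal state equation used in DCM for fMRI, dx/dt = (A + u*B)x + Cu, for two regions. The hemodynamic (balloon) model that maps neuronal states to BOLD is omitted, and the coupling values are arbitrary.

    import numpy as np

    # Bilinear neuronal state equation behind DCM for fMRI:
    # dx/dt = (A + u_mod * B) x + C * u_drive, integrated by Euler's method.

    A = np.array([[-1.0, 0.0],    # intrinsic coupling: region 1 drives region 2
                  [ 0.4, -1.0]])
    B = np.array([[0.0, 0.0],     # modulatory input gates the 1 -> 2 connection
                  [0.3, 0.0]])
    C = np.array([1.0, 0.0])      # driving input enters region 1 only

    dt, n = 0.01, 2000            # 20 s at 10 ms resolution
    x = np.zeros(2)
    trace = np.zeros((n, 2))
    for i in range(n):
        t = i * dt
        u_drive = 1.0 if (2 <= t < 4) or (12 <= t < 14) else 0.0
        u_mod = 1.0 if t >= 10 else 0.0    # modulation on during second epoch
        x = x + dt * ((A + u_mod * B) @ x + C * u_drive)
        trace[i] = x

    half = n // 2
    print("region-2 peak, unmodulated vs modulated:",
          trace[:half, 1].max().round(3), trace[half:, 1].max().round(3))

The same stimulus evokes a larger region-2 response in the second epoch, because the modulatory input has strengthened the 1-to-2 connection; inverting such a model asks which coupling changes best explain measured responses.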

Understanding emergent properties of neuronal systems

Resting state fMRI brain signals can be characterized in terms of remarkably reproducible principal components or modes (i.e., resting state networks). The numerous resting state fMRI studies highlight that endogenous brain activity is self-organizing and highly structured, even at rest. This leads to many mechanistic questions about the genesis of autonomous dynamics and the structures that support them. The endogenous fluctuations of resting state fMRI are a consequence of dynamics on anatomical connectivity structures with particular scale-invariant and small-world characteristics (well-studied and universal characteristics of complex systems).
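
A minimal illustration of extracting such modes, using simulated data in place of real recordings: two latent "networks" project onto a set of regions, and an SVD of the region-by-time matrix recovers the dominant spatial modes.

    import numpy as np

    # Two latent networks fluctuate (random-walk drifts) and project onto 20
    # simulated "regions"; an SVD of the centred region-by-time matrix
    # recovers the dominant spatial modes, analogous to resting state networks.

    rng = np.random.default_rng(2)
    n_regions, n_time = 20, 500

    latents = np.cumsum(rng.normal(size=(2, n_time)), axis=1)  # slow drifts
    latents -= latents.mean(axis=1, keepdims=True)

    maps = rng.normal(size=(n_regions, 2))      # spatial map of each network
    data = maps @ latents + 0.5 * rng.normal(size=(n_regions, n_time))

    data -= data.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(data, full_matrices=False)
    var_explained = s**2 / (s**2).sum()
    print("variance explained by first 3 modes:", var_explained[:3].round(2))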

Using field-theoretic methods for nonequilibrium statistical processes to describe both neural fluctuations and responses to stimuli, low spiking rates are predicted to lead to neocortical activity that exhibits a phase transition (in the universality class of directed percolation). The density and spatial extent of lateral cortical interactions induce a region of state space that is negligibly affected by fluctuations. As the generation and decay of neuronal activity become more balanced, there is a crossover into a critical fluctuation region. How the brain maintains its dynamics and self-organization near phase transitions is of great interest, and future work can benefit from the universal patterns and structures revealed by studies in synergetics (i.e., the slaving principle, in which the dynamics of fast-relaxing, stable modes are completely determined by the slow dynamics of the amplitudes of a small number of unstable modes). Understanding and characterizing these modes may be a helpful step towards a universal dynamical model of how the brain organizes itself to predict and act on its sensorium.
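
One common caricature of activity propagation near such a phase transition is a branching process, whose critical point (a branching ratio of one) lies in the directed-percolation universality class. A toy simulation, with all parameters chosen purely for illustration:

    import numpy as np

    # Each active unit triggers a Poisson number of units at the next step,
    # with mean sigma (the branching ratio). sigma < 1: activity dies out;
    # sigma = 1: critical, with long-lived, scale-free avalanches;
    # sigma > 1: activity tends to explode.

    rng = np.random.default_rng(3)

    def avalanche_size(sigma, max_steps=10_000, cap=1_000_000):
        active, total = 1, 1
        for _ in range(max_steps):
            active = rng.poisson(sigma * active)   # descendants of active units
            total += active
            if active == 0 or total > cap:
                break
        return total

    for sigma in (0.8, 1.0, 1.2):
        sizes = [avalanche_size(sigma) for _ in range(200)]
        print(f"sigma={sigma}: mean avalanche size {np.mean(sizes):,.0f}")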

Most neuroimaging studies have focused on generative models of neuronal dynamics, which define a mapping from causes to neuronal dynamics. The inversion of these models, mapping from neuronal dynamics back to their causes, now allows one to test different models against empirical data. A good example of this model inversion approach is dynamic causal modeling (the Bayesian inversion and comparison of dynamic models that cause observed data). DCMs are continuous-time, state-space models of how data are caused, in terms of a network of distributed sources talking to each other through parameterized connections and influencing the dynamics of the hidden states that are intrinsic to each. Model inversion provides conditional densities on the parameters, namely extrinsic connection strengths and intrinsic, synaptic parameters. These conditional densities are used to integrate out dependencies on the parameters to provide the probability of the data given the model itself (the model evidence used for model comparison). DCMs treat fMRI/MEG/EEG data as arising from point sources (formally equivalent to graphical models) and infer coupling within and between nodes (brain regions) by perturbing the system with known experimental inputs and trying to explain the observed responses by optimizing the model. This optimization furnishes posterior (conditional) probability distributions on the unknown parameters and the evidence for the model, where each model is a specific hypothesis about functional brain architecture.
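
A crude sketch of evidence-based model comparison, using BIC as a stand-in for the variational free-energy approximation to the log evidence actually used in DCM (reference 2): two hypotheses about how simulated "region 2" activity is generated are scored against each other.

    import numpy as np

    # Two hypotheses about region 2: coupled to region 1, or independent
    # noise. BIC approximates -2 * log evidence; lower BIC = more evidence.

    rng = np.random.default_rng(4)
    n = 200
    x1 = rng.normal(size=n)                      # region-1 activity
    x2 = 0.6 * x1 + 0.4 * rng.normal(size=n)     # region 2 is in fact coupled

    def bic(residuals, k):
        sigma2 = residuals.var()                 # Gaussian MLE of noise variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        return k * np.log(n) - 2 * loglik

    b = (x1 @ x2) / (x1 @ x1)                    # least-squares coupling estimate
    print("coupled model BIC:", round(bic(x2 - b * x1, k=2), 1))
    print("null model BIC:   ", round(bic(x2, k=1), 1))

The coupled model wins decisively here because the data really were generated with a connection; in DCM the same logic adjudicates between hypothesized functional architectures.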

Future developments in computational neuroimaging will seek to use computational models of brain function to constrain biophysical models of observed brain responses. Current DCMs are biophysically but not functionally informed. Future computational models should provide not only a hypothesis about how the brain works but predictions about both neuronal and behavioral responses that can be tested jointly in a neuroimaging context. This may require generalizing the notion of a connection to a coupling tensor (4D object) that couples two (2D) cortical/subcortical fields. It also implicitly requires better inference of unknown instantaneous neuronal states that show self-organized behavior.
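
As a toy illustration of the coupling tensor idea, the sketch below builds a hypothetical 4D object W that maps every location of one 2D field to every location of another; the Gaussian fan-out is chosen arbitrarily, only to make the example concrete.

    import numpy as np

    # A 4-D coupling tensor W[i, j, k, l] couples every location (k, l) of a
    # source 2-D field to every location (i, j) of a target field. Here W is
    # a hypothetical Gaussian fan-out.

    nx = ny = 16
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

    d2 = (xs[:, :, None, None] - xs[None, None]) ** 2 \
       + (ys[:, :, None, None] - ys[None, None]) ** 2
    W = np.exp(-d2 / (2 * 2.0**2))               # Gaussian coupling kernel
    W /= W.sum(axis=(2, 3), keepdims=True)       # normalise incoming weights

    source = np.zeros((nx, ny))
    source[4:6, 4:6] = 1.0                       # a small patch of activity

    drive = np.einsum("ijkl,kl->ij", W, source)  # field-to-field coupling
    print("peak drive on target field:", drive.max().round(3))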


References:


  1. Friston KJ, Kilner J, Harrison L (2006) A free energy principle for the brain. J Physiol Paris 100(1-3):70-87.
  2. Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W (2007) Variational free energy and the Laplace approximation. NeuroImage 34:220-234.
  3. Friston KJ (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci 11:127-138.
