Welcome to a Scintilla of Playful Musings

Welcome to my new blog, noos anakainisis, translated literally as mind renewal. The primary obsessions are neuroscience, computation, information, structure, form, art and history of science. Some environmental, political, and technological developments will also be included.

I hope your neurons are sufficiently stimulated...

Tuesday, June 14, 2011

Traffic Solutions

Still waiting for the anti-jamiton. Phantom jams are born of a lot of cars using the road. No surprise there. But when traffic gets too heavy, it takes the smallest disturbance in the flow – a driver lying on the brakes, someone tailgating, or some moron picking pickles off his burger – to ripple through traffic and create a self-sustaining traffic jam. The mathematics of such traffic jams is strikingly similar to the equations that describe detonation waves produced by explosions, and to those used in fluid mechanics, and it models a traffic jam as a self-sustaining wave. Speed, traffic density and other factors determine the conditions that will lead to a jamiton and how quickly it will spread. Once the jam forms, drivers have no choice but to wait for it to clear. The new model could lead to roads designed with sufficient capacity to keep traffic density below the point at which a jamiton can form. Jamitons have a “sonic point,” which separates traffic flow into upstream and downstream components, much like the event horizon of a black hole. This sonic point prevents communication between these distinct components, so information about free-flowing conditions just beyond the front of the jam can’t reach drivers behind the sonic point. Ergo, there you sit, stuck in traffic with no idea that the jam has no external cause, your blood pressure racing toward the stratosphere.
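
For the curious, here is a minimal sketch (in Python) of how a phantom jam can emerge. It is not the MIT group's PDE jamiton model but the classic optimal-velocity car-following model on a ring road, with parameter values picked purely for illustration; still, a single slightly-out-of-place car is enough to seed a self-sustaining stop-and-go wave.

    # Phantom jam sketch: optimal-velocity car-following model on a ring road.
    import numpy as np

    N, L = 100, 200.0                 # 100 cars on a 200 m ring road (headway ~2 m)
    a, dt, steps = 1.0, 0.05, 20000   # driver sensitivity, time step, number of steps

    def optimal_velocity(gap):
        """Preferred speed as a function of the gap to the car ahead (Bando et al. form)."""
        return np.tanh(gap - 2.0) + np.tanh(2.0)

    x = np.arange(N) * (L / N)        # evenly spaced cars...
    v = optimal_velocity(L / N) * np.ones(N)
    x[0] += 0.5                       # ...except one small perturbation

    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % L              # distance to the car ahead, around the ring
        v += a * (optimal_velocity(gap) - v) * dt   # relax toward the preferred speed
        x = (x + v * dt) % L

    print("spread of car speeds after the perturbation:", round(v.max() - v.min(), 2))
    # A large spread means a self-sustaining stop-and-go wave has formed;
    # remove the perturbation and every car just keeps cruising at the same speed.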

The Model:


The Experiment:


We need to learn from the penguins! (Be sure to check out the penguin videos, quite striking.)

Or we need robot cars, since the congestion is a result of human error. Robot cars would communicate with each other once in traffic and basically hook up into a virtual train, with virtual space buffers, all going the same speed without slowing and without the additive errors of braking too hard or tailgating. It would be a hybrid between individual transport and mass transit: you hail a car by phone, tell it where to go, and release it when you're done, and the units can join these car flocks on the highways and major roads or pop off and go their own way to your specific destination.

I want the future now!

Tuesday, March 1, 2011

What is Phenotypic Computing?

An interesting perspective on computing from Jaron Lanier.

I like it because it sounds like computing approaches might begin converging on how the nervous system approaches problems. He takes on the dogma of elegant thought/theory by giants like Shannon, Turing, von Neumann, Wiener, pointing out that most of their viewpoint was constrained (to a degree) by sending signals down wires, which forces a particular temporal perspective, of getting single datapoints over time.

A paraphrase: "If you model information theory on signals going down a wire, you simplify your task in that you only have one point being measured or modified at a time at each end…At the same time, though, you pay by adding complexity at another level….which leads to a particular set of ideas about coding schemes in which the sender and receiver have agreed on a temporal syntactical layer in advance…You stretch information out in time and have past bits give context to future bits in order to create a coding scheme….In order to keep track of a protocol you have to devote huge memory and computational resources to representing the protocol rather than the stuff of ultimate interest. This kind of memory use is populated by software artifacts called data-structures, such as stacks, caches, hash tables, links and so on. They are the first objects in history to be purely syntactical…..With protocols you tend to be drawn into all-or-nothing high wire acts of perfect adherence in at least some aspects of your design….leads to…. brittleness in existing computer software, which means that it breaks before it bends."

So just as we neuroscientists are learning that a 1- or even a 2-compartment model of a neuron is not enough, that distal vs. basal dendritic inputs have vastly nonlinear interaction effects, that neuropeptides are abundant and very important functional components in neural circuits, and that neurons are strongly influenced by ephaptic coupling, we come to that same conclusion: there is a lot more going on in our brains than wire transmission down axons, including lots of volume transmission. This leads to a system with a constant minor presence of errors or noise, and to a world of approximation and guessing.

Another paraphrase: "The alternative, in which you have a lot of measurements available at one time on a surface, is called pattern classification….The distinction between protocols and patterns is not absolute-one can in theory convert between them. But it’s an important distinction in practice…you enter into a different world that has its own tradeoffs and expenses. You’re trying to be an ever better guesser instead of a perfect decoder. You probably start to try to guess ahead, to predict what you are about to see, in order to get more confident about your guesses. You might even start to apply the guessing method between parts of your own guessing process. You rely on feedback to improve your guesses….you enter into a world of approximation rather than perfection. With protocols you tend to be drawn into all-or-nothing high wire acts of perfect adherence in at least some aspects of your design. Pattern recognition, in contrast, assumes the constant minor presence of errors and doesn’t mind them. I’ve suggested that we call the alternative approach to software that I’ve outlined above “Phenotropic.”…The goal is to have all of the components in the system connect to each other by recognizing and interpreting each other as patterns rather than as followers of a protocol that is vulnerable to catastrophic failures. One day I’d like to build large computers using pattern classification as the most fundamental binding principle, where the different modules of the computer are essentially looking at each other and recognizing states in each other, rather than adhering to codes in order to perfectly match up with each other."

Saturday, October 2, 2010

Computational and dynamic models in neuroimaging


Much of our understanding of the brain is modular. Investigation has necessarily focused on its individual parts at different levels of analysis (e.g. individual neurons and brain areas), because understanding the parts is a prerequisite to understanding the whole, but also because of historical limitations inherent in our tools of investigation. But recent years have seen a rise in approaches designed to gain a more integrative understanding of the brain as interacting networks of neurons, areas, and systems. Functional neuroimaging has allowed big-picture views of activity throughout the human brain. This permits direct comparisons of patterns of activation across many brain areas simultaneously and, by examining coherent fluctuations in blood flow, identifies putative large-scale, brain-wide networks. There has also been the rise of large-scale multiple-electrode neurophysiology, the implantation of up to 100 or more electrodes, often in multiple brain structures. This allows comparisons of neuron populations in different brain areas that are not confounded by extraneous factors (differences in level of experience, ongoing behavior, etc.) as well as measurements of the relative timing of activity between neurons that give insight into network properties. This growth in integrative approaches is both technically and conceptually driven. The statistical and computational expertise required to design and analyze neuroimaging experiments means that most practitioners in functional magnetic resonance imaging (fMRI) and electrophysiology (single unit, EEG or MEG) could call themselves computational neuroscientists. I will briefly review two aspects of this trend: models of brain function (that try to account for perception, action and cognition) and biophysical models of neuronal dynamics.

Computational Models of Brain Function Implied by fMRI/EEG/MEG

Techniques adopted from computational neuroscience, machine learning, and optimal decision and game theory provide a mechanistic formulation and also allow one to make quantitative predictions that can be operationalized in terms of explanatory variables (such as regressors in an fMRI design matrix). Current trends in fMRI/EEG/MEG studies include: autonomous brain dynamics as measured with resting state fMRI, neuroeconomics and game theory, and the use of optimal control theory and information theory to ask how the brain makes optimal decisions and actions under uncertainty. For perception, top-down and bottom-up effects are increasingly described in terms of Bayesian inference and network communications.

Instead of simply modeling observed brain signals in terms of experimental factors (e.g. as in conventional ANOVA models), researchers have begun to explain their data in terms of quantities the brain must encode, under simplifying assumptions about how the brain works. Most computational formulations of brain function assume it can be cast as an optimization of some function of sensory input, with respect to internal brain states and the actions it emits. For Karl Friston and colleagues the quantity being optimized is free energy, which, under certain simplifying assumptions, is prediction error.
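
As a toy illustration of the "optimize prediction error" idea (a minimal sketch under Gaussian assumptions, not Friston's full variational scheme, and with made-up numbers), here an internal estimate of a hidden cause is nudged by gradient descent on precision-weighted prediction errors from noisy sensory samples and from a prior expectation:

    # Minimal prediction-error minimization sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    true_cause = 3.0                      # hidden state of the world
    prior_mean, prior_prec = 0.0, 0.1     # prior belief and its precision
    sens_prec = 1.0                       # precision (inverse variance) of sensory noise
    mu, lr = 0.0, 0.05                    # internal estimate and learning rate

    for t in range(2000):
        y = true_cause + rng.normal(scale=1.0)   # noisy sensory sample
        eps_sens = y - mu                        # sensory prediction error
        eps_prior = prior_mean - mu              # deviation from the prior expectation
        # Gradient step on the precision-weighted sum of squared errors
        # (a crude stand-in for minimizing free energy under Gaussian assumptions).
        mu += lr * (sens_prec * eps_sens + prior_prec * eps_prior)

    print(f"estimate after learning: {mu:.2f} (true cause = {true_cause})")

The estimate settles near the precision-weighted compromise between the prior and the sensory evidence, which is the flavor of "Bayes optimal" that these models appeal to.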

Perception

For perception, the brain is thus maximizing the mutual information between sensory inputs and internal representations of their causes, or minimizing prediction error. Optimization in perception appears as a principle of maximum efficiency or minimum redundancy, the infomax principle, predictive coding, the Bayesian brain hypothesis, and Friston's free-energy principle, which aims to unify all of these approaches.

Decision and Action

In terms of motor control, many different cost functions have been proposed, which the brain is trying to minimize during action (usually conveyed in terms of a prediction error). Optimal decision theory (game theory) and reinforcement learning assume that choices and behavior are trying to maximize expected utility or reward, where this optimization rests upon learning the value or quality of sensory contingencies and actions. This learning may also ultimately rely on an assumption that animals extremize expected utility or cost functions (or minimize a reward-related prediction error), which links perceptual (Bayesian) inference on hidden states of the world to behavior and choice. Action and optimal game-theoretic accounts of brain function manifest as Bayes optimality and bounded rationality (where bounds place constraints on the optimization). Bounded optimality provides a useful, principled method of specifying the mapping between sensory inputs and observed behavior, and suggests candidate latent variables (represented by brain states) that mediate this mapping. Researchers can thus work out what an ideal Bayesian observer or rational agent would do in response to cues, under a particular model of cue generation and cue-outcome associations. The model is then optimized to account for the observed behavior, with its latent variables used as explanatory variables to identify regionally specific neurophysiological correlates.
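
A minimal sketch of the reinforcement-learning piece (Rescorla-Wagner-style value learning with a reward prediction error; the cue names and reward probabilities below are invented for illustration):

    # Trial-by-trial value learning driven by reward prediction errors.
    import numpy as np

    rng = np.random.default_rng(1)
    p_reward = {"cue_A": 0.8, "cue_B": 0.2}   # hypothetical reward contingencies
    V = {"cue_A": 0.0, "cue_B": 0.0}          # learned values
    alpha = 0.1                                # learning rate
    prediction_errors = []                     # latent variable: one value per trial

    for trial in range(200):
        cue = rng.choice(list(p_reward))
        reward = float(rng.random() < p_reward[cue])
        delta = reward - V[cue]                # reward prediction error
        V[cue] += alpha * delta                # value update
        prediction_errors.append(delta)

    print({c: round(val, 2) for c, val in V.items()})
    # 'prediction_errors' is exactly the kind of trial-wise latent variable that gets
    # convolved with a hemodynamic response function to form an fMRI regressor.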

A typical experimental paradigm invokes inference (learning) and decisions (responses). The latent variables entailed by the paradigm (such as prediction error, value, uncertainty, risk, surprise, etc.) are then evaluated under the assumption that the subject is Bayes optimal. The subject's behavior is used to resolve uncertainty about which model or model parameters a particular subject is actually using, by adjusting the parameters of the Bayes-optimal scheme until its predicted responses match the subject's choices in a maximum likelihood sense. Once a match is attained, the implicit latent variables subtending the Bayes-optimal responses are used to explain the observed brain responses: they are convolved with a hemodynamic response function to form regressors in conventional linear convolution models of the fMRI data. Significant regions of the ensuing statistical parametric map, or a priori regions of interest, can then be associated with processing or encoding these idealized computational quantities.
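
A minimal sketch of that last step, building a regressor from a trial-wise latent variable. The onsets, values, and HRF parameters below are illustrative; the double-gamma shape follows the commonly used canonical form rather than any particular package's implementation.

    # Turn a trial-wise latent variable into a predicted BOLD regressor.
    import numpy as np
    from scipy.stats import gamma

    dt = 0.1                                          # time resolution in seconds
    t = np.arange(0, 32, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0    # peak ~5 s, undershoot ~15 s
    hrf /= hrf.sum()

    duration = 300.0                                  # seconds of scanning
    onsets = np.arange(10, 290, 20.0)                 # hypothetical trial onsets
    values = np.random.default_rng(2).normal(size=len(onsets))  # e.g. prediction errors

    stick = np.zeros(int(duration / dt))
    stick[(onsets / dt).astype(int)] = values         # parametric 'stick' function

    regressor = np.convolve(stick, hrf)[:len(stick)]  # predicted BOLD time course
    TR = 2.0
    regressor_at_scans = regressor[::int(TR / dt)]    # resample at the scanner's TR
    print(regressor_at_scans.shape)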
Biophysical Models of Neuronal Dynamics

Current fMRI/EEG/MEG studies are also moving away from simple descriptive models and towards biophysically informed forward models of the data, using electrophysiological source modeling, which allows the informed interrogation of evoked and induced responses at their source in the brain rather than at the sensors. For fMRI, this has meant the replacement of simple linear convolution models with state-space models containing hidden neuronal and hemodynamic states that can explain multiple modalities. The key to these dynamic causal models of the data is model comparison: each model embodies a mechanistic hypothesis about how the data were generated (a generative model), and the different models can then be compared against each other and against the observed data. The spectral properties and spatial deployment of self-organized dynamics in the brain place constraints on the anatomical and functional architectures that could support them.

Understanding emergent properties of neuronal systems

Resting state fMRI brain signals can be characterized in terms of remarkably reproducible principal components or modes (i.e., resting state networks). The numerous resting state fMRI studies highlight that endogenous brain activity is self-organizing and highly structured, even at rest. This leads to many mechanistic questions about the genesis of autonomous dynamics and the structures that support them. The endogenous fluctuations of resting state fMRI are a consequence of dynamics on anatomical connectivity structures with particular scale-invariant and small-world characteristics (well-studied and universal characteristics of complex systems).

Using field-theoretic methods for nonequilibrium statistical processes to describe both neural fluctuations and responses to stimuli, low spiking rates are predicted to lead to neocortical activity that exhibits a phase transition (in the universality class of directed percolation). The density and spatial extent of lateral cortical interactions induce a region of state-space that is negligibly affected by fluctuations. As the generation and decay of neuronal activity become more balanced, there is a crossover into a critical fluctuation region. How the brain maintains its dynamics and self-organization near phase transitions is of great interest, and future work can benefit from the universal patterns and structures revealed by synergetics (i.e., the enslaving principle, in which the dynamics of fast-relaxing, stable modes are completely determined by the slow dynamics of the amplitudes of a small number of unstable modes). Understanding and characterizing these modes may be a helpful step towards a universal dynamical model of how the brain organizes itself to predict and act on its sensorium.

Most neuroimaging studies have focused on generative models of neuronal dynamics that define a mapping from causes to neuronal dynamics. The inversion of these models, mapping from neuronal dynamics to their causes, now allows one to test different models against empirical data. One good example of this model inversion approach is dynamic causal modeling (Bayesian inversion and comparison of dynamic models that cause observed data). DCMs are continuous time, state-space models of how data are caused in terms of a network of distributed sources talking to each other through parameterized connections and influencing the dynamics of hidden states that are intrinsic to each. Model inversion provides conditional densities on their parameters in terms of extrinsic connection strengths and intrinsic, synaptic parameters. These conditional densities are used to integrate out dependencies on the parameters to provide the probability of the data given the model per se (model evidence that is used for model comparison). DCMs consider point sources for fMRI/MEG/EEG data (formally equivalent to graphical models) and infer coupling within and between nodes (brain regions) based on perturbing the system with known experimental inputs and trying to explain the observed responses by optimizing the model. The optimization furnishes posterior (conditional) probability distributions on the unknown parameters and the evidence for the model, where each model is a specific hypothesis about functional brain architectures.
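
A minimal sketch of the neuronal state equation at the heart of DCM for fMRI, dz/dt = (A + u·B)z + C·u, with two regions and made-up coupling values. Real DCM adds a hemodynamic model on top of this and inverts the whole thing with variational Bayes, so treat this purely as a forward-simulation illustration:

    # Forward simulation of a two-region bilinear neuronal model (DCM-style).
    import numpy as np

    A = np.array([[-0.5, 0.0],          # fixed coupling: each region decays to baseline,
                  [ 0.3, -0.5]])        # region 1 drives region 2
    B = np.array([[0.0, 0.0],
                  [0.4, 0.0]])          # the input strengthens the 1 -> 2 connection
    C = np.array([1.0, 0.0])            # the input drives region 1 directly

    dt, T = 0.01, 60.0
    n_steps = int(T / dt)
    z = np.zeros(2)
    trace = np.zeros((n_steps, 2))

    for i in range(n_steps):
        t = i * dt
        u = 1.0 if (t % 20.0) < 5.0 else 0.0     # boxcar experimental input
        dz = (A + u * B) @ z + C * u
        z = z + dz * dt
        trace[i] = z

    print("peak responses per region:", trace.max(axis=0).round(2))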

Future developments in computational neuroimaging will seek to use computational models of brain function to constrain biophysical models of observed brain responses. Current DCMs are biophysically but not functionally informed. Future computational models should provide not only a hypothesis about how the brain works but predictions about both neuronal and behavioral responses that can be tested jointly in a neuroimaging context. This may require generalizing the notion of a connection to a coupling tensor (4D object) that couples two (2D) cortical/subcortical fields. It also implicitly requires better inference of unknown instantaneous neuronal states that show self-organized behavior.


References:


  1. Friston KJ, Kilner J, Harrison L (2006) A free-energy principle for the brain. J Physiol Paris 100(1-3):70-87.
  2. Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W (2007) Variational free energy and the Laplace approximation. NeuroImage 34:220-234.
  3. Friston KJ (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci 11:127-138.

Tuesday, September 28, 2010

Repurposing 2 liter plastic bottles for a ship and an island

If one ever becomes unemployed, clearly the best option is to collect recyclable plastic bottles like many other unemployed (and often homeless) people. But instead of turning them in for $0.05 apiece, use them to build a seaworthy vessel such as the Plastiki and your own island:

First improvement of fundamental Max-Flow Algorithm in over 10 years!

The maximum flow problem (or max flow) is, roughly speaking, to calculate the maximum number of items that can move from one end of a network to the other, given the capacity limitations of the network’s links. The items could be data packets traveling over the Internet or boxes of goods traveling over the highways; the links’ limitations could be the bandwidth of Internet connections or the average traffic speeds on congested roads.

Max flow (and its dual, the minimum s-t cut problem) is one of the most fundamental and extensively studied problems in computer science (and in operations research and optimization) and a staple of introductory courses on algorithms. For decades it was a prominent research subject, with new algorithms that solved it more and more efficiently coming out once or twice a year. But as the problem became better understood, the pace of innovation slowed. Now, however, Jonathan Kelner, MIT assistant professor of applied mathematics, CSAIL grad student Aleksander Madry, math undergrad Paul Christiano, Yale professor Daniel Spielman, and USC professor Shanghua Teng have demonstrated the first improvement of the max-flow algorithm in a decade.

More technically, the problem has to do with what mathematicians call graphs. A graph is a collection of vertices and edges, which are generally depicted as circles and the lines connecting them. The standard diagram of a communications network is a graph, as is, say, a family tree. In the max-flow problem, one of the vertices in the graph — one of the circles — is designated the source, where the item comes from; another is designated the drain, where the item is headed. Each of the edges — the lines connecting the circles — has an associated capacity, or how many items can pass over it.
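
To make the setup concrete, here is a sketch of the classical augmenting-path approach that the new work improves on (Edmonds-Karp style: repeatedly find a source-to-drain path with spare capacity by breadth-first search and push flow along it). The tiny example graph is made up.

    # Classical augmenting-path max flow (Edmonds-Karp).
    from collections import deque

    def max_flow(capacity, source, sink):
        n = len(capacity)
        flow = [[0] * n for _ in range(n)]
        total = 0
        while True:
            # BFS for a path that still has residual capacity.
            parent = [-1] * n
            parent[source] = source
            queue = deque([source])
            while queue and parent[sink] == -1:
                u = queue.popleft()
                for v in range(n):
                    if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if parent[sink] == -1:
                return total                      # no augmenting path left
            # Find the bottleneck along the path, then push that much flow.
            bottleneck, v = float("inf"), sink
            while v != source:
                u = parent[v]
                bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
                v = u
            v = sink
            while v != source:
                u = parent[v]
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
                v = u
            total += bottleneck

    cap = [[0, 10, 10, 0],
           [0,  0,  2, 8],
           [0,  0,  0, 9],
           [0,  0,  0, 0]]
    print(max_flow(cap, 0, 3))   # 17: 8 units along 0-1-3 and 9 units along 0-2-3

Each search is cheap, but the number of augmentations grows with the size of the graph, which is what the electrical-flow approach described below sidesteps.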

Such graphs model real-world transportation and communication networks in a fairly straightforward way, but their applications are actually much broader. Max-flow computations are also used as subroutines in algorithms for many other optimization problems. Outside of network analysis, a short list of applications includes airline scheduling, circuit analysis, task distribution in supercomputers, digital image processing, and DNA sequence alignment.

Graphs to grids

Traditionally, algorithms for calculating max flow would consider one path through the graph at a time.  If it had unused capacity, the algorithm would simply send more items over it and see what happened. Improvements in the algorithms’ efficiency came from cleverer and cleverer ways of selecting the order in which the paths were explored.

But Kelner and colleagues treat a capacitated, undirected graph as a network of resistors and describe a fundamentally new technique for approximating the maximum flow in these graphs by computing electrical flows in resistor networks. They then use this technique to develop the asymptotically fastest-known algorithm for solving the max flow problem by solving a sequence of electrical flow problems with varying resistances on the edges. Each of these electrical flow problems can be reduced to the solution of a system of linear equations in a Laplacian matrix, which can be solved in nearly-linear time.

The researchers represent the graph as a matrix: each node is assigned one row and one column, and the value where one node’s row intersects another node’s column encodes the connection between those two nodes (its capacity, or, in the electrical analogy, its conductance). In the branch of mathematics known as linear algebra, a row of a matrix can also be interpreted as an equation, and the tools of linear algebra enable the simultaneous solution of all the equations embodied by all of a matrix’s rows. By repeatedly modifying the numbers in the matrix and re-solving the equations of the Laplacian system, the researchers effectively evaluate the whole graph at once, which turns out to be more efficient than trying out paths one by one.
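
A minimal sketch of the electrical-flow subroutine (a dense pseudoinverse solve on a toy graph with made-up edges; the real algorithm repeats this with carefully re-weighted resistances and uses nearly-linear-time Laplacian solvers instead of a dense solve):

    # Electrical flow on a small resistor network via the graph Laplacian.
    import numpy as np

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]    # a small made-up graph
    conductance = {e: 1.0 for e in edges}               # 1 / resistance per edge
    n, source, sink = 4, 0, 3

    L = np.zeros((n, n))
    for (u, v), c in conductance.items():
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c

    b = np.zeros(n)
    b[source], b[sink] = 1.0, -1.0                      # inject / extract one unit of current

    # L is singular (constant vectors lie in its null space), so use a pseudoinverse.
    phi = np.linalg.pinv(L) @ b                         # node potentials

    for (u, v), c in conductance.items():
        print(f"current on edge {u}-{v}: {c * (phi[u] - phi[v]):+.3f}")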

If V is the number of vertices in a graph and E is the number of edges between them, then the running time of the fastest previous max-flow algorithm scaled roughly as (V + E)^(3/2), while the new algorithm scales roughly as (V + E)^(4/3) (ignoring constant and logarithmic factors). For a network like the Internet, which has hundreds of billions of nodes, the new algorithm could solve the max-flow problem hundreds of times faster than its predecessor. Beyond its immediate practical use, the breakthrough approach will likely change how a number of fields attack related problems.
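
A back-of-the-envelope check of that speedup claim (constants and log factors ignored, so purely illustrative): the ratio of the two running times scales as (V + E)^(3/2 - 4/3) = (V + E)^(1/6).

    # Rough speedup estimate: old/new running-time ratio ~ (V + E)^(1/6).
    size = 1e12                            # a graph with on the order of a trillion nodes + edges
    speedup = size ** (3 / 2 - 4 / 3)
    print(f"asymptotic speedup factor: ~{speedup:.0f}x")   # ~100x at this size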

Sunday, September 26, 2010

Retinal Isomerization Almost Perfectly Efficient

Polli et al. excited retinal in rhodopsin and then followed the molecule as it returned to its electronic ground state. By monitoring stimulated emission and absorption of light from the molecule, they mapped out the energy gap between the ground and excited electronic states as a function of time after excitation. Their data revealed an initial decrease and a subsequent increase of the energy gap, consistent with passage through a conical intersection (a crossing of the potential-energy 'landscapes' of two electronic states, each plotting the total energy of a collection of N atoms as a function of their 3N atomic coordinates, through which a molecule can pass from an excited state to the ground state). The authors also simulated the excited-state dynamics of retinal in rhodopsin; the simulations agreed with the measured data and allowed an inference of the time-evolution of the retinal geometry after excitation. In this molecular 'movie' of the first step in vision, retinal in a crowded protein environment reaches its conical intersection seam within 75 femtoseconds (astonishingly short, essentially the same time as that predicted by theoretical simulations of retinal in the gas phase). This indicates that the binding pocket for retinal in rhodopsin must be ideally organized to both promote and accommodate the observed conformational change, and it indicates which of the geometries along the seam of conical intersections is responsible for the ultrafast de-excitation in rhodopsin. The conical intersection topography is strongly 'peaked'--spectral signatures of part of the molecular wavepacket remaining on the excited state are largely absent from the experimental data--showing that the passage of retinal through the conical intersection is nearly perfectly efficient.

References:
Polli D, Altoè P, Weingart O, Spillane KM, Manzoni C, Brida D, Tomasello G, Orlandi G, Kukura P, Mathies RA, Garavelli M, Cerullo G (2010) Conical intersection dynamics of the primary photoisomerization event in vision. Nature 467:440-443. doi:10.1038/nature09346

Martinez TJ (2010) Physical chemistry: Seaming is believing. Nature 467(7314):412. doi:10.1038/467412a

Friday, September 17, 2010

The Ratio Club

[Photo, archived at the Wellcome Library, London: the Ratio Club at Cambridge. Standing, from left to right: Giles Brindley, Harold Shipton, Tom McClardy, John Bates, Ross Ashby, Edmund Hick, Thomas Gold, John Pringle, Donald Sholl, Albert Uttley, John Westcott, Donald MacKay; sitting: Alan Turing, Gurney Sutton, William Rushton, George Dawson, Horace Barlow.]

The British physiologist William Grey Walter (1910–1977) was an early member of the interdisciplinary Ratio Club. This was a small dining club that met several times a year from 1949 to 1955, with a nostalgic final meeting in 1958, at London’s National Hospital for Neurological Diseases. The founder-secretary was the neurosurgeon John Bates, who had worked (alongside the psychologist Kenneth Craik) on servomechanisms for gun turrets during the war.

The club was a pioneering source of ideas in what Norbert Wiener had recently dubbed ‘cybernetics’. Indeed, Bates’ archive shows that the letter inviting membership spoke of ‘people who had Wiener’s ideas before Wiener’s book appeared’. In fact, its founders had considered calling it the Craik Club, in memory of Craik’s work—not least, his stress on ‘synthetic’ models of psychological theories. In short, the club was the nucleus of a thriving British tradition of cybernetics, started independently of the transatlantic version.

The Ratio members—about twenty at any given time—were a very carefully chosen group. Several of them had been involved in wartime signals research or intelligence work at Bletchley Park, where Alan Turing had used primitive computers to decipher the Nazis’ Enigma code. They were drawn from a wide range of disciplines: clinical psychiatry and neurology, physiology, neuroanatomy, mathematics/statistics, physics, astrophysics, and the new areas of control engineering and computer science.

The aim was to discuss novel ideas: their own, and those of guests—such as Warren McCulloch. Indeed, McCulloch—the prime author, a few years earlier, of what became the seminal paper in cognitive science (McCulloch and Pitts 1943)—was their very first speaker in December 1949. (Bates and Donald MacKay, who’d hatched the idea of the club on a shared train journey after visiting Grey Walter, knew that McCulloch was due to visit England and timed the first meeting accordingly.) Turing himself gave a guest talk on Educating a Digital Computer exactly a year later, and soon became a member. (His other talk to the club was on morphogenesis.) Professors were barred, to protect the openness of speculative discussion. So the imaginative anatomist J. Z. Young (who’d discovered the squid’s giant neurones, and later suggested the ‘selective’ account of learning) couldn’t join the club, but gave a talk as a guest.

The club’s archives contain a list of thirty possible discussion topics drawn up by Ashby (Owen Holland p.c.). Virtually all of these are still current. What’s more, if one ignores the details, they can’t be better answered now than they could in those days. These wide-ranging meetings were enormously influential, making intellectual waves that are still spreading in various areas of cognitive science. The neurophysiologist Horace Barlow (p.c.) now sees them as crucial for his own intellectual development, in leading him to think about the nervous system in terms of information theory. And Giles Brindley, another important neuroscientist, who was brought along as a guest by Barlow before joining for a short time, also remembers them as hugely exciting occasions. See the photo above, archived at the Wellcome Library, London, of “The Ratio Club” at Cambridge. Fortuitously, this single photo was taken at a Ratio Club meeting held May 2-3, 1952 that was attended by a guest, “Giles Brindley (London Hospital).” Giles is the gent marked by the yellow circle. Also in this group are two pioneers in computer science so significant that their names are immediately recognizable: that’s Donald MacKay marked in red and Alan Turing in green.


Reference: http://www.rutherfordjournal.org/article020101.html#sdfootnote8sym

Unusual Science Talks: Extreme Show & Tell

In doing a little research on the history of visual prostheses, I uncovered a gem, Sir Giles Skey Brindley (the account below is paraphrased from the sources listed at the end).

A rather diverse fellow, he made significant contributions to cortical prostheses in the 1960s, but perhaps even more noteworthy was his later work in the 80s on penile dysfunction and various cures. This culminated in a rather unusual scientific presentation at the 1983 Las Vegas meeting of the American Urological Association.

The lecture, which had an innocuous title along the lines of ‘Vaso-active therapy for erectile dysfunction’ was scheduled as an evening lecture of the Urodynamics Society. Professor Brindley, still in his blue track suit, was introduced as a psychiatrist with broad research interests. He began his lecture without aplomb. He had, he indicated, hypothesized that injection with vasoactive agents into the corporal bodies of the penis might induce an erection. Lacking ready access to an appropriate animal model, and cognisant of the long medical tradition of using oneself as a research subject, he began a series of experiments on self-injection of his penis with various vasoactive agents, including papaverine, phentolamine, and several others. (While this is now commonplace, at the time it was unheard of). His slide-based talk consisted of a large series of photographs of his penis in various states of tumescence after injection with a variety of doses of phentolamine and papaverine. The Professor wanted to make his case in the most convincing style possible. He indicated that, in his view, no normal person would find the experience of giving a lecture to a large audience to be erotically stimulating or erection-inducing. He had, he said, therefore injected himself with papaverine in his hotel room before coming to give the lecture, and deliberately wore loose clothes (hence the track-suit) to make it possible to exhibit the results. He stepped around the podium, and pulled his loose pants tight up around his genitalia in an attempt to demonstrate his erection.

At this point, I, and I believe everyone else in the room, was agog. I could scarcely believe what was occurring on stage. But Prof. Brindley was not satisfied. He looked down sceptically at his pants and shook his head with dismay. ‘Unfortunately, this doesn’t display the results clearly enough’. He then summarily dropped his trousers and shorts, revealing a long, thin, clearly erect penis. There was not a sound in the room. Everyone had stopped breathing. But the mere public showing of his erection from the podium was not sufficient. He paused, and seemed to ponder his next move. The sense of drama in the room was palpable. He then said, with gravity, ‘I’d like to give some of the audience the opportunity to confirm the degree of tumescence’. With his pants at his knees, he waddled down the stairs, approaching (to their horror) the urologists and their partners in the front row. As he approached them, erection waggling before him, four or five of the women in the front rows threw their arms up in the air, seemingly in unison, and screamed loudly. The scientific merits of the presentation had been overwhelmed, for them, by the novel and unusual mode of demonstrating the results.

References:

  1. How (not) to communicate new scientific information: a memoir of the famous Brindley lecture. DOI: 10.1111/j.1464-410X.2005.05797.x
  2. Professor Giles Brindley – Extreme Show & Tell February 15, 2010.
  3. Brindley GS. Cavernosal alpha-blockade: a new technique for investigating and treating erectile impotence. Br J Psychiatry (1983) 143: 332–337.

Tuesday, September 14, 2010

New Proof that the Sum of Digits of Prime Numbers is Evenly Distributed

Many arithmetical problems involve prime numbers and remain unresolved even after centuries. For example, the sequence of prime numbers is infinite, but it is still not known if an infinity of prime numbers p exists such that p+2 is also a prime number (the problem of twin prime numbers). One hypothesis about prime numbers, first formulated in 1968 by Alexandre Gelfond, has recently been proven by Christian Mauduit and Joel Rivat from the Institut de Mathématiques de Luminy.  It states that on average, there are as many prime numbers for which the sum of decimal digits is even as prime numbers for which it is odd.  In order to arrive at this result, the researchers employed highly groundbreaking methods derived from combinatorial mathematics, the analytical theory of numbers and harmonic analysis.  This proof should pave the way for the resolution of other difficult questions concerning the representation of certain sequences of integers.   Apart from their theoretical interest, these questions are directly linked to the construction of sequences of pseudo-random numbers and have important applications in digital simulation and cryptography.
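
A quick empirical illustration of the statement (not, of course, of the proof): sieve the primes below some bound and count even versus odd decimal digit sums; the two counts come out nearly equal, as the theorem says they should on average.

    # Parity of digit sums of primes below two million.
    import numpy as np

    N = 2_000_000
    is_prime = np.ones(N, dtype=bool)
    is_prime[:2] = False
    for i in range(2, int(N ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i::i] = False          # sieve of Eratosthenes

    even = odd = 0
    for p in np.flatnonzero(is_prime):
        if sum(int(d) for d in str(p)) % 2 == 0:
            even += 1
        else:
            odd += 1

    print(f"even digit sum: {even}  odd digit sum: {odd}  ratio: {even / odd:.4f}")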

Christian Mauduit, Joël Rivat. Sur un problème de Gelfond: la somme des chiffres des nombres premiers.  Annals of Mathematics, (2010) 171(3):1591.  DOI:10.4007/annals.2010.171.1591

Tuesday, September 7, 2010

The Evolution of Spite (and Altruism)

Behaviors that decrease the relative fitness of the actor--and also either benefit (altruism) or harm (spite) other individuals--are difficult to reconcile with natural selection and the maximization of individual fitness. Paragons of altruism are the sterile worker castes within eusocial insect colonies, which help rear the offspring of their queen, and the slime mold cells that altruistically give up their own survival to become the nonviable stalk of a fruiting body, helping other cells to disperse in the form of spores. These behaviors reduce the reproductive success of the altruist--so why doesn't natural selection weed out the genes responsible for them?


Hamilton showed that genes can spread not only through their direct impact on their own transmission, but also through their indirect impact on the transmission of copies present in other individuals. He introduced the theoretical concept of inclusive fitness--Hamilton's Rule--which states that a trait will be favored by selection when rb-c>0, where c is the fitness cost to the actor, b is the fitness benefit to the recipient and r is their genetic relatedness. Consequently, altruistic behaviors are favored if the benefits are directed toward other individuals who share genes for altruism.
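
A minimal worked example of Hamilton's rule (the numbers are arbitrary): with full siblings (r = 0.5), helping pays off only when the benefit to the recipient is more than twice the cost to the helper.

    # Hamilton's rule: a trait is favored when r*b - c > 0.
    def favored_by_selection(r, b, c):
        """r: genetic relatedness, b: benefit to recipient, c: cost to actor."""
        return r * b - c > 0

    print(favored_by_selection(r=0.5, b=3.0, c=1.0))   # True:  0.5 * 3.0 = 1.5 > 1.0
    print(favored_by_selection(r=0.5, b=1.5, c=1.0))   # False: 0.5 * 1.5 = 0.75 < 1.0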


Eusociality, depending on how it is defined, has evolved 3-11 times, in Hymenoptera (ants, bees, wasps), termites, thrips, aphids, spiders, beetles, shrimps and mole rats. A crucial parameter for the evolution of eusociality is strict, lifetime monogamy (which has been shown to be the ancestral state in all independent origins of eusociality studied so far), in which a female mates with only one male in her entire life. This monogamy leads to a potential worker being equally related (r=0.5) to her own offspring and to the offspring of her mother (her siblings). In this case, any small efficiency benefit of rearing siblings over rearing her own offspring (b/c>1) will favor eusociality; such benefits include 'life insurance' (helpers completing parental care after the death of the mother) and 'fortress defense' (helping to use or defend a food source when opportunities for successful migration are low). Later in the evolutionary trajectories of eusocial animals--once the workers have lost the ability to mate and realize full reproductive potential themselves, and have generally specialized into a division of labor that yields a substantial b/c (an efficiency benefit for sibling-rearing large enough to outweigh siblings being less related to the worker than her own offspring would be)--some queens evolve the ability to mate with multiple males.


Spiteful behaviors--where c is positive (the behavior is costly to the actor) and b is negative (it is costly to the primary recipient)--can satisfy rb-c>0 only if the relatedness between the actor and recipient, r, is negative (negative relatedness means the recipient is less related to the actor than expected by chance). The indirect fitness benefit of spite is that secondary recipients, who are more closely related to the actor than the primary recipient is, experience reduced competition from the primary recipient harmed by the spiteful behavior. Spite is therefore altruism toward the secondary recipients: harming an individual is favored if it provides a benefit to closer relatives.


Some confusion about spite arose because certain behaviors were evaluated only with respect to direct fitness over the short term rather than over the lifetime of the actor; these include birds killing nestlings at neighboring nests and fish egg cannibalism (decreased competition for resources for the actor and/or the actor's offspring), mammalian infanticide, especially of juvenile males (decreased competition for the actor's offspring or mates), and human punishment or rejection of low offers in economic games (increased cooperation over the long term). All of these examples are selfish behaviors that are costly to the recipient but provide a benefit to the actor (c<0). The specific conditions required to favor evolutionary spite--population structures in which harming non-relatives is an efficient way of helping relatives--may be rare in general and unlikely in humans and other primates.


An example of spite can be found in the polyembryonic parasitoid wasps. A female wasp lays eggs on moth caterpillars, after which the wasp eggs divide asexually into many larvae that consume the growing caterpillar from the inside. Most larvae develop normally, but a fraction become soldier morphs. Developing as a soldier is costly to the actor (soldiers are sterile) and costly to the primary recipient (soldiers seek out and kill larvae that developed from the other eggs within the host); however, it benefits the soldier's clone-mates that developed from the same egg by freeing up resources (the caterpillar body) for their consumption.


From a theoretical perspective, spite is plausible if there is large variance in relatedness between competitors, kin discrimination (with harming behaviors aimed at individuals to whom the actor is relatively unrelated), and strong local competition so that harming the primary recipient provides appreciable benefits to the secondary recipients.  Local competition for resources typically selects for spite and against altruism; altruistic traits show a positive, monotonic relationship to relatedness, whereas spiteful traits show a domed relationship; kin discrimination is key for spite, whereas altruism can often evolve without kin discrimination when limited dispersal keeps relatives together.

As Hamilton pointed out, the indirect fitness benefits derived from altruism and spite require genetic relatedness per se, not kinship (i.e., genetic relatedness at the altruism locus, not genealogical relationship over the whole genome). This can be accomplished in two ways: by a gene or set of tightly linked genes that both cause the cooperative behavior and cause cooperators to associate (coined "greenbeards" by Dawkins), or by genealogical kinship. In the slime mold Dictyostelium discoideum, individuals with the csa gene adhere to each other in aggregation streams and cooperatively form fruiting bodies while excluding noncarriers of the gene. A spiteful greenbeard in fire ants, Solenopsis invicta, is the b allele of the Gp-9 gene, which enables workers to use odor to determine whether prospective queens also carry this allele, dismembering them if they do not.

There are four categories of greenbeards: altruistic and always expressed (obligate), altruistic and only expressed in response to presence of greenbeard in others (facultative), spiteful and obligate, spiteful and facultative.  For all cases except altruistic facultative, the greenbeard is selected against at low frequencies and only favored when it has established itself to a certain frequency. Population structure can solve this problem by keeping individuals with greenbeards together. Some models for altruism in humans implicitly invoke greenbeard mechanisms (suggesting altruistic individuals differ from non-altruistic individuals in some observable characteristic like smiling or tendency for punishment), which is only true if the greenbeard mechanism is encoded by the same gene or closely linked genes as those that lead to the altruism, otherwise falsebeards could too easily arise and the altruism (and its detection) would not be evolutionarily stable.


Microbes are ideal model organisms to look for new greenbeards because their asexual growth leads to extreme population structuring, the genotype is relatively simply linked to the phenotype and this simplicity may prevent decoupling between the greenbeards and falsebeards (cheats that displayed the signal without also performing the behavior), and genetic knockouts can be designed to aid in the detection of greenbeards.  

Another example of spite is the costly production and release of antimicrobial bacteriocins, toxins that can kill unrelated strains of the same species that lack the specific immunity gene. In some cases, cell death is required to release the bacteriocins into the environment, so the behavior is clearly costly to the actor. The bacteriocin production genes are genetically linked to the immunity genes, so close relatives both produce the toxin and are immune to it. When a bacterium does release its bacteriocin, it will thus only kill non-relatives and free up resources for its clone-mates.


In the Hawlena study, two natural populations of Xenorhabdus bacteria are carried by entomopathogenic nematodes, dispersing over a range of a few metres within these symbiotic hosts, and use bacteriocins as weapons. The authors found that genetic relatedness decreased and the probability of bacteriocin-mediated (i.e. spiteful) interactions increased with spatial distance between isolates. Measurements were taken at a scale ranging from 1 to 120 metres. Whilst this work has only been done on a relatively small scale and in one system, it is clearly important to test theoretical results with real systems and, fortunately, in this case, the experimental results support the theory.




References:

  1. Hamilton WD (1963) The evolution of altruistic behavior. American Naturalist 97:354-356.
  2. Wloch-Salamon DM, Geria D, Hoekstra RF, deVisser JAGM (2008) Effect of dispersal and nutrient availability on the competitive ability of toxin-producing yeast. Proc R Soc Lond B 275:535-541.
  3. Hawlena H, Bashey F, Lively CM (2010) The evolution of spite: population structure and bacteriocin-mediated antagonism in two natural populations of Xenorhabdus bacteria. Evolution.
  4. West SA, Gardner A (2010) Altruism, spite, and greenbeards. Science 327:1341-1344.

Debate on Link Between Long-Term Circadian Disruption and Cancer

Epidemiological studies have revealed that human night-shift workers show an increased risk of breast, colon, lung, endometrial and prostate cancer, hepatocellular carcinoma and non-Hodgkin's lymphoma.  Disruption of circadian rhythm increases spontaneous and carcinogen-induced mammary tumors in rodents. Loss of circadian rhythm is also associated with accelerated tumor growth in both rodents and human cancer patients. These findings raise the question of how circadian dysfunction increases the risk of cancers.  A new mechanism for how long-term disruption of circadian homeostasis can also increase your risk of developing cancer is currently being debated (Lee et al. 2010).  


Circadian rhythms in mammals are generated by an endogenous clock composed of a central clock located in the hypothalamic suprachiasmatic nucleus (SCN) and subordinate clocks in all peripheral tissues. The timing of peripheral oscillators is controlled by the SCN when food is available ad libitum. Time of feeding, as modulated by temporally restricted feeding, is a potent 'Zeitgeber' (synchronizer) for peripheral oscillators, with only a weak synchronizing influence on the SCN clockwork. When restricted feeding is coupled with caloric restriction, however, the timing of clock gene expression is altered within the SCN. The SCN clock responds to external cues--daily resetting of the phase of the clock by light stimuli and metabolic cues--and drives peripheral clocks via circadian output pathways. The components of the circadian timing system can be differentially synchronized according to distinct, sometimes conflicting, temporal (time of light exposure and feeding) and homeostatic (metabolic) cues. Both the central and peripheral clocks are operated by feedback loops of specific temporal expression patterns of circadian genes, including Bmal1, Clock, Period (Per1-3) and Cryptochrome (Cry1 and 2). Bmal1 and Clock encode bHLH-PAS transcription factors that heterodimerize and bind to E-boxes in gene promoters to activate Per and Cry transcription, whereas Per and Cry encode repressors of BMAL1/CLOCK. The alternating activation and suppression of the BMAL1-driven positive loop and the PER/CRY-controlled negative loop result in a circadian oscillation of the molecular clock, allowing clocks to run autonomously with their characteristic, near-24h period.
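
A minimal sketch of why such a transcription-translation negative feedback loop can oscillate at all, in the spirit of the BMAL1/CLOCK → PER/CRY → repression loop. This is a Goodwin-type toy model with made-up rate constants, not a calibrated clock model, so the period it produces is not tuned to 24 hours.

    # Goodwin-type negative feedback loop: mRNA -> protein -> repressor -| mRNA.
    import numpy as np

    def goodwin_step(m, p, r, dt):
        """One Euler step for mRNA m, cytoplasmic protein p, and nuclear repressor r."""
        dm = 1.0 / (1.0 + r ** 10) - 0.2 * m    # transcription, repressed by r, plus decay
        dp = m - 0.2 * p                         # translation and protein decay
        dr = p - 0.2 * r                         # repressor accumulation and decay
        return m + dm * dt, p + dp * dt, r + dr * dt

    m, p, r = 0.1, 0.1, 0.1
    dt, t_max = 0.01, 240.0
    trace = []
    for _ in range(int(t_max / dt)):
        m, p, r = goodwin_step(m, p, r, dt)
        trace.append(m)

    trace = np.array(trace)
    half = trace[len(trace) // 2:]               # discard the initial transient
    peaks = np.flatnonzero((half[1:-1] > half[:-2]) & (half[1:-1] > half[2:])) * dt
    print(f"sustained oscillation, period ~{np.diff(peaks).mean():.1f} time units")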


Cell proliferation in all rapidly renewing mammalian tissues follows a circadian rhythm (Matsuo et al. 2003) and is paced by both central and peripheral clocks. The central clock-controlled mitogenic signals simultaneously activate the cell cycle and peripheral clocks, leading to a circadian coupling of cell cycle and tumor suppressor gene expression. Thus these clock genes also function as tumor suppressors during cell cycle control. For example, BMAL1 suppresses the proto-oncogene c-myc but stimulates the tumor suppressor Wee1, CRY2 indirectly regulates the intra-S checkpoint, and PER1 directly interacts with ATM in response to γ-radiation in vitro. In mice, mutation of Per2 leads to deregulation of the DNA-damage response and increased neoplastic growth. In humans, deregulation or polymorphism of Per1, Per2, Cry2, Npas2 and Clock is associated with acute myelogenous leukemia, hepatocellular carcinoma, breast, lung, endometrial and pancreatic cancers, and non-Hodgkin's lymphoma.


Disruption of circadian rhythm in cell proliferation is frequently associated with tumor development and progression in mammals, due, at least in part, to loss of the homeostasis of cell cycle control. The central clock generates a robust circadian rhythm in SNS signaling via direct and indirect targeting of the presympathetic neurons located in the hypothalamic autonomic paraventricular nucleus. In vivo, the SNS controls all peripheral tissues by releasing the hormones epinephrine and norepinephrine, which target adrenergic receptors (ADRs) on the cell membrane. Norepinephrine is released directly from postganglionic sympathetic neurons, whereas epinephrine is released from preganglionic sympathetic neuron-controlled chromaffin cells located in the adrenal medulla. Disruption of circadian rhythm desynchronizes the central clock-SNS-peripheral clock axis, suppresses peripheral clock function and abolishes peripheral clock-dependent ATM activation, leading to Myc oncogenic activation and an increased incidence of tumors in wild-type mice. The authors identify a previously unknown molecular pathway that links disruption of circadian rhythm with oncogenesis and argue that tumor suppression in vivo is a clock-controlled physiological process rather than a non-clock function of a specific circadian gene. Using the central clock-SNS-peripheral clock axis as a model system, they propose that central clock-controlled SNS signaling generates coupled activation of AP1, the peripheral clock, and ATM. The activation of AP1 leads to Myc-induced cell cycle progression, while the activation of the peripheral clock inhibits Myc overexpression and is required for ATM activity. ATM then induces p53 to prevent Myc oncogenic signaling by blocking the p53-MDM2 interaction. Disruption of circadian rhythm desynchronizes this axis, which suppresses the peripheral clock and peripheral clock-dependent ATM-p53 signaling but has no effect on c-Myc activation. Together, these events lead to Myc oncogenic activation that promotes genomic instability and tumor development. Their model suggests that the circadian clock plays a dual role in cell cycle control, and that it suppresses tumor development by controlling the homeostasis, rather than the outright inhibition, of cell proliferation.


Robin McAllen argues that the evidence for SNS involvement is merely correlative rather than causative--since the endogenous measures used by the paper, catecholamine urine levels and UCP1 expression, are intimately involved in patterns of activity, body temperature and feeding, which also have circadian rhythms that are disrupted by clock gene knockouts and jetlag--and that the authors over-simplify the workings of the SNS--claiming that the specialized sympathetic nerves that innervate different body tissues can be treated as a single entity, bathing all tissues in uniform levels of catecholamine soup.  


The paper's authors counter that the SNS maintains many homeostatic functions in addition to the fight-or-flight response. Sympathetic tone to all tissues is low during the sleeping phase but increases before waking, coupled with increases in urine volume, heart rate and body temperature. Such sympathetic control provides one of the key mechanisms that couple various physiological processes with daily physical activity, and their studies show disruption of this control in response to circadian disruption in mice, not just in cultured cells. Finally, and most importantly, the sympathetic target genes found in their in vitro studies are expressed in all tissues following a robust circadian rhythm in vivo that is disrupted in response to SNS dysfunction. They demonstrate that this circadian activation of the p53 tumor suppressor in the thymus is lost in the absence of ATM, which itself is directly regulated by the clock. It is well established that loss of function in some of these genes, including Per, Atm and p53, promotes tumor development in mice. Thus, the authors claim that McAllen's reading--that their studies use a mitogenic function of catecholamines on cells in vitro to explain tumor promotion in vivo--is a misunderstanding.


Matsuo T, Yamaguchi S, Mitsui S, Emi A, Shimoda F, et al. (2003) Control mechanism of the circadian clock for timing of cell division in vivo. Science 302: 255–259. 


Lee S, Donehower LA, Herron AJ, Moore DD, Fu L.  (2010) Disrupting circadian homeostasis of sympathetic signaling promotes tumor development in mice.  PLoS One 2010 5(6):e10995.

How Exactly Do Bacteria Cope with Rapid Environmental Change?


Bacterial DNA replication is generally extremely accurate; however, spontaneous mutants may have increased fitness due to new beneficial proteins (traits) that may be selected for in a rapidly changing environment. The contribution of post-replication processes to genetic variation has not been examined rigorously, and thus transcriptional and translational fidelity (or the lack thereof...) has been underappreciated in bacterial selection; it may even be a built-in strategy to increase protein variation at the single-cell level and ensure bacterial robustness under rapid environmental change.


Using a new method for quantifying errors in gene expression at the single cell level in the bacterium Bacillus subtilis, Meyerovich and colleagues reveal that the transcription and translation machinery does not strictly follow the DNA code.  The new method relies on the mutation of a chromosomally encoded green fluorescent protein (GFP) reporter allele, containing frameshifts and premature stop codons, so that errors in gene expression result in the formation of GFP, which would then be observable via imaging of single cells in real time.  Using this method, the authors show that errors in decoding the DNA sequence occur around 1% of the time.  This error rate is at least ten times higher than previous estimates.  Furthermore, the frequency of errors increases markedly in response to certain environmental conditions such as nutrient deprivation (stationary phase), lower temperatures, and toxic accumulation.  The implications are that many individual protein molecules contain potentially significant variations from the encoded amino acid sequence, and that this could increase survival in fluctuating environments or in response to sudden stress.  Consistent with this increased protein plasticity for rapid adaptation, gene-expression errors could combine with a genetic mutation in one gene, allowing the organism to bypass the need to undergo two independent mutations simultaneously. It is unclear whether this error rate increase is due to energetic constraints--the bacteria can't afford error correcting mechanisms under such conditions--or if the bacterial genetic code is selected as a consensus sequence from which protein production generates useful variations.


For any organism, the error rate represents a compromise between the cost of dysfunctional proteins and the payoff of beneficial variants that lead to increased phenotypic heterogeneity. It is likely that the evolutionary pressure that shapes codon usage would allow different genes to be prone to unequal error rates according to their cellular function.


Visualizing high error levels during gene expression in living bacterial cells.
Meyerovich M, Mamou G, Ben-Yehuda S.  Proc Natl Acad Sci U S A 2010 Jun 22 107(25):11543-8

The Bare Skin Hypothesis




A paraphrase of the hypothesis offered by Professor Nina G. Jablonski:

Starting about 3 million years ago, Earth entered a phase of global cooling that had a drying effect in East and Central Africa, where our human ancestors lived. The decline in regular rainfall changed woodlands into open savanna grasslands. The dwindling resources of fruits, leaves, tubers and seeds, as well as drinking water, forced our ancestors to abandon leisurely foraging habits for the sustained activity of walking and running many miles to stay hydrated and obtain enough calories. Around this time, hominids also began incorporating meat into their diet, as revealed by the appearance of stone tools and butchered animal bones around 2.6 million years ago.

Homo ergaster evolved essentially modern body proportions that would have permitted prolonged walking and running, and details of the joint surfaces of the ankle, knee and hip make clear that these hominids actually exerted themselves in this way. The increase in walking and running builds up heat internally in the muscles and would have required that hominids both enhance their eccrine sweating ability (2-5 million watery glands close to the skin surface that can produce up to 12 liters of sweat a day, rather than the oily apocrine and sebaceous glands associated with deeper hair follicles; all of these develop from the same unspecialized epidermal stem cells) and lose their body hair to avoid overheating in the hot open savannas. This combination of naked skin and watery sweat that sits directly atop it, rather than collecting in fur, allows humans to eliminate excess heat very efficiently. For furry animals, the effectiveness of cooling diminishes as the coat becomes wet and matted with thick, oily sweat. Under conditions of duress, heat transfer is inefficient (evaporation occurs at the tips of the fur rather than at the surface of the skin), requiring that the animal drink large amounts of water, which may not be readily available; if it is not, the animal will collapse from heat exhaustion. The human cooling system is so superior that in a marathon on a hot day, a human could outcompete a horse.

MC1R is one of the genes responsible for producing skin pigmentation. A specific variant always found in Africans with dark pigmentation originated roughly 1.2 million years ago. Early human ancestors are believed to have had pinkish skin covered with black fur, much like chimps, so the evolution of permanently dark skin was a presumed requisite evolutionary follow-up to the loss of our sun-shielding body hair.

Comparison of human and chimp DNA reveals that one of the most significant differences is in the genes that code for proteins controlling properties of the skin (waterproofness, scuff-resistance). The outermost skin layer--the stratum corneum (SC) of the epidermis--is composed of flattened, brick-like dead cells--corneocytes--which contain a unique combination of proteins, including novel types of keratin and involucrin, and are surrounded by ultrathin layers of lipids that act like mortar. Most genes directing SC development are ancient and highly conserved among vertebrates, so the human mutations signify that they were important to survival.

Maintenance of hair in the armpits and groin despite its loss elsewhere most likely serves to propagate pheromones (chemicals that elicit behavioral responses from other individuals) and to help keep these areas lubricated during locomotion. Hair on the head was most likely retained to help shield against excess heat on the top of the head (a barrier layer of air between the sweating scalp and the hot surface of the hair, with tightly curled hair being optimal for maximizing the thickness of this airspace). Other hair types and body types evolved as humans dispersed out of tropical Africa.


Daniel E. Lieberman and Dennis M. Bramble. (2007) The Evolution of Marathon Running: Capabilities in Humans.  Sports Medicine 37(4-5): 288-290.

Alan R. Rogers, D. Iltis, S. Wooding. (2004) Genetic Variation at the MC1R Locus and the Time since Loss of Human Body Hair.  Current Anthropology, 45(1): 105-108.