Neurons do not operate in isolation. They are embedded in circuits of often staggering complexity, in which each cell is influenced by—and influences—many others via thousands of input and output connections. How this multitude of interactions supports intelligent behaviour is the central problem in neuroscience.
Half a century ago, biology faced a similar problem of how complexity springs from simple physical and chemical mechanisms. Then, the central question was the nature of the genetic code: how do genes specify biological function? The solution to this problem ushered in the golden age of molecular biology. Many of the same factors that aligned for molecular biology in the 1960s are aligning again, this time in neuroscience.
First, questions of causality as well as correlation can now be addressed, as genetically encoded agents provide the means to turn specific cell types in functioning circuits on or off at will.
Second, there is now sufficient analytical resolution to reveal the actions of the relevant circuit elements—neurons and synapses—during behaviour.
Third, there is progress at the mezzanine level between cells and systems. The operations of multicellular systems, like those of single cells, are most clearly revealed in controlled settings stripped of extraneous complexities, or in genetically tractable organisms in which precisely defined elements can be altered one at a time. Such systems have historically led to the discovery of all the principal processes determining the behaviour of individual cells. The key to understanding multicellular processes will similarly be found in experimental systems that preserve core principles, yet allow a high degree of experimental control, including the capacity to probe inside and outside of normal physiological limits. The requisite precision is now attainable through molecular manipulations.
The research interests of CNCB scientists fall into four interconnected areas where general principles will likely be found. These principles are expected to hold for many different circuits in many different species.
The anatomical fine structure across a macroscopic brain region is often remarkably constant. For example, Mountcastle noted that a histological slide of neocortex offers few clues as to whether the section was obtained from an auditory, language, or somatosensory area. This has suggested that the nervous system is built from innumerable copies of a limited number of circuits—generalized processing architectures running generalized algorithms. Neither the structure of these circuits in different parts of the brain, nor the nature of the algorithms they implement, nor indeed the catalogue of such ‘canonical’ circuits is currently known.
Actions are sequential and sensory signals are time-dependent. Yet the neural representation of time is poorly understood, as is the nature of the working memory upon which the processing of temporal sequences depends. The problem of decision-making illustrates both: the information necessary to make a choice is rarely available all at once but must be gathered over time. Two interconnected problems are where information is accumulated and in what form it is stored. In all likelihood, this form of memory resides in the dynamic state of a circuit, not in its synaptic weights.
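The idea of gathering information over time until a choice can be made is often formalised as a drift-diffusion model. The sketch below is a minimal illustration of that general scheme, not a model drawn from the text itself; the parameter values (drift, noise, threshold) are arbitrary assumptions chosen for demonstration.

```python
import random

def drift_diffusion_trial(drift=0.1, noise=1.0, threshold=10.0, seed=0):
    """Accumulate noisy evidence until a decision bound is crossed.

    Returns (choice, time): choice is +1 or -1 depending on which
    bound the accumulated evidence reaches first; time is the number
    of steps taken, i.e. how long the decision took.
    """
    rng = random.Random(seed)
    evidence = 0.0
    t = 0
    while abs(evidence) < threshold:
        # Each time step adds a small signal (drift) plus Gaussian noise;
        # the running total is the circuit's dynamic memory of the input.
        evidence += drift + rng.gauss(0.0, noise)
        t += 1
    return (1 if evidence > 0 else -1), t
```

Because the decision variable lives in the running total rather than in any fixed parameter, the sketch mirrors the claim above that this form of memory is held in a circuit's dynamic state rather than its synaptic weights.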
Nervous systems transform sensory signals and internal states into actions. Learning is a higher-order process by which this transformation is altered, producing different actions from the same stimulus. Computational strategies, such as reinforcement learning, make it possible to estimate an organism’s internal state from its recent history and allow predictions about the frequency with which particular choices will be made. Consistent with such models, neurons encoding predictors of action choices or commands for specific motor programmes have been found in many species. What remains unknown is how the activity of these neurons themselves is controlled.
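A reinforcement-learning model of the kind mentioned above can be sketched as a simple value-learning agent on a two-armed bandit: a prediction-error rule updates the value estimates, and a softmax rule converts those values into choice probabilities. This is a generic textbook construction, not the specific model used by any scientist named here; the learning rate, softmax temperature, and reward probabilities are illustrative assumptions.

```python
import math
import random

def softmax(qs, beta=2.0):
    """Convert action values into choice probabilities."""
    exps = [math.exp(beta * q) for q in qs]
    total = sum(exps)
    return [e / total for e in exps]

def run_bandit(reward_probs, trials=500, alpha=0.1, seed=1):
    """Value learning on a two-armed bandit.

    The value estimates q track recent reward history, and the softmax
    over q predicts the frequency with which each option is chosen.
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)
    choices = []
    for _ in range(trials):
        probs = softmax(q)
        # Sample an action according to the current choice probabilities.
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        q[a] += alpha * (r - q[a])  # prediction-error update
        choices.append(a)
    return q, choices
```

Fitting a model like this to an animal's choice sequence yields a trial-by-trial estimate of its internal values, which can then be compared against the activity of neurons encoding predictors of action choice.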
Stimuli that capture our attention in some circumstances are ignored in others. Visual responses, for example, are powerfully modulated by contextual and task-specific information. Little is known about how attentional feedback controls sensory microcircuits. Neural information flow is gated by other neural signals, just as transistors are turned on and off by the same kind of signal they control. Even small networks of cells can theoretically exist in an astronomical number of states. Information processing must somehow select particular circuit states out of this seemingly limitless array of alternatives. Understanding the operation of circuits thus requires answers to two questions: What are the allowed states? And how are transitions between these states regulated?
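Two of the points above can be made concrete with elementary arithmetic and a toy gate. A network of n neurons, each treated (crudely) as binary, has 2**n possible activity states, so even 60 cells admit over 10**18 states; and the transistor analogy amounts to one signal deciding whether another is passed on. The snippet below is only an illustration of that counting argument and analogy, not a circuit model from the text.

```python
def gated_relay(signal, gate):
    """Gating sketch: a second neural signal determines whether the
    first is transmitted, analogous to a transistor's gate terminal."""
    return signal if gate else 0.0

# Counting argument: n binary neurons admit 2**n activity states.
# 2**60 = 1152921504606846976, i.e. over 10**18 states for 60 cells.
n_states = 2 ** 60
```

The gap between this astronomical state count and the handful of states a circuit actually visits is exactly why the two closing questions matter: which states are allowed, and what regulates the transitions between them.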