By Eric Kandel, John D. Koester, Sarah H. Mack, and Steven Siegelbaum ⋅ November 09, 2021
As the German neuroscientist Olaf Sporns has put it: "Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding." Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.
Matthew Cobb
Part I: Overall Perspective
A first step towards understanding the brain is to learn how neurons are organized into signaling pathways and how they communicate by synaptic transmission.
The specificity of the synaptic connections established during development and refined by experience underlies behavior.
Chapter 1: The Brain and Behavior
The last frontier of biological science is to understand the biological basis of the mind: the brain.
The current challenge is to unify psychology, the science of the mind, with neural science, the science of the brain.
We assume that all behavior is the result of brain function.
How do the billions of nerve cells in the brain produce behavior and cognition?
What is the appropriate level of biological description to understand a thought, the movement of a limb, or the desire to make the movement?
The appropriate level depends on the goal. Certain levels have more explanatory power than others.
The goal of modern neural science is to integrate all of these specialized levels into a unified science.
As we’ll see, questions about the levels of organization, specialization of cells, and localization of function recur throughout neural science.
Review of the debate between Golgi and Cajal over the neuron doctrine, the principle that individual neurons are the elementary building blocks of the nervous system rather than a continuous web of tissue like blood vessels.
Review of the history of neural science.
Six major brain structures
Medulla oblongata: directly rostral to the spinal cord and is responsible for vital autonomic functions.
E.g. Digestion, breathing, and heart rate.
Pons: conveys information about movement from the cerebral hemispheres to the cerebellum.
Cerebellum: behind the pons, modulates the force and range of movement, and is involved in the learning of motor skills.
Midbrain: rostral to the pons and controls many sensory and motor functions.
E.g. Eye movement and the coordination of visual and auditory reflexes.
Diencephalon: rostral to the midbrain and contains the thalamus and hypothalamus.
Thalamus: processes most of the information reaching the cerebral cortex from the rest of the central nervous system.
Hypothalamus: regulates autonomic, endocrine, and visceral functions.
Cerebrum: comprises two cerebral hemispheres and the basal ganglia, hippocampus, and amygdala.
Basal ganglia: regulate the execution of movement and motor and habit learning.
Hippocampus: critical for the memory of people, places, things, and events.
Amygdala: coordinates the autonomic and endocrine responses of emotional states.
Each of these structures is made up of distinct groups of neurons with distinct connectivity and developmental origins.
E.g. In the medulla, pons, midbrain, and diencephalon, neurons are often grouped into distinct clusters termed nuclei.
E.g. The surfaces of the cerebrum and cerebellum are large, layered, folded sheets of neurons called the cerebral cortex and the cerebellar cortex, respectively.
The cerebrum also has a number of structures located below the cortex (subcortical).
E.g. Basal ganglia and amygdala.
The first strong evidence for localization of cognitive abilities came from studies of language disorders.
Review of Broca’s and Wernicke’s work.
The most basic mental functions, such as perception and movement, are mediated entirely by neurons in discrete local areas of the cortex.
However, more complex cognitive functions, such as language and memory, result from interconnections between several functional sites.
In other words, basic mental functions are localized while complex mental functions are distributed.
The power of Wernicke’s model wasn’t only its completeness but also its predictive utility.
E.g. It correctly predicted a third type of aphasia (conduction aphasia), one that results from a disconnection between the two language areas.
A given function may not be eliminated by a single lesion if it’s a complex and thus distributed function.
Functional specialization is a key organizing principle in the cerebral cortex.
Who would’ve guessed that the neural analysis of the movement and color of an object occurs in different pathways rather than a single pathway unified by the percept of the object?
Similarly, the neural organization of language might not conform exactly to the axioms described by a theory of universal grammar, but it could still support the functionality described by it.
Mental processes are the product of interactions between elementary processing units in the brain.
We now think that all cognitive abilities result from the interaction of many processing mechanisms distributed in several regions of the brain.
E.g. Perception, movement, language, thought, and memory are made possible by the integration of serial and parallel processing in discrete brain regions.
So, damage to a single region doesn’t result in the complete loss of a cognitive function as many earlier neurologists believed.
Instead of thinking of mental functions as a chain of nerve cells and brain areas, we should think of them as many parallel pathways in a network of modules that ultimately converge upon a common set of targets.
Thus, malfunction of a single pathway within a network may affect the information carried by that pathway without disrupting the entire system.
Our experience isn’t a faithful guide to how such processes occur in the brain.
E.g. Recalling the concept ‘apple’ doesn’t help us understand how the concept was recalled.
At present, there’s no satisfactory theory that explains why only some of the information that reaches our eyes leads to a state of subjective awareness, while other information doesn’t.
Highlights
Neural science: to understand the brain at multiple levels of organization.
E.g. From cell to circuit to operations of the mind.
The fundamental principles of neural science bridge levels of time, complexity, and state.
E.g. From cell to action and ideation, from development to learning to expertise and forgetting, from normal function to neurological deficits and recovery.
Neuron doctrine: that individual nerve cells (neurons) are the elementary building blocks and signaling elements of the nervous system.
Neurons are organized into circuits with specialized functions, which integrate to form more complex cognitive functions.
No area of the cerebral cortex functions independently of other cortical and subcortical structures.
Chapter 2: Genes and Behavior
All behaviors are shaped by the interplay between genes, environment, and culture.
Genes don’t directly control behavior but they do code for proteins and RNAs that act at different times and at many levels to affect the brain.
This chapter asks how genes contribute to behavior.
E.g. Brain development, genetic modifications in other animals, and genetic risk factors in neurodevelopmental and psychiatric syndromes.
Heritability: the extent to which genetic factors account for traits in a population.
Review of DNA, genes, transcription, translation, exons, and introns.
The brain expresses a greater number of genes than any other organ in the body, and within the brain, diverse populations of neurons express different groups of genes.
Genes not only specify the initial development and properties of the nervous system, they can also be changed by experience.
Review of alleles, genotype (genetic makeup), phenotype (appearance), recessive and dominant.
Completion of the Human Genome Project in 2001 led to a surprising conclusion: the uniqueness of the human species didn’t result from the invention of unique human genes.
E.g. Humans and chimpanzees share 99% of their protein-coding genes.
The conclusion is that ancient genes that humans share with other animals are regulated in new ways to produce new human adaptations.
Because of this conservation of genes throughout evolution, insights from studies of one animal can often be applied to other animals with related genes.
We have a mostly complete picture of the genetic basis of the circadian control of behavior.
The core of circadian regulation is an intrinsic biological clock that oscillates over a 24-hour cycle and persists in the absence of light.
A group of genes, not one gene, are conserved regulators of the circadian clock.
Molecules such as protein kinases are particularly significant at transforming short-term neural signals into long-term changes in the property of a neuron or circuit.
Examples of links between genetics and behavior
The ‘for’ gene regulates activity level and locomotion in Drosophila and in bees, producing either ‘sitter’ or ‘rover’ behavioral variants.
The ‘npr-1’ gene encodes a neuropeptide receptor that regulates social behavior in C. elegans.
Other neuropeptides have been implicated in the regulation of mammalian social behavior such as oxytocin and vasopressin in prairie voles.
The gene mutated in phenylketonuria (PKU), when it interacts with dietary protein, causes intellectual disability.
Genetic links between autism spectrum disorder and Williams syndrome further support the idea that the domains of cognitive and behavioral functioning are distinct but may share important molecular mechanisms.
Skipping the rest of the chapter on genetics applied to autism spectrum disorders and schizophrenia due to disinterest.
Highlights
Rare genetic syndromes have provided important insights into the molecular mechanisms of complex human behaviors.
E.g. Fragile X syndrome, Rett syndrome, and Williams syndrome.
Sequencing the human genome, the development of high-throughput genomic assays, and simultaneous computing and methodological advances have led to profound changes in our understanding of the genetics of human behavior and psychiatric illness.
E.g. Schizophrenia and autism.
Chapter 3: Nerve Cells, Neural Circuitry, and Behavior
The brain both gathers and discards a lot of information.
Neurons are the basic signaling units of the brain.
The human brain has at least 86 billion neurons that can be classified into at least a thousand different types.
Yet this great variety of neurons is less of a contributor to the complexity of human behavior than their organization into anatomical circuits with precise functions.
E.g. Similar neurons can produce different actions because of the way they’re connected.
Five basic features of the nervous system
Structural components of individual neurons.
Mechanisms behind neurons producing signals within themselves and between each other.
Pattern of connection between neurons and between neurons and their targets.
Relationship of different patterns of interconnection to different types of behavior.
How neurons and their connections are modified by experience.
Two main classes of cells in the nervous system
Neurons (nerve cells): the signaling units of the nervous system.
Glia (glial cells): supports neurons.
Review of soma, dendrites, axon, presynaptic terminals, and action potential (AP).
The axon typically extends some distance away from the cell body before it branches, allowing it to carry signals to many target neurons.
The amplitude of an AP remains constant at about 100 mV because it’s an all-or-none impulse that’s regenerated at regular intervals along the axon.
APs are the signals that the brain uses to receive, analyze, and convey information.
E.g. The APs that convey information about vision are identical to those that carry odor information.
Since all APs are the same, the type of information conveyed by an AP isn’t determined by the form of the signal, but by the pathway the signal travels in the brain.
Thus, the brain analyzes and interprets patterns of incoming electrical signals carried over specific pathways, and creates our sensations of sight, touch, taste, smell, and sound.
Review of myelin sheath, nodes of Ranvier, synapse, synaptic cleft, pre- and post-synaptic neuron and terminal, excitation and inhibition.
Principle of dynamic polarization: electrical signals within a neuron flow in only one direction, from the dendrites and cell body to the axon.
In most neurons studied to date, electrical signals only travel in one direction in the axon.
Connectional specificity: neurons don’t randomly connect with each other but make specific connections with certain postsynaptic target cells and not others.
The feature that most distinguishes one type of neuron from another is form, specifically the number and arrangement of the processes (dendrites and axons) arising from the cell body.
E.g. Unipolar, bipolar, and multipolar neurons.
Unipolar neurons show up in our autonomic nervous system, bipolar neurons show up in sensory organs, and multipolar neurons dominate the nervous system of vertebrates.
Multipolar cells vary greatly in shape.
E.g. Length of axons, extent, dimensions, and intricacy of dendritic branching.
Usually, the extent of branching correlates with the number of synaptic contacts that other neurons make onto them.
E.g. A spinal motor neuron can receive 10,000 contacts while a Purkinje cell can receive as many as a million contacts.
Review of sensory neurons, motor neurons, interneurons, afferent (toward the CNS), and efferent (away from the CNS).
Interneurons are the most numerous and are subdivided into two classes
Relay/Projection: long axons to convey signals over long distances, from one brain region to another.
Local: short axons that form connections with nearby neurons in local circuits.
Glia surround cell bodies, axons, and dendrites of neurons and differ morphologically from neurons in that they don’t form dendrites and axons.
Glia also differ functionally as they don’t have the same membrane properties as neurons and aren’t electrically excitable.
Every behavior is mediated by a specific set of interconnected neurons, and every neuron’s behavioral function is determined by its connections with other neurons.
Review of the knee-jerk reflex, divergence, convergence, feedforward and feedback inhibition, and resting membrane potential (-65 mV).
Signaling is organized in the same way in all neurons
A receptive component for producing graded input signals.
A summing or integrative component that produces a trigger signal.
A conducting long-range signaling component that produces all-or-none conducting signals.
A synaptic component that produces output signals to the next neuron or muscle or gland.
Receptor potential: a change in membrane potential at a sensory receptor.
The amplitude and duration of a receptor potential depend on the intensity of the signal.
Unlike APs, receptor potentials are graded and are either depolarizing or hyperpolarizing.
The receptor potential is the first representation of a stimulus to be coded in the nervous system.
However, since the receptor potential isn’t regenerated like an AP and spreads passively, it doesn’t travel far.
To be successfully carried to the spinal cord, the local signal must be amplified and it must be converted into APs.
Synaptic potential: when a neurotransmitter alters the membrane potential of the postsynaptic cell.
Like the receptor potential, the synaptic potential is graded and its amplitude depends on how much transmitter is released.
Signals from dendrites are integrated at the axon trigger zone where the activity of all receptor or synaptic potentials is summed and, if the sum reaches threshold, where the neuron generates an AP.
APs carried into the nervous system by a sensory axon are often indistinguishable from those carried out of the nervous system to muscles.
Two features of AP trains
Number of APs (counters)
Time intervals between them (timers)
What determines the intensity of sensation or speed of movement is the frequency of APs.
What determines the duration of sensation or movement is the period of time that APs are generated.
The pattern of APs also conveys important information.
E.g. Spontaneous, regularly active neurons (beating) and brief bursts of APs (bursting).
If APs are stereotyped and only reflect the most elementary properties of the stimulus, then how do they carry the rich variety of information needed for complex behavior?
The answer is simple and is one of the most important organizational principles of the nervous system.
Interconnected neurons form anatomically and functionally distinct pathways and it’s these pathways of connected neurons, not individual neurons, that convey information.
E.g. The neural pathways activated by receptor cells in the retina that respond to light are completely distinct from the pathways activated by sensory cells in the skin that respond to touch.
Review of neurotransmitters, synaptic vesicles, active zones, and exocytosis.
The released neurotransmitters are the neuron’s output signal and are graded according to the amount of transmitter released, which is controlled by the number and frequency of APs that reach the presynaptic terminals.
The more the receptor potential exceeds the threshold, the greater the depolarization and the greater the frequency of APs.
The duration of the input signal also determines the duration of the train of APs.
The model of neuronal signaling that we’ve outlined is a simplification that applies to most, but not all, neurons.
E.g. Some neurons don’t generate APs and instead only use graded potentials to release neurotransmitter. Spontaneously active neurons don’t require input to fire APs because they have a special class of ion channels that allows sodium ion flow even in the absence of excitatory synaptic input.
Neurons can have different ion channel combinations and use different neurotransmitters.
Because the nervous system has so many cell types and variations at the molecular level, it’s susceptible to more diseases than any other organ in the body.
Despite this complexity, the molecular mechanisms of electrical signaling are surprisingly similar across neurons, which aids the understanding of how signaling occurs: understanding signaling in one neuron goes a long way toward understanding it in all.
For complex behavior, many neurons are needed but the basic neural structure of the simple reflex is often preserved.
Basic neural structure of a reflex
There’s often an identifiable group of neurons whose firing rate changes in response to a particular type of environmental stimulus.
There’s often an identifiable group of neurons whose firing rate changes before an animal performs a motor action.
Learning can change behavior that endures for years or even a lifetime, but simple reflexes can also be modified, albeit for a shorter amount of time.
The fact that behavior can be modified by learning at all raises the question: How can behavior be modified if the nervous system is wired so precisely?
We don’t have a clear answer, but the most likely solution is the plasticity hypothesis.
Plasticity hypothesis: the nervous system changes in response to stimuli. Changes can occur at all levels of the nervous system, from molecules to pathways, circuits, and whole regions.
There’s now considerable evidence for plasticity at chemical synapses.
Highlights
Neurons are the signaling units of the nervous system. The signals are mainly electrical within the cell and chemical between cells.
Neurons share common features such as receptors, mechanism to convert input to electrical signals, a threshold mechanism to generate APs, APs, and neurotransmitters.
Neurons differ in their morphology or shape, the connections they make, and where they make them.
Glia support neurons such as being the myelin sheath that speeds up APs or by cleaning up used neurotransmitters.
Neural connections can be modified by experience.
Chapter 4: The Neuroanatomical Bases by Which Neural Circuits Mediate Behavior
The brain can accomplish complex feats of perception and motion because its neurons are wired together in very precise functional circuits.
At a gross level, the brain is hierarchically organized such that information processed at one level is passed to higher-level circuits for more complex and refined processing.
In essence, the brain is a network of networks.
This chapter explores how neural circuits enable the brain to process sensory input and produce motor output.
Different circuits in the brain have evolved an organization to most efficiently carry out specific functions.
Information carried by long pathways, such as the corpus callosum, integrates the output of many local circuits.
Different types of information, even within a single sensory modality, are processed in several anatomically discrete pathways.
E.g. In the somatosensory system, a light touch and a painful pin prick to the same area of skin are mediated by different sensory receptors that connect to distinct pathways.
Fibers that relay information from different parts of the body maintain an orderly relationship to each other and form a map of body surface in their pattern of termination at each synaptic relay.
This is called a topographical representation since locations that are close on the body are also represented as close in the nervous system.
Review of the spinal cord, four major regions (cervical, thoracic, lumbar, and sacral).
The rostral (head) spinal cord has an increasing proportion of ascending and descending axons, while the caudal (tail) spinal cord has an increasing proportion of gray matter.
Another organizational feature of the spinal cord is its variation in the size of the ventral and dorsal horns.
The ventral horn is larger at the levels where the motor nerves innervate the arms and legs, as the number of ventral motor neurons dedicated to a body region roughly parallels the dexterity of movements of that region.
Similarly, the dorsal horn is larger where sensory nerves from the limbs enter the cord, as limbs have a greater density of sensory receptors to mediate finer tactile discrimination.
Two types of somatosensory pathways
Epicritic: fine touch and proprioception (limb position).
Protopathic: pain and temperature.
Neurons that make up neural circuits at any particular level are often connected in a systematic way and appear similar from individual to individual.
E.g. In the cervical cord, axons from all parts of the body have already entered, with sensory fibers from the lower body located medially in the dorsal column, while fibers from the mid and upper body occupying more lateral areas.
Each somatic submodality (touch, pain, temperature, and position) is processed in the brain through a different pathway ending in a different brain region.
The thalamus isn’t just a relay, but it acts as a gatekeeper for information to the cerebral cortex, preventing or enhancing the passage of specific information depending on the behavioral state of the organism.
The cerebral cortex has feedback projections that terminate in a special part of the thalamus called the thalamic reticular nucleus (TRN).
The TRN forms a thin sheet around the thalamus and is made up almost completely of inhibitory neurons that synapse onto the relay cells. It doesn’t project to the neocortex at all.
So, the TRN can modulate the response of relay cells to incoming sensory information.
Four groups of thalamic nuclei
Anterior
Connected to the hypothalamus and presubiculum of the hippocampal formation.
Function is uncertain.
Medial
Three subdivisions, each connected to a particular part of the frontal cortex.
Function in memory and emotional processing.
Ventrolateral
Function in motor control, carrying information from the basal ganglia and cerebellum to the motor cortex, carrying somatosensory information to the neocortex, and carrying information from the spinal cord.
Posterior
Function in audition (organized tonotopically based on sound frequency) and in vision.
Most nuclei of the thalamus receive prominent return projections from the cerebral cortex, and the significance of these projections is one of the unsolved mysteries of neuroscience.
Sensory information processing culminates in the cerebral cortex.
Review of the sensory and motor homunculus and how those regions can expand/shrink with experience.
The amount of cortical area dedicated to a part of the body reflects the density of sensory receptors and degree of motor control of that part.
E.g. Lips and hands occupy more area of the cortex than the elbow.
Six layers of the neocortex
Layer I (molecular): made up of dendrites of neurons located in deeper layers, along with axons that travel through this layer to make connections with other cortical areas.
Layer II (external granule): made up of small spherical neurons and small pyramidal neurons.
Layer III (external pyramidal): made up of small pyramidal neurons that project locally to other neurons within the same cortical area and to other cortical areas, mediating intracortical communication.
Layer IV (internal granule): made up of many small spherical neurons and is the main recipient of sensory input from the thalamus. Most prominent in primary sensory areas.
Layer V (internal pyramidal): made up of many pyramidal neurons that are larger than those in layer III. Gives rise to major output pathways of the cortex, projecting to other cortical areas and to subcortical structures.
Layer VI (multiform): made up of neurons of heterogeneous shapes. Blends into the white matter that forms the deep limit of the cortex and carries axons to and from cortical areas.
The thickness of individual layers and the details of their functional organization vary throughout the cortex.
Within the neocortex, information passes from one synaptic relay to another using feedforward and feedback connections.
E.g. Feedforward projections from the primary visual cortex to secondary and tertiary visual areas mainly originate in layer III and terminate in layer IV. In contrast, feedback projections to earlier stages of processing originate in layers V and VI and terminate in layers I, II, and VI.
The cerebral cortex is organized functionally into columns of cells that extend from the white matter to the surface of the cortex, also called cortical columns.
This columnar organization isn’t particularly evident in standard histological preparations and was first discovered in electrophysiological studies.
Each column is about one-third of a millimeter in diameter and forms a computational module with a highly specialized function.
Neurons within a column tend to have very similar response properties and the larger the area of cortex dedicated to a function, the more computational columns it dedicates to that function.
E.g. The highly discriminative sense of touch in the fingers results from many cortical columns in the large area of cortex dedicated to processing somatosensory information.
Another important insight about the neocortex is that the somatosensory cortex has not one, but several somatotopic maps of the body surface.
E.g. The primary somatosensory cortex has four complete maps of the skin.
The axons from layer V neurons in the primary motor cortex provide the major output of the neocortex to control movement.
The output may be through direct projections to the corticospinal tract or through indirect projections to the medulla and basal ganglia.
E.g. Out of the one million corticospinal tract axons, about 40% originate in the motor cortex.
In the medulla, these fibers form prominent bumps on the ventral surface called the medullary pyramids, so the entire projection is sometimes called the pyramidal tract.
Review of somatic, autonomic, sympathetic, parasympathetic.
Memory is a complex behavior mediated by structures distinct from those that carry out sensation or movement.
Regardless of the behavior, the general principle is that the structure of a circuit is specific to the function that it mediates.
The hippocampal formation is the structure that mediates memory and mostly consists of unidirectional connections.
While the hippocampal formation is essential for the initial formation of memories, these memories are ultimately stored elsewhere in the brain.
E.g. In patient H.M., removal of his hippocampal system left memories prior to the surgery mostly intact.
Highlights
Individual neurons can’t carry out behavior on their own; they must be part of circuits comprising different types of neurons.
Sensory and motor information is processed separately and in parallel in the brain.
All sensory and motor systems follow the pattern of hierarchical and reciprocal processing of information, while the hippocampal memory system follows serial processing of very complex, polysensory information.
A general principle is that circuits in the brain have an organizational structure that’s suited for the functions that they carry out.
Binding problem: how sensation is integrated into a conscious experience and how conscious experience emerges from the brain’s analysis of incoming sensory information.
Chapter 5: The Computational Bases of Neural Circuits That Mediate Behavior
This chapter introduces ideas, techniques, and approaches used to characterize and interpret the activity of neural populations and circuits.
Neural firing patterns provide a code for information on external sensory stimuli and internal muscle movement.
The structure of a neural representation plays an important role in how information is further processed by the nervous system.
The sequence of APs fired by a neuron in response to a sensory stimulus represents how that stimulus changes over time.
Neural coding seeks to understand both the stimulus features that drive a neuron to respond, and the temporal structure of the response and its relationship to changes in the external world.
Sensory neurons encode information by firing APs in response to sensory features.
Brain areas must correctly interpret the meaning of AP sequences that they receive from sensory areas to respond properly.
Decoding: the process of extracting information from neural activity.
E.g. Recording APs and inferring what the animal or human is seeing or hearing.
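To make the idea concrete, here is a minimal decoding sketch, assuming a hypothetical population of eight direction-tuned neurons with cosine tuning; the firing rates and the population-vector decoder are illustrative textbook devices, not an analysis from this chapter.

```python
import numpy as np

# Hypothetical example: decode a movement/stimulus direction from the firing
# rates of eight direction-tuned neurons using a population vector.
preferred_dirs = np.deg2rad(np.arange(0, 360, 45))   # preferred directions of 8 neurons
true_dir = np.deg2rad(70)                             # the direction actually presented

# Cosine tuning: each neuron fires most when the stimulus matches its preference.
rates = 20 + 15 * np.cos(true_dir - preferred_dirs)   # firing rates in spikes/s (illustrative)

# Population vector: sum unit vectors along each preferred direction,
# weighted by that neuron's firing rate, then read off the angle.
x = np.sum(rates * np.cos(preferred_dirs))
y = np.sum(rates * np.sin(preferred_dirs))
decoded_dir = np.degrees(np.arctan2(y, x))

print(f"decoded direction = {decoded_dir:.1f} degrees")   # close to 70
```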
Review of place and grid cells in the hippocampus.
During active exploration of an environment, hippocampal activity reflects place coding, but during immobile or resting behavior, the hippocampus enters a different state in which neural activity is instead dominated by discrete semi-synchronous population bursts called sharp-wave ripples.
It’s hypothesized that these sharp-wave ripples are internally generated by the hippocampus.
From the lowest to highest stages of visual processing, neurons have increasingly larger receptive fields and higher degrees of selectivity.
Recurrent circuitry underlies sustained activity and integration.
If a neuron’s response decays within a few tens of milliseconds, then how do patterns of neural activity persist long enough to support cognitive operations such as memory or decision making?
Integration requires both computation and memory to compute and maintain a running total.
For a neural circuit to perform integration, a transient (short-lived) input must produce activity that’s sustained at a constant level even after the input is gone.
Thus, the sustained activity provides a memory of the transient input.
One of the best studied neural integrators is the circuitry that allows animals to maintain constant gaze direction with their eyes.
Lesions or inactivation of two brain stem nuclei, the medial vestibular nucleus and the nucleus prepositus hypoglossi, result in a failure to maintain steady horizontal eye position following eye movements, suggesting that the neural integrator circuit lies within these structures.
How do neural circuits perform integration?
One possibility is that neurons are wired such that a neuron’s output is fed back as one of its inputs (a recurrent connection); if the recurrent excitation precisely cancels the decay, then the response can last indefinitely.
However, eye position in the dark tends to drift back to the center after about 20 seconds, suggesting that the neural integrator isn’t tuned perfectly. If it was, then eye position wouldn’t drift at all.
In short, although much has been learned about how integration could be implemented, the actual details of the network architecture that support integration remain unknown.
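A minimal sketch of the recurrent-integrator idea under simple assumptions: a single firing-rate unit that decays with a short time constant but excites itself through a recurrent weight. When the feedback exactly cancels the decay, the unit integrates and holds a transient input; when the feedback is slightly mistuned, the stored value drifts, much like eye position drifting in the dark. All parameters are illustrative.

```python
import numpy as np

def run_unit(w_recurrent, tau=0.1, dt=0.001, t_end=5.0):
    """Single rate unit: tau * dr/dt = -r + w_recurrent * r + input(t)."""
    steps = int(t_end / dt)
    r = 0.0
    trace = np.empty(steps)
    for i in range(steps):
        inp = 1.0 if i * dt < 0.05 else 0.0          # brief transient input
        drdt = (-r + w_recurrent * r + inp) / tau
        r += dt * drdt
        trace[i] = r
    return trace

perfect = run_unit(w_recurrent=1.00)   # feedback exactly cancels decay: activity holds
leaky   = run_unit(w_recurrent=0.98)   # slightly mistuned: stored value drifts back toward zero

print(perfect[-1], leaky[-1])          # held value vs. drifted value after 5 s
```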
Experience can modify neural circuits to support memory and learning.
Multiple forms of plasticity have been identified and each of these presumably supports a different form of learning.
Review of unsupervised, supervised, reinforcement learning, and Hebbian learning.
Hebbian plasticity provides a way for neurons to determine and extract the most interesting signal carried by inputs.
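One standard formalization of this idea (a sketch using Oja's variant of the Hebbian rule, not necessarily the book's own example): a single linear neuron that strengthens synapses when input and output are active together ends up with weights aligned to the direction of greatest variance in its inputs, i.e., the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated two-dimensional inputs: most variance lies along the [1, 1] direction.
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
inputs = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(scale=0.1, size=2)   # initial synaptic weights
eta = 0.01                          # learning rate

for x in inputs:
    y = w @ x                       # postsynaptic activity (linear neuron)
    # Oja's rule: Hebbian term (y * x) plus a decay that keeps the weights bounded.
    w += eta * y * (x - y * w)

print(w / np.linalg.norm(w))        # roughly [0.707, 0.707], the first principal component
```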
The eyeblink conditioning paradigm is an example of how synaptic plasticity in the cerebellum plays a key role in motor learning.
This paradigm provides a concrete example of how neural circuits can mediate learning through trial and error.
E.g. Purkinje cells integrate signals related to both the external world and internal state of the animal (conveyed by granule cells), with highly specific information about errors or unexpected events (conveyed by climbing fibers). The climbing fiber acts as a teacher, weakening previously active synapses that could have contributed to errors.
These changes in synaptic strength alter the firing patterns of Purkinje cells and, by virtue of specific wiring patterns, alter behavior such that errors are gradually reduced.
Highlights
Neural coding: how stimulus features or actions are represented by neuronal activity.
Neural circuits are highly interconnected and there are a few basic motifs used to characterize their functions and modes of operation.
E.g. Feedforward and feedback.
Levels of neural activity must often be maintained for many seconds to minutes. One mechanism is networks of recurrent excitation.
Synaptic plasticity supports longer-lasting changes in neural circuits that underlie learning and memory.
E.g. Hebbian plasticity can extract interesting signals without the need for supervision/teacher.
Chapter 6: Imaging and Behavior
This chapter focuses on fMRI.
Benefits of using fMRI
Non-invasive.
Can measure brain function over short periods of time (seconds).
Measures activity across the whole brain simultaneously.
fMRI experiments measure neurovascular activity, specifically changes in local blood oxygen levels that occur in response to neural activity.
Review of the physics of MRI and the BOLD signal.
Drawbacks of using fMRI
Unclear whether BOLD is more closely tied to the firing of individual neurons or populations of neurons.
Difficult to distinguish whether increased blood oxygenation is caused by increases in local excitation or inhibition.
The mechanisms of neurovascular coupling (how the brain knows when and where to deliver oxygenated blood) remain mysterious.
fMRI has utility as a tool to localize changes in neural activity in the human brain induced by mental operations.
Five basic fMRI preprocessing steps
Motion correction: addresses head movements causing misaligned data.
Slice-time correction: addresses differences in timing of the acquisition of samples across different slices.
Temporal filtering: removes components of the time course that are highly likely to be noise.
Spatial smoothing: applies a kernel to blur individual volumes, averaging out noise and improving alignment.
Anatomical alignment: registers data across runs and subjects to a structural scan and then a standard template such as the Montreal Neurological Institute (MNI) or Talairach space.
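A minimal sketch of two of these steps (spatial smoothing and temporal filtering) applied to a synthetic 4D BOLD array; real pipelines use dedicated tools such as FSL, SPM, or fMRIPrep, so the array shape, smoothing sigmas, and drift-removal approach here are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

# Synthetic BOLD data: (x, y, z, time) = 32 x 32 x 16 voxels over 100 volumes.
rng = np.random.default_rng(1)
bold = rng.normal(loc=1000.0, scale=10.0, size=(32, 32, 16, 100))

# Spatial smoothing: blur each volume with a 3D Gaussian kernel (sigma in voxels),
# averaging out voxel-level noise; the time axis (last) is left untouched (sigma=0).
smoothed = gaussian_filter(bold, sigma=(1.5, 1.5, 1.5, 0.0))

# Temporal filtering: estimate slow drift with a wide Gaussian along the time axis
# and subtract it, removing low-frequency scanner/physiological noise while keeping
# each voxel's mean signal level.
drift = gaussian_filter1d(smoothed, sigma=25.0, axis=-1)
filtered = smoothed - drift + smoothed.mean(axis=-1, keepdims=True)

print(filtered.shape)   # (32, 32, 16, 100): same grid, cleaned time courses
```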
Three insights gained from fMRI studies
Have inspired neurophysiological studies in animals such as the location of face processing.
Have challenged theories from cognitive psychology and systems neuroscience such as the location of memory and the role of the hippocampus.
Have tested predictions from animal studies and computational models such as reinforcement learning models.
Highlights
Functional brain imaging seeks to record activity in the human brain associated with mental processes as they unfold.
The link between BOLD activity and behavior is inferred through a series of preprocessing steps and statistical analyses.
This has led to fundamental insights about how the human brain processes faces, how memories are stored and retrieved, and how we learn from trial and error. Across these domains, data from fMRI have converged with neuronal recordings and theoretical predictions.
fMRI records brain activity but doesn’t directly modify activity. So it doesn’t support inferences about whether a region is necessary for a behavior, but rather whether the region is involved in that behavior.
Part II: Cell and Molecular Biology of Cells of the Nervous System
In all biological systems, the basic building block is the cell.
Complex biological systems have another basic feature: they are architectonic, meaning that their anatomy, structure, and dynamic properties all reflect a specific physiological function.
Four key features of neurons
Polarized. This restricts the flow of voltage impulses to one direction.
Electrically excitable. Its cell membrane contains specialized proteins, ion channels and receptors, that allow for the movement of ions, thus creating electrical currents that generate voltage across the membrane.
Neurotransmitters and synapse machinery.
Cytoskeletal structure. Enables the efficient transport of various proteins, mRNAs, and organelles between compartments.
Chapter 7: The Cells of the Nervous System
Neurons and glia share many characteristics with cells in general.
However, neurons are special in their ability to communicate precisely and rapidly with other cells at distant sites in the body.
Two unique features of neurons
High degree of morphological and functional asymmetry. This arrangement is the structural basis for unidirectional neuronal signaling.
E.g. Dendrites and axon.
Both electrically and chemically excitable.
E.g. Ion channels and receptors.
Two classes of glia
Macroglia
E.g. Oligodendrocytes, Schwann cells, and astrocytes.
Microglia: the brain’s resident immune cells and phagocytes.
In the human brain, about 90% of all glial cells are macroglia. Of these, about half are myelin-producing cells (oligodendrocytes and Schwann cells) and half are astrocytes.
Oligodendrocytes provide the insulating myelin sheath of axons in the CNS, while Schwann cells myelinate axons in the PNS.
Nonmyelinating Schwann cells promote the development, maintenance, and repair of the neuromuscular synapse, while astrocytes support neurons and modulate neuronal signaling.
Neurons and glia develop from common neuroepithelial progenitors and share many structural characteristics.
Skimming over the organelles of a neuron.
In contrast to the continuity of the cell body and dendrites, a sharp functional boundary exists between the cell body and the axon called the axon hillock.
The organelles that make up the main machinery for protein synthesis are generally excluded from axons.
E.g. Ribosomes, rough endoplasmic reticulum, and the Golgi complexes.
However, axons are rich in smooth endoplasmic reticulum, synaptic vesicles, and their precursor membranes.
A cell’s cytoskeleton determines its shape and is responsible for the asymmetric distribution of organelles within the cytoplasm.
Three parts of the cytoskeleton
Microtubules: long scaffolds from one end of a neuron to the other and play a key role in developing and maintaining cell shape.
Neurofilaments: the bones of the cytoskeleton.
Microfilaments: the thinnest of the three main types of fibers.
Like microtubules, microfilaments undergo cycles of polymerization and depolymerization.
The dynamic state of microtubules and microfilaments allows a mature neuron to retract old axons and dendrites and to extend new ones.
This structural plasticity is thought to be a major factor in changes of synaptic connections and efficacy, therefore it’s a part of the cellular mechanisms of long-term memory and learning.
Microtubules are arranged in parallel in the axon with plus ends pointing away from the cell body and minus ends facing the cell body.
This allows some organelles to move towards and others to move away from nerve endings.
Because axons and terminals often lie at great distances from the cell body, such as over 10,000 times the cell body diameter in leg motor neurons, sustaining the function of these remote regions presents a challenge.
E.g. How are nutrients, proteins, and molecules transported to the axon terminal?
Membrane and secretory products formed in the cell body must be actively transported to the end of the axon.
Two types of axoplasmic flow
Fast axonal transport: membranous organelles move toward axon terminals (anterograde) and back (retrograde) at speeds of up to 400 mm per day.
Slow axonal transport: cytosolic and cytoskeletal proteins only move toward axon terminals at speeds of 0.2 to 2.5 mm per day.
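A quick back-of-the-envelope calculation using the speeds above shows why the difference matters; the 1 m axon length is an assumed figure for a long human motor axon.

```python
axon_length_mm = 1000          # ~1 m axon, e.g. a long human motor axon (assumed)

fast_mm_per_day = 400          # fast axonal transport (membranous organelles)
slow_mm_per_day = 1            # slow axonal transport (within the 0.2-2.5 mm/day range)

print(axon_length_mm / fast_mm_per_day, "days by fast transport")          # 2.5 days
print(axon_length_mm / slow_mm_per_day / 365, "years by slow transport")   # roughly 2.7 years
```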
Microtubules provide a stationary track on which specific organelles can be moved by molecular motors.
Fast retrograde transport also delivers signals that regulate gene expression in the neuron’s nucleus.
E.g. Activated growth factor receptors at nerve endings are taken up into vesicles and transported back along the axon to the nucleus. This informs the gene transcription apparatus and can result in nerve regeneration and axon regrowth.
Retrograde fast transport is about one-half to two-thirds the speed of anterograde fast transport.
Skipping over protein synthesis details in neurons.
CNS myelin is similar but not identical to PNS myelin.
E.g. One Schwann cell produces a single myelin sheath for one segment of one axon, while one oligodendrocyte produces myelin sheaths for segments of as many as 30 axons.
The number of myelin layers on an axon is proportional to the diameter of the axon.
E.g. Larger axons have thicker sheaths, while small axons aren’t myelinated.
Astrocytes play important roles in nourishing neurons and in regulating the concentrations of ions and neurotransmitters in the extracellular space.
E.g. Astrocytes express many of the same voltage-gated ion channels and neurotransmitter receptors that neurons do so they may receive and transmit signals that could affect neuronal excitability and synaptic function.
How do astrocytes regulate axonal conduction and synaptic activity?
One way is by acting as a spatial buffer. When neurons fire, they release potassium ions into the extracellular space and astrocytes take up the excess ions and release it at distant contacts with blood vessels.
Astrocytes also regulate neurotransmitter concentrations in the brain.
E.g. Clearing glutamate from the synaptic cleft by ingesting and converting it into glutamine. Glutamine is then transferred to neurons, where it serves as an immediate precursor of glutamate.
Astrocytes also degrade dopamine, norepinephrine, epinephrine, and serotonin.
An increase in free calcium ions within one astrocyte increases calcium ion concentrations in adjacent astrocytes, which leads to a calcium ion wave that propagates through the astrocyte network, enhancing synaptic function and behavior.
Astrocyte-neuron signaling contributes to normal neuronal circuit functioning.
Astrocytes are also important for the development of synapses.
E.g. They secrete synaptogenic factors that promote the formation of new synapses, and can remodel and eliminate excess synapses by phagocytosis, thus contributing to learning and memory.
Unlike neurons, astrocytes, and oligodendrocytes, microglia are poorly understood.
During development, microglia help sculpt developing neural circuits by engulfing pre- and post-synaptic structures.
Highlights
The morphology or structure of neurons is elegantly suited to receive, conduct, and transmit information in the brain.
E.g. Dendrites and axons.
Neurons in different locations differ in the complexity of their dendritic trees, axon branching, and number of synaptic terminals. The functional significance of these morphological differences is evident.
E.g. Motor neurons must have a more complex dendritic tree than sensory neurons because controlling muscles requires the integration of many inputs.
Different types of neurons use different neurotransmitters, ion channels, and neurotransmitter receptors. All of these contribute to the great complexity of information processing in the brain.
Neurons are among the most highly polarized cells in our body.
The cytoskeleton provides an important framework for the transport of organelles to different intracellular locations in addition to controlling axonal and dendritic morphology.
All of these fundamental cell biological processes are modifiable by neuronal activity, providing the mechanisms behind how neural circuits adapt to experience (learning).
The nervous system contains several types of glial cells.
E.g. Oligodendrocytes and Schwann cells produce myelin insulation that enable axons to conduct electrical signals rapidly. Astrocytes and nonmyelinating Schwann cells cover other parts of the neuron, mainly synapses.
E.g. Astrocytes also control extracellular ion and neurotransmitter concentrations and actively participate in the formation and function of synapses.
E.g. Microglia resident immune cells have diverse roles in health and disease.
The cells in the choroid plexus and ependymal layer contribute to CSF production, composition, and dynamics.
Chapter 8: Ion Channels
Signaling in the brain depends on the ability of neurons to respond to very small changes in stimuli with rapid and large changes in the electrical potential difference across the cell membrane.
E.g. Retinal neurons respond to a single photon of light, olfactory neurons detect a single odorant molecule, and hair cells in the inner ear respond to tiny movements of atomic dimensions.
The rapid changes in membrane potential are mediated by specialized pores or openings in the membrane called ion channels.
Ion channel: a class of membrane proteins found in all cells of the body that respond to specific physical and chemical signals.
Since ion channels play key roles in electrical signaling, malfunctioning of such channels can cause a wide variety of neurological diseases.
E.g. Cystic fibrosis and certain types of cardiac arrhythmia.
Thus, ion channels have crucial roles in both the normal physiology and pathophysiology of the nervous system.
Also crucial for neurons are proteins specialized for moving ions across cell membranes called ion pumps.
Ion pumps don’t participate in rapid neuronal signaling but rather are important for establishing and maintaining the concentration gradients of physiologically important ions between the inside and outside of the cell.
Three important properties of ion channels
They recognize and select specific ions.
They open and close in response to specific electrical, chemical, or mechanical signals.
They conduct ions across the membrane.
Up to 100 million ions can pass through a single channel each second, which causes the rapid changes in membrane potential required for signaling.
A key to the great versatility of neuronal signaling is the regulated activation of different classes of ion channels, each of which is selective for specific ions.
E.g. Voltage-gated channels controlled by changes in membrane potential, ligand-gated channels controlled by the binding of chemical transmitters, and mechanically-gated channels controlled by membrane stretch.
With only passive ion movement using ion channels, the ion concentration gradient would eventually dissipate were it not for ion pumps.
Different types of ion pumps maintain the concentration gradients for sodium, potassium, calcium, and other ions.
Two features that distinguish ion pumps from ion channels
The rate of ion flow through pumps is 100 to 100,000 times slower than through channels.
Pumps use energy in the form of ATP to transport ions against their electrical and chemical gradients.
Resting channels and pumps generate the resting potential, voltage-gated channels generate the action potential, and ligand-gated channels produce synaptic potentials.
The lipid bilayer that makes up the cell membrane is hydrophobic and uncharged, which makes it almost impermeable to charged ions.
This is why cells have ion channels: they provide a path across the membrane that lets specific ions in or out.
It’s currently hypothesized that ion channels are selective both because of specific chemical interactions and because of molecular sieving based on pore diameter.
Most cells are capable of local signaling, but only nerve and muscle cells are specialized for rapid signaling over long distances.
All cells share ion channels with several functional characteristics, and neurons are no exception.
The rapid rate at which an ion unbinds from a channel is necessary to achieve the very high conduction rates responsible for the rapid changes in membrane potential during signaling.
The opening and closing of a channel involves conformational changes, and each channel has two or more conformational states.
Three types of conformational changes
Change in one region
Change in general structure
Blocking particle
Regulators can control the entry of a channel into one of three states
Resting: closed and activatable.
Inactive/Refractory: closed and not activatable.
Active: open.
A change in the functional state of a channel requires energy.
E.g. Changes in membrane potential, change in chemical free energy from transmitter binding, or mechanical energy from distortion of the lipid bilayer.
Stimuli that gate the channel also control the rates of transition between the open and closed states of a channel.
For voltage-gated channels, the rates are dependent on membrane potential. Once a channel opens, it stays open for a few milliseconds, and after closing, it stays closed for a few milliseconds.
Once the transition between open and closed states begins, it proceeds nearly instantaneously, thus giving rise to the abrupt, all-or-none, step-like changes in current through the channel.
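A minimal sketch of this all-or-none gating, simulating a hypothetical two-state (closed/open) channel in small time steps; the rate constants and single-channel current are illustrative, not measured values.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.01                  # time step, ms
steps = int(50 / dt)       # simulate 50 ms
k_open = 0.2               # closed -> open rate, per ms (illustrative)
k_close = 0.5              # open -> closed rate, per ms (illustrative)
i_open = -2.0              # single-channel current when open, pA (illustrative)

is_open = False
current = np.zeros(steps)
for t in range(steps):
    # Transition probabilities for one small time step.
    if is_open:
        if rng.random() < k_close * dt:
            is_open = False
    else:
        if rng.random() < k_open * dt:
            is_open = True
    current[t] = i_open if is_open else 0.0

# The trace jumps abruptly between 0 pA and -2 pA: step-like, all-or-none openings
# lasting a few milliseconds each (mean open time = 1/k_close = 2 ms here).
print("fraction of time open:", np.mean(current != 0.0))   # near k_open/(k_open+k_close)
```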
Ligand-gated and voltage-gated channels enter refractory states through different mechanisms.
Ligand-gated channels enter the refractory state with prolonged exposure to the agonist, a process called desensitization (an intrinsic property of the interaction between ligand and channel).
Voltage-gated channels enter a refractory state after opening, a process called inactivation.
Antagonist molecules can interfere with normal gating by binding to the same site at which the endogenous agonist normally binds, preventing the channel from opening and blocking access of the agonist to the binding site.
Skimming over the structure of ion channels.
The snug fit between potassium ion channels and potassium ions helps explain the unusually high selectivity of these channels compared to other ion channels.
E.g. Many channels have pore diameters significantly wider than the principal permeating ion, which contributes to a lower degree of selectivity.
Highlights
Ions cross cell membranes through two main classes of membrane proteins: ion channels and ion pumps/transporters.
Most ion channels are selectively permeable to certain ions and this is determined by the part of the channel pore called the selectivity filter. The selectivity filter filters based on ion charge, size, and physicochemical interactions.
Ion channels have gates that open and close in response to different signals and the gates control an ion channel’s three possible states: open, closed, and inactivated.
Various types of ion channels are differentially expressed in different types of neurons and in different regions of neurons, contributing to the functional complexity and computational power of the nervous system.
Active transport, which is mediated by ion pumps, enables ions to move across the membrane against their electrochemical gradient. The driving force comes either from chemical energy in the form of ATP or from an electrochemical potential difference.
Chapter 9: Membrane Potential and the Passive Electrical Properties of the Neuron
Two types of ion channels
Resting
Gated
Resting channels are mostly important for maintaining the resting membrane potential, the electrical potential across the membrane in the absence of signaling.
Some resting channels are always open, while others are gated by changes in voltage but are also open at the negative resting potential.
In contrast, most voltage-gated channels are closed when the membrane is at rest and require membrane depolarization to open.
The resting membrane potential of neurons comes from the separation of charge across the cell membrane.
At rest, the extracellular surface of the membrane has an excess of positive charge, and the cytoplasmic (intracellular) surface has an excess of negative charge. This separation is maintained because the lipid bilayer is impermeable to ions.
Membrane potential: a difference of electrical potential across the membrane.
By convention, the potential outside the cell is defined as zero.
The resting membrane potential is typically between -60 and -70 mV.
All electrical signaling involves brief changes to the resting membrane potential caused by electric currents across the cell membrane.
Depolarization: a decrease in charge separation leading to a more positive membrane potential.
Hyperpolarization: an increase in charge separation leading to a more negative membrane potential.
Electrotonic potentials: changes in membrane potential that don’t lead to the opening of gated ion channels.
Hyperpolarizing responses and small depolarizations are almost always passive and don’t trigger an active response in the cell.
However, when depolarization approaches a critical threshold, the cell responds by opening voltage-gated ion channels, which produces an all-or-none action potential (AP).
Sodium and chloride ions are concentrated outside the cell, while potassium and organic anions are concentrated inside.
Ions are subject to two forces driving them across the membrane
Chemical driving force: a function of the concentration gradient across the membrane.
Electrical driving force: a function of the electrical potential difference across the membrane.
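These two forces balance at the Nernst (equilibrium) potential for each ion, E = (RT/zF) ln([ion]out / [ion]in). A minimal calculation using typical mammalian textbook concentrations (the concentration values are illustrative assumptions, not figures from this chapter):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst(conc_out_mM, conc_in_mM, valence):
    """Equilibrium potential (in mV) at which electrical and chemical forces balance."""
    return 1000 * (R * T) / (valence * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian concentrations (mM), illustrative values:
print(f"E_K  = {nernst(4, 140, +1):6.1f} mV")    # about -95 mV
print(f"E_Na = {nernst(145, 12, +1):6.1f} mV")   # about +67 mV
print(f"E_Cl = {nernst(110, 10, -1):6.1f} mV")   # about -64 mV
```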
Glia are only permeable to potassium ions while neurons are permeable to potassium, sodium, and chloride ions.
Dissipation of ionic gradients is prevented by the sodium-potassium pump, which moves sodium and potassium ions against their electrochemical gradients.
The sodium-potassium pump expels sodium from the cell and admits potassium using ATP.
At the resting membrane potential, the cell isn’t in equilibrium but rather in a steady state.
Skimming over the math behind the resting potential and equivalent circuit for the resting membrane potential.
Generally, axons with the largest diameter have the lowest threshold for excitation.
Neurons that convey different types of information often differ in axon diameter and conduction velocity.
The nodes of Ranvier are packed with voltage-gated sodium ion channels and periodically boost the amplitude of the AP, thus preventing it from decaying with distance.
Highlights
At rest, the passive fluxes of ions into and out of cells are balanced, resulting in charge separation and the resting membrane potential.
The ion permeability of the cell membrane is proportional to the number of open channels that allow ions to pass.
Changes in membrane potential that generate neuronal electrical signals (action potentials, synaptic potentials, and receptor potentials) are caused by changes in the membrane’s relative permeability to potassium, chloride, sodium, and calcium ions.
Ion pumps maintain the resting membrane potential by using ATP to exchange internal sodium ions for external potassium ions.
For pathways where fast signaling is important, conduction of APs is enhanced by myelination of the axon, increasing axon diameter, or both.
Chapter 10: Propagated Signaling: The Action Potential
Neurons can carry electrical signals over long distances because the AP is continually regenerated and thus, isn’t attenuated/decayed as it moves down the axon.
Four important properties of APs
Is only initiated when the membrane potential reaches a threshold.
Is an all-or-none event.
Is conducted without decay as it has a self-regenerative feature that keeps the amplitude constant.
Is followed by a refractory period where the neuron’s ability to fire a second AP is suppressed.
The refractory period limits the frequency at which neurons can fire APs and thus limits the information-carrying capacity of the axon.
These four properties are unusual for biological processes, which typically respond in a graded manner to changes in the environment.
The AP is generated by the flow of ions through voltage-gated channels.
The AP can be reconstructed from the properties of sodium and potassium channels.
The Hodgkin and Huxley mathematical model of the AP almost perfectly matches the experimentally recorded AP.
According to the model, an AP involves the following sequence of events
Depolarization of the membrane causes sodium ion channels to open, resulting in an inward sodium current.
This current depolarizes the membrane, causing more sodium channels to open resulting in the rising phase of the AP.
The depolarization gradually inactivates the voltage-gated sodium channels and, with some delay, opens the voltage-gated potassium channels.
Although each step in the model is gradual, the all-or-none character of the AP arises from the runaway effect of the voltage-gated sodium channels: depolarization opens sodium channels, which produces more depolarization, which opens more channels, and so on.
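A compact numerical sketch of this sequence using the standard published Hodgkin-Huxley squid-axon parameters; the forward-Euler integration, stimulus amplitude, and timing are illustrative choices, not values from the text.

```python
import math

# Hodgkin-Huxley squid-axon model integrated with forward Euler.
# Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2. Parameters are the standard published ones.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def rates(v):
    """Voltage-dependent opening (alpha) and closing (beta) rates for the m, h, n gates."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_stim=10.0, t_max=20.0, dt=0.01):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting values
    trace = []
    for step in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        i_na = G_NA * m**3 * h * (v - E_NA)      # m^3*h = fraction of open Na channels
        i_k = G_K * n**4 * (v - E_K)             # n^4   = fraction of open K channels
        i_leak = G_L * (v - E_L)
        i_inj = i_stim if step * dt >= 1.0 else 0.0   # step current switched on at 1 ms
        v += dt * (i_inj - i_na - i_k - i_leak) / C_M
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace

print(f"Peak membrane potential: {max(simulate()):.0f} mV")  # an all-or-none spike near +40 mV
```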
Review of the absolute and relative refractory period.
The squid axon that Hodgkin and Huxley studied was unusually simple in that it only expressed two types of voltage-gated ion channels in comparison to the mammalian brain’s dozens or more.
The great variety of voltage-gated channels in the membranes of most neurons enables a neuron to fire APs with a much greater range of frequencies and patterns than is possible in the squid axon, allowing for more complex information processing and control.
Skipping over the diversity of voltage-gated channels.
The electrical properties of different neurons have evolved to match the dynamic needs of information processing.
The function of a neuron isn’t only defined by its synaptic inputs and outputs, but also by its intrinsic excitability properties.
Different types of neurons in the mammalian nervous system generate APs that have different shapes and fire different patterns, reflecting different expression of voltage-gated channels.
E.g. Cerebellar Purkinje neurons have high levels of Kv3 channel expression resulting in narrow APs, while dopaminergic neurons have high levels of voltage-activated calcium channels resulting in broad APs.
The shape of the AP in a neuron isn’t always invariant, and can be dynamically regulated either intrinsically (repetitive firing) or extrinsically (synaptic modulation).
The input-output function of a neuron can be characterized by the frequency and pattern of AP firing in response to injected current and stimuli.
Some neurons can sustain repetitive firing at high frequencies up to 500 Hz.
E.g. Mammalian auditory neurons.
A surprisingly large number of neurons in the mammalian brain fire spontaneously in the absence of any synaptic input.
E.g. Many neurons that release modulatory neurotransmitters, such as dopamine, serotonin, norepinephrine, and acetylcholine, fire spontaneously, resulting in a constant release of transmitter.
Excitability properties vary between regions of the neuron.
E.g. The axon initial segment has the lowest threshold for AP generation, in part because it has an exceptionally high density of voltage-gated sodium channels.
These channels play a critical role in transforming graded synaptic or receptor potentials into a train of APs.
Dendrites in many neurons have voltage-gated ion channels to help shape the amplitude, time course, and propagation of synaptic potentials to the cell body.
In some neurons, the density of voltage-gated channels in dendrites is enough to support local APs. This may be used when the neuron generates an AP and it propagates back into the dendrites, serving as a signal to the synaptic regions that the cell has fired.
Highlights
An action potential (AP) is a transient depolarization of membrane voltage lasting about 1 ms, during which ions move across the cell membrane through voltage-gated channels.
In the depolarizing phase of the AP, sodium ions enter the cell. In the repolarizing phase, potassium ions leave the cell.
The sharp threshold for AP generation happens at a voltage when the inward sodium current just exceeds outward potassium current through leak channels and voltage-gated channels.
The refractory period reflects sodium channel inactivation and potassium channel activation after an AP. This limits the AP firing rate.
Most neurons express multiple kinds of voltage-gated ion channels, which reflects the expression of multiple gene products.
Activity of some voltage-gated ion channels can be modulated by cytoplasmic calcium ions.
The regional expression and functional state of ion channels can be regulated in response to cell activity, changes in cell environment, or pathological processes, resulting in plasticity of the intrinsic excitability of neurons.
Part III: Synaptic Transmission
In this part, we cover how neurons communicate with each other.
Three components of the synapse
Terminals of the presynaptic axon
Target on the postsynaptic cell
Zone of apposition
Two types of synapses
Electrical: when the presynaptic and postsynaptic cell are very close at regions called gap junctions and the current generated by an AP in the presynaptic neuron directly enters the postsynaptic cell.
Chemical: when the presynaptic and postsynaptic cell are separated by the synaptic cleft and transmitter diffuses across, binding to receptor molecules on the postsynaptic membrane.
Two types of transmitter receptors
Ionotropic: when transmitter binding directly opens an ion channel.
Metabotropic: when transmitter binding indirectly regulates a channel by activating second messengers.
Both types of receptors can result in excitation or inhibition. This doesn’t depend on the identity of the transmitter but on the properties of the receptor that the transmitter interacts with.
One key theme of this part and this book is the concept of plasticity. At all synapses, the strength of a synaptic connection isn’t fixed but can be modified in various ways by experience.
Chapter 11: Overview of Synaptic Transmission
The average neuron forms thousands of synaptic connections and receives a similar number of inputs.
E.g. Purkinje cells receive up to 100,000 synaptic inputs while granule neurons receive only four excitatory inputs.
Electrical synapses are mostly used to send rapid depolarizing signals, while chemical synapses are used to produce more variable signaling.
Chemical synaptic transmission is central to our understanding of the brain and behavior.
Electrical synapses are virtually instantaneous as the postsynaptic response follows the presynaptic stimulation in a fraction of a millisecond.
At an electrical synapse, when a weak depolarizing current is injected into the presynaptic neuron, some current enters the postsynaptic cell and depolarizes it.
In contrast, at a chemical synapse, a depolarizing current injected into the presynaptic neuron must reach threshold for the release of transmitter to elicit a response in the postsynaptic cell.
Most electrical synapses can transmit both depolarizing and hyperpolarizing currents.
The gap between the pre- and postsynaptic membranes at electrical synapses is small at around 4 nm, compared to the 20 nm of normal nonsynaptic space between neurons.
This narrow gap is bridged by gap-junction channels, specialized protein structures that conduct ionic current directly from the presynaptic to the postsynaptic cell.
The pore of gap-junction channels is large at 1.5 nm compared to the 0.3-0.5 nm diameter of ion-selective ligand-gated or voltage-gated channels. This means the channel doesn’t select among ions and is even wide enough to allow small organic molecules to pass through.
Electrical transmission allows rapid and synchronous firing of interconnected cells.
E.g. The tail-flip response in goldfish and the ink response in Aplysia.
Gap junctions are also formed between neurons and glia.
E.g. A wave of calcium ions in astrocytes can cause neurotransmitter release in neurons. The precise function of these waves is unknown.
The gap between the pre- and postsynaptic membranes at chemical synapses (the synaptic cleft) is wide at around 20-40 nm and is sometimes bigger than the normal nonsynaptic space between neurons.
Chemical synaptic transmission depends on a neurotransmitter.
Neurotransmitter: a chemical substance that diffuses across the synaptic cleft, binds to receptors, and activates receptors in the membrane of the target cell.
The neurotransmitter is released from specialized vesicles; a typical release involves around 100-200 synaptic vesicles, each containing several thousand molecules of neurotransmitter.
The release is triggered by an increase in intracellular calcium ions, which causes the vesicles to fuse with the presynaptic membrane and release neurotransmitter into the synaptic cleft. This process is called exocytosis.
The transmitter molecules then diffuse across the synaptic cleft and bind to receptors on the postsynaptic cell.
This activates the receptors, leading to the opening or closing of ion channels, changing the membrane potential of the postsynaptic cell.
These several steps account for the synaptic delay (around 1 ms or less) at chemical synapses.
What chemical transmission lacks in speed it makes up for in amplification.
E.g. A small presynaptic nerve terminal, which only generates a weak electrical current, can depolarize a large postsynaptic cell.
Two steps of chemical synaptic transmission
Transmitting: when the presynaptic cell releases a chemical messenger.
Receptive: when the transmitter binds to and activates the receptor molecules in the postsynaptic cell.
The transmitting step resembles endocrine hormone release as chemical synaptic transmission can be seen as a modified form of hormone secretion.
However, the important difference between endocrine and synaptic signaling is that endocrine signaling isn’t targeted as it travels throughout the body, whereas synaptic signaling is targeted and precise to which neurons receive the neurotransmitter.
Thus, chemical synaptic transmission is both fast and precise.
The action of a transmitter depends on the properties of the postsynaptic receptors that recognize and bind the transmitter, not the chemical properties of the transmitter.
E.g. ACh at neuromuscular junctions is excitatory and can trigger contraction, while at the heart is inhibitory and can slow it down.
Neurotransmitters control the opening of ion channels in the postsynaptic cell either directly or indirectly.
Ionotropic: a receptor that directly controls ion flux.
Metabotropic: a receptor that indirectly controls ion flux.
Ionotropic and metabotropic receptors have different functions: ionotropic receptors produce relatively fast synaptic actions lasting only milliseconds, while metabotropic receptors produce slower synaptic actions lasting hundreds of milliseconds to minutes.
Electrical and chemical synapses can coexist and interact with each other in the same neuron, with each modifying the other’s efficacy.
E.g. During development, many neurons are initially connected by electrical synapses that help form chemical synapses. As chemical synapses form, they often initiate the down-regulation of electrical transmission.
E.g. In the retina, bipolar neurons form chemical synapses with rods and cones while also receiving electrical synapses between neighboring bipolar cells and amacrine cells.
Highlights
Neurons communicate using electrical and chemical synaptic transmission.
Electrical synapses are formed at tight regions that provide a direct pathway for charge to flow between the cytoplasm of communicating neurons. This results in fast transmission suited for synchronizing the activity of populations of neurons.
Electrical synapses are connected through gap-junction channels.
Chemical synapses use chemical transmitters to transmit signals from the presynaptic cell to the postsynaptic cell.
Chemical transmission allows for amplification of the presynaptic AP through the release of tens of thousands of molecules of transmitter and activation of hundreds of thousands of receptors in the postsynaptic cell.
The effect of a neurotransmitter is determined by the postsynaptic receptors, not the molecule.
The two major classes of transmitter receptors are ionotropic and metabotropic.
Chapter 12: Directly Gated Transmission: The Nerve-Muscle Synapse
Neuromuscular junction / end-plate: the site of contact between nerve and muscle.
Synaptic boutons: the ends of axon branches.
When acetylcholine (ACh) is released into the end-plate, it rapidly binds to and opens the ACh receptor-channels in the end-plate membrane.
This results in a large excitatory post-synaptic potential (EPSP) of about 75 mV.
The combination of a very large EPSP and low threshold at the end-plate results in a high safety factor for triggering an AP in the muscle fiber.
Muscles probably require this high safety factor as they can’t be undecided on whether to contract given a signal. Evolution pushed for high reliability in muscles.
This contrasts with EPSPs in the CNS, which are usually less than 1 mV, so inputs from many presynaptic neurons are needed to generate an AP in most CNS neurons.
The end-plate current rises and decays more rapidly than the end-plate potential because it takes time for an ionic current to charge or discharge the muscle membrane capacitance, so the membrane voltage (EPSP) lags behind the synaptic current (EPSC).
Although individual ACh receptor-channels are noisy and undergo random thermal fluctuations, the average time a type of channel stays open is a well-defined property of that channel.
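One way to picture this (a toy simulation, not data from the text): if single-channel open durations are drawn from an exponential distribution, individual openings vary wildly, yet the mean open time converges to a well-defined value.

```python
import random

# Toy single-channel gating: open durations drawn from an exponential distribution.
# TAU_OPEN_MS is a hypothetical mean open time, not a measured value from the text.
TAU_OPEN_MS = 1.0

openings = [random.expovariate(1.0 / TAU_OPEN_MS) for _ in range(100_000)]

print(f"shortest opening: {min(openings):.4f} ms")   # individual events vary wildly
print(f"longest opening:  {max(openings):.2f} ms")
print(f"mean open time:   {sum(openings) / len(openings):.3f} ms")  # converges to ~1.0 ms
```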
Once a receptor-channel opens, what ions flow through the channel and how does this lead to depolarization?
ACh receptor-channels at the end-plate aren’t selective for any ion species except for cations, so sodium, potassium, and calcium ions can flow through the channel leading to depolarization.
Two main differences between ACh receptors (synaptic potential) and voltage-gated channels (action potential)
The AP is generated by sequential activation of two distinct classes of voltage-gated channels, one selective for sodium ions and the other for potassium ions.
The synaptic potential is only cation selective and allows both sodium and potassium ions to pass with near-equal permeability.
The AP regenerates itself as incoming sodium ions depolarize the membrane potential, causing more voltage-gated sodium channels to open.
The synaptic potential is graded with the amount of ACh released and isn’t self-regenerative.
If ACh is allowed to stay in the synaptic cleft for a long time, ACh receptors can become desensitized where they no longer conduct ions.
Skipping over the structure of the ACh receptor-channel.
Highlights
The terminals of motor neurons form synapses with muscle fibers at specialized regions in the muscle membrane called end-plates. When an AP reaches the terminal, it causes the release of ACh.
ACh diffuses across the synaptic cleft, binding to ACh receptors and causing an inflow of cations such as sodium, potassium, and calcium.
This inflow causes a large and local depolarization of around 75 mV, enough to exceed the threshold for AP generation by a factor of three to four.
This excessive depolarization is required to ensure that a neural signal is always converted into a muscle movement, which is essential for survival.
Chapter 13: Synaptic Integration in the Central Nervous System
Many principles that apply to the synaptic connection between the motor neuron and skeletal muscle fiber at the end-plate also apply in the CNS.
However, synaptic transmission between neurons in the CNS is more complex.
E.g. More inputs from thousands of neurons, both excitatory and inhibitory inputs, many types of neurotransmitters, and not all APs produce an AP in the postsynaptic neuron.
Typically, a depolarization of 10 mV or more is required to push a neuron past AP threshold.
The effect of a synaptic potential, excitatory or inhibitory, isn’t determined by the type of transmitter released but by the type of ion channels in the postsynaptic cell activated by the transmitter.
Some transmitters can produce both EPSPs and IPSPs, but most transmitters produce a single type of synaptic response.
E.g. Glutamate is typically excitatory while GABA is typically inhibitory.
We can determine whether a synaptic terminal is excitatory or inhibitory by its ultrastructure.
Two morphological types of synapses
Gray type I: glutamatergic and excitatory. Have round synaptic vesicles and contact dendritic spines.
Gray type II: GABAergic and inhibitory. Have oval or flattened vesicles and contact dendritic shaft, cell body, or axon.
Axon terminals are normally presynaptic and dendrites are normally postsynaptic, but any part of a neuron can be a presynaptic or postsynaptic site of chemical synapses.
The most common types of contact are axodendritic, axosomatic, and axoaxonic.
Excitatory synapses are typically axodendritic and occur mostly on dendritic spines.
Inhibitory synapses are typically found on dendritic shafts, cell body, and axon initial segment.
As a general rule, the proximity of a synapse to the axon initial segment is thought to determine its effectiveness.
E.g. The closer a synapse is to the axon initial segment, the greater its influence on AP output, because less of its synaptic current leaks away before reaching the trigger zone.
Some neurons compensate for this effect by having more receptors at distant synapses than at close synapses to ensure that inputs at different locations have more equal influence.
Most axoaxonic synapses don’t have a direct effect on the trigger zone of a neuron, but instead control the amount of transmitter released from the presynaptic terminals.
The three types of ionotropic glutamate receptors are NMDA, AMPA, and kainate.
Three major types of ionotropic receptors
ACh, GABA, and glycine
Glutamate
ATP
Skipping over the detailed structure of the glutamate receptor.
Most of the excitatory synapses in the mature nervous system have both NMDA and AMPA receptors, compared to in early development where synapses mostly have only NMDA receptor.
The NMDA receptor is unique among ligand-gated channels because its opening depends on both membrane voltage and transmitter binding.
This voltage dependence is caused by a mechanism that differs from that used by voltage-gated channels that generate the AP.
In NMDA receptors, depolarization doesn’t result in conformational changes in the channel but instead expels the magnesium ion that plugs the pore.
The NMDA receptor acts as a molecular coincidence detector, opening during the concurrent activation of both the presynaptic and postsynaptic cells.
Since most excitatory synapses have AMPA receptors that are capable of triggering an AP by themselves, why does the nervous system also use NMDA receptors?
NMDA receptors are special because they can control the inflow of calcium ions that activates various calcium-dependent signaling cascades. Thus, NMDA receptor activation can translate electrical signals into biochemical ones.
Some of these biochemical reactions lead to long-lasting changes in synaptic strength called long-term synaptic plasticity.
A brief but high-intensity and high-frequency synaptic stimulation leads to long-term potentiation (LTP).
Subsequent studies showed that LTP depends on an inflow of calcium ions through the NMDA receptor-channels, which open in response to the combined effect of glutamate release and strong postsynaptic depolarization.
The rise of intracellular calcium ions in the postsynaptic cell is thought to potentiate synaptic transmission by activating biochemical cascades that trigger the insertion of additional AMPA receptors into the postsynaptic membrane.
Interestingly, the calcium ion accumulation and biochemical activation is mostly restricted to the individual spines that are activated. So, LTP is input-specific as only the synapses activated are potentiated.
The prolonged high-frequency presynaptic firing required to induce LTP is unlikely to happen in normal conditions, but a more realistic stimulus and more relevant form of plasticity is spike-timing-dependent plasticity (STDP).
STDP can be induced if a single presynaptic stimulus is paired at low frequency with the triggered firing of one or more postsynaptic APs.
In STDP, the presynaptic activity must precede postsynaptic firing.
LTP, STDP, and related processes probably provide an important cellular mechanism for memory and learning.
Inhibitory synapses play an essential role in the nervous system by preventing too much excitation and by regulating the firing patterns of networks of neurons.
Most inhibitory neurons use neurotransmitters GABA and glycine.
Inhibition is achieved by the inflow of chloride ions through GABA and glycine receptor-channels.
Inhibitory actions counteract excitatory actions.
The IPSP results from an increase in chloride ion conductance because the equilibrium potential of chloride ions is -70 mV compared to the cell’s resting potential of -65 mV, causing hyperpolarization.
If the cell’s resting potential is -70 mV, then no hyperpolarization occurs but the inflow of chloride ions can help buffer against an EPSP by holding the membrane potential at -70 mV.
The opening of chloride ion channels increases the resting conductance, making the neuron leakier; the depolarization during an EPSP therefore decreases because the incoming excitatory current leaks back out. This is called short-circuiting or shunting.
Inhibition can exert powerful control over AP firing in neurons by changing the temporal patterning of neuronal spikes.
We can think of these synaptic effects as mathematical operations.
E.g. Inhibition as subtraction, shunting inhibition as division, excitation as addition, and excitatory input near the axon initial segment as multiplication.
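A rough way to see “shunting as division” (a toy steady-state conductance model; all values below are illustrative, not from the text): the membrane settles at the conductance-weighted average of the reversal potentials, so an inhibitory conductance whose reversal sits at the resting potential scales down the depolarization produced by a fixed excitatory input.

```python
# Toy steady-state membrane model: V = sum(g_i * E_i) / sum(g_i).
# Conductances (arbitrary units) and reversal potentials (mV) are illustrative.
E_LEAK, E_EXC, E_SHUNT = -65.0, 0.0, -65.0
G_LEAK = 1.0

def steady_state_v(g_exc, g_shunt=0.0):
    num = G_LEAK * E_LEAK + g_exc * E_EXC + g_shunt * E_SHUNT
    den = G_LEAK + g_exc + g_shunt
    return num / den

v_rest = steady_state_v(0.0)
epsp_alone = steady_state_v(0.2) - v_rest                 # excitation alone
epsp_shunted = steady_state_v(0.2, g_shunt=1.0) - v_rest  # same excitation plus a shunt

print(f"EPSP alone:   {epsp_alone:.2f} mV")    # ~10.8 mV
print(f"EPSP shunted: {epsp_shunted:.2f} mV")  # ~5.9 mV: the shunt roughly divides the EPSP
```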
In some cells, inhibition is caused not by the inflow of chloride ions, but by the opening of potassium channels.
The potassium ion equilibrium potential is -80 mV which is always negative to the resting potential, so opening potassium ion channels inhibits the cell.
The net effect of synaptic inputs on a neuron depends on several factors.
E.g. Location, size, and shape of the synapse, proximity and relative strength of other synapses, the resting potential of the cell, and the timing of excitatory and inhibitory inputs.
Neuronal integration: the coordination of inputs in the postsynaptic cell.
Neuronal integration reflects the task of the entire nervous system: to fire or not to fire.
This integration happens at the axon initial segment where ions from synapses converge.
The axon initial segment has a lower threshold for AP generation than the cell body or dendrites because it has a higher density of voltage-gated sodium ion channels.
E.g. The initial segment only needs a depolarization of 10 mV (from -65 to -55 mV) compared to the cell body which needs a depolarization of 30 mV (from -65 to -35 mV).
Once the initial segment discharges, the AP also depolarizes the membrane of the cell body to threshold and, at the same time, is propagated along the axon.
Two passive membrane properties that affect neuronal integration
Membrane time constant helps determine the time course of the synaptic potential in the excitatory postsynaptic current (EPSC).
E.g. Neurons with a large membrane time constant have a greater capacity for temporal summation than neurons with small time constants.
The longer the time constant, the greater the chance that two consecutive inputs sum to bring the membrane potential past threshold.
This controls temporal summation.
Length constant of the cell determines the degree to which the EPSP decreases as it passively spreads from synapse to cell body to axon initial segment.
E.g. Neurons with a longer length constant transmit signals with little loss, while in neurons with shorter length constants, signals decay rapidly with distance.
This controls spatial summation.
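A sketch of how these two constants shape summation, using simple exponential-decay approximations; the time constants, length constants, amplitudes, and distances below are made-up illustrative values.

```python
import math

def temporal_summation(epsp_mv, interval_ms, tau_ms):
    """Peak reached when a second identical EPSP arrives interval_ms after the first,
    assuming the first decays exponentially with the membrane time constant."""
    return epsp_mv * math.exp(-interval_ms / tau_ms) + epsp_mv

def spatial_attenuation(epsp_mv, distance_um, length_constant_um):
    """EPSP amplitude remaining after passive spread over distance_um."""
    return epsp_mv * math.exp(-distance_um / length_constant_um)

# Temporal summation: a long time constant preserves more of the first EPSP.
print(temporal_summation(5.0, interval_ms=10.0, tau_ms=20.0))  # ~8.0 mV
print(temporal_summation(5.0, interval_ms=10.0, tau_ms=5.0))   # ~5.7 mV

# Spatial summation: a long length constant delivers more of a distal EPSP to the soma.
print(spatial_attenuation(5.0, distance_um=300.0, length_constant_um=1000.0))  # ~3.7 mV
print(spatial_attenuation(5.0, distance_um=300.0, length_constant_um=200.0))   # ~1.1 mV
```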
The mammalian CNS has relatively few types of glutamatergic pyramidal neurons in comparison to the large variety of GABAergic inhibitory interneurons.
Even though only 20% of all neurons are inhibitory, the overall levels of inhibition and excitation tend to be nearly balanced in most brain regions.
This means that neural circuits are mostly tuned to only respond to the most salient excitatory information.
We’ve noticed a pattern that different types of interneurons selectively target different regions of their postsynaptic neurons.
This selective targeting is important because the location of inhibitory inputs relative to excitatory inputs is critical in determining the effectiveness of inhibition.
E.g. Shunting inhibition near the cell body is more effective than inhibition out on a dendrite. Conversely, inhibition delivered near the cell body has largely decayed by the time it spreads to the distal dendrites.
Classes of inhibitory neurons
Basket cells and chandelier cells exert strong control over neuronal output by specifically targeting the soma and axon initial segment, respectively. Paradoxically, some chandelier cells enhance neuronal firing because the chloride ion reversal potential in some axons can be positive to the threshold for AP firing.
Martinotti cells specifically target distal dendrites and spines.
Vasoactive intestinal peptide (VIP) inhibitory interneurons selectively target other interneurons and act to decrease the overall level of inhibition in a neural circuit.
Propagation of signals in dendrites was originally thought to be purely passive, but we now know that dendrites can produce APs.
E.g. Most dendrites have voltage-gated sodium, potassium, and calcium channels.
One function of the voltage-gated sodium and calcium channels in dendrites is to amplify the EPSP.
However, the number of voltage-gated sodium and calcium channels in dendrites isn’t usually sufficient to support the all-or-none regenerative propagation of an AP to the cell body.
Rather, APs generated in dendrites are usually local events that spread to the cell body and axon initial segment.
These same dendritic voltage-gated channels also allow APs generated at the axon initial segment to propagate backwards (backpropagate) into the dendritic tree.
This can be thought of as an AP reverberating or echoing within a neuron.
The function of these backpropagating APs isn’t known, but they may be used by NMDA receptor-channels, thereby contributing to synaptic plasticity and learning.
Sometimes, NMDA receptors enable a positive feedback loop of depolarization which opens more NMDA channels that leads to a local regenerative depolarization called an NMDA spike.
NMDA spikes are purely local and can’t propagate because they require glutamate release.
Almost 95% of all excitatory inputs in the brain terminate on dendritic spines, not dendritic shafts.
The function of dendritic spines isn’t completely understood, but it may compartmentalize the synapse by restricting the movement of ions using the thin spine neck.
Another possible function is a site of synaptic integration when APs backpropagate.
E.g. A backpropagating AP paired with presynaptic stimulation causes greater summation and this may be a biochemical coincidence detector of the near simultaneity of the input (EPSP) and output (backpropagating AP).
Highlights
Glutamate is used in most excitatory synapses while GABA and glycine are used in most inhibitory synapses.
Three major classes of ionotropic glutamate receptors: AMPA, NMDA, and kainate.
Binding of glutamate opens a nonselective cation channel permeable to sodium and potassium ions. The NMDA receptor-channel also has a high permeability to calcium ions.
The NMDA receptor acts as a coincidence detector as it only conducts when both glutamate is released and when the postsynaptic membrane is sufficiently depolarized to expel the magnesium ion.
Calcium influx through the NMDA receptor during strong synaptic activation can trigger intracellular signaling cascades, leading to long-term plasticity, providing a potential mechanism for memory.
Binding of GABA or glycine to its receptor activates a chloride ion selective channel. The chloride ion equilibrium potential is slightly negative to the resting potential, causing hyperpolarization.
Whether a neuron fires an AP or not depends on spatial and temporal summation of the various excitatory and inhibitory inputs and is determined by the size of the resulting depolarization at the axon initial segment.
Dendrites also have voltage-gated channels, enabling them to fire local APs and can amplify the size of the local EPSP to produce a larger depolarization at the cell body.
Chapter 14: Modulation of Synaptic Transmission and Neuronal Excitability: Second Messengers
Activation of an ionotropic receptor directly opens an ion channel, while activation of a metabotropic receptor regulates the opening of ion channels indirectly through biochemical signaling pathways.
Ionotropic receptors are fast and are the basis of all behaviors, while metabotropic receptors are slow and modulate behavior.
Metabotropic receptors mediate the actions of transmitters, hormones, and growth factors.
Two major families of metabotropic receptors
G protein-coupled
Receptor tyrosine kinases
The cyclic AMP (cAMP) pathway is the best understood second-messenger signaling cascade initiated by G protein-coupled receptors.
Skipping over the details of the cAMP pathway.
The binding of transmitter to metabotropic receptors can greatly influence the electrophysiological properties of a neuron.
E.g. Altering transmitter release by regulating either calcium ion influx or the efficacy of synaptic release.
Since metabotropic receptors act on the resting and voltage-gated channels of a neuron, they effectively control the neuron’s resting potential, membrane resistance, length and time constants, threshold potential, action potential duration, and repetitive firing characteristics.
Two general properties of neuromodulation
Modulatory projection neurons can coordinately influence the properties of large numbers of neurons to change the state of a neural circuit or of the entire animal.
Neuromodulators can bias a circuit’s dynamics to expand its dynamic range or to adapt it to the behavioral needs of the animal.
Highlights
Neuromodulators are substances that bind to receptors, mostly metabotropic, to change the excitability of neurons, the likelihood of transmitter release, or the functional state of receptors on postsynaptic neurons.
When neuromodulators activate second-messenger pathways, it can influence ion channel properties and other targets.
Given that all neurons and synapses are modulated by one or more substances, it’s remarkable that brain circuits are only rarely “overmodulated” such that they lose their function.
Chapter 15: Transmitter Release
The brain’s ability to learn and memorize is thought to emerge from the elementary properties of chemical synapses.
The amount of transmitter released is a steep function of the amount of presynaptic depolarization.
E.g. More depolarization at the presynaptic synapse leads to more neurotransmitter released.
However, there is a limit as depolarization above an upper limit produces no further increases in the postsynaptic potential.
Transmitter release still occurs normally when both sodium and potassium ion channels are blocked (provided the terminal is depolarized), suggesting that neither ion is required for release itself; they serve only to depolarize the terminal.
Two functions of calcium ions
A carrier of depolarizing charge during the AP.
A special chemical signal conveying information about changes in membrane potential to the intracellular machinery responsible for transmitter release.
Calcium ions are used because of their low intracellular resting concentration, so that small absolute changes can lead to large relative changes, triggering various biochemical reactions.
Calcium channels are mostly localized in presynaptic terminals at active zones.
Active zone: a site where neurotransmitter is released, located directly opposite the postsynaptic receptors.
This is important because calcium ions don’t diffuse across long distances.
One striking feature of transmitter release at all synapses is its steep and nonlinear dependence on calcium ion influx.
E.g. A 2-fold increase in calcium ion influx can increase the amount of transmitter released by more than 16-fold.
Synaptic delay: a lag of about 1-2 ms between the onset of the presynaptic AP and the resulting EPSP.
The synaptic delay is due to the slow opening of calcium ion channels compared to sodium ion channels, and calcium ions don’t begin to enter the presynaptic terminal until the membrane has begun to repolarize.
Once calcium ions enter the terminal, transmitter is rapidly released with a delay of only a few hundred microseconds.
Thus, synaptic delay is mostly due to the time required to open calcium ion channels.
The rapid release of transmitter suggests that the release process must already exist in a primed and ready state.
E.g. Vesicles must already be created and filled with transmitter before release.
The calcium ion influx is only open for a short time and is localized to the active zone, which means a concentrated local pulse of calcium ions induces a burst of transmitter release.
The local calcium ion concentration needed to trigger transmitter release is around 10-30 micromolar for a normal AP.
The relationship between calcium ion concentration and transmitter release is highly nonlinear and is consistent with a model in which at least four or five calcium ions must bind to the calcium sensor to trigger release.
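That cooperativity is enough to reproduce the 2-fold/16-fold relationship mentioned above; a minimal power-law sketch, assuming an exponent of about 4.

```python
# Power-law model of release: release ~ [Ca]^n, with n ~ 4 cooperative binding sites.
N_SITES = 4

def relative_release(ca_fold_change, n=N_SITES):
    return ca_fold_change ** n

print(relative_release(2))    # 16: doubling calcium influx gives ~16x more release
print(relative_release(1.5))  # ~5: even modest changes are strongly amplified
```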
Skipping over the several classes of calcium channels.
Transmitter is released in discrete amounts called quanta, and each quantum of transmitter produces a postsynaptic potential of a fixed size called the quantal synaptic potential.
The total postsynaptic potential is made up of a large number of quantal potentials; EPSPs seem smoothly graded because each quantum is small relative to the total.
The greater the calcium ion influx into the terminal, the larger the number of transmitter quanta released.
In the absence of an AP, the rate of quantal release is low at only one quantum per second.
In the presence of an AP, the rate of quantal release is high at about 150 quanta per millisecond.
The mechanism behind the quantal release of transmitter is that transmitter is stored in synaptic vesicles.
Synaptic vesicles have a diameter of around 40 nm, and although most don’t contact the active zone, some are physically bound to it and are called docked vesicles.
Active zones are generally found in precise apposition to the postsynaptic membrane patches that contain the neurotransmitter receptors.
Presynaptic and postsynaptic specializations are functionally and morphologically attuned to each other, sometimes precisely aligned in structural “nanocolumns”.
All known chemical synapses have demonstrated quantal transmission.
Factors that affect transmitter release
Number of synapses between the presynaptic and postsynaptic cell.
E.g. In the CNS, most neurons only form a few synapses with any postsynaptic cell. However, in the cerebellum, a single climbing fiber forms up to 10,000 terminals on one Purkinje neuron.
Number of active zones in an individual synaptic terminal.
E.g. In the CNS, most presynaptic boutons have only one active zone, where an AP usually releases, at most, a single quantum of transmitter. However, the calyx of Held contains many active zones because auditory information must be transmitted reliably.
Probability that a presynaptic AP will trigger the release of transmitter.
E.g. The mean probability of transmitter release from a single active zone varies widely from less than 0.1 to greater than 0.9.
Release probability can be powerfully regulated as a function of neuronal activity.
Synaptic reliability: the probability that an AP in a presynaptic cell leads to some measurable response in the postsynaptic cell.
Synaptic efficacy: the mean amplitude of the synaptic response, dependent on both synaptic reliability and on the mean size of the response.
Most neurons in the CNS have a low probability of transmitter release (aka high failure rate) and this isn’t a design defect but serves a purpose.
Not all chemical signaling between neurons depends on the synaptic machinery described above.
Surprisingly, about 90% of the ACh that leaves the presynaptic terminals at the neuromuscular junction does so through continuous leakage, but this is ineffective in causing an EPSP because it isn’t targeted to receptors, isn’t concentrated, and isn’t synchronous.
Review of exocytosis and endocytosis.
Since membrane capacitance is proportional to membrane surface area, we can measure a synapse’s capacitance to monitor exocytosis as changes in capacitance reveal the time course of exocytosis and endocytosis.
In neurons, the change in capacitance caused by the fusion of single, small vesicles is too small to resolve, but the fusion of many vesicles can be resolved.
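A back-of-the-envelope version of why a single fusion event is hard to resolve, assuming a typical specific membrane capacitance of about 1 µF/cm² and the 40 nm vesicle diameter mentioned earlier (both used here only for illustration).

```python
import math

# Capacitance added by fusing one spherical vesicle of diameter 40 nm,
# assuming a specific membrane capacitance of ~1 uF/cm^2 (typical textbook value).
SPECIFIC_CAPACITANCE = 1e-6      # F per cm^2
diameter_cm = 40e-7              # 40 nm expressed in cm

vesicle_area = math.pi * diameter_cm**2            # sphere surface area = pi * d^2
delta_c = SPECIFIC_CAPACITANCE * vesicle_area

print(f"Capacitance per fused vesicle ~ {delta_c * 1e18:.0f} aF")  # tens of attofarads
```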
When firing at high frequencies, a typical presynaptic neuron is able to maintain a high rate of transmitter release by retrieving and recycling used vesicles.
Since nerve terminals are usually far from the cell body, replenishing vesicles by synthesis in the cell body and transport to the terminals would be too slow to be practical at fast synapses. So synaptic recycling must happen at the synaptic terminal.
Synaptic vesicles are released and reused in a simple cycle.
Vesicle cycle
Vesicles fill with neurotransmitter and cluster in the nerve terminal.
They dock at the active zone where they undergo a complex priming process that makes vesicles respond to the calcium ion signal that triggers the fusion process.
Numerous mechanisms exist for retrieving the synaptic vesicle membrane.
E.g. Reversibly opening and closing the fusion pore without full fusion of the vesicle membrane with the plasma membrane. In the kiss-and-stay pathway, the vesicle remains at the active zone after the fusion pore closes, ready for a second release.
Skipping over the protein mechanism behind synaptic vesicles.
Synaptic plasticity: modulating the effectiveness of chemical synapses for seconds, hours, days, or longer.
Synaptic strength can be modified presynaptically, by altering the release of neurotransmitter, or postsynaptically, by modulating the response to neurotransmitter, or both.
We focus on how synaptic strength can be changed by modulating the amount of transmitter released.
Changes in transmitter release can be controlled by two different mechanisms
Changes in calcium ion influx
Changes in the amount of transmitter released in response to a given calcium ion concentration
Both types of mechanisms contribute to different forms of plasticity.
Synaptic strength is often altered by the pattern of activity of the presynaptic neuron.
Synaptic depression: a decrease in the size of the postsynaptic response to repeated stimulation.
Synaptic facilitation/potentiation: an increase in transmission with repeated stimulation.
Whether a synapse facilitates or depresses is often determined by the probability of release in response to the first AP of a spike train.
E.g. Synapses with an initially high probability of release undergo depression because the high rate of release transiently depletes docked vesicles at the active zone.
E.g. Synapses with an initial low probability of release undergo facilitation, in part because the buildup of intracellular calcium during the train increases the probability of release.
Longer spike trains increase the time that voltage-gated calcium ion channels stay open, which leads to enhanced entry of calcium ions and a subsequent increase in transmitter release, resulting in a larger postsynaptic potential.
Synaptic depression may contribute to sensory adaptation during repeated stimulation.
E.g. The time course of sensory adaptation parallels the attenuation of cortical spiking to repeated stimulation and the synaptic depression of EPSPs.
We find the simplest kind of cellular memory in the form of residual free calcium ions in synaptic terminals after a rapid burst of spikes.
Residual free calcium is thought to increase the priming of synaptic vesicles after stimulation, resulting in a form of short-term memory.
Postsynaptic inhibition: decreasing the likelihood that the postsynaptic cell will fire by hyperpolarizing the cell body or dendrites.
Presynaptic inhibition: reducing the amount of transmitter released to the postsynaptic cell by forming synapses on the axon terminal of another cell and hyperpolarizing it.
Presynaptic facilitation: increasing the amount of transmitter released by forming synapses on the axon terminal of another cell and depolarizing it.
Presynaptic terminals are endowed with a variety of mechanisms that allow for the fine-tuning of synaptic transmission strength.
Although we know a fair amount about the mechanisms of short-term changes in synaptic strength, the mechanisms of long-term changes remain mysterious.
We suspect that long-term changes require alterations in gene expression, growth of presynaptic and postsynaptic structures, altering calcium ion influx, and enhancement of transmitter release from existing terminals.
Highlights
Chemical neurotransmission is the primary communication mechanism used by neurons throughout the nervous system.
Neurotransmitter release is heavily dependent on the depolarization of the presynaptic terminal. It’s the depolarization itself, and not the voltage-gated sodium or potassium channels, that triggers release.
Depolarization of the presynaptic terminal opens voltage-gated calcium ion channels (VGCCs) that cause calcium to flow in (influx). These channels are concentrated along presynaptic “active zones”, which are very close to the sites at which release occurs.
The relationship between calcium ion influx and neurotransmitter release is tightly coupled and steeply nonlinear. The peak calcium ion influx lags behind the peak of the AP due to the slow opening of VGCCs.
Chemical transmission generally involves the release of quantal packets of neurotransmitter, with a quantum/unit corresponding to the contents of a single synaptic vesicle.
The amplitude of a postsynaptic potential can be described as a product of multiple factors
Number of presynaptic sites occupied by a readily releasable vesicle.
Release probability of individual sites.
Size of the postsynaptic response to the release of a single vesicle.
The number of vesicles released can be described by a binomial distribution.
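A minimal sketch of that product and of the binomial description; the values of n, p, and q below are arbitrary examples, not measurements from the text.

```python
import random

# Quantal model of synaptic strength: n release sites, each releasing one vesicle
# with probability p, and each released quantum producing a postsynaptic response q.
n_sites = 10      # sites occupied by a readily releasable vesicle (example value)
p_release = 0.3   # release probability per site (example value)
q_mv = 0.2        # postsynaptic response per quantum, in mV (example value)

mean_epsp = n_sites * p_release * q_mv
print(f"Predicted mean EPSP: {mean_epsp:.2f} mV")   # 0.60 mV

# Trial-to-trial fluctuation: the number of vesicles released follows a binomial distribution.
def one_trial():
    released = sum(random.random() < p_release for _ in range(n_sites))
    return released * q_mv

trials = [one_trial() for _ in range(10_000)]
print(f"Simulated mean EPSP: {sum(trials) / len(trials):.2f} mV")
print(f"Failure rate: {trials.count(0.0) / len(trials):.2%}")   # ~(1-p)^n, here ~3%
```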
Vesicles release their transmitter by fusing with the presynaptic membrane, dumping their contents into the synaptic cleft in a process called exocytosis.
Synaptic vesicle exocytosis is very precise and rapid due to the machinery being primed.
Vesicles are recycled by a process called endocytosis.
Rapid endocytosis of vesicle membranes after release enables fast recycling of vesicles for a continuous supply during prolonged stimulation.
Transmitter release can be modulated as an aspect of synaptic plasticity.
Synaptic strength can be strongly influenced by the pattern of firing in phenomena known as “depression” and “facilitation”, and by the regulation of calcium ion channels.
Chapter 16: Neurotransmitters
Four steps of chemical synaptic transmission
Synthesis and storage of the transmitter.
Release of the transmitter.
Interaction of the transmitter with receptors at the postsynaptic membrane.
Removal of the transmitter from the synapse.
This chapter covers the first and last steps of chemical synaptic transmission.
The competing identification of the adrenaline/epinephrine molecule led to it having many different names.
Neurotransmitter: a substance that’s released by a neuron that affects a specific target in a specific manner.
As with many other operational concepts in biology, the concept of a neurotransmitter isn’t precise.
Although hormones and neurotransmitters are similar, neurotransmitters usually act on targets close to the site of transmitter release, whereas hormones are released into the bloodstream to act on distant targets.
At many synapses, transmitters activate not only postsynaptic receptors, but also autoreceptors at the presynaptic release site.
Autoreceptors: receptors at the presynaptic release site that modulate ongoing synaptic transmission through a feedback loop.
E.g. Limiting further release of transmitter or inhibiting subsequent transmitter synthesis.
The duration of interaction between neurotransmitters and receptors is short; on the order of less than a millisecond to several seconds.
While the interaction is brief, it can result in long-term changes within the target cell often by activating gene transcription.
Four criteria of a classic neurotransmitter
Synthesized in the presynaptic neuron.
Accumulated within vesicles in presynaptic release sites and is released by exocytosis in amounts sufficient to exert a defined action on the postsynaptic neuron or cell.
When given exogenously (applied from outside) in reasonable concentrations, it mimics the action of the endogenous (naturally occurring) transmitter.
A specific mechanism exists for removing the substance from the extracellular environment.
Two main classes of chemical substances that fit these criteria
Small-molecular transmitters
E.g. ACh, glutamate, GABA, glycine, ATP.
Neuropeptides: short polymers of amino acids.
Acetylcholine (ACh) is the only low-molecular-weight aminergic transmitter substance that’s not an amino acid or derived directly from one.
ACh is released at all vertebrate neuromuscular junctions by spinal motor neurons.
Biogenic amine transmitters
Catecholamines
E.g. Dopamine, norepinephrine, and epinephrine.
Norepinephrine is the only transmitter synthesized within vesicles.
In many cases, neurons that release norepinephrine can also release its precursor dopamine, and thus can act at neurons expressing receptors for either.
Only a small number of neurons in the brain use epinephrine as a transmitter.
Norepinephrine is far more active during the awake state than sleep or anesthesia.
If presynaptic activity is sufficiently prolonged, such as during stress, other changes in the production of norepinephrine will occur.
Severe stress to an animal results in intense presynaptic activity and persistent firing of the postsynaptic adrenergic neuron, placing a greater demand on transmitter synthesis.
The synthesis of biogenic amines is highly regulated and can be rapidly increased, which can keep up with wide variations in neuronal activity.
Serotonin
Histamine
Amino acid transmitters
In contrast to ACh and the biogenic amines, which aren’t intermediates in general metabolic pathways and are only produced in certain neurons, the amino acids glutamate and glycine aren’t only neurotransmitters, but are also universal cellular constituents.
Glutamate, the most common excitatory neurotransmitter, is taken up from the synaptic cleft by specific transporters in the membrane of both neurons and glia.
Glycine is one of the major inhibitory neurotransmitters in the spinal cord.
GABA, another major inhibitory neurotransmitter, is synthesized from glutamate.
GABA and glycine are loaded into synaptic vesicles by the same transporter and thus, can be co-released from the same vesicles.
ATP and adenosine
Caffeine’s stimulatory effects depend on it blocking adenosine from binding to its receptors.
With tissue damage, ATP is released into the general area and activates nociceptors on peripheral axons, causing the sensation of pain.
The presence of a substance in a neuron isn’t sufficient evidence that the substance is used as a transmitter.
E.g. Neurotransmitter glutamate is different from metabolic glutamate in that transmitter glutamate is compartmentalized in synaptic vesicles.
The uptake of transmitter into vesicles is energy-dependent as it works against the concentration gradient and can concentrate some neurotransmitters up to 100,000-fold relative to their concentration in the cytoplasm.
Uptake of transmitters by transporters is fast, enabling vesicles to be quickly refilled after they release their transmitter and are retrieved by endocytosis.
This is important for maintaining the supply of releasable vesicles during periods of rapid nerve firing.
Drugs that are sufficiently similar to normal transmitters can act as false transmitters.
These false transmitters are packaged into vesicles and released by exocytosis but they often bind only weakly or not at all to the postsynaptic receptor, thus their release decreases the efficacy of transmission.
E.g. Several drugs used to treat hypertension are taken up into adrenergic synapses but when released, they fail to stimulate postsynaptic adrenergic receptors, thereby relaxing vascular smooth muscle by inhibiting adrenergic tone.
An unexpected finding is that dopamine can be released from dendrites, even though dendrites lack synaptic vesicles.
Small-molecule transmitter substances can be formed in all parts of the neuron, but most importantly they can be synthesized at the axonal presynaptic site where they’re released.
In contrast, neuroactive peptides are formed in the cell body and are transported to the axonal presynaptic site.
Skipping over the neuroactive peptide neurotransmitters.
No uptake mechanism exists for neuropeptides and once a peptide is released, a new supply must arrive from the cell body.
In some neurons, there’s a corelease of small-molecule and peptide transmitters that often work synergistically.
E.g. ACh and vasoactive intestinal peptide, ACh and calcitonin gene-related peptide, glutamate and dynorphin.
Dense-core vesicles that release peptides differ from small-clear vesicles that release only small-molecule transmitters.
The peptide-containing vesicles may or may not contain small-molecule transmitter, but both types of vesicles contain ATP.
Corelease of ATP is an important illustration that coexistence and corelease don’t necessarily signify cotransmission: ATP may be released simultaneously with, but independently of, another transmitter, or it may be released alone.
An interesting example of corelease of two small-molecule transmitters is that of glutamate and dopamine by neurons projecting to the ventral striatum, cortex, and elsewhere.
Removal of transmitter from the synaptic cleft terminates synaptic transmission.
The duration of removal is important because if transmitter molecules released from one synaptic action were allowed to remain in the cleft, this would prevent new signals from going through and the synapse would ultimately become refractory due to receptor desensitization.
Three mechanisms to remove transmitter substances
Diffusion
E.g. Most important in brain regions with low innervation.
Enzymatic degradation
E.g. ACh at the neuromuscular junction.
Reuptake
E.g. Small-molecule transmitters.
Neuropeptides are removed relatively slowly from the synaptic cleft by slow diffusion and proteolysis, in contrast to small-molecule transmitters, which are removed more quickly.
The critical mechanism for inactivation of most small-molecule neurotransmitters is reuptake at the plasma membrane.
Reuptake serves two purposes: terminating the synaptic action of the transmitter and recapturing the transmitter molecules for reuse.
Each type of neuron has its own characteristic uptake mechanism.
E.g. Cocaine blocks the uptake of dopamine, norepinephrine, and serotonin.
Drugs that block transporters and reuptake can prolong and enhance synaptic signaling at the cost of desensitization.
Highlights
One way information is carried between neurons is by chemical messages that cross the synaptic cleft.
The two major classes of chemical messengers are: small-molecule transmitters and neuroactive peptides.
To prevent depletion of small-molecule transmitters during rapid synaptic transmission, most are synthesized locally at presynaptic terminals.
Protein precursors of neuroactive peptides are synthesized only in the cell body and are carried to the terminals by axoplasmic transport. Unlike the vesicles that contain small-molecule transmitters, these vesicles aren’t refilled at the terminal.
Precise mechanisms for terminating transmitter actions represent a key step in synaptic transmission that’s as important as transmitter synthesis and release.
Some transmitter actions are terminated by simple diffusion, but most are terminated by specific molecular reactions.
Not all molecules released by a neuron are chemical messengers; only those that bind to appropriate receptors and initiate functional changes in the target neuron are considered neurotransmitters.
Information is transmitted when transmitter molecules bind to receptor proteins, causing them to change conformation, leading to either increased or decreased membrane potential.
The corelease of several neuroactive substances allows for great diversity of information to be transferred in a single synaptic action.
Part IV: Perception
The fundamental nature of perceptual experience is that it’s a construct that we alone impose.
All of our receptive systems serve as filters, characterized by neural receptive fields that highlight certain forms of information and restrict others.
These selective filters are tunable over different timescales, enhancing attention to salient stimuli and adapting to the statistics of the sensory world.
The constructive transition from a world of sensory evidence to one of meaning lies at the heart of perception and has long been one of the most engaging mysteries of human cognition.
Perceptual experience of the world around us is a prerequisite for meaningful interaction with the world.
Decisions are based on the accumulation of sensory evidence in support of one percept versus another.
Chapter 17: Sensory Coding
Through sensation, we form an immediate and relevant picture of the world around us and our place within it.
Sensation provides immediate answers to three ongoing and essential questions
Is something there?
What is it?
What has changed?
To answer these questions, all sensory systems perform three fundamental functions
Detection
Discrimination
Adaptation
This chapter covers the organizational principles and coding mechanisms that are universal to all sensory systems.
Sensory information: neural activity originating from stimulation of receptor cells in specific parts of the body.
Our sensory modalities
Five classic ones (vision, hearing, touch, taste, smell)
Specialized receptors in each sensory system provide the first neural representation of the external and internal world, transforming a specific type of stimulus into electrical signals.
All sensory information is then transmitted to the CNS by the common currency or language of action potentials (APs).
This information flows through regions of the brain, changing its representation as it travels.
Sensory pathways are also controlled by higher centers in the brain that modify and regulate incoming sensory signals by feeding information back to earlier stages of processing.
So, perception isn’t just the “raw” physical sensory information but also cognition and experience.
To what extent do the sensations we experience accurately reflect the stimuli that produce them?
Psychophysics: studies the relationship between the physical characteristics of a stimulus and attributes of the sensory experience.
Sensory physiology: studies the neural consequences of a stimulus.
All sensory systems have a threshold, a limit to their ability to detect whether a stimulus occurred or not.
Two functions of thresholds
They prevent sensations that aren’t of interest or relevant from being detected, reducing noise.
The specific nonlinearity introduced by thresholds aids encoding and processing.
Sensory thresholds are a feature, not a bug.
Psychometric function: the percentage of times the subject reports detecting the stimulus as a function of stimulus amplitude.
By convention, the threshold is defined as the stimulus amplitude detected in half of the trials.
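A sketch of a psychometric function and the conventional 50% threshold; the logistic form and its parameters here are illustrative assumptions, not the text’s data.

```python
import math

def detection_probability(amplitude, threshold=1.0, slope=5.0):
    """Illustrative logistic psychometric function: fraction of trials in which
    a stimulus of the given amplitude is reported as detected."""
    return 1.0 / (1.0 + math.exp(-slope * (amplitude - threshold)))

for amp in (0.5, 0.8, 1.0, 1.2, 2.0):
    print(f"amplitude {amp:.1f} -> detected on {detection_probability(amp):.0%} of trials")

# By convention, the threshold is the amplitude detected on 50% of trials: here, 1.0.
```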
Stimuli are represented in the nervous system by the firing patterns of neurons.
Combining psychophysical measurements with neurophysiological techniques allows us to study the neural mechanisms that transform sensory neural signals into percepts.
The neural coding of sensory information is better understood at the early stages of processing than at later stages in the brain.
This approach to the neural coding problem, of how stimuli are translated into APs, was pioneered by Mountcastle, who showed that single-cell recordings of spike trains from PNS and CNS sensory neurons provide a statistical description of the neural activity evoked by a physical stimulus.
The study of neural coding of information is fundamental to understanding how the brain works.
Neural code: describes the relationship between the activity of a specified neural population and its functional consequences.
It’s often said that the power of the brain lies in the millions of neurons that process information in parallel.
This, however, doesn’t capture the essential difference between the brain and all of the other bodily organs.
E.g. In the kidney or muscle, most cells do similar things, so if we understand one muscle cell, we understand how whole muscles work. However, in the brain, understanding one neuron isn’t enough to generalize to the entire brain because different neurons do different things.
To understand the brain, we need to understand how its tasks are organized into networks of neurons.
Two features distinguishing sensory systems
Different driving stimulus energies.
E.g. Light vs sound.
Discrete pathways compose each system.
E.g. Eyes vs ears.
Each neuron in a network performs a specific task, and the train of APs it produces has a specific functional significance for all postsynaptic neurons in that pathway.
Our sensory experience differs from the physical properties of stimuli because the nervous system extracts certain features while ignoring others.
E.g. We receive electromagnetic waves of different wavelengths, but we see them as colors. We receive pressure waves of different frequencies, but we hear them as tones. We receive chemical compounds floating in the air but we experience them as odors and tastes.
Colors, tones, odors, and tastes are mental constructs of the brain that don’t exist outside of the brain, but are linked to specific physical properties of stimuli.
The richness of sensory experience begins with the highly diversified set of sensory receptors.
Each receptor responds optimally to a specific kind of energy at a specific location and sometimes, only to energies with a particular temporal or spatial pattern.
The receptor transforms the stimulus energy into electrical energy, so all sensory systems use a common signaling mechanism.
Receptor potential: the electrical signal generated by the sensory receptor; its amplitude and duration reflect the intensity and time course of the stimulus.
Stimulus transduction: the process of converting a specific stimulus energy into an electrical signal.
The arrangement of receptors in a sense organ allows for further specialization of function within each sensory system.
E.g. The concentration of cones in the fovea of the eye or the frequency map in the cochlea.
Four classes of receptors, distinguished by the type of stimulus energy they transduce
Mechanoreceptor
Chemoreceptor
E.g. Olfaction, gustation, itch, pain, visceral sensation.
Photoreceptor
Thermoreceptor
Each major sensory system has several submodalities.
E.g. Taste can be sweet, sour, salty, savory, or bitter. Vision has color, shape, pattern. Touch has temperature, texture, firmness. Sound has pitch, loudness, rhythm.
Some submodalities are due to discrete subclasses of receptors while others are derived by combining information from different receptor types.
Each receptor behaves as a filter for a narrow range (bandwidth) of energy; we say the receptor is tuned to an optimal (best or preferred) stimulus that evokes its strongest neural response.
Tuning curve: a plot of a receptor’s response as a function of a stimulus feature (e.g. wavelength or frequency).
Tuning curves show the range of sensitivity of the receptor, including its preferred stimulus.
E.g. Blue cone cells in the retina are most sensitive to light between 430-440 nm, but they still respond to light outside of their preferred wavelength.
A photoreceptor’s graded amplitude response encodes specific wavelengths, but also the intensity of light.
E.g. A green cone responds similarly to bright orange as to dim green. How are these distinguished by the nervous system?
Stronger stimuli activate more photoreceptors and the resulting population code of multiple receptors, combined with receptors of different wavelength preferences, distinguishes brightness/intensity from color/hue.
Thus, neural ensembles enable individual visual neurons to multiplex signals of color and brightness in the same pathway.
The tuning curve of a photoreceptor is roughly symmetric about its preferred wavelength.
E.g. Red cones respond similarly to light of 520 and 600 nm. How are these distinguished by the nervous system?
The answer, again, lies with multiple receptors as green cones respond very strongly to 520 nm (close to preferred wavelength) but weakly to 600 nm light, and blue cones respond very weakly to 520 nm but don’t respond to 600 nm light.
So, 520 nm light is perceived as green, while 600 nm light is perceived as orange.
By varying the combinations of photoreceptors, we’re able to perceive a spectrum of colors. The same goes for every other sensory receptor.
Changes in the relative activation of each cone type account for the perception of color.
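A toy sketch of how a population of cone types can disambiguate hue from intensity, as described above. The Gaussian tuning curves, peak wavelengths, and bandwidth below are illustrative assumptions, not the textbook’s values.

```python
import numpy as np

# Illustrative Gaussian tuning curves for S, M, and L cones.
CONE_PEAKS = {"S": 430.0, "M": 530.0, "L": 560.0}   # nm, rough assumptions
BANDWIDTH = 60.0                                    # nm, rough assumption

def cone_responses(wavelength_nm, intensity):
    # Graded response of each cone class to a monochromatic light.
    return {
        name: intensity * np.exp(-((wavelength_nm - peak) / BANDWIDTH) ** 2)
        for name, peak in CONE_PEAKS.items()
    }

for wavelength, intensity in [(520, 1.0), (520, 3.0), (600, 3.0)]:
    r = cone_responses(wavelength, intensity)
    total = sum(r.values())
    # The relative pattern across the population codes hue and is unchanged
    # when only the intensity changes; the total activity tracks intensity.
    pattern = {k: round(v / total, 2) for k, v in r.items()}
    print(wavelength, "nm, intensity", intensity, "-> pattern:", pattern,
          "total:", round(total, 2))
```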
The existence of submodalities points to an important principle of sensory coding: that the range of stimulus energies is deconstructed into smaller, simpler components whose intensity is monitored over time by specialized receptors that transmit information in parallel to the brain.
The brain eventually integrates these diverse components of the stimulus to convey an integrated representation of the sensory event.
The current challenge is to understand how sensory information is distributed across populations of neurons.
Because the sense organs are located far from the CNS, receptor potentials propagated passively would decay before reaching the CNS and so couldn’t transmit their signal.
To communicate sensory information to the brain, a second step of neural coding must transform the receptor potential into a sequence of APs since APs can travel far.
The analog signal of stimulus magnitude in the receptor potential is transformed into a digital pulse code where the frequency of APs is proportional to the intensity of the stimulus.
Thus, we say that the spike train encodes stimulus information.
Rate coding: when stronger stimuli evoke larger receptor potentials that generate more APs (higher frequency).
Population coding: when a stimulus is represented by all active neurons in the receptor population.
Most sensory systems have low- and high-threshold receptors, enabling us to perceive over a greater dynamic range.
E.g. Rods are low-threshold receptors and cones are high-threshold receptors.
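A rough sketch of the rate coding and population coding ideas above, combining low- and high-threshold receptor classes to cover a wider dynamic range. The thresholds, gains, and saturation rate are made-up numbers for illustration.

```python
import numpy as np

def firing_rate(stimulus, threshold, gain, max_rate=200.0):
    # Rate code: firing rate grows with stimulus strength above threshold,
    # saturating at max_rate. Thresholds and gains are illustrative.
    return np.clip(gain * (stimulus - threshold), 0.0, max_rate)

stimuli = np.array([0.5, 2.0, 10.0, 50.0])
low_threshold = firing_rate(stimuli, threshold=0.2, gain=20.0)   # e.g. a rod-like receptor
high_threshold = firing_rate(stimuli, threshold=5.0, gain=4.0)   # e.g. a cone-like receptor

# The population (both classes together) covers a wider dynamic range than
# either class alone: weak stimuli drive only the low-threshold receptor,
# while very strong stimuli are still discriminable via the high-threshold one.
for s, lo, hi in zip(stimuli, low_threshold, high_threshold):
    print(f"stimulus {s:5.1f}: low-threshold {lo:6.1f} Hz, high-threshold {hi:6.1f} Hz")
```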
The firing rates of each neuron in a population can be plotted in a coordinate system with multiple axes such as modality, location, intensity, and time.
The possibilities for information coding through temporal patterning within and between neurons in a population are enormous.
The instantaneous firing patterns of sensory neurons are as important to sensory perception as the total number of spikes fired over time.
E.g. Steady rhythmic firing in hand nerves is perceived as steady pressure or vibration depending on which touch receptors and pathways are activated. Bursting patterns may be perceived as motion.
The patterning of spike trains plays an important role in encoding changes (temporal fluctuations) of the stimulus.
Humans can report changes in sensory experience that match changes in the firing pattern of sensory neurons within a few milliseconds.
Another important principle of sensory systems is that they detect contrast, changes in the temporal and spatial patterns of stimulation.
Receptor adaptation: if a stimulus persists unchanged, the neural response and the corresponding sensation diminish.
Receptor adaptation is thought to be an important neural basis for perceptual adaptation, when a constant stimulus fades from consciousness.
Slowly adapting receptors: receptors that encode stimulus duration by generating APs throughout the period of stimulation.
Rapidly adapting receptors: receptors that only respond at the beginning and end of a stimulus; they cease firing in response to constant amplitude stimulation and only respond to changes.
Slowly and rapidly adapting receptors illustrate another important principle of sensory coding: neurons signal important properties of stimuli not only when they fire, but also when they slow or stop firing.
The temporal properties of a changing stimulus are encoded as changes in the firing pattern.
Interspike interval: time between spikes.
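A toy firing-rate sketch contrasting slowly and rapidly adapting receptors responding to a step indentation, following the definitions above. The rates and time course are illustrative assumptions.

```python
import numpy as np

dt = 0.001                                          # 1 ms time step
t = np.arange(0.0, 1.0, dt)                         # 1 s of simulated time
stimulus = ((t > 0.2) & (t < 0.8)).astype(float)    # step indentation from 0.2 to 0.8 s

# Illustrative firing-rate models (rates are made-up numbers, not textbook values):
# the slowly adapting receptor tracks the stimulus for its whole duration,
# the rapidly adapting receptor responds only to changes (onset and offset).
slowly_adapting = 80.0 * stimulus
rapidly_adapting = 120.0 * np.abs(np.diff(stimulus, prepend=0.0))

for label, rate in [("slowly adapting", slowly_adapting),
                    ("rapidly adapting", rapidly_adapting)]:
    active = t[rate > 0]
    print(f"{label}: fires from {active.min():.3f} s to {active.max():.3f} s")
```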
The receptive fields of sensory neurons provide spatial information about stimulus location.
The position of a sensory neuron’s input terminals in the sense organ is a major component of the specific information conveyed by that neuron.
Receptive field: the region of the sensory surface (e.g. skin or retina) whose stimulation activates the sensory neuron.
Perceptive field: the region that a sensation is perceived to have come from.
The receptive and perceptive field usually coincide.
The size of the stimulus determines the total number of receptors that are activated, so the spatial distribution of active and silent receptors provides a neural image of the size and edges of the stimulus.
The spatial resolution of a sensory system depends on the total number of receptor neurons and the distribution of receptive fields across the area.
E.g. Hands have more receptor neurons than arms, so we can discriminate more using our hands. The fovea of the retina has more receptor neurons than the surrounding retina.
Synapses in sensory pathways provide an opportunity to modify the signal from receptors.
The responses of CNS neurons to sensory stimuli are more variable from trial to trial than those of peripheral receptors.
The axons of sensory projection neurons terminate in the brain in an organized manner that maintains their spatial arrangement.
E.g. Sensory neurons for touch in adjacent regions of the skin project to adjacent neurons in the CNS, and this topographic arrangement of receptive fields is preserved throughout the early somatosensory pathways.
Thus, each primary sensory area in the brain contains a topographic, spatially organized map of the sense organ. This topography extends to all levels of a sensory system.
E.g. Somatotopic for somatosensory, retinotopic for visual, and tonotopic for auditory.
Neurons in the cerebral cortex are specialized to integrate and detect specific features of stimuli beyond their location in the sense organ.
E.g. Simultaneous activation of specific groups of receptors, direction of motion, or tonal sequences of frequencies (temporal pattern of receptor activation).
In each successive stage of cortical processing, the spatial organization of stimuli is progressively lost as neurons become less concerned with the descriptive features of stimuli and more concerned with properties of behavioral importance.
Sensory information is processed in parallel pathways in the cerebral cortex.
Review of the what/ventral and where/dorsal visual pathways.
Neural recordings confirm that neurons change their sensitivity, as reflected in their firing rates, much more so than their selectivity for particular stimuli.
What we perceive is always some combination of the sensory stimulus itself and the memories it both evokes and builds upon.
Association is a powerful mechanism and much of learning consists of making associations through repetition and retrieval.
E.g. If we listen to a song over and over again, the circuits of our auditory system are modified by the experience and we learn to anticipate what comes next, completing the phrase before it occurs.
How does the brain “recognize” a specific pattern of inputs from a population of presynaptic neurons?
One potential mechanism is called template matching where neurons fire if the arriving APs approximately fit the neuron’s pattern of synaptic connections.
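A minimal sketch of the template-matching idea just mentioned: treat the neuron’s synaptic weights as a template and fire when the presynaptic activity pattern matches it closely enough. The cosine-similarity criterion and the threshold are assumptions made for illustration, not the book’s model.

```python
import numpy as np

def template_match(input_pattern, synaptic_weights, threshold=0.8):
    # Fire if the presynaptic activity pattern approximately matches the
    # neuron's pattern of synaptic weights (cosine similarity as the "fit").
    similarity = np.dot(input_pattern, synaptic_weights) / (
        np.linalg.norm(input_pattern) * np.linalg.norm(synaptic_weights)
    )
    return similarity >= threshold

weights = np.array([1.0, 0.0, 1.0, 1.0, 0.0])       # the neuron's "template"
print(template_match(np.array([0.9, 0.1, 1.0, 0.8, 0.0]), weights))  # True: good fit
print(template_match(np.array([0.0, 1.0, 0.0, 0.1, 1.0]), weights))  # False: poor fit
```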
Highlights
Sensations come from the interaction between an external stimulus and the billions of sensory receptors that innervate every organ of the body.
All sensory systems respond to four elementary features of stimuli: modality, location, intensity, and duration. This is first captured by the receptor potential and then converted into sequences of APs.
The intensity and duration of stimulation are represented by the amplitude and time course of the receptor potential, and by the total number of receptors activated. The intensity is coded as the frequency of firing, and the duration is encoded by the dynamics of the spike train.
The modality and location of stimulation are represented by each receptor’s receptive field and the pathway. The identity of the active sensory neuron signals not only the modality of a stimulus, but also the place where it occurs.
Sensory information is processed in parallel in the CNS and to maintain the specificity of each modality within the nervous system, receptor axons are segregated into discrete anatomical pathways that terminate in unimodal nuclei. Processing isn’t strictly hierarchical due to feedback.
Sensory information is also processed serially in the CNS, passing through successive stages in the spinal cord, brain stem, thalamus, and cerebral cortex.
Throughout processing, sensory information maintains its topographical map such that receptors that are physically close in the sensory organ project information to physically close neurons in the brain.
Chapter 18: Receptors of the Somatosensory System
Somatosensory system: the system that transmits information coded by receptors distributed throughout the body.
Three major functions of the somatosensory system
Proprioception: sense of oneself.
E.g. Awareness of the posture and movement of our own body, particularly the four limbs and the head.
Exteroception: sense of interaction with the external world.
E.g. Touch, pressure, stroking, motion, vibration, heat, cold, pain (nociception).
Interoception: sense of the function of the major organ systems of the body and its internal state.
E.g. Cardiovascular, respiratory, digestive, and renal systems. Monitors blood gases and pH, and tissue stretch in organs such as the bladder and digestive tract.
All of the somatic senses are mediated by one class of sensory neurons, the dorsal root ganglion (DRG) neurons.
DRG neurons are a variant of bipolar cells called pseudo-unipolar cells: they have a single axon that splits into two branches, one projecting to the periphery and one projecting to the CNS.
This axon is called the primary afferent fiber since it serves as a single transmission line.
Dermatome: the area of skin innervated by the sensory axons of a single dorsal root (spinal segment).
Bundles of primary afferent fibers form the peripheral nerves.
Damage to peripheral nerves or their targets in the brain may produce sensory deficits in more than one somatosensory submodality or motor deficits.
Five functional zones of a DRG neuron
Receptive zone
Spike generation site
Peripheral nerve fiber
Cell body
Spinal or cranial nerve
Stimuli of sufficient strength produce APs that are transmitted along the peripheral nerve fiber, through the cell body, and into the spinal cord.
Four groups of muscle nerves
Group I: axons in muscle nerves innervate muscle spindle receptors and Golgi tendon organs, signaling muscle length and contractile force.
Group II: innervate secondary spindle endings and receptors in joint capsules, signaling proprioception.
Group III: the smallest myelinated muscle afferents.
Group IV: unmyelinated afferents that signal trauma or injuries in muscles and joints.
Electrical stimuli of increasing strength evoke APs in different groups, with APs first evoked in the largest axons because of their lower electrical resistance.
As fibers from more and different groups are recruited, the compound signal shows successive bumps, each reflecting a different group of fibers being activated.
These differences in fiber diameter and conduction velocity of peripheral nerves allow signals of touch and proprioception to reach the spinal cord earlier than noxious or thermal signals.
Mechanoreceptor: a receptor that’s sensitive to physical deformation of the surrounding tissue.
E.g. Pressure, stretch, suction.
Mechanical stimulation deforms the receptor protein, opening stretch-sensitive ion channels and depolarizing the receptor neuron.
Merkel cells: sensory epithelial cells that form close contacts with the terminals of large-diameter sensory nerve axons at the epidermal-dermal junction.
Merkel cells serve a similar receptive function in the sense of touch as auditory hair cells in the cochlea and taste cells in the tongue.
Experiments indicate that Merkel cells are responsible for the sustained response to static touch.
Hairs on the surface of the skin provide another important set of touch-end organs.
Humans can perceive motion of individual hairs and can localize the sensation to the base of the hair.
The innervation pattern of hair follicles in the skin follows the two principles of convergence and divergence.
E.g. Each hair follicle in the skin provides input to multiple sensory afferent fibers. This overlap provides redundancy of sensory input from a small patch of skin.
The muscle spindle is the principal receptor for proprioception.
Experiments on fatigued or partially paralyzed muscles show that perceived muscle force is mainly related to centrally generated effort rather than actual muscle force.
Joint receptors play little, if any, role in postural sensation of joint angles. Instead, perception of the angle of joints depends on afferent signals from muscle spindle receptors and efferent motor commands.
Humans recognize four distinct types of thermal sensation: cold, cool, warm, and hot. These result from differences between the normal skin temperature and the external temperature.
Temperature sense, like pain and itch, is mediated by a combinatorial code of multiple receptor types transmitted by small-diameter afferent fibers.
Although we’re exquisitely sensitive to sudden changes in skin temperature, we’re normally unaware of the wide swings in skin temperature that occur as our blood vessels expand or contract to discharge or conserve body heat.
Thermal stimuli activate specific classes of transient receptor potential (TRP) channels in neurons. At least six thermally sensitive TRP receptors have been identified, and the thermal sensitivity of a neuron is determined by the specific TRP receptors expressed in its nerve terminals.
Two classes of TRP receptors are activated by cold temperature and inactivated by warming
TRPM8: respond to temperatures below 25°C and perceived as cool or cold.
TRPA1: respond to temperatures below 17°C and perceived as cold or frigid.
Cold receptors are about 100 times more sensitive to sudden drops in skin temperature than to gradual changes. This extreme sensitivity to change allows us to detect weak winds.
Four classes of TRP receptors are activated by hot temperature and inactivated by cooling
TRPV3: respond to temperatures above 35°C and perceived as warm to hot.
TRPV1 and TRPV2: respond to temperatures above 45°C and perceived as burning pain.
TRPV4: respond to temperatures above 27°C and signal normal skin temperatures.
Unlike cold receptors, warm receptors act more like simple thermometers since their firing rates increase with increasing skin temperature.
Warm receptors are less sensitive to rapid changes in skin temperature than cold receptors, resulting in humans being less responsive to warming than cooling.
We can detect sudden skin warming at a threshold of 0.1°C.
Substances such as capsaicin and menthol produce burning or cooling sensations when applied to the skin because they bind to specific TRP receptors (capsaicin to TRPV1, menthol to TRPM8).
Nociceptors (pain receptors) respond directly to mechanical and thermal stimuli, and indirectly to other stimuli by means of chemicals released from cells in the traumatized tissue.
Two classes of nociceptors
Mechanical (high-threshold mechanoreceptors): respond to stimuli that puncture, squeeze, or pinch the skin and are perceived as sharp, pricking pain.
Polymodal: respond to a variety of noxious mechanical, thermal, and chemical stimuli and are perceived as dull, burning pain.
Itch is a distinctive cutaneous sensation and is mediated by both TRPV1 and TRPA1 receptors.
How can TRPA1 receptors mediate both itch and noxious cold temperatures? Same goes for TRPV1.
The answer lies in the use of combinatorial codes by small-diameter sensory nerve fibers.
E.g. Noxious cold is sensed when both TRPA1 and TRPM8 receptors are excited, but itch is perceived when TRPM8 receptors are silent.
E.g. Noxious heat is sensed when TRPV1, TRPV2, and TRPV3 expressing fibers are co-activated, but itch is perceived when only TRPV1 fibers respond.
Similar combinatorial codes using multiple receptors are commonly used by other chemical senses such as olfaction and taste.
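A toy decoder for the combinatorial codes just described, mapping which TRP-expressing fibers are active to a percept. The decision rules mirror the examples above but are a deliberate simplification, not a complete model.

```python
def decode_percept(active_receptors):
    # `active_receptors` is a set of TRP channel names expressed by the
    # small-diameter fibers that are currently firing.
    if {"TRPA1", "TRPM8"} <= active_receptors:
        return "noxious cold"
    if "TRPA1" in active_receptors and "TRPM8" not in active_receptors:
        return "itch"
    if {"TRPV1", "TRPV2", "TRPV3"} <= active_receptors:
        return "noxious heat"
    if "TRPV1" in active_receptors:
        return "itch"
    return "no distinct percept"

print(decode_percept({"TRPA1", "TRPM8"}))           # noxious cold
print(decode_percept({"TRPA1"}))                    # itch
print(decode_percept({"TRPV1", "TRPV2", "TRPV3"}))  # noxious heat
print(decode_percept({"TRPV1"}))                    # itch
```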
Visceral sensations represent the status of internal organs for behaviors like respiration, eating, drinking, and reproduction.
The sensory terminal regions of peripheral nerve fibers are usually unmyelinated and don’t express the voltage-gated sodium and potassium ion channels that underlie AP generation.
This design optimizes information gathering in the receptive field by dedicating the highly branched terminal membrane area to sensory transduction channels.
Ensemble recording techniques show that even at the receptor level, there are no canonical/standard responses to somatic stimuli, but rather common patterns of responses.
Furthermore, individual somatosensory neurons appear to be polysensory, responding to more than one modality such as both touch and pain.
Recording neurons simultaneously rather than one at a time is essential for decoding population activity and defining the circuits underlying diverse sensory modalities.
We note that neurons in the dorsal root, trigeminal, and vagal ganglia don’t appear to be spatially clustered or segregated functionally by modality such as by mechanosensation, thermal, or chemical.
The principal organizational feature of these sensory ganglia is that body topography is maintained throughout the sensory pathway: the mapping of which area of skin or which muscle is innervated by particular sensory neurons extends to the higher brain structures that analyze sensory information and organize specific behaviors.
The medial division of the spinal cord transmits proprioceptive and tactile information from the innervated body region, while the lateral division transmits noxious, thermal, pruritic, and visceral information.
The distribution of spinal nerves in the body forms the anatomical basis of the topographical maps of sensory receptors in the brain that underlie our ability to localize specific sensations.
Skipping over the 10 laminae/layers of the spinal gray matter.
Highlights
The most important principle of somatosensory organization is specificity: each of the bodily senses arises from specific types of receptors distributed throughout the body.
Dorsal root ganglion (DRG) neurons are the sensory receptor cells of the somatosensory system. The functional role of a DRG neuron is determined by the sensory receptor molecules expressed in its distal terminals in the body.
E.g. Mechanoreceptors are sensitive to local tissue distortion, thermoreceptors are sensitive to specific temperature ranges and shifts in temperature, and chemoreceptors are sensitive to specific molecular structures.
Mechanosensation is mediated by Piezo2 proteins that are sensitive to compression or stretch and these receptors transmit sensory information rapidly.
Thermosensation is mediated by transient receptor potential (TRP) ion channels that are gated in response to local temperature ranges and changes.
E.g. Cold, cool, warm, or hot.
Chemoreceptors change their conductance when binding to specific chemicals, giving rise to sensations of pain, itch, or visceral function.
Activation of somatosensory receptors produces local depolarization of the distal nerve terminals called the receptor potential, whose amplitude is proportional to the strength of the stimulus.
Receptor potentials are converted near the distal nerve terminals to AP trains whose frequency is linked to the intensity/strength of the stimulus.
Individual DRG neurons have multiple sensory endings in the skin, muscle, or viscera, forming complex and overlapping receptive fields. This enables redundant, parallel pathways for information transmission to the brain.
The information transmitted from each type of somatosensory receptor in a particular part of the body is conveyed in discrete pathways to the spinal cord or brain stem by the axons of DRG neurons. The axons are gathered together in peripheral nerves.
Axon diameter and myelination, both of which determine the speed of AP conduction, vary according to the need for rapid signaling.
When DRG axons enter the CNS, they separate to terminate in distinct layers of the spinal cord gray matter and/or project directly to higher centers in the brain stem.
Chapter 19: Touch
The human hand is one of evolution’s great creations due to its fine manipulative capacity and fine sensory capacity.
If we lose tactile sensation in our fingers, we lose manual dexterity.
The fingertips are among the most densely innervated parts of the body, providing extensive and redundant somatosensory information about objects manipulated by the hand.
If we become skilled using a tool, the tool feels like an extension of our body because two groups of touch receptors monitor the vibrations and forces produced at the tool’s working surface.
We can also recognize objects placed in the hand from touch alone and we don’t have to think about the information provided by each finger to deduce the object.
Instead, information flows through sensory pathways to memory that instantly matches previously stored representations of the object.
Interestingly, we perceive an object as a single object and not as a collection of discrete features.
Touch: direct contact between two physical bodies.
Touch can be passive, when something else moves against you, or active, when you move against something else.
Active manipulation of objects is based on three dimensions: volumetric, topographic, and elastic properties of objects.
During active touch, fibers descending from the motor centers of the cerebral cortex terminate on interneurons in the medial dorsal horn that also receive tactile input from the skin. Similar fibers from cortical motor areas terminate in the dorsal column nuclei, providing an efference copy (or corollary discharge) of the motor commands that generate behavior.
Using the efference copy, neurons can distinguish between expected tactile input from a movement and passive tactile input.
Touch receptors are innervated by two types of axons/fibers
Slowly adapting (SA): respond to sustained skin indentation with sustained discharge.
Rapidly adapting (RA): respond to changes in skin indentation but not to sustained indentation.
Thus, pressure is captured by SA fibers while motion is captured by RA fibers.
Touch receptors are further subdivided into two types based on size and location in the skin.
Two types of touch receptors
Type 1: terminate in clusters of small receptor organs in the superficial layers of the skin. Have small, highly localized receptive fields with multiple spots of high sensitivity that reflect the branching patterns of their axon terminals in the skin.
Type 2: innervate the skin sparsely and terminate in single large receptors in the deep layers of the skin. Have large, distributed receptive fields with a single hot spot where sensitivity is greatest; this point is located directly above the receptor.
The small receptive fields of type 1 receptors are complemented by the high density of such receptors in the fingertips.
Importantly, the receptive fields of type 1 fibers are significantly smaller than most objects that we perceive, therefore receptors only signal the spatial properties of a limited part of an object.
Only by integrating the responses from many receptors do we get a unified percept.
Receptive fields become larger the further away from the fingertips and closer to the palm, consistent with the lower density of mechanoreceptors in these regions.
Four types of mechanoreceptors in the hand
Meissner corpuscles (RA1): specific changes in skin indentation.
Merkel cells (SA1): specific sustained skin indentation.
Pacinian corpuscles (RA2): general changes in skin indentation.
Ruffini endings (SA2): general sustained skin indentation.
The sense of touch can be understood as the combination of information from these four types of receptors.
Our ability to resolve spatial details using touch depends on which region of the body is contacting the object.
Two-point discrimination: a test if we can tell two points of contact apart.
The two-point discrimination test measures our tactile acuity.
Tactile acuity: the two-point separation at which discrimination is midway between chance and perfect, i.e., above random guessing but below a perfect score.
E.g. It’s about 1 mm on the fingertips in young adults, but declines to about 2 mm in the elderly.
Tactile acuity is highest on the fingertips and lips where the receptive fields are smallest.
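A small sketch of the acuity criterion described above: given simulated percent-correct data from a two-point discrimination task, find the separation whose performance is midway between chance and perfect. The logistic performance curve and its parameters are assumptions for the example; real data would come from psychophysical testing.

```python
import numpy as np

# Simulated percent-correct in a two-point discrimination task as a function
# of separation (mm); chance is 50% in a two-alternative task.
separations = np.linspace(0.0, 5.0, 51)
chance, perfect = 0.5, 1.0
percent_correct = chance + (perfect - chance) / (1.0 + np.exp(-3.0 * (separations - 1.0)))

# Tactile acuity: the separation where performance is midway between chance and perfect.
criterion = (chance + perfect) / 2.0          # 0.75 correct
acuity = np.interp(criterion, percent_correct, separations)
print(f"tactile acuity ~ {acuity:.2f} mm")
```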
Our fingerprints give skin a rough surface that increases friction, allowing us to grasp objects without slippage.
Humans are able to distinguish horizontal and vertical orientations of gratings with remarkably narrow spacing of the ridges.
When reading Braille, specific combinations of SA1 fibers that fire synchronously signal the spatial arrangement of the Braille dots while finger motion activates RA1 fibers, enhancing the signals provided by SA1 fibers.
Slowly adapting fibers detect object pressure and form.
The most important function of SA1 and SA2 fibers is their ability to signal skin deformation and pressure.
Skin deformation is perceived as hardness or softness as hard objects indent the skin more than soft objects.
SA1 fibers also signal object size: larger objects have less sharp curvature, so the responses of individual SA1 fibers are weaker.
E.g. A pencil tip pressed 1 mm into the skin feels sharp, but an eraser pressed 1 mm into the skin feels blunt.
SA1 fibers respond stronger to a smaller object than a larger one because the force needed to indent the skin is concentrated at a small contact point.
SA2 fibers respond stronger to skin stretch rather than indentation because of their anatomical location along palm folds and finger joints.
Thus, SA2 fibers aid in the perception of finger joint angle by detecting skin stretch over the knuckles and in the webbing between fingers.
SA2 fibers also provide proprioceptive information about hand shape and finger movements when the hand is empty, when the fingers are fully extended, and when we’re making a fist.
We use this proprioceptive information to preshape our hand to efficiently grasp objects, opening the fingers just wide enough to clear the object.
Rapidly adapting fibers detect motion and vibration.
Vibration is the sensation produced by sinusoidal stimulation of the skin.
Each type of mechanosensory fibers is most sensitive to a specific range of frequencies.
SA1 for below 5 Hz, RA1 for 10-50 Hz, and RA2 for 50-400 Hz.
The RA2 receptor is the most sensitive mechanoreceptor in the somatosensory system and is exquisitely sensitive to high-frequency vibratory stimuli.
The ability to feel vibration allows us to feel conditions at the working surface of a tool in our hand as if our fingers themselves were touching the object under the tool.
E.g. When you use a spoon, it feels like an extension of your body and not just a tool because the vibrations in the spoon are captured by RA2 receptors.
E.g. We can write in the dark because we feel the vibration of the pen as it contacts the paper and transmits frictional forces from the surface roughness to our fingers. It’s as if our fingers were touching the paper itself.
When detecting vibration, SA1, RA1, and RA2 fibers have different firing patterns but their spike trains have important shared characteristics.
Each neuron fires at a particular phase of the vibratory cycle (usually the phase at indentation) and its phasic pattern of spikes replicates the vibratory frequency.
The patterning of spike trains is further reinforced because the population of fibers fires synchronously, enabling the frequency information to be preserved centrally due to synaptic integration.
The total number of spikes per burst also increases as the stimulus amplitude rises, allowing each fiber to multiplex (send multiple features) signals of vibratory frequency and intensity.
E.g. Frequency is conveyed by the temporal pattern of the spike train, while intensity (amplitude of vibration) is conveyed by the total number of output spikes.
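A toy spike-train sketch of this multiplexing: spikes are phase-locked to the vibratory cycle, so the interspike interval carries frequency, while the number of spikes per cycle grows with amplitude, so the spike count carries intensity. Both rules are simplifying assumptions for illustration.

```python
import numpy as np

def vibration_spike_train(freq_hz, amplitude, duration_s=0.1):
    # Toy phase-locked response: spikes occur at a fixed phase of each
    # vibratory cycle, and the number of spikes per cycle grows with amplitude.
    spikes_per_cycle = max(1, int(round(amplitude)))
    cycle_starts = np.arange(0.0, duration_s, 1.0 / freq_hz)
    jitter = 0.0005  # spikes within a cycle are spread over 0.5 ms
    return np.concatenate(
        [start + jitter * np.arange(spikes_per_cycle) for start in cycle_starts]
    )

for freq, amp in [(100, 1.0), (100, 3.0), (200, 1.0)]:
    spikes = vibration_spike_train(freq, amp)
    intervals = np.diff(spikes)
    # Frequency is recoverable from the dominant (cycle-length) interspike
    # interval; intensity is recoverable from the total spike count.
    print(f"{freq} Hz, amplitude {amp}: {len(spikes)} spikes, "
          f"longest ISI ~ {intervals.max() * 1000:.1f} ms")
```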
Both slowly and rapidly adapting fibers are important for grip control.
Touch receptors are not only useful for object recognition, but they’re also useful during skilled hand movements.
All four classes of touch fibers respond to grasp and each fiber class monitors a particular function.
E.g. SA1 fibers signal the amount of grip force applied by each finger, RA1 fibers signal how quickly the grasp is applied, RA2 fibers signal small vibrations transmitted through the object.
We know when an object is returned to a table because the object and table produce vibrations when they make contact, and therefore we can manipulate the object without looking at it.
After grasping, RA1 and RA2 fibers stop responding and SA2 fibers monitor hand posture.
We lift and manipulate an object with delicacy as our grip force just exceeds the force that would result in a slip, and that grip force is adjusted automatically to compensate for frictional differences.
We predict how much force is required to grasp and lift an object, and modify these forces based on tactile information provided by SA1 and RA1 fibers.
The importance of tactile information in grasping is seen in cases of nerve injury or during local anesthesia of the hand where patients apply unusually high grip forces, and coordination between grip and load forces is poor.
If the object is unexpectedly heavy or jolted and begins to slip from the hand, RA1 fibers fire in response to small tangential slip movements of the object.
The result is that RA1 activity signals to the motor cortex to increase grip force.
Fibers in the dorsal columns and neurons in the dorsal column nuclei are organized topographically with the upper body represented laterally and the lower body represented medially.
The somatosensory submodalities of touch and proprioception are also segregated functionally in these regions and neurons of distinct types are spatially separated.
Modality segregation is a consistent feature of the projection pathways to the primary somatosensory cortex.
Conscious awareness of touch is thought to originate in the cerebral cortex.
Tactile information enters the cerebral cortex through the primary somatosensory cortex (S-I) in the postcentral gyrus of the parietal lobe.
Pyramidal neurons form about 80% of S-I neurons.
Mountcastle discovered that the S-I cortex is organized into vertical columns.
Each column is 300-600 micrometers wide and spans all six cortical layers. Neurons within a column receive inputs from the same local area of skin and respond to the same class or classes of touch receptors.
Cortical columns are an elementary functional module of the neocortex: they provide an anatomical structure that organizes sensory inputs sharing location and modality.
The columnar organization of the cortex is a direct consequence of intrinsic cortical circuitry, the projection patterns of thalamocortical axons, and the migration pathways of neuroblasts during cortical development.
Thalamocortical axons terminate primarily on clusters of stellate cells in layer IV, whose axons project vertically toward the surface of the cortex. Thus, thalamocortical inputs are relayed to a narrow column of pyramidal cells, which allows the same information to be processed by a column of neurons throughout the cortex.
Neurons in layers II and III also project to layer V in the same column, to higher cortical areas in the same hemisphere, and to mirror-image locations in the opposite hemisphere.
These connections allow for complex signal integration.
Pyramidal neurons in layer V provide the principal output from each column.
Cortical columns are organized somatotopically, meaning that there’s a complete somatotopic representation of the body (roughly matching the spinal dermatomes) in each of the four areas of S-I.
Body surface is represented in at least 10 distinct neural maps in the parietal lobe where each map mediates different aspects of tactile sensation.
E.g. Four in S-I, four in S-II, and at least two in the posterior parietal cortex. Areas 3b and 1 process surface texture, whereas area 2 processes object size and shape.
Another important feature of somatotopic maps is the amount of cerebral cortex devoted to each body part.
Homunculus: a neural map of the body where each part of the body is represented in proportion to its importance to the sense of touch.
E.g. Large brain regions for the hand, foot, and mouth.
Cortical magnification: the amount of cortical area devoted to a unit area of skin.
Cortical magnification varies by more than a hundredfold across different body surfaces and is closely correlated with the innervation density and thus spatial acuity of touch receptors in an area of skin.
The areas with the greatest cortical magnification in the human brain are the lips, tongue, fingers, and toes.
Neurons in S-I are at least three synapses beyond touch receptors in the skin.
E.g. Dorsal column nuclei, thalamus, and cortex.
We perceive that a particular location on the skin is touched because specific populations of neurons in the cortex are activated.
This experience can be induced by electrical or optogenetic stimulation of the same cortical neurons.
The receptive fields of cortical neurons are much larger than those of somatosensory fibers in peripheral nerves.
E.g. SA1 and RA1 are receptive to tiny spots on the skin, whereas cortical neurons are receptive to an entire fingertip.
Receptive fields of higher cortical areas are even larger. Large receptive fields allow cortical neurons to integrate the fragmented information from smaller receptive fields, enabling us to recognize the overall shape of an object.
The receptive fields of cortical neurons usually have an excitatory zone surrounded or flanked by inhibitory zones.
The spatial arrangement of excitatory and inhibitory inputs to a cortical neuron determines which stimulus features are encoded by that neuron.
E.g. Three receptive fields positioned horizontally can be used to detect vertical motion.
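A minimal sketch of how an arrangement of excitatory and inhibitory inputs makes a cortical neuron feature-selective: a one-dimensional receptive field with an excitatory center and inhibitory flanks responds to a narrow bar but not to uniform stimulation. The weights and threshold are illustrative assumptions.

```python
import numpy as np

# Toy receptive field over a 1-D patch of skin (or retina): excitatory center
# flanked by inhibitory zones.
weights = np.array([-1.0, -1.0, 2.0, 2.0, 2.0, -1.0, -1.0])

def cortical_response(stimulus, threshold=2.0):
    # Thresholded weighted sum of receptor activity across the field.
    drive = float(np.dot(weights, stimulus))
    return max(0.0, drive - threshold)

narrow_bar = np.array([0, 0, 1, 1, 1, 0, 0])   # fills only the excitatory zone
uniform = np.ones(7)                           # covers excitation and inhibition

print("narrow bar:", cortical_response(narrow_bar))   # strong response
print("uniform   :", cortical_response(uniform))      # suppressed by the surround
```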
The size and position of receptive fields aren’t fixed but can be modified by experience or injury.
Cortical receptive fields appear to be formed during development and are maintained by simultaneous activation of input pathways.
E.g. Extensive stimulation of afferent pathways through repeated practice may strengthen synaptic inputs, improving perception and performance.
As information flows toward higher-order cortical areas, specific combinations of stimulus patterns are needed to excite neurons. Thus, touch information becomes more abstract.
E.g. S-II neurons don’t represent vibration as periodic spike trains linked to the vibration frequency, but instead abstract temporal properties of the stimulus, firing at different mean rates for different frequencies.
A similar frequency-dependent transition from temporal- to rate-coding neurons underlies sound processing in the primary auditory cortex.
S-II responses to vibration depend on the stimulus context: the same vibratory stimulus can evoke different firing rates depending on whether the preceding stimulus is higher or lower in frequency.
Lesions of the S-I cortex impair performance on simple tactile tests.
E.g. Touch thresholds, vibration and joint position sense, and two-point discrimination.
Loss of touch doesn’t cause paralysis or weakness because much of skilled movement is predictive, using sensory feedback mainly for adjustments.
Lesions to the posterior parietal cortex usually only result in mild difficulty with simple tactile tests, but do result in profound difficulty with complex tactile recognition tasks.
E.g. Failing to shape and orient the hand properly to grasp objects and misdirecting the arm during reaching.
Removal of S-II cortex in monkeys causes severe impairment in the discrimination of both shape and texture, and prevents animals from learning new tactile discriminations.
Highlights
At the first touch, the hand deconstructs the object into tiny segments distributed over a large population of around 20,000 sensory nerve fibers.
SA1 fibers provide high-fidelity information about the object’s spatial structure such as form and texture. SA2 fibers provide information about hand conformation and posture. RA1 fibers provide information about object motion in the hand. RA2 fibers provide information about vibration.
Information from touch receptors is conveyed to the brain by dorsal column fiber tracts in the spinal cord, relay nuclei in the brain stem and thalamus, and a hierarchy of intracortical pathways.
By analyzing patterns of activity across entire populations, the brain constructs a neural representation of objects and actions of the hand.
Brain processing of touch is helped by the topographic, somatotopic organization of neurons involved at each relay. Adjacent skin areas that are stimulated together are linked anatomically and functionally in central relays.
The brain also transforms the segregated representations of object properties into an integrated representation of complex object properties.
The peripheral fibers deliver more information than can be handled, so the brain compensates by selecting which information to process further.
The touch system provides information necessary for the control and guidance of movement.
Chapter 20: Pain
Pain: an unpleasant sensation and emotional experience associated with actual or potential tissue damage.
E.g. Pricking, burning, aching, stinging, and soreness.
Pain serves an important protective function by alerting us to injuries that require evasion or treatment.
Pain is unlike any other sensory modality such as vision, hearing, and smell, in that it has an urgent and primitive quality, possessing a powerful emotional component that takes over consciousness.
The perception of pain is subjective and is influenced by many factors.
The variability of the perception of pain is yet another example of a principle that we’ve encountered: pain isn’t the direct expression of a sensory event, but rather the product of elaborate processing in the brain by a variety of neural signals.
Pain can be experienced briefly (acute) or persistently (chronic).
Two types of nociceptive fibers
Aδ axons: conduct APs at 5-30 m/s and are thinly myelinated.
C-fiber axons: conduct APs at less than 1 m/s and are unmyelinated.
Three classes of nociceptors
Thermal: activated by extreme temperatures (> 45°C or < 5°C).
Mechanical: activated by intense pressure to the skin.
Polymodal: activated by high-intensity mechanical, chemical, or thermal stimuli.
These three classes of nociceptors are widely distributed in skin and deep tissues and are often coactivated.
E.g. When a hammer hits your thumb, you initially feel a sharp pain (transmitted by Aδ fibers) followed by a prolonged aching and sometimes burning pain (transmitted by C fibers).
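A back-of-the-envelope sketch of why first and second pain are separated in time, using the conduction velocities listed above and an assumed 1 m path from the thumb to the spinal cord (the exact velocities and path length are illustrative).

```python
# Rough arrival-time comparison for the "hammer on thumb" example.
path_length_m = 1.0
velocities_m_per_s = {
    "Aδ (sharp first pain)": 20.0,   # within the 5-30 m/s range above
    "C (dull second pain)": 0.8,     # below 1 m/s, as above
}

for fiber, v in velocities_m_per_s.items():
    print(f"{fiber}: ~{path_length_m / v * 1000:.0f} ms to reach the spinal cord")
```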
Silent nociceptors are found in the viscera and signal inflammation and various chemical agents.
Noxious stimuli depolarize the bare nerve endings of afferent axons and generate APs that are propagated centrally.
Depolarization is due to transient receptor potential (TRP) ion channels.
Uncontrolled activation of nociceptors is associated with several pathological conditions.
Allodynia: pain in response to normal stimuli.
Hyperalgesia: exaggerated response to noxious stimuli.
Nociceptor signals are transmitted to neurons in the dorsal horn of the spinal cord.
There’s a tight link between the anatomical organization of dorsal horn neurons, their receptive properties, and their function in sensory processing (topographic organization).
Layers/lamina of the dorsal horn
Lamina I: respond to noxious stimuli.
Lamina II: respond to pain- and itch-provoking inputs.
Lamina III/IV: respond to innocuous cutaneous stimuli.
Lamina V: respond to noxious stimuli and project to the brain stem and thalamus. Also responds to nociceptors in visceral tissues.
Lamina VI: respond to innocuous joint movement and don’t contribute to the transmission of nociceptive information.
Lamina VII/VIII: respond to noxious stimuli from either side of the body, whereas most dorsal horn neurons receive unilateral input. May contribute to the diffuse quality of pain.
Two main classes of neurotransmitters for nociceptive neurons
Glutamate is the primary neurotransmitter of all primary sensory neurons, regardless of sensory modality.
Neuropeptides are released as cotransmitters by many nociceptors with unmyelinated axons.
Normal sensory signaling can be dramatically altered when peripheral tissue is damaged, resulting in an increase in pain sensitivity or hyperalgesia.
This condition can be caused by sensitizing peripheral nociceptors through repetitive exposure to noxious stimuli.
Sensitization is triggered by a complex mix of chemicals released from damaged cells that accumulate at the site of tissue injury and lower the threshold of nociceptors.
Bradykinin is one of the most active pain-producing agents; its potency comes from directly activating Aδ and C nociceptors.
Repeated exposure to noxious stimuli results in long-term changes in the response of dorsal horn neurons through mechanisms similar to those underlying the long-term potentiation of synaptic responses in many circuits in the brain.
In essence, these changes in the excitability of dorsal horn neurons are a memory of the state of C-fiber input.
This phenomenon has been termed central sensitization to distinguish it from sensitization at the peripheral terminals of nociceptors.
Central sensitization in the dorsal horn can decrease pain thresholds, leading to spontaneous pain in the absence of stimulation.
Central sensitization is also due to nerve injury-induced activation of microglia and consequent reduced GABAergic inhibition.
Skipping over the four major ascending pathways of nociceptive information to the brain.
The two thalamic nuclei that relay nociceptive information are the lateral and medial nuclear groups.
The lateral thalamus processes information about the precise location of an injury.
Electrical stimulation of the thalamus can also result in intense pain.
No single area of the cortex is responsible for pain perception, but the anterior cingulate and insular cortex are associated with it.
The anterior cingulate gyrus is part of the limbic system and is involved in processing emotional states associated with pain.
The insular cortex processes information about the internal state of the body and contributes to the autonomic component of pain responses.
Interestingly, lesions to the cingulate cortex or the pathway from the frontal cortex to the cingulate cortex reduce the affective features of pain without eliminating the ability to recognize the intensity and location of the injury.
E.g. Patients perceive noxious stimuli as painful and can distinguish sharp from dull pain, but fail to display appropriate emotional responses.
This implicates the insular cortex as an area where sensory, affective, and cognitive components of pain are integrated.
One effective means of suppressing nociception involves stimulation of the periaqueductal gray region, which produces profound and selective analgesia (inability to feel pain) and has been effective in relieving pain in some patients.
This stimulation-produced analgesia is remarkably modality-specific as animals still respond to touch, pressure, and temperature but not to pain.
Skipping over the opioid peptide section.
Highlights
Peripheral nociceptive axons with cell bodies in dorsal root ganglia include small-diameter unmyelinated (C) and myelinated (Aδ) afferents. Larger diameter Aβ afferents respond only to innocuous stimulation but, following injury, can activate CNS pain circuitry.
All nociceptors use glutamate as their excitatory neurotransmitter and many also express an excitatory neuropeptide cotransmitter like substance P or CGRP.
Nociceptors terminate in the dorsal horn of the spinal cord where they excite interneurons and projection neurons.
A major brain target of dorsal horn projection neurons is the ventroposterolateral thalamus, which processes the location and intensity features of painful stimuli. Other neurons target the parabrachial nucleus (PB) of the dorsolateral pons, whose projections contribute to the affective/emotional features of the pain experience.
Allodynia, pain from an innocuous stimulus, results in part from peripheral sensitization of nociceptors. Peripheral sensitization occurs when there’s tissue injury, which lowers the threshold for activating nociceptors.
Hyperalgesia, exacerbated pain to painful stimuli, and allodynia are also due to altered activity in the dorsal horn called central sensitization that contributes to spontaneous activity of pain-transmission neurons and amplification of nociceptive signals.
Input carried by large-diameter, non-nociceptive afferents can reduce the transmission of nociceptive information to the brain by engaging GABAergic inhibitory circuits in the dorsal horn, which is the basis of pain relief by vibration.
The brain not only receives nociceptive information leading to the perception of pain, but also regulates the output of the spinal cord to reduce pain by an endorphin-mediated pain control system.
Electrical stimulation of the midbrain periaqueductal gray can engage a descending inhibitory control system which reduces the transmission of pain messages.
Chapter 21: The Constructive Nature of Visual Processing
The mechanisms that underlie vision aren’t obvious.
How do we perceive form and movement? How do we distinguish colors?
Vision isn’t only used for object recognition, but also for guiding our movements, and these separate functions are mediated by at least two parallel pathways in the brain.
The existence of parallel pathways raises one of the central questions of cognition: How are different types of information carried by different pathways integrated to form a coherent visual image?
Vision is a biological process that’s evolved with our ecological needs.
Gestalt: configuration or form.
The central idea of Gestalt psychology is that what we see depends not only on the properties of the stimulus, but also on its context.
The brain has a way of looking at the world, a set of expectations that derives, in part, from experience and in part from built-in neural wiring.
It’s tempting to speculate that the formal features of objects in natural scenes created evolutionary pressure on our visual systems to develop neural circuits that have made us sensitive to those features.
Separating the figure and background in a visual scene is an important step in object recognition.
Three levels of visual scene analysis
Low: local contrast, orientation, color, and movement are discriminated.
Intermediate: analysis of scene layout and surface properties, parsing images into surfaces and contours, and distinguishing foreground from background.
High: object recognition and object motion.
Because distributed processing is one of the main organizational principles in the neurobiology of vision, we must understand the anatomical pathways of the visual system to fully understand the physiological description of visual processing.
The retina, beginning with its photoreceptors (rods and cones), extracts some 20 local features.
E.g. Local contrasts of dark versus light, color, and motion.
These features are computed by different populations of specialized neural circuits forming independent processing modules that separately cover the visual field.
So each point in the visual field is processed in multiple channels that extract features simultaneously and in parallel.
Review of the LGN, primary visual pathway (geniculostriate pathway), primary visual cortex (striate cortex), retinotopy, ventral (what) and dorsal (where) visual pathways.
Form, color, motion, and depth are processed in discrete areas of the cerebral cortex.
If we include oculomotor areas and prefrontal areas that contribute to visual memory, then almost half of the cerebral cortex is involved with vision.
Top-down cognitive influences contain information on attention, object expectation, perceptual task, perceptual learning, and efference copy.
Review of on-center and off-center receptive fields.
More cortical area is dedicated to the central part of the visual field, where the receptive fields are smallest and the visual system has the greatest spatial resolution.
Receptive-field properties change from relay to relay along a visual pathway.
By comparing the properties before and after a relay, we can determine the function of each relay and learn how visual information is progressively analyzed by the brain.
E.g. The key property of the form pathway is selectivity for the orientation of contours in the visual field. This is an emergent property of signal processing in primary visual cortex; it isn’t a property of the cortical input but is generated within the cortex itself.
Like the somatosensory cortex, the visual cortex is organized into columns of specialized neurons.
Cells in the primary visual cortex with similar functional properties are located close together in columns.
Review of ocular dominance columns and orientation columns.
Embedded within the orientation and ocular dominance columns are clusters of neurons that have poor orientation selectivity but strong color preferences.
Any position in the visual field can be analyzed adequately in terms of orientation of contours, color and direction of movement of objects, and stereoscopic depth by a single computational module.
The columnar system serves as the substrate for two fundamental types of processing
Serial: occurs in succession in connections between cortical areas.
Parallel: occurs simultaneously in subsets of fibers that process different submodalities such as form, color, and motion, continuing the neural processing strategy started in the retina.
Columnar organization has several advantages.
Minimizes distance for neurons with similar functional properties to communicate with one another and allows them to share inputs from discrete pathways.
This efficient connectivity economizes on the use of brain volume and maximizes processing speed.
This allows for easy scaling.
Three pathways from the LGN
Parvocellular → Layers IVCβ and 6
Magnocellular → Layers IVCα and 6
Koniocellular → Layers 1, 2, and 3
These parallel pathways are only an approximation and there’s considerable interaction between pathways.
In general, the superficial layers of neocortex are responsible for connections to higher-order areas of the cortex.
E.g. Layer V pyramidal neurons project to the superior colliculus and pons. Layer VI cells are responsible for feedback projections, both to the thalamus and lower-order cortical areas.
The number of neurons projecting from the cortex to the LGN is ten times the number of projections from the LGN to the cortex. We don’t understand the function of this many feedback connections, but we suspect it has to do with control/modulation/attention.
The spike rates of excitatory neurons are constantly being balanced by matched inhibition that maintains stability.
In addition to serial feedforward, feedback, and local recurrent connections, connections also travel parallel to the cortical surface within each layer, providing long-range horizontal connections and allowing neurons to integrate information over a large visual field.
Review of population code and vector averaging.
The most sensitive part of a neuron’s tuning curve isn’t at the peak but along the flanks, where the tuning curve is steepest.
Changes in a stimulus must be sufficient to elicit a change in response that significantly exceeds the normal variability in response of the neuron. That way, changes in a stimulus are seen above noise.
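A minimal sketch of population-vector decoding of orientation from a bank of tuned neurons, in the spirit of the vector averaging reviewed above. The Gaussian tuning curves, the 15° spacing of preferred orientations, and the noise-free responses are simplifying assumptions.

```python
import numpy as np

preferred = np.linspace(0.0, 180.0, 12, endpoint=False)   # preferred orientations (deg)

def responses(stimulus_deg, width=30.0):
    # Gaussian tuning over circular orientation distance (period 180°).
    d = np.abs(preferred - stimulus_deg)
    d = np.minimum(d, 180.0 - d)
    return np.exp(-(d / width) ** 2)

def decode(r):
    # Population vector: sum unit vectors at twice the preferred angle
    # (doubling handles the 180° periodicity of orientation), weighted by firing.
    angles = np.deg2rad(2.0 * preferred)
    vec = np.sum(r[:, None] * np.stack([np.cos(angles), np.sin(angles)], axis=1), axis=0)
    return np.rad2deg(np.arctan2(vec[1], vec[0])) / 2.0 % 180.0

stim = 72.0
print(f"stimulus {stim} deg, decoded ~ {decode(responses(stim)):.1f} deg")
```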
When the brain represents a piece of information, an important consideration is the number of neurons that participate in that representation.
E.g. From the grandmother cell to population coding.
The nervous system doesn’t represent entire objects by the activity of single neurons. Instead, only some cells represent parts of an object and an ensemble of neurons represents an entire object.
This organization is known as a distributed or sparse code.
Highlights
Vision is a constructive process that isn’t the mere recording of visual input like a camera.
The tuning of neural circuits for visual features such as contrast, orientation, and motion often matches the distribution of that feature in the natural environment. This suggests an evolutionary-driven origin for neural circuitry.
Vision uses extensive parallel processing pathways such as the dorsal and ventral pathways.
Parallel processing starts at the retina as each retinal circuit analyzes different points in the visual field for different local features.
The different channels enter V1 at different layers in the cortex, although they mostly enter at layers 4 and 6.
V1 neurons share basic properties such as spatial location or orientation preference, forming columns.
V1 neurons form a visuotopic map that gradually changes with distance. Neural processing reflects its architecture with local vertical processing along columns, and lateral processing across columns.
A useful measure of visual processing is provided by changes in neuronal receptive fields along the visual pathway. A receptive field is the region of visual space that a neuron is receptive/responsive to and is further characterized by the neuron’s optimal stimulus in that region.
Receptive fields grow larger and more complex at successive stages along the visual pathway. Their optimal stimuli also increase in complexity.
One of the most important unsolved questions is the interaction between feedforward and feedback visual processing from higher to lower levels.
Chapter 22: Low-Level Visual Processing: The Retina
All visual experience is based on information processed by the retina. (This isn’t quite true as visual illusions and hallucinations are visual experience without information from the retina.)
The retina’s output is transmitted by just one million optic nerve fibers, and yet almost half of the cerebral cortex is used to process these signals.
Because the retina sets fundamental limits on what can be seen, there’s great interest in understanding how the retina functions.
The retina is a thin sheet of neurons, a few hundred micrometers thick, made up of five major cell types arranged in three cellular layers separated by two synaptic layers.
The retina must adjust its sensitivity to changing environmental conditions such as illumination.
This adaptation allows our vision to remain stable despite the vast range of light intensities we encounter throughout the day.
Three important retinal functions
Phototransduction
Preprocessing
Adaptation
Any point in the outside world becomes a small blurred circle on the retina due to several factors.
E.g. Diffraction at the pupil, refractive errors at the cornea and lens, and scattering due to air and cells.
Review of the fovea. At the center of the fovea, the other cellular layers are pushed aside to reduce additional blur from light scattering cells.
The back of the eye is lined with a black pigment that absorbs light and keeps it from scattering back into the eye.
Because the optic disc lies nasal to the fovea for each eye, light coming from a single spot never falls on both blind spots simultaneously, so we are unaware of them.
The blind spot demonstrates what blind people experience, not blackness but simply nothing.
The blind spot is a consequence of the inside-out design of the retina where photoreceptors are at the back and ganglion cells are at the front.
Review of rods and cones (L, M, and S). Rods are sensitive to a single photon; primates have only one type of rod but three types of cones; the human retina has about 100 million rods and 5 million cones.
A few millimeters outside the fovea, rods greatly outnumber cones.
The smallest letters we can read on a doctor’s eye chart have strokes whose images are just one to two cone diameters wide on the retina.
Phototransduction links the absorption of a photon to a change in membrane conductance.
E.g. Red light excites L cones more than M cones, while green light excites M cones more.
The relative degree of excitation in these cone types contains information about the spectrum of the light independent of its intensity.
The brain’s comparison of signals from different cone types is the basis for color vision.
In night vision, only rods are active so all functional photoreceptors have the same absorption spectrum.
E.g. A green light has the same effect as a red light of greater intensity on rods. So rods can’t distinguish color.
Skipping over the molecular and protein mechanisms behind transduction.
The photoreceptor layer produces a simple neural representation of the visual scene: neurons in bright regions are hyperpolarized and those in dark regions are depolarized.
The optic nerve only has about 1% as many axons as there are receptors, so the retinal circuit must compress and refine the representation before it’s transmitted to the brain.
This retinal processing is called low-level visual processing as it’s the first stage.
Many retinal ganglion cells fire APs spontaneously even in darkness or constant illumination.
Two main types of ganglion cells
ON cells: fire more rapidly if light intensity suddenly increases.
OFF cells: fire more rapidly if light intensity suddenly decreases.
The retinal output has two complementary representations that convey light intensity.
This arrangement communicates rapid changes in the visual scene because if the retina had only ON cells, a dark object would be encoded by a decrease in firing rate, which takes about 100 ms for the postsynaptic neuron to notice.
By having OFF cells, a dark object would be encoded by an increase in firing rate that takes only 5 ms to notice, which is 20 times faster than an ON cell.
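As a rough worked example with illustrative numbers: a ganglion cell firing spontaneously at ~10 spikes/s has a mean interspike interval of ~100 ms, so a downstream neuron needs on the order of 100 ms of silence to be confident the rate has dropped. An OFF cell signaling the same dark object by jumping to ~200 spikes/s delivers its first extra spikes within ~5 ms, roughly 20 times sooner.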
Review of receptive field, center and surround region.
The output produced by a population of retinal ganglion cells enhances regions of spatial contrast in the input.
E.g. Edges between two areas of different intensity, and less emphasis on regions of uniform illumination.
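A minimal one-dimensional sketch (with illustrative receptive-field sizes) of how a center-surround, difference-of-Gaussians receptive field produces exactly this behavior: near-zero output over uniform regions and a strong biphasic response at an edge.

```python
import numpy as np

# ON-center difference-of-Gaussians (DoG) receptive field applied to a
# step edge. Center/surround sizes and the input image are illustrative.
x = np.arange(-50, 51)
sigma_center, sigma_surround = 2.0, 6.0

def gaussian(sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

dog = gaussian(sigma_center) - gaussian(sigma_surround)  # weights sum to ~0

image = np.concatenate([np.zeros(100), np.ones(100)])    # dark | bright edge
response = np.convolve(image, dog, mode="same")

print(response[40:60].round(3))    # uniform dark region: ~0 everywhere
print(response[95:105].round(3))   # around the edge: negative then positive peak
```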
Transient neurons: produce a burst of spikes only at the onset/offset of a stimulus.
Sustained neurons: maintain a steady firing rate during stimulation.
In general, the output of ganglion cells favors temporal changes in visual input over periods of constant light intensity.
In fact, when an image is stabilized on the retina, it fades from view within seconds.
The retina extracts low-level features of the scene that are useful for guiding behavior and selectively transmits those to the brain.
E.g. Edges for inferring object shape, identity, and motion.
The retina doesn’t transmit features that are constant in either space or time, which accounts for the spatiotemporal sensitivity of human perception.
There are more than 20 types of ganglion cells and each type covers the retina in a tiled pattern, such that any point on the retina lies within the receptive field center of at least one ganglion cell of each type.
The optic nerve conveys 20 or more neural representations that differ in intensity, spatial resolution, temporal responsiveness, spectral filtering, and selectivity.
Photoreceptors and bipolar cells don’t fire APs but instead release neurotransmitter in a graded fashion.
In fact, most retinal processing is done using graded membrane potentials with APs mostly occurring in certain amacrine and retinal ganglion cells.
The axons of ON bipolar cells end in the lower half of the inner plexiform layer while those of OFF bipolar cells end in the upper half.
The ON bipolar cells excite ON ganglion cells and the same goes for OFF bipolar cells.
The two principal subdivisions of retinal output, the ON and OFF pathways, are already established at the level of bipolar cells.
Stimulus representations in the ganglion cell population originate in dedicated bipolar cell pathways that are differentiated by their selective connections to photoreceptors and postsynaptic targets.
Horizontal cells measure the average level of excitation of the photoreceptor population over a broad region and this signal is fed back to the photoreceptor terminal through an inhibitory synapse.
So, the photoreceptor terminal is controlled by two competing factors
Light falling on the receptor hyperpolarizes it.
Light falling on the surrounding region depolarizes it.
This results in an antagonistic receptive field in bipolar cells.
Bipolar cells have their own version of horizontal cells called amacrine cells.
Amacrine cells generally receive excitatory signals from bipolar cells at glutamatergic synapses.
Amacrine cells enforce lateral inhibition among bipolar cells by feeding back directly to the presynaptic bipolar cell at a reciprocal inhibitory synapse.
Some amacrine cells are electrically coupled to each other, forming an electrical network like that of horizontal cells, which enables inhibition from distant bipolar cells, similar to the lateral inhibition of photoreceptor terminals.
These lateral inhibitory connections contribute substantially to the antagonistic receptive field of retinal ganglion cells.
For many ganglion cells, a step change in light intensity produces a transient response and part of this response is due to negative feedback circuits involving horizontal and amacrine cells.
E.g. Decrease in light intensity → Depolarize cone terminal → Excite horizontal cell → Repolarize cone terminal.
Because this feedback loop involves a brief delay, the voltage response of the cone peaks abruptly and then settles back to a stable level. Similar processing occurs at reciprocal synapses between bipolar and amacrine cells and this is the basis behind the transient ganglion response.
In both cases, delayed inhibition favors rapidly changing inputs over slowly changing ones.
This temporal filtering is seen in visual perception as sudden changes are more visible than gradual changes.
Retinal circuits seem to go to great lengths to speed up their responses and emphasize temporal changes.
One likely reason is that the very first cell in the retinal circuit, the photoreceptor, is exceptionally slow.
E.g. A flash of light takes a cone cell about 40 ms to reach peak response, an intolerable delay for proper visual function.
This fast processing helps to reduce visual reaction time, a life-extending trait.
Color vision begins in cone-selective circuits.
Two theories of color perception
Trichromatic theory: any color can be made by mixing different amounts of three primary colors (red, green, and blue), which match the three cone types.
Opponent-process theory: color vision involves three processes that respond in opposite ways to light of different colors: (y-b) is stimulated by yellow and inhibited by blue, (r-g) is stimulated by red and inhibited by green, (w-bk) is stimulated by white and inhibited by black.
The opponent-process theory is seen in the postreceptor circuitry of the retina.
E.g. A ganglion cell receives input from S-ON, L-OFF, and M-OFF bipolar cells so it’s depolarized by blue light and hyperpolarized by yellow light. This cell implements y-b color opponency.
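A toy sketch of this opponent recombination of cone signals. The weights are illustrative, chosen only to show the sign structure of the three opponent channels; they are not measured retinal weights.

```python
# Toy opponent-process coding from cone activations (weights are illustrative).
def opponent_channels(L, M, S):
    red_green = L - M                  # r-g: excited by "red", inhibited by "green"
    blue_yellow = S - 0.5 * (L + M)    # b-y: S opposed by summed L+M ("yellow")
    luminance = 0.5 * (L + M)          # achromatic (white-black) channel
    return red_green, blue_yellow, luminance

# Bluish light drives S strongly: the blue-yellow channel goes positive.
print(opponent_channels(L=0.2, M=0.3, S=0.9))
# Yellowish light drives L and M together: the same channel goes negative.
print(opponent_channels(L=0.8, M=0.7, S=0.1))
```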
Chromatic signals are combined and encoded by the retina for transmission to the thalamus and cortex.
Only about 10% of cortical neurons are preferentially driven by color contrast rather than luminance contrast, which likely reflects the fact that color vision makes only a small contribution to our overall fitness.
E.g. Colorblind individuals can grow up without ever noticing their defect.
Interestingly, being colorblind doesn’t impair vision more broadly because the total number of cones remains unchanged; only the relative distribution of L, M, and S cones changes.
Rod and cone circuits merge in the inner retina.
For low-light (rod) vision, the mammalian retina has only ON bipolar cells; there are no OFF rod bipolar cells.
These rod ON bipolar cells connect to amacrine cells that convey the rod signal to cone bipolar cells, providing excitatory signals to ON cone bipolar cells and inhibitory signals to OFF cone bipolar cells.
These cone bipolar cells in turn excite ON and OFF ganglion cells.
Thus, the rod signal is fed into the cone system after a detour and it produces the appropriate signal for the ON and OFF pathways.
The retina adapts to changes in illumination by using automatic gain control called light adaptation.
The weakest flashes of light elicit no response, a graded increase in flash intensity elicits graded responses, and the brightest flashes elicit saturation.
To compensate for increases in background illumination, ganglion cells become less sensitive to light variations.
E.g. In a brighter background, a larger change is needed to cause the same response.
The visual system approximates Weber’s law: the just-detectable change in intensity grows in proportion to the background intensity, so sensitivity falls roughly in inverse proportion to the background.
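A minimal sketch of Weber-law gain control: the adapted response depends on contrast (ΔI divided by the background) rather than on the absolute increment, so the same increment that is obvious on a dim background is nearly invisible on a bright one. Gain and background values are illustrative.

```python
# Weber-law gain control: response ~ delta_I / I_background (values illustrative).
def adapted_response(delta_I, I_background, gain=100.0):
    return gain * delta_I / I_background

print(adapted_response(delta_I=1.0, I_background=10.0))      # 10.0: easy to see
print(adapted_response(delta_I=1.0, I_background=1000.0))    # 0.1: nearly invisible
print(adapted_response(delta_I=100.0, I_background=1000.0))  # 10.0: increment must scale with background
```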
Gain control happens in rods, cones, and bipolar cells. As ambient light increases, light adaptation shifts from rods to cones, with it mostly coming from cones towards noon.
Light adaptation changes the sensitivity, speed, and rules of spatial processing in the retina.
E.g. In bright light, many ganglion cells have a sharp center-surround structure in their receptive fields. In dim light, the antagonistic surround becomes broad and weak and eventually disappears.
Under low-light conditions, retinal circuits function to simply accumulate rare photons rather than to compute local intensity gradients.
Two important functions of light adaptation
Discard information about the intensity of ambient light while retaining information about object reflectance.
Match the small dynamic range of firing in retinal ganglion cells to the large range of light intensities in the environment.
Highlights
The retina transforms light patterns on photoreceptors into neural signals that are transmitted through the optic nerve to specialized visual centers in the brain.
Different populations of ganglion cells transmit different neural representations of the retinal image along parallel pathways.
The retina discards much of the stimulus information available at the receptor level and extracts certain low-level features of the visual field such as edges and motion.
The retina adapts flexibly to the changing conditions for vision such as illumination changes.
Rods are used for nocturnal vision while cones are used for daytime vision. Rods have their own bipolar cells that transmit to cone bipolar cells, thus using cone circuitry.
The vertical excitatory pathways are modulated by horizontal inhibitory connections that form negative feedback loops, sharpening the transient response of ganglion cells.
The segregation of information into parallel pathways and the shaping of response properties by inhibitory lateral connections are pervasive organizational principles in the visual system.
Chapter 23: Intermediate-Level Visual Processing and Visual Primitives
Contour integration: combining boundary and edge information about an object.
Contour integration is an example of intermediate-level visual processing and is the first step in generating a representation of the unified visual field.
Intermediate-level visual processing deals with determining which boundaries and surfaces belong to which objects and which belong to the background, and distinguishing the brightness and color of a surface.
Three features that help disambiguate retinal signals
The way a visual feature is perceived depends on everything surrounding it.
The functional properties of neurons in the visual cortex can be changed by visual experience and by perceptual learning.
Visual processing can be influenced by cognitive functions such as attention, expectation, and the goal/task.
Visual primitives: local features in a visual scene.
E.g. Contrast, line orientation, brightness, color, movement, and depth.
Neurons in the retina and LGN have circular receptive fields with a center-surround organization, but neurons in the visual cortex respond selectively to lines of particular orientations.
Each neuron responds to a narrow range of orientations, around 40 degrees, and different neurons respond optimally to distinct orientations.
This orientation selectivity reflects the arrangement of inputs from the LGN as each V1 neuron receives input from several neighboring LGN neurons whose center-surround receptive fields are aligned to represent a particular axis of orientation.
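A small sketch of this feedforward (Hubel-Wiesel) arrangement, with made-up filter sizes and stimuli: summing a few center-surround subunits whose centers lie along one axis yields a unit that responds much more strongly to a bar along that axis than to other orientations.

```python
import numpy as np

# Sketch of the feedforward arrangement: a "simple cell" built by summing
# LGN-like center-surround subunits whose centers are aligned along one axis.
# Filter sizes, offsets, and the bar stimuli are illustrative assumptions.
half = 20
yy, xx = np.mgrid[-half:half + 1, -half:half + 1]

def dog(cy, cx, s_c=1.5, s_s=3.5):
    """ON-center difference-of-Gaussians subunit centered at (cy, cx)."""
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    center = np.exp(-d2 / (2 * s_c ** 2)) / (2 * np.pi * s_c ** 2)
    surround = np.exp(-d2 / (2 * s_s ** 2)) / (2 * np.pi * s_s ** 2)
    return center - surround

# Three subunits aligned vertically -> a vertically oriented simple cell.
simple_cell = dog(-8, 0) + dog(0, 0) + dog(8, 0)

def bar(theta_deg, width=2):
    """A bright bar through the center at the given orientation."""
    theta = np.deg2rad(theta_deg)
    dist = np.abs(xx * np.cos(theta) + yy * np.sin(theta))
    return (dist <= width).astype(float)

for theta in (0, 45, 90):  # 0 deg = vertical bar here
    response = max(float((simple_cell * bar(theta)).sum()), 0.0)  # rectified output
    print(f"bar at {theta:2d} deg -> response {response:.3f}")
```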
Simple cells: have receptive fields divided into ON and OFF subregions.
Complex cells: are less selective for the position of object boundaries and fire continuously as a line/edge stimulus traverses their receptive field.
Visual perception requires eye movement as visual cortex neurons don’t respond to a stabilized image on the retina.
Depth perception helps segregate objects from background.
The balance of input from the two eyes, known as ocular dominance, varies among cells in V1.
Depth is calculated by binocular neurons in the visual cortices by using the relative retinal positions of objects placed at different distances from the observer.
In addition to binocular disparity, the visual system also uses many monocular cues to discriminate depth.
E.g. Size, perspective, occlusion, brightness.
The primary visual cortex determines the direction of movement of objects.
The perception of brightness and color is highly dependent on context and may be quite different from what’s expected.
E.g. As a friend walks towards you, she’s seen as coming closer, not growing larger even though the image on your retina does expand.
Our ability to perceive an object’s size and color as constant illustrates again a fundamental principle of the visual system: it doesn’t record images passively like a camera, but uses transient and variable stimulation to construct representations.
Since most neurons respond to surface boundaries and not to areas of uniform brightness, the visual system calculates the brightness of surfaces from information about contrast at the edges of surfaces. This is known as perceptual fill-in.
Intermediate-level visual processing requires sharing of information from throughout the visual field, implemented by the long-range horizontal connections between neurons.
The plasticity of cortical maps and connections didn’t evolve in response to lesions, but as a neural mechanism for improving our perceptual skills.
Perceptual learning involves repeating a discriminating task many times and doesn’t require error feedback to improve performance.
An important aspect of perceptual learning is its specificity as training on one task doesn’t transfer to other tasks.
This specificity suggests that early stages of visual processing are responsible for learning.
Scene segmentation involves a combination of bottom-up processes that follow Gestalt rules, and top-down processes that create object expectation.
One strong top-down influence is spatial attention, which can change focus without any eye movement.
Highlights
Vision requires segregating objects from their backgrounds, a process involving contour integration and surface segmentation.
This process is simplified by relying on the statistical properties of natural forms.
E.g. Gestalt rules.
Perception of visual features is dependent on context.
Chapter 24: High-Level Visual Processing: From Vision to Cognition
High-level visual processing integrates information from a variety of sources and is the final stage in the visual pathway leading to visual perception.
This level of visual processing deals with identifying behaviorally meaningful features and depends on descending signals that convey information from short-term working memory, long-term memory, and executive areas of cerebral cortex.
Our visual experience of the world is fundamentally object-centered so our visual processing is mostly concerned with object recognition.
It’s the behavioral significance of objects that guides our action based on visual information.
The inferior temporal (IT) cortex is the primary center for object recognition and lesions to the IT can produce specific failures of object recognition.
The IT comprises at least two main functional subdivisions: anterior and posterior.
Unlike lesions to occipital cortical areas, temporal lobe lesions don’t impair sensitivity to basic visual attributes such as color, motion, and distance. Instead, the lesions cause visual agnosia, an unusual type of visual loss.
Two categories of visual agnosia
Apperceptive: impairment in the ability to match or copy complex visual shapes or objects.
Results from disruption in the first stage of object recognition where the integration of visual features into entire objects occurs.
Commonly follows damage to the posterior IT cortex.
Associative: impairment in the ability to identify objects, but not the ability to match or copy complex objects.
Results from disruption in the second stage of object recognition where an object’s sensory representation is associated with its meaning or function.
Commonly follows damage to the anterior IT cortex.
More focal lesions within the temporal cortex can lead to specific deficits.
E.g. Prosopagnosia: inability to identify a particular face as belonging to a specific person, though the patient can still identify a face as a face, its parts, and facial emotions.
Prosopagnosia is an example of a category-specific agnosia where patients with temporal lobe damage fail to recognize specific items belonging to specific semantic categories.
Category-specific agnosias for living things, fruits, vegetables, tools, or animals have also been reported.
Neurons in the IT cortex encode complex visual stimuli and are organized into functionally specialized columns, just like other areas of cortex.
A highly specialized network, located mainly in the temporal cortex, processes the multiple dimensions of information conveyed by a face.
The IT cortex is part of a network of cortical areas involved in object recognition.
E.g. Projections to the perirhinal and parahippocampal cortices, and projections to the prefrontal cortex.
As we’ll see, prefrontal neurons play important roles in object categorization, visual working memory, and memory recall.
Perceptual constancy: representing the invariant attributes of objects independently of viewing conditions such as distance, viewpoint, and lighting.
Size constancy: when an object is placed at different distances from an observer and is perceived as having the same size, even though the object produces different image sizes on the retina.
Lesions to the IT cortex lead to failures in size constancy, suggesting that neurons in this area play a critical role in size constancy.
Many neurons in IT cortex don’t exhibit viewpoint invariance and are systematically tuned to viewing angle.
Categorical perception: the ability to distinguish objects of different categories better than objects of the same category.
E.g. It’s harder to discriminate between two shades of red than between a red and a green light.
Categorical perception simplifies behavior.
Neurons in the IT cortex seem to represent similarity features, while the prefrontal cortex represents categories.
Visual experience can be stored as memory, and visual memory influences the processing of incoming visual information.
The sharpening of neural sensitivity could underlie improvements in perceptual discrimination of visual stimuli.
Object recognition and learning are intricately linked as learning can generate entire areas of functional specialization within the IT cortex.
The delay-period activity in many vision-related neurons in both the IT and prefrontal cortices is thought to maintain information in short-term working memory.
Activity in the IT cortex is associated with the short-term storage of visual patterns and color information.
Activity in the prefrontal cortex is associated with long-term storage as it depends more on task requirements and isn’t terminated by intermittent sensory inputs.
It’s been suggested and tested that learning visual associations might be mediated by enhanced connectivity between the neurons encoding individual stimuli.
Experiments show that paired objects often elicit similar neuronal responses and that these responses became more similar over the course of training.
Most importantly, the changes in neuronal activity occurred on the same timescales as the changes in behavior, suggesting that this behavior is a result of this neural activity.
These learning-dependent changes in the stimulus selectivity of IT cortex neurons are long-lasting, suggesting that this cortical region is part of the neural circuitry for associative visual memories and that learned associations are implemented rapidly by changes in the strength of synaptic connections between neurons representing the associated stimuli.
Although learned associations between images are likely stored in circuit changes in the IT cortex, activation of these circuits for conscious recall depends on input from the prefrontal cortex.
Highlights
One function of high-level vision is object recognition which imbues visual perception with meaning.
Object recognition is difficult due to changes in appearance such as position, distance, orientation, or lighting conditions, making the same object appear different in different contexts.
Object recognition relies on a region of the temporal lobe called the inferior temporal (IT) cortex, which uses information already processed by low- and mid-level vision.
Lesions to the IT cortex cause visual agnosia or the inability to recognize objects.
Neurons in the IT cortex can be and are highly selective to objects such as faces and places.
Face recognition is supported by multiple face areas, each with a unique functional specialization which form the face-processing network.
IT cortex is connected to the perirhinal and parahippocampal cortices for memory, the amygdala for emotional valence, and the prefrontal cortex for object categorization and visual working memory.
Objects are perceived as members of a category which simplifies the selection of appropriate behaviors. Neurons with categorical selectivity are found in dorsolateral prefrontal cortex.
Short-term visual information may be implemented by delay-period activity in neurons in the prefrontal and temporal cortices.
High-level visual information processing changes with top-down modulation.
The sensory experience of an image and the recall of the same stimulus from memory are subjectively similar and neurons exhibit similar activity during both tasks.
Chapter 25: Visual Processing for Attention and Action
The brain compensates for eye movements to create a stable representation of the visual world.
Saccades: quick eye movements.
The brain must account for saccades to produce an interpretable visual image.
With such constant movement, visual images should resemble an amateur videographer’s footage, jerking around because the camera operator isn’t skilled at holding the camera steady.
In contrast, our vision is so stable that we’re unaware of the visual effects of saccades because the brain makes continual adjustments to the images falling on the retina after each saccade.
The first insight into the brain mechanisms underlying visual stability is that the motor commands for saccades are copied to the visual system.
This copy is used by the visual system to adjust and compensate for eye movements, leading to a stable image.
Efference copy / Corollary discharge: a copy of the motor command that’s sent to sensory systems.
For a corollary discharge to affect visual perception across eye movements, motor information has to affect the activity of visual neurons which happens in the parietal cortex, frontal eye field, prestriate visual cortex, and superior colliculus.
Every time a saccade is made, a stimulus that lies outside the receptive field of a neuron in the lateral intraparietal area, and thus can’t normally excite it, will excite the neuron if the impending saccade will bring the stimulus into the receptive field, even before the saccade occurs.
Thus, a corollary discharge of the impending saccade affects the visual responsiveness of the parietal neuron.
This transient remapping of the receptive field explains how subjects can make sequential saccades that are based on previous saccades, thus chaining them properly.
Remapping is found in many cortical and subcortical areas such as the lateral intraparietal area, frontal eye field, medial intraparietal area, intermediate layers of the superior colliculus, and prestriate areas V4, V3a, and V2.
How does the brain get the vector of the saccade that it feeds back to the visual system?
Research has shown that the motor command for the vector is represented in the superior colliculus on the roof of the midbrain.
Each neuron in the superior colliculus is tuned to saccades of a given vector.
Inactivation of the superior colliculus affects the monkey’s ability to make saccades and electrical stimulation of the superior colliculus evokes saccades.
However, this only provides the vectors that actually drive the eye and not the corollary discharge that visual neurons use.
We aren’t sure, but we suspect that the superior colliculus has both descending pathways for generating saccades and ascending pathways to the cerebral cortex that could carry the corollary discharge.
The pathways to the cortex pass through the thalamus, as does all internal and almost all external information reaching the cerebral cortex.
This is supported by the evidence that inactivation of the thalamic pathway results in monkeys being unable to perform the second saccade in the double-step task.
Another experiment concluded that the corollary discharge does provide the vector of the saccade, and inactivation of the medial dorsal nucleus of the thalamus affects an animal’s perception.
With each saccade, corollary discharge information provides perceptual information for determining the amplitude and direction of the current saccade, and it does so with machine-like precision several times per second.
It’s unlikely that visual cues and oculomotor proprioception provide the vector information at the end of the saccade to compensate for eye movements as they’re too slow.
Saccades also disrupt vision, as the eye’s rapid sweep across the visual scene blurs the image.
The blur isn’t perceived because neuronal activity in a number of visual areas is suppressed around the time of every saccade and is called saccadic suppression.
We know that the corollary discharge contributes to this neuronal activity suppression because the suppression occurs even in total darkness (no vision) and even if eye movement is blocked (no proprioception).
E.g. If a saccade starts in total darkness and an object is flashed before the saccade ends, a blur can be seen during the saccade. However, if a mask is flashed after the saccade, the blur is suppressed and unseen.
The eye position signal probably comes from a proprioceptive mechanism and not a corollary discharge because neurons that represent eye position don’t match the timing of a corollary discharge.
It’s possible that the brain calculates the spatial location of an object before an eye movement using two mechanisms: a corollary discharge that’s fast and a proprioceptive signal that’s slow but more accurate.
The proprioceptive signal can also be used to calibrate the corollary discharge.
Visual scrutiny is driven by attention and arousal circuits.
Change blindness: large changes that occur outside the focus of attention often go unnoticed.
Review of top-down (voluntary) and bottom-up attention (involuntary).
Spatial attention: attending to a specific point in space.
Feature attention: attending to a specific visual feature such as color or shape.
Both types of attention shorten reaction time and make visual perception more sensitive.
Clinical studies have implicated the parietal lobe in visual attention as lesions to the right parietal lobe result in neglect of the contralateral visual hemifield.
Neurons in the lateral intraparietal area represent only those objects of potential importance; a priority map of the visual field.
The parietal cortex provides visual information to the motor system.
E.g. When picking up a pencil, your fingers are separated from your thumb by the width of the pencil; the same goes for picking up a drink.
Patients with parietal cortex lesions can’t adjust their grip width or wrist angle using visual information alone, even though they can verbally describe the size of the object or the orientation of the slot.
The representation of space in the parietal cortex isn’t organized into a single map like the retinotopic map of the primary visual cortex. Instead it’s divided into at least four areas (LIP, MIP, VIP, AIP) that analyze the visual world in ways appropriate for individual motor systems.
Four intraparietal areas
MIP: describes the targets for reaching and projects to the premotor areas that control reaching.
AIP: signals the size, depth, and orientation of objects that can be grasped.
LIP: specifies the targets for saccades and projects to the frontal eye fields.
VIP: responds to tactile stimuli on the face and to objects that approach the tactile receptive field.
Highlights
The visual system compensates for changes in eye position to calculate spatial locations from retinal locations. The brain solves this problem by feeding forward the motor signal that drives the eye to the visual system to compensate for the effect of the eye movement. This is called a corollary discharge.
Neurons in the lateral intraparietal area show evidence of this corollary discharge as they normally don’t respond to a particular stimulus in space but will respond to it if an impending saccade will bring that stimulus into its receptive field.
This receptive field remapping depends on a pathway from the superior colliculus to the medial dorsal nucleus of the thalamus. Inactivation of the medial dorsal nucleus impairs the ability to identify where the eyes land after a saccade, suggesting that the corollary discharge has both a perceptual and motor role.
We suspect that the brain uses eye position to calculate the spatial location of objects from the position of their images on the retina.
We don’t know how the brain chooses between the eye position and corollary discharge mechanisms to determine spatial position. Because corollary discharge precedes the change in eye position and proprioception follows it, could the brain use both positions at different times?
Attention is the ability to select objects for further analysis and without it, spatial perception is severely limited.
The activity of neurons in the parietal cortex predicts the location of spatial attention and seems to create a priority map of the visual field. The motor system uses this map to choose targets for movement.
Lesions in the parietal cortex cause a neglect of the contralateral visual world.
There are at least four different visual maps in the intraparietal sulcus, each of which matches a different motor workspace.
Chapter 26: Auditory Processing by the Cochlea
Our ability to recognize small differences in sounds comes from the cochlea’s capacity to distinguish among frequency components, their amplitudes, and their relative timing.
Hearing depends on the properties of hair cells, the cellular microphones of the inner ear.
Hair cell: transduces mechanical vibrations into electrical signals.
Hair cells also serve as mechanical amplifiers that augment auditory sensitivity.
Each cochlea has about 16,000 hair cells, and deterioration of hair cells results in hearing loss.
Auricle: the fold of cartilage-supported skin on the outside of the head.
The external ear isn’t uniformly effective at capturing sounds from all directions: the auricle collects sound more efficiently from some directions than from others, depending on where the source lies with respect to the head.
Our capacity to localize sounds in space, especially along the vertical axis, depends critically on the auricle.
Each auricle has a unique topography and its effect on sound reflections at different frequencies is learned by the brain early in life.
Hearing also follows the Weber-Fechner law: the relationship between the magnitude of sound pressure and perceived loudness is logarithmic.
The dynamic range of hearing is enormous as the faintest and loudest sounds differ by a trillion-fold range in stimulus power.
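As a quick check of the numbers: a trillion-fold range in sound power is 120 dB, which is the same range as a millionfold ratio in sound pressure.

```python
import math

# Dynamic range of hearing expressed in decibels.
print(10 * math.log10(1e12))  # power ratio of a trillion -> 120.0 dB
print(20 * math.log10(1e6))   # equivalent pressure ratio of a million -> 120.0 dB
```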
Each cycle of sound stimulus (frequency) evokes a cycle of up-and-down movement of the minuscule volume of liquid in each of the cochlea’s three chambers, thus displacing the sensory organ.
The middle ear increases the magnitude of pressure changes by up to 30-fold, thus matching the low impedance of the air outside to the high impedance of the cochlea and ensuring efficient transfer of sound energy.
The pressure gain provided by the middle ear depends on sound frequency, which shapes the U-shaped tuning curve of the auditory threshold.
The continuous variation of the mechanical properties of the basilar membrane along the cochlea is key to the cochlea’s operation.
E.g. The membrane is less than one-fifth as broad at the base as at the apex, widening progressively from base to apex; it is also thicker at the base and thinner at the apex.
The basilar membrane’s width and thickness both contribute to a decrease in stiffness from base to apex.
This stiffness gradient captures and encodes sound frequency: higher-frequency sounds maximally displace the membrane near the base, while lower-frequency sounds maximally displace it near the apex.
E.g. The base of the cochlea responds best to 20 kHz while the apex responds best to 20 Hz sounds.
The basilar membrane’s operation is essentially the inverse of a piano’s.
E.g. A piano produces complex sounds by combining the pure tones produced by multiple vibrating strings, while the cochlea deconstructs complex sounds by isolating the component tones at appropriate segments of the basilar membrane.
For any frequency within the auditory range, there’s a characteristic place along the basilar membrane at which the magnitude of vibration is maximal.
Tonotopic map: the arrangement of vibration frequencies along the basilar membrane.
The tonotopic map can change with feedback from hair cells within the organ of Corti.
The relationship between frequency and position along the basilar membrane varies monotonically, but not linearly; the logarithm of the frequency decreases roughly in proportion to the distance from the cochlea’s base.
Thus, the basilar membrane is the implementation of the Weber-Fechner law.
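A sketch of this logarithmic place-frequency map using the commonly cited Greenwood approximation for the human cochlea. The constants are an empirical fit quoted here only as an illustration (an assumption, not taken from the chapter).

```python
# Greenwood-style place-frequency map: x is the fractional distance from the
# apex (0) to the base (1). Constants are the commonly quoted human fit.
def greenwood_hz(x_from_apex):
    return 165.4 * (10 ** (2.1 * x_from_apex) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> best frequency ~ {greenwood_hz(x):8.0f} Hz")
# Away from the very apex, each equal step along the membrane multiplies the
# best frequency by roughly the same factor, i.e., a log-frequency map.
```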
The membrane also acts as a mechanical frequency analyzer by distributing the energies associated with different frequency components to hair cells along its length.
The basilar membrane begins the encoding of frequencies in a sound.
The organ of Corti is where the mechanoelectrical transduction occurs in the cochlea.
The organ has around 16,000 hair cells innervated by about 30,000 afferent nerve fibers.
The information transmitted by hair cells to their innervating nerve fibers is also tonotopically organized.
Hair cells aren’t neurons as they don’t have dendrites nor axons, but they do transform mechanical energy into neural signals.
Mechanical deflection of the hair bundle excites hair cells of the cochlea, causing a receptor potential.
A hair cell’s receptor potential is graded and as stimulus amplitude increases, the receptor potential grows progressively larger up to the point of saturation.
A hair bundle exhibits Brownian motion (thermal noise) of approximately 3 nm, whereas the threshold of hearing corresponds to basilar membrane movements of as little as 0.3 nm.
How does the hair bundle respond to motion smaller than its own noise?
Three mechanisms
The movement of the hair bundle is larger than that of the basilar membrane.
Frequency-selective amplification of weak stimuli actively pulls the signal out of the noise.
Mechanical coupling to a group of neighbors results in synchronization that effectively reduces noise.
Skimming over the detailed transduction in hair cells.
Hair cells function much quicker than any other sensory receptor cell and are quicker than neurons themselves.
This is due to both the optimal frequency range for auditory communication (10 Hz to 100 kHz) and the need to localize sound sources.
Hair cells have a unique mechanism of adaptation that acts as a high-pass filter and they can use mechanical amplification to further tune their mechanosensitivity.
Every cochlear hair cell is most sensitive to stimulation at a specific frequency called its characteristic or best frequency, and this can be displayed as a tuning curve.
Hair bundles vary systematically along the tonotopic axis with hair cells that respond to low-frequency stimuli having the tallest bundles, and hair cells that respond to high-frequency stimuli having the shortest bundles.
Hair cells adapt to sustained stimulation by a progressive decrease in the receptor potential during long deflections of the hair bundle.
This isn’t desensitization because the responsiveness of the receptor persists. Instead, during a prolonged step stimulus, the sigmoidal relationship between hair-bundle position and receptor potential shifts in the direction of the applied stimulus.
This results in the membrane potential of the hair cell progressively returning to near its resting value during stimulation.
The sensitivity of the cochlea is too great to come only from the inner ear’s passive mechanical properties. So, it must possess some means of actively amplifying sound energy.
The basilar membrane displays a compressive nonlinearity that accommodates the millionfold variation of sound pressure that characterizes audible sounds into only two to three orders of magnitude of vibration amplitude.
The cochlea seems to be actively amplifying sounds by emitting sounds itself.
The source of the emitted sounds seems to come from outer hair cells that enhance cochlear sensitivity and frequency selectivity and hence act as the motors for amplification.
We know that cochlear amplification occurs because it distorts acoustic inputs, causing phantom tones in sound perception.
Four features of auditory responsiveness
An active amplification process lowers the detection threshold.
Since amplification only works near a characteristic frequency, the input to the sensory system is actively filtered, which sharpens frequency selectivity.
For stimulation near the characteristic frequency, the response displays a compressive nonlinearity that represents a wide range of stimulus levels by a much narrower range of vibration amplitudes.
Even in the absence of a stimulus, mechanical activity can produce self-sustained oscillations that result in otoacoustic emissions.
These features describe the cochlea as an active dynamical system that operates on the verge of an oscillatory instability called the Hopf bifurcation.
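One consequence worth spelling out (a standard property of the Hopf normal form, stated here as an illustration rather than taken from the chapter): at the bifurcation, the response to forcing at the characteristic frequency grows roughly as the cube root of the stimulus amplitude, which is exactly the kind of compressive nonlinearity described above.

```python
# Cube-root compression at a Hopf bifurcation: a ~millionfold range of sound
# pressures maps onto about two orders of magnitude of vibration amplitude.
pressure_range = 1e6
print(f"{pressure_range ** (1 / 3):.1f}")  # 100.0 -> two orders of magnitude
```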
For each inner ear, approximately 30,000 ganglion cells innervate the hair cells.
The afferent pathways from the human cochlea reflect the functional distinction between inner and outer hair cells.
Three important consequences of this organization
The neural information that hearing uses originates almost entirely at inner hair cells.
The output of each inner hair cell is sampled by many afferent nerve fibers, so the information from one receptor is encoded independently in parallel channels.
Spiral ganglion cells respond best to stimulation at the characteristic frequency of the presynaptic hair cell, maintaining tonotopy.
The acoustic sensitivity of axons in the cochlear nerve mirrors the connection pattern of the spiral ganglion cells to the hair cells, with each axon being most responsive to a characteristic frequency.
The relationship between sound level in decibels and firing rate in each cochlear nerve fiber is almost linear. This relation implies that sound pressure is logarithmically encoded by neuronal activity.
Thus, cochlear nerve fibers encode stimulus frequency and level.
Because an AP and the subsequent refractory period each last almost 1 ms, the greatest sustainable firing rate is about 500 spikes per second.
Nerve fibers with the same characteristic frequency have different thresholds of responsiveness to sound level.
E.g. The most sensitive fibers respond at 0 dB and saturate at moderate intensities of about 30 dB. The least sensitive fibers have very little spontaneous activity and much higher thresholds, but respond in a graded manner to levels even in excess of 100 dB.
The least sensitive fibers contact the inner hair cell nearest the axis of the cochlear spiral, while the most sensitive fibers contact the hair cell’s opposite side.
Therefore, the multiple innervation of each inner hair cell isn’t redundant; it provides parallel channels of differing sensitivity and dynamic range.
Two ways stimulus frequency information is gained
Place code: fibers are in a tonotopic map where position is related to characteristic frequency.
Frequency code: phase-locked firing of the fiber provides frequency information for frequencies below 3 kHz.
Deafness mostly stems from the loss of cochlear hair cells and their afferent fibers.
Highlights
Hearing starts when sounds captured by the ear are transferred to the cochlea and cause the basilar membrane to oscillate.
Hair cells transduce basilar-membrane vibrations into receptor potentials that cause sensory neurons to fire.
The frequency components of a sound stimulus are detected at different locations along the basilar membrane by different hair cells following a tonotopic map.
Each hair cell is tuned to a characteristic frequency according to its morphological, mechanical, and electrical properties.
Hair cells operate much faster than other sensory receptors, which allows them to respond to sound frequencies beyond 100 kHz in some mammalian species.
Unique among sensory receptors, hair cells amplify their inputs to enhance their sensitivity, sharpen their frequency selectivity, and widen the range of stimulus levels that they can detect.
Amplification is done by length changes of outer hair cells called electromotility, and by hair bundle vibration.
The ear not only receives sounds but also emits them; these are called otoacoustic emissions. Spontaneous and evoked otoacoustic emissions result from the cochlea’s active amplification process.
The cochlea introduces conspicuous distortions that contribute to sound perception, preferentially amplifying weak sound stimuli.
The Hopf bifurcation provides a general principle of auditory detection that simplifies our understanding of hearing.
Chapter 27: The Vestibular System
Our inertial guidance system, called the vestibular system, detects and interprets motion through space and orientation relative to gravity.
The vestibular system of vertebrates has remained highly conserved across many species.
Vestibular signals originate in the labyrinths of the inner ear.
Two parts of the vestibular system
Otolith organs: utricle and saccule which measure linear accelerations.
Semicircular canals: which measure angular accelerations.
Hair cells in the vestibular labyrinth transduce acceleration stimuli into neural signals.
Angular or linear acceleration of the head leads to a deflection of stereocilia, which together compose the hair bundle.
Deflection of the stereocilia produce a depolarizing or hyperpolarizing receptor potential depending on which direction the hair bundle moves.
All vertebrate receptor hair cells receive efferent inputs from the brain stem, the function of which is subject to debate.
However, stimulation of the efferent fibers changes the sensitivity of the afferent axons from the hair cells, increasing excitability in some hair cells while inhibiting others.
The semicircular canals sense angular acceleration and thus head rotation.
The mechanism behind this is that when the head begins to rotate, the vestibular apparatus rotates with it, but because of inertia the endolymph, the fluid inside the semicircular canals, lags behind and deflects the hair bundles in the direction opposite to the rotation.
Each semicircular canal is optimally sensitive to rotations in its plane and each canal has a pair in the other ear.
Thus, the six canals effectively operate as three coplanar pairs.
The left and right ear semicircular canals have opposite polarity.
E.g. When you turn your head left, the receptors in the left horizontal semicircular canal will be excited while the right horizontal canal receptors will be inhibited.
The two otolith organs detect linear motion as well as the static orientation of the head relative to gravity, which itself is a linear acceleration.
Skipping over the vestibular nerve projections.
The vestibular commissural system communicates bilateral information.
How can we tell the difference between translating rightward and tilting leftward, when the linear acceleration signaled by the otolith afferents is the same?
We now know that convergent vestibular nuclei and cerebellar neurons use combined signals from both the semicircular canals and the otolith receptors and some simple computations to discriminate between tilt and translation.
Some central vestibular and cerebellar cells encode head tilt, whereas other cells encode translational motion.
Some vestibular nuclei neurons change their responses during actively versus passively generated head movements.
This change has been interpreted as sensory prediction error signals as the brain predicts how self-generated motion activates the vestibular organs and subtracts these predictions from afferent signals.
Without such error signals, accurate self-motion estimation would be severely compromised, suggesting that vestibular signals remain critically important when coupled to self-motion estimation and head movement control.
Vestibulo-ocular reflexes (VORs) stabilize the eyes when the head moves.
E.g. If you shake your head back and forth while reading, you can still read. But if you shake the book instead, you can’t read.
Two components of VORs
Rotational: compensates for head rotation and receives input mostly from the semicircular canals.
Translational: compensates for linear head motion.
Rotational VOR
When the semicircular canals sense head rotation in one direction, the eyes rotate in the opposite direction.
A trisynaptic pathway, the three-neuron arc, connects each semicircular canal to the appropriate eye muscle.
This reflex is old in terms of evolution as many invertebrate and all vertebrates have the ability to reflexively rotate their eyes opposite to the direction of head rotation.
The trisynaptic pathway isn’t enough to compensate for head rotations because the afferent signals from the semicircular canals are proportional to head velocity, while compensatory eye movements require changes in eye position.
To convert the velocity to position requires temporal integration that occurs in neural networks in the brain stem.
However, at high rotation speeds the elastic properties of the eyeball and eye muscles would snap the eye back to its default position so there must be a way of continuously activating the eye muscles.
Without constant input, the head rotation would initially bring the eye to the correct position but the eye would drift away since the oculomotor neurons would lack the tonic input to compensate for the elastic restoring forces of the eyeball.
This is exactly what happens after lesions to the brainstem and cerebellar structures that are thought to participate in this neural integration.
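A toy simulation of why the integrator matters (a first-order "eye plant" with illustrative time constant and gains, not physiological values): driving the plant with a velocity command alone lets the eye drift back to center once the head stops, while adding the integrated (position) command holds the eye at its new eccentric position.

```python
import numpy as np

# Toy first-order eye plant: tau * d(eye)/dt = command - eye, so without a
# tonic command the elastic plant relaxes back toward center.
dt, tau = 0.001, 0.2
t = np.arange(0.0, 2.0, dt)
head_velocity = np.where(t < 0.2, 100.0, 0.0)   # brief 100 deg/s head turn

def simulate(use_integrator):
    eye, integrated, trace = 0.0, 0.0, []
    for v in head_velocity:
        integrated += -v * dt                             # neural integrator: desired eye position
        command = -v * tau + (integrated if use_integrator else 0.0)
        eye += dt * (command - eye) / tau                 # first-order plant dynamics
        trace.append(eye)
    return trace

print(f"without integrator: final eye position {simulate(False)[-1]:7.3f} deg (drifts back to center)")
print(f"with integrator:    final eye position {simulate(True)[-1]:7.3f} deg (held eccentric at -20)")
```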
Translational VOR
The horizontal compensatory eye movements that are caused by lateral motion scale with target distance.
E.g. The closer the target, the larger the compensatory eye movement.
The translational VOR differs from the rotational VOR in the ability to generate compensatory eye movements.
These abilities appear to be specific to frontal-eye animals as many lateral-eye species don’t generate eye movements that compensate for lateral motion.
VORs are poor at compensating for sustained motion at constant speed, so they’re supplemented by optokinetic responses.
VORs aren’t always appropriate and are under the control of the cerebellum and cortex.
E.g. If you turn your head while walking, you want your gaze to follow. VORs would prevent your eyes from turning with your head. So you want to suppress VORs in this case.
During volitional head movements, the VOR can be suppressed by the cerebellum and cortex.
The VOR is also continuously calibrated and updated to maintain accuracy in the face of changes to the motor system or neural pathways, using the clarity of vision during head movements as the error signal.
The VOR is a highly modifiable reflex and is necessary for anyone who wears glasses.
If the flocculus and paraflocculus of the cerebellum are lesioned, the gain of the VOR can no longer be modulated.
The climbing fiber input to the cerebellum carries a retinal error signal thought to serve as a teaching signal, enabling the cerebellum to correct the error in the VOR.
Other uses for the vestibular system
Tilt perception: vestibular information is critical for spatial orientation relative to gravity.
Visual-vertical perception
Visuospatial constancy
Electrical stimulation of area 2v in humans produces sensations of whole-body motion.
So far, there’s no evidence linking vestibular nuclei response properties directly to head direction cells or other spatially tuned cell types, and no direct projections from the vestibular nuclei to the brain areas thought to be spatially tuned.
Losing one labyrinth means that all vestibular reflexes must be driven by a single labyrinth.
For the VOR, this is effective at low speeds, but during fast rotations toward the lesioned side, inhibition alone isn’t sufficient and the gain of the reflex is reduced.
The symptoms of bilateral vestibular loss are different from unilateral loss.
E.g. Vertigo is absent because there’s no imbalance in vestibular signals, and there’s no spontaneous nystagmus.
The loss of vestibular reflexes is devastating as you can’t read without steadying your head and any motion prevents you from performing object recognition.
Highlights
The vestibular system provides the brain with a rapid estimate of head movement, which is used for balance, visual stability, spatial orientation, movement planning, and motion perception.
Vestibular receptor hair cells are mechanotransducers that sense rotational and linear accelerations.
Receptor cells are polarized to detect the direction of motion.
Three semicircular canals in each ear detect rotational motion and work in bilateral synergistic pairs through convergent commissural pathways in the vestibular nuclei.
Projections from the vestibular nuclei to the oculomotor system allow eye muscles to compensate for head movement through the vestibulo-ocular reflex to hold the image of the external world motionless on the retina.
Cortical projections to the vestibular and oculomotor nuclei allow volitional eye movements to be separated from reflex eye movements.
Motor learning in vestibulocerebellar networks provides compensatory changes in eye movement responses to changing visual conditions.
Chapter 28: Auditory Processing by the Central Nervous System
To understand how animals process sound, it’s useful to consider which features/cues are available.
For localizing sounds, the interaural time and intensity differences carry information about where sounds come from.
The size of the head determines how interaural time delays are related to the location of sound sources.
Humans can localize a sound source with an interaural time difference of 10 microseconds. Interaural time differences are particularly well conveyed by neurons that encode relatively low frequencies.
E.g. Neurons can fire at the same position in every cycle of the sound wave and this encodes the interaural time difference as an interaural phase difference.
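A back-of-the-envelope sketch of how head size sets the available interaural time differences, using a simple path-length approximation (the interaural distance and the formula are rough assumptions, not the chapter’s numbers).

```python
import math

# ITD ~ d * sin(azimuth) / c for a simple path-length model of the head.
d, c = 0.22, 343.0          # interaural distance (m), speed of sound (m/s)

def itd_us(azimuth_deg):
    return 1e6 * d * math.sin(math.radians(azimuth_deg)) / c

for az in (0, 1, 30, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {itd_us(az):6.1f} microseconds")
# ~640 microseconds at 90 deg; resolving ~10-microsecond differences therefore
# corresponds to localizing a source to within about a degree near the midline.
```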
However, interaural time differences are much harder to detect at higher frequencies so instead, the auditory system uses intensity differences.
High frequency sounds produce sound shadows or intensity differences between the two ears, which is the feature that the brain uses for localizing high frequency sounds.
Mammals localize sounds in the horizontal plane by having two separate ears, while we localize sounds in the vertical plane by the sound’s interaction with the body and pinna.
Interaural time and intensity don’t vary with vertical displacement, so it’s impossible to localize a pure tone in the vertical plane without the pinna.
One reason human speech can be recognized and understood over noise and distortions is because speech has redundant cues.
The cochlear nerve delivers acoustic information in parallel pathways to the tonotopically organized cochlear nuclei.
The afferent nerve fibers from cochlear ganglion cells are bundled in the vestibulocochlear nerve (cranial nerve VIII) and terminate exclusively in the cochlear nuclei.
The cochlear nerve is made up of two groups of fibers: 95% are myelinated fibers from inner hair cells and 5% are unmyelinated fibers from outer hair cells.
We understand the myelinated fibers more than the unmyelinated ones.
Both groups detect energy over a narrow range of frequencies; the tonotopic map of the cochlear nerve thus carries detailed information about how the frequency content of sounds change.
The unmyelinated fibers integrate information from a wide region of the cochlea but aren’t responsive to sound. Perhaps they signal cochlear damage and pain after exposure to loud sounds.
Two features of cochlear nuclei
Organized tonotopically.
Each cochlear nerve fiber innervates several different areas within the cochlear nuclei.
The auditory pathway comprises at least four different ascending pathways that simultaneously extract different acoustic information from the signals carried by cochlear nerve fibers.
The ventral cochlear nucleus extracts temporal and spectral information about sounds.
The principal cells of the unlayered ventral cochlear nucleus sharpen temporal and spectral information and convey it to higher centers of the auditory system.
Different types of cells in the cochlear nuclei extract distinct types of acoustic information from cochlear nerve fibers.
Three types of cochlear nuclei neurons
Bushy cells: project bilaterally to the superior olivary complex and convey the timing and intensity information used for interaural comparisons.
Stellate cells: have a tonotopic organization that encodes the spectrum of sounds.
Octopus cells: detect the onset of sounds, allowing animals to detect brief gaps.
The dorsal cochlear nucleus integrates acoustic with somatosensory information in making use of spectral cues for localizing sounds.
Among vertebrates, only mammals have dorsal cochlear nuclei.
Recent experiments suggest that the circuits of the dorsal cochlear nuclei distinguish between unpredictable and predictable sounds.
E.g. Self-generated sounds such as chewing or talking are predictable and are canceled through these circuits.
The superior olivary complex in mammals contains separate circuits for detecting interaural time and intensity differences.
The medial superior olive (MSO) generates a map of interaural time differences.
Review of phase-locking, Jeffress sound localization model, and delay lines.
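As a minimal sketch of the Jeffress idea (all signal parameters here are assumed): each coincidence detector is tuned to one internal delay, and the detector whose delay matches the interaural time difference responds most strongly. Computationally this is equivalent to cross-correlating the two ears' signals over candidate delays.

```python
# Minimal sketch of Jeffress-style ITD estimation by coincidence detection,
# implemented as a cross-correlation over candidate internal delays.
import numpy as np

fs = 100_000                         # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)       # 20 ms of signal
true_itd = 300e-6                    # sound arrives 300 us earlier at the left ear

left = np.sin(2 * np.pi * 500 * t)                  # 500 Hz tone at the left ear
right = np.sin(2 * np.pi * 500 * (t - true_itd))    # delayed copy at the right ear

# Each "coincidence detector" is tuned to one internal delay: it responds most
# when the internally delayed left signal lines up with the right signal.
candidate_delays = np.arange(-700e-6, 700e-6, 10e-6)
responses = []
for d in candidate_delays:
    shifted_left = np.roll(left, int(round(d * fs)))
    responses.append(np.dot(shifted_left, right))

best = candidate_delays[int(np.argmax(responses))]
print(f"estimated ITD ~ {best * 1e6:.0f} us (true: {true_itd * 1e6:.0f} us)")
```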
The lateral superior olive (LSO) detects interaural intensity differences, but doesn’t form a map of the location of sounds in the horizontal plane.
In humans, interaural intensities can differ in sounds with frequencies greater than about 2 kHz.
Sounds from the ipsilateral side generate strong excitation and weak inhibition, whereas those that come from the contralateral side generate strong inhibition and weak excitation.
Thus, neurons in the LSO are activated more strongly by sounds from the ipsilateral than the contralateral hemifield.
If sounds come from the midline, then ipsilateral excitation and contralateral inhibition must arrive at neurons in the LSO at the same time to cancel out the net excitation.
The superior olivary complex provides feedback to the cochlea which controls the sensitivity of the cochlea and protects it from damage by loud sounds.
The ventral nuclei of the lateral lemniscus are involved in processing the meaning of sounds.
To localize sounds accurately, animals must ignore the reflections of sounds from surrounding surfaces that arrive after the initial wave front.
Precedence effect: the phenomenon where mammals suppress all but the first-arriving sound.
It’s been proposed that the persistent inhibition in the inferior colliculus from the dorsal nucleus of the lateral lemniscus serves to suppress spurious localization cues and thus is behind the precedence effect.
All afferent auditory pathways converge in the inferior colliculus.
Sound location information from the inferior colliculus creates a spatial map of sound in the superior colliculus.
The superior colliculus is critical for reflexive orienting movements of the head and eyes to acoustic and visual cues in space.
The superior colliculus is also where all information regarding sound localization converges, which is critical since binaural differences in level and timing alone can’t unambiguously code for a single position in space.
Within the superior colliculus, the auditory map is aligned with maps of visual space and body surface.
Auditory, visual, and somatosensory neurons in the superior colliculus all converge on output pathways in the same structure that controls movement of eyes, head, and external ears.
The motor maps of targets in space in the superior colliculus are aligned with the sensory maps, which facilitates the sensory guidance of movements.
In a parallel pathway, the inferior colliculus sends ascending auditory information to the medial geniculate body of the thalamus and from there to the auditory cortex.
Stimulus selectivity progressively increases along the ascending pathway.
E.g. An auditory nerve fiber is selective primarily along one stimulus dimension, frequency, but neurons in the central auditory system may be selective along multiple dimensions: frequency, spectral bandwidth, intensity, modulation, and spatial location.
The region of preferred stimuli becomes progressively smaller at successive structures along the path to the auditory cortex.
The majority of neurons in the auditory cortex are preferentially driven by stimuli with greater spectral and temporal complexity than pure tones and broadband noises.
These neurons also change their firing pattern: they respond not only with higher firing rates but also with sustained firing throughout the stimulus duration.
This is significant because it provides a direct link between neural firing and the perception of a continuous acoustic event, as such sustained firing by auditory cortex neurons has only been observed in awake animals.
In contrast, an auditory nerve fiber typically shows sustained firing in response to a wide range of acoustic signals.
The overall picture so far is that when a sound is heard, the auditory cortex first responds with transient discharges (encodes the onset of a sound) across a large population of neurons and as time passes, the activation becomes restricted to a smaller population of neurons that are preferentially driven by the sound.
The auditory cortex has numerous maps of sound.
In monkeys, neurons tuned to low frequencies are found at the rostral end of A1, while those responsive to high frequencies are found at the caudal end.
Thus, like the visual and somatosensory cortices, A1 contains a map reflecting the sensory periphery.
In many species, subregions of the auditory cortex that represent important frequencies are larger than others because of extensive inputs which require more processing.
Other features of auditory stimuli are mapped in the primary auditory cortex, but the overall organization is less clear and precise than for vision.
A1 is also organized according to bandwidth (responsiveness to a narrow or broad range of frequencies), neuronal response latency, loudness, modulation of loudness, and rate and direction of frequency modulation.
It’s unknown how these various features interact and integrate.
Sensory representation in A1 can change in response to alterations in input pathways.
E.g. After peripheral hearing loss, the tonotopic map in A1 can change so that neurons previously responsive to sounds within the lost range begin to respond to adjacent frequencies.
Raising animals in acoustic environments where they're exposed to repeated tone pulses of a certain frequency results in a persistent expansion of the cortical area devoted to that frequency, followed by a general deterioration and broadening of the tonotopic map.
In contrast to the auditory midbrain, there isn’t any evidence for a spatially organized map of sound in any of the cortical areas sensitive to sound location.
Why is there a second sound-localization pathway connected to gaze control circuitry when the midbrain pathway from location-sensitive neurons in the inferior colliculus to the superior colliculus to gaze control circuitry directly controls orientation movements of the head, eyes, and ears?
Behavioral experiments shed light on this question: although lesions of A1 can produce profound sound-localization deficits, no deficit is seen when the task is simply to indicate the side of the sound source. The deficit appears only when the location must be approached.
Cortical and subcortical sound-localization pathways have parallel access to gaze control centers, which contributes some redundancy.
In both mammals and birds, the general difference is that cortical pathways are required for more complex sound-localization tasks.
Subcortical circuits are important for rapid and reliable performance of behaviors critical to survival, whereas cortical circuits are used for working memory, complex recognition, and the selection of stimuli and evaluation of their significance.
The auditory cortex, like the visual cortex, is also segregated into separate processing streams.
Although the idea that all sensory areas of the cerebral cortex initially segregate object identification and location is attractive, it’s likely an oversimplification.
An intriguing feature of all mammalian cortical areas, including the auditory areas, is the massive projection from the cortex back to lower areas.
These feedback projections are used to actively adjust auditory signal processing in subcortical structures.
As sound information moves up the processing hierarchy, the precision of timing in sounds gradually decreases.
E.g. The phase-locking upper limit in the auditory nerve is 3000 Hz, 300 Hz in the medial geniculate nucleus, and less than 100 Hz in A1.
The upper limit of phase-locking in A1 is similar to that found in the primary visual and somatosensory areas of cortex.
In the auditory cortex, the temporal firing pattern alone is inadequate to represent the entire range of time-varying sounds that we perceive.
Instead, A1 has two populations of neurons, one that displays phase-locked periodic firing in response to click trains with long intervals between clicks, and one that fires increasingly rapidly as the click interval becomes shorter.
These two populations of A1 neurons, respectively called synchronized and nonsynchronized, have complementary response properties.
Neurons of the synchronized population explicitly represent slowly occurring sound events by synchronized neural firing (temporal code), while neurons of the nonsynchronized population implicitly represent rapidly changing sound events by changes in average firing rates (rate code).
In A1, the neural representation changes from a temporal code to a rate code at about 40 Hz.
40 Hz is also near the boundary of where our perception of a periodic click train changes from being discrete to continuous.
The progressive reduction in the upper limit of phase-locking along the ascending auditory pathway is matched by the emergence of firing-rate-based representations.
There’s a considerable transition from temporal to rate coding by the time auditory signals reach the auditory cortex.
This transition is necessary for auditory information to be integrated with information from other sensory modalities that are intrinsically slower.
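A toy illustration of the synchronized/nonsynchronized division, with all numbers assumed: below roughly 40 Hz a click train can be represented by spikes locked to each click, while above that the clicks come too fast to follow one-for-one and only an average firing rate remains informative.

```python
# Toy illustration (assumed numbers) of the transition from a temporal code to a
# rate code for periodic click trains near ~40 Hz.
LOCKING_LIMIT_HZ = 40.0          # assumed upper limit for firing one spike per click

def synchronized_rate(click_rate_hz):
    """Synchronized population: one phase-locked spike per click, if it can follow."""
    return click_rate_hz if click_rate_hz <= LOCKING_LIMIT_HZ else None

def nonsynchronized_rate(click_rate_hz):
    """Nonsynchronized population: average firing rate simply grows with click rate."""
    return 5.0 + 0.5 * click_rate_hz      # assumed monotonic rate function

print("click rate | temporal code (locked spikes/s) | rate code (spikes/s)")
for r in (10, 20, 40, 80, 160):
    print(f"{r:>9}  | {str(synchronized_rate(r)):>30} | {nonsynchronized_rate(r):>19.1f}")
# Below ~40 Hz the click times are represented explicitly by spike timing; above
# ~40 Hz only the average firing rate carries the click-rate information, which
# parallels the perceptual change from discrete clicks to a continuous sound.
```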
Primates have specialized cortical neurons that encode pitch and harmonics.
The change in representation of harmonic sounds from auditory nerve fibers to the auditory cortex reflects a principle of neural coding in sensory systems.
Neurons in sensory pathways transform the representation of physical features, such as frequency or luminance, into a representation of perceptual features, such as pitch or brightness.
Such features lead to the formation of auditory or visual percepts.
We know relatively little about how speech sounds are analyzed by neural circuits.
The auditory system must distinguish whether an auditory percept is self-generated or externally generated, for example in order to suppress responses to our own voice.
We also listen to our own voice to detect errors and correct them through feedback.
The vocalization-induced suppression begins several hundred milliseconds prior to the onset of vocalization, suggesting that these neurons receive modulatory signals from vocal production circuits.
Why do we suppress our auditory cortex when we speak?
A simple answer is so that we don’t hear our own loud voice. A more interesting answer is that this suppression comes from a vocal feedback-monitoring network in auditory cortex.
In humans, there’s less or no suppression of the auditory cortex if vocal feedback is experimentally altered through earphones.
This sensitivity to feedback perturbations suggests that neurons exhibiting vocalization-induced suppression are part of a network responsible for monitoring vocal feedback signals.
Thus, there are two mechanisms behind the suppression: internal modulation due to corollary discharges and external modulation due to vocal feedback.
Vocalization-induced suppression of auditory responses has been seen in several mammalian subcortical structures such as the brain stem and inferior colliculus.
Highlights
Sound localization uses the interaural time and intensity difference for horizontal plane localization.
Auditory neurons along the ascending pathway progressively increase their stimulus selectivity.
The ventral cochlear nucleus extracts three sound features
Detect coincident firing of auditory nerve fibers that’s useful for detecting onsets and gaps in sounds.
Detect and sharpen the encoding of spectral peaks and valleys. Spectral information is used for understanding the meaning of sounds and for localizing their sources.
Sharpen and convey information about the fine structure of sounds to make interaural comparisons of timing and intensity of sounds to localize sound sources.
The dorsal cochlear nucleus integrates acoustic signals with somatosensory information, helping to distinguish between an animal’s own movements from those coming from the environment.
The inferior colliculus carries information about sound location to the superior colliculus, which controls reflexive orienting movements of the head and eyes.
The auditory cortex also transforms rapidly varying features of sounds into firing-rate-based representations, while representing slowly varying sounds using spike timing.
Speaking induces suppression of neural activity in the auditory cortex prior to vocal onset, and this suppression results from a vocal-feedback-monitoring network and internal signaling.
Chapter 29: Smell and Taste: The Chemical Senses
We’re capable of detecting more than 10,000 different volatile chemicals.
Certain features of chemosensation have been conserved throughout evolution, whereas others are specialized adaptations of individual species.
Odorants: volatile chemicals perceived as odors.
Olfactory sensory neurons are in the nose and have a relatively short life span of only 30 to 60 days. They’re continuously replaced from a layer of basal stem cells in the epithelium.
The olfactory sensory neuron is a bipolar nerve cell with a dendrite that extends to the surface of the nasal epithelium and an axon that projects towards the brain.
We have approximately 350 different odorant receptors, whereas mice have approximately 1,000.
The binding of an odorant to its receptor induces a cascade of intracellular signaling events that depolarize the olfactory sensory neuron.
However, we rapidly adapt to odors as seen in the weakening of detection of an unpleasant odor that’s continuously present.
Different combinations of receptors encode different odorants.
To be distinguished perceptually, different odorants must cause different signals to be transmitted from the nose to the brain.
Two ways different signals are captured
Each olfactory sensory neuron expresses only one type of odorant receptor.
Each receptor recognizes multiple odorants, and each odorant is detected by multiple different receptors.
So, each odorant is detected and encoded by a unique combination of receptors and thus causes a distinctive pattern of signals to be transmitted to the brain.
The combinatorial coding of odorants greatly expands the discriminatory power of the olfactory system.
Interestingly, even odorants with nearly identical structures are recognized by different combinations of receptors, which explains why a slight change in chemical structure can alter its perceived odor.
Changes in concentration of an odorant can also change the perceived odor.
E.g. Thioterpineol smells like tropical fruit → Grapefruit → Putrid at higher concentrations.
The explanation is that as the concentration increases, additional receptors with lower affinity for the odorant are recruited and thus change the combinatorial receptor code.
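A minimal sketch of this combinatorial, concentration-dependent code (the receptor names and affinity thresholds are hypothetical): as concentration rises, lower-affinity receptors join the active set, so the downstream "code" changes.

```python
# Sketch of a combinatorial odorant code in which higher concentrations recruit
# additional, lower-affinity receptors and so change the code. Receptor names and
# thresholds are hypothetical.
RECEPTOR_THRESHOLDS = {      # minimum concentration (arbitrary units) to activate
    "R1": 0.1,   # high affinity
    "R2": 0.5,
    "R3": 5.0,   # low affinity
    "R4": 20.0,  # very low affinity
}

def receptor_code(concentration):
    """Return the set of receptors activated at a given concentration."""
    return {r for r, thresh in RECEPTOR_THRESHOLDS.items() if concentration >= thresh}

for c in (0.2, 1.0, 10.0, 50.0):
    print(f"concentration {c:>5}: active receptors {sorted(receptor_code(c))}")
# Because the brain reads out a different combination of receptors at each
# concentration, the same chemical can smell different as it gets stronger.
```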
Olfactory information is transformed along the pathway to the brain from the olfactory epithelium to the olfactory bulb and then the olfactory cortex.
The olfactory epithelium is divided into different zones that have different receptor types.
Neurons with the same receptor are randomly scattered within a zone so neurons with different receptors are interspersed.
All zones contain a variety of receptors, and an odorant may be recognized by receptors in different zones.
So, there’s only a rough organization of odorant receptors into spatial zones and the information outputted is highly distributed across the epithelium.
In each glomerulus, the axons of several thousand sensory neurons converge to about 40-50 relay neurons. This convergence results in approximately a 100-fold decrease in the number of neurons transmitting olfactory signals.
The organization of sensory information in the olfactory bulb is drastically different from the epithelium.
E.g. Although olfactory sensory neurons with the same odorant receptor are randomly scattered within a zone of the epithelium, their axons typically converge on two glomeruli on either side of the olfactory bulb.
Each glomerulus receives input from just one type of odorant receptor, resulting in a precise arrangement of sensory inputs from different odorant receptors that’s similar between individuals.
Since each odorant is recognized by a unique combination of receptor types, each also activates a unique combination of glomeruli in the olfactory bulb.
Two advantages to this organization
Signals from thousands of sensory neurons with the same odorant receptor type always converge on the same few glomeruli, which may optimize the detection of odorants at low concentrations.
Even though receptors are continually replaced, the olfactory bulb remains unchanged, resulting in a stable neural code for an odorant.
One mystery is how all the axons of the same olfactory receptor type converge to the same glomeruli.
It’s thought that the olfactory bulb sharpens the contrast between relevant and irrelevant sensory information before its transmission to the cortex. This is achieved by lateral inhibition by periglomerular cells.
The olfactory cortex receives direct projections from the olfactory bulb and comprises six major areas.
The functions of the different olfactory cortical areas are largely unknown.
The olfactory cortex projects to the olfactory bulb, providing another possible means of signal modulation.
Projections from the olfactory bulb to the piriform cortex indicate that the highly organized map of odorant receptor inputs in the olfactory bulb isn’t repeated in the cortex.
People with lesions of the orbitofrontal cortex can't discriminate odors, and among healthy people there's a 1,000-fold range in the ability to discriminate odors (olfactory acuity).
In many animals, the olfactory system detects not only odors but also pheromones, chemicals that influence the behavior or physiology of other members of the same species.
Pheromones play important roles in a variety of mammals, but they haven’t been shown to exist in humans.
Pheromones are detected by two separate structures: the olfactory epithelium, where odorants are detected, and the vomeronasal organ, which is thought to be specialized for detecting pheromones but isn't present in humans.
Skipping over the details of the vomeronasal organ.
Convergence of many olfactory sensory axons onto a few projection neurons leads to a great increase in the signal-to-noise ratio of olfactory signals, so projection neurons are much more sensitive to odor than individual olfactory neurons.
In C. elegans, diacetyl is normally attractive but when the diacetyl receptor is experimentally expressed in an olfactory neuron that normally senses repellents, the animals are instead repelled by diacetyl.
This indicates that specific sensory neurons encode the hardwired behavioral responses of attraction or repulsion and that a labeled line connects specific odors to specific behaviors.
Similar ideas have emerged from genetic manipulations of taste systems in mice and flies, where sweet and bitter preference pathways are encoded by different sets of sensory cells.
Strategies for olfaction have evolved rapidly between mammals, nematodes, and insects because of a fundamental difference between olfaction and other senses such as vision, touch, and hearing.
Most senses are designed to detect physical entities with reliable physical properties such as photons and pressure waves. By contrast, olfactory systems are designed to detect organic molecules that are infinitely variable and don’t fit a simple continuum of properties.
The organic molecules that are detected are produced by other living organisms which evolve faster than the world of light, pressure, and sound.
The gustatory system controls the sense of taste, which has five submodalities that reflect essential dietary requirements.
E.g. Sweet, bitter, salty, sour, and umami/savory.
Unlike the olfactory system, which distinguishes between millions of odors, the gustatory system recognizes just a few taste categories.
Five submodalities of taste
Sweet: invites consumption of energy-rich foods.
Bitter: warns against ingesting toxins.
Salty: maintains proper electrolyte balance.
Sour: signals acidic, unripened, or fermented foods.
Umami: signals protein-rich foods.
Consistent with the nutritional importance of carbohydrates and proteins, both sweet and umami tastants elicit innately pleasurable sensations in humans and animals.
In contrast, bitter and sour tastants elicit innately aversive responses in humans and animals.
Taste is often thought to be synonymous with flavor, but taste only means the five qualities encoded by the gustatory system, whereas flavor means the rich and integrated signals from the gustatory, olfactory, and somatosensory systems.
Taste cells are very short-lived (days to weeks) and are continually replaced from the stem cell population.
Each taste modality is detected by distinct sensory receptors and cells.
Skipping over the protein and molecular details of each taste receptor.
A dramatic demonstration that each taste submodality is handled independently comes from studies of mice lacking a specific taste receptor gene or cell type. These studies show that the loss of one taste modality doesn't affect the others.
E.g. Mice with genetically removed sweet cells don’t detect sugars, but still detect amino acids, bitter compounds, salts, and sour compounds.
It’s the taste cells, rather than the receptors, that determine the animal’s response to a tastant.
E.g. If the human bitter receptor (which mice don’t have) is expressed in mice, it causes strong taste aversion. However, when that same receptor was expressed in sweet cells, the bitter receptor elicited strong taste acceptance.
These findings show that the innate responses of mice to different tastants operate by labeled lines that link the activation of different subsets of taste cells to different behavioral outcomes.
The release of neurotransmitter from taste cells onto the sensory fibers induces APs in the fibers that transmit directly to the taste area of the thalamus.
From the thalamus, taste information is transmitted to the gustatory cortex, a region along the border between the anterior insula and the frontal operculum, and to the hypothalamus.
The gustatory cortex is believed to mediate the conscious perception and discrimination of taste stimuli.
Experiments in mice show that direct control of the primary taste cortex can evoke specific, reliable, and robust behaviors that mimic responses to natural tastants.
To find if these cortically triggered behaviors are innate, similar stimulation experiments were done on mutant mice that had never tasted sweet or bitter chemicals. Even in these animals, activation of the corresponding cortical areas triggered the expected behavioral response, thus substantiating the innate nature of the sense of taste.
Highlights
Odor detection is mediated by a large family of odorant receptors with humans having about 350 receptors.
Individual odorant receptors can detect multiple odorants, and different odorants activate different combinations of receptors.
This combinatorial strategy explains how we can discriminate many odorants and how nearly identical odorants can have different scents.
Each olfactory sensory neuron expresses a single type of receptor. Thousands of neurons with the same receptor are distributed over the olfactory epithelium.
In the olfactory bulb, axons from neurons expressing the same receptor converge in a few receptor-specific glomeruli, generating a map of odorant receptor inputs that’s similar between individuals.
The olfactory bulb projects broadly to multiple areas of the olfactory cortex, resulting in a highly distributed organization of cortical neurons responsive to individual odorants.
The gustatory system detects five basic tastes: sweet, sour, bitter, salty, and umami/savory.
The detection of the five different taste modalities is mediated by different taste receptor cells, each dedicated to one modality.
Taste signals travel from taste buds through cranial nerves to the taste area of the thalamus, and then the gustatory cortex.
The gustatory cortex contains hot spots for sweet and bitter tastes which, when directly stimulated, can elicit behavioral responses similar to those evoked by actually tasting sweet or bitter compounds.
Part V: Movement
The vast repertoire of motions that humans are capable of comes from the activity of some 640 skeletal muscles, all under the control of the CNS.
The task of motor systems is the reverse of the task of the sensory systems.
E.g. Sensory processing generates an internal representation of the outside world in the brain, while motor processing begins with an internal representation and changes the outside world.
Critically, this internal representation needs to be continuously updated by internally generated information (efference copy) and by external sensory information to maintain accuracy as the movement unfolds.
Because many of the motor acts of daily life are unconscious, we’re often unaware of their complexity.
E.g. Simply standing requires continual adjustments of numerous postural muscles in response to vestibular signals due to swaying.
E.g. Walking, running, and other forms of locomotion involve the combined action of central pattern generators, gated sensory information, and descending commands, which together generate the complex pattern of alternating excitation and inhibition in the appropriate sets of muscles.
Many motor actions occur far too quickly to be shaped by sensory feedback; instead, the brain uses predictive models that simulate the consequences of outgoing commands to correct fast motor actions.
Motor learning provides one of the most fruitful subjects for studies of neural plasticity.
Like sensory systems, motor systems are organized in a functional hierarchy with each level concerned with a different decision.
E.g. Purpose of movement, formation of a motor plan, spatiotemporal characteristics of a movement, and details of the muscle contractions needed.
This coordination is executed by the primary motor cortex, brain stem, and spinal cord.
As would be expected for such a complex system, the motor system is subject to various malfunctions.
Chapter 30: Principles of Sensorimotor Control
An important function of sensory representations is to shape the actions of motor systems.
Voluntary movements are generated by neural circuits that span different levels of sensory and motor hierarchies.
Focal damage to different structures can cause distinct motor deficits.
Although it’s tempting to suggest that these individual structures have distinct functions, these brain and spinal areas normally work together as a network, so that damage to one component likely affects the function of all others.
The control of movement poses challenges for the nervous system.
The ease with which we move masks the complexity of the control processes involved.
Many factors that are responsible for this complexity become clearly evident when we try to build machines that perform human-like movement.
E.g. Although we have computers that can now beat the world’s best players at chess and Go, no robot can manipulate a chess piece with the dexterity of a 6-year-old child.
We’ll see how the motor system reduces the degrees of freedom of the musculoskeletal system by controlling groups of muscles, called synergies, to simplify control.
Challenges of motor control
Motor systems have to deal with different forms of uncertainty.
Motor systems have to determine which of the 600 muscles to use to perform the correct action.
Noise corrupts many signals and is present at all stages of sensorimotor control.
Time delays are prevalent at all stages of the sensorimotor system.
The body and environment both change on a short and a long timescale.
E.g. Muscle fatigue and muscle growth.
The relationship between motor command and ensuing action is highly complex.
Actions can be controlled voluntarily, rhythmically, or reflexively.
E.g. Breathing can be voluntary as before diving under water, rhythmic in a regular cycle of inspiration and expiration, or reflexive in response to a noxious stimulus causing a cough.
Voluntary movements: actions that are under conscious control.
Rhythmic movements: actions that can be voluntary but differ in their timing and are, to a large extent, controlled autonomously by spinal or brain stem circuitry.
Reflex movements: stereotyped actions in response to specific stimuli that are generated by neural circuits in the spinal cord or brain stem.
The advantage of reflexes is that they're fast, but this also means that they're less flexible than voluntary control systems.
Although we may consciously intend to perform a task or plan a sequence of actions, movements generally seem to occur automatically.
E.g. We move without thinking about the actual joint motions or muscle contractions required.
Muscles can be approximated to act like a spring and damper.
In reality, the distinctions between these classes of movement are blurred in a continuum of responses spanning different latencies.
Increasing the response time allows additional neural circuitry to be involved in the sensorimotor loop and tends to increase the sophistication and adaptability of the response.
Thus, there’s a tradeoff between responsiveness and sophistication of processing.
Open-loop movement: when a motor command is generated without using sensory information.
For perfect open-loop control, one needs to invert the dynamics of motion to calculate the motor command that will generate the desired motion.
Inverse model: the neural mechanism that converts a desired motion into the motor command needed to produce it.
An inverse model with open-loop control can determine what motor commands are needed to produce the desired movement.
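A minimal sketch of inverse-model (open-loop) control, assuming the limb behaves like the mass-spring-damper mentioned above and using made-up parameter values:

```python
# Minimal open-loop inverse-model sketch: treat the limb as a mass-spring-damper
# and compute the command u(t) = m*x'' + b*x' + k*x that would reproduce a desired
# trajectory. Parameter values are assumed for illustration only.
import numpy as np

m, b, k = 1.0, 0.5, 2.0          # assumed mass, damping, stiffness
dt = 0.001
t = np.arange(0, 1.0, dt)

# Desired movement: smooth reach from 0 to 0.1 m (minimum-jerk profile).
tau = t / t[-1]
x_des = 0.1 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
v_des = np.gradient(x_des, dt)
a_des = np.gradient(v_des, dt)

# Inverse model: command needed at each instant (no sensory feedback used).
u = m * a_des + b * v_des + k * x_des

# Forward-simulate the plant with that command to check it reproduces x_des.
x, v = 0.0, 0.0
trace = []
for ui in u:
    a = (ui - b * v - k * x) / m
    v += a * dt
    x += v * dt
    trace.append(x)
print(f"final position: desired {x_des[-1]:.3f} m, open-loop {trace[-1]:.3f} m")
# With a perfect model this works, but any model error or perturbation goes
# uncorrected, which is why open-loop control suits simple, predictable plants.
```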
Although not monitoring the consequences of an action may seem counterproductive, the main reason to ignore feedback is to reduce delay.
E.g. Sensorimotor loops using visual stimuli take 120-150 ms for a motor response compared to saccades which take 30 ms and don’t use sensory feedback to guide movement.
While open-loop control is faster, any movement errors won’t be corrected and therefore will compound over time or over successive movements.
Also, the more complex the system under control, the more difficult it is to arrive at an accurate inverse model through learning.
An example of a purely open-loop control system is the control of the eye in response to head rotation: the vestibulo-ocular reflex. The reflex doesn’t require or use vision as the eyes maintain a stable gaze even when the head is rotated in the dark.
Such precise open-loop control is possible because the dynamic properties of the eye are relatively simple and the eye tends not to be perturbed by external events.
In contrast, it’s very difficult to optimize an inverse model for a complex musculoskeletal system such as an arm, which requires some form of error correction.
Internal model: a model that reflects reality within the nervous system.
Internal models allow an organism to contemplate the consequences of potential actions without actually committing to those actions.
Thus, internal models are the solution the CNS has developed to solve both control and prediction.
Control and prediction are two sides of the same coin and map exactly onto the inverse and forward models.
E.g. Control turns desired sensory consequences into motor commands and prediction turns motor commands into expected sensory consequences.
Closed-loop control: when a motor command is generated and monitored using sensory information.
E.g. How the thermostat in a house turns on the heat when the house temperature falls below the desired temperature.
However, this system has the drawback that the amount of heat put into the house isn’t related to the difference between the current and desired temperature. A better system is one where the control signal is proportional to the error.
By continuously correcting a movement, feedback control can be robust both to noise in the sensorimotor system and to environmental perturbations.
While feedback can update commands in response to deviations, it's sensitive to feedback delays. Shorter delays enable more precise control, whereas with longer delays the feedback becomes less useful and may make the system oscillate.
With a longer delay, the system may respond to errors that no longer exist and may even correct in the wrong direction.
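A small simulation makes the delay problem concrete (gain, delay, and plant are assumed for illustration): the same proportional controller that settles smoothly with a short feedback delay overshoots and oscillates when the delay is long.

```python
# Sketch of proportional (error-proportional) feedback control of a simple
# first-order plant with delayed feedback. Gains and delays are assumed values.
def simulate(delay_steps, gain=2.0, dt=0.01, steps=600, target=1.0):
    x = 0.0
    history = [x] * (delay_steps + 1)      # past states available to the controller
    trace = []
    for _ in range(steps):
        delayed_x = history[-(delay_steps + 1)]   # the controller sees an old state
        u = gain * (target - delayed_x)           # command proportional to the error
        x += u * dt                               # simple integrator plant
        history.append(x)
        trace.append(x)
    return trace

def summarize(trace):
    tail = trace[-100:]                    # last second of the simulation
    return f"peak={max(trace):.2f}, range over last second={max(tail) - min(tail):.2f}"

print("50 ms feedback delay:", summarize(simulate(delay_steps=5)))     # settles near 1.0
print("1 s feedback delay:  ", summarize(simulate(delay_steps=100)))   # overshoots, oscillates
```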
An example of closed-loop control compared to open-loop control is tracking a moving object. If you put out your finger and quickly move your head left and right while tracking your finger, it’s easy to fixate on your finger. However, if you instead move your finger left and right while keeping your head still, then it becomes much harder.
Even though the relative motion of finger to head is the same in both conditions, moving your head is precise because it uses the vestibulo-ocular reflex, whereas moving your finger uses feedback and is less precise.
In most motor systems, movement control is achieved through a combination of feedforward and feedback processes.
To accurately control movement, the brain requires information about the body’s current state such as the position and velocities of the different segments of the body.
However, estimating the body’s state isn’t trivial due to delays in sensory transduction and noise.
Sophisticated computation is required to estimate current body states as accurately as possible and several principles have emerged.
Principles of body state estimation
State estimation relies on internal models of sensorimotor transformations.
State estimation can be improved by combining multiple sensory modalities.
E.g. Just as we average a set of experimental data to reduce measurement error, averaging across sensory modalities can reduce the overall uncertainty in state estimation (a minimal weighting sketch follows this list).
The motor command, when combined with the current body state, can be used to predict the next body state.
E.g. A forward model can be used to anticipate how the motor system’s state will change using an efference copy (corollary discharge).
Using an efference copy is faster than using delayed sensory feedback since it’s available before the movement is carried out, and can therefore be used to anticipate state changes.
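A minimal sketch of the cue-combination idea from the list above (all numbers assumed): weighting each modality by its reliability (inverse variance) gives a combined estimate that is less uncertain than either modality alone.

```python
# Sketch of multisensory cue combination by inverse-variance weighting.
# The modality noise levels below are assumed for illustration.
def combine(est_a, var_a, est_b, var_b):
    """Reliability-weighted average of two independent estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

vision = (10.2, 0.5)          # estimated hand position (cm), variance
proprioception = (9.5, 2.0)   # noisier estimate of the same position

est, var = combine(*vision, *proprioception)
print(f"combined estimate {est:.2f} cm, variance {var:.2f}")
# The combined variance (0.4) is smaller than that of either modality alone
# (0.5 and 2.0), so fusing the two senses reduces uncertainty.
```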
It may seem surprising that the motor command is used in state estimation.
The drawbacks of using only sensory feedback (too slow) or only motor prediction (drift) can be dealt with by monitoring both and using a forward model to estimate the current state.
A neural system that does this is called an observer model.
The major objectives of the observer model are to compensate for sensorimotor delays and to reduce uncertainty in the estimate of the current state.
The observer model has been supported by empirical studies.
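A minimal observer-model sketch (dynamics, gains, and noise levels are all assumed): the estimate is advanced by a forward model driven by the efference copy of the motor command, and delayed, noisy sensory feedback is used to correct it.

```python
# Sketch of an observer model: a forward model predicts the next state from the
# efference copy of the motor command, and delayed sensory feedback corrects the
# prediction. All parameters are assumed for illustration.
import random

random.seed(0)
dt, delay_steps, obs_gain = 0.01, 10, 0.2
true_x, est_x = 0.0, 0.0
measurements, est_history = [], []     # sensory buffer (models delay) and past estimates

for step in range(300):
    u = 1.0 if step < 150 else -1.0    # simple push-then-pull motor command

    # True body state evolves (with a small unmodeled disturbance).
    true_x += u * dt + random.gauss(0, 0.002)
    measurements.append(true_x + random.gauss(0, 0.05))   # noisy, delayed sensor

    # 1) Prediction: advance the estimate using the efference copy of the command.
    est_x += u * dt
    est_history.append(est_x)

    # 2) Correction: when feedback from `delay_steps` ago arrives, compare it with
    #    what the observer predicted back then and nudge the current estimate.
    if step >= delay_steps:
        innovation = measurements[step - delay_steps] - est_history[step - delay_steps]
        est_x += obs_gain * innovation

print(f"true position {true_x:.3f}, observer estimate {est_x:.3f}")
```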
Active sensing: when movement is used to efficiently gather information.
Prediction and intermittency can compensate for sensorimotor delays.
Intermittency momentarily interrupts a movement with rest, giving the delayed sensory feedback time to catch up with the movement, as long as the rest interval is longer than the feedback delay.
E.g. Eye movements are intermittent: rapid saccades are separated by periods of stationary eye position.
Prediction is a better strategy than intermittency since we don’t have to interrupt the movement.
E.g. When a load is increased by a self-generated action, grip force increases instantaneously with load force. Sensory detection of the load would be too slow to account for this rapid increase in grip force, thus the motor system must predict it.
E.g. The waiter task. Hold a book on the palm of your hand and if you use your other hand to remove the book, the supporting hand remains still. However, if someone else removes the book, then it’s close to impossible to maintain hand stillness.
The brain is particularly sensitive to unexpected events or sensory prediction errors.
Prediction compensates for delay, but is also a key element in sensory processing.
Sensory signals don’t carry a label of “external” versus “internal” so our CNS must distinguish between the two using prediction.
Subtracting predictions of sensory signals that arise from our own movements from the total sensory feedback enhances signals that carry information about external events.
E.g. This explains why self-tickling is less intense than tickling by others.
With delayed tactile input, the predictions become inaccurate and thus fail to cancel sensory feedback, resulting in perceiving the input as external.
Current research suggests that sensory information used to control actions is processed in distinct neural pathways from those that contribute to perception.
Size-weight illusion: when lifting two objects of different size but equal weight, people report that the smaller object feels heavier.
The illusion is generated by high-level cognitive centers, but the sensorimotor system can operate independently of these centers; for example, the lifting forces quickly adapt to the objects' true, equal weights even while the illusion persists.
Motor plans translate tasks into purposeful movement. However, the range of possible movements and trajectories is infinite as there are an infinite number of combinations of muscles and trajectories.
Do we all move in a unique way or is there a pattern between people?
Evidence shows that we don’t all move in a unique way and that repetitions of the same behavior by one person, as well as comparisons between people, have shown that patterns of movement are very stereotypical/similar.
E.g. Our hands tend to move roughly in a straight path and hand speed is typically smooth, unimodal, and roughly symmetric.
To achieve such straight-line movement of the hand, the motor system must coordinate a combination of complex joint rotations.
The fact that hand trajectories are more invariant than joint trajectories suggests that the motor system is more concerned with controlling the hand, even at the cost of generating complex patterns of joint rotations.
Motor equivalence: when a movement can be performed regardless of the limb or body segment used.
E.g. We can write with any of our limbs, even though the writing may not be as clean.
Motor equivalence suggests that purposeful movements are represented in the brain abstractly rather than as sets of specific joint motions or muscle contractions, thus providing flexibility.
Why do people choose one particular way of performing a task out of the infinite possibilities?
The fundamental idea that’s emerged is that planning can be equated with choosing the best way to achieve a task.
This means optimizing a cost associated with movement. Different ways of achieving a task lead to different costs.
For movement, we appear to be optimizing for task success and effort/energy.
Task success is limited by noise in the activity of motor neurons; fluctuations in the number of active motor units produce fluctuations in force that grow with the size of the motor command.
In general, there’s a tradeoff between effort and accuracy. Being more accurate requires substantially more energy.
To optimize for task success and effort, the brain doesn’t specify the desired body state or trajectory but rather an optimal feedback controller to generate the movement.
Given the goal of the task, the controller specifies the motor command suitable for different possible body states.
The trajectory is then a consequence of applying the feedback controller to the current estimate of body state.
Optimal feedback control will only correct for deviations that are task relevant, thus allowing variations in task-irrelevant movement.
E.g. When pushing a door open, it doesn’t matter where you push the door so deviations in hand location are ignored.
The goal of optimal feedback control isn’t to eliminate all variability/noise, but to allow it to accumulate in dimensions that don’t interfere with the task while minimizing it in the dimensions that are task-relevant.
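A minimal sketch of this minimal-intervention idea (the task and all numbers are made up): two effectors press so that only their summed force matters; correcting only the sum lets variability accumulate in the task-irrelevant difference while the task-relevant sum stays tight.

```python
# Sketch of minimal-intervention control: correct deviations only along the
# task-relevant dimension. Here the task is "the two forces must sum to 10 N";
# how the sum is split between effectors is task-irrelevant. Values are assumed.
import random
import statistics

random.seed(1)
target_sum = 10.0
f1, f2 = 5.0, 5.0
sums, diffs = [], []

for _ in range(2000):
    # Motor noise perturbs each effector independently.
    f1 += random.gauss(0, 0.1)
    f2 += random.gauss(0, 0.1)
    # Minimal intervention: partially correct only the task-relevant error (the sum),
    # splitting the correction equally; the difference f1 - f2 is left alone.
    error = (f1 + f2) - target_sum
    f1 -= 0.4 * error
    f2 -= 0.4 * error
    sums.append(f1 + f2)
    diffs.append(f1 - f2)

print(f"sd of task-relevant sum:          {statistics.stdev(sums):.3f}")
print(f"sd of task-irrelevant difference: {statistics.stdev(diffs):.3f}")
# Variability accumulates in the irrelevant dimension (the split drifts widely)
# while the sum stays close to the target.
```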
Multiple processes contribute to motor learning, and while evolution can hardwire some innate behaviors, motor learning is required to adapt to new and changing environments.
New motor skills can't be acquired by a fixed neural system, so the underlying circuits must be flexible.
Most forms of motor learning involve procedural/implicit learning because subjects are generally unable to express what it is that they’ve learned.
Two types of sensorimotor learning
Adaptations to changes in the sensorimotor system.
Learning new skills.
Error-based learning involves adapting internal sensorimotor models by comparing the predicted outcome of a movement to its actual outcome.
The difference between the predicted and actual outcome, called the sensory prediction error, can be used to update the internal model.
Error-based learning tends to lead to trial-by-trial reduction in error as the motor system learns the new sensorimotor properties.
Evidence that we update our internal model during motor learning comes from experiments where subjects have to compensate for an anticipated force while moving something.
When the force is removed, subjects show a large aftereffect in the opposite direction due to correcting for a force that isn’t there anymore.
Motor adaptation may not be a single process as recent evidence suggests that adaptation is driven by interacting processes.
These interacting processes could have different temporal properties such as one that rapidly adapts but also rapidly forgets, while another process slowly adapts but also slowly forgets.
The advantage of such a mechanism is that learning processes can be matched to the temporal properties of the perturbations, which can range from short-lived to long-lasting.
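One common formalization is a two-rate state-space model, sketched here with assumed learning and retention rates (not the book's values): a fast process learns a large fraction of each error but forgets quickly, while a slow process learns little per trial but retains it, together reproducing rapid adaptation, the aftereffect, and slow washout.

```python
# Sketch of a two-rate model of error-based motor adaptation.
# Retention (A) and learning (B) rates are assumed for illustration.
A_fast, B_fast = 0.60, 0.40    # fast process: learns a lot, forgets quickly
A_slow, B_slow = 0.99, 0.03    # slow process: learns little, retains well

x_fast = x_slow = 0.0
for trial in range(1, 201):
    perturbation = 1.0 if trial <= 150 else 0.0   # e.g., a force field, then removed
    adaptation = x_fast + x_slow                  # net compensation on this trial
    error = perturbation - adaptation             # sensory prediction error
    x_fast = A_fast * x_fast + B_fast * error     # both processes update from the error
    x_slow = A_slow * x_slow + B_slow * error
    if trial in (1, 10, 150, 151, 160, 200):
        print(f"trial {trial:>3}: adaptation {adaptation:+.2f}, error {error:+.2f}")
# By trial 150 adaptation nearly matches the perturbation. When the perturbation is
# removed at trial 151, the still-adapted state produces an error in the opposite
# direction: the aftereffect. It then washes out gradually.
```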
In contrast to error-based learning where the sensorimotor system adapts to a perturbation, learning skills such as tying shoelaces, juggling, or typing involves improving performance in the absence of a perturbation.
Such learning tends to improve the speed-accuracy trade-off.
E.g. Typing faster but just as accurately.
If the task doesn’t provide a readily available error signal, such as if success is determined by a chain of movements, success can be achieved using reinforcement learning instead.
E.g. The correct sequence of leg and body movements required to make a swing go higher is complex, and the error (height from the ground) isn't directly controlled. So instead, we use reinforcement learning, reinforcing movements that do increase swing height.
Reinforcement learning is more general than error-based learning in that the training signal is success or failure, rather than error at each point in time.
A key problem that reinforcement learning solves is the credit assignment problem.
Credit assignment: which action within a sequence should we give credit or blame when we succeed or fail?
Reinforcement learning can be model-based or model-free.
While model-free learning avoids the computational burden of building a model, the tradeoff is that it’s less able to generalize to new situations.
To deal with noise, the sensorimotor learning system constrains the way in which the system is updated in response to errors. These constraints reflect the internal assumptions about the task structure and the source of errors, and determine how the system represents the task.
Highlights
Our motor actions are controlled by the integrated actions of the motor cortex, spinal cord, cerebellum, and basal ganglia.
To control action, the CNS uses a hierarchy of sensorimotor transformations that convert incoming sensory information into motor outputs.
There’s a tradeoff between speed and accuracy at the different levels of sensorimotor response; from reflexes to voluntary control.
The motor systems generate commands using feedforward circuits or error-correcting feedback circuits; most movements involve both.
The brain uses internal models of the sensorimotor system to facilitate control.
Body state is estimated using both sensory and motor feedback signals together with a forward predictive model to reduce the effect of delays in feedback.
Motor control circuits aren’t static but undergo continual modification and recalibration throughout life.
Motor learning improves motor control in new situations, and different forms of sensory information are vital for learning. Error-based learning is important for adapting to simple sensorimotor perturbations, while reinforcement learning is important for more complex skill learning and can be model-based or model-free.
The motor representations used by the brain constrain the way the sensorimotor system updates during learning.
Chapter 31: The Motor Unit and Muscle Action
Moving isn’t simple. Not only does the nervous system have to decide which muscles to activate, how much to activate them, and the sequence to active them, it must also control the influence of the resultant muscle forces on other body parts and maintain the required posture.
The motor unit is the elementary unit of motor control.
Motor unit: a motor neuron and the multiple muscle fibers it innervates.
A typical muscle is controlled by a few hundred motor neurons whose cell bodies are clustered in a motor nucleus in the spinal cord or brain stem.
The axon of each motor neuron exits the spinal cord through the ventral root or cranial nerve in the brain stem, and runs in a peripheral nerve to the muscle.
When the axon reaches the muscle, it branches and innervates from a few to several thousand muscle fibers, depending on how fine the control of that muscle needs to be.
APs transmitted down the axon release acetylcholine (ACh) at the neuromuscular synapse/junction.
Since APs in all muscle fibers of a motor unit occur at about the same time, they contribute to extracellular currents that sum to generate a field potential near the active muscle fibers.
Most muscle contractions involve the activation of many motor units.
Each fiber in most mature vertebrate muscles is innervated by a single motor neuron. But single motor neurons can innervate multiple fibers.
Innervation number: the number of muscle fibers a motor neuron controls/innervates.
E.g. One motor neuron controls 5 eye muscle fibers, but another motor neuron controls 1,800 leg muscle fibers.
Differences in innervation number determine the differences in increments of force produced by activation of different motor units in the same muscle.
Thus, innervation number also indicates the fineness of control of the muscle at low forces.
E.g. The smaller the innervation number, the finer the control.
The force exerted by a muscle depends on the number of motor units activated and the contraction speed, maximal force, and fatigability of motor units.
Twitch contraction: the mechanical response of a muscle to a single AP.
Contraction time: time it takes a muscle twitch to reach its peak force.
Tetanus: the mechanical response to a series of APs that produce overlapping twitches.
The force exerted during tetanus depends on how overlapping the twitches are and thus how they summate.
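A rough sketch of twitch summation (the twitch shape and time constant are assumed, and summation is treated as linear, which real muscle is not): when the intervals between APs are shorter than the twitch duration, successive twitches overlap and sum into a larger, fused force.

```python
# Sketch of twitch summation into tetanus, using a linear superposition of
# idealized twitches. Twitch time constant and rates are assumed values; real
# muscle summation is nonlinear, so this only illustrates the principle.
import math

def twitch(t_ms, contraction_time_ms=40.0):
    """Idealized single-twitch force profile (alpha function), peak force = 1."""
    if t_ms < 0:
        return 0.0
    x = t_ms / contraction_time_ms
    return x * math.exp(1.0 - x)

def peak_force(stimulus_rate_hz, duration_ms=500, dt=1.0):
    """Peak force from superposing twitches triggered by a regular AP train."""
    interval = 1000.0 / stimulus_rate_hz
    spike_times = [i * interval for i in range(int(duration_ms // interval) + 1)]
    forces = []
    t = 0.0
    while t <= duration_ms:
        forces.append(sum(twitch(t - s) for s in spike_times if s <= t))
        t += dt
    return max(forces)

for rate in (5, 10, 20, 50, 100):
    print(f"{rate:>3} Hz stimulation -> peak force {peak_force(rate):.2f} (single twitch = 1)")
# At low rates each twitch relaxes before the next AP, so peak force stays near 1;
# at higher rates the twitches overlap and summate into a larger, fused contraction.
```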
The functional properties of motor units vary across the population and between muscles. At one extreme, motor units have long twitch contraction times, produce small forces, and are less fatigable. At the other extreme, motor units have short contraction times, produce large forces, and are more fatigable.
The order of motor unit recruitment begins with the slow-contracting, low-force units and proceeds up to the fast-contracting, high-force units.
The range of properties exhibited by motor units is partially due to differences in the structural specialization and metabolic properties of muscle fibers.
Physical activity can change motor unit properties.
E.g. Brief but strong contractions increase motor unit force (strength), brief but rapid contractions increase motor unit discharge rate (power), and prolonged but weak contractions reduce motor unit fatigability (endurance).
However, training regimens have little effect on the composition of a muscle’s fibers.
Muscle force is controlled by the number of activated motor units and the rate at which each active motor neuron discharges.
Increasing force is implemented by activating more motor units, which are progressively recruited from weakest to strongest.
Size principle of motor neuron recruitment: contraction force is increased by recruiting the smallest motor neuron first and the largest motor neuron last.
Two consequences of the size principle
The sequence of motor neuron recruitment is determined by the properties of spinal neurons. This means that the brain can’t selectively activate specific motor units but must activate them in a specific order (from smallest to largest).
Axons from small motor neurons are thinner than large motor neurons and innervate fewer muscle fibers. Since a key determinant of motor unit force is the number of muscle fibers innervated by a motor neuron, motor units are activated in order of increasing strength.
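A minimal sketch of recruitment under the size principle (the unit forces are hypothetical): units are recruited in a fixed order from weakest to strongest until the cumulative force meets the target, so low forces are graded in fine increments and high forces bring in the large units.

```python
# Sketch of size-principle recruitment: motor units are brought in from weakest
# to strongest until the target force is met. Unit forces are hypothetical.
MOTOR_UNIT_FORCES = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0]   # newtons, sorted by size

def recruited_units(target_force):
    """Return the forces of the recruited units (smallest first) and their total."""
    recruited, total = [], 0.0
    for force in MOTOR_UNIT_FORCES:            # fixed recruitment order
        if total >= target_force:
            break
        recruited.append(force)
        total += force
    return recruited, total

for target in (0.25, 3.0, 30.0):
    units, total = recruited_units(target)
    print(f"target {target:>5} N -> {len(units)} units recruited, total ~{total:.1f} N")
# Low targets are met with only the small units, so force can be graded in fine
# increments; high targets bring in the large units with their big force increments.
```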
The order in which motor units are recruited doesn't change with contraction speed; faster contractions only compress the same recruitment sequence into a shorter time.
Input from the brain stem can adjust the gain of the motor unit pool to meet the demands of different tasks.
The nervous system must account for the structure of muscles to achieve specific movements.
Skipping over the contractile protein details of muscles (sarcomere) and the cross-bridge cycle.
Highlights
The basic functional unit for the control of movement is the motor unit, which comprises a motor neuron and the muscle fiber(s) it innervates.
The force exerted by a muscle depends on the number and properties of the motor units activated, and the rates that they discharge APs. The key motor unit properties are contraction speed, maximal force, and fatigability.
Motor unit properties vary continuously across the population that innervates each muscle, so there aren’t any distinct types of motor units.
Motor units tend to be activated in a stereotypical order that’s highly correlated with motor neuron size. The rate that motor units are recruited during a voluntary contraction increases with contraction speed.
The rate that motor units discharge APs can be controlled by descending inputs from the brain stem.
Except at low forces, variations in discharge rate have a greater influence on muscle force than does the number of activated motor units.
The variability in discharge rate of the motor unit population influences the level of fine motor control.
The nervous system must coordinate the activity of multiple muscles to engage in complex actions. Such actions are organized into a few sets that exhibit a stereotypical pattern of activation, but it isn’t known why particular patterns are preferred.
Chapter 32: Sensory-Motor Integration in the Spinal Cord
The sensory-motor integration that makes the ongoing regulation of movement possible takes place at many levels of the nervous system, but the spinal cord has a special role because sensory input and motor output are in close proximity there.
Reflex pathways in the spinal cord produce coordinated patterns of muscle contraction.
The simplest and most studied spinal reflex is the stretch reflex, a reflex muscle contraction caused by the lengthening of the muscle.
Initially, it was thought that the reflex was intrinsic to muscles, but cutting either the dorsal or ventral root of the spinal cord abolishes the reflex, showing that it depends on a circuit through the spinal cord.
During a stretch reflex, the antagonist muscles are inhibited to prevent movements that would resist the reflex.
Muscle spindle: sensory receptors in the fleshy part of a muscle that signal changes in muscle length.
Muscle spindles are used by the CNS to sense relative positions of the body segments.
We now know that a major component of the neural control system for walking is a set of intrinsic spinal circuits that don’t require sensory stimuli and thus aren’t a reflex.
Sensory feedback helps to shape voluntary motor commands through spinal reflex networks.
Reciprocal innervation is useful not only in stretch reflexes but also in voluntary movements as relaxation of the antagonist muscle enhances speed and efficiency because muscles don’t work against each other.
This organizational feature simplifies the control of voluntary movements, because higher centers don’t have to send separate commands to the opposing muscles.
It’s sometimes desired to contract both muscle and antagonist at the same time to stiffen the joint. Stiffening the joint is useful because it increases precision and joint stability.
E.g. Co-contraction of flexor and extensor muscles of the elbow right before catching a ball.
State-dependent reflex reversal: when a sensory fiber has either an inhibitory or an excitatory effect depending on the state.
This phenomenon shows how transmission in a spinal circuit is regulated by descending motor commands to meet the changing requirements during movement.
Descending inputs modulate sensory input to the spinal cord by changing the synaptic efficiency of primary sensory fibers.
Inhibition provides a mechanism where the nervous system can reduce sensory feedback predicted by the motor command while allowing unexpected feedback to access the spinal motor circuit and the rest of the nervous system.
E.g. Inhibitory neurons generally increase in activity during movements that are highly predictable such as walking and running.
In cats and most other vertebrates, the corticospinal tract has no direct connections to spinal motor neurons and all descending commands have to be channeled through spinal interneurons that are also part of reflex pathways.
Humans and old world monkeys are the only species in which corticospinal neurons make direct connections with spinal motor neurons in the ventral horn of the spinal cord.
Even in these species, a considerable fraction of the descending connections terminate in the intermediate nucleus on spinal interneurons.
A considerable part of each descending command for movement has to be conveyed through spinal interneurons, and integrated with sensory activity before reaching motor neurons.
Neurons in spinal reflex pathways are activated prior to movement.
E.g. In human subjects where muscle contraction was prevented by lidocaine, the voluntary effort to contract the muscle still changed the transmission in reflex pathways as if the movement had actually taken place.
Stretch reflexes are routinely used in clinical examination of patients because changes in the strength of the reflex indicates damage to the nervous system.
Absent or weak stretch reflexes indicate damage to the PNS or CNS, while strong/hyperactive stretch reflexes always indicate that the lesion is in the CNS.
Highlights
Reflexes are coordinated, involuntary motor responses initiated by a stimulus applied to peripheral receptors.
Many groups of interneurons in spinal reflex pathways are also involved in producing complex movements.
Reflexes are smoothly integrated into centrally generated motor commands because of the convergence of sensory signals onto spinal and supraspinal interneuronal systems involved in movement.
Injury or disease of the CNS often results in significant alterations in the strength of spinal reflexes which aids diagnosis.
Chapter 33: Locomotion
While the basic locomotor-generating circuits have been conserved, the evolution of limbs and the complex patterns of behavior have resulted in the development of progressively more complex spinal and supraspinal circuits.
The cerebral cortex contributes primarily to the planning and execution of locomotion, whereas the basal ganglia and the cerebellum contribute to the selection of locomotor activity and to its coordination.
Locomotion: the act of moving.
Locomotion requires the production of a precise and coordinated pattern of muscle activity.
The unit of measure of locomotion in limbed vertebrates is the step cycle.
Step cycle: the time between two successive occurrences of the same stepping event, e.g., successive contacts of the same foot with the ground.
Phases of the step cycle
Swing phase: when the foot is off the ground and transferred forward.
Stance phase: when the foot contacts the ground and propels the body forward.
Each of these phases can be further divided into a period of flexion (F) followed by an initial period of extension (E1) during swing and two additional periods of extension (E2 and E3) during stance.
Activity in most extensor muscles begins before the foot contacts the ground. This preparatory prestance phase signifies that the extensor muscle activity is centrally programmed and not simply the result of afferent feedback from contact of the foot with the ground.
Interlimb coordination: the precise coupling between different limbs.
The appropriate generation of intra- and inter-limb coordination of activity and the adaptation of these patterns of activity according to context is one of the major functions of the CNS during locomotion.
The motor pattern of stepping is organized at the spinal level.
While the entire nervous system is needed to produce the full richness of locomotor behavior, the spinal cord alone is sufficient to generate both the rhythm underlying locomotion and the specific pattern of muscle activity required for limb coordination.
Central pattern generator (CPG): a group of neurons that can generate both the rhythm and pattern of locomotion independent of sensory inputs.
Experiments on a variety of species suggest that there are separate CPGs for each limb.
E.g. On split-belt treadmills, the left and right limbs can walk in independent step cycles.
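To build intuition for how a CPG can generate alternating flexor/extensor activity without any sensory input, the sketch below simulates a classic "half-center" arrangement: two rate units that inhibit each other and slowly adapt, so activity switches back and forth. This is a minimal illustration only; the parameters are invented for demonstration and aren't taken from the chapter or fitted to any real spinal circuit.

```python
import numpy as np

def half_center_oscillator(T=4.0, dt=0.001):
    """Two mutually inhibiting 'half-centers' with slow adaptation.

    Returns an (n, 2) array of firing rates; the two columns burst in
    alternation, standing in for flexor- and extensor-related CPG output.
    All parameters are hypothetical and chosen only to produce a rhythm.
    """
    n = int(T / dt)
    r = np.zeros((n, 2))             # firing rates of the two half-centers
    a = np.zeros(2)                  # slow adaptation ("fatigue") variables
    r[0] = [0.6, 0.1]                # small asymmetry so one side wins first
    tau_r, tau_a = 0.05, 0.5         # fast rate dynamics, slow adaptation
    drive, w_inh, w_adapt = 1.0, 2.0, 1.5
    for t in range(1, n):
        # Tonic drive minus inhibition from the other unit minus adaptation.
        inp = drive - w_inh * r[t - 1][::-1] - w_adapt * a
        target = np.clip(inp, 0.0, None)                 # rectified steady-state rate
        r[t] = r[t - 1] + dt / tau_r * (target - r[t - 1])
        a = a + dt / tau_a * (r[t - 1] - a)
    return r

rates = half_center_oscillator()     # columns alternate like flexor/extensor bursts
```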
Even though CPGs can produce the precise timing and phasing needed to walk, they’re also modulated by sensory signals from the moving limbs.
Specifically, proprioceptive and tactile information modulate CPG activity.
Proprioception regulates the timing and amplitude of stepping, while tactile information is used to detect unexpected obstacles.
Interestingly, a stimulus applied during the swing phase causes rapid movement away from the stimulus, but if it's applied during the stance phase, it produces the opposite response, causing movement toward the stimulus. This reversal is appropriate because during stance the limb is supporting the body, and withdrawing it would cause the animal to collapse.
This is an example of phase-dependent reflex reversal as the same stimulus can excite one group of motor neurons during one phase of locomotion while activating the antagonist motor neurons during another phase.
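A crude way to picture phase-dependent reflex reversal is as a gate that routes the same cutaneous input to different motor neuron pools depending on the CPG's current phase. The function below is purely illustrative; the labels are assumptions for this sketch, not circuit details from the text.

```python
def response_to_paw_stimulus(phase: str) -> str:
    """Route the same tactile stimulus to opposite responses by locomotor phase.

    Illustrative gating only; the real circuit uses interneurons whose
    excitability is set by the CPG's phase, not a lookup like this.
    """
    if phase == "swing":
        return "excite flexors: lift the limb away from the obstacle"
    if phase == "stance":
        return "excite extensors: keep the supporting limb loaded"
    raise ValueError("phase must be 'swing' or 'stance'")
```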
Although the basic motor patterns for locomotion are generated in the spinal cord, the initiation, selection, and planning of locomotion require activation of supraspinal structures such as the brain stem, basal ganglia, cerebellum, and cerebral cortex.
The locomotor networks in the spinal cord require a command or start signal from supraspinal regions to initiate and maintain their activity.
The major neuronal structure involved in movement initiation is a region in the midbrain called the mesencephalic locomotor region (MLR).
Tonic electrical stimulation of the MLR results in animals standing up and walking.
Two structures of the MLR
Cuneiform nucleus (CNF)
Pedunculopontine nucleus (PPN)
Electrical stimulation of the two structures hasn't resolved which nucleus is responsible for initiating locomotion and controlling its speed.
The MLR is composed of two regions that act together to select context-dependent locomotor behavior.
Midbrain nuclei that initiate locomotion project to brain stem neurons.
The excitatory signals from CNF and PPN are relayed indirectly to the spinal cord through neurons in the brain stem reticular formation.
The mechanisms by which the final command signals from the brain stem to the spinal cord activate the spinal locomotor networks, maintain their activity, and allow the expression of different gaits are unknown.
Brain stem nuclei also regulate posture during locomotion.
E.g. Tonic electrical or chemical stimulation of the pons and medulla modulates the level of muscle tone in the limbs and can either facilitate or suppress locomotion depending on the stimulated site.
In mammals, lesions of the motor cortex don’t prevent animals from walking on a smooth floor, but they severely impair precision locomotion, which requires a high degree of visuomotor coordination.
E.g. Walking on the rungs of a horizontal ladder, stepping over barriers, and stepping over objects on a treadmill.
Stimulation experiments suggest that in mammals, the corticospinal tract has privileged access to the rhythm generator of the CPG.
Planning of locomotion involves the posterior parietal cortex (PPC), as suggested by lesion studies in which cats with PPC damage hit obstacles more often.
The cerebellum regulates the timing and intensity of descending signals; damage to it results in marked abnormalities of locomotor movements such as ataxia.
Ataxia: a lack of muscle control or coordination of voluntary movements.
One major function of the cerebellum is to correct movement by comparing the motor signals sent to the spinal cord with the movement actually produced by that motor command.
Specifically, neurons in the dorsal spinocerebellar tract (DSCT) are strongly activated by proprioceptors and thus provide information about the mechanical state of the limb.
In contrast, neurons in the ventral tract (VSCT) are activated by interneurons in CPGs and thus provide information about the state of the spinal locomotor network.
The cerebellum integrates three information streams: the motor command (efference copy), the movement (afference copy from DSCT), and the state of spinal networks (spinal efference copy from VSCT).
The cerebellum projects to the motor cortex and various brain stem nuclei, where its output modulates the descending signals to the spinal cord to correct for any motor errors.
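One way to make this comparator idea concrete is a toy function that predicts the sensory consequences of the efference copy, compares the prediction with DSCT proprioceptive feedback, and returns a correction for the descending pathways. The forward model, the gain, and all variable names here are assumptions for illustration, not the actual cerebellar computation.

```python
def cerebellar_correction(motor_command, dsct_feedback, vsct_copy,
                          forward_model, gain=0.5):
    """Toy comparator over the three streams described above.

    `forward_model` predicts the expected limb state from the efference copy
    and the spinal-network state (VSCT); its mismatch with DSCT proprioceptive
    feedback is treated as a motor error and scaled into a correction for the
    descending pathways. The model and gain are illustrative assumptions.
    """
    predicted_state = forward_model(motor_command, vsct_copy)
    error = dsct_feedback - predicted_state      # sensory prediction error
    return gain * error                          # correction added to descending signals

# Usage with a made-up linear forward model:
forward = lambda command, spinal_state: 0.8 * command + 0.2 * spinal_state
correction = cerebellar_correction(motor_command=1.0, dsct_feedback=0.7,
                                   vsct_copy=0.9, forward_model=forward)
```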
Experiments also show that the cerebellum plays an important role in the adaptation of gait.
E.g. Patients with cerebellar damage don't adapt their gait in split-belt treadmill experiments.
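Gait adaptation of this kind is often summarized with a simple trial-by-trial, error-driven update. The sketch below uses that standard state-space form (not a model from the chapter) and caricatures cerebellar damage as a reduced learning rate; the variable names and parameter values are assumptions.

```python
def simulate_adaptation(trials=100, perturbation=1.0, retention=0.99, learning_rate=0.1):
    """Single-state, error-driven adaptation of step symmetry on a split belt.

    `perturbation` stands for the imposed belt-speed asymmetry and `x` for the
    learned compensation; the error shrinks trial by trial. A generic
    state-space sketch, not the chapter's model.
    """
    x, errors = 0.0, []
    for _ in range(trials):
        error = perturbation - x          # asymmetry still uncorrected on this trial
        errors.append(error)
        x = retention * x + learning_rate * error
    return errors

healthy = simulate_adaptation(learning_rate=0.1)     # error decays across trials
impaired = simulate_adaptation(learning_rate=0.01)   # little adaptation, as after cerebellar damage
```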
The basal ganglia modify cortical and brain stem circuits, and their importance is clearly demonstrated by the deficits in locomotion seen in patients with Parkinson's disease.
Parkinson’s disease disrupts the normal functioning of the basal ganglia due to degradation of their dopamine inputs from the substantia nigra.
Symptoms of Parkinson's disease include a shuffling gait and problems both with balance during locomotion and with anticipatory postural adjustments.
These symptoms suggest that the basal ganglia contribute to the initiation, regulation, and modification of gait patterns.
The neuronal control of locomotion in humans is similar to that in quadrupeds, and evidence suggests that all of the major principles governing the origin and regulation of walking in quadrupeds also apply to humans.
Although the issue of whether CPGs exist in humans remains debated, several observations are compatible with the view that CPGs are important for human locomotion.
E.g. Patients with spinal cord injury show locomotor deficits similar to those observed in cats with spinal lesions.
Parallels between human and quadrupedal walking have also been found in patients undergoing locomotor training after spinal cord injury.
Evidence supporting the existence of spinal CPGs in humans also comes from studies in human infants who make rhythmic stepping movements immediately after birth if held upright and moved over a horizontal surface.
Infants who lack cerebral hemispheres (anencephaly) still show stepping, suggesting that the circuits must be located at or below the brain stem.
This strongly suggests that some of the basic neuronal circuits for locomotion are innate and present at birth when descending control systems aren’t well developed.
Deficits after damage to the motor cortex are much more severe in humans than in cats or even nonhuman primates, suggesting that the motor cortex plays a more important role in locomotion in humans than in other mammals.
Highlights
Locomotion is a highly conserved behavior that’s essential for the survival of species.
The complex nervous system of mammals, and its organization into the different neural pathways that generate and regulate locomotion, has been explored in significant detail.
The spinal cord, in isolation from descending and rhythmical peripheral afferent inputs, can generate a complex locomotor pattern. The circuits responsible for producing this activity are called central pattern generators (CPGs).
Activity in spinal circuits can be modified by experience.
Ionic membrane properties in interneurons and motor neurons contribute to rhythm and pattern generation.
Information from proprioceptive sensors is used to stabilize phase transitions between stance and swing, while information from exteroceptors is used to modify limb activity in response to unexpected perturbations.
Circuits involved in initiating locomotion, controlling speed of locomotion, and selecting gaits are localized in the midbrain, specifically the pedunculopontine and cuneiform nuclei.
The three main structures in the brain stem (pontomedullary reticular formation, lateral vestibular nucleus, and red nucleus) all contribute to the control of posture and interlimb coordination.
The motor cortex provides precise control of muscle activity patterns to allow animals to make visually guided anticipatory adjustments of their gait.
The posterior parietal cortex (PPC) is part of a network that contributes to the advanced planning of gait based on visual information. PPC neurons estimate the relative location of objects with respect to the body and retain information in working memory to facilitate coordination of the limbs.
The cerebellum and basal ganglia are used to correct motor errors and to select the appropriate patterns of motor activity.
Chapter 34: Voluntary Movement: Motor Cortices
Understanding how purposeful actions are achieved is one of the greatest challenges in neuroscience.
In contrast to reflexes that are automatically triggered by incoming sensory stimuli, voluntary movements are purposeful, intentional, context-dependent, and are typically accompanied by a sense of “ownership”.
That is, the actions have been willfully caused by the individual and are often made without an external trigger stimulus.
The world presents many challenging contexts for action so voluntary action involves choices between alternatives, including the choice not to act.
Volitional self-control over how, when, and whether to act endows primate voluntary behavior with much of its richness and flexibility, and prevents behavior from becoming impulsive, compulsive, or even harmful.
Voluntary movement is the physical manifestation of an intention to act on the environment to achieve a goal.
Since large areas of the cerebral cortex are implicated in various aspects of voluntary motor control, the study of cortical control of voluntary movement provides important insights into the functioning of the cerebral cortex as a whole.
The brain must transform a goal into motor commands that realize that goal.
Just as the activity of neurons in primary sensory areas appears to encode specific physical properties of stimuli, the sensorimotor transformation model assumes that the activity of neurons in the motor system explicitly encodes or represents specific properties and parameters of the intended movement.
However, the sensorimotor transformation model has important limitations such as the parameter and coordinate systems being imported from physics and engineering, rather than being derived from physiological properties of biological sensors and effectors.
Another limitation is that the model places all emphasis on strictly serial feedforward computations and doesn’t capture the complexity of feedback as seen in the brain.
Lastly, the model hasn’t addressed how the proposed sensorimotor transformations could be implemented by neurons.
In recent years, theoretical studies of the motor system have been moving away from strictly representational models to more dynamical causal models.
One example of a dynamical causal model is optimal feedback control.
Three parts of optimal feedback control
State estimation: involves forward internal models (efference copies) and external sensory feedback to provide the best estimate of the present state of the body and the environment.
Task selection: involves the brain choosing a behavioral goal in the current context and what motor action(s) may best attain that goal.
Control policy: provides the set of rules and computations that establish how to generate the motor commands to attain the behavioral goal given the present state of the body and the environment.
The control policy isn’t a series of pure feedforward computations to calculate every instantaneous detail of a desired movement. Instead, it involves context- and time-dependent adjustments to feedback circuits.
Skipping over the details on primary motor cortex (M1), dorsal premotor cortex (PMd), predorsal premotor cortex (pre-PMd), ventral premotor cortex (PMv), supplementary motor area proper (SMA), presupplementary motor area (pre-SMA), and intraparietal sulcus (IPS).