
How the Mind Works (Incomplete)

By Steven Pinker

NOTE: This is an incomplete set of notes.

Preface

  • Our ignorance can be divided into problems and mysteries.
    • For problems, we may not know their solutions, but we have an insight and an inkling of what we’re looking for.
    • For mysteries, we can only stare in wonder and bewilderment.

Chapter 1: Standard Equipment

  • We don’t have humanlike robots because the engineering problems we solve as we walk and plan are far more challenging than landing on the moon.
  • Nature, once again, has found ingenious solutions that human engineers can’t yet duplicate.
  • The faculty with which we ponder the world has no ability to peer inside itself or our other faculties to see what makes them tick.
  • Our minds aren’t animated by some godly vapor or single wonder principle.
  • The mind, like a spacecraft, is designed to solve many engineering problems, with each system specialized to overcome its own obstacles.
  • Review of visual object recognition.
  • Our conscious sensation of objects matches the color and lightness of the world as it is, rather than the world as it presents itself to the eye.
  • E.g. Coal is black in sunlight and a snowball is white indoors.
  • The harmony between how the world looks and how the world is must be an achievement of our neural wizardry.
  • The next problem with seeing is depth. Our eyes squash the 3D world into a pair of 2D retinal images, and our brains must reconstruct the third dimension.
  • An intelligent system can’t be stuffed with trillions of facts, as there are too many.
  • Instead, it must be equipped with a smaller list of core truths and a set of rules to deduce their implications. But only relevant implications matter.
  • Isaac Asimov insightfully noticed that self-preservation, that universal biological imperative, doesn’t automatically emerge in a complex system.
  • Every human behaviour we take for granted is a challenging engineering problem.
  • Sight, action, common sense, love, and every human faculty are no accident, no inevitability.
  • Each is a tour de force wrought by a high level of targeted design.
  • Hidden behind the panels of consciousness must lie fantastically complex machinery.
  • Any explanation of how the mind works that alludes to some single master force begins to sound hollow.
  • E.g. Culture, learning, self-organization.
  • The robot challenge, of creating robots like us, hints at a mind loaded with complexity.
  • When the visual areas of the brain are damaged, the visual world isn’t simply blurred or riddled with holes.
  • Instead, select aspects of visual experience are removed while others are left intact.
  • E.g. Hemispatial neglect, cortical color blindness, inability to see movement, inability to recognize objects, prosopagnosia.
  • When we gaze at the world, we don’t think about the many layers of apparatus that underlie our unified visual experience until neurological disease dissects them for us.
  • Review of twin studies to better understand the role of genetics in the brain.
  • By showing how many ways the mind can vary in its innate structure, the discoveries open our eyes to how much structure the mind must have.
  • The complex structure of the mind is the subject of this book.
  • The key idea is that the mind is a system of organs of computation, designed by natural selection to solve the kinds of problems that our ancestors faced in their foraging way of life, such as understanding and outmaneuvering objects, animals, plants, and other people.
  • The mind is what the brain does, and the brain processes information.
  • The mind is organized into modules or mental organs, and each is specialized in one arena of interaction with the world.
  • Each module was and is shaped by natural selection.
  • On this view, psychology and neuroscience are reverse-engineering; we’re trying to figure out what a machine was designed to do.
  • We all engage in reverse-engineering when we face an interesting new gadget.
  • E.g. A tool with a metal screw is used for removing wine bottle corks.
  • The strategy of reverse-engineering the body has continued in the last half of this century as we’ve explored the molecular level of cells and life.
  • The stuff of life turned out not to be a quivering, glowing, magical substance, but a contraption of tiny springs, hinges, rods, magnets, zippers, gates, and trap doors.
  • Evolution can be used to explain not just the complexity of an animal’s body but the complexity of its mind.
  • How could the forces that shaped that system and the purposes for which it was designed be captured in our understanding of the mind?
  • Evolutionary thinking is indispensable in the form of careful reverse-engineering.
  • Psychology helps us to understand the mind, while evolution helps us understand why we have the mind that we do.
  • Thinking is computation, the author claims, but that doesn’t mean that the computer is a good metaphor for the mind.
  • The mind isn’t the brain but what the brain does, and not even everything it does; metabolizing fat and producing heat don’t count.
  • The brain’s special status comes from a special thing the brain does, which is information processing or computation.
  • Information and computation are independent of the physical medium that carries them.
  • E.g. A program can run on vacuum tubes, transistors, quantum particles, or even a group of people with flags.
  • This insight is now called the computational theory of mind.
  • Neuroscientists like to point out that all parts of the cerebral cortex look the same, not only in different parts of the human brain but also across animal brains.
  • However, this claim is based on a superficial observation because we simply lack the ability to look at patches of the brain and read out the logic in the intricate pattern of connectivity that makes each part separate.
  • In the same way, all books look superficially the same, being different combinations of the same 75 or so characters, and all movies are just different patterns of light.
  • The content and meaning of a book or movie lies in the pattern of ink marks or light, and is only apparent when the piece is read or seen.
  • Similarly, the content of brain activity lies in the pattern of connections and activity among neurons.
  • Minute differences in the details of the connections may cause similar-looking brain patches to implement very different programs.
  • The arrangement of neurons is what matters.
  • What microcircuits can do depends only on what they’re made of and how they’re connected.
  • E.g. Circuits made from neurons can’t do the exact same things as circuits made from silicon.
  • These differences ripple up through the programs built from the circuits and affect their performance.
  • The organ systems of the body do their jobs because each is built with a particular structure tailored to the task.
  • E.g. The heart pumps blood and the lung exchanges gases.
  • This specialization goes all the way down from organ to tissue to cells.
  • The mind has to be built out of specialized parts because it has to solve specialized problems.
  • The term “module” here doesn’t refer to detachable, snap-in components; instead, it refers to a system distributed over the brain.
  • What we know so far is that brain modules assume their identity by a combination of what kind of tissue they start out as, where they are in the brain, and what patterns of triggering input they get during critical periods of development.
  • The human mind is a product of evolution, so our mental organs are either present in the minds of apes or evolved from the minds of apes.
  • Behaviour itself didn’t evolve; what evolved was the mind.
  • To reverse-engineer the mind, we must sort out its many subgoals and identify the ultimate goal of its design.
  • The logic of natural selection gives the answer: the ultimate goal of the mind is to maximize the number of copies of the genes that created it.
  • The goal is the long-term stability of our replicator genes.

Chapter 2: Thinking Machines

  • The two deepest questions about the mind are
    • What makes intelligence possible?
    • What makes consciousness possible?
  • We may have trouble defining intelligence, but we recognize it when we see it.
  • To make rational decisions means to base the decision on some grounds of truth: correspondence to reality or soundness of inference.
  • Without a specification of goals, the very idea of intelligence is meaningless.
  • Intelligence, then, is the ability to attain goals in the face of obstacles by means of decisions based on rational rules.
  • Under a microscope, the brain has a breathtaking complexity of physical structure fully commensurate with the richness of the mind.
  • Intelligence doesn’t come from a special kind of spirit or matter or energy but from a different commodity: information.
  • Information is nothing special; it’s found wherever causes leave effects.
  • What is special is information processing.
  • The intelligence of a system emerges from the activities of the not-so-intelligent components within it.
  • How do symbols acquire meaning?
  • One answer is that a symbol is connected to its referent in the world by our sense organs.
  • Another answer is that the unique pattern of symbol manipulations triggered by the first symbol mirrors the unique pattern of relationships between the referent of the first symbol and the referents of the triggered symbols.
  • These are called the causal and inferential-role theories.
  • Both can be considered and coexist as explanations of meaning.
  • The proper label for the study of the mind informed by computers isn’t AI but natural computation.
  • The form of a representation determines what can easily be inferred from it.
  • The way people generalize is perhaps the most telltale sign that the mind uses mental representations.
  • The combinatorics of representations explains the inexhaustible repertoire of human thought and action.
  • Only a few elements and a few rules that combine them can generate an unfathomably vast number of different representations (see the sketch below).
  • E.g. Language and music.
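
A minimal Python sketch of this combinatorial explosion (the toy vocabulary and rule are invented for illustration):

```python
from itertools import product

# Toy grammar: Sentence -> Subject Verb Object.
subjects = ["the dog", "the man", "the baby"]
verbs = ["bites", "sees", "chases"]
objects = ["the dog", "the man", "the baby"]

# One rule and a handful of words already yield 3 * 3 * 3 = 27 sentences.
sentences = [" ".join(parts) for parts in product(subjects, verbs, objects)]
print(len(sentences))  # 27
print(sentences[0])    # "the dog bites the dog"

# Allowing sentences to embed inside other sentences ("the man sees that
# the dog bites the baby") multiplies the count again at every level of
# nesting, so the repertoire grows without practical bound.
```
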
  • Studies suggest that the human brain uses at least four major formats of representation.
  • E.g. Visual image, phonological representation, grammatical representation, and mentalese.
  • Mentalese: the medium in which content or gist is captured.
  • The modular design of computers and minds is a special case of modular, hierarchical design in all complex systems.
  • Complex systems are hierarchies of modules because only elements that hang together in modules can remain stable enough to be assembled into larger and larger modules.
  • Review of Searle’s Chinese Room thought experiment and “The Emperor’s New Mind” by Penrose as arguments against the computational theory of mind.
  • Review of McCulloch and Pitts work.
  • Review of auto-associators and their benefits, perceptrons, and the XOR problem.
  • Review of the backpropagation algorithm and connectionism (a toy network that learns XOR via backpropagation is sketched below).
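
A hedged sketch in Python/NumPy (the architecture, learning rate, and iteration count are arbitrary choices for illustration, not anything from the book): a single-layer perceptron provably cannot represent XOR, but a two-layer network trained with backpropagation can learn it.

```python
import numpy as np

# XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error derivative layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```
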
  • The author favors the opposing view that symbol manipulation underlies human language and the parts of reasoning that interact with it.
  • Problems with connectionism
    • Neural networks can’t entertain the concept of an individual.
      • E.g. We can represent vegetableness or horsehood, but not a particular vegetable or a particular horse.
      • It’s easy to confuse the relationship between a class and a subclass with the relationship between a subclass and an individual.
      • E.g. Animal and horse versus two individual horses.
      • This can lure modelers into treating an individual as a very, very specific subclass.
      • E.g. Using a slight difference to distinguish between entities.
      • Your knowledge of the properties of two objects can be identical and still you know that they’re distinct.
      • There is, admittedly, one feature that always distinguishes individuals: they can’t be in the same place at the same time.
    • Compositionality: the ability to represent parts and their meanings, along with the way those parts are combined into a meaningful whole.
      • This problem is crucial to understanding language.
      • The whole is different from the sum of its parts.
      • E.g. The dog bites the man versus the man bites the dog.
      • A hundred trillion sentence meanings can’t be squeezed into a brain with tens of billions of neurons if each meaning has its own neuron.
      • Instead, the brain uses combinatorics to deal with a combinatorics problem.
      • This tells us that thoughts are assembled out of concepts and not stored whole (see the sketch after this list).
    • Variable binding or quantification.
    • Catastrophic forgetting
      • A network’s ability to generalize comes from its dense interconnectivity and its superposition of inputs.
      • Often, different chunks of information should be packaged separately, not blended together.
      • This may be why we have different and specialized memory systems, to prevent mixing and ambiguity.
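
Returning to the compositionality problem above: a minimal sketch of why structure matters (the role-labeled dictionaries are an invented representation, not a claim about neural coding):

```python
# A bag of features loses who did what to whom.
bag1 = {"dog", "bites", "man"}
bag2 = {"man", "bites", "dog"}
print(bag1 == bag2)  # True: the two thoughts collapse into one

# A compositional representation binds the same parts to roles.
prop1 = {"agent": "dog", "action": "bites", "patient": "man"}
prop2 = {"agent": "man", "action": "bites", "patient": "dog"}
print(prop1 == prop2)  # False: the whole differs from the sum of its parts
```
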
  • People think in two modes
    • We can form fuzzy stereotypes by soaking up correlations among properties.
    • We can also form systems of rules that define categories in terms of the rules that apply to them.
  • Rule systems allow us to rise above mere similarities and reach conclusions based on logic.
  • Categories and rules are, in a sense, digital, and this gives their representations stability and precision.
  • E.g. If you make a chain of analog copies, the quality declines with each copy. If you make a chain of digital copies, every copy is the same quality.
  • Similarly, crisp symbolic representations allow for chains of reasoning in which symbols are copied in successive thoughts (simulated below).
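
A toy simulation of the copying-chain contrast (the signal and noise level are arbitrary):

```python
import random
random.seed(0)

def analog_chain(signal, copies, noise=0.05):
    """Each analog copy adds a little noise, so errors accumulate."""
    for _ in range(copies):
        signal = [s + random.gauss(0, noise) for s in signal]
    return signal

def digital_chain(signal, copies, noise=0.05):
    """Each digital copy suffers the same noise but then snaps every
    value back to 0 or 1, so errors never accumulate."""
    for _ in range(copies):
        signal = [round(s + random.gauss(0, noise)) for s in signal]
    return signal

original = [1, 0, 1, 1, 0]
print(analog_chain(original, 100))   # drifts far from the original
print(digital_chain(original, 100))  # still exactly [1, 0, 1, 1, 0]
```
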
  • Connectionist networks, like ANNs, typically require large amounts of data and training time because they don’t cut to the solution by means of rules, but need to have most of the examples pounded into them and to interpolate between the examples.
  • The author’s main intent isn’t to show what certain models can’t do, but what the mind can do.
  • Thoughts and thinking are no longer ghostly enigmas, but mechanical processes that can be studied.
  • What about consciousness?
  • The computational theory of mind offers no clear answer.
  • Three meanings of consciousness
    • Self-knowledge: information about the being itself.
      • E.g. You can say in your head “Hey, I’m reading this sentence.”
      • Thus, consciousness is defined as building an internal model of the world that contains the self.
      • This has nothing to do with consciousness as it’s commonly understood: being alive, awake, and aware.
      • Self-knowledge, including the ability to use a mirror, is no more mysterious than any other topic like perception or memory.
    • Access to information: how we can report some information and not others.
      • E.g. We can say what we like and dislike but not what happens in the brain and body like the enzymes secreted by our stomach or how we turn the 2D image on our retina into a 3D object.
      • This shows that information and its processing fall into one of two pools.
      • One pool can be accessed by the systems underlying verbal reports, rational thought, and deliberate decision making.
      • The other pool can’t be accessed by those systems.
      • The two pools talk to each other, as when deliberately learned motor skills become automatic over time.
      • This is the distinction between the conscious and unconscious mind.
      • By analogy, a computer can know that the printer has an error without knowing why the printer has an error.
    • Sentience: subjective, first-person awareness.
      • What it is like to be.
      • It’s this sense, sentience, that makes consciousness seem like a miracle.
  • The rest of this chapter is about consciousness in the last two senses.
  • Someday, we will have a good understanding of what in the brain is responsible for consciousness in the sense of access to information.
  • When we’re aware of a piece of information, many parts of the mind can act on it.
  • In this way, short-term memory is access-consciousness.
  • Instead of storing all possible combinations of chess pieces, sentences, and actions, we process only a subset of information at a time and calculate an answer just when it’s needed (illustrated below).
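
In programming terms, this is lazy, on-demand evaluation; a minimal Python sketch (the chess framing is only for illustration):

```python
def knight_moves(square):
    """Yield a knight's legal moves on demand instead of storing a
    table of every piece/square combination in advance."""
    col, row = square
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    for dc, dr in deltas:
        c, r = col + dc, row + dr
        if 0 <= c < 8 and 0 <= r < 8:
            yield (c, r)

moves = knight_moves((0, 0))  # nothing computed yet
print(next(moves))            # first legal move, computed just in time
print(list(moves))            # the rest, only because we asked
```
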
  • Not only is space a constraint, but time is too.
  • Life is a series of deadlines.
  • Perception and behaviour take place in real time, and since computation itself takes time, information processing is part of the problem as well as the solution.
  • Resources are a third constraint as information processing requires energy.
  • Only relevant information is processed and stored; information that’s relevant only some of the time is stored until it’s needed.
  • This design specification explains why access-consciousness exists in the human mind and allows us to understand some of its details.
  • Four features of access-consciousness
    • We’re aware of a rich field of sensations.
    • Parts of this field can fall under the spotlight of attention.
    • Sensations and thoughts come with emotions.
    • The “I” appears to make choices and pull the levers of behavior.
  • Access-consciousness seems to tap into the intermediate levels of information processing.
  • The lower levels aren’t needed and the higher levels aren’t enough.
  • The next important feature of conscious access is attention.
  • Review of Anne Treisman’s visual search experiments with red and green Xs and Os.
  • Why is visual computation divided into an unconscious parallel stage and a conscious serial stage?
  • Parallel unconscious computation stops after it labels each location with a feature such as color, contour, depth, and motion.
  • The combinations then have to be computed consciously at each location.
  • If the conscious processor is focused at one location, the features at other locations should float around unglued.
  • This is exactly what experiments find (a toy contrast between feature and conjunction search is sketched below).
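
A loose Python caricature of the contrast (not Treisman’s actual task or data): a conjunction of color and shape forces an item-by-item scan whose cost grows with display size.

```python
import random
random.seed(1)

def display_with_target(n):
    """n distractors that each share one feature with the target
    (red Xs and green Os), plus a single green-X target."""
    items = [random.choice([("red", "X"), ("green", "O")]) for _ in range(n)]
    items.append(("green", "X"))
    random.shuffle(items)
    return items

def serial_conjunction_search(display, target=("green", "X")):
    """Binding color to shape must be checked location by location."""
    for steps, item in enumerate(display, start=1):
        if item == target:
            return steps

# A target defined by a unique single feature (say, the only X among Os)
# could be detected in one parallel pass over the whole display, but the
# conjunction target takes more checks, on average, the larger the display:
for n in (10, 50, 200):
    print(n, serial_conjunction_search(display_with_target(n)))
```
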
  • In an optimal information-retrieval system, an item should only be recovered when its relevance outweighs the cost of retrieving it.
  • This means it should be biased to fetch frequently and recently encountered items, which is what human memory does (a toy scoring rule is sketched below).
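
A toy scoring rule in that spirit (the power-law decay is an assumption borrowed from rational models of memory, not from the book):

```python
def retrieval_score(encounter_times, now, decay=0.5):
    """Items encountered often and recently score higher; each past
    encounter's contribution decays as a power law of its age."""
    return sum((now - t) ** -decay for t in encounter_times)

history = {
    "frequent and recent": [1, 5, 9, 10],
    "frequent but old":    [1, 2, 3, 4],
    "rare but recent":     [10],
}
now, cost = 11, 1.0
for item, times in history.items():
    score = retrieval_score(times, now)
    print(item, round(score, 2), "retrieve" if score > cost else "skip")
```
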
  • Sentience and access-consciousness may be two sides of the same coin.
  • However, we know they’re different because one can be found without the other.
  • E.g. Blindsight is access-consciousness without sentience.
  • What we need is a theory of how the subjective qualities of sentience emerge out of mere information access.
  • Imagine a surgeon replaces one of your neurons with a microchip that duplicates its input-output functions. You feel and behave the same. Then they replace a second one, and a third one, and so on until more and more of your brain becomes silicon. Do you notice a difference?
  • This is the ship of Theseus thought experiment, but applied to your brain.

Chapter 3: Revenge of the Nerds

  • Review of SETI.
  • We tend to think that the goal of evolution is to produce intelligence or brains, but this isn’t true.
  • Life is a densely branching bush, not a scale or ladder.
  • Evolution doesn’t aim for intelligence or complexity; it’s about ends, not means.
  • Organisms don’t evolve toward every possible advantage because there are always tradeoffs.
  • Energy and matter devoted to one organ are taken away from other organs.
  • This chapter reviews natural selection.
  • Natural selection isn’t the only process that changes organisms over time, but it’s the only process that seemingly designs them over time.
  • Review of exaptation.
  • Why did brains evolve?
  • The answer lies in the value of information, which brains have been designed to process.
  • Information confers a benefit that’s worth paying for.
  • Every decision in life is like choosing which lottery ticket to buy.
  • To make good decisions, we must use information that informs us of the odds, benefits, and risks.
  • In animals, information is gathered and translated into profitable decisions by the nervous system.
  • Often more information is better, up to a point of diminishing returns.
  • The evolution of information processing has to be accomplished at the nuts-and-bolts level by the selection of genes that affect the brain.
  • Path integration: the integration of the velocity vector with respect to time to obtain the position vector (a minimal dead-reckoning sketch follows).
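
A minimal dead-reckoning sketch (units and the sample path are invented):

```python
def path_integrate(velocities, dt=1.0):
    """Dead reckoning: accumulate velocity * time to track position."""
    x, y = 0.0, 0.0
    for vx, vy in velocities:  # one velocity sample per time step
        x += vx * dt
        y += vy * dt
    return x, y

# An outbound foraging path of three straight legs...
velocities = [(1.0, 0.0)] * 5 + [(0.0, 1.0)] * 3 + [(-1.0, 0.0)] * 2
print(path_integrate(velocities))  # (3.0, 3.0): home lies at (-3, -3)
```
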
  • A brain is a precision instrument that allows a creature to use information to solve the problems presented by its lifestyle.
  • There’s no such thing as generic animal intelligence.
  • When a species has a noteworthy talent, it’s reflected in the gross anatomy of its brain.
  • The same applies not only to species, but also individuals.
  • Many species can’t evolve defenses fast enough, even over evolutionary time, to defend themselves against humans.
  • Why are humans unique?
    • Our visual system is built to process depth and represent the world in 3D.
    • Group living lets us share information easily but also results in a cognitive arms race between humans.
    • Bipedalism, which is energy-efficient, freed our hands to manipulate objects and tools.
    • Hunting and gathering.
  • Review of our evolutionary lineage and timeline.
  • The cliché that “cultural evolution has taken over biological evolution.”
  • Natural selection works for anything that can replicate, not just DNA.
  • E.g. Memes.
  • The spread of memes is more analogous to epidemiology than to evolution.
  • Ideas spread like contagious diseases that cause epidemics, rather than like advantageous genes that cause adaptations.

Chapter 4: The Mind’s Eye

  • Illusions unmask the assumptions that natural selection installed to allow us to solve unsolvable problems.
  • David Marr described vision as “a process that produces from images of the external world a description that’s useful to the viewer and not cluttered with irrelevant information.”
  • This chapter explores how vision turns retinal depictions into mental descriptions.
  • Pictures exploit projection, the optical law that makes perception such a hard problem.
  • Information about depth is lost in the process of projection onto the retina.
  • So to determine depth, the brain uses images from two eyes.
  • The brain uses the difference in an object’s projection to the two eyes, together with the angle formed by the two eyes’ gaze and their separation to calculate depth.
  • This mechanism is called stereo vision or stereo for short.
  • Stereo also explains how we can distinguish paintings from reality: seen with both eyes, a painting projects two similar images onto the two retinas, while a real object projects two different images.
  • The brain implements stereo vision the way we solve a crossword puzzle: it keeps its options open until matching horizontal and vertical words are found.
  • This is similar to sound localization, which relies on coincidence detection between the two ears (a simplified depth-from-disparity calculation is sketched below).
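
A simplified depth-from-disparity calculation under pinhole-camera geometry (the baseline and focal-length values are rough assumptions for illustration):

```python
def depth_from_disparity(disparity, baseline=0.065, focal_length=0.017):
    """Simplified pinhole geometry: Z = f * B / d, so depth is inversely
    proportional to the disparity between the two eyes' images.
    baseline ~ human interocular distance in meters; focal_length ~ the
    eye's focal length in meters; both are rough illustrative values."""
    return focal_length * baseline / disparity

for d in (0.001, 0.0005, 0.0001):  # retinal disparity in meters
    print(f"disparity {d:.4f} m -> depth {depth_from_disparity(d):.2f} m")
```
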
  • Vision is a sense of surfaces and boundaries.
  • Unlike the other two dimensions, which are captured by the positions of the rods and cones, depth must be painstakingly wrung out of the data.
  • A shape memory isn’t a copy of the retinal activation pattern but rather is stored in a format that differs from it in two ways.
  • First, the coordinate system is centered on the object and not on the viewer.
  • Second, the representation shouldn’t be an exact copy of the object; it should abstract away the details.
  • Review of the mental rotation experiment and how it takes longer to answer for objects rotated further.
  • A mental image is simply a pattern in the visual cortex that’s loaded from long-term memory rather than from the eyes.
  • Presumably space in the world is represented by space on the cortex because neurons are connected to their neighbours, and it’s handy for nearby bits of the world to be analyzed together.
  • Perky effect: holding a mental image interferes with seeing faint and fine visual details.
  • Mental images live in the visual cortex as do seen images.
  • The cost of reactivating visual experience is the risk of confusing imagination with reality, as is the case with dreams.
  • The form of a mental representation determines what’s easy or hard to think about.
  • E.g. Logical transitive arguments can be visualized as locations because locations are transitive.
  • Visual thinking is often driven more strongly by the conceptual knowledge we use to organize our images than by the contents of the images themselves.

Chapter 5: Good Ideas

  • This chapter is about human reasoning and how we make sense of the world.
  • Natural selection didn’t shape us to get good grades in science class; it shaped us to master the local environment.
  • In a large society with writing and science, the cost of an exponential number of tests is repaid by the benefit of the resulting laws to a large number of people.
  • Our brains were shaped for fitness, not for truth.
  • Sometimes the truth is useful, sometimes it isn’t.
  • What the mind gets out of categories is inference, not improved memory or better organization.
  • The smaller the category, the better the prediction.
  • The core idea of objecthood is that parts that move together are part of the same object.
  • Object principles
    • An object can’t pass through another object like a ghost.
    • Objects move along continuous trajectories and don’t teleport.
    • Objects are cohesive, and their parts are stuck together.
    • Objects move each other by contact only.
  • However, infants don’t quite grasp gravity and inertia, and neither do adults.
  • Agents are recognized by their ability to violate intuitive physics by starting, stopping, swerving, or speeding up without an external nudge.
  • If a piglet is raised by a cow, will it grow up to oink or moo?
  • Preschoolers correctly answer oink.
  • Artifact: an object suitable for attaining some end that a person intends to be used for attaining that end.
  • Autistic children appear to be mind-blind.
  • The gambler’s fallacy is rarely a fallacy, because it’s rare for a phenomenon to be truly random and independent of the past.

Chapter 6: Hotheads

  • I am dropping this book at chapter 6 because it’s too long and doesn’t have the information I want. A lot of the information is stuff I already know and there’s no interesting theory.

Chapter 7: Family Values

Chapter 8: The Meaning of Life