Perception Theory

Authored by: Ann Marie Barry

Handbook of Visual Communication

Print publication date:  November  2004
Online publication date:  December  2004

Print ISBN: 9780805841787
eBook ISBN: 9781410611581
Adobe ISBN: 9781135636531

10.4324/9781410611581.ch3

 

Abstract

How the brain enables the mind is the question to be answered in the twenty-first century.

Michael Gazzaniga (1998, p. xii)


Theoretical Perspectives

Perception Theory is my own term for describing the application of neurological research and accepted psychological principles to the study of visual communication. By addressing how the mind/brain receives information, processes it, derives meaning from it, and uses it, this theoretical approach adds new medical information to the study of visual communication and helps us assess the efficacy of existing theories of communication derived from social research. Ultimately, in order to be useful, all communication theory and all assumptions about the way we process images and the impact they have on us must be compatible with neurological research.

Simply stated, this perceptual approach to communication theory acknowledges the primacy of emotions in processing all communication, and particularly targets visual communication as paralleling perceptual process dependent on primary emotion-based systems of response. In light of current neurological research, for example, we can no longer assume that a person’s response to visual images will be conscious, or logical. Rather, neurological research reveals that visuals may be processed and form the basis of future action without passing through consciousness at all. Developmentally, too, we know that children and teenagers reason primarily through emotions, and are therefore highly susceptible to emotional appeals through visuals in the way they think and act. Every aspect of perception, therefore, has profound implications for all areas of communication, and none more than visual communication. Ultimately the key to understanding all visual communication lies in the neurological workings of the brain. We must therefore begin here.

The roots of this neurological approach go back a century and a half to the discovery of the connection between language and certain areas of the left cerebral hemisphere, and continue through the work of William James and Gestalt psychologists Max Wertheimer, Wolfgang Kohler, and Kurt Koffka, and the ecological optics of J. J. Gibson. The major impetus for a neurological approach to communication, however, comes from the 1960s split-brain research of the late Roger Sperry at the California Institute of Technology and the visual-processing research of David Hubel and Torsten Wiesel of Harvard Medical School, for which the trio was awarded the Nobel Prize in Physiology or Medicine in 1981. The work of these men and of the subsequent generation of researchers who have continued it into the present—researchers such as Michael Gazzaniga, Joseph LeDoux, Antonio Damasio, Semir Zeki, and Steven Pinker, among many others—is the basis of our expanding knowledge of the brain, and it is this knowledge that will drive visual communication study in the 21st century.

Because the history of the brain’s neurological evolution is also the evolving story of human communication, as we trace the path of vision through the various visual-processing areas in the brain, we recognize at once how primitive and inaccurate is the idea, suggested by Kepler, that the eye is essentially a camera, passively recording an objective external reality. We also recognize just how inaccurate was Descartes’ view of reason as the final arbiter of perception. What emerges instead from brain research is the awareness that although sight may indeed begin with light hitting the retina, vision occurs deep within the brain; and that perception, the process by which we derive meaning from what we see, is an elaborate symphony played first and foremost through the unconscious emotional system, with neural equipment evolved over millions of years. With our neural maps for the brain’s development already in place, humans today advance according to the same principles by which their ancient ancestors developed.

As we learn to read this map, we find a variety of controversial theoretical questions becoming resolved in interesting ways. These include the debate over nature versus nurture, over the relationship between language and thought, and most importantly, the primary relationship between reason and emotion, and between vision and memory. Although, for example, the debate between nature and nurture has raged since the mid-19th century when Darwin first published his Origin of Species, and neurological researchers today still argue forcefully that the key characteristics of mind are inherited (LeDoux, 2002; Pinker, 1994, 1997; Dawkins, 1996; Wilson, 1999), the neural mechanics of the brain are far from reductive or deterministic. Joseph LeDoux, for example, estimated mind and behavior to be approximately 40% “nature” and 60% “nurture,” but reminded us that “synapses are the key to the operations of both” and that these synapses are wired up in the brain by “one system that takes care of both situations” (2002, pp. 5–6). This system, prewired by evolution to detect and respond to danger, is thus built on and modified by perceptual experience.

Because visual experience is by far the most dominant learning mode, it is central to building synaptic connections in the brain. No other sensory system has been studied so completely as the visual, and no other has shown such promise in revealing the secrets of mind and therefore of behavior. Because much of our visual experience today comes vicariously through media, an understanding of how perception works is fundamental to ongoing communication research, particularly in terms of media effects. As media violence becomes the central focus of much social concern today, for example, it is important to note that much of the finger-pointing for assigning blame is beside the point: Whatever experience a child has will build the pattern of his or her future response. Family interaction, formal education, and media are all a part of the stream of influence that builds and reinforces certain brain synapses. If one influence is stronger, this will sway perception in a particular direction. If one is weaker, it will eventually go the route of all unreinforced and therefore ineffectual synapses and be reabsorbed into the system.

Because evolution is a slow process, our brains have not yet adapted to visual experience gained via media in any special way. Although biological evolution proceeds at a snail’s pace, the technological revolution has sped by us at awesome speed. For the brain’s perceptual system, visual experience in the form of the fine arts, mass media, virtual reality, or even video games is merely a new stimulus entering the same prewired circuits we have inherited as part of our brain potential and is processed in the same way. In other words, visual media is just as real to the emotional brain as any other visual experience, and it contributes just as much to the brain’s synaptic wiring. In the same way that it can be argued that “we are what we eat,” it can also be argued that how we see (and consequently how we behave) is primarily the sum of our perceptual experience. As Antonio Damasio (1994, 1999) observed, neurological research has shown us that we are not primarily thinking beings who also feel, but essentially feeling beings who also think.

Thus, neural research provides a rich heuristic for new insight into all aspects of existing visual communication theory. As the emerging picture becomes clearer of how the brain’s learning and memory systems are fed by visual experience, and how its major pathways and modules work both independently of and in concert with one another to complete the process of perception, we find limitless opportunities for new research in visual studies. This chapter lays out some of the major concepts revealed through neurological research and explores the implications of these in terms of visual communication theory.

Evolution and the Mechanics of Vision

Visual researcher R. L. Gregory and others suggested that visual perception first developed “in response to moving shadows on the surface of the skin—which would have given warning of near-by danger—to recognition patterns when eyes developed optical systems” (Gregory, 1998, p. 13). Richard Dawkins of Oxford University observed that it would not be surprising to find that all animals that have survived the process of natural selection possess some sort of “rudimentary eye,” and he and others speculated that the eye most probably began as a patch of light-sensitive pigment that cued the animal to whether it was day or night (Dawkins, 1996; Gazzaniga, 1998, p. 11). The eye, a survival device that functions to distinguish change from nonchange, begins the process of making meaningful sense out of light from the external world. As the signal is carried via the optic nerve to the visual cortex, the internal brain takes control of the process.

The eyes are, in fact, a direct extension of the brain into the environment. The last and most sophisticated of our senses to evolve, our eyes send more data more quickly and efficiently through the nervous system than any other sense. Characterized by cells responsive to minute differences in shape, direction, degree of slant, and color, the eyes represent the first stage in a segmented sequential process that eventually results in meaning and all that is implied by “seeing.” The optical system, an interface between the brain proper and the environment, is a synchrony of millions of nerve cells firing in particular patterns in parallel and sequential processing. Within the system, cells work separately and in concert with one another to activate and to inhibit certain responses, using continual feedback looping to hone the image that we see. Perception, the process by which we derive meaning through experience, is a dynamic, interactive system that utilizes built-in genetic programming to synthesize sensory input, memory, and individual needs. The eyes are only an initial part of the equation, and can, in fact, be bypassed altogether.

Experiments with blind people have shown, for example, that we actually do not need eyes to “see.” Patients fitted with a device turning low-level video pictures (through a camera mounted next to the eyes) into vibrating pulses (fed to a patch of skin on their backs) have also learned to “see.” The skin conducts the signals, which the brain can then convert into neural imagery—imagery that can then be interpreted as sight (Carter, 1999). This “vision” utilizes the rudimentary structures of perception, but does not result in understanding. Because it lacks the emotional processing through the limbic system that is essential for meaning, it gives us a clue as to how important emotional processing is to perception. Without it, we are in fact lost and cannot function adequately in everyday life.

As shown in Fig. 3.1, perceptual process begins with ambient light that bounces off objects in the environment. This optic array is focused by the cornea and lens onto the 126 million receptors of the retina—120 million rods and 6 million cones—which line the back of the eye. As the visual system seeks and acts on information from the environment, retinal inputs lead to ocular adjustments and then to altered retinal inputs as the eyes actively engage the environment. Receptors in the retina then transform and reduce information from light into electrical impulses, which are then transmitted by the optic nerve from each eye to the brain’s visual thalamus and onto the visual cortex where vision actually occurs.

Mechanically, the retina is a complex network of neurons lining the back of the eye; it contains rods (that detect light and shape and are used in night vision) and cones (that detect color and are used in day vision). The area of clearest vision in the retina is the fovea, the small central region where receptors are most densely packed.

Fig. 3.1.   Eye mechanics. When we see, light passes through the cornea, a tough membrane of four transparent layers, which reduces the speed of light and directs it toward the center of the eye; this light moves through the pupil and onto the lens, which focuses the light onto the retina at the back of the eye. The optic nerve then transmits electrical signals from the retina to the brain, where vision occurs.

In the thalamus, before conscious recognition of the object is achieved, the message is split into two processing routes (Fig. 3.2), which are key to understanding how perception works. The first route, the thalamo-amygdala pathway, is a crude network, which LeDoux (1994, pp. 50–57, 1998, pp. 163–167) described as “quick and dirty,” that sends signals directly to the amygdala, the emotional center of the brain. In this part of the process, the perceived shape of the situation is quickly matched to others stored in emotional memory, and an emotional response is then framed in keeping with past positive or negative experience. Although we are not aware of the process, the end result is felt by us—most dramatically as a “fight or flight” response in extreme situations, or in non-life-threatening situations as a feeling or attitude that sets up our cognitive thinking, skewing it automatically toward a particular response. The second, slower route, the cortical pathway, transmits signals to the cortex, where they are refined and again sent to the amygdala (shown in Fig. 3.2) for emotional coloring. It is in the cortex that we first become aware of what we see, but by then the process has already activated certain emotions and responses outside the range of our consciousness. It is this aspect that is so difficult for the average person to grasp, because our brain continues to fool us into thinking that our rational being is in charge. As Gazzaniga (1998) noted:

By the time we think we know something—[i.e.,] it is part of our conscious experience—the brain has already done its work … Systems built into the brain do their work automatically and largely outside of our conscious awareness. The brain finishes the work half a second before the information it processes reaches our consciousness.

(p. 63)

Fig. 3.2.   Signals received by the thalamus are sent directly to the amygdala, the seat of our emotion, and also to the cortex, the seat of conscious processing. The signal sent through the thalamo-amygdala pathway is shorter and less complex than the signal sent through the cortical pathway. Emotional reactions are therefore faster than conscious ones, and emotional memory frames all conscious response. The cortex also sends a second signal to the amygdala, adding conscious input to emotional reaction and emotional response to thought. Emotional reaction is a survival-oriented unconscious response that can bypass conscious thought altogether.
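The dual-route logic of Fig. 3.2 can be sketched as a toy routing table. This is an illustration of relative path length only, not a neural simulation; the station names follow the text.

```python
# A toy sketch of the two processing routes in Fig. 3.2.
# Station names follow the text; this illustrates relative
# path length, not neural anatomy or timing.
ROUTES = {
    "thalamo-amygdala": ["thalamus", "amygdala"],
    "cortical": ["thalamus", "cortex", "amygdala"],
}

def hops(route: str) -> int:
    """Number of relay stations a signal passes through on a route."""
    return len(ROUTES[route])

# The emotional route is shorter, so its reaction arrives before
# conscious recognition, which comes only via the cortical route.
assert hops("thalamo-amygdala") < hops("cortical")
assert "cortex" not in ROUTES["thalamo-amygdala"]
```

The shorter chain captures why, in this account, emotional reactions precede and frame conscious ones.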

Although it may seem counterintuitive, the “quick and dirty” thalamo-amygdala pathway, which engages the limbic system, still remains our first line of defense and has many more one-way connections to the thinking cortex than the cortex has to it. “Emotion,” which is generated by the limbic system, refers not to something that the mind or brain does or has, but to different kinds of responses mediated by separate unconscious neural systems. These systems, which feed vision and other sensory processes, have evolved to accomplish behavioral goals associated primarily with survival and reproduction. They function unconsciously, and according to LeDoux, enter awareness only as “outcomes” and “only in some instances” (1998, p. 17). It is this emotional aspect of visual communication that is limited in patients fitted with the artificial tactile visual device described earlier: Even though male subjects tested had become practiced “viewers” and were able to accurately describe what they were seeing through the device, they remained emotionally unmoved by what they saw (Carter, 1999).

The cortical pathway, shown in Fig. 3.3, supplements and complements the emotional one; it is slower and involves more recently evolved structures. As the more refined signal progresses toward conscious recognition, it moves through separate brain areas in the visual cortex: In the area termed V1, general scanning is done in which points in the visual cortex match the external visual field. Here the picture is distorted, paralleling the foveal and peripheral vision fields of the eye (the fovea, the only part of the retina that sees with clarity, is more densely packed with neurons and so takes up a larger part of the V1 image). Area V5, a separate area specialized for detecting motion, receives signals from V1 and also directly from the retina. Because seeing change and the motion of a potential predator is one of the most important capabilities in survival, it is important that the brain receive this information as quickly as possible.


Fig. 3.3.   Visual input travels through the eye, and via the optic nerve, to the thalamus and then to area V1, the primary visual cortex, where it is sent to other appropriate areas for processing. Area V5, which is geographically distinct and specialized for motion detection, receives signals directly from the retina and also through V1.

In the visual cortex, electrical signals sent from the retina are processed by thousands of specialized modules, each of which corresponds to a small area of the retina. When we ask the question “where?” for example, a pathway involving areas V1→V2→V3→V5→V6 is activated. When we ask “what?” a pathway from V1→V2→V4 is activated (Carter, 1999). There are four parallel systems involved in the different attributes of vision—one for motion, one for color, and two for form. Color is perceived when cells specialized to detect wavelength in V1 signal two other specialized areas in V4 and V2. Form in association with color is detected by a circuit of connections between V1, V2, and V4. Perception of motion and dynamic form occurs when cells in layer 4B of V1 send signals to areas V3 and V5 and through V2 (Zeki, 1992). Cells in V6 determine objective positioning. This symphony of intricate and delicate biochemical and electrical rhythms comes together as perception.
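The parallel pathways just described can be sketched as a simple lookup. This is a loose illustration of the routing Zeki (1992) describes, not an anatomical model; the area chains are taken directly from the text.

```python
# A loose illustration of the parallel visual pathways described in
# the text (after Zeki, 1992): each perceptual question activates a
# different chain of specialized cortical areas. This is a lookup
# over the chains named in the text, not an anatomical model.
PATHWAYS = {
    "where": ["V1", "V2", "V3", "V5", "V6"],  # motion and spatial position
    "what": ["V1", "V2", "V4"],               # form in association with color
}

def areas_activated(question: str) -> list[str]:
    """Return the chain of visual areas engaged for a given question."""
    return PATHWAYS[question]

# Both chains begin in V1, the primary visual cortex:
assert all(chain[0] == "V1" for chain in PATHWAYS.values())
```

The shared V1 entry point and diverging chains capture the modular, parallel organization the text attributes to the visual cortex.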

In the visual fine arts, neurologist Semir Zeki has used this knowledge to explore how artists have utilized the active process of vision as a means of gathering information to express a unified truth within an ephemeral world. Vision, Zeki commented, is “an active process in which the brain, in its quest for knowledge about the visual world, discards, selects, and, by comparing the selected information to its stored record, generates the visual image in the brain, a process remarkably similar to what an artist does” (1998, p. 21). The function of art and the function of the brain, he concluded, are both the same: to find and represent the constant, lasting, essential, and enduring features of objects, surfaces, faces, and situations. Zeki observed, for example, that the Fauvists, who tried to liberate color from form, faced an impossible task: Although color and form are processed separately by the brain, they are so intimately linked that only an extreme pathological condition could separate them (1998, pp. 197–204). Conversely, according to Zeki, Cubism’s attempt to find the essence and permanence of things within changing views of it mimics the brain’s ability to integrate successive views of objects and people as they move within the environment, or as we move around them within a given space, into a single image (1998, p. 54).

In the process of perception, data are reduced and compressed, and what was once a retinal image becomes not a cameralike picture of external reality, but rather a representative map of the visual field. In this way, light is transformed into meaning built from separate, specific functions in the brain. The eye—triggered by the attention system of the brain—is continually and automatically darting about to gather the specific information that will form the mental image. Both the eyes and the external world are in continuous motion; the brain creates from this motion a stable mental configuration that can be described as an “image.”

An image, Damasio explained, consists of a neural pattern that represents the highest level of biological phenomenon (1999, p. 9). The ability to hold the image over time, a process described as “working memory,” is ultimately the basis of extended consciousness. “All consciousness,” Damasio explained, “operates on images” (1999, pp. 122–123).

Technology, the Brain at Work, and Implications for Visual Communication

Within the past 30 years, primarily through the power of functional magnetic resonance imaging (fMRI), computerized tomography (CT), positron emission tomography (PET), and near-infrared spectroscopy (NIRS), it has become possible to view exquisitely detailed images of the brain and to learn what parts are active in performing various visual, oral, and computational tasks. MRI works by magnetism, aligning atomic particles within body tissue and recording the feedback when these are bombarded with radio signals; sophisticated software then converts this information into a three-dimensional picture. Functional MRI adds to this picture, showing areas of greatest brain activity by revealing which parts are using the greatest amount of oxygen. By recording four images each second, fMRI provides a rapid scanning of the flow of activity in the brain as it undertakes various tasks. Although very expensive, fMRI provides the highest resolution image of brain activity to date. PET scans also show areas of activity in the brain, but without the high resolution of fMRI and with the additional drawback of requiring the injection of a radioactive marker into the bloodstream. NIRS also reveals active brain areas, but does so by bouncing light waves into the brain and measuring the reflection. Today, multimodal imaging that combines these techniques is becoming increasingly popular (Carter, 1999).

With the availability of such techniques, neuroscience has been able to build a map of how the brain’s modules function and communicate with one another in solving particular problems and undertaking specific tasks. The image that we ultimately perceive is unified, not because the mind sees a picture of what is really “out there,” but because the specialized areas in the visual cortex link four parallel systems into a vast network, a network in which reentrant connections allow information to flow both ways to resolve conflicts between cells. This network, neuroresearcher Zeki (1992, 1998) speculated, allows information processed in different places to be combined through synchronous firing, and this synchronicity yields perception and comprehension simultaneously. There is, many neurologists believe, no single area in the brain where all of the different sensory regions converge in such a way as to form the base for an “integrated mind.” Instead, there is a system of modules, each with its own local attention and working memory devices. Our sense of ourselves as beings with a rational integrative mind in control is just an illusion. “Our strong sense of mind integration,” Damasio explained, “is created from the concerted action of large-scale systems by synchronizing sets of neural activity in separate brain regions, in effect a trick of timing” (1994, p. 95).

This modular functioning also lends insight into the relationship between language and thought and, ultimately, to the efficacy of semiotic criticism in visual communication theory. Linguistic theoretician Ray Jackendoff observed, “language and thought, while related, are distinct forms of mental information.” The answer to how thought can be different from language when we seem to think in words is that “the language we hear in our heads while thinking is a conscious manifestation of the thought—not the thought itself, which isn’t present to consciousness” (1994, p. 187). Researcher Steven Pinker, author of The Language Instinct, explained that the human brain utilizes at least four different formats in representing thought: (a) the visual image as a two-dimensional picturelike mosaic; (b) a phonological representation that runs like a tape loop; (c) grammatical representations of nouns and verbs, phrases and clauses, stems and roots, phonemes and syllables, arranged in hierarchical trees; and (d) “mentalese, the language of thought in which our conceptual knowledge is couched” (1997, pp. 89–90). This fourth format, “mentalese,” Pinker explained, is the mind’s “lingua franca,” a medium in which gist is captured and concepts are stored; it is this format that is comparable to Damasio’s concept of image as a biological cluster of neurons firing in synchronicity (1999, p. 9). “Mental imagery,” Pinker stated, “is the engine that drives our thinking about objects in space…. Images drive the emotions as well as the intellect” (1997, pp. 284–285).

Again and again, for example, great creative minds describe their creative thought generation in terms of visual imagery and their reliance on mental images as springboards for extending their understanding well beyond the parameters of verbal language. As Einstein observed of his own use of mental images to explain his theories, images lead to generative syntheses: “My particular ability does not lie in mathematical calculation, but rather in visualizing effects, possibilities and consequences” (Pinker, 1997, p. 285). Among the other scientists Pinker describes as thinking in images are Faraday and Maxwell, Kekulé, Watson, and Crick. Cognitive psychologist Howard Gardner suggests that the creative mind works in images precisely because mental images allow us to understand one idea through another (1993, p. 365).

The format for all consciousness and all meaning, imagery thus appears to be a function quite separate from the processing of spoken language or grammar per se. More akin to the process of visual perception than to language processing, the imagery of consciousness consists in patterns of meaning in which neurons combine into gestalts with meaning greater than, and different from, the sum of individual parts.

Semiotic criticism or rhetorical criticism, like all verbal communication, therefore has the inherent weakness of using verbal grammar and expression to explain the inherently nonverbal. Because images are the basic communication medium of the brain, semiotics and rhetorical criticism come closest to understanding visual communication when they look at relationships and tropes. But even at this level, they are still a tier of understanding away from the “lingua franca” of the brain, and a whole system away from visual communication. When what we read, what we hear, and what we see reach the level of ideas, they all appear in a different format: the format of neural imagery. This neurological shift is what results in meaning, and it is this patterning of neurons that allows us to understand something about the impact of what we see.

In addition to modularity and synchronicity, another important aspect of understanding how perception works, and of its implications for visual communication, is the brain’s basic hemispheric structure. The brain consists of two hemispheres, left and right, each a mirror image of the other with minor variations for specialization.

Each hemisphere directs the movements of the opposite side of the body, and coordination between the two hemispheres is made possible by the bridge of the corpus callosum, which connects them. Although the earliest biological research into the brain consisted mainly of postmortem examination of brain-injured patients, the work of Roger Sperry in the 1960s on split-brain patients with severe epilepsy showed that when the corpus callosum—the only informational conduit between the two hemispheres of the brain—is severed, the left and right hemispheres can no longer communicate thoughts or sophisticated emotions. Although each brain hemisphere is a mirror image of the other, each has its own strengths and weaknesses, ways of processing information, and special skills. Although subsequent research in the field has shown this segmentation to be more complex than originally thought, we may still speak in general terms of left and right brain, even though the two hemispheres maintain a conversation so continuous, harmonious, complementary, and integrative that it appears to be a single stream of consciousness (Carter, 1999, p. 34).

In general, the left hemisphere can be said to be analytical, logical, abstract, and time-sensitive, while the right hemisphere is more holistic and emotional, as well as more fearful, sad, and pessimistic (Springer & Deutsch, 1993). Recognizing faces, finding your way around in space, discerning shapes in camouflage, and seeing patterns at a glance are right-brain activities; breaking down complex patterns into component parts, focusing on detail, and intense analysis are left-brain activities. In normal people, both sides of the brain continuously converse across the millions of nerve fibers of the corpus callosum, but specialized tasks are directed to one side or the other.

In 95% of right-handed people, who themselves compose an estimated 90% of the world population, language facility is almost totally confined to the left hemisphere, which also houses the ability to recognize and imagine shapes according to the arrangement of parts. The right hemisphere, in contrast, is excellent at estimating measurements of whole shapes, can easily judge length and width, and processes information simultaneously and holistically. Most sensory input crosses from the incoming side to the opposite side of the brain (visual input from the right half of the visual field, for example, goes to the left side of the brain for processing, and visual input from the left half goes to the right).

In the science and the art of visual communication, these observations have enormous significance. Because images appeal to the right side of the brain, they are read in a different way from words, which appeal to the left for processing. In advertising, for example, Carter and Frith asserted that much advertising “is designed to exploit the gap between the impressionable right brain and the critical left. Those adverts that use visual images rather than words to convey messages are particularly likely to impinge on the right hemisphere without necessarily being registered by the left” (Carter, 1999, p. 41). Zeki observed that “artists are in some sense neurologists, studying the brain with techniques that are unique to them, but studying unknowingly the brain and its organization nevertheless” (1998, p. 10), and certainly the same can be said of advertising moguls, who have long intuitively understood this hemispheric phenomenon. The Foote, Cone, & Belding (FCB) advertising agency, for example, developed an advertising planning model of purchase decision making well over 20 years ago that recognized feeling and thinking as distinctly separate parts within the same continuum and used this as the basis for developing effective creative strategies in ad campaigns.

As shown in Fig. 3.4, on the horizontal axis of the FCB Grid consumers are seen as making decisions based on some relative degree of thinking and feeling, while on the vertical axis the relative importance of the decision is weighed from high to low. In this model, high-priced items, such as appliances for which consumers are likely to compare features and relative costs, would demand more rational reasons and fact content, or “hard-sell” arguments, to persuade people to purchase. Products such as cigarettes, which suggest no logical reasons for purchase and which until very recently have been relatively inexpensive at the onset of a tobacco habit, demand a more “soft-sell,” “feeling” approach. The former lends itself to verbal argument, the latter to visual persuasion through images. As the late advertising guru David Ogilvy told us, “Writing advertising for any kind of liquor is an extremely subtle art. I once tried using rational facts to argue the consumer into choosing a brand of whiskey,” he said. “It didn’t work. You don’t catch Coca Cola advertising that Coke contains 50 per cent more cola berries…. Next time an apostle of hard-sell questions the importance of brand images, ask him how Marlboro climbed from obscurity to become the biggest-selling cigarette in the world” (1985, pp. 15–16).

Fig. 3.4.   Foote, Cone, & Belding’s Advertising Planning Grid (after Vaughn, 1980). Horizontally, the sliding scale moves from the predominance of thought to feeling in terms of their relative influence on consumer behavior. Vertically, the grid moves from low to high involvement. All products and services can be placed on the grid through social and marketing research and subsequently positioned to determine the appropriate mix of cognitive and emotional appeals to sell them effectively.
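The quadrant logic of the grid can be summarized as a tiny classifier. The sketch below is purely illustrative: the numeric scores, thresholds, and example placements are hypothetical stand-ins for what FCB derived through social and marketing research, though the four strategy labels do follow Vaughn’s (1980) quadrants.

```python
# Illustrative sketch of the FCB Grid's quadrant logic (after Vaughn, 1980).
# The scores, the 0.5 thresholds, and the example placements below are
# hypothetical; in practice products are positioned via marketing research.

def fcb_quadrant(thinking_feeling: float, involvement: float) -> str:
    """Place a product on the FCB Grid and return its creative strategy.

    thinking_feeling: 0.0 (pure thinking) .. 1.0 (pure feeling)
    involvement:      0.0 (low)           .. 1.0 (high)
    """
    feeling = thinking_feeling >= 0.5
    high = involvement >= 0.5
    if high and not feeling:
        return "informative (hard-sell, fact-based argument)"
    if high and feeling:
        return "affective (image-based, emotional appeal)"
    if not high and not feeling:
        return "habit formation (simple reason-why)"
    return "self-satisfaction (soft-sell, social appeal)"

# Hypothetical placements echoing the examples in the text:
print(fcb_quadrant(0.2, 0.9))  # appliance purchase: thinking, high involvement
print(fcb_quadrant(0.8, 0.9))  # cigarette brand imagery: feeling, high involvement
```

The two print calls mirror the chapter’s contrast: the appliance falls in the thinking/high-involvement quadrant (verbal, fact-based argument), the cigarette in the feeling quadrant (visual persuasion through images).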

Leo Burnett, whose advertising agency is responsible for the Marlboro Man, explained that the most effective images resonate deep within the psyche: “The most powerful advertising ideas are non-verbal,” he said, “and take the form of statements with visual qualities made by archetypes. Their true meanings lie too deep for words” (Broadbent, 1984, p. 3). “A strong man on horseback, a benevolent giant, a playful tiger. The richest source of these archetypes,” Burnett believed, “is to be found at the roots of our culture—in history, mythology and folklore. Somewhere in every product are the seeds of a drama which best expresses that product’s value to the consumer. Finding and staging this inherent drama of the product is the creative person’s most important task.” (Broadbent, 1984, pp. 3–4). Such images utilize the right hemisphere’s ability to discern patterns, to ignore detail, to respond emotionally, and to learn visually and holistically. The warning label, however, appears in verbal language and is “read” by a different area of the brain, if indeed it is read at all. Both the FCB Advertising Planning Grid and brain neurology recognize that when high social or ego involvement is the prime motivating factor, cognitive appeals are likely to be ineffectual.

This is supported in eye-tracking studies of adolescents viewing tobacco ads, in which 44% of viewers did not look at the warning label at all and, among those who did, viewing time averaged only about 8% of total attention (Fischer, Richards, Berman, & Krugmann, 1989; Krugmann, Fox, & Fletcher, 1994). In comparing adolescents’ time spent viewing ads as a whole, ads utilizing Joe Camel, a trade character specifically designed to appeal to a younger and cognitively immature audience, were viewed significantly longer (16 seconds) than any other ad, including Marlboro (Fox, Krugmann, Fletcher, & Fischer, 1998). Visual design elements invariably draw the eye away from the words, and the verbal format of the warning also ensures that the message, if read, will be processed differently. Not only are words processed differently by the brain, but their nature is also more experientially remote and less directly emotionally involving, particularly for youngsters.

Brains, it seems, were built to process visual images with great speed and to respond to them with alacrity. They did not evolve to process written verbal symbols in the same way. “Brains were not built to read,” Gazzaniga told us. “Reading is a recent invention of human culture. That is why many people have trouble with the process and why modern brain imaging studies show that the brain areas involved with reading move around a bit. Our brains have no place dedicated to this new invention” (1998, p. 6).

In short, visual images have right-brain appeal, while verbal arguments are grabbed by the left brain for processing. This is why patients with damage to the left hemisphere may suffer speech problems, but those with damage to the right hemisphere are more likely to have perceptual and attentional problems (Springer & Deutsch, 1993). If the corpus callosum is severed, cognitive information from one side of the brain will remain trapped on that side. Emotional information, however, leaks easily across the hemispheres through the limbic system and is unconsciously shared and unconsciously learned. This realization should put emotional processing of visual messages at the forefront of all visual communication research.

Emotional and Cognitive Systems

As stated earlier, there are fundamentally two information-processing systems in the brain—the cortical pathway and the thalamo-amygdala pathway. Until the mid-1980s, it was generally hypothesized that emotion had to come after conscious and unconscious thought processing. Richard Lazarus (1982), for example, argued that emotional reaction required cognitive appraisal as a precondition. It has now been shown, however, that the brain accomplishes its goals in the absence of awareness and that perception is not a system per se, but a description of what goes on in a number of specific neural systems (LeDoux, 1996, p. 16). In the amygdala, a subcortical region buried deep within the temporal lobe, emotional significance is attached to incoming data, readying the body to act before the mind makes the conscious decision to act. Sensory signals from the eye travel first to the thalamus and then, in a kind of short circuit, to the amygdala before a second signal reaches the neocortex. LeDoux explained: “The cortical systems that try to do the understanding are only involved in the emotional process after the fact” (1986, p. 241).

Beneath the cerebral cortex lies the anterior commissure, which connects deep, sub-cortical (nonthinking) regions of the brain, so that even if the left hemisphere of the split-brained person cannot name a stimulus, it is nevertheless capable of receiving emotional information about it. There exists in the brain, in fact, a “fundamental dichotomy—between thinking and feeling, between cognition and emotion” (LeDoux, 1998, p. 15), which runs deeper even than left/right brain asymmetry. The older emotional pathway, that “quick and dirty” route connecting the limbic system to the cortex and neocortex, is what allows raw emotions to reach the thinking areas of the hemispheres.

There is, in consequence, a measurable time gap between action and consciousness of action. As early as the 1950s, Libet (1996) demonstrated in experiments that the conscious will to act comes only after we initiate action, not before. Because of this delay, the mind is also geared to anticipate what is coming, which it does by calling up templates of past experience to predict the future. According to Gazzaniga, “what we see is not what is on the retina at any given instant, but is a prediction of what will be there. Some system in the brain takes old facts and makes predictions as if our perceptual system were a virtual and continuous movie in our mind” (1998, p. 75). LeDoux (1986) explained that this is truly advantageous to our survival, because in critical situations, instinctual responses must not only move rapidly through the limbic system but also must use emotional memory predictively if we are to survive.

Gazzaniga also suggested that although it is counterintuitive to our sense of rationality, one of the chief ways we use our cognitive faculties is to rationalize what has already been emotionally decided. Human beings, he explained, have a centric view of the world and like to think of ourselves as “directing the show most of the time.” He argued, however, that the illusion that we are directing our actions “simply appears to be true because of a special device in our left brain called the interpreter. This device creates the illusion that we are in charge of our actions, and does so by interpreting our past—the prior actions of our nervous system…. Reconstruction of events starts with perception and goes all the way up to human reasoning. The mind is the last to know things” (1998, p. xiii). Gazzaniga’s “interpreter,” a special device in the left hemisphere of the brain, operates on the activities of other adaptations built into the brain through evolution and reconstructs the automatic activities of the brain in order to maintain an integrated view of the world and a holistic sense of self. Although these automatic activities are cortically based, they are nevertheless outside of our conscious awareness, and the role of the interpreter is to “construct theories to assimilate perceived information into a comprehensible whole…. We need something that expands the actual facts of our experience into an on-going narrative, the self-image we have been building in our mind for years. The spin doctoring that goes on keeps us believing we are good people, that we are in control and mean to do good. It is probably the most amazing mechanism the human being possesses” (Gazzaniga, 1998, pp. 26–27). Rationalization is not, therefore, a cognitive tool tied to logic so much as it is a process integral to perception itself.

A series of experiments in which identical stimuli produced a whole range of rationalizations may serve to illustrate. In one, a number of pairs of identical nylon stockings were laid out and women were asked to choose one. When asked the basis for their choices, each was able to cite reasons ranging from differences in color to texture and quality (Gazzaniga, 1992). Here rational cognition is used to justify an irrational choice. Ogilvy suggested an alternate experiment in which the emotional set-up can be used to overwhelm the cognitive beforehand: “Give people a taste of Old Crow, and tell them it’s Old Crow. Then give them another taste of Old Crow but tell them it’s Jack Daniel’s. Ask them which they prefer. They’ll think the two drinks are quite different. They are tasting images” (1985, p. 15). Just as we are prone to consciously rationalize unconscious emotional decisions after the fact, we are also prone to build preconceptions through images before the fact of rational cognition. But this weakness is also a perceptual strength. When everything functions appropriately, precognitive feelings point us in the right direction by tapping emotional learning and assisting the neocortex in its ability to make rational decisions. As Davidson and Irwin noted, “Emotion guides action and organizes behavior towards salient goals … The amygdala has been consistently identified as playing a crucial role in both the perception of emotional cues and the production of emotional responses” (1999, p. 11).

Emotional Learning

Our conscious mind and our emotional system usually work together smoothly, just as the complementary right and left hemispheres of the brain work together through a continuous dialogue across the corpus callosum. When they don’t, however, the consequences can be catastrophic. In the process of developing a mature mind, several things can go wrong: Injury may impair emotional or cognitive abilities; deprivation or lack of use may prevent normal maturation during the evolutionary window of development that opens and closes according to a built-in genetic determinism; and patterns of negative attitudes and behavior may prematurely close off broader choices or reinforce destructive habits of mind or action.

In his book Descartes’ Error, for example, Damasio (1994) told the story of his patient, “Elliot,” whose surgery to remove a fast-growing frontal lobe tumor severed the neural pathways from the amygdala, where emotions are generated, to the frontal cortex, where emotions are registered. The prefrontal cortex, a region of the frontal cortex behind the forehead, has also been identified as the site where decision making takes place (Carter, 1999). The surgery left Elliot without the capacity to feel emotion and therefore without the ability to reach decisions and make accurate judgments. Controlled, dispassionate, eminently rational, but without his emotional system to help, Elliot lost the ability to prioritize, to choose one path of action over another, and to accurately evaluate others’ motivations and character.

Without a perspective on what was important and how much detail was sufficient, he was crippled in his decision making even though his base of knowledge and his intelligence remained intact. Ultimately, because of the surgical damage to the right frontal cortices, Elliot lost his family, his social acumen, his work, his wealth, and his former life. Damasio’s patient reveals not only the interdependence of reason and emotion, but also the evolutionary wisdom in unconscious emotional processing to prepare the way for logic and reason.

Neurological research has also revealed the existence of genetic windows for development. From birth to age 3, for example, the brain is especially vulnerable. At this time, repeated abuse, neglect, or terror (from whatever source) causes a flood of stress-related chemical responses that reset the brain’s fight-or-flight hormones and that make the child more or less reactive to stress throughout life. At this point, abusive parenting, or even repeated stressful media exposure, can change the way the brain responds in ordinary situations by hyperreacting to negative influences or by becoming totally unresponsive to them. Because the emotional system responds as automatically to a horror film as it does to the real thing, unconscious emotional memories are stored in the amygdala, and stressful events and traumatic memories may be burned into the system. Although cortical thinking can override the immediate influence of this visual experience, the emotional system continues working to get the body ready for fight or flight: The heartbeat quickens, breathing accelerates, pupils dilate, temperature drops, and blood is redirected to the muscles. Most importantly, as we physically experience the fight-or-flight response, an emotional memory is laid down to guide future action. The greater the impact of the emotional experience, the more deeply the emotional memory is etched. This memory, because it belongs to a survival-based system geared to learning from traumatic experience, may never be eradicated (LeDoux, 1998; Damasio, 1999).

When thematic activities and patterns of ideas and actions are repeated over and over again, they, too, become deeply embedded within the unconscious memory system, and are established as templates. In this way, emotional response to media becomes a permanent part of our response repertoire. Perhaps the most dramatic example of this has come from the long-term Cultural Indicators Project initiated in 1973 at the University of Pennsylvania. Cultivation Analysis Theory, which crystallized from the project, has concentrated on the storytelling function of media, and focused on the developing patterns of attitude that neurological researchers have found to be the basis of unconscious emotional learning. Correlating these with television viewing habits, the researchers concluded that “the repetitive pattern of television’s mass-produced messages and images forms the mainstream of the common symbolic environment that cultivates the most widely shared conceptions of reality. We live in terms of the stories we tell—stories about what things exist, stories about how things work, and stories about what to do—and television tells them all through news, drama, and advertising to almost everybody most of the time” (Gerbner, Gross, Jackson-Beeck, Jeffries-Fox, & Signorelli, 1978, p. 178).

Because our mammalian brain interprets media images as reality and responds emotionally according to the circumstances presented to it, understanding perceptual processing has significant implications for media effects. Pinker explained: “When we watch TV, we stare at a shimmering piece of glass, but our surface-perception module tells the rest of the brain that we are seeing real people and places…. Even in a life-long couch potato, the visual system never ‘learns’ that television is a pane of glowing phosphor dots, and the person never loses the illusion that there is a world behind the pane” (1997, p. 29).

Although the intent of visual media directors and producers is often taken into account in the discussion of media effects, in terms of perception, the intention of the producer of the image is irrelevant. Neurological effects occur whether they are intended or not. When Gerbner and his associates argue that television exposure has both first- and second-order effects in which both facts and patterns of assumptions are learned, they are fully in tune with perceptual research. According to Gerbner’s Mean World Index, for example, heavy viewers of television vastly overestimated the amount of actual violence in the real world and were more likely to see the world as fearful and to mistrust people in it (Gerbner et al., 1980). Because, neurologically, continually stimulating groups of brain cells makes them more sensitive and easier to activate (Carter, 1999), repeated neural firings with the same thematic or emotional content increase the likelihood of attitudinal and behavioral repetition. As with traumatic exposure, these findings have profound implications for habitual media use and for recurrent patterns of attitude and behavior within media, especially in interactive media such as video games.

Conclusion

Neurologically, without our consciously realizing it, emotional learning occurs that preframes attitudes, thinking, and behavior. Emotional templates serve as a basis for perceptual anticipation of the future, and although reason and emotion both play crucial and inseparable roles in perception, at various times, emotion can and does function at the expense of reason. Whether we are continually bombarded with the same “mean world” pattern in media, or we select out and deliberately repeat specific movies or video games because they resonate with felt needs and realities, it is important to realize that the emotional learning that goes with media experience is both unconscious and peculiarly indelible.

Damasio explained that “images [i.e., mental patterns created through the senses] allow us to choose among repertoires of previously available patterns of action and [to] optimize the delivery of the chosen action” (1999, p. 24). Because the neurological maps that we use to navigate reality are drawn from the repetition of patterns of action provided by both direct experience and visual media, the parameters of our behavioral choices are determined by both, using the same underlying neural mechanisms.

As Sperry, whose split-brain research initiated the neurological work that is the foundation for our new understandings in visual communication, put it in his 1981 Nobel lecture: “Where there used to be a chasm and irreconcilable conflict between the scientific and the traditional humanistic views of man and the world, we now perceive a continuum. A unifying new interpretative framework emerges with far reaching impact not only for science but for those ultimate value-belief guidelines by which mankind has tried to live and find meaning” (Sperry, 1981).

In the process of our becoming, visual communication plays a crucial role, one that is particularly vulnerable to emotional learning and to manipulation by political, economic and other vested interests. “Virtually every image, actually perceived or recalled,” explained Damasio, “is accompanied by some reaction from the apparatus of emotion” and because “the engines of reason still require emotion … the controlling power of reason is often modest” (1999, p. 58). Pattern formation and repetition are the way in which the brain forms attitudes and ideas neurologically, and these repeated patterns create the templates that we use to map and to anticipate reality. Because neurons that “fire together wire together,” these templates are peculiarly resistant to reason (LeDoux, 1998, p. 214).

Visual media, with their frequently recurring patterns of action and thematic development, are peculiarly well suited to emotional learning, as is the individual impact of specific paintings, drawings, and sculptures in the fine arts. In its “wisdom,” Nazi propaganda in Germany in the era preceding World War II began not with control of the spoken word, but with a state art, architecture, and film initiative that captured people emotionally and intentionally bypassed reason. Today, remnants of the same visual techniques can be seen in entertainment from MTV to interactive video games to virtual reality. Because visual messages are mostly processed by unconscious regions of the brain that do not understand that art and mass media are not reality, their visual power can have enormous impact, unintended or not, on our emotional development. Through emotional templates, our attitudes, ideas, and actions are pushed in particular directions, positive or negative.

The neurological research that is currently mapping the mind thus can be seen to provide an invaluable framework for new research in visual studies that bridges the interdisciplinary chasm between the traditional “hard” and “soft” sciences, and for understanding the social implications of what it means to “see” and to “watch” in a visually dominated culture.

References

Broadbent, S. (1984). The Leo Burnett book of advertising. London: Hutchinson.
Carter, R. (1999). Mapping the mind. Berkeley: University of California Press.
Crick, F. (1994). The astonishing hypothesis. New York: Touchstone.
Damasio, A. (1994). Descartes’ error: Emotion, reason and the human brain. New York: Avon.
Damasio, A. (1999). The feeling of what happens. New York: Harcourt Brace.
Davidson, R., & Irwin, W. (1999). The functional neuroanatomy of emotion and affective style [Review]. Trends in Cognitive Sciences, 3(1), 11–21.
Dawkins, R. (1996). The blind watchmaker. New York: Norton.
Fischer, P., Richards, J., Jr., Berman, E., & Krugmann, D. (1989, January 6). Recall and eye tracking study of adolescents viewing tobacco advertisements. Journal of the American Medical Association, 261(1), 84–89.
Fox, R., Krugmann, D., Fletcher, J., & Fischer, P. (1998). Adolescents’ attention to beer and cigarette print ads and associated product warnings. Journal of Advertising, 27(3).
Gardner, H. (1993). Creating minds. New York: HarperCollins.
Gazzaniga, M. (1992). Nature’s mind: The biological roots of thinking, emotions, sexuality, language and intelligence. New York: Penguin Books.
Gazzaniga, M. (1998). The mind’s past. Berkeley: University of California Press.
Gerbner, G., Gross, L., Jackson-Beeck, M., Jeffries-Fox, S., & Signorelli, N. (1978). Cultural indicators: Violence profile no. 9. Journal of Communication, 28, 176–206.
Gerbner, G., Gross, L., Morgan, M., & Signorelli, N. (1980). The mainstreaming of America: Violence profile no. 11. Journal of Communication, 30, 10–29.
Gregory, R. (Ed.). (1998). The Oxford companion to the mind. New York: Oxford University Press.
Jackendoff, R. (1994). Patterns in the mind. New York: Harper Collins/Basic Books.
Krugmann, D., Fox, R., & Fletcher, J. (1994, November/December). Do adolescents attend to warnings in cigarette advertising? An eye-tracking approach. Journal of Advertising Research, 39–52.
Lazarus, R. S. (1982). Thoughts on the relations between emotions and cognition. American Psychologist, 37, 1019–1024.
LeDoux, J. (1986). Sensory systems and emotion. Integrative Psychiatry, 4, 237–248.
LeDoux, J. (1994, June). Emotion, memory and the brain. Scientific American, 270(6), 50–57.
LeDoux, J. (1998). The emotional brain. New York: Simon & Schuster.
LeDoux, J. (2002). Synaptic self. New York: Viking/Penguin.
Libet, B. (1996). Neural time factors in conscious and unconscious mental functions. In S. R. Hameroff et al. (Eds.), Toward a science of consciousness (pp. 337–347). Cambridge, MA: MIT Press.
Ogilvy, D. (1985). Ogilvy on advertising. New York: Vintage Books.
Pinker, S. (1994). The language instinct. New York: William Morrow.
Pinker, S. (1997). How the mind works. New York: Norton.
Sperry, R. (1981, December 8). Nobel lecture: Some effects of disconnecting the cerebral hemispheres. Available at http://www.nobel.se/medicine/laureates/1981/sperry-lecture.html. Last accessed June 3, 2004.
Springer, S., & Deutsch, G. (1993). Left brain, right brain (4th ed.). New York: W. H. Freeman.
Vaughn, R. (1980). How advertising works: A planning model. Journal of Advertising Research, 20(5), 27–33.
Wilson, E. O. (1999). Consilience. New York: Random House.
Zeki, S. (1998). Inner vision: An exploration of art and the brain. New York: Oxford.
Zeki, S. (1992, September). The visual image in mind and brain. Scientific American, 75–76.