Empirical Perspectives on Consciousness and its Relationship to Free Will and Moral Responsibility

Authored by: Neil Levy

The Routledge Companion to Free Will

Print publication date:  November  2016
Online publication date:  November  2016

Print ISBN: 9781138795815
eBook ISBN: 9781315758206
Adobe ISBN: 9781317635475

10.4324/9781315758206.ch38

It is traditional to distinguish two kinds of conditions that an agent must satisfy in order to qualify as acting with free will: a control condition and an epistemic condition. The control condition has attracted the bulk of philosophers’ attention, largely because the debate has centered on the question whether free will is compatible with causal determinism (or, in a version now less central, with God’s foreknowledge). Causal determinism (and foreknowledge) seems a much more plausible threat to control than to the epistemic dimensions of freedom; while it might be plausible to think that I lack some kind of control over my behavior if how I will act is in some sense already settled prior to my making up my mind, it is much less plausible to think that my beliefs concerning the circumstances in which I act are undermined or altered by determinism or foreknowledge.

Latterly, however, the epistemic condition has been attracting more attention. A central reason for this shift has come from work in cognitive science. Research in psychology and in cognitive neuroscience has provided a wide range of evidence that agents lack awareness of important facts that nevertheless play a significant role in shaping their actions. Cognitive neuroscientists have also produced evidence that we lack awareness of our intentions to act; this fact, they argue, shows that we lack free will or are not morally responsible for our actions. (In this chapter, I will not discuss these experiments, since they are well covered elsewhere in this volume.) These experiments have made attention to the epistemic condition—to what agents need to be aware of in order to possess free will and/or to be morally responsible for their actions—an urgent concern.

In this chapter, I will outline the current debate over the extent to which agents need to be conscious of the facts to which they respond, as that debate has evolved in response to the findings of cognitive science. My aim is to assess the extent to which the empirical findings represent a serious threat to our capacity to act freely. In what follows, I understand ‘free will’ as a power of agents such that, by exercising that power in the appropriate circumstances, they are morally responsible for their actions. Moral responsibility, in turn, I understand as closely tied to desert of sanction or reward; to say that an agent is morally responsible for an action is to say that in virtue of having performed it she might justifiably be treated somewhat better or worse than otherwise (this is of course defeasible and frequently defeated). I do not claim that this is the only defensible understanding of either free will or moral responsibility. Nevertheless, these are understandings that, as a matter of fact, are central to most debates about free will and moral responsibility—as this volume illustrates—and which have clear and direct links to important practical questions such as the justification of the criminal justice system.

Before turning to the experimental data and interpretation, one other preliminary remark is in order. When philosophers talk about ‘consciousness,’ it is usually phenomenal consciousness they have in mind. Phenomenal consciousness is the kind of consciousness that has a certain ‘feel’ to it: it is the kind of consciousness at issue when we think about the ineffable quality of the taste of wine or the feel of the sun on one’s face. It is this kind of consciousness that gives rise to the so-called ‘hard problem’ of consciousness: how can mere matter give rise to properties that (according to some) are incommensurate with anything physical (Chalmers 1996)? Fortunately, we are not concerned with the hard problem here, because we are not concerned with phenomenal consciousness at all. Rather, we are concerned with agents’ access (or lack of access) to representational contents. This notion of consciousness is akin to what Block (1995) calls access consciousness. Some philosophers object that this is not a notion of consciousness at all (Searle 1992; Burge 1997). I think, on the contrary, that prior to the 1990s this notion was the central notion of consciousness, and the only notion that Freud (for instance) utilized (as Block himself seems to recognize). In any case, it is the sense of consciousness at issue here.

The Nature of the Threat to Free Will

Intuitively, moral responsibility requires that agents be aware of central facts concerning their own actions (or be culpable for their lack of awareness). For instance, an agent who Φs, believing that he is rendering aid to someone in need but actually performing an action that is harmful—say, giving a dehydrated person a drink to which he turns out to be allergic—seems to be excused from blame in virtue of his ignorance of the fact that the action is harmful (unless there are grounds for holding him responsible for that ignorance). Some experiments suggest that agents may often perform morally significant actions while unaware of facts about those actions, in a way that seems analogous to the ignorance of the agent above, but (unlike the agent in the vignette) while nevertheless responding to the facts of which they are unaware. These experiments raise in a stark manner the question whether it is conscious response to facts that matters, or whether nonconscious response might ground moral responsibility.

Consider the many experiments that demonstrate confabulation of criteria for choice (Dovidio and Gaertner 2000; Uhlmann and Cohen 2005; Son Hing et al. 2008). In typical experiments along these lines, subjects are asked to choose which of two candidates is better qualified for a job. One candidate belongs to a minority or is female; the other is white and/or male. The candidates also differ in their skills and experience, but in ways that leave both apparently well-qualified for the job (one might have more experience while the other might have better paper qualifications for instance). These experiments have two conditions: in one condition, the subjects get the resumes of the candidates with the white/male candidate possessing one set of qualifications and the female/minority candidate the other. In the other condition, the qualifications possessed by each are reversed (this may be done by simply changing the names on the resumes, using stereotypically black and white, or male and female, names).

In these experiments, a majority of subjects choose the male/white candidate in both conditions. Interestingly, however, their choice often seems to be driven not by overt sexism or racism, but by confabulation of merit. Subjects report that the qualifications possessed by the preferred applicant are clearly those that are most relevant for the job.

Note that the experiment was designed so that the confabulation was plausible: no matter how much each subject introspected, and no matter how committed any of them may have been to affirmative action, it is quite possible that they judged the male/white candidate so superior that they had no choice but to prefer him. It is only by looking at the responses across the two conditions that we can detect the fact that the choice was driven, more distally, by implicit biases. This fact was invisible to the subjects themselves. What they were conscious of was a confabulated criterion, whereas the processes that caused the confabulation worked unconsciously. So the decision had a moral character—that it was sexist or racist (for example)—that was due to subjects’ unconscious biases.

The implicit bias literature is a rich source of other evidence that unconscious biases may lead people to behave in ways that are (variously) sexist, racist, homophobic, and so on, without their realizing it and in some cases despite their sincere efforts to treat people fairly. Implicit biases predict a variety of relatively trivial actions that are nevertheless prejudicial—making less eye contact with black confederates, for instance (McConnell and Leibold 2001). More seriously, they predict a higher probability of judging that an ambiguous object in the hands of a black person is a gun, compared to the same object in the hands of a white person (Payne 2006). It is plausible that this fact plays a role in explaining the higher risk of being shot by police that black people run in the United States.

Morally significant actions may be caused not only by implicit biases but also by environmental primes. Whereas in the implicit bias cases actions have a moral character due to agents’ implicit biases (because the following counterfactual is true: were the action controlled by agents’ explicit attitudes alone, the action would lack that moral character), in priming cases the moral character is due either to a prime of which the agent is unconscious, or to processes of which the agent is not conscious and which were triggered by the prime. Agents process primes for semantic content even when they are not conscious of the primes: for instance, in masked priming experiments (in which a word or image may be shown very briefly and then immediately masked to prevent retinal persistence), agents report seeing nothing, but their behavior may be influenced by the prime. In laboratory experiments that model real-life situations, primes are usually presented in full view of the subjects, and the priming effect is said to occur when their behavior is modified without their noticing that the stimulus has had this effect.

For instance, a subtle cue can induce better behavior in people. Bateson et al. (2006) used a gloriously simple manipulation to induce more honesty: placing a stylized picture of a pair of eyes over an ‘honesty box’ in which university staff were supposed to deposit money to pay for their coffees and teas induced a nearly threefold increase in the amount paid, compared to a control condition. Here the subjects were conscious of the image—though presumably they rarely attended to it—but not conscious of how it caused them to change their behavior.

All the cases outlined above involve agents engaging in morally significant actions that are responsive to information of which the agent is not conscious. Call the kind of consciousness lacking here state consciousness—lack of consciousness of a particular mental state. Agents may also perform morally significant actions in the absence of creature consciousness. Whereas a lack of state consciousness is a lack of access to the content of some state, the agent who lacks creature consciousness lacks access to the contents of any state. Absence of creature consciousness combined with apparently intentional action may be seen in absence seizures, complex partial seizures, fugue states, and somnambulism. Agents in such states may engage in sex, drive cars, and even assault other agents.

Should we hold agents morally responsible for these behaviors? More precisely, should we hold them responsible for the moral characters of these behaviors, moral characters due to facts (concerning themselves or concerning the external environment) of which they failed to be conscious? The subjects in the experiments mentioned might choose a man over a woman and do so because the preferred candidate is male; they thereby perform an action that might be said to be sexist. However, they are not conscious of the role that sex played in the choice: are they nevertheless blameworthy? The subject who paid for her coffee, but who might not have done so were it not for a cue whose influence she failed to detect, performs an action that is honest. Is she praiseworthy for her honesty? There are a variety of answers to these questions in the literature.

Responding to the Threat

Some philosophers suggest that consciousness of the facts to which we respond is not especially significant for freedom or moral responsibility. Call these philosophers opponents of the consciousness thesis: the thesis that consciousness (of some kind) is a necessary condition of free will. In effect, these philosophers seem committed to saying that we may satisfy the epistemic condition on free will and/or moral responsibility even if we are not conscious of the facts in question. They point to how sophisticated nonconscious processes may be. Above, I remarked that absence of consciousness of a fact seems to threaten agents’ control over their behavior. But nonconscious processes may realize exquisite control. Consider the behavior of agents who experience absence seizures, which are believed to produce a state of absent or greatly reduced consciousness in their sufferers. If they are already engaged in a complex action prior to the onset of the seizure, they will typically continue to perform the action, more or less successfully. They may, for instance, continue to drive a car, remaining capable of navigating a series of lights and turns without hitting obstacles or pedestrians. Or they may continue to play a musical piece they had already begun (Penfield 1975).

Even more impressive, perhaps, is the complex behavior exhibited by conscious agents who nevertheless report that they are not conscious of the facts to which they respond. Elite athletes and expert musicians sometimes report that their best performances occur in a state in which they are not consciously guiding their actions (these states may correspond to the ‘flow’ state—a state of effortless engagement—made famous by Csikszentmihalyi [1990]). These performances seem to be paradigms of control, indicating that controlled processes need not be conscious processes. Some thinkers have concluded, accordingly, that nonconscious sensitivity to reasons is sufficient for responsibility-level control, at least in normal situations and for the normal brain (Suhler and Churchland 2009). (Suhler and Churchland [2014] is an important development and modification of the view.)

On the basis of the fact that agents may respond flexibly to (genuine) reasons in the absence of consciousness of these reasons (or of the reasons qua reasons), most philosophers seem to have concluded that lack of consciousness is not an excuse, though it may be (somewhat) mitigating. Many of these philosophers are motivated more by thought experiments and literary examples than by cognitive science. Several philosophers have cited the case of Huckleberry Finn, for instance, in arguing for the claim that consciousness is not directly relevant to agents’ moral responsibility. Huck fails to turn the escaped slave Jim over to people who would return him to his ‘owners,’ despite his conscious judgment that helping Jim to escape is abetting theft. But Huck acts as he does in response to Jim’s humanity. His responsiveness to genuine moral considerations reveals his quality of will, and renders him praiseworthy; it is revelatory of deep facts about Huck, whereas his conscious judgment is caused by relatively shallow facts about him (Arpaly 2002). Arpaly is explicit that this framework can fruitfully be applied to cases stemming from the experimental literature, although she does not treat such cases in any detail. Angela Smith, whose views also owe their development more to traditional philosophical methods than to cognitive science, has recently argued that her somewhat related view entails responsibility for actions stemming from implicit bias (A. Smith, unpublished: “Implicit Biases, Moral Agency, and Moral Responsibility”). On her view, which develops a framework she has defended over several important publications (Smith 2005, 2008), agents are responsible for attitudes that are properly attributable to them, and for the actions caused by such attitudes; attitudes are properly attributable to them when they reflect their judgments, conscious or unconscious. Implicit biases, Smith argues, are states with representational contents and this entails judgment-dependence of a kind sufficient to underwrite attributability.

Philosophers whose views are more directly driven by engagement with cognitive science have often reached the same conclusion via a different route. Building on Peter Carruthers’ important work on the cognitive science of self-knowledge (Carruthers 2011), Matt King and Carruthers (2012) argue that consciousness or its lack can make no difference to moral responsibility because agents are never conscious of their attitudes in the right kind of way to underwrite moral responsibility. Rather, agents self-attribute attitudes on the basis of the same kinds of evidence they use to attribute attitudes to others: by some kind of interpretative process. King and Carruthers conclude that agents are morally responsible for all the behaviors driven by content-bearing states, no matter their source, rather than those driven by conscious ones alone. More generally, a number of philosophers have pointed to the fact that nonconscious processes are pervasively involved in all behavior, even the most deliberative and reflective: conscious control depends on nonconscious control. On this basis, they have pointed out that if consciousness is required for moral responsibility, it would seem that we are never morally responsible; this fact gives us a strong reason to reject the consciousness thesis (Suhler and Churchland 2009).

A small number of philosophers, and a greater number of psychologists, have reached the opposite conclusion: that lack of consciousness excuses. These thinkers accept the evidence that nonconscious processes are pervasive, and that conscious control depends on nonconscious control, but they bite the bullet on the claim that if consciousness is a necessary condition of moral responsibility, we are never morally responsible. The most forceful advocate of this line among philosophers is Gregg Caruso (2012). For Caruso, so much happens outside our conscious awareness that we cannot possess the kind of control required for moral responsibility. Caruso cites a great deal of evidence (of the kind already outlined above) that factors of which we are not conscious may influence not merely how we act, but the moral character of our actions, in such a manner that we perform actions that have a moral significance of which we might be blissfully unaware. There is, as he says, a powerful pull toward excusing agents of moral responsibility in these kinds of cases; induction generalizes the intuition.

The dispute between those who hold and those who deny that consciousness is required for moral responsibility turns on a number of points. In keeping with the central focus of the free will debate, competing claims about control are central to the dispute. Reflecting recent developments in the literature, however, some philosophers have emphasized attributability or the quality of agents’ will in urging that consciousness is not required for moral responsibility. Properly assessing these claims requires the development of a satisfactory account of control, attributability, and other concepts central to the free will and moral responsibility literature. It also requires detailed attention to the empirical literature, in order to assess the degree to which nonconscious processes may realize responsibility-level control or be properly attributable to agents. For reasons of space, in this chapter I will set aside the topic of attributability and focus on questions of control.

Before we begin to explore these philosophical concepts and their (neuro)psychological underpinnings, it is worth pointing to what should be common ground between the various contending positions. No one maintains (nor should maintain) that consciousness of the central facts concerning a token action at the time of its performance is required for the agent to be morally responsible for it. At most, defenders of the consciousness thesis should maintain that consciousness (of the right sort of facts) must feature appropriately in the history of the action in order for the agent to be responsible for it. Consider what psychologists call ‘overlearned’ actions. These actions may be carried out automatically, in response to triggering cues, and may proceed ballistically until completion of the action sequence. Such an action might be triggered in the absence of consciousness, and yet the agent might be morally responsible for it, on the grounds that she consciously inculcated such an action sequence in herself. For similar reasons, the appeal by opponents of the consciousness thesis to the fact that nonconscious processes may exhibit exquisite control may be undercut by the fact—if it is a fact—that agents have deliberately inculcated such responses in themselves. The control of the athlete or the musician may, for instance, be the product of hundreds of hours of training, in which she consciously sought to bring herself to respond appropriately without thinking.

Let us turn, now, to control, which is surely the most widely accepted necessary condition for moral responsibility. Prima facie, control has epistemic dimensions, such that the epistemic and control conditions are not as easily separated as many have thought (and as my introduction might have suggested) (Mele 2010; Levy 2011). But opponents of the consciousness thesis are right to point out that agents exercise control in ways that are responsive to facts of which they are unconscious (think, again, of the person who experiences an absence seizure while driving a car). Is this control sufficient to underwrite moral responsibility? The answer may depend on what we are holding the agent responsible for.

Consider the most dramatic cases in which the consciousness thesis is at issue: cases featuring assaults (even killings) by agents lacking creature consciousness. While the agents in these cases undoubtedly exercise control, they may not exercise control over the assault itself. Their control may extend only to the range of overlearned actions they perform: driving a car, opening a door, and so on. The assault itself may not be an exercise of control, at least for an agent who lacks a history of violence (Levy 2014).

What makes the performance of overlearned actions an exercise of control but not the assault? Following the lead of Fischer and Ravizza (1998), we might understand control as consisting in a kind of sensitivity to reasons: an agent exercises control over a behavior when she would recognize and respond to a sufficient number and variety of reasons to act in a way that differs from the way she actually acts. Control comes in degrees: the broader the range of reasons she would respond to, the more control she has. Acquiring expertise in an activity is often a question of acquiring greater sensitivity to considerations as reasons to act otherwise, and acquiring the capacity to act appropriately. The driving behavior mentioned above is controlled because it remains sensitive to a range of environmental conditions, despite the absence of consciousness (it seems that sensitivity to reasons, of a particular kind, can itself be overlearned and thereby automated). However, if one or other of the views of consciousness that belong to what Tononi (2004) calls “the integration consensus” is correct, the agent may lack much control over the assault. In views that belong to the integration consensus, consciousness is required for actions to be driven by systems that are broadly sensitive to reasons; without it, actions are driven by systems that can process only a narrow range of reasons. If that is right, the assault may occur because of a lack of sensitivity to reasons: the agent fails to recognize a range of reasons that she herself would, were she conscious, take as reasons to act otherwise.

Whereas the actions of agents who lack creature consciousness are sensitive to only a narrow range of reasons (given the truth of the appropriate account of consciousness), in cases of absence of state consciousness the agent remains sensitive to a very broad range of reasons. Her control is therefore much greater than the control exhibited by the former kind of agent. There may be grounds, though, for denying that she has sufficient control over the moral character of the action, responsibility for which is in question. Just as the extent to which absence of creature consciousness excuses depends on empirical claims about what consciousness does, so the availability of mitigation or excuse depends on empirical claims about the kinds of processes that underwrite responsiveness to unconscious information.

There is a lively debate about the nature of the attitudes involved in unconscious bias. On some views, these attitudes are beliefs (Mandelbaum 2013, 2015; De Houwer 2014). If these implicit attitudes are beliefs, then there is no reason to expect that insofar as behavior is driven by these attitudes it will be any less reasons-responsive than behavior driven by conscious attitudes. Beliefs are, as Stich (1978) puts it, inferentially promiscuous, which is to say, roughly, that they are capable of interacting appropriately with any other representation, and that seems to entail broad and deep reasons-responsiveness.

The standard view among psychologists has hitherto been that implicit attitudes are associations, not beliefs. If implicit attitudes are associations, then behavior driven by them can be expected to be very much less sensitive to reasons than behavior driven by beliefs. Associations do not interact with other representations in ways that are sensitive to their semantic contents, and cannot serve as premises in reasoning: these facts entail that they exhibit little reasons sensitivity. The evidence amassed by Mandelbaum seems to show quite conclusively, however, that implicit attitudes are not just associations. Rather, they have some kind of propositional structure, in virtue of which they do interact with reasons appropriately. Nevertheless, it may be that Mandelbaum is too hasty in inferring from the fact that implicit attitudes have propositional structure to the conclusion that they are beliefs.

Elsewhere, I have argued that implicit attitudes are patchy endorsements (Levy 2015). Like beliefs, they have propositional structure, but this structure is patchy, such that they can interact appropriately only with some representations under some conditions. If implicit attitudes are patchy endorsements, then it is an empirical question, differing from case to case, whether they are able to interact appropriately with the semantic content of a particular reason.

That might seem to entail that whether agents who lack state consciousness nevertheless exercise control over their behavior itself varies from case to case, and that therefore they are morally responsible for some actions caused by implicit biases and not others. I suggest that the class of cases in which there is at least a prima facie case for excuse is broader than that. Though it is true that the extent to which the relevant mechanisms are capable of recognizing a particular reason varies from case to case, when an action has a moral character due to implicit bias, that moral character is evidence that the mechanism was not responsive to the right reasons. In such cases, the following counterfactual is often (and perhaps always) true: were the agent, or the relevant mechanism, capable of recognizing the right reasons, the action would not have had the character it had. Consider the experiments demonstrating confabulation of merit, for instance. In these experiments, participants ignored reasons they were perfectly capable of recognizing (that the minority or female candidate was at least equally well-qualified for the job). It is insensitivity to these reasons that explains the moral character of the actions. Had the relevant mechanism been capable of responding appropriately—and nothing in the patchy endorsements account entails that it could not, in principle, have been so capable—the action would not have had that character.

There are possible cases in which an action has a moral character due to an implicit bias, yet the bias-constituting mechanism is capable of responding to relevant reasons. Cases like this will involve implicit mechanisms that recognize but devalue the reason, just as someone with a racist belief might recognize a black candidate’s qualifications as a reason to hire her, but devalue this reason. Were implicit attitudes unconscious beliefs, it would be comparatively easy to see how they might cause actions which have a moral character due to their influence and in which the implicit attitude was appropriately sensitive—receptive and reactive (Fischer and Ravizza 1998)—to the relevant reasons. It is harder to see how patchy endorsements might cause actions that satisfy this kind of description. Sensitivity to the reason must be coupled with a great deal of other, belief-like, processing.

This brief survey of the empirically driven or empirically sensitive philosophical literature on the extent to which consciousness of the reasons to which we respond is a necessary condition of moral responsibility for our actions is not exhaustive. It does, however, indicate the kinds of considerations that are relevant to the debate (as well as picking out the major landmarks in the landscape). Progress on these issues requires expertise in several different areas of philosophy—not just the philosophy of action and of moral responsibility, but also the philosophy of mind—as well as detailed knowledge of the relevant cognitive science. Because the kind of expertise required remains relatively rare, and because the questions addressed are relatively novel, a wide variety of questions remain to be explored in sufficient depth (concerning the nature of control and the extent to which it might be exercised by nonconscious processes; the nature of attributability and whether—and when—nonconscious states are appropriately attributed to agents; the nature of implicit attitudes; and so on). Given the rapid developments in this area, and the range of topics upon which it touches, it is emerging as one of the most exciting debates in the philosophy of free will and moral responsibility.

References

Arpaly, N. (2002) Unprincipled Virtue: An Inquiry into Moral Agency. Oxford: Oxford University Press.
Bateson, M., Nettle, D., and Roberts, G. (2006) “Cues of Being Watched Enhance Cooperation in a Real-World Setting,” Biology Letters 2: 412–414.
Block, N. (1995) “On a Confusion About a Function of Consciousness,” Behavioral and Brain Sciences 18: 227–287.
Burge, T. (1997) “Two Kinds of Consciousness,” in N. Block, O. Flanagan, and G. Güzeldere (eds.), The Nature of Consciousness: Philosophical Debates. Cambridge: MIT Press.
Carruthers, P. (2011) The Opacity of Mind. Oxford: Oxford University Press.
Caruso, G. (2012) Free Will and Consciousness: A Determinist Account of the Illusion of Free Will. Lanham: Lexington Books.
Chalmers, D. (1996) The Conscious Mind. Oxford: Oxford University Press.
Csikszentmihalyi, M. (1990) Flow: The Psychology of Optimal Experience. New York: Harper & Row.
De Houwer, J. (2014) “A Propositional Model of Implicit Evaluation,” Social and Personality Psychology Compass 8: 342–353.
Dovidio, J.F. and Gaertner, S.L. (2000) “Aversive Racism and Selection Decisions: 1989 and 1999,” Psychological Science 11: 319–323.
Fischer, J.M. and Ravizza, M. (1998) Responsibility and Control: An Essay on Moral Responsibility. Cambridge: Cambridge University Press.
King, M. and Carruthers, P. (2012) “Moral Responsibility and Consciousness,” Journal of Moral Philosophy 9: 200–228.
Levy, N. (2011) Hard Luck. Oxford: Oxford University Press.
Levy, N. (2014) Consciousness and Moral Responsibility. Oxford: Oxford University Press.
Levy, N. (2015) “Neither Fish nor Fowl: Implicit Attitudes as Patchy Endorsements,” Noûs 49: 800–823.
McConnell, A.R. and Leibold, J.M. (2001) “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes,” Journal of Experimental Social Psychology 37: 435–442.
Mandelbaum, E. (2013) “Against Alief,” Philosophical Studies 165: 197–211.
Mandelbaum, E. (2015) “Attitude, Inference, Association: On the Propositional Structure of Implicit Bias,” Noûs 50: 629–658.
Mele, A. (2010) “Moral Responsibility for Actions: Epistemic and Freedom Conditions,” Philosophical Explorations 13: 101–111.
Payne, B.K. (2006) “Weapon Bias: Split-Second Decisions and Unintended Stereotyping,” Current Directions in Psychological Science 15: 287–291.
Penfield, W. (1975) The Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. Princeton: Princeton University Press.
Searle, J. (1992) The Rediscovery of the Mind. Cambridge: MIT Press.
Smith, A. (2005) “Responsibility for Attitudes: Activity and Passivity in Mental Life,” Ethics 115: 236–271.
Smith, A. (2008) “Control, Responsibility, and Moral Assessment,” Philosophical Studies 138: 367–392.
Son Hing, L.S., Chung-Yan, G.A., Hamilton, L.K., and Zanna, M.P. (2008) “A Two-Dimensional Model That Employs Explicit and Implicit Attitudes to Characterize Prejudice,” Journal of Personality and Social Psychology 94: 971–987.
Stich, S. (1978) “Beliefs and Subdoxastic States,” Philosophy of Science 45: 499–518.
Suhler, C.L. and Churchland, P. (2009) “Control: Conscious and Otherwise,” Trends in Cognitive Sciences 13: 341–347.
Suhler, C.L. and Churchland, P. (2014) “Agency and Control,” in W. Sinnott-Armstrong (ed.), Moral Psychology: Free Will and Moral Responsibility, Vol. 4. Cambridge: MIT Press, pp. 309–326.
Tononi, G. (2004) “An Information Integration Theory of Consciousness,” BMC Neuroscience 5: 42.
Uhlmann, E.L. and Cohen, G.L. (2005) “Constructed Criteria: Redefining Merit to Justify Discrimination,” Psychological Science 16: 474–480.
