Tuesday, November 22, 2005

SCR Reloaded

For those of you interested in consciousness science, the relaunched online journal/e-zine/forum for the scientific study of consciousness, Science & Consciousness Review, is now back online.

New features include comments by registered users, databases, an annotated bibliography, RSS feeds and surveys.

Visit SCR here.

Monday, November 21, 2005

Neurobiology of human values I


Modern brain science has made a big impact on numerous issues, not least on health care. The invention of psychopharmacological agents was the first major breakthrough in human history in the treatment of psychiatric disease.

The consequences of neuroscience may, however, be felt most of all in our self-conception, in how we view human nature. A very clear example of how profound investigations into the brain can upset deep-seated convictions about human nature is the research started by Thomas Willis and his Oxford circle in the 1660s. By empirically examining both human and animal brains, Willis demonstrated that not only bodily but also cognitive functions were governed by physical (albeit, at that time, still unknown) processes located in the grey matter of the brain. This conclusion effectively broke with a more than 2,000-year-old psychological model in which bodily and cognitive functions were commanded by different parts of the soul – most dramatically in Descartes’ philosophy of mind, where the bodily functions were imagined to be controlled by wholly material forces, whereas “the mind” was seen as non-material. In fact, it is fair to say that this break in our self-image was so powerful that we haven’t really come to terms with it yet. The story of Willis’ groundbreaking work is told masterfully in Carl Zimmer’s informative and very entertaining book The Soul Made Flesh.

In many ways, though, the revolution of Willis pales beside the neuroscientific results obtained in the last 30 years. We are now on the brink of understanding how the very brain processes Willis could only speculate about actually work. Some processes, early vision and memory consolidation for instance, are already quite well understood, even at the molecular level. And the invention of brain imaging techniques – PET, fMRI, MEG, etc. – has made it possible for brain scientists to start digging into even more mysterious and opaque faculties of the human brain, such as decision-making, future planning, mathematical reasoning, and language… In short, all the higher-order cognitive faculties we consider the defining and unique traits of the human species. Already, this research is turning up some rather unexpected results. Consider, for example, how studies of human economic decision-making have shown the reward and punishment system – the basal ganglia, nucleus accumbens, amygdala, OFC, and other structures – to be critically implicated. Processes in these structures are highly dependent on neurotransmitters such as dopamine and serotonin. As it turns out, these molecules play much the same role in rat and monkey striatum, with dopamine, for instance, contributing to reward-prediction processes. Thus, behaviour such as economic wheeling and dealing, hitherto considered a strictly human ability, setting us apart from the rest of the animal kingdom (perhaps even elevating us to a superior, non-animal level of the great chain of being...), has at least some root in neuronal mechanisms that have been around for millions of years and which we share with other animals.
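To make the reward-prediction idea concrete, here is a minimal sketch of the temporal-difference (TD) learning rule, the standard computational model of dopamine prediction-error signals. The parameters and numbers are purely illustrative, not taken from any of the studies mentioned above:

    # Minimal sketch of temporal-difference (TD) learning, the standard
    # computational model of dopamine reward-prediction signals.
    # Parameters and numbers are illustrative only.

    ALPHA = 0.1   # learning rate
    GAMMA = 0.9   # discount factor for future reward

    def td_update(value, reward, next_value=0.0):
        """Return the updated value and the prediction error (the 'dopamine' signal)."""
        delta = reward + GAMMA * next_value - value   # reward-prediction error
        return value + ALPHA * delta, delta

    # Repeatedly pair a cue with a reward of 1.0: the prediction error shrinks
    # as the cue's learned value approaches the reward it predicts, mirroring
    # how dopamine responses migrate from the reward to the predictive cue.
    v = 0.0
    for trial in range(20):
        v, delta = td_update(v, reward=1.0)
        print(f"trial {trial:2d}: value={v:.3f}  prediction error={delta:+.3f}")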

Throughout the last century economic behaviour was explained through mathematical models – utility functions and game theory. Now this approach is being turned on its head… or actually inside the head, as it were! Economics is slowly becoming neuroeconomics. And, similarly, philosophy is becoming neurophilosophy, sociology is becoming social neuroscience, aesthetics is becoming neuroaesthetics, etc., etc. This is perhaps the next big revolution brought about by the neurosciences: putting the humanistic sciences on a biological footing, exchanging cultural analysis for neuroscientific experiments!

In the coming days I will post discussions of a new book that epitomizes this neurobiological Kehre. Based on a symposium held this January in Paris, Jean-Pierre Changeux, Antonio Damasio, Wolf Singer, and Yves Christen have published a small book called Neurobiology of Human Values. This is really a remarkable title, since human values, more than anything, have been the foundation on which the whole enterprise of a specifically “human” science (Humanwissenschaft), in contrast to the natural sciences, has been built. Human values, the argument runs, cannot be captured by natural laws (since they are changing and subjective), and therefore human behaviour cannot be the subject of the natural sciences but must be investigated by way of a particular “humanistic” methodology. As Changeux et al.’s book shows, this argument no longer holds up to scrutiny. The many experiments reported in the book’s papers demonstrate that it is, in fact, possible to unveil the neuronal processes underlying human values.

At the same time, the book is also important because its authors are almost all very senior and influential neuroscientists – besides the editors, there are chapters by, e.g., Frans de Waal, Richard Davidson, Nobel laureate Daniel Kahneman, Giacomo Rizzolatti, and Stanislas Dehaene. This will lend important credibility to the ongoing study of human values, a research topic which is not yet mainstream in the neuroscientific community. Thus, although it has its shortcomings (most of all, very sloppy copy-editing), Neurobiology of Human Values is a very welcome publication which deserves mention.

In the coming days I will go through some of the issues raised by the book, including aesthetic values, social values, ethical values, and economic values. And what exactly is a value, then? Stay tuned for the answer to that big question!

Friday, November 18, 2005

More Literary Darwinism

In response to my post on Literary Darwinism, Joseph Carroll has updated his web site, adding not only the Buss chapter I cite, but also several new in press papers. Get 'em here.

Literary Darwinism and the brain

The November 6 issue of the New York Times Magazine ran a piece by D.T. Max on a new literary theory called “Literary Darwinism” (LD). LD is the most prominent part of a larger movement called Biopoetics, dedicated to investigating the evolutionary background of the human capacity for producing and consuming works of art. As is well known, the emergence of anatomically modern Homo sapiens is associated with a “creative explosion” some 50,000–75,000 years ago – exactly when is still hotly debated by archaeologists and paleontologists – which included, among other things, the first appearance of works of art. (Again, some experts argue that the first works of art appeared earlier, and, indeed, we cannot be completely certain that older hominids didn’t produce non-fossilized art such as, for instance, songs or stories; we can, however, be pretty sure that art, in any meaningful sense, was invented by some member of the Homo lineage, and not too long ago.) Thus, the existence of art seems to depend upon some neurocognitive mechanisms that are only found in the human brain. It is of obvious interest to understand not only what these putative mechanisms amount to, but also why the human brain ended up being equipped with them. Biopoetics is dedicated to answering this last question.

If you are at all convinced that humans are biological organisms, this endeavour shouldn’t upset you much. I personally find it pretty obvious that any true understanding of the phenomenon of art calls for a concerted examination of the three old questions: what, how, and why? Without a description of the kinds of language constructions that make for a metaphor (what), a breakdown of the neuronal processes “running” these constructions (how), and an explanation of why the human brain – perhaps in contrast to the brains of other species – has picked up these processes (why), a theory of what a metaphor is cannot really be considered complete. Yet, as it turns out, most scholars studying art and language focus only on the what-question and disregard the question of how behaviour is grounded in neurobiology. Indeed, many (for reasons I won’t speculate on here) even find this question orthogonal to what a real inquiry into art or language should be about. There are therefore a lot of humanistic scholars to whom any introduction of biology into the study of human behaviour, such as that exemplified by Biopoetics and LD, will be like a red rag to a bull.

No doubt this is the main reason the New York Times Magazine finds LD important enough to warrant a whole exposé. (I don’t think I offend anybody by saying that LD is still very much in its infancy: Max tells us that there are in fact “only 30 or so declared adherents [of LD] in all of academia”, and most of the work done on LD to this day is of the sort that Joseph Carroll, the theoretical leader of LD, calls “Darwinian literary criticism” – interpretations based on insights gleaned from evolutionary science. Actual attempts to answer the why-question, “why did the human brain come to be able to produce and consume literature?”, are few and far between.) The main story of Max’s article is certainly that here is something new, something different from mainstream postmodern theory. That’s OK. But it should be stressed that Max’s introduction to LD is very cursory and doesn’t go much into its theoretical assumptions at all. (For instance, it doesn’t discuss LD’s heavy reliance on Evolutionary Psychology (EP) and EP’s hypothesis that the mind is massively modular, with each module having been adapted for some specific task. Both the notion of a massively modular mind and that of the human mind as adapted can be criticized; I will return to this discussion in some later post when I get my hands on The Literary Animal, the anthology of papers on LD that prompted Max’s article.) For a more in-depth presentation the reader should really go to Joseph Carroll’s homepage, where it is possible to download a sizeable number of Carroll’s papers, albeit not his most recent introduction to LD, which can be found in David Buss’ new Handbook of Evolutionary Psychology.

Max’s article does, however, raise an interesting and important point: that LD would benefit immensely from incorporating brain science into its evolutionary framework. I think this suggestion is a very apt and timely reminder. The fact of the matter is that, until recently, both LD and Evolutionary Psychology as a whole have more or less completely neglected the how-question, i.e., how brains actually process literary language. And the reason for this negligence has not only been a pragmatic division of labour, but a problematic commitment to a functionalist stance that goes back to John Tooby and Leda Cosmides’ manifesto “The Psychological Foundations of Culture” in The Adapted Mind; a functionalist stance which deems that genes and neurobiology are not really relevant to understanding biological functions. This stance is problematic since evolution does not really work on “functions” but on genes and, consequently, on the cell biology of the brain. Also, as Marc Hauser has stressed, comparing how “functions” are instantiated in different brains is actually a very powerful way of getting at the evolutionary why-question… Comparing chimpanzee vocalization to human speech, for instance, will tell us something about how the speech system has changed in hominids since chimps and humans parted ways some 6 million years ago. It is therefore very gratifying to now read that no less a figure than Edward Wilson, the doyen of adaptationist studies, points to neuroimaging as a way of advancing LD and evolutionary studies in general. Writes Max:

Edward Wilson told me that he is confident neurobiology can help confirm many of evolutionary psychology's insights about the humanities, commending the work to "any ambitious young neurobiologist, psychologist or scholar in the humanities." They could be the "Columbus of neurobiology," he said, adding that if "you gave me a million dollars to do it, I would get immediately into brain imaging." In fact, you won't always need a million dollars for the work, as the cost of M.R.I. technology goes down. "Five years from now, every psychology department will have a scanner in the basement," says Steven Pinker, a Harvard cognitive psychologist. With the help of those scanners, Wilson says that science and the study of literature will join in "a mutualistic symbiosis," with science providing literary criticism with the "foundational principles" for analysis it lacks.

It should be noted, though, that, due to various technical limitations, much of what is interesting about literature will be very difficult to investigate in an MR scanner. Long stretches of discourse don’t really make for good experimental stimuli, and we need a good model of “literary cognition” before being able to design interesting fMRI experiments. But it is definitely the way to go, and I hope that we will soon see some of the people working within LD take a keener interest in the brain.



References:

D.T. Max (2005): The Literary Darwinists. New York Times Magazine (November 6).
Joseph Carroll (2005): Literature and Evolutionary Psychology. In D. Buss (ed.): Handbook of Evolutionary Psychology. John Wiley.

Monday, November 14, 2005

Rennie gives the pope some directions!

Having mentioned the new blog at Nature.com, it should also be noted that John Rennie, the editor of Scientific American, writes a regular blog worth surfing by from time to time. In the most recent post (dated November 13) as I'm writing this, Rennie scolds the pope for his embarrassing endorsement of Intelligent Design. According to the Associated Press, the pope

...quoted St. Basil the Great, a 4th-century saint, as saying some people, "fooled by the atheism that they carry inside of them, imagine a universe free of direction and order, as if at the mercy of chance."

But, as Rennie remarks,

...I don't think most scientists would say that the universe is directionless or chaotic. Randomness is a fact of nature in many physical processes (e.g., radioactive decay and mutation), but there are also organizing principles at work (e.g., the laws of thermodynamics) that do impose a direction as well. For instance, creationists like to say that it's impossible for random evolution to produce order, but evolution isn't random: natural selection is an orderly directional process that acts on the randomness introduced by mutation. Thus it's not clear whom the Pope is really rebuking with this comment.

Rennie, in older posts, does a great job of rebuking the spectre of ID. It is very gratifying to see the editor of one of the major vehicles for the popularization of science take a stand against this concerted effort to suspend the scientific inquiry into the nature of Homo sapiens.

Saturday, November 12, 2005

Gazzaethics I

What does a member of the US President’s Council on Bioethics think about the consequences of brain science? How is knowledge about the brain influencing bioethical decisions? Michael Gazzaniga is a world-renowned neuroscientist and one of the very founders of modern experimental cognitive neuroscience. He is also a member of the Bioethics Council. In his recent book, The Ethical Brain, Gazzaniga not only presents the new discipline of ‘neuroethics’; he also puts forth his own views on these vital matters. In many ways, Gazzaniga’s own take on the ethical problems is the most interesting part of the book, and it will surely be found provocative by a lot of readers.

Although the book is relatively small (178 pages plus endnotes) and written as a popular science book, the issues treated here – and Gazzaniga’s views – are well worth a closer look. Although Gazzaniga sometimes takes logical leaps from premise to conclusion, the book is most accessible and entertaining. As a member of the Bioethics Council, Gazzaniga has an impact not only on US law and ethics, but potentially a worldwide influence on how we think about ourselves and others, about free will, about when life begins and ends, and about brain enhancement. So let me start by taking on one part at a time: life-span ethics, brain enhancement, the brain and the law, and “universal ethics”.

“An egg and a sperm is not a human. A fertilized embryo is not a human – it needs a uterus, and at least six months of gestation and development, growth and neural formation, and cell duplication to become a human.” (p. 11)

Gazzaniga leaves no doubt that there are specific boundaries for what can be called a human and what cannot. A fertilized egg goes through a series of necessary steps before becoming the complex organism that makes up a human baby. But where along this development do we draw the line of what is human, a being to be given rights like any other human being? Anti-abortionists make use of the “continuity argument”, which states that a fertilized egg will go on to become a human being and therefore deserves the rights of an individual. In this view the embryo is a (potential) human being from conception onwards and should be given rights accordingly. This stands in sharp contrast to today’s practice in many countries worldwide, which accepts abortion before a gestational age of 23 weeks.

In order to make good judgements about these issues, says Gazzaniga, we need to consult scientific evidence. Based on findings from neuroscience, Gazzaniga refers to the development of the brain (and mind). It is now well known that the first electrical brain activity occurs only in weeks 5 to 6. However, this activity is much too crude to be called “brain waves”, which are the product of assemblies of neuronal populations working together, and a hallmark of mental life. It is only around week 23 that synapses start to form in a way that lays the ground for coherent assemblies of brain activity. In order to see the relevance of these brain waves we should look at the other end of the lifeline: brain death. The complete and irreversible cessation of brain activity is the clinical hallmark of the end of life. This is uncontroversial across countries and religions. Practice in the determination of brain death may vary between countries and regions, but the basic assumption that brain death signifies mental death is considered well established today.

The concluding argument is obvious: the incoherent, scattered, and unorganized neural activity that can be found in brain-dead patients – and which signifies death – corresponds to the activation found in embryonic stages up until around week 23. Before that point, neural activation does not represent any integrated thought or behaviour. The embryo is not a mental being before week 23.

What, then, about the continuity argument? As mentioned, there are those who claim that any fertilized egg will continue to grow into a human, and that because of this a fertilized egg should be given the same rights as you and me or any other human. This argument moves beyond the current state of the embryo, basically stating that the human “soul” is present right at or after the fertilization of the egg. Gazzaniga replies by addressing what he calls the “potentiality argument”: the view that “since an embryo or fetus could become an adult, it must always be granted equivalent moral status to a postnatal human being” (p. 11). The premise here (always look for the premises!) is the assumption that a fertilized egg will always develop into one human being. However, such a view is based upon an uninformed picture of fertilization and embryonic development. During the first fourteen days both twinning and chimeras may occur. That is, the fertilized egg can become two individuals, or it can split into twin eggs and then merge back into one egg again. This is in stark contrast to the continuity argument. Otherwise, we should be talking about splitting souls and chimera souls, right?

Gazzaniga gives us a new perspective on ethics: our moral decisions should be informed by the best available evidence on a subject matter. We should not be led by our gut feelings or implicit assumptions about such complex mechanisms as the growth of a human embryo. In order to make sound decisions, we must consult the evidence.

Thursday, November 10, 2005

Are you ready for action?

We are not the only ones starting a serious blog these days. We are being dwarfed by Nature Neuroscience’s (NN) new feature, a blog called Action Potential. And yet, this is very likely one of the resources we will get our material from. A brief look at the blog tells us that it is an attempt at giving the journal a face outwards. To many (most) of us, NN is really one of the top journals in which to get a publication. Any source that can give a better understanding of what goes on behind the scenes at NN is great! And see, they also announce talks about how to get published in NN!

About the purpose of the blog, it says:
“Action Potential is a blog by the editors of Nature Neuroscience - and a forum for our readers, authors and the entire neuroscience community. We'll discuss what's new and exciting in neuroscience, be it in our journal or elsewhere. We hope for spirited conversation!”

If things work out for Action Potential, we hope to see plenty of discussions, news and headlines from NN and Nature as well as other journals. But it seems to depend on the activity of the visitors: you and me. Why not pay Action Potential a visit and join the discussions?

So, a welcome from us Lilliputians; we’ll be watching you.

Thursday, November 03, 2005

Ethical Decision-Making. A new Review.

Why are some types of behaviour deemed wrong or good? Much of the philosophical work dedicated to this question has focused on what could be called its metaphysical dimension: how can we determine whether some act is good or bad by necessity, and should therefore be considered good or bad by all people? Recently, however, a growing number of researchers have begun to look into its neurocognitive dimension: how does the human brain decide whether or not a behavioural act is good or bad? Two researchers have more than anyone pioneered this approach: a philosopher-cum-psychologist, Joshua Greene, and a neuroscientist, Jorge Moll. Both have conducted a number of imaging experiments trying to illuminate which processes take place when we make an ethical decision.

Now Moll, together with renowned neuroscientist Jordan Grafman, has published an interesting review of this research so far, which can be found in the October issue of Nature Reviews Neuroscience. Their basic proposal is that ethical decision-making is the result of the integration of processing in three different brain systems: the prefrontal cortex, the temporal lobe, and the limbic system (and/or the reward system). They call this the “event-feature-emotion” complex. In this scheme, the PFC computes event-structures (a Grafman term) and social values; the temporal lobe computes perceptual and functional features relevant for social reasoning; and the emotional system computes motive states. An example from the paper illustrates their reasoning. If you come across an orphan child, the “feature” system will inform the brain of the child’s display of sadness and retrieve knowledge of what it means to be helpless. The “event-structure” system will predict the sad future of a child living without parental support, and the “motive” system will activate an emotional response to this cognitive processing. The end result will be something like a complex conceptual and emotional integration: this child is in a state of distress; it will not survive without its parents; this situation makes me sad or angry, and I should do something to help alleviate it. It is the right thing to do.
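To make the division of labour concrete, here is a purely hypothetical toy sketch – my own illustration, not Moll and Grafman's model, which is conceptual rather than computational – of the idea that three separate streams are integrated into one moral response:

    # Toy illustration of the "event-feature-emotion" integration idea.
    # All names, weights and numbers here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Situation:
        features: float  # temporal lobe: social/perceptual features (a sad face)
        event: float     # prefrontal cortex: predicted outcome (bleak prospects)
        emotion: float   # limbic/reward system: motivational-emotional response

    def moral_response(s: Situation) -> float:
        # A simple weighted sum stands in for what is surely a nonlinear,
        # dynamic integration; it only captures "three streams, one judgement".
        return 0.3 * s.features + 0.3 * s.event + 0.4 * s.emotion

    orphan = Situation(features=0.9, event=0.8, emotion=0.9)
    print(f"urge to help: {moral_response(orphan):.2f}")  # high value -> act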

Moll and Grafman’s model is hardly the last word on ethical decision-making. But it is exciting to see that some progress is being made in understanding how the moral brain works, seeing as the first neuroethics experiment was only published in 2001.

'The Ethical Brain': Mind Over Gray Matter - New York Times

By SALLY SATEL

New York Times

TOM WOLFE was so taken with Michael S. Gazzaniga's ''Social Brain'' that not only did he send Gazzaniga a note calling it the best book on the brain ever written, he had Charlotte Simmons's Nobel Prize-winning neuroscience professor recommend it in class. In ''The Ethical Brain,'' Gazzaniga tries to make the leap from neuroscience to neuroethics and address moral predicaments raised by developments in brain science. The result is stimulating, very readable and at its most edifying when it sticks to science.

As director of the Center for Cognitive Neuroscience at Dartmouth College and indefatigable author of five previous books on the brain for the general reader alone, Gazzaniga is less interested in delivering verdicts on bioethical quandaries -- should we clone? tinker with our babies' I.Q.? -- than in untangling how we arrive at moral and ethical judgments in the first place.

Take the issue of raising intelligence by manipulating genes in test-tube embryos. Gazzaniga asks three questions. Is it technically possible to pick out ''intelligence genes''? If so, do those genes alone determine intelligence? And finally, is this kind of manipulation ethical? ''Most people jump to debate the final question,'' he rightly laments, ''without considering the implications of the answers to the first two.'' Gazzaniga's view is that someday it will be possible to tweak personality and intelligence through genetic manipulation. But because personhood is so significantly affected by factors like peer influence and chance, which scientists can't control, we won't be able to make ''designer babies,'' nor, he believes, will we want to.

Or consider what a ''smart pill'' might do to old-fashioned sweat and toil. Gazzaniga isn't especially worried. Neither a smart pill nor genetic manipulation will get you off the hook: enhancement might enable you to grasp connections more easily; still, the fact remains that ''becoming an expert athlete or musician takes hours of practice no matter what else you bring to the task.''

But there are ''public, social'' implications. Imagine basketball stars whose shoes bear the logo not of Nike or Adidas but of Wyeth or Hoffmann-La Roche, ''touting the benefits of their neuroenhancing drugs.'' ''If we allow physical enhancements,'' Gazzaniga argues, ''some kind of pharmaceutical arms race would ensue and the whole logic of competition would be neutralized.'' Gazzaniga has no doubt that ''neuroscience will figure out how to tamper'' with neurochemical and genetic processes. But, he says, ''I remain convinced that enhancers that improve motor skills are cheating, while those that help you remember where you put your car keys are fine.''

So where, as Gazzaniga asks, ''do the hard-and-fast facts of neuroscience end, and where does ethics begin?'' In a chapter aptly called ''My Brain Made Me Do It,'' Gazzaniga puts the reader in the jury box in the case of a hypothetical Harry and ''a horrible event.'' This reader confesses impatience with illuminated brain scans routinely used to show that people ''addicted'' to drugs -- or food, sex, the Internet, gambling -- have no control over their behavior. Refreshingly, Gazzaniga declares ''the view of human behavior offered by neuroscience is simply at odds with this idea.''

''Just as optometrists can tell us how much vision a person has (20/20 or 20/40 or 20/200) but cannot tell us when someone is legally blind,'' he continues, ''brain scientists might be able to tell us what someone's mental state or brain condition is but cannot tell us (without being arbitrary) when someone has too little control to be held responsible.''

Last year, when the United States Supreme Court heard arguments against the death penalty for juveniles, the American Medical Association and other health groups, including psychiatrists and psychologists, filed briefs arguing that children should not be treated as adults under the law because in normal brain development the frontal lobe -- the region of the brain that helps curb impulses and conduct moral reasoning -- of an adolescent is still immature. ''Neuroscientists should stay in the lab and let lawyers stay in the courtroom,'' Gazzaniga writes.

Moving on to the provocative concept of ''brain privacy,'' Gazzaniga describes brain fingerprinting -- identifying brain patterns associated with lying -- and cautions that just like conventional polygraph tests, these ''much more complex tests . . . are fraught with uncertainties.'' He also provides perspective on the so-called bias tests increasingly used in social science and the law, like one recently described in a Washington Post Magazine article. Subjects were asked to pair images of black faces with positive or negative words (''wonderful,'' ''nasty''); if they pressed a computer key to pair the black face with a positive word several milliseconds more slowly than they paired it with a negative word, bias was supposed. The unfortunate headline: ''See No Bias: Many Americans believe they are not prejudiced. Now a new test provides powerful evidence that a majority of us really are. Assuming we accept the results, what can we do about it?''

Nonsense, Gazzaniga would say. Human brains make categories based on prior experience or cultural assumptions. This is not sinister, it is normal brain function -- and when experience or assumptions change, response patterns change. ''It appears that a process in the brain makes it likely that people will categorize others on the basis of race,'' he writes. ''Yet this is not the same thing as being racist.'' Nor have split-second reactions like these been convincingly linked to discrimination in the real world. ''Brains are automatic, rule-governed, determined devices, while people are personally responsible agents,'' Gazzaniga says. ''Just as traffic is what happens when physically determined cars interact, responsibility is what happens when people interact.''

Clearly, Gazzaniga is not a member of the handwringer school, like some of his fellow members of the President's Council on Bioethics. At the same time, his faith in our ability to regulate ourselves is touching. He notes that sex selection appears to be producing alarmingly unbalanced ratios of men to women in many countries. ''Tampering with the evolved human fabric is playing with fire,'' he writes. ''Yet I also firmly believe we can handle it. . . . We humans are good at adapting to what works, what is good and beneficial, and, in the end, jettisoning the unwise.''

Gazzaniga looks to the day when neuroethics can derive ''a brain-based philosophy of life.'' But ''The Ethical Brain'' does not always make clear how understanding brain mechanisms can help us deal with hard questions like the status of the embryo or the virtues of prolonging life well over 100 years. And occasionally the book reads as if technical detail has been sacrificed for brevity.

A final, speculative section, ''The Nature of Moral Beliefs and the Concept of Universal Ethics,'' explores whether there is ''an innate human moral sense.'' The theories of evolutionary psychology point out, Gazzaniga notes, that ''moral reasoning is good for human survival,'' and social science has concluded that human societies almost universally share rules against incest and murder while valuing family loyalty and truth telling. ''We must commit ourselves to the view that a universal ethics is possible,'' he concludes. But is such a commitment important if, as his discussion suggests, we are guided by a universal moral compass?

Still, ''The Ethical Brain'' provides us with cautions -- prominent among them that ''neuroscience will never find the brain correlate of responsibility, because that is something we ascribe to humans -- to people -- not to brains. It is a moral value we demand of our fellow, rule-following human beings.'' This statement -- coming as it does from so eminent a neuroscientist -- is a cultural contribution in itself.

Sally Satel is a psychiatrist and resident scholar at the American Enterprise Institute and a co-author of ''One Nation Under Therapy: How the Helping Culture Is Eroding Self-Reliance.''

Taken from NYTimes

Blogging the ethics of neuroscience

Finally, there is a blog on neuroethics. And it seems that it is not just another buzzword blog: it is initiated by Prof. Adam Kolber of the University of San Diego School of Law.

As Kolber writes about the blog: "The Neuroethics and Law Blog is an interdisciplinary forum for legal and ethical issues related to the brain and cognition. It is meant to be of interest to bioethicists, legal academics, lawyers, neuroscientists, neurologists, cognitive scientists, psychologists, psychiatrists, philosophers, criminologists, behavioral economists, and others."

So, does that not include most of us academics dealing with humans? I would think that the top stories from this blog would also give "regular" people something to discuss.

So what are the latest developments in neuroethics? Among formal publications, Martha Farah has just published an article called "Neuroethics: a guide for the perplexed". In it, she touches upon one of the most interesting views, in my opinion: if a "naturalist" account of the mind, i.e. of conscious and unconscious processes, is correct, it will have a tremendous impact on our self-awareness, and consequences for ethics and law.

Another recent development, although many years in the making, is the combination of genetics and neuroimaging techniques. This is indeed a hot topic in human brain mapping science. Of course, it has long been known that genes are the "building blocks" of proteins that, e.g., regulate the uptake of a certain neurotransmitter. But the new idea is to demonstrate that certain genes that are polymorphic, i.e. show a "natural variation" among healthy individuals, have a significant impact on neural function. Several recent studies by Weinberger, Hariri and colleagues demonstrate that even among normals, genes can explain different responses of the brain. For example, they have shown that individual differences in the response of the amygdala to emotional pictures are explained by the subjects' "genetic makeup".
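The logic of these studies is simple to sketch. Below is a schematic illustration – with simulated numbers, not data from the Weinberger/Hariri studies – of the basic analysis: split subjects by genotype and test whether their mean amygdala responses differ:

    # Schematic imaging-genetics analysis with simulated data.
    # Real studies would use per-subject amygdala BOLD estimates.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical amygdala response estimates (arbitrary units) per subject,
    # grouped by a polymorphism (e.g., carriers vs. non-carriers of an allele).
    carriers = rng.normal(loc=0.8, scale=0.3, size=15)
    noncarriers = rng.normal(loc=0.5, scale=0.3, size=15)

    # Two-sample t-test: does genotype group predict mean amygdala response?
    t, p = stats.ttest_ind(carriers, noncarriers)
    print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a genotype effect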

Thinking further along these lines, we might very well end up with people being gene-tested for their potential to become cynical soldiers, effective business executives, empathic caregivers... (fill in your favourite).

Brain cells know more

A story on Newswise reports new research showing that the brain does more than we are aware of. Well, is THAT so surprising? No. We know from priming studies that words flashed too briefly to be consciously detected nevertheless influence people's subsequent choices. For example, if the word "king" was flashed below the conscious threshold, you would likely choose the word "queen" instead of "farmer" afterwards, even though you were not aware of why you did so.
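Just to spell out the logic, here is a toy simulation – the association strengths are invented, and no real priming experiment is this simple – of how a subliminal prime can bias a later two-alternative choice:

    # Toy simulation of semantic priming: a subliminal prime boosts the
    # activation of related words, biasing a later choice between two options.
    # The association table and numbers are invented for illustration.

    import random

    associations = {"king": {"queen": 0.8, "farmer": 0.2}}  # hypothetical

    def choose(prime, options, boost=0.5):
        # Baseline preference is flat; the prime adds a related-word boost.
        weights = [1.0 + boost * associations.get(prime, {}).get(opt, 0.0)
                   for opt in options]
        return random.choices(options, weights=weights, k=1)[0]

    picks = [choose("king", ["queen", "farmer"]) for _ in range(1000)]
    print("queen chosen:", picks.count("queen") / 1000)  # > 0.5: priming bias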

Anyway, the study reported below is still important. It pinpoints brain mechanisms that underlie subliminal perception.

YOUR BRAIN CELLS MAY “KNOW” MORE THAN YOU LET ON BY YOUR BEHAVIOR
http://www.newswise.com/p/articles/view/515337/

We often make unwise choices although we should know better. Thunderstorm clouds ominously darken the horizon. We nonetheless go out without an umbrella because we are distracted and forget. But do we? Neurobiologists at the Salk Institute for Biological Studies carried out experiments that prove for the first time that the brain remembers, even if we don’t and the umbrella stays behind. They report their findings in the Oct. 20th issue of Neuron.

“For the first time, we can take a look at the brain activity of a rhesus monkey and infer what the animal knows,” says lead investigator Thomas D. Albright, director of the Vision Center Laboratory.

First author Adam Messinger, a former graduate student in Albright’s lab and now a post-doctoral researcher at the National Institute of Mental Health in Bethesda, Md., compares it to subliminal knowledge. It is there, even if it doesn’t enter our consciousness.

“You know you’ve met the wife of your work colleague but you can’t recall her face,” he gives as an example.

Human memory relies mostly on association; when we try to retrieve information, one thing reminds us of another, which reminds us of yet another, and so on. Naturally, neurobiologists are putting a lot of effort into trying to understand how associative memory works.

One way to study associative memory is to train rhesus monkeys to remember arbitrary pairs of symbols. After being shown the first symbol (i.e. dark clouds) they are presented with two symbols, from which they have to pick the one that has been associated with the initial cue (i.e. umbrella). The reward is a sip of their favorite fruit juice.

“We want the monkeys to behave perfectly on these tests, but one of them made a lot of errors,” recalls Albright. “We wondered what happened in the brain when the monkeys made the wrong choice, although they had apparently learned the right pairing of the symbols.”

So, while the monkeys tried to remember the associations and made their error-prone choices, the scientists observed signals from the nerve cells in a special area of the brain called the "inferior temporal cortex" (ITC). This area is known to be critical for visual pattern recognition and for storage of this type of memory.

When Albright and his team analyzed the activity patterns of brain cells in the ITC, they could trace about a quarter of the activity to the monkey’s behavioral choice. But more than 50 percent of active nerve cells belonged to a novel class of nerve cells or neurons, which the researchers believe represents the memory of the correct pairing of cue and associated symbol. Surprisingly, these brain cells kept firing even when the monkeys picked the wrong symbol.

“In this sense, the cells ‘knew’ more than the monkeys let on in their behavior,” says Albright.

And although behavioral performance is generally accepted to reliably reflect knowledge, in fact, behavior is heavily influenced – in the laboratory and in the real world – by other factors, such as motivation, attention and environmental distractions.

“Thus behavior may vary, but knowledge endures,” concluded Albright, Messinger and their co-authors in their Neuron paper. The other co-authors are Larry R. Squire, a professor in the Department of Psychiatry at the UCSD School of Medicine, and Stuart M. Zola, director of the Yerkes National Primate Research Center in Atlanta.

The Salk Institute for Biological Studies in La Jolla, California, is an independent nonprofit organization dedicated to fundamental discoveries in the life sciences, the improvement of human health and the training of future generations of researchers. Jonas Salk, M.D., whose polio vaccine all but eradicated the crippling disease poliomyelitis in 1955, opened the Institute in 1965 with a gift of land from the City of San Diego and the financial support of the March of Dimes.

© 2005 Newswise. All Rights Reserved.

Free willie --- and implications

This story dropped in through my Google Alerts the other day (not Dilbert). Are we really going to get any further in explaining free will, whatever that is?

When teaching students and giving talks touching on this topic, I often have to ask: what is free will free from? We know that as biological beings we are bound by the physical forces of nature; as social beings our behaviours are constrained by our social milieu. Our choices are influenced by moods and unconscious processes (such as priming). So what do we really mean by "free"?

This article also discusses some of the ethical implications of neuroscience --- yes, neuroethics yet again. Take this as a hint that brain science is beginning to have an impact on human thought and introspection, even everyday thought and talk.

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Does Neuroscience Refute Free Will?

This is the excellent foppery of the world, that, when we are sick in fortune, — often the surfeit of our own behavior, — we make guilty of our disasters the sun, the moon, and the stars; as if we were villains on necessity, fools by heavenly compulsion, knaves, thieves, and treachers by spherical predominance, drunkards, liars, and adulterers by an enforc'd obedience of planetary influence, and all that we are evil in, by a divine thrusting on. --William Shakespeare

In the above quote from King Lear we find a description of those who, throughout human history, deny free will and personal responsibility, instead blaming their wrongdoings on interventions divine and planetary. In a recent article, Joshua Greene and Jonathan Cohen join the believers in the "divine thrusting on."[1] This being the scientific age, and our authors being card-carrying neuroscientists, the divine thrusting on becomes a neuroscientific thrusting on, the brain playing the role of the stars above.

The divine thrust of their argument is that we have no free will because there is neuroscience, though our laws have yet to take this into account:

… the law's intuitive support is ultimately grounded in a metaphysically overambitious, libertarian notion of free will that is threatened by determinism and, more pointedly, by forthcoming cognitive neuroscience…. The net effect of this influx of scientific information will be a rejection of free will as it is ordinarily conceived, with important ramifications for the law.[2]

What are these ramifications? To begin with, the concept of personal responsibility is obsolete. Since all actions are determined by the "preexisting state of the universe," we have no choice in the matter. As they put it: "Given a set of prior conditions in the universe and a set of physical laws that completely govern the way the universe evolves, there is only one way that things can actually proceed." Thus we can logically trace everything back to the Big Bang that blasted the universe into existence. Should you ask why I had bagels rather than bananas for breakfast this morning, for example, I can refer you to the Big Bang theory of human action.

But if there is already the Big Bang, why do we need neuroscience to reveal our lack of free will? According to Greene and Cohen, for ages "scientific" philosophers, i.e. philosophers of their determinist camp, had argued against free will, but because the mind was then a black box, it was easy for the deluded religious people, the soft humanists, and other dim-witted souls to cling to the illusion of free will.

Now that we have neuroscience, however, the mind is a black box no more — it is high time for the rest of us to wake up from our dogmatic slumber and smell the deterministic universe. In short, while the Big Bang provides the big picture, neuroscience supplies the details, which will make it abundantly clear, even to the lay public, that we are literally puppets in a deterministic universe after all.

Blame it on the brain
Greene and Cohen argue that our brains are responsible for all our behaviors. Our "brains" commit crimes. "We" are innocent. Thus, in their words, "demonstrating that there is a brain basis for adolescents' misdeeds allows us to blame adolescents' brains instead of the adolescents themselves." It is fortunate that the boys in the neighborhood have not read their article, for here is their new defense after damaging your property: I didn't do it, it was my brain!

Although it has been known since before Plato that the brain plays a central role in behavior, this particular argument is rather novel. One reason others have not been bold enough to advance it (despite a perennially strong demand for determinism) is that it contains a glaring category error. Greene and Cohen compare two opposing sources of agency — either your brain or you — as if they were mutually exclusive, as if without your brain you would still be a moral agent.

As a result of this error, Greene and Cohen conclude, "the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstances will, we submit, seem pointless."

But the moral agent in the legal sense is the whole package — you consisting of your brain and the rest. To say that we are victims of neuronal circumstances is to say that we are victims of ourselves. The underlying assumption is that we have no control over "neuronal circumstances," just as we have no control over "external circumstances." But this assumption (a newly bottled behaviorist assumption) entirely contradicts our knowledge that the brain is a self-organizing and self-regulating biological system, not merely a step in the transformation of some external stimulus to behavioral output.

It is, however, not necessary to discuss in any detail the brain as a control system in order to refute Greene and Cohen, for their argument is not based on any understanding of the brain at all. It boils down to the primitive logic that, for example, if I stole your wallet then my hand is to be chopped off.

Mr. Puppet
To their credit, Greene and Cohen sensed that blaming everything on the brain is not enough. They have another weapon in store for free will, yet another "thought experiment." For their strategy is to generate as many arguments as they can against free will, hoping that some of them will have done the damage, even if these arguments contradict each other.

In their second strike, they urge us to imagine the case of a "Mr. Puppet," a criminal designed by a group of scientists through tight genetic and environmental control. During Mr. Puppet's trial, the lead scientist is called to the stand by the defense. And here is what Greene and Cohen had him say:

I designed him. I carefully selected every gene in his body and carefully scripted every significant event in his life so that he would become precisely what he is today. I selected his mother knowing that she would let him cry for hours and hours before picking him up. I carefully selected each of his relatives, teachers, friends, enemies, etc., and told them exactly what to say to him and how to treat him. Things generally went as planned, but not always. For example, the angry letters written to his dead father were not supposed to appear until he was fourteen, but by the end of his thirteenth year he had already written four of them. In retrospect I think this was because of a handful of substitutions I made to his eighth chromosome.

Of course, a change in the chromosome cannot determine the timing of a nasty letter being written, since the genome does not contain information that specifies any of our actions. The environmental regulation, too, is impossible, except in science fiction. But plausibility, or knowledge of basic biology, is not to be expected from our authors. Greene and Cohen believe that Mr. Puppet is not responsible for his actions, because "forces beyond his control played a dominant role in causing him to commit the crimes, it is hard to think of him as anything more than a pawn."

But even if we assume, for the sake of argument, that such a person could be so designed, we might conclude that he is indeed a puppet of the scientist-designer, while we are not puppets of this sort. Our genes are not selected, nor our environment scripted, by anyone.

Not surprisingly, however, Greene and Cohen reach a rather different conclusion:

What is the difference between Mr. Puppet and anyone else accused of a crime? After all, we have little reason to doubt that (i) the state of the universe 10,000 years ago, (ii) the laws of physics, and (iii) the outcomes of random quantum mechanical events are together sufficient to determine everything that happens nowadays, including our own actions. These things are all clearly beyond our control. So what is the real difference between us and Mr. Puppet? … in a very real sense, we are all puppets. The combined effects of genes and environment determine all of our actions. Mr. Puppet is exceptional only in that the intentions of other humans lie behind his genes and environment. But, so long as his genes and environment are intrinsically comparable to those of ordinary people, this does not really matter. We are no more free than he is.

In an apparent slip, they acknowledged that the scientists had intentions, that they acted deliberately in designing Mr. Puppet. Their actions apparently differ from causes that are not human actions. Greene and Cohen never bothered to ask whether these scientists ought to be punished for specifically designing someone to commit crimes, whether they are responsible at all. But if we are forced to accept this scenario, then the responsibility for the crimes appears to lie with the scientists — for designing puppet criminals.

According to Greene and Cohen, however, Mr. Puppet's genes and environment are "intrinsically comparable" to those of ordinary people, as if we all live in a designed environment in which people deliberately abuse us and lie to us; as if our genes, rather than being the results of natural selection, are picked by a team of evil scientists. Intrinsically comparable? By that they presumably mean that the environment is still an earthly environment like ours, the same house with furniture and TV and parents, and so on, and that the genes are still stretches of DNA made up of garden-variety nucleotides.

But clearly these "intrinsic" features are irrelevant in Mr. Puppet's case. His genes and environment, after all, are designed to make him a criminal. But note, in particular, Greene and Cohen's peculiar emphasis on the combination of genes and environment. Biology, of course, tells us there are additional factors which are neither genetic nor environmental, but we can safely assume that these authors, possessing no particular interest in the science of biology, are not aware of these.

Being metaphorical scientists, by "genes and environment" they mean everything that makes us who we are, everything that determines our actions. We are now ready to translate their claim into plain English: Everything that determines who we are determines who we are; everything that determines our actions determines our actions. Surely we do not have control over everything — Greene and Cohen correctly assume. And surely all possible factors combined determine our actions. But while reaching such a brilliant conclusion they have spun their minds out of control, ignoring the circularity in the process. We are compelled by the laws of logic to agree with them: Yes, a banana is a banana.

Illusion of free will
Having thus disposed of free will, Greene and Cohen are ready to explain why we think we have such a thing. If we think we have something that doesn't exist, then that something must be an illusion. Hence their claim that the brain generates the illusion of free will to fool us into thinking we are in control.

With becoming modesty, our authors compare themselves to Copernicus, Darwin, and Freud in overthrowing human narcissism. Copernicus removes the earth as the center of the universe, Darwin removes human beings as lords of the earth, and Freud removes consciousness as the sole determinant of human behavior. Here comes another blow beneath the belt — even what little conscious control you have over your action is an illusion.

It seems to me, however, that this is a case of sadomasochism. Greene and Cohen appear to derive keen delight from wounding human narcissism, as represented by free-will folk psychology. You thought you decided to read this article because it seemed interesting. But no, you have no clue, and that thought was really just some illusion generated by your brain to mask its cluelessness.

How much insult to your narcissism can you take? That is the question, on which your scientistic manhood depends. Only tough scientists like Greene and Cohen are brave enough to take determinism straight, without illusions. And if you don't think you are a puppet yet, they will beat you into submission with their thought experiments and imagined data until you give up your selfhood. And so the game goes on.

Although I do not wish to deny the multitudinous pleasures derived by Greene and Cohen from becoming puppets of metaphysical fiction and mouthpieces of pseudoscientific rant, I do wish to examine the evidence they present for their claims.

For such evidence Greene and Cohen rely on the work of Daniel Wegner,[3] a Harvard psychologist and a fellow metaphorical scientist. According to Wegner, our actions are not caused by our willing. In support of this claim he cites evidence that hypnosis or brain damage can impair our sense of free will, that various experimental manipulations can create in us the illusion of control.

Our immediate response is: So what? We will not have free will if our heads are cut off. We will not have free will if we are asleep. Sometimes we erroneously thought we caused something to occur when, in fact, we did not. From this, however, it does not follow that free will is an illusion.

Under hypnosis, for example, we might feel that our arm was raised even though we did not will it. Likewise, when our motor cortex or our muscle is stimulated, various movements might be induced which are not willed. For Wegner, however, this sense of "it just happens" is a more accurate description of what really happens when we act. It never occurred to him that there is no experience of will because these are not instances of voluntary actions under our control.

Wegner very much prefers this sense of passivity, for only then do we feel like inanimate objects. When my arm uncontrollably does something, it is acting as a "scientific object" should, like a brick. Our free will must be an illusion because it does not fit into Wegner's scientific understanding of the world.

The philosopher Daniel Dennett believes that, for the sake of convenience, we adopt the "intentional stance" when interpreting the behavior of other human beings. Wegner's position can be described as the "passivity stance." He prefers to feel like the hypnotic subject, the brain damaged patient, or a zombie in general, because, according to his scientific Weltanschauung, the passivity stance is a more accurate reflection of reality. But the question remains as to whether Wegner, or the average man on the street, is actually delusional.

Agreeing with Wegner's claim that our sense of free will is an illusion, Greene and Cohen go one step further, and argue that our attribution of free will in others is also an illusion.

They cite a study by Heberlein et al., who presented the following film to human subjects: a big triangle chases a little circle around the screen, bumping into it. The little circle repeatedly moves away, and a little triangle repeatedly moves in between the circle and the big triangle. When normal people watch this movie they see these interactions in social and intentional terms. The big triangle tries to harm the little circle, and the little triangle tries to protect the little circle.

However, a patient with damage to the amygdala, an almond-shaped collection of nuclei deep in the temporal lobe, fails to see these shapes in such intentional terms.[4] Consequently, for Greene and Cohen, because this attribution of free will is generated by a brain area, it is also an illusion.

Readers of my earlier article will be familiar with Greene and Cohen's penchant for evolutionary speculation. Here they go again. According to their new so-so story, parts of the brain, in the course of evolution, become specialized modules for folk psychology, e.g. attributing free will to others; other parts, for folk physics, e.g. believing in the sort of motion typically seen in a Disney cartoon. We know that folk physics is wrong, but folk psychology is just as wrong, according to Greene and Cohen. Because of our folk psychology system, we think of other animate objects as uncaused causers. But after learning neuroscience, "when we look at people as physical systems, we cannot see them as any more blameworthy or praiseworthy than bricks."

Perhaps we could posit, in addition to the folk physics system and the folk psychology system, a third system of masochistic scientism, which fools one into believing one is a brick being acted on by the forces of nature, rather than an acting agent responsible for his actions. The neural basis of this third system, I submit, remains to be established.

To summarize, then: our brick-minded theorists accuse the folk psychology behind the law of viewing moral agents as uncaused causers. Since we are not uncaused causers, the argument goes, we cannot be moral agents, and we cannot be responsible for our actions.

Now if I were an uncaused causer, my actions would be insulated from any external influence. Suppose a man is given a life sentence for killing a guard while robbing a bank; such a punishment could not possibly deter me from doing the same. Deterrence is indeed impossible if I am the uncaused causer of my actions.

However, this cannot possibly be the assumption behind our law, because it cannot explain the law's focus on intentionality. According to the folk psychology that Greene and Cohen attack so relentlessly, it is characteristically human that we deliberately choose appropriate means to reach desired ends. This capacity is what enables us to become moral agents, the targets of praise or blame. Hence the maxim that an act does not make its agent guilty unless the mind is also guilty (actus non facit reum nisi mens sit rea).

As Mises repeatedly pointed out, the very concept of human action, of means and ends, presupposes the category of causality. Responsibility does not imply that we are unmoved movers in the Aristotelian sense, standing outside the chain of cause and effect, but that we, as agents of intentional action, occupy a peculiar position in a long chain of causes stretching back to the Big Bang. We are agents capable of controlling our actions, not reflex arcs translating stimulus into response.

The law, then, punishes crimes that are the result of deliberation and will, and is lenient towards accidents or agents incapable of rational action (e.g. children). This selectivity can only be based on the idea of deterrence, for it would be absurd to tell someone not to murder if he could not help it, just as it is absurd to tell someone to stop beating his heart.

The law punishes, instead, crimes that result from actions we can control, and can thereby prevent such actions in the future.

If the law were in fact based on the assumption of uncaused causers, it would have no reason to distinguish between deliberate murder and accidental killing; strict liability would apply to all crimes. It is of course beyond the scope of this article to discuss the history of the law, though it should be pointed out that the concept of personal responsibility for violent crimes is in fact a relatively recent development. Strict liability, extending to relatives and lords, is common in many primitive societies (I refer the interested reader to Pollock and Maitland's masterpiece or Zane's book on the history of law).

Law and liberty
Free will, in the sense discussed here, means that humans control what they do, and neuroscience will not change this fact. Science fiction of the variety favored by Greene and Cohen can always imagine a day when it does; in this respect, it cannot be distinguished from any teleological religion.

Indeed, determinism of this type, which claims that human beings do not choose, do not act, but are always acted upon, has been revived innumerable times in history, in various guises. It is a historical fact that primitive savages, religious fanatics, and believers in inexorable laws of history have always advocated some version of it.

In the development of the law, too, the concept of personal responsibility evolved partly because some human beings, after struggling free from superstition and the "passivity stance," began to understand the nature of their own actions and their effects on the world. Enlightened individualism, we should remember, was a late development, and remains unpopular in many parts of the world today. The intuitive folk psychology of human action we possess is the product of such enlightenment.

On the other hand, in attacking the concept of free will and personal responsibility, Greene and Cohen merely revive the cult of irrational thought that has long prevailed in human societies. It should not surprise us, therefore, to find in their article the following sentence: "rationality is just a presumed correlate of what most people really care about." Indeed, what is left of rationality when you are not responsible for your actions?

In place of reason, these authors substitute aggregate welfare. The law, reformed in light of neuroscientific knowledge, should, according to them, aim to promote future welfare rather than punish those responsible for their crimes. In an earlier article I discussed these authors' attempt to abolish universal moral norms, using brain imaging data, in the name of aggregate welfare. We should at least applaud their consistency. Of course, a universal moral norm such as the Categorical Imperative would have no meaning if there were no free will. Why tell someone not to steal if he could not help it, if his brain were to blame?

All things considered, then, their arguments boil down to this: (1) the criminal is not responsible for his crime, because everything that determines who he is determines who he is; (2) instead of punishing criminals as they deserve, the law should maximize future welfare.

Ethically, it seems preposterous to argue against the total welfare of mankind, just as, logically, it is impossible to refute a tautology. The take-home lesson here is that you should always watch out for someone who argues for something that cannot possibly be contradicted, for there is often a hidden agenda attached to the can't-possibly-be-wrong package, one that triggers the self-destruction of the whole thing once uncovered.

As I pointed out in the earlier article, their notion of aggregate welfare is vacuous, made up for the sake of convenience. We cannot possibly calculate what this welfare is, though we can indirectly observe, by studying history, the long-term effects of certain rules and practices on the groups that follow them. In this latter, somewhat more concrete sense of welfare, our current legal framework appears to have been one of the chief promoters of human welfare, judging by the remarkable spread of the relevant ideas from the West, against often strong resistance from local customs and primitive practices.

Finally, throughout their essay, Greene and Cohen emphasize that the "libertarian" conception of free will which they attack has no connection to the political philosophy of the same name. This disclaimer, however, betrays ignorance of that philosophy: free will and responsibility provide the necessary foundation of libertarian political philosophy. Laws protect liberty, and liberty entails responsibility.

Their arguments for determinism are yet another attempt to abolish laws as abstract rules that apply to everyone equally. Instead, the State and its "scientific experts" will decide, case by case, whether a person is likely to be harmful to society, in order to maximize future welfare (i.e. to do whatever those in power wish to do). The law itself becomes meaningless. Instead of consisting of general rules that protect individual liberty, the law, in the hands of Greene and Cohen and in the name of neuroscience, will be used as a tool for state intervention and arbitrary judgment, to destroy liberty.

Lucretius is a neurobiologist living in Maryland. He will read the blog and answer comments there. Read his first article: Does Neuroscience Refute Ethics?


1. Greene, J. & Cohen, J. For the law, neuroscience changes nothing and everything. Philos Trans R Soc Lond B Biol Sci 359, 1775-85 (2004).

2. von Mises, L. Theory and History (Mises Institute, 1957).

3. Wegner, D. M. Précis of The Illusion of Conscious Will. Behav Brain Sci 27, 649-59; discussion 659-92 (2004).

4. Heberlein, A. S. & Adolphs, R. Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. Proc Natl Acad Sci U S A 101, 7487-91 (2004).