Saturday, December 31, 2005

Blog introspections

At the end of this month and year, our stats are indeed looking promising. We only started gathering statistics on December 11, and we have already had a good number of visits: close to 600 in all in about 20 days -- and without any deliberate advertising anywhere (well, except a note at my group, Mind & Brain at yahoogroups).

There are two primary motivations behind this blog. The first is that it serves as an archive for our research on neuroethics, on the road to writing a book about it. The book will be written in Danish, but it is of course our hope that it can be translated into other languages, e.g. English. So in terms of the book project, we needed somewhere to archive our items. In addition, we thought a blog would be a good place for freer thoughts and views in the preparation of the book.

But why should we keep this archive just for ourselves? Why not share it with others? I think the answer is obvious - brain science produces a whole new range of findings that go straight to the bone of what it actually means to be human. In order to understand these new findings and their implications, there must be a bridge between researchers and the public (and the media). Martin and I are both cognitive neuroscientists, with our hands in the mud, so to speak. Why let others think about the consequences of what we are doing? Why not discuss it ourselves -- share our thoughts, concerns and visions? In addition, since there is an abundance of brain-hype sites and news, we hope to bring up-to-date, balanced news and views with proper scientific rigour!

So while much of what we write is to our own amusement and preparation, we hope that you will be amused with us. And please drop in for comments and discussions. We'd love to hear your opinions.

Cognitive literary studies

Every year the American Modern Language Association stages a conference where English professors and students from all over the US gather to discuss the state of literary criticism and theory. Think Society for Neuroscience, if you need a comparison.

This year, as Nick Gillespie tells us in this report, the talk focused on cognitive approaches to the study of literature. The old paradigm - postmodernism or whatever you would like to call it - seems to be in decline. Will a cognitive approach be the next big thing?

Well, since literary texts are composed of strings of words, and since it takes neurocognitive processing to make sense of such strings of words, we should certainly hope so! Still, personally I wouldn't hold my breath. It is heartening, though, to see that a few valiant cognitivists are trying to break the anti-biological spell of modern lit-crit.

Friday, December 30, 2005

Anterior cingulate cortex


The structure of today is the anterior cingulate cortex (ACC). Two new studies jointly illuminate the role played by this intriguing part of the brain. In many imaging studies the ACC lights up in connection with cognitive processing, especially when something goes wrong. Some researchers have speculated that the ACC may work as a cognitive error-detection device. Other studies implicate the ACC in emotional processing (and, traditionally, the ACC has been grouped anatomically as part of the limbic system). So, what is it, cognition or emotion?

In a forthcoming paper in Brain and Cognition, Ray Dolan, Hugo Critchley and their colleagues at FIL in London test two patients with damage to the medial part of the prefrontal cortex (including ACC) on a number of cognitive tasks. They conclude (citing the abstract) that

both patients showed intact intellectual, memory, and language abilities. No clear-cut abnormalities were noted in visuoperceptual functions. Speed of information processing was mildly reduced only in Patient 2 (bilateral ACC lesion). The patients demonstrated weak or impaired performance only on selective executive function tests. Performance on anterior attention tasks was satisfactory.

This suggests, they say, that

our findings are inconsistent with anterior attention theories of ACC function based on neuroimaging findings. We propose that the data may imply that the ACC does not have a central role in cognition. We speculate that our findings may be compatible with the view that the ACC integrates cognitive processing with autonomic functioning to guide behaviour.

As it so happens, another new study by scientists at Stanford and Harvard, reported two weeks ago in PNAS, backs up this conclusion. In this study, subjects used real-time fMRI to modulate feelings of pain by learning to control activity in the rostral part of the ACC. This finding is really quite astonishing, I think! As the authors remark in the abstract of their paper:

When subjects deliberately induced increases or decreases in rACC fMRI activation, there was a corresponding change in the perception of pain caused by an applied noxious thermal stimulus. Control experiments demonstrated that this effect was not observed after similar training conducted without rtfMRI information, or using rtfMRI information derived from a different brain region, or sham rtfMRI information derived previously from a different subject. Chronic pain patients were also trained to control activation in rACC and reported decreases in the ongoing level of chronic pain after training. These findings show that individuals can gain voluntary control over activation in a specific brain region given appropriate training, that voluntary control over activation in rACC leads to control over pain perception, and that these effects were powerful enough to impact severe, chronic clinical pain.

The overall conclusion, then, appears to be that the ACC does have something to do with the control of behaviour, but mostly with emotional behaviour. Why, then, does it pop up in so many imaging experiments on cognition? One possible answer could be that the ACC is the neurocognitive seat for integrating emotional responses to some activity or perception with the sequencing of cognitive behaviour. Further research will hopefully tell.


References

Baird, A. et al. (in press): Cognitive functioning after medial frontal lobe damage including the anterior cingulate cortex: A preliminary investigation. To appear in Brain and Cognition.

deCharms, R.C. et al. (2005): Control over brain activation and pain learned by using real-time functional MRI. PNAS 102: 18626-18631.

Thursday, December 29, 2005

Culture I: Chimps have it

The most fascinating scientific result of 2005 – to my mind, at least! – was the sequencing of the chimpanzee genome, reported in the September 1 issue of Nature. (Remember also to read the many accompanying articles on chimp research in the same issue.) Although not the first genome to be sequenced, the chimp genome holds a special importance for research on human cognition and behaviour. The reason for this is the well-known fact that chimpanzees are our closest primate relatives. Some 5 to 7 million years ago the chimp and human lineages shared a common ancestor. A comparison of the chimp genome with the human genome will therefore provide invaluable insights into the evolutionary process leading to the creation of Homo sapiens. Some interesting finds have already been made. As Elizabeth Culotta and Elisabeth Pennisi write in the Science "breakthrough of the year" article that Thomas mentions below

we differ by only about 1% in the nucleotide bases that can be aligned between our two species, and the average protein differs by less than two amino acids. But a surprisingly large chunk of noncoding material is either inserted or deleted in the chimp as compared to the human, bringing the total difference in DNA between our two species to about 4%.

This circumstance feeds a growing suspicion that humans differ from chimps not so much because new genes are expressed as because the old genes we share with our chimpanzee brothers and sisters are expressed in a different manner. Various techniques for comparing primate brains (cytoarchitectonics, stereology, imaging) tell much the same story. The human brain is not essentially different from the chimp brain: anatomical areas are arranged in more or less the same manner, it is composed of basically the same cells, and many of the functions it performs are, grosso modo, similar to the functions performed by the chimpanzee brain. There are differences, to be sure. The gene FOXP2, for instance, has mutated twice since the human lineage separated from the chimp lineage. The expression of FOXP2, a transcription factor, is compromised in an English family with a severe speech impediment. Thus, the new variant of FOXP2 may have played a role in bringing about human language. Also, Katerina Semendeferi has shown that Brodmann area 10, at the frontal pole of the brain, is larger in humans relative to the rest of the brain. Its supragranular layers also appear to form denser connections with other association areas in the human brain. Yet, it is impossible to point to this or that behaviour on which humans differ from chimps and say that it is the product of some new patch of cell tissue present only in the human brain. We seem to come equipped with a "chimpanzee" brain that has been modified in a number of subtle ways. Understanding how constitutes one of the great challenges of contemporary science.
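To make the quoted percentages a bit more concrete, here is a minimal sketch of how such figures can be computed from a pairwise alignment. The sequences and the `divergence` helper are my own made-up toy example, not real genome data; the point is only that substitution divergence is counted over aligned bases, while the larger total figure also counts inserted or deleted material.

```python
# Toy illustration only: made-up sequences, not real genome data.
# Substitution divergence is counted over aligned (non-gap) positions,
# while the total divergence also counts indels ('-' marks a gap) --
# mirroring how the ~1% and ~4% figures quoted from Science differ.

def divergence(seq_a, seq_b):
    """Return (substitution rate, total rate) for two gapped, aligned sequences."""
    pairs = list(zip(seq_a, seq_b))
    aligned = [(a, b) for a, b in pairs if a != '-' and b != '-']
    substitutions = sum(a != b for a, b in aligned)
    indels = sum((a == '-') != (b == '-') for a, b in pairs)
    return substitutions / len(aligned), (substitutions + indels) / len(pairs)

# One substitution among 9 aligned bases (~11%), plus one indel,
# so the total difference over all 10 positions is 20%.
sub_rate, total_rate = divergence("ACGT-ACGTA", "ACGTTACGTC")
```

The same asymmetry, scaled up to genome size, is how a ~1% substitution difference can coexist with a ~4% total difference.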

On the face of it, chimp behaviour appears to be very different from human behaviour. There are no chimpanzee artists or scientists, for example. No skyscrapers or bridges have been built by chimpanzee engineers and architects. No chimpanzee is blogging from the rainforest of Tanzania about what Jane Goodall is up to these days! For the past 50 years researchers have assembled a long list of putative cognitive abilities that are unique to humans, and hence contribute to making us different from other primates. However, years of careful observation, and numerous experiments, have, item for item, dismantled this list. Sure, only humans speak, but apes clearly have some semantic capability and are able to refer symbolically to mental concepts. No chimp will put more than two entities together to form a tool, but they do use sticks to fish for termites, or stones to crack open nuts. Until very recently, most primatologists concurred that only humans are able to read the minds of conspecifics - that is, that we are the only species to be imbued with a Theory of Mind. It turns out that this is not true. Chimps have ToM as well! Again, tool use, language and mentalizing are all clearly different in humans, but they can't be said to be altogether absent from chimps, if you look carefully at the underlying neurocognitive mechanisms. The lesson to be gained from these behavioural studies, once more, seems to be that humans have inherited a more basic capacity from our common chimp-human ancestor and then have run with it.

A case in point is culture. Culture is very much something we associate with humans, and something that from time to time has appeared on the "unique capacities" list. The "human sciences", to a large degree, simply define their object of inquiry as culture. (In German the human sciences are often referred to as Kulturwissenschaft; in the US much work goes under the name "cultural studies".) In 1999, however, 9 of the world's leading primatologists published a report in Nature documenting that chimps in 7 African communities have developed cultural differences in tool use and social behaviour. The authors define a cultural tradition as a behaviour pattern that is customary or habitual in one community but absent in others, and which cannot be explained by ecological differences. In a recent review of this research, also published in Nature, one of the authors, Andrew Whiten, notes that the number of cultural traditions observed in chimpanzee communities greatly exceeds the number found in other species. In fact, other mammals, fish and birds have commonly been associated with just one tradition each. 19 have been identified in orangutans. But a repertoire of no fewer than 40 behavioural variants has been observed in chimp communities, so something appears to have changed over the course of primate evolution. The critical question, of course, being: what?

Do such behavioural traditions really amount to the thing we call human culture? Well, we may point to some obvious differences: there are a lot more than 40 traditions around in human communities; human culture is cumulative (i.e., we build on, and sometimes improve upon, other people's behaviour); and many thinkers would argue that human culture is just as much about values as about behaviour. Still, the formation and transmission of traditions are without doubt part and parcel of human culture as well. Perhaps the real benefit of comparing chimp and human culture will be an improved notion of what exactly culture is! (As this attempt by Richard Byrne and colleagues amply shows.)

Now, what could possibly be the neurocognitive mechanisms underlying the primate ability to form and transmit cultural traditions? Stay tuned for part II!

To be continued…


Byrne, R. et al. (2005): Understanding culture across species. Trends in Cognitive Sciences 8: 341-346.

Chimpanzee Sequencing and Analysis Consortium (2005): Initial sequence of the chimpanzee genome and comparison with the human genome. Nature 437: 69-87.

Whiten, A. et al. (1999): Cultures in chimpanzees. Nature 399: 682-685.

Whiten, A. (2005): The second inheritance system of chimpanzees and humans. Nature 437: 52-55.


Tononi's conscious mind

An article on Giulio Tononi's work on the brain basis of consciousness has just been published in Science & Consciousness Review. Henri Montandon reviews some of the recent publications by Tononi. Excerpt from the article:

"Tononi’s writings are noteworthy for his grounding in phenomenology, and his lucid style of presentation. He has noticed three aspects of conscious experience:

  1. given any definition of “conscious state”, the brain produces an infinity of them;
  2. each conscious state is prime, rather in the sense of a prime number; it cannot be deconvoluted into lesser states. Tononi terms this characteristic the “integration” of a state;
  3. conscious experience unfolds in well defined intervals, about 100 to 200 milliseconds to develop a fully formed sensory experience, about 2 to 3 seconds for a single conscious moment.
It is these three observations which Tononi seeks to understand in his information integration theory of consciousness. He discusses two hypotheses:
  1. consciousness corresponds to the capacity of a system to integrate information
  2. the quality of consciousness is determined by the informational elements of a complex, which are specified by the values of effective information among them."
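For readers who want a toy handle on the first hypothesis, here is a minimal sketch using mutual information between two parts of a system as a crude stand-in for "integrated information". This is my own illustration, not Tononi's actual measure (his effective information involves perturbing the system, and the full theory is considerably richer): two perfectly coupled binary units share one bit, while two independent units share none.

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as {(x, y): probability}.
    Used here as a crude stand-in for the 'effective information' between
    two parts of a system."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two perfectly coupled binary units: 1 bit of 'integration'.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent binary units: zero 'integration'.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
```

On this crude picture, a system whose parts carry information about one another cannot be "deconvoluted into lesser states" without loss, which is the intuition behind calling a conscious state integrated.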

Wednesday, December 28, 2005

Seven seconds to dementia

Yes, every seventh second a new case of dementia develops somewhere in the world. Medscape.com reports this from a just-published study in The Lancet. Such a number underscores the necessity of finding viable ways to fight degenerative brain disorders. Such an effort must work on many levels: devising new and improved methods for detecting dementia as early as possible; slowing the progression of the disease; finding treatments that halt or even repair neural damage; improving the healthcare of patients suffering from dementing disorders. Approaches are numerous, but we still know little about the mechanisms behind each type of dementia - and there are more than 100 known causes!

With the mean age increasing across the world, and with the proportion of elderly people growing in the coming years, much effort should be put into research on neurodegenerative disorders.


From medscape.com
Globally, New Dementia Case Arises Every 7 Seconds

NEW YORK (Reuters Health) Dec 16 - Findings from a review of published studies suggest that every 7 seconds a new case of dementia occurs somewhere in the world.

"We believe that the detailed estimates in this paper constitute the best currently available basis for policymaking, planning, and allocation of health and welfare resources," lead author Dr. Cleusa P. Ferri, from King's College London, and colleagues note.

The researchers used the Delphi consensus method to estimate the global prevalence of dementia. With this method, quantitative estimates are derived through the qualitative assessment of evidence, according to the report in the December 17/24/31st issue of The Lancet. In the present study, 12 international experts used data from published studies to estimate the prevalence of dementia in every World Health Organization world region.

Roughly 24.3 million people currently have dementia and 4.6 million new cases arise every year, the authors state.

A doubling of the prevalence will occur every 20 years, so that by 2040, about 81 million people will have dementia. However, this increase is not uniform; in certain countries, such as China and India, the prevalence will more than double in the next few decades.

The report indicates that the majority of people with dementia, 60%, live in developing countries. By 2040, this percentage will have increased to 71%.

"Primary prevention (of dementia) should focus on targets suggested by current evidence; risk factors for vascular disease, including hypertension, smoking, type 2 diabetes, and hyperlipidemia," the authors state. "The epidemic of smoking in developing countries and the high rising prevalence of type 2 diabetes in Asia are particular causes of concern."

Lancet 2005;366:2112-2117.
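The headline numbers in the report hang together arithmetically. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Sanity check of the figures quoted in the Medscape story above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

new_cases_per_year = 4.6e6                # "4.6 million new cases arise every year"
seconds_per_case = SECONDS_PER_YEAR / new_cases_per_year
# ~6.9 seconds between new cases, i.e. "every 7 seconds"

prevalence_2005 = 24.3e6                  # "roughly 24.3 million people currently"
doublings = (2040 - 2005) / 20            # "a doubling ... every 20 years"
prevalence_2040 = prevalence_2005 * 2 ** doublings
# ~82 million, close to the stated "about 81 million" by 2040
```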

Steps to dethronement III – The unconscious agent

Normally, we humans think of ourselves as rational beings. A decision is made by me – the Agent – and I know perfectly well what I want and how to get it. Enter cognitive neuroscience. From a multitude of studies, there is a consensus today that many decisions are not made through overt, conscious processing. A lot of work goes on behind the scenes, shaping motivation and choices even in complex decision making.

Just think of the studies by Tanya Chartrand and her colleagues, which I wrote about in this article in the early days of Science & Consciousness Review. By presenting motivation-relevant words ('success', 'failure', etc.) to subjects subliminally (without their conscious detection), the researchers were able to manipulate how subjects reacted when given an easy or a hard/impossible task. Without the subjects' knowledge, Chartrand was able to produce emotional states in them, e.g. a bad or good mood, by manipulating the motivational tone of the presented words. Best of all, her subjects were not able to determine why they felt as they did.

Another well-supported idea about unconscious processes stems from research into subliminal perception and priming. From Phil Merikle's article:

“Subliminal perception occurs whenever stimuli presented below the threshold or limen for awareness are found to influence thoughts, feelings, or actions. The term subliminal perception was originally used to describe situations in which weak stimuli were perceived without awareness. In recent years, the term has been applied more generally to describe any situation in which unnoticed stimuli are perceived.”

There is a wealth of empirical evidence for subliminal perception, and it all points to the fact that our behaviour is strongly influenced by unconscious processes. We are not the conscious, autonomous agents we think we are. At least not in the sense we usually think.

But if not all our choices are made on a conscious and “rational” level, why do we have the experience of being conscious agents of our actions? In a forthcoming interview I’m doing with Professor Shaun Gallagher at the Department of Philosophy, University of Central Florida, the terms agency and ownership are explored. This interview will be published in Science & Consciousness Review very soon. Here is an excerpt of the interview:

“Phenomenologically intentions in almost all cases come already clothed in agency – the ‘who’ question hardly ever comes up at the level of experience. The neural systems have already decided the issue – one way or the other – even if I'm wrong about who is acting, I am still attributing agency.

The mistake is to think that there is a necessary isomorphism between the phenomenological level and the neuronal level. But even if the neuronal processes can be defined as involving three steps, this does not mean that those three steps need to show up in consciousness. The wonderful thing about the "Who system" is that it's neurological – and the results of its activation are hardly ever experientially manifested as "making a decision about who did the action." Rather, the results of its activation are experientially manifested as "X's action" where X is either you or me.

Of course experiments and pathologies may generate or reveal ‘who’ problems, but in normal ecological behavior it is generally clear whose intention/action it is, and as a result, the identification question – "Someone is intending to pick up the apple, is it me?" – just doesn't come up.”

Stay tuned for the full story.

Tuesday, December 27, 2005

Dennett on Darwinism and ID

The American philosopher Daniel Dennett - an outspoken atheist - is interviewed by German newspaper Der Spiegel (in English, though!) on the whole ID affair. Find it here.

Also, look out for his forthcoming book, Breaking the Spell, to be published by Viking in February, which takes on religion. We will probably return to that book here on the blog when the time comes.

Friday, December 23, 2005

Social cognitive neuroscience

I didn’t mention it in my post below on Fiddick, but his is only one of eleven papers composing a theme on social cognitive neuroscience in the recent issue of NeuroImage. This exciting new field has only been around for 5-10 years, but it will surely be one of the major research areas to watch in the coming years. More and more evidence has amassed suggesting that Homo sapiens is an extraordinarily social species. Yet, not much is known about how our brains give rise to this unique cognitive competence. A better understanding of the human social brain may also have practical consequences. For instance, perhaps one day we will come to understand why some people strap on explosives and blow up themselves and others for the sake of some political cause.

I highly recommend a visit to the Emotion and Social Behavior Lab at Caltech, led by Ralph Adolphs. Adolphs is a world leader in social cognitive neuroscience, and has written several fine reviews on its state of the art. These two are especially good:

Adolphs, R (2003). Cognitive Neuroscience of Human Social Behaviour. Nature Reviews Neuroscience, 4 (3), 165-178. (pdf)

Heberlein, AS, Adolphs, R. Functional anatomy of human social cognition. Book chapter, In press. (pdf)


The fact that we are highly social animals has often been used as an argument for cultural relativism. The argument, prototypically, runs like this. The social context (our “culture”) determines how we think. Therefore, the nature of the brain’s neurocognitive mechanisms is irrelevant to an understanding of our “thinking”. Instead, we should analyse the social “facts” of the culture we are immersed in. As Emile Durkheim famously stated in The Rules of Sociological Method, social facts

"consist of manners of acting, thinking and feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control over him.
Consequently, since they consist of representations and actions, they cannot be confused with organic phenomena, nor with psychical phenomena, which have no existence save in and through the individual consciousness. Thus they constitute a new species and to them must be exclusively assigned the term social. It is appropriate, since it is clear that, not having the individual as their substratum, they can have none other than society, either political society in its entirety or one of the partial groups that it includes - religious denominations, political and literary schools, occupational corporations, etc. Moreover, it is for such as these alone that the term is fitting, for the word 'social' has the sole meaning of designating those phenomena which fall into none of the categories of facts already constituted and labelled. They are consequently the proper field of sociology." (Bold added.)

I cannot begin to count the number of times I have encountered this argument. Yet, how could social “facts” possibly inform, or even determine, how we think (“exercise control over us” in Durkheim's words), if they didn’t “interact” in some way with the brain’s neuronal processes? If my thinking, for instance, about gender roles has been influenced by my looking at scantily clothed women in advertising and music videos (a very popular sentiment among some feminists), my act of looking must somehow be able to form concepts of gender roles in my brain. So, clearly some brain mechanisms are involved in this external, social “exercise of control”. (Also, remember that some brain disorders, such as autism, radically impair the patient's ability to absorb social "facts".)

Many of the papers in the NeuroImage special issue address this problem of why the human brain is so susceptible to social influence. Social psychology refers to this question as the “power of the situation over behaviour”. UCLA psychologist Matthew Lieberman, the editor of the special issue, goes through a lot of what is known about how situations influence brain activity, including the peculiar phenomenon of priming. In the conclusion to his introductory remarks he writes:

Much of social psychology is fundamentally paradoxical, at least to the western mind. We tend to believe that we are the captains of our destiny, and yet, time and time again, social psychology has shown that situational factors exert strong pressures on our behavior and often does so without our knowledge. The implications of these and other findings for social cognitive neuroscience are twofold. First, although social psychologists have established these various principles, understanding why humans are guided by these principles and when these principles apply remain largely unknown. If social cognitive neuroscience can help to answer these questions it would be a major contribution to our understanding of social cognition. Second, the principles of social psychology apply not only to the subjects in our investigations, but to us, the researchers as well. In the absence of understanding these principles, we are likely to generate social cognitive hypotheses that are unnecessarily naïve. If we are as blind to the power of situational forces and our own ability to construct social perceptions that do not feel constructed, we will be unable to generate experimental paradigms that take these factors into account. Ultimately, a successful social cognitive neuroscience should thoroughly integrate the methods of social cognition and cognitive neuroscience, and also rely in equal parts on the conceptual lexica of these two parent disciplines as well.

After having read Adolphs's two papers, you should go peruse the NeuroImage papers as well.

Evolution takes gold

Just out today, the well-esteemed journal Science has judged the science of evolution to be the top scientific breakthrough of 2005! The ideas of evolution, of course, trace back to Darwin, and even further. So why is 2005 such an interesting year?

As this story from BBC reports, there have been many vital scientific publications that highlight and expand our understanding of evolutionary processes. This includes the publication of the full chimp genome (from a chimp called Clint). As Science writes:

The genome data confirm our close kinship with chimps: We differ by only about 1% in the nucleotide bases that can be aligned between our two species, and the average protein differs by less than two amino acids. But a surprisingly large chunk of noncoding material is either inserted or deleted in the chimp as compared to the human, bringing the total difference in DNA between our two species to about 4%.

Second, 2005 has seen some important publications on the emergence of new species, also called speciation. Science gives many good examples, including this:

...Birds called European blackcaps sharing breeding grounds in southern Germany and Austria are going their own ways--literally and figuratively. Sightings over the decades have shown that ever more of these warblers migrate to northerly grounds in the winter rather than heading south. Isotopic data revealed that northerly migrants reach the common breeding ground earlier and mate with one another before southerly migrants arrive. This difference in timing may one day drive the two populations to become two species.

Finally, Science mentions something closer to home for BrainEthics - the improvement of the human condition through natural science. With an advanced understanding of evolution, and comparative studies between humans and non-human species, we may better understand diseases and the treatments that are first tried in animal models. This could lead to better models of cognitive processes, of how brain disease initiates and progresses, and of how medical treatment can be optimised. Advances such as the sequencing of the full chimp genome also allow for sorting out the unique traits of the human genome, which in turn may shed light on what makes humans so distinct from other species. In addition, potential treatments using animal models should now take into account the specific genetic makeup of the animal being used. While it has been known for decades that different species -- even strains within species (e.g. rats) -- may produce different results, knowing the full genome offers a whole new way to quantify these relations.

In the end, however, one cannot avoid speculating about the political importance of stressing evolution today. 2005 has indeed been a year filled with increased interest in and discussion of evolution, especially in the light of Intelligent Design. So, while Science does have the scientific reasons in place to support a first place for evolution, I won't say that the signal value does any harm, either.

Thursday, December 22, 2005

Psychological debate. Neuroscience to the rescue!

Some weeks ago I mentioned the lack of interest in the brain as a serious problem for Evolutionary Psychology. Now, this resistance to neuroscience may be a general misère within psychology – already back in the 1980s Martha Farah attacked psychologists for fighting over mental imagery without introducing any neurobiological evidence into the debate. Perhaps psychologists are still functionalists at heart, feeling that neuroscience has no bearing on psychological models. However, when you argue that a psychological function is an evolutionary adaptation, as evolutionary psychology does, it becomes rather problematic to leave the brain out of your equation, since evolution doesn't actually operate on functions, but on the genome. And the genome, in turn, doesn't build functions but proteins, which, by forming molecules, in the end build brains and, again, not functions. So, if you are interested in psychological functions as evolutionary adaptations, you simply have to go through brain processes.

Luckily, some evolutionary psychologists are starting to do exactly that. In the recent December issue of NeuroImage, Laurence Fiddick presents the results of an fMRI study investigating whether some deontic conditional rules are easier to reason about, because they are “social” in nature, than other “non-social” deontic rules of a similar logical form. Leda Cosmides, back in the 1980s, proposed that social situations are computed by certain special social cognitive modules, not by a general-purpose logical reasoning machine. She went on to investigate this hypothesis using Wason’s selection task, albeit only collecting behavioural data. Over the years, several authors have criticised Cosmides’s data for not really demonstrating a content effect, generating a somewhat heated discussion on the topic (see especially David Buller’s book Adapting Minds, MIT 2005). Yet, until Fiddick’s experiment, nobody had tried to see whether Cosmides’s putative “social” deontic rules actually activate a particular set of neural structures in contrast to other forms of deontic rules. Here are the results, quoting from the abstract of Fiddick's paper:

Although the rules and the demands of the task were matched in terms of their logical structure, reasoning about social contracts and precautions activated a different constellation of neurological structures. The regions differentially activated by social contracts included dorsomedial PFC (BA 6/8), bilateral ventrolateral PFC (BA 47), the left angular gyrus (BA 39), and left orbitofrontal cortex (BA 10). The regions differentially activated by precautions included bilateral insula, the left lentiform nucleus, posterior cingulate (BA 29/31), anterior cingulate (BA 24) and right postcentral gyrus (BA 3). Collectively, reasoning about prescriptive rules activated the dorsomedial PFC (BA 6/8).

Do these diffuse networks constitute, on the one hand, a social module and, on the other, a "precaution" module? Hardly. But, as Fiddick (prudently) states, his results do "reinforce the view that human reasoning is not a unified phenomenon, but is content-sensitive." Human reasoning is thus content-sensitive, but not modular. This is surely progress. Thanks, neuroscience!
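For readers who haven't met Wason's selection task, here is a toy sketch of my own (not code from any of the papers discussed) showing why the "social" and "non-social" versions are matched in logical form: in both, the rule is "if P then Q", and the logically correct answer is always to turn the card showing P and the card showing not-Q.

```python
# Each card shows one side; the rule under test is "if P then Q".
# A card can falsify the rule only if it could turn out to be
# P-and-not-Q, so you must turn the card showing P (its hidden side
# might be not-Q) and the card showing not-Q (its hidden side might be P).

def cards_to_turn(visible_sides):
    """visible_sides: list of (label, kind) pairs, where kind is one of
    'P', 'not-P', 'Q', 'not-Q'. Returns the labels that must be turned."""
    return [label for label, kind in visible_sides if kind in ("P", "not-Q")]

# Abstract version: "if a card has a vowel, it has an even number"
abstract = [("E", "P"), ("K", "not-P"), ("4", "Q"), ("7", "not-Q")]

# Social-contract version: "if you drink beer, you must be over 18"
social = [("beer", "P"), ("coke", "not-P"), ("25", "Q"), ("16", "not-Q")]

print(cards_to_turn(abstract))  # ['E', '7']
print(cards_to_turn(social))    # ['beer', '16']
```

The two versions are logically identical, yet people typically solve the social-contract version far more often than the abstract one – that behavioural asymmetry is the "content effect" the whole debate is about.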

eSkeptic on Jones

As a follow-up to yesterday's story about the ruling against teaching ID theory in Pennsylvania, eSkeptic has published an in-depth report from the courtroom, as well as from the milieu of the trial, also referred to as Kitzmiller et al. v. Dover Area School District (see also this google search). From the description of the trial:

"Kitzmiller provides an excellent case study of evolution in action; ironically, in this case how the language of creationists has adapted to changing cultural environments. The defense argued that Intelligent Design is an entirely new species unrelated to creation science, and the plaintiffs expertly demonstrated both the clear ancestral relationship between creationism and ID and the selective pressure of higher court decisions that caused the speciation. With that phylogenetic relationship clearly established in the trial, the judge evidently decided that creationism had not mutated enough to survive as the new species of Intelligent Design."

Wednesday, December 21, 2005

Genetic determination of brain responses

The trends in brain imaging are turning towards the study of genes in order to better understand individual differences. While much brain imaging to date has focused on similarities between people and on what differs between groups, new approaches are looking at what produces the individual differences seen in neuroimaging studies. Some of this variation can be explained by the influence of genes on how each individual's brain is built.

Some of the recent findings have been that a genetic polymorphism – a normal variation between people – in the serotonin system has an impact on how people's brains react to emotional pictures. In other words, if Kyle has type A of a serotonin gene and Paul has type B, Kyle's brain will react more strongly to emotional faces than Paul's. In this sense, it has been shown that even within a healthy sample of people, individual differences can be predicted on the basis of genetic makeup.

In a recently published study by Cohen et al., this same approach is used to examine individual differences in the dopamine system. Here, Cohen and colleagues find that both the genetic makeup of the dopaminergic brain system and the level of extraversion (a personality trait) determine individual differences in the brain's reaction to reward in a gambling game.

As such, these studies clearly demonstrate the importance of the genotype in neuroimaging studies. As I wrote in the previous post, evolution operates at the molecular level. Genes operate at the level of proteins in the brain. We do not understand this properly yet, but with the advent of imaging genetics, this issue is being put at the forefront of neuroscience.
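To make the logic of such an "imaging genetics" analysis concrete, here is a toy sketch with entirely made-up numbers – not data or code from Cohen et al. – of the kind of inter-subject variability question being asked: does carrying a given allele, or scoring higher on extraversion, predict a stronger reward response?

```python
# Toy illustration (fabricated numbers): a "reward activation" measure
# per subject, alongside genotype (A1 allele carrier: 1/0) and an
# extraversion score. The analysis asks whether these two variables
# predict inter-subject variability in activation.

subjects = [
    # (a1_carrier, extraversion, reward_activation)
    (1, 30, 0.9), (1, 25, 0.8), (1, 18, 0.6),
    (0, 28, 0.5), (0, 20, 0.3), (0, 15, 0.2),
]

def mean(xs):
    return sum(xs) / len(xs)

# Group comparison: do A1 carriers show stronger reward activation?
carriers = [act for a1, _, act in subjects if a1 == 1]
non_carriers = [act for a1, _, act in subjects if a1 == 0]
print(mean(carriers) > mean(non_carriers))  # True in this toy sample

# Continuous predictor: Pearson correlation between extraversion
# and activation across subjects.
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ext = [e for _, e, _ in subjects]
act = [a for _, _, a in subjects]
print(pearson(ext, act) > 0)  # positive in this toy sample
```

In a real study the activation values come from fMRI contrast maps and the statistics are done within a proper regression framework, but the basic idea – genotype and personality as predictors of brain response – is the same.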

Individual differences in extraversion and dopamine genetics predict neural reward responses

Cohen et al in Cognitive Brain Research

Abstract
Psychologists have linked the personality trait extraversion both to differences in reward sensitivity and to dopamine functioning, but little is known about how these differences are reflected in the functioning of the brain's dopaminergic neural reward system. Here, we show that individual differences in extraversion and the presence of the A1 allele on the dopamine D2 receptor gene predict activation magnitudes in the brain's reward system during a gambling task. In two functional MRI experiments, participants probabilistically received rewards either immediately following a behavioral response (Study 1) or after a 7.5 s anticipation period (Study 2). Although group activation maps revealed anticipation- and reward-related activations in the reward system, individual differences in extraversion and the presence of the D2 Taq1A allele predicted a significant amount of inter-subject variability in the magnitudes of reward-related, but not anticipation-related, activations. These results demonstrate a link between stable differences in personality, genetics, and brain functioning.

ScienceDirect

Imaging genomics videos

When planning my trip to the upcoming "Imaging Genetics" (also called "imaging genomics") meeting in Irvine, January 16–17, I discovered that they offer videos AND PowerPoint slides (as PDF files). This is an invaluable resource that everyone interested in psychology, evolution and the human mind should see and read.

Evolutionary psychologists, behold: evolution works at the molecular/protein/brain level, not directly on phenotype or behaviour. Well, eventually, yes, behaviour is the outcome. But in order to understand what is really going on, we need to understand the molecular mechanisms at play. Recent books and articles on evolution, evolutionary psychology and modularity don't even discuss what's going on at the brain level. So we are still far from understanding the mechanisms of evolution in psychology. The Irvine video archive should be a real eye-opener for many. Start with Weinberger's talk; it gives a good overview.

No ID theory in Penn classes

Here is a definite blow to the ID proponents: Judge John E. Jones in Pennsylvania says no to teaching ID theory in classes. New Scientist has a good story to tell. And please also read the words by Judge John E. Jones himself!

More on this topic (yes, go ahead and read it!)

Wednesday, December 14, 2005

New book: Neuroethics - Defining the issues in theory, practice, and policy

While browsing through the Stanford site, I noticed that Illes has edited a book on neuroethics. Why, that's just what we need! I'm going to get hold of OUP and have them send me a review copy! Hopefully, I'll be able to post some thoughts about this book very soon. Gazzaethics parts 2-4 still await, and will be sent out soon.

Here is what it says on OUP.co.uk:

Description
  • Neuroethics is rapidly developing into a major field in its own right, as new neuroscientific techniques continue to cast light on human behaviour
  • This first volume on neuroethics brings together a stellar list of contributors to form a ground-breaking interdisciplinary introduction to the field
  • Includes forewords from Colin Blakemore and Arthur Caplan
Recent advances in the brain sciences have dramatically improved our understanding of brain function. As we find out more and more about what makes us tick, we must stop and consider the ethical implications of this new found knowledge. Will having a new biology of the brain through imaging make us less responsible for our behavior and lose our free will? Should certain brain scan studies be disallowed on the basis of moral grounds? Why is the media so interested in reporting results of brain imaging studies? What ethical lessons from the past can best inform the future of brain imaging?

These compelling questions and many more are tackled by a distinguished group of contributors to this volume on neuroethics. The wide range of disciplinary backgrounds that the authors represent, from neuroscience, bioethics and philosophy, to law, social and health care policy, education, religion and film, allow for profoundly insightful and provocative answers to these questions, and open up the door to a host of new ones. The contributions highlight the timeliness of modern neuroethics today, and assure the longevity and importance of neuroethics for generations to come.

Readership: Neuroscientists, bioethicists, cognitive psychologists, philosophers of law and mind

Contents

* Part I - Neuroscience, ethics, agency and the self
* 1 Patricia S. Churchland: Moral decision-making and the brain
* 2 Adina Roskies: A case study in neuroethics: the nature of moral judgment
* 3 Stephen J. Morse: Moral and legal responsibility and the new neuroscience
* 4 Thomas Buller: Brains, lies and psychological explanations
* 5 Laurie Zoloth: Being in the world: neuroscience and the ethical agent
* 6 Erik Parens: Creativity, gratitude and the enhancement debate:
* 7 Agnieszka Jaworska: Ethical dilemmas in neurodegenerative disease: respecting patients at the twilight of agency
* Part II - Neuroethics in practice
* 8 Ronald M. Green: From genome to brainome: charting the lessons learned
* 9 Franklin G. Miller & Joseph Fins: Protecting human subjects in brain research: a pragmatic perspective
* 10 Michael S. Gazzaniga: Facts, fictions and the future of neuroethics
* 11 Judy Illes, Eric Racine & Matthew P. Kirschen: A picture is worth 1000 words, but which 1000?
* 12 Turhan Canli: When genes and brains unite: ethical implications of genomic neuroimaging
* 13 Kenneth R. Foster: Engineering the brain
* 14 Megan S. Steven & Alvaro Pascual-Leone: Transcranial magnetic stimulation and the human brain: an ethical evaluation
* 15 Paul J. Ford & Jaimie Henderson: Functional neurosurgical intervention: neuroethics in the operating room
* 16 Robert Klitzman: Clinicians, patients and the brain
* Part III - Justice, social institutions and neuroethics
* 17 Henry Greely: The social effects of advances in neuroscience: legal problems, legal perspectives
* 19 Martha J. Farah, Kimberly G. Noble and Hallam Hurt: Poverty, privilege and brain development: empirical findings and ethical implications
* 20 Paul Root Wolpe: Religious responses to neuroscientific questions
* 21 Maren Grainger-Monsen & Kim Karetsky: The mind in the movies: a neuroethical analysis of the portrayal of the mind in popular media

Neuroethics: Spreading the word to the world

There is an article in Nature Reviews Neuroscience by Illes et al. on neuroethics. The article focuses especially on how neuroscience is presented to the public, e.g. through the media, and how this practice differs throughout the world. It is interesting to see how neuroethical discussions and the distribution of neuroscience knowledge differ, from Sweden to Venezuela. Not only are the practices diverse; the initiatives also differ greatly, both in terms of who initiates the process and what the motivation is.

These divergent initiatives should probably all be implemented around the world, and should be a source of inspiration for how to involve the broader public in discussing neuroscience.

Read also more about Judy Illes here, and remember to visit the Stanford Center for Biomedical Ethics.

International perspectives on engaging the public in neuroethics
Judy Illes, Colin Blakemore, Mats G. Hansson, Takao K. Hensch, Alan Leshner, Gladys Maestre, Pierre Magistretti, Rémi Quirion and Piergiorgio Strata

Published online: 1st December 2005
p977 | doi: 10.1038/nrn1808

Our fascination with the workings of the human mind is an age-old phenomenon. During the past few years, the advancement of neuroscience research has captured the public's imagination and brought about an increasing awareness of the associated ethical, legal and social issues. Illes et al. present global initiatives for engaging the public on these issues and discuss the opportunities and challenges in the burgeoning field of neuroethics.
Abstract | Full text | PDF (110kb)

Tuesday, December 13, 2005

Allowing comments -- a typo

I just discovered that the settings for the blog were set to accept comments from registered users only. I've now changed this setting to accept all comments. At least for now – I have no idea if this opens up a spamming pandemonium.

But anyway, please send us your comments and thoughts about these issues. We are always open for comments and discussions about what brain science means to YOU.

Monday, December 12, 2005

Dethronement step II – Just Another Animal

Thanks to Martin for filling me in on Freud. It's notable that in the second criticism of human megalomania Freud mentions Darwin. Not because this is wrong, but rather because his own theory of the human mind has been criticised for using an outdated version of evolution, namely Jean-Baptiste Lamarck and his theory of "inheritance of acquired traits" (see also Lamarckism).

Recently, Scientific American declared Charles Darwin to be the most influential scientist of the past millennium. (The subtitle on the cover was "Sorry, Einstein . . ."). In other words, the idea of evolution is considered the most important scientific idea through the modern history of man (well, at least the past millennium). Although Darwin was cautious to mention humans in his first works, he later made more explicit remarks about the phylogeny of mankind.

There is little doubt that Darwinism has not gained the wide acceptance that the Copernican world view has long enjoyed. The proponents of Intelligent Design (hush-hush!) claim to challenge evolutionary theory. But let's be frank: the ID idea is NOT a scientific theory. It has not generated ANY testable hypotheses, and it does NOT explain empirical data any better than Darwinian evolutionary theory. It's that simple. Really! But in addition, let's just mention the idea of Unintelligent Design. After all, if the designer of animals, humans and everything was Intelligent, why are there so many examples of unnecessary – or even harmful – body parts? For example, why do we have a blind spot in each eye? What is the vermiform appendix for? See also this section for the main criticisms of the ID idea.

The implications of Darwinism are – sadly – still not generally accepted. So, while reading the Freud citation from Martin, I still wonder why the same diagnosis of human megalomania applies today. I will not dwell on the reasons for this here, other than to point out that there is a clear tendency for religious people to reject evolutionary theory. So this is the current status of the road to human dethronement. OK, we'll accept that we're not the Centre of the Universe. But hey, don't make me think that I'm just some ape!

See also this page.

Saturday, December 10, 2005

Freud's "wounding blows"

It may be somewhat tedious to comment on something written by one's blog partner, but just to clear up who first formulated the idea of three major upheavals in human thought quoted by Thomas in the post below: It was, in actual fact, Freud, who, ever modest, singled out Copernicus, Darwin and himself as the three culprits! Freud presented this idea for the first time in his Introductory Lectures to Psychoanalysis which can be found in volume 15 of the Standard Edition of his works. The passage runs as follows:

"But in thus emphasizing the unconscious in mental life we have conjured up the most evil spirits of criticism against psycho-analysis. Do not be surprised at this, and do not suppose that the resistance to us rests only on the understandable difficulty of the unconscious or the relative inaccessibility of the experiences which provide evidence of it. Its source, I think, lies deeper. In the course of centuries the naïve self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus, though something similar had already been asserted by Alexandrian science. The second blow fell when biological research destroyed man's supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin, Wallace and their predecessors, though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind. We psycho-analysts were not the first and not the only ones to utter this call to introspection[1]; but it seems to be our fate to give it its most forcible expression and to support it with empirical material which affects every individual. Hence arises the general revolt against our science, the disregard of all considerations of academic civility and the releasing of the opposition from every restraint of impartial logic" (Standard Edition, 15, 284-5).

Of course, Freud's idea has since been reformulated many times, by many authors.

Friday, December 09, 2005

Four (was: three) steps to dethronement

Throughout the history of mankind, religious beliefs have been numerous and manifold. The Egyptian belief in Ra, the Greek belief in Zeus, the Viking belief in Odin, and today's belief in God, Allah, even Lucifer, share some basic similarities. First, the belief in human centralism: that humans are (part of) the centre of universal historical events, and that God (or the gods) pays special attention to humans.

Second, the belief that humans are unique and qualitatively different from other species, and that they are the creation of a divine being. They are not only different in terms of physical, even mental, abilities; humans are closer to the Divine Being.

Third, folk psychology tells us that each (normal) person's actions are the result of rational, conscious thought. We are the conscious, autonomous agents of our behaviour.

Finally, there is a belief that humans consist of two substances or dimensions: the physical body and the immaterial soul/mind, and that when the body ceases to exist the soul will go on.

These assumptions have been questioned by scientific discoveries since long before the studies of Nicolaus Copernicus and his peers. But let us recap these events briefly and contrast them with the four religious dogmas presented above. We can think of these controversies in terms of the universal centeredness of human beings; the uniqueness of human beings; the conscious agency of human action; and the body-soul division. In this and three following posts, I will argue that while today's modern world does not believe in a pre-Copernican world view, the implications of Darwinian evolutionary theory have yet to be understood and endorsed in their entirety. Finally, results from the scientific studies of consciousness imply that the human mind can be explained by brain structure and function alone.

And yes, this is kind of a rehash of earlier claims about human self-perception. Martin mentions that it might have been Freud who once said this. I'm not sure. Anyway, I still think the idea is neat. And notice that the previous version had three steps, while I have noticed that, thanks to current consciousness science, we can add a fourth step.

Step 1 – Decentralizing Earth

No doubt, the Copernican revolution changed the way people would eventually look at the skies and at how Earth was situated in the cosmos. The belief that the Earth sits at the centre of the universe was overthrown by three cornerstones of modern science: theory, observation, and the willingness to re-evaluate current thought. Even in modern science, the latter is an attribute hard to find among scientists. Today, few dispute the Copernican principle that the motion of the heavens can be explained without the Earth being at the geometric centre of the system.

This is probably the easiest step, as it is also the least controversial. But it is a clear example of how scientific discoveries, over time, can influence the way in which we think of ourselves and our position in the universe.

Next step: "Evolving Sapiens"

Tuesday, December 06, 2005

From face to brain transplant

There has been an enormous fuss lately about the transplant of face parts from a dead donor to a woman. See some of many google links here. The idea of having another person's face has probably led many to think about (their own) personal identity. Would you recognise the new face in the mirror as your own? You probably would, through association over time. But would it actually "feel" like yourself? This is a totally new issue to which a door has now been opened. Well, at least if this is not a one-time show.

As the limits – technical and ethical – of transplants are being pushed further, what is up next? What if my hippocampus started to shrink, and my memory or learning ability was fading? Would I accept the offer of a fresh hippocampus from a donor? In years to come, we may well be offered this opportunity. In principle, there would be little difference between receiving a donated heart, liver, finger or brain part. But would the inclusion of the hippocampus bring along that other person's private memories? I guess the chances of this scenario are non-existent. The brain (and the hippocampus) does not work that way.

Chances are that the first brain transplants would be performed in "not-ethically-dangerous" areas, such as the brain stem (e.g. in Parkinsonism), or the primary visual cortex. But what if I was offered a new amygdala, or spare parts for my orbitofrontal cortex? These areas are tightly coupled to emotional reactions and "personality" (yes, a mongrel concept).

How likely is it that this can be done on purely technical grounds? Well, to tell you the truth: it's here already! In 1989, Lehman et al. transplanted the suprachiasmatic nucleus (see image below) from one hamster to another. This nucleus is strategically positioned just above the optic chiasm (hence "supra-chiasmatic"), where the two visual projections from the eyes cross. It is known to play a significant role in regulating the sleep-wake cycle, and ablation of this nucleus permanently disrupts the so-called circadian rhythm.

Lehman et al. removed the suprachiasmatic nucleus from one hamster. As expected, this led to a disruption of the animal's circadian rhythm. Then, a suprachiasmatic nucleus from another hamster was transplanted in its place. The circadian rhythm – the sleep-wake cycle – of the first hamster was restored! In other words, the sleep cycle of the first hamster could be "repaired" with another hamster's brain part.

So, brain transplants are certainly feasible as future operations in humans! The questions are therefore: would you accept a brain transplant? Should we do this? Could we draw legal lines between "acceptable" and "unacceptable" spare brain parts? Is the hippocampus OK for transplant but the amygdala not?