Wednesday, 8 April 2009

The Voodoo Strikes Back

Just when you thought it was safe to compute a correlation between a behavioural measure and a cluster-mean BOLD change...

The fMRI voodoo correlations controversy isn't over. Ed Vul and colleagues have just responded to their critics in a new article (pdf). The critics appear to have scored at least one victory, however, since the original paper has now been renamed. So it's goodbye to "Voodoo Correlations in Social Neuroscience" - now it's "Puzzlingly high correlations in fMRI studies of emotion, personality and social cognition" by Vul et al. (2009). Not quite as catchy, but then, that's the point...

Just in case you need reminding of the story so far: A couple of months ago, MIT grad student Ed Vul and co-authors released a pre-publication manuscript, then titled Voodoo Correlations in Social Neuroscience. This paper reviewed the findings of a number of fMRI studies which reported linear correlations between regional brain activity and some kind of measure of personality. Vul et al. argued that many (but by no means all) of these correlations were in fact erroneous, with the reported correlations being much higher than the true ones. Vul et al. alleged that the problem arose due to a flaw in the statistical analysis used, the "non-independence error". For my non-technical explanation of the issue, see my previous post, or go read the original paper (it really doesn't require much knowledge of statistics).
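If you like to see these things for yourself, the logic of the non-independence error is easy to demonstrate with simulated data. The sketch below is my own toy example, not anything from the paper - the subject count, voxel count, and threshold are all made up for illustration. It generates pure-noise "BOLD" data, picks out the voxels that happen to correlate with a behavioural score, and then re-correlates the mean of those voxels with that same score. The result looks impressive even though the true effect is exactly zero; run the same voxels on fresh data and the correlation evaporates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10_000  # illustrative group size and voxel count
threshold = 0.6                    # "significance" cutoff for voxel selection

behaviour = rng.standard_normal(n_subjects)
bold = rng.standard_normal((n_voxels, n_subjects))  # pure noise: true r = 0

# Correlate every voxel's signal with the behavioural measure
bold_c = bold - bold.mean(axis=1, keepdims=True)
beh_c = behaviour - behaviour.mean()
r = (bold_c @ beh_c) / (np.linalg.norm(bold_c, axis=1) * np.linalg.norm(beh_c))

# Non-independent analysis: select the voxels that passed threshold,
# average them into a "cluster mean", and correlate with the same behaviour
selected = r > threshold
cluster_mean = bold[selected].mean(axis=0)
r_biased = np.corrcoef(cluster_mean, behaviour)[0, 1]

# Independent check: same voxels, but fresh data from the same null process
bold_fresh = rng.standard_normal((n_voxels, n_subjects))
r_honest = np.corrcoef(bold_fresh[selected].mean(axis=0), behaviour)[0, 1]

print(f"{selected.sum()} noise voxels passed the threshold")
print(f"non-independent r = {r_biased:.2f}")  # large, despite zero true effect
print(f"independent r     = {r_honest:.2f}")  # back down to noise level
</imports>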

Vul's paper attracted a lot of praise and also a lot of criticism, both in the blogosphere and in the academic literature. Many complained that it was sensationalistic and anti-fMRI. Others embraced it for the same reasons. My view was that while the paper's style was certainly journalistic, and while many of those who praised the paper did so for the wrong reasons, the core argument was both valid and important. While not representing a radical challenge to social neuroscience or fMRI in general, Vul et al.'s paper draws attention to a widespread and potentially serious technical issue with the analysis of fMRI data, one which all neuroscientists should be aware of.

That's still my opinion. Vul et al.'s response to their critics is a clearly worded and convincing defence. Interestingly, their defence is in many ways just a clarification of the argument. This is appropriate, because I think the argument is pretty much just common sense once it is correctly understood. As far as I can see, the only valid defence against it is to say that a particular paper did not in fact commit the error - while not disputing that the error itself is a problem. Vul et al. say that to their knowledge no accused papers have turned out to be innocent - although I'm sure we haven't heard the last of that.

Vul et al. also now make explicit something which wasn't very clear in their original paper, namely that the original paper made accusations of two completely separate errors. One, the non-independence error, is common but probably less serious than the second, the "Forman error", which is pretty much fatal. Fortunately, so far, only two papers are known to have fallen prey to the Forman error - although there could be more. Go read the article for more details on what could be Vul's next bombshell...

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Reply to comments on "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition". Perspectives on Psychological Science.

Tuesday, 7 April 2009

Antidepressants: Clinical Trials versus Real Life

In a recent post, I argued that no-one knows how well antidepressants work. Although there have been a huge number of clinical trials conducted on a variety of antidepressant drugs, it is impossible to know what the results of these trials mean in terms of real benefits for real patients.

I'm not the only skeptic. A paper just out in the American Journal of Psychiatry adds to the growing case against contemporary antidepressant trials (almost all of which are industry-sponsored) and should give everyone cause for thought.
The article, Can Phase III Trial Results of Antidepressant Medications Be Generalized to Clinical Practice? A STAR*D Report, is one of the many spin-offs from the STAR*D trial. STAR*D was a large and ambitious study designed to investigate the effectiveness of antidepressants in a realistic setting. The results were rather difficult to interpret (and some are yet to be published), but this report is certainly amongst the most interesting.

One of the things that made STAR*D different from the average trials was the recruitment criteria. Most trials require a volunteer to tick numerous boxes before they can be enrolled in the study - for example, it's common to exclude anyone showing signs of suicidal thoughts or behaviours, people with any problems other than depression, like addictions, and anyone whose depression is rated as being "insufficiently severe" using a depression rating scale such as the HAMD.

The majority, and probably the vast majority, of people who suffer from depression don't fit such narrow criteria. So antidepressants end up being tested on no more than a small, select group of the people who are likely to end up taking them when and if they hit the market. And because criteria can differ between trials, two trials might end up testing the same drug on two quite different types of people - although on paper they are both a trial of the drug for the exact same thing, "major depression".

To be fair, some criteria are necessary to protect the safety of volunteers (you don't want someone who is suicidal getting their hands on an experimental and potentially dangerous drug), but the whole situation is far from ideal. People have been complaining about it for a while. The new paper adds to the list of complaints. The authors took advantage of the fact that STAR*D did not have restrictive entry criteria, and simply compared those patients who did happen to fit the bill for a "typical" antidepressant trial vs. those who didn't.

First off, just under a quarter (22.2%) of patients met the typical criteria. That's really not very many. And, as you'd expect, this minority of patients were rather different from the rest. Amongst many other things they were slightly younger, a lot richer (mean monthly income $3050 vs. $2163), much less likely to be unemployed or to have no medical insurance, and less likely to be black or Hispanic (this was an American sample).

Such differences might seem unimportant - if someone is suffering from a disease, and they're given a medication to treat it, does the size of their paycheque really matter? Yes, it could - the patients who met the criteria for a typical antidepressant trial reported on average more improvement, and fewer side effects, compared to the others. (They were all given citalopram, a popular and pretty decent SSRI).

Does this mean that rich white people really get more benefit from citalopram? Or do they just tend to report more benefit? Or do they experience larger placebo effects? It's impossible to say. The authors, who include some big names in antidepressant research, conclude that:
...a patient sample that meets the inclusion criteria for a phase III clinical trial is not representative of depressed patients seen in typical clinical practice, and phase III trial outcomes may be more optimistic than results obtained in practice.
Although it's also possible that trial outcomes could be more pessimistic, in terms of finding smaller drug-placebo differences than they otherwise would. Only one thing is certain - antidepressant trials are far removed from the real world, and the results of such trials have to be taken with a large pinch of salt.

Wisniewski, S., Rush, A., Nierenberg, A., Gaynes, B., Warden, D., Luther, J., McGrath, P., Lavori, P., Thase, M., Fava, M., & Trivedi, M. (2009). Can Phase III Trial Results of Antidepressant Medications Be Generalized to Clinical Practice? A STAR*D Report. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2008.08071027

Sunday, 5 April 2009

A Couple of Links

A couple of neat things I discovered this week:

Just judging by the name, you might think that ScienceWatch was one of those tedious attack sites going under the guise of "watches" (naming no names). But it's actually about "tracking trends and performance in basic research". By analysing citation data and so forth, they claim to be able to identify "hot papers" and, more interestingly, hot "fronts" or themes in research. It's a commercial enterprise, but a lot of the material is free. They just mapped out the hot topics in current OCD research (although the results were hardly surprising).

Then there's Pology Magazine, which is a travel magazine, but with an anthropological/social science approach - and lots of extremely pretty pictures. It's well worth a visit.

Tuesday, 31 March 2009

The Entirely Legitimate Encephalon #67

(Updated! New post from Channel N -see below.) Welcome to the 67th edition of Encephalon, the regular neuroscience and psychology blog roundup. In honor of the recently revealed hilarious petty corruption in British politics, I demanded a hefty bribe to do this post... Wait, did you just read that? I'll give you £50 if you keep quiet about it. Ok, £100. I've got a reputation to uphold.
Anyway, in no particular order - certainly not in the order of the sum they paid me - here are your links for this edition:
  • New! Channel N features a talk by MacArthur Genius and neuro-robotics pioneer Yoky Matsuoka. If you ever want a bionic limb, she's the person to call.
  • In honour of old St Paddy, PodBlack Cat deals with the psychology of "luck", superstition, and Irish movies. Apparently, there are now breeds of clover which always have four leaves - where's the fun in that?
  • Neurophilosophy's Mo writes about a pair of fascinating neuroimaging studies about limb amputation and the brain's construction of the body image.
  • Ward Plunet of BrainHealthHacks has three recent posts looking at possible links between obesity and cognitive ability - could be controversial.
  • Ouroboros discusses an interesting discovery which reveals another piece of the puzzle about the genetics of familial Alzheimer's disease.
  • Hesitant Iconoclast of the NeuroWhoa! blog presents a well-thought-out two-part post about the search for the brain's "God Spot", and what it might mean if there isn't one.
  • The Neurocritic is, as ever, critical, about lie detection and about the latest potential weight loss pills.
  • SharpBrains, the homeland of Encephalon, has a useful set of links to the best brain health articles from the past month, and also discusses the deeply unhealthy goings-on at JAMA regarding conflicts of interests, an antidepressant trial, and some impressive academic fisticuffs.
  • Neuronarrative discusses two fMRI studies which are rather topical in the current economic climate. One is about what happens when we take an expert's advice when making decisions, and the other is about the "money illusion". Finally, there's a post featuring four expert responses to the Susan Greenfield Facebook-destroys-the-brain controversy (which I wrote about previously), which are rather enlightening.
  • BrainBlogger provides a typically accessible write-up of a small but exciting study about the possible utility of lithium in Lou Gehrig's disease, and a large study of the possible cognitive consequences of the metabolic syndrome.
  • Finally, The Mouse Trap's Sandeep has an extensive and very thought-provoking two-part series on the psychology of pleasure, pain and bipolar disorder, and, to round out this issue, discusses an imaging study about how we know the difference between reality and fiction. Did I really accept bribes to produce this issue?
That's it for this issue! The next Encephalon is slated to be hosted over at Ouroboros, so get writing and e-mail submissions to encephalon{dot}host{at}gmail{dot}com by April 13th.

Sunday, 29 March 2009

Cosmic Ordering, CAM and the NHS

A while back, I argued that it might not be a good idea to encourage the use of therapies, such as homeopathy, which work via the "placebo effect". (I've also previously said that what people call "the placebo effect" very often isn't one).

But there's more to say on this. Let us assume that homeopathy, say, is nothing more than a placebo (which it is). Let's further assume that homeopathy is actually quite a good placebo, meaning that when people go to see a homeopath they generally leave feeling better and end up experiencing better health outcomes - for whatever reason. This second assumption is exactly that, an assumption, because to my knowledge no-one has done a study of whether people who use homeopathy actually feel any healthier than they would if they had never heard of homeopathy and just got on with their lives. But let's assume it works.

Now, does this mean that homeopathy is a good thing? Well, sure, if it makes people feel better, it's a good thing. However - it doesn't follow that homeopathy, or any other form of complementary and alternative medicine which works as a placebo, should be available on the NHS. Many have argued that if CAM works, even if only by the placebo effect, it's still a useful thing which the NHS should support if patients request it. I disagree.
A while back, South Bank University in London was widely mocked for getting a psychic to give a training session on "cosmic ordering". Cosmic ordering is the belief that you can get what you want in life by placing an order with the universe in the form of wishing really hard and then some quantum stuff happens and - I can't write any more of this. It's all crap. Anyway, the head of South Bank defended the session on the grounds that the staff requested it, liked it and found it useful.

Now if I applied for funding from my University to pay for a night down the pub for the whole of my Department, I'd get the bureaucratic equivalent of a slap in the face. This despite the fact that people would enjoy it, it would help with team-building, and it would reduce stress levels. The point is that despite a Departmental night down the pub being, probably, a good thing in many ways, it's just not the kind of thing a University is responsible for. It would be incredibly unprofessional for University money to be spent on that kind of thing.

Likewise, it was unprofessional of South Bank to pay for a psychic to give a training course, even though the attendees liked it. Sorry to sound anal but Universities don't exist to give their staff what they want. They exist to pay their staff in exchange for their professional services & to help them carry out those services.

Likewise, the NHS, I think, doesn't exist to make people feel good; it exists to treat and prevent medical illnesses. So, people like homeopathy and find it helpful for relieving stress-related symptoms. Does that mean the NHS should be paying for it? Only if you believe that the NHS should also be paying for me to take a holiday to Thailand. I don't believe in homeopathy, but I do believe that a week on a Thai beach would do wonders for my stress levels. Or maybe I'd prefer a sweet guitar - I find playing guitar is great for relaxation, but it would be even better if I had a £700 model to play on. My well-being levels would just soar, if only until the novelty wore off. You get the point.

Most "complementary and alternative medicine" is medicine in appearance only. Just because homeopaths hand out pills doesn't mean that what they do has anything to do with medicine. It's ritual. It's close to being entertainment, in a sense - which is not to belittle it, because entertainment is an important part of life. I'm sure there are many people for whom their sessions with their homeopath are really very useful. I just don't think the medical services should necessarily be paying for everything that people find helpful.
