Sunday, 31 October 2010

The media reporting of research and reviews (and a little bit on why there are some pretty cool scientific techniques in mental health research)

Someone made the point that media reporting on papers rarely covers
negative findings, i.e. a study showing that x doesn't work. Most of the
coverage, certainly in mental health, is of single studies which usually
have methodological flaws. They don't even have to be double-blind
randomised controlled trials to get reported. As long as the effect is
interesting or the research touches on another popular news story, it
might get reported amongst the articles on the misdemeanours of
celebrities and whatever tripe constitutes news these days.

They rarely report on reviews, meta-analyses and systematic reviews (or
qualitative papers either, come to think of it). A very poor paper that
was severely flawed (the students who were its subjects were told what
the research was about) claimed that caffeine might induce psychosis.
They didn't even bother to say "psychosis-like". This is the sort of
research I wipe my arse with. It was done by a postgrad and wasn't
important as a study. Yet it found its way into the media.

Reviews often show negative findings, i.e. the hypothesis cannot be
definitively proven. Some do show positive results, of course, and these
are rarely reported either. This screws my guess that it's only studies
contradicting what a reporter has published in the past that don't get
reported, i.e. if a reporter has written that fish is good for health
and then a review comes out showing it isn't (this is just an example),
they might not report it. But they don't report the results of positive
reviews either.

They're not scientists, of course, but they report on science and
communicate it to the masses. They aren't trained to critically evaluate
research, and they don't know the hierarchy of evidence or why
systematic reviews are so valued. Systematic reviews are a total bitch.
They pick only the highest-quality trials. They seek to establish
whether publication bias inflates the effect, using meta-analysis. They
choose the trials so meta-analysis can be best applied (i.e. those that
create the least problem of comparing apples and oranges), though this
selection can be flawed too. And often they find that when you look at
lots of studies and compensate for publication bias, effect sizes in
mental health quickly diminish to barely above placebo.
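
To make "meta-analysis" a bit more concrete, here's a minimal sketch (in Python, with invented numbers, not taken from any real review) of the inverse-variance pooling at its core: each trial's effect size is weighted by the inverse of its squared standard error, so big, precise trials count for more.

```python
# A minimal sketch of inverse-variance (fixed-effect) pooling, the core
# calculation inside a meta-analysis. All numbers are invented for
# illustration only.
import math

# (effect size, standard error) for three hypothetical trials
studies = [(0.80, 0.30), (0.45, 0.15), (0.20, 0.10)]

# Each trial is weighted by 1/SE^2, so large, precise trials dominate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect size: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

Notice how the small trial with the big effect (0.80) barely moves the pooled estimate; that's the whole point of the weighting.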

It's when you read papers that take almost 1,000 psychological therapy
papers and put their numbers into a meta-analysis that you start seeing
the power and the effort of proper science. 1,000 papers, and the effect
size is about 1 if I remember the paper right. In physical healthcare
this would be considered a tiny effect size; in mental health it'll just
about do. The paper I'm thinking of was actually looking at publication
bias and used a meta-analytical technique called a funnel plot. (My
browser's crashed, but search "psychological therapy publication bias
2010" at the British Journal of Psychiatry website and you should find
the paper.)
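
For anyone wondering what "an effect size of about 1" means: in this literature it's usually Cohen's d, the difference between group means in standard-deviation units. A sketch of the arithmetic, with made-up numbers chosen so d comes out near 1:

```python
# Cohen's d: difference between group means in pooled-SD units.
# All numbers are invented purely to illustrate the calculation.
import math

treatment_mean, control_mean = 14.0, 10.0   # e.g. symptom-scale scores
sd_treatment, sd_control = 4.2, 3.8
n_treatment, n_control = 50, 50

# Pooled standard deviation across the two groups
pooled_sd = math.sqrt(((n_treatment - 1) * sd_treatment**2 +
                       (n_control - 1) * sd_control**2) /
                      (n_treatment + n_control - 2))

d = (treatment_mean - control_mean) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # roughly 1.0 with these invented numbers
```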

A funnel plot is used to estimate whether publication bias has happened.
Publication bias is the effect whereby trials which show negative
findings aren't published. This is common in pharmaceutical research,
and when unpublished data is added, as in the Kirsch paper in 2007, the
effect size rapidly diminishes. The funnel plot can detect whether
studies haven't been published. It works on the principle that large
sample sizes mean the results will be closer to the true effect size, or
whatever is being measured, whereas smaller studies will have a larger
variance around the average. The scatter of positive and negative effect
sizes around the average should be symmetrical and should decrease with
increasing sample size (btw, this is much easier if you can see what a
funnel plot looks like; Google or Wiki it). This is what draws the
funnel shape that's expected without publication bias.

At the left of the graph, where sample size is smallest, there's the
biggest variance, so studies are dotted all over the place, but as you
go right the scattering becomes less and less, which creates the funnel
shape tapering off from left to right. If the shape doesn't look like a
funnel and there's a hole near the left of the graph, where studies with
negative findings should be, then there's been publication bias, which
would inflate the effect size shown in a meta-analysis. In the paper I
mentioned, with around a thousand papers on psychological therapies put
into a funnel plot, correcting for publication bias reduced the effect
size by about a third.
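
If you can't look one up, here's a toy simulation of a funnel plot (using numpy and matplotlib, my choice of tools rather than anything from the paper). It follows the orientation described above, sample size running left to right, though journals often draw it with precision on the vertical axis instead. It simulates trials around a true effect, then censors the small unimpressive studies to show the tell-tale hole:

```python
# Toy funnel-plot simulation: scatter of simulated trial effect sizes
# against sample size, before and after censoring small negative studies
# to mimic publication bias. Entirely invented data for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.5
n = rng.integers(10, 500, size=400)      # sample size of each simulated study
se = 1 / np.sqrt(n)                      # bigger studies -> smaller error
effect = rng.normal(true_effect, se)     # observed effect sizes

# Publication bias: small studies with unimpressive results vanish.
published = (n > 100) | (effect > true_effect)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, mask, title in [(axes[0], np.ones_like(published, bool), "All studies"),
                        (axes[1], published, "Published only")]:
    ax.scatter(n[mask], effect[mask], s=8, alpha=0.5)
    ax.axhline(true_effect, linestyle="--")  # the true effect size
    ax.set_xlabel("sample size")
    ax.set_title(title)
axes[0].set_ylabel("effect size")
plt.tight_layout()
plt.show()
```

In the right-hand panel the lower-left corner of the funnel is empty, and averaging only the published studies would overestimate the true effect, which is exactly the inflation the paper was correcting for.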

This is real science attempting to gain sound truths rather than
something that might titillate the masses. However, I'm optimistic that
the public aren't so stupid that they wouldn't want to learn about the
funnel plot, the problem of publication bias and the problems with all
the science they get told about.

