It's something I've ranted about at some length, though perhaps without giving the challenge its due. To me it's obvious. Silicon is simple, and so has many research papers written about it. It's actually pretty complex stuff, but it's dwarfed by more important questions which elude positivistic science, a method which has made significant breakthroughs in understanding the beginnings of the universe and the makeup of matter yet has no real understanding of our internal reality.
What positivistic scientific methods, often known as real scientific methods, do is apply a paradigm of reliably predicting future outcomes. They offer the knowledge that something might actually work for most people, rather than just pretending to help them. Sadly the latter seems to be the most cost-effective treatment in mental healthcare. The placebo effect is powerful.
When it comes to writing clinical guidelines and teaching doctors, the concept of the hierarchy of evidence is vitally important. Any systematic review of any quality will weight studies in its analysis. It will look at a wide range of pertinent data but will exclude poor quality studies, for example ones which aren't randomised controlled trials (RCTs).
Systematic reviews are the top of the tree. They evaluate the quality of the methodology to weight each result when they compare lots of studies, and by lots of studies this can mean a thousand or more. Many will be quickly excluded, though, or score low on the evidence-grading system.
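Roughly, and with entirely invented trials, quality scores and thresholds, the filtering and weighting stage looks something like this Python sketch (not any real review's grading system):

```python
# A toy filter-and-weight pass over some invented studies.
# None of the trials, scores or thresholds here are real.

studies = [
    {"name": "Trial A", "design": "RCT", "n": 420, "quality": 0.9},
    {"name": "Trial B", "design": "RCT", "n": 60, "quality": 0.6},
    {"name": "Case series C", "design": "observational", "n": 200, "quality": 0.3},
]

# Drop anything that falls below the inclusion criteria,
# e.g. non-randomised designs or low quality scores.
included = [s for s in studies if s["design"] == "RCT" and s["quality"] >= 0.5]

for s in included:
    weight = s["quality"] * s["n"]  # toy weighting: quality score times sample size
    print(f'{s["name"]}: weight {weight:.0f}')
```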
They'll probably do a meta-analysis as well, something considered the next step down the hierarchy of evidence. This is a method which takes lots of studies and pools them to find the average effect. This is good science. Any result needs to be replicable; that's rather important, which is why it mattered that the technique used to achieve cold fusion using palladium wasn't replicable at other study sites.
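To see what pooling studies actually means in numbers, here's a minimal Python sketch of a fixed-effect meta-analysis with invented effect sizes and inverse-variance weights. It's the standard textbook pooling, not any particular review's method:

```python
import numpy as np

# Invented per-study effect sizes (e.g. Cohen's d) and sampling variances.
effects = np.array([0.30, 0.45, 0.10, 0.25])
variances = np.array([0.04, 0.09, 0.02, 0.05])

# Fixed-effect pooling: bigger, more precise studies count for more.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (standard error {pooled_se:.3f})")
```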
Studies in mental health are far from the level of positivistic rigour applied in the cold fusion debacle. They can at least be averaged, but there are two problems: publication bias and heterogeneity.
The latter is often called the "apples and oranges" problem. Meta-analysis only works if the same things are compared the same way. If the measures are different then the studies are comparing apples and oranges. Two different things.
Good meta-analyses seek to find similar studies and exclude the rest. This means they can end up comparing few studies but at least the technique can be applied rigorously.
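Heterogeneity can even be put into a number. Cochran's Q and the I² statistic are the usual tools; a rough Python illustration on made-up figures:

```python
import numpy as np

# Invented effect sizes and variances for a handful of studies.
effects = np.array([0.60, 0.45, 0.05, 0.30])
variances = np.array([0.04, 0.09, 0.02, 0.05])
weights = 1.0 / variances

pooled = np.sum(weights * effects) / np.sum(weights)
Q = np.sum(weights * (effects - pooled) ** 2)   # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100               # % of variation beyond chance

# Rough rule of thumb: the higher I², the more apples-and-oranges the mix.
print(f"Q = {Q:.2f}, I² = {I2:.0f}%")
```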
One problem is that trials aren't designed the same way. Each research team makes its own set of decisions on design and methodology. There's no standard methodology, but there is a de facto standard created by the published weighting systems and inclusion criteria in systematic reviews and meta-analyses.
Studies which fall short of these criteria score lower on the systems used to design clinical guidelines. They may be excluded altogether. There is, of course, value in experimental and cheap trials early on to show the promise of a possible treatment, but really the expensive trials which offer the best information, including follow-up data and proper placebo-controlled RCTs, are the only things which score highly in a scientific review. Anything less can suck on my chocolate salty balls.
Then there is the other problem with meta-analytic techniques: publication bias. Simply put, researchers don't publish negative results, or ones which don't support the preconceived hypothesis.
When taking an average, all results are needed. Negative results bring the average down. If they're not available then a meta-analysis will not give a true picture. The Kirsch meta-analysis in 2007 showed how inclusion of unpublished data reduces antidepressant effect sizes to clinical insignificance in all but the most severe depression. They must still be some pretty effective narcotics.
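The arithmetic of the file drawer is embarrassingly simple. A toy example with invented numbers:

```python
import numpy as np

# The full (invented) evidence base, positive and negative trials alike.
all_effects = np.array([0.55, 0.40, 0.30, 0.05, -0.10, -0.20])

# Only the flattering results make it into print.
published = all_effects[all_effects > 0.2]

print(f"average over everything:   {all_effects.mean():.2f}")
print(f"average over published:    {published.mean():.2f}")
```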
There's a very clever meta-analysis technique to identify publication bias called a funnel plot. This is one of my favourite bits of science.
Small studies have larger error; big studies land closer to the true average. Sample size is an important quality in science. If you have a lot of studies you can see whether any are missing, for whatever reason. This is because when a graph is plotted with effect size on one axis and sample size on the other it should look like a funnel. At the fat end are all the small-sample studies. They have the highest probability of error, so they fall evenly around the average, but, importantly, if studies have been omitted there will be a chunk of the funnel missing. The large studies form the narrow part of the funnel because they also fall either side of the average, but much closer to it than the small studies.
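You can fake a funnel plot up in a few lines of Python (numpy and matplotlib assumed, none of it real data) and watch the missing chunk appear once the small negative studies are hidden:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.3

n = rng.integers(20, 2000, size=300)     # sample sizes, mostly small studies
se = 1.0 / np.sqrt(n)                    # smaller study, bigger error
effect = rng.normal(true_effect, se)     # observed effects scatter around the truth

# Mimic publication bias: small negative studies stay in the file drawer.
published = ~((n < 200) & (effect < 0))

plt.scatter(effect[published], n[published], s=10)
plt.xlabel("effect size")
plt.ylabel("sample size")
plt.title("Funnel plot with a chunk missing at the bottom left")
plt.show()
```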
Publication bias is significant and not just driven by Big Pharma; drug research isn't the only field of endeavour which withholds negative results. The biomedically-based British Journal of Psychiatry published a paper last year which applied a funnel plot to about a thousand papers on psychological therapies. It used a method I hadn't heard of to estimate the true effect size by compensating for publication bias, the missing data from unpublished studies with negative results. It came to about a third of the thousand studies used in the plot.
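I don't know which correction that paper used, but methods like Duval and Tweedie's trim-and-fill work along these lines: assume the lopsided studies have mirror images sitting unpublished in a file drawer, impute them, and re-average. A very crude sketch of the idea, with invented numbers:

```python
import numpy as np

# Invented, suspiciously one-sided published effects.
effects = np.array([0.50, 0.45, 0.40, 0.35, 0.30, 0.28])
pooled = effects.mean()

# Crudely treat the studies well above the pooled value as having unpublished
# mirror images; a real trim-and-fill estimates how many are missing properly.
unmatched = effects[effects > pooled + 0.05]
imputed = 2 * pooled - unmatched          # reflect them to the other side

corrected = np.concatenate([effects, imputed]).mean()
print(f"naive pooled effect:    {pooled:.2f}")
print(f"bias-adjusted estimate: {corrected:.2f}")
```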
So studies need to be publicly indexed too. It sounds ridiculous that this isn't even standard practice.
And all of this science depends on accurate measurement of the quantity being explored. Here I am a child walking amongst giants. It is here that the very fundamentals of mental illness break down. Where mental illness or social disability differs from real illness is that it is a construct, an unreal or changeable thing which people of a certain time or culture judge as extremely different from their norms, not an illness in itself, though it can, perhaps, result from illness.
These problems are far outside the scope of usual explanations of the application of real science to mental illness.