ago. It was a chat about the application of science, the epistemology I
guess - the 'how do you know what you know?'
Research results are measured for validity using different scientific
techniques - confidence intervals and the like. I'll admit I don't
totally understand the science behind it, and I should. I trust the
authors of papers and the editorial teams at peer-reviewed journals to
sort that out for me.
We spoke about the normal distribution and the 95% confidence interval.
My understanding of the 95% interval is that it says the effect is true
for most people based on the average, but it doesn't consider the shape
or features of the tail ends. What I mean is that the extremes aren't
measured or understood. I made a point to him - one that perhaps worked
because it used his ego: are you part of the 95% or the 5%?
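To make that point concrete for myself afterwards I knocked up a rough
sketch in Python. The numbers are made up (they're not from any study);
it just shows how a clearly positive average effect still leaves a slice
of people below zero once you look at the tail:

from scipy.stats import norm

# Illustrative numbers only, not taken from any paper:
# suppose the average benefit is +0.4 units and individual
# responses are roughly normal with a standard deviation of 0.3.
mean_effect = 0.4
sd_effect = 0.3

# ~95% of individuals fall within mean +/- 1.96 SD
lower, upper = norm.interval(0.95, loc=mean_effect, scale=sd_effect)
print(f"95% of individuals fall between {lower:.2f} and {upper:.2f}")

# but the tail below zero still contains real people
share_below_zero = norm.cdf(0, loc=mean_effect, scale=sd_effect)
print(f"share with a negative outcome: {share_below_zero:.1%}")  # about 9% here

A headline average of +0.4 sounds like good news for everyone; the last
line is the bit the headline never mentions.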
I'm thinking about a systematic review and meta-analysis of job
satisfaction and physical and mental health, which covered 485 studies
with a combined sample size of 267 995 individuals. Across a number of
domains, job satisfaction improves physical and mental health with an
effect size that towers over what high-quality trials and research in
mental health show for psychiatric treatments. Of course, this paper
wasn't restricted to the very high-quality trials that would pass the
criteria of a Cochrane systematic review, so the effect sizes may have
been inflated, and I can't remember if the authors checked for
publication bias using a funnel plot.
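For what it's worth, a funnel plot is simple enough to sketch yourself.
Here's a rough Python version using simulated studies (nothing to do
with the actual data in the paper): each study's effect is plotted
against its standard error, and a lopsided funnel is a hint that small
studies with unwelcome results never got published.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Simulated meta-analysis: 80 studies around a true effect of 0.3,
# where smaller studies (larger standard errors) scatter more widely.
true_effect = 0.3
se = rng.uniform(0.02, 0.25, size=80)
effects = rng.normal(true_effect, se)

plt.scatter(effects, se)
plt.gca().invert_yaxis()            # large, precise studies sit at the top
plt.axvline(true_effect, ls="--")   # pooled estimate for reference
plt.xlabel("observed effect size")
plt.ylabel("standard error")
plt.title("Funnel plot (simulated data)")
plt.show()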
This is the paper:
Faragher, E. B., et al. (2005). The relationship between job
satisfaction and health: a meta-analysis. Occup Environ Med,
62(2), 105-112. doi:10.1136/oem.2002.006734
http://oem.bmj.com/content/62/2/105.abstract
This is the abstract:
"
Background: A vast number of published studies have suggested a link
between job satisfaction levels and health. The sizes of the
relationships reported vary widely. Narrative overviews of this
relationship have been published, but no systematic meta-analysis review
has been conducted.
Methods: A systematic review and meta-analysis of 485 studies with a
combined sample size of 267 995 individuals was conducted, evaluating
the research evidence linking self-report measures of job satisfaction
to measures of physical and mental wellbeing.
Results: The overall correlation combined across all health measures was
r=0.312 (0.370 after Schmidt-Hunter adjustment). Job satisfaction was
most strongly associated with mental/psychological problems; strongest
relationships were found for burnout (corrected r=0.478),
self-esteem (r=0.429), depression (r=0.428), and anxiety (r=0.420). The
correlation with subjective physical illness was more modest (r=0.287).
Conclusions: Correlations in excess of 0.3 are rare in this context. The
relationships found suggest that job satisfaction level is an important
factor influencing the health of workers. Organisations should include
the development of stress management policies to identify and eradicate
work practices that cause most job dissatisfaction as part of any
exercise aimed at improving employee health. Occupational health
clinicians should consider counselling employees diagnosed as having
psychological problems to critically evaluate their work—and help them
to explore ways of gaining greater satisfaction from this important
aspect of their life.
"
It's important because it blows away the effect size of CBT in the
mega meta-analysis from 2006
(http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VB8-4H74MB8-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=55df21bfab2b983b57bf13232e61ad80)
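Most treatment trials report effect sizes as Cohen's d rather than
correlations, so to put the two on the same scale you need the standard
conversion d = 2r / sqrt(1 - r^2). A quick back-of-the-envelope in
Python, using the r values from the abstract above (the comparison
itself is my own arithmetic, not something either paper does):

import math

def r_to_d(r):
    """Standard conversion from a correlation r to Cohen's d."""
    return 2 * r / math.sqrt(1 - r**2)

# r values quoted in the abstract above
for r in (0.312, 0.370, 0.478):
    print(f"r = {r:.3f}  ->  d = {r_to_d(r):.2f}")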
But that's not why I'm thinking about it. I'm thinking about it because
of one of the graphs in the paper. The authors produced a plot showing,
for each paper, the average and the values one standard deviation either
side of it. I'd never seen a plot of meta-analytic results with so many
papers consistently showing positive results for any variable in mental
health.
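I can't reproduce their figure here, but the kind of plot I mean looks
roughly like this. The values below are simulated, not the paper's data;
the shape is the point - per-study averages with whiskers one standard
deviation either side, nearly all of them sitting to the right of zero:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Simulated stand-in for the paper's figure: each study's average
# correlation with +/- 1 SD whiskers, most of them above zero.
n_studies = 30
means = rng.normal(0.31, 0.12, n_studies)
sds = rng.uniform(0.05, 0.20, n_studies)
order = np.argsort(means)

plt.errorbar(means[order], np.arange(n_studies),
             xerr=sds[order], fmt="o", capsize=2)
plt.axvline(0, color="grey")   # whiskers crossing this line include negative results
plt.xlabel("correlation (mean +/- 1 SD)")
plt.ylabel("study (sorted by mean)")
plt.show()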
But what struck me when I read the paper, what strikes me now, and what
I was trying to explain to the lawyers I met in a pub in Holborn, is
that it still didn't work for some people. In fact, job satisfaction can
have negative results for a small percentage of the population, just as
CBT can have a negative effect on some people.
The negative impact of any variable, no matter how few people it
affects, is important - though what counts as 'few' would be a debate.
This isn't examined enough in research. Abstracts rarely mention what
happens to those who have extremely negative outcomes. I want to know
what happens to the people for whom job satisfaction doesn't work.
There's no conventional number used to denote the full 'width' or range
of results across the whole sample, rather than just 95% of it, nor how
far the worst outcomes fall below the average effect size. It seems the
tail ends of the graph aren't really important - just the 95%. That
seems dumb to me.
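The kind of summary I have in mind would look something like this -
simulated, made-up values again, just to show the shape of it: not only
a band around the mean but the worst few percent spelled out alongside.

import numpy as np

rng = np.random.default_rng(42)

# Simulated individual-level outcomes: positive on average,
# but with a real left tail (the values here are invented).
outcomes = rng.normal(0.35, 0.30, 10_000)

print(f"mean effect:           {outcomes.mean():+.2f}")
print(f"95% of people between: {np.percentile(outcomes, 2.5):+.2f} "
      f"and {np.percentile(outcomes, 97.5):+.2f}")
print(f"worst 1% at or below:  {np.percentile(outcomes, 1):+.2f}")
print(f"worst single outcome:  {outcomes.min():+.2f}")
print(f"share below zero:      {(outcomes < 0).mean():.1%}")

That last trio of numbers is exactly what abstracts almost never give you.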