This is the first of three consecutive blog posts on the research-to-practice gap,
and factors that are typically missed.
What would you say is the biggest barrier to clinicians knowing the research?
Most folks say “time”. Here and here, time is cited as the biggest (or primary) barrier. Any time somebody asks a conference audience this question during an EBP talk, they always say time. And any time nobody takes the question beyond time, I start to get a little cranky. Here’s why:
Time barriers are real, for sure. But “time” is a surface issue that hides what’s really going on.
And we need to start talking about the real problems.
What are the real problems? Clinicians, do an exercise with me, and you’ll see what I mean:
***NOTE that the time I just gifted you, above, barely scratches the surface. Brackenbury et al. (2008) suggest it takes 3–7 hours to pose a question, research it, read the evidence, and propose a solution.
OK, so what’s this BIG, under-acknowledged barrier?
Clinical applicability of research articles.
First, a story.
I got my PhD pretty young. I also started working with research pretty young. As a sophomore undergraduate, I was working in multiple research labs. I pretty much only worked in research labs from 2002–2011, and was obsessed with research, to say the least. I can still remember what it felt like to sit on the cool floor of the library stacks on a Thursday night, reading JSLHR (remember the blue-and-white hard copy editions?). That may seem like a sad image to some of you (maybe it was, LOL). But it gives me happy feelings. Yes, #nerdalert. I loved it. And I had a pretty damn impressive vitae to show for that time, if I do say so myself.
Then, toward the end of my PhD program, I made the decision to NOT go do a post-doc (as most young scientists in my position do/should). The reason I didn’t do a post-doc is a much longer story, so I won’t tell that one here, but #longstoryshort I decided to work as an SLP for a while instead. I had NO clinical experience (beyond SLP MA requirements), but thought, “Hey, if my goal is to do great things for the field of speech–language pathology, it’s reasonable for me to work as an SLP for a bit, yeah?” So I did. I worked as a school-based SLP for 5 years, actually. Three years longer than my initial game-plan, but I liked it. I really liked it.
So, where are we going with this?
First of all, you should be asking: as someone with a PhD, were you able to stay up-to-date with the research when you started working as a clinician? Because that's really the million-dollar question here! (And it gives insight into whether the science-to-practice gap stems from clinicians lacking the requisite skills, or from something else...)
And the answer to that is: meh. Not really. And it wasn't because I was burnt out from my PhD, or anything like that. I NEVER fell out of love with reading research. Instead, I mostly felt like I was too busy to read research (especially in the first couple years). I tried! But, honestly, the paperwork and therapy planning and DOING therapy took up all my dang time!
But then, about three years into working full-time as an SLP, I started to see the situation for what it really was. Because it was far, FAR more complex than a lack of time.
So here’s the actual story I’ve been leading up to. That “epiphany” day:
So after getting your bearings as a “newbie” clinician, whether that’s 2, 4, or 6 years into clinical practice, I think all SLPs go through a self-audit phase, where they ask, “Is what I’m doing actually correct? Evidence-based? Effective?” and go on a search to find out.
For me, I went on that search one day, trying to problem-solve a young client of mine. You know what I mean. That client who makes you look toward the research literature, because you don’t know what the HECK else to do with him, and progress is so slow! Or sometimes it's that type of client you've never had before. For example, I know a lot of SLPs who will go years without seeing someone who stutters or someone with apraxia of speech, so they look to the literature to try to figure out what to do.
So I went to PubMed (research wizard I am, this was easy for me). Dug through a couple hundred articles, pulled about 20 that seemed relevant to helping me answer my clinical question. Read the abstracts of them all, skimmed the papers. Narrowed it to about 10 articles, printed them out, and put them in a folder on my desk. (That all took me, oh, probably 40 minutes. I expect it’d take the average clinician longer.) I had my little labeled manila folder ready to read. Set aside a day to read through lunch, and also rescheduled a student group so I had a nice 90-minute chunk of time to read these things.
Guess what that 40 minutes + 90 minutes got me?
Nothing. Absolutely nothing. Despite the titles and abstracts looking like the articles would address my questions, they got me nowhere. Some of the articles COULDN’T be applied clinically, no matter who my client was. And others just didn’t work for my client.
Lack of clinical applicability.
And that’s when it hit me—THIS is what makes it hard. It’s not just the time thing. It’s that, even when you do spend the time, you’ll often end up empty-handed. And it only takes a few experiences like that before clinicians may give up entirely. And, yes, this is one of the exact barriers that SLPs I’ve interviewed cite—“When I try reading the research, I feel like it doesn’t even help me, so it’s often just a waste of time.”
In the spring of 2016, I left my last SLP job and started The Informed SLP. I had no idea if people would like it or use it. But, almost on a whim, I thought: “You know what would be super fun (to me)? Reading research, and telling SLPs what’s contained within, in brief snippets, so they don’t have to dig and dig and read and read, like I did (see story above).”
So I started on this monthly task of digging through all the top journals in our field, and reporting on articles that were clinically applicable. Boom! “Finding the good stuff” problem solved!
I already knew, from trying (over and over again) as a clinician to find articles I could apply to practice, that only a fraction of the research was clinically applicable, and that this was part of the research-implementation problem. But what I actually found over time honestly kind of surprised me, and only further solidified that impression.
Now, two years into doing this every single month (except now it’s not just me, it’s a team of people), we’re finding the following:
So—stop right there—what is meant by “clinically applicable”?
It’s really a pretty straightforward thing! It’s clinically applicable if the answer to the following question is YES:
“If an SLP were to read this article in full, could s/he do something with it? That is, do something different with a client this week, this month, this year, based upon the information contained within?”
And, yes, only 6% of articles published in our field = YES to that question.
Where’s the data? For the moment, you’re just going to have to believe me and my team. I understand if that's not acceptable for you! Because, though we have spreadsheets documenting this monthly for the past two years, I’ve been both (a) too busy to publish it, and (b) too scared to publish it. If I publish this data, I'm doing it open-access. And the idea of showing, line by line, all the articles published in our field each month, and why my team rejected each of them as not clinically applicable, kind of gives me heart palpitations. Because I’ve had this conversation with quite a few scientists. The response of some is, “Oh, yes. That makes sense.” But others are seriously offended when I say their work may not be considered “immediately clinically applicable”. And, oh man, the wrath LOL. I already answer an incredible volume of member questions each month; I truly can't imagine adding scientists to the mix. I need to be ready for that. And, at some point, I will. But for now, you can cite this article instead, which fully aligns with what I’ve found, and addresses one of the reasons why.
What are those reasons? Here are a couple reasons articles get rejected by us each month due to lack of clinical applicability:
So here’s where it gets tricky: you’ll notice that papers that fit the criteria above aren’t “bad science”. In fact, a lot of the most groundbreaking work in our field isn’t immediately clinically applicable. It will be someday! Just not yet. Not until either more information is collected, or until the magic tool that was created is made available to clinicians.
And this is also where it gets awkward for me. Because, as I said above, telling scientists their work isn’t immediately clinically applicable isn’t a dig. It’s just—reality. And some of the best scientists in our field publish tons of papers every year that aren’t immediately applicable. But, boy, are they incredible! And applicable later.
So, what do we do about this?
Well, The Informed SLP is doing it now. Basically, we sift out the articles that are immediately applicable to clinical practice, so clinicians don’t waste their time wading through things that aren’t relevant to them in traditional database searches. Back to that time thing. I like to think of it like this: you could say that “time” is the reason it’s so hard to find a needle in a haystack. Or we could stop burying the needle in the haystack in the first place! And that’s what TISLP is: a pincushion for the needles. For those tools you can pick up and use as a clinician right away.
Am I saying clinicians shouldn't cold-search PubMed for answers? Absolutely not. Instead, I'm saying that it can't be their primary or only method, lest they absolutely drown in journal articles. Clinicians aren't experts in single topics. We more often have to know about a lot of things. And wading through the literature on all the topics we need to know about is truly insurmountable at times.
So why am I bringing up this issue? It isn’t just to say, “Hey, look, we’re solving this problem for you. Yay us.” It’s because we can't be the only ones picking at this problem. There is work for others to do, too.
Ultimately, the only way to fix EBP in our field is to deeply and realistically audit what the real barriers are. There are more (oh man, there are more...), which I can cover in upcoming blog posts. But for now, getting real with clinical applicability could make a huge difference.