Occasionally we get member questions that are so good we want to share them. We bring in expert guest posters for these, as well, so you hear from somebody other than us. Enjoy!
If you’re reading this post, then you’ve decided to begin, improve, or build your skills as an evidence-based clinician. This is a big and important step and we’re happy that you’re here. As you may already know, it can be very overwhelming to conduct a comprehensive literature review, particularly as it relates to only one of many clients on your caseload. How do you manage it all? How do you know if an article is “good”? How do you know if you have “enough” information? How do you understand and apply conflicting information across sources? We hope to help you answer all of these questions. We are approaching the ideas within this answer from a variety of perspectives—those of clinician, researcher, and instructor.
When I have a few research articles on hand, how do I know if they're good?
There are many strategies that you can use to guide your judgment, but let’s focus on two factors: relevance to your question and quality of the work that was done. We’ll talk about each in turn.
When considering the relevance of an article, you want to think about how likely that source is to apply to your client. Is the population or person enrolled in the study similar to your client in terms of diagnosis? Age? Developmental level? Medical history? Culture and background? Language spoken? Ideally, the sample is very similar to (i.e., representative of) your client, and in those cases, it's pretty easy to judge that the article is relevant and therefore "good". But, in many—or even most—articles, the study sample will be very different from your client. Now, that doesn't mean that the article isn't "good", it just means that you need to think more critically about what you can glean from it. For instance, say you are interested in vocabulary development in children with hearing impairment. Your client is in middle school. The only articles you can find focus on children who are preschool-aged. That doesn't necessarily mean that those articles aren't "good", it just means that you need to think about what sort of information those articles can provide. For instance, you might gather some really useful information about approaches that foster vocabulary growth in children with hearing loss, even though you might learn less about whether those specific approaches work for children in middle school.
So, in sum, you want to think about this question of relevance like a zoom lens; sometimes you are going to want to zoom your focus really close on a certain feature of your client or treatment plan, and sometimes you can zoom out. How wide or narrow should your zoom be? Your clinical judgment, theoretical knowledge, experience, and critical thinking skills will provide the answer.
The second factor that you'll want to think about is the quality of the work that was done. In other words, the caliber of the research. There are quite a lot of resources available discussing important concepts that underlie research integrity (e.g., validity, reliability), but for our purposes, let's try to keep things simple: good research is research that rules out alternative explanations for the study findings. Put another way, the results mean what we think they mean! In general, nearly all applied fields have an evidence hierarchy that looks something like this: at the top of the heap are articles like systematic reviews and meta-analyses, which collate a large body of previous research. Next in line are articles that include a control group and effectively address confounding variables; last in line are case studies and expert opinion. Locating where any particular article falls in this ranking system will help you decide how "good" that article is. If you need some extra help, there are lots of good quality-appraisal checklists available to help you evaluate an article in more detail.
How do I know if it's enough?
“Enough” will depend on the amount of research that is available on the clinical question, the complexity or specificity of the clinical question, and the quality of the sources (as reviewed above in the first question). First, how much research is available on your topic of interest? For example, if you are interested in exploring morphological and syntactic development in children with developmental language disorder (previously specific language impairment), then you're in luck! There is loads of information available on that topic. By contrast, if you're interested in exploring the social communication deficits seen in children who have autism spectrum disorders and comorbid visual impairments who were internationally adopted, you may be disappointed to find that there is a dearth of available research for that very complex and specific clinical question. However, following the points in Question 1 above should guide you to research that is relevant to your clinical question. The relevance, again, will depend on how you focus your lens—are you focusing on the population, the skill, or the intervention approach?
This is related to the second point: how specific is your clinical question? If you are, indeed, looking for research on the social communication deficits seen in children who have autism spectrum disorders and comorbid visual impairments who were internationally adopted, then you will likely need to do three separate searches and read substantially more than you initially planned. In this example, you will likely need to search for information separately on social communication deficits: (1) in children with ASD, (2) in children with visual impairments, and (3) in children who have been adopted internationally. This process results in many more articles to sift through and will then require a lot of synthesis on your part to eventually get to “enough”.
Finally, it depends on your sources. Is your source a meta-analysis, a well-controlled empirical study, a case study, or something else? If you have found one meta-analysis or systematic review that directly relates to your clinical question, that one article alone may be “enough” information (Pro Tip: to find meta-analyses or systematic reviews, try ASHA's Evidence Maps, ASHA's Evidence Based Systematic Reviews, What Works Clearinghouse, the Campbell Collaboration, or the Cochrane Library). If you're reading through half a dozen well-controlled empirical studies, that is likely “enough” to gain an understanding of the main themes or takeaways from the literature (but see below for how you might navigate conflicting information). You may also start to see roughly the same idea repeatedly, and/or find yourself consistently referred back to the same sources. If you are reading one single case study, it is unlikely to completely answer your clinical question (note the distinction between single-subject designs and case studies; see the single-case design standards from What Works Clearinghouse). Keep in mind, though, that a single case study can still be a robust and useful source of information, particularly in the absence of empirical studies.
Ultimately, you have likely reached the threshold of “enough” when you feel like you can take the next necessary step with your client or caseload. That next step might be conducting the assessment or implementing a new treatment approach. Try to reframe your efforts to focus on what the next necessary step is, instead of attempting to learn every detail of all available information. It is plausible that you may need to get “just enough” information to move forward with the client in question and then return to the literature later once you’ve gathered more clinical data with that client. Allow yourself that freedom and flexibility, because you’re doing a great thing by moving towards implementing evidence-based practices.
How do I compare one set of articles saying one thing to another set saying the exact opposite?
This is one of the hardest aspects of using previous research: one source rarely “agrees” with another, and two sources rarely focus on exactly the same thing in the first place. You're going to rely on two things: (1) the consistency of findings from one study to the next and (2) the role of research design (i.e., study population, measures, and methods). In many ways, this is going to involve the same kind of thinking that went into the first question, above! You're always going to be thinking about how the researchers did what they did, in addition to what they found.
To illustrate this process, let's imagine that we are sitting down with six articles that seem promising. The first thing we're going to do is carefully read each article, taking notes or highlighting the key methodological details (i.e., population, measures, intervention details). You can do this on the PDF itself, in a separate document—whatever works for you. Do this for all six articles. As you go, try to take a bird's eye view of the body of articles; look for consistencies and inconsistencies in the results that seem to “travel with” methodological factors. For instance, maybe the studies that report significant effects are the ones where treatment persisted for at least 20 weeks, while the ones with shorter treatment windows didn't find effects. Or maybe the studies that explore parent-mediated intervention for children in preschool report positive findings, but the ones for children who are school-aged do not. As you work your way through the articles, you should be able to find patterns in the evidence (the results and the methods) that help you make sense of the body of research; this will allow you to draw some useful, specific conclusions based on the information you have at hand.
Put another way, try not to think about the “disagreements” between articles as hurdles; think of them as sign-posts to help refine your conclusions.
So, to elaborate: maybe after I read the first article, I think the bottom line is that “Treatment A is an effective strategy for advancing communication skills in children with ASD.” But then I read two more articles, and they don't report any advances in language (though they do in gesture), so I amend my conclusion to “Treatment A is an effective strategy for advancing nonverbal communication skills in children with ASD.” Then, after reading the last couple of articles, I realize that the effects aren't clear for school-age children (though they are for toddlers and preschoolers), so I refine my conclusion further: “Treatment A is an effective strategy for advancing nonverbal communication skills in toddlers and preschoolers with ASD.”
In sum, the process of reviewing the available literature can be overwhelming, but it is a worthy endeavor that will help improve your clinical practice. We encourage you to start small and to engage in conversation with your fellow clinicians. This can be in person or via social media outlets (see Clinical Research for SLPs for an example). Following the steps outlined in this post and making use of the resources provided are excellent first steps. Importantly, consuming literature is an ongoing process, so enjoy the journey!
This blog post by:
Kelly Farquharson, PhD, CCC-SLP and Rhiannon Luyster, PhD
Emerson College, Boston, MA
members of The Informed SLP