
In a world where anyone can say or publish anything online (even some journals are pay-to-publish), it's really hard to figure out whether a study is quality without training in that field.
There is a hierarchy of evidence that can help, though.
At the bottom: case studies and expert opinions. These are weak because they typically rely on limited data, often from a single person.
After that: in vitro (Petri dish) and animal studies. These are useful for starting to answer questions, but they CANNOT be generalized to people.
Many facets of humans are vastly different from cells on a piece of plastic or other animal species. Unfortunately, too often, people mischaracterize non-human studies and make claims about humans.
Randomized controlled trials are considered a gold standard for evaluating medical questions; they often include blinding so that participants and researchers don’t know who has had what exposure, to limit potential skewing of data.
Systematic reviews & meta-analyses can be powerful: these analyze & combine results from multiple similar studies. However: if the studies used in these are garbage, so are the results of these analyses.
Study design also matters for all of these types of studies. Is it relevant to the question being asked?
If you’re selectively searching for a single study that supports your opinion, that’s not an appropriate way to review the data on a topic. That’s called cherry-picking, and it’s really common in the influencer world. There are lots of bad studies out there, and you can find something to support almost any point of view: that’s why we consider the body of evidence on a topic, not just one paper.
Watch the rest of my WIRED Tech Support segment here: https://youtu.be/vj71yGp-8WM?si=o-ALIq0cC7zxb48R
#scicomm #sciencecommunication #science #scienceeducation #scientist #research #scientificresearch #biomedical #facts #factcheck #pseudoscience #medicalresearch #womeninSTEM #healthandwellness #sciencefacts
@dr.andrealove