The data do not provide a universal answer
Unfortunately, there's no one-size-fits-all answer. Data can be grouped and divided in any number of ways, and overall numbers may sometimes give a more accurate picture than data divided into misleading or arbitrary categories. All we can do is carefully study the actual situations the statistics describe and consider whether lurking variables may be present. Otherwise, we leave ourselves vulnerable to those who would use data to manipulate others and promote their own agendas.
Another example of Simpson's paradox
In another example, an analysis of Florida's death penalty cases seemed to reveal no racial disparity in sentencing between black and white defendants convicted of murder. But dividing the cases by the race of the victim told a different story. In both situations, black defendants were more likely to be sentenced to death. The slightly higher overall sentencing rate for white defendants was due to the fact that cases with white victims were more likely to elicit a death sentence than cases where the victim was black, and most murders occurred between people of the same race. So how do we avoid falling for the paradox?
Lurking variables are crucial to interpreting data correctly
Simpson's paradox isn't just a hypothetical scenario. It pops up from time to time in the real world, sometimes in important contexts. One study in the UK appeared to show that smokers had a higher survival rate than nonsmokers over a twenty-year time period. That is, until dividing the participants by age group showed that the nonsmokers were significantly older on average, and thus, more likely to die during the trial period, precisely because they were living longer in general. Here, the age groups are the lurking variable, and are vital to correctly interpret the data.
The importance of lurking variables
Strangely enough, Hospital B is still the better choice, with a survival rate of over 98%. So how can Hospital A have a better overall survival rate if Hospital B has better survival rates for patients in each of the two groups? What we've stumbled upon is a case of Simpson's paradox, where the same set of data can appear to show opposite trends depending on how it's grouped. This often occurs when aggregated data hides a conditional variable, sometimes known as a lurking variable, which is a hidden additional factor that significantly influences results. Here, the hidden factor is the relative proportion of patients who arrive in good or poor health.
Make choices case by case, not on aggregate statistics alone
But before you make your decision, remember that not all patients arrive at the hospital with the same level of health. And if we divide each hospital's last 1000 patients into those who arrived in good health and those who arrived in poor health, the picture starts to look very different. Hospital A had only 100 patients who arrived in poor health, of which 30 survived. But Hospital B had 400, and they were able to save 210. So Hospital B is the better choice for patients who arrive at the hospital in poor health, with a survival rate of 52.5%. And what if your relative's health is good when she arrives at the hospital?
What the statistics leave out can reverse the conclusion
Statistics are persuasive. So much so that people, organizations, and whole countries base some of their most important decisions on organized data. But there's a problem with that. Any set of statistics might have something lurking inside it, something that can turn the results completely upside down. For example, imagine you need to choose between two hospitals for an elderly relative's surgery. Out of each hospital's last 1000 patients, 900 survived at Hospital A, while only 800 survived at Hospital B. So it looks like Hospital A is the better choice.
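The hospital figures in this transcript can be checked directly. A minimal Python sketch, using the counts stated in the text; the good-health counts are inferred by subtracting each hospital's poor-health patients from its 1000 total:

```python
# Survival figures from the transcript: (survived, total) per patient group.
# Good-health counts are inferred: 1000 patients minus the poor-health group.
hospitals = {
    "Hospital A": {"poor": (30, 100), "good": (870, 900)},
    "Hospital B": {"poor": (210, 400), "good": (590, 600)},
}

for name, groups in hospitals.items():
    survived = sum(s for s, _ in groups.values())
    total = sum(t for _, t in groups.values())
    rates = ", ".join(f"{g}: {s / t:.1%}" for g, (s, t) in groups.items())
    print(f"{name}: overall {survived / total:.0%} ({rates})")

# Hospital A wins overall (90% vs 80%), yet Hospital B wins in BOTH groups:
# poor health 52.5% vs 30.0%, good health 98.3% vs 96.7%.
```

Running it shows the paradox in one glance: the aggregate ranking and the per-group rankings point in opposite directions.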
Controlling variables improves the credibility of an experiment
Now that you’ve battle-tested your skills on these hypothetical studies and headlines, you can test them on real-world news. Even when full papers aren’t available without a fee, you can often find summaries of experimental design and results in freely available abstracts, or even within the text of a news article. Individual studies have results that don’t necessarily correspond to a grabby headline. Big conclusions for human health issues require lots of evidence accumulated over time. But in the meantime, we can keep on top of the science by reading past the headlines.
Results without a comparison group are often not credible
To rule out the possibility that some other factor caused weight loss, we would need to compare these participants to a group who didn’t eat breakfast before the study and continued to skip it during the study. A headline certainly shouldn’t claim the results of this research are generally applicable. And if the study itself made such a claim without a comparison group, then you should question its credibility.
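To make the role of a comparison group concrete, here is a sketch with invented numbers (not from any real study): if both groups lose similar amounts of weight, the estimated effect of breakfast itself is close to zero, even though the treated group lost weight.

```python
# Hypothetical weight changes in kg over a study (negative = weight lost).
# All numbers are invented for illustration only.
breakfast_group = [-2.1, -3.4, -1.8, -2.9, -2.5]   # started eating breakfast
comparison_group = [-2.0, -3.1, -1.9, -2.7, -2.4]  # kept skipping breakfast

def mean(values):
    return sum(values) / len(values)

# The naive reading looks only at the treated group's average loss...
print(f"breakfast group lost {-mean(breakfast_group):.2f} kg on average")

# ...but the estimated effect is the DIFFERENCE between the groups:
effect = mean(breakfast_group) - mean(comparison_group)
print(f"estimated effect of eating breakfast: {effect:+.2f} kg")
```

Without the comparison group, the first number looks like a real effect; with it, most of the loss is clearly something both groups shared, such as being weighed regularly.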
The third experiment: breakfast and weight loss
Can you take your skills from the first two questions to the next level? Try this example about the impact of eating breakfast on weight loss. Researchers recruit a group of people who had always skipped breakfast and ask them to start eating breakfast every day. The participants include men and women of a range of ages and backgrounds. Over a year-long period, participants lose an average of five pounds. So what’s wrong with the headline: "Eating breakfast can help you lose weight"? The people in the study started eating breakfast and lost weight, but we don’t know that they lost weight because they started eating breakfast; perhaps having their weight tracked inspired them to change their eating habits in other ways.
Experimental results may not generalize to other groups
We can’t assume that results found in men would also apply to women. Studies often limit participants based on geographic location, age, gender, or many other factors. Before these findings can be generalized, similar studies need to be run on other groups. If a headline makes a general claim, it should draw its evidence from a diverse body of research, not one study.
The second experiment: a controlled aspirin trial in men
Now that you’ve warmed up, let’s try a trickier example: a study about the impact of aspirin on heart attack risk. The study randomly divides a pool of men into two groups. The members of one group take aspirin daily, while the others take a daily placebo. By the end of the trial, the control group had suffered significantly more heart attacks than the group that took aspirin. Based on this situation, what’s wrong with the headline: "Aspirin may reduce risk of heart attacks"? In this case, the study shows evidence that aspirin reduces heart attacks in men, because all the participants were men. But the conclusion "aspirin reduces risk of heart attacks" is too broad.
Results in mice do not generalize to humans
Can you spot the problem with this headline: "Study shows new drug could cure cancer"? Since the subjects of the study were mice, we can’t draw conclusions about human disease based on this research. In real life, early research on new drugs and therapies is not conducted on humans. If the early results are promising, clinical trials follow to determine if they hold up in humans.
The first experiment: a mouse drug trial with a control group
We’ve come up with a simplified research scenario for each of these three headlines to test your skills. Keep watching for the explanation of the first study; then pause at the headline to figure out the flaw. Assume all the information you need to spot the flaw is included. Let’s start with this hypothetical scenario: a study using mice to test a new cancer drug. The study includes two groups of mice, one treated with the drug, the other with a placebo. At the end of the trial, the mice that received the drug are cured, while those that received the placebo are not.
Headlines exaggerate results to attract attention
In medicine, there’s often a disconnect between news headlines and the scientific research they cover. That’s because a headline is designed to catch attention; it’s most effective when it makes a big claim. By contrast, many scientific studies produce meaningful results when they focus on a narrow, specific question. The best way to bridge this gap is to look at the original research behind a headline.
Headlines often fail to convey the substance of the research
New drug may cure cancer. Aspirin may reduce risk of heart attacks. Eating breakfast can help you lose weight. Health headlines like these flood the news, often contradicting each other. So how can you figure out what’s a genuine health concern or a truly promising remedy, and what’s less conclusive?
The conjunction fallacy in the real world
And we choose the option that seems more representative of the overall picture, regardless of its actual probability. This effect has been observed across multiple studies, including ones with participants who understood statistics well: from students betting on sequences of dice rolls, to foreign policy experts predicting the likelihood of a diplomatic crisis. The conjunction fallacy isn’t just a problem in hypothetical situations. Conspiracy theories and false news stories often rely on a version of the conjunction fallacy to seem credible: the more resonant details are added to an outlandish story, the more plausible it begins to seem. But ultimately, the likelihood a story is true can never be greater than the probability that its least likely component is true.
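That last claim can be illustrated numerically. The detail probabilities below are invented; under an independence assumption, each added detail multiplies the joint probability down, and in general P(A and B) can never exceed P(A) for any component:

```python
# Invented probabilities for the individual details of a story.
details = [0.9, 0.8, 0.3, 0.95]

# Assuming independence for this sketch, the whole story requires every
# detail to hold, so the probabilities multiply together.
joint = 1.0
for p in details:
    joint *= p

print(f"joint probability: {joint:.4f}")    # far below any single detail
print(f"least likely detail: {min(details)}")
assert joint <= min(details)  # a story can't beat its weakest component
```

Adding a resonant detail makes a story feel more plausible, but mathematically it can only lower, never raise, the probability that the whole story is true.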
Statements with more details can seem more believable
The more conditions there are, the less likely an event becomes. So why do statements with more conditions sometimes seem more believable? This is a phenomenon known as the conjunction fallacy. When we’re asked to make quick decisions, we tend to look for shortcuts. In this case, we look for what seems plausible rather than what is statistically most probable. On its own, Lucy being an artist doesn’t match the expectations formed by the preceding information. The additional detail about her playing poker gives us a narrative that resonates with our intuitions, making it seem more plausible.
The interpretation of probability
For any possible pair of events, the likelihood of A occurring can never be less than the likelihood of A and B both occurring. If we took a random sample of a million people who majored in math, the subset who are portrait artists might be relatively small. But it will necessarily be at least as big as the subset who are portrait artists and play poker. Anyone who belongs to the second group will also belong to the first, but not vice versa.
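A quick numeric sketch of this subset argument; the counts below are hypothetical, chosen only to show the relationship:

```python
# Hypothetical counts in a sample of a million math majors.
total = 1_000_000
portrait_artists = 5_000        # event A
artist_poker_players = 1_200    # event A and B: a subset of the artists

p_a = portrait_artists / total
p_a_and_b = artist_poker_players / total

# P(A and B) can never exceed P(A): the joint group sits inside group A.
assert p_a_and_b <= p_a
print(f"P(artist) = {p_a:.4f}, P(artist and poker player) = {p_a_and_b:.4f}")
```

Whatever counts you pick, the assertion holds as long as the second group really is a subset of the first, which is exactly the point of the passage.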
People trust their instincts more
Look at the options again. How do we know the first statement is more likely to be true? Because it’s a less specific version of the second statement. Saying that Lucy is a portrait artist doesn’t make any claims about what else she might or might not do. But even though it’s far easier to imagine her playing poker than making art based on the background information, the second statement is only true if she does both of these things. However counterintuitive it seems to imagine Lucy as an artist, the second scenario adds another condition on top of that, making it less likely.
Plausibility is not probability
Meet Lucy. She was a math major in college, and aced all her courses in probability and statistics. Which do you think is more likely: that Lucy is a portrait artist, or that Lucy is a portrait artist who also plays poker? In studies of similar questions, up to 80 percent of participants chose the equivalent of the second statement: that Lucy is a portrait artist who also plays poker. After all, nothing we know about Lucy suggests an affinity for art, but statistics and probability are useful in poker. And yet, this is the wrong answer.