Reposting a thread by Kristen Panthagani, MD, PhD @kmpanthagani on “the Cochrane mask analysis: the Denmark study”


Kristen Panthagani, MD, PhD @kmpanthagani
Creator of | Emergency Medicine resident at @Yale_EM | R #dataviz nerd | #SciComm writer


Yale School of Medicine


Kristen Panthagani, MD, PhD is a resident physician and Yale Emergency Scholar at Yale New Haven Hospital, completing a combined Emergency Medicine residency and research fellowship. She graduated from the Medical Scientist Training (MD/PhD) Program at Baylor College of Medicine in 2021, receiving a PhD in Genetics and Genomics in 2020 for her thesis work studying the human microbiome and the health impacts of Hurricane Harvey. Her research interests include population health, epidemiology, clinical informatics, communication, and misinformation. During the pandemic, she developed an interest in science communication and education for the general public and founded the independent website ‘You Can Know Things,’ which helps explain the science of the pandemic in a way everybody can understand, with an emphasis on addressing misinformation with evidence-based medicine.


I have a lot of thoughts about the Cochrane mask analysis, and will try to summarize them soon. But right now I’m only going to talk about one of them: the Denmark study.

As you have probably heard by now, the Cochrane review looks at randomized-controlled trials (RCTs) of mask use for respiratory infections. The majority of studies were conducted before the pandemic, and only two during the pandemic: one in Denmark, and one in Bangladesh.

The Denmark study was conducted in spring of 2020 and published in March 2021. When it was published, it caused a lot of controversy and many people pointed out flaws…

Most commonly noted was the report that less than half of participants in the mask group actually wore their masks as they were supposed to. However, one could argue that this reflects reality (not everyone will wear masks perfectly), so it doesn’t necessarily invalidate the study.

Others pointed out that this study only tests if masks protect the wearer, and doesn’t assess the impact of reducing transmission when the mask-wearer is the one who is contagious. This is certainly a valid point to consider, but wasn’t the intent or design of the study.

But there is one flaw that I have not seen anybody discuss, that in my opinion makes the results essentially uninterpretable, and that is this…

To determine whether people got COVID, they rely primarily on antibody tests. (They also count PCR-based tests and healthcare-documented infections, but these were in the minority; the majority of “COVID cases” were based on antibody results.)

If someone tested negative at the beginning of the study and positive at the end, they counted that as a COVID infection during the study.

This wouldn’t be such a big issue, except for the fact that the study was only… one month long.

And antibody tests don’t tell you if you caught COVID today, they tell you if you caught COVID 1 to 3 weeks ago.

To make the numbers simple (and to err on the conservative side), let’s say that when someone catches COVID, it takes on average about a week and a half (11 days) for their antibody test to turn positive. So for the first 10 days after catching COVID, their antibody test will be negative.

How does this affect their study design? First, anyone who catches COVID during the last 10 days of the 30-day study will not show up in the antibody results. That essentially reduces the effective study period by about a THIRD.

For those testing positive by antibodies, the study is actually just 20 days long, not 30, because in those last 10 days, anyone who catches COVID will not have had time to develop COVID antibodies by the end of the study period, and will test negative.

Second, anyone who catches COVID in the 10 days *before* the study started will be counted as “catching COVID during the study.” Their first antibody test will be negative (because antibodies haven’t had time to develop), and their second test will be positive.

This means the antibody results, which accounted for the majority of COVID infections reported in the study, were actually a mix of people who caught COVID in the roughly 10 days before the study started and people who caught it during the first 20 days of the study.

Said another way, about one third of the “COVID cases” recorded from antibody tests were likely from infections that occurred before the study had even started.

We do not expect a difference between the mask and no-mask groups before the study started. So if this pre-study period accounts for a significant proportion of the infections analyzed in the study, it can drown out a true effect of masking.
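To make this dilution concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are hypothetical, not from the study: it assumes equal-sized arms, a constant daily infection risk across the whole detection window, and a made-up true risk ratio of 0.7 (a 30% protective effect) while the study is running.

```python
def observed_rr(true_rr: float, pre_days: int, in_days: int) -> float:
    """Risk ratio you would measure via antibody testing when pre-study
    infections (risk ratio = 1, since randomization hadn't happened yet)
    are mixed in with in-study infections (risk ratio = true_rr).
    Assumes a constant daily infection risk and equal-sized arms."""
    mask_cases = pre_days * 1.0 + in_days * true_rr
    control_cases = pre_days * 1.0 + in_days * 1.0
    return mask_cases / control_cases

# Hypothetical true 30% reduction (RR 0.7), diluted by 10 pre-study days
# mixed into 20 in-study days of detection:
print(round(observed_rr(0.7, pre_days=10, in_days=20), 2))  # → 0.8
```

Under those assumptions, a real 30% effect shows up as only a 20% effect, pushing the measured result toward the null.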

And if we look at the trend in COVID cases in Denmark while this study was happening, they’re going down… which means COVID was more prevalent in the 10 days before the study started than in the 20 days while the study was actually going on.

Of course, not everyone will develop antibodies on exactly day 11… antibodies take anywhere from 1 to 3 weeks to develop, so some may develop them a little earlier, but others even later, which would only magnify the effect.

For example, if it actually takes an average of two weeks for someone to turn positive on an antibody test, then about *half* of the “infections during the study period” documented by antibodies are likely to have occurred before the study started.
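The arithmetic behind these fractions can be sketched in a couple of lines. Assuming (for illustration) a fixed seroconversion lag and a constant infection rate, the final antibody test captures a 30-day window that is shifted back in time by the lag, so the pre-study share of detected “study” infections is simply lag/30:

```python
def prestudy_fraction(lag_days: float, study_days: float) -> float:
    """Share of antibody-detected 'study' infections that actually occurred
    before the study began. The seroconversion lag shifts the detection
    window back in time: it runs from lag_days before the study start to
    (study_days - lag_days) after it. Assumes a constant infection rate."""
    return lag_days / study_days

print(round(prestudy_fraction(10, 30), 2))  # → 0.33, about a third
print(round(prestudy_fraction(14, 30), 2))  # → 0.47, nearly half
```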

The study also measures infection by positive SARS-CoV-2 PCR tests and/or “health care diagnosed COVID.” These are much better measures as, unlike antibody tests, they would indicate active, current infection.

However, these are in the minority, so the study is not statistically powered to detect a difference in these outcomes between the mask and no-mask groups.

If the study had been 12 weeks long, the lag effect of antibody testing wouldn’t make up such a large proportion of the study period, and the distortion would be much smaller. But it wasn’t, and the lag significantly muddles the results, to the point where they’re basically uninterpretable.
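As a rough illustration of why a longer study would dilute less, here is the same hypothetical dilution arithmetic (10-day seroconversion lag, constant infection risk, made-up true risk ratio of 0.7) run for a 30-day versus a 12-week (84-day) design:

```python
def diluted_rr(study_days: int, lag_days: int = 10, true_rr: float = 0.7) -> float:
    """Risk ratio measured by antibody testing when pre-study infections
    (risk ratio 1, since randomization hadn't happened) are mixed with
    in-study infections (risk ratio = true_rr). Assumes constant daily
    infection risk and equal-sized arms; all numbers are hypothetical."""
    in_days = study_days - lag_days
    return (lag_days * 1.0 + in_days * true_rr) / study_days

print(round(diluted_rr(30), 3))  # → 0.8: heavy dilution in a 30-day study
print(round(diluted_rr(84), 3))  # → 0.736: much closer to the true 0.7
```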

This is one of only two published RCTs on masks that we have from the COVID pandemic. Because of these flaws, the results are quite muddled and shouldn’t be used to guide policy decisions regarding the effectiveness of masks during a pandemic.


