I had a friend say that he heard the following from a source he trusts: “the BinaxNow [COVID-19 antigen self test] is 85% accurate for identifying you have COVID and over 98% accurate in determining you don’t have COVID. So… if it says you are negative there is less than a 2% chance you actually have COVID. If you test positive there is a 15% chance you might really be negative.”
This is the exact opposite of what my own research had found, which said that positive results should be trusted much more than negative results. Who is right? Well, it turns out there are two ways to look at this sort of problem, both of which are correct.
The first way is to look at how likely a given test is to give a false positive or false negative. For example, all the sources I’ve seen (I’ll quote from one the FDA published a week ago: https://www.fda.gov/media/147253/download) say that “If you have a positive test result, it is very likely that you have COVID-19,” because the antigen tests are very specific to COVID proteins. On the other hand, the article says that “It is possible for this test to give a negative result that is incorrect (false negative) in some people with COVID-19,” the reasoning being that if you take the test too soon, too late, or while asymptomatic, there might not be enough protein for the test to detect. This aligns with what I’ve been consistently hearing for roughly a year: don’t trust negative results, especially if there are symptoms, but do trust positive results.
Now for the other way to look at it, which I suspect is how my friend’s source was looking at it. Instead of asking how likely a given test is to produce a false positive, ask how likely a given person is to receive one. Even if false positives are rare for the test itself, when the vast majority of people taking the test don’t have COVID, those rare errors can make up a sizable share of the positive results; something like 15% of positives being false isn’t unheard of in medicine. The same logic drives the share of false negatives down: since most people testing don’t have COVID, most negative results are correct, so a 2% chance that a given negative is false makes sense.
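To put numbers on that, here’s a minimal sketch of the arithmetic (this is just Bayes’ theorem applied to the 85% and 98% figures from the quote above; the 5% prevalence and the 10,000-person cohort are made-up assumptions for illustration, not measured values):

```python
# Sketch: how prevalence changes what a test result means.
# Sensitivity/specificity come from the quote above; the 5% prevalence
# is an assumed number purely for illustration.
sensitivity = 0.85   # P(test positive | you have COVID)
specificity = 0.98   # P(test negative | you don't have COVID)
prevalence = 0.05    # assumed fraction of test-takers who actually have COVID

# Imagine 10,000 people take the test.
people = 10_000
sick = people * prevalence                      # 500
healthy = people - sick                         # 9,500

true_positives = sick * sensitivity             # 425
false_negatives = sick * (1 - sensitivity)      # 75
true_negatives = healthy * specificity          # 9,310
false_positives = healthy * (1 - specificity)   # 190

# Of all the positive results handed out, how many are wrong?
positives = true_positives + false_positives
print(f"Chance a positive is false: {false_positives / positives:.1%}")  # ~30.9%

# Of all the negative results handed out, how many are wrong?
negatives = true_negatives + false_negatives
print(f"Chance a negative is false: {false_negatives / negatives:.1%}")  # ~0.8%
```

With those assumptions, roughly 31% of the positive results are false while under 1% of the negatives are, even though the test itself gives false negatives (15%) far more often than false positives (2%). Nudge the assumed prevalence up to around 12% and the split lands close to the 15%/2% figures my friend’s source quoted.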
Finally, it’s also possible that my friend’s source has more up-to-date information than I’ve found, or that the source has false or misinterpreted information. I don’t know. Either way, though, even if there were only an 85% chance of a positive being a true positive, that’s still too high to ignore in my opinion, especially if the person is symptomatic. Whether or not it’s COVID, who wants to get sick with anything?
Anyhow, I find it interesting that both viewpoints are correct; it just depends on your perspective. A single test can simultaneously be more likely to give a false negative than a false positive, while the false positives it hands out end up more common than the false negatives. Statistics can be weird sometimes, lol.
p.s. I’ve learned this concept from a number of sources, but explanations are usually buried in a book or podcast. If you know of a standalone explanation of this concept, post what you find in the comments!
Some other resources:
- Difference between sensitivity (true positive rate) and specificity (true negative rate): https://www.technologynetworks.com/analysis/articles/sensitivity-vs-specificity-318222
- For another example of how perspective changes things, here’s a game that, from your perspective, gives you a 97.2% survival rate, but from your friend’s perspective, has a 90% chance of killing you: https://www.youtube.com/watch?v=Uyw7d579nxY
(Photo taken by me, George, of a BinaxNOW COVID-19 Antigen Self Test kit)