When you test positive for Covid-19 and the test accuracy is, say, 90%, then you are 90% certain to have the bug, right? Wrong! According to Bayes' theorem the probability that you are actually ill is closer to 15% than to 90%. Most people get this figure completely wrong, yet the arithmetic is not in dispute. Because rapid tests flag as "positive" some people who are not ill (even if only a small percentage of them), you cannot tell whether a positive result is genuine or one of the test's errors, especially when only about 1% of the population has the bug at any time. The math is very simple:
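Here is the standard Bayes formula for a positive test result, written with s for the test's sensitivity, f for its false positive rate, and p for the prior prevalence of the illness (the symbols are just my shorthand, not official test-sheet terminology):

$$P(\text{ill} \mid \text{positive}) = \frac{s\,p}{s\,p + f\,(1-p)}$$

The numerator counts the truly ill people who test positive; the denominator also adds in the healthy people who are wrongly flagged, and it is that second term which drags the result down when p is small.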
According to Bayes' formula, the test accuracy (sensitivity) is diluted by the prior probability of the illness, so the chance that you actually have Covid after a single positive test is still low. To be reasonably certain you need a second test, preferably one independent of the first. If the second also comes back positive, then you are 76% likely to end up in intensive care <g>
Mathematically, the jump is justified because after the first positive test your prior probability P(illness) is now 15%, not the 1% you started with.
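As a sanity check, here is a minimal Python sketch of that two-step update. The exact sensitivity and false positive rate behind the 15% and 76% figures are not stated here, so the numbers below are illustrative: assuming roughly 95% for both, together with the 1% prevalence mentioned above, reproduces the quoted figures closely, while the 90% accuracy used further down gives the ~8% result discussed there.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(ill | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prior                  # really ill and flagged positive
    false_pos = false_positive_rate * (1 - prior)   # healthy but flagged positive
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: ~95% sensitivity, ~5% false positives, 1% prevalence.
p1 = posterior(0.01, 0.95, 0.05)  # after the first positive test   -> ~0.16
p2 = posterior(p1, 0.95, 0.05)    # after a second, independent positive -> ~0.78
print(f"one test: {p1:.0%}, two tests: {p2:.0%}")
```

The exact percentages shift with the assumed accuracy, but the qualitative point holds: one positive rapid test is weak evidence on its own, and a second independent positive strengthens it dramatically.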
Armed with this theoretical background, let's apply it to actual Covid rapid test reliability data. We need numbers for P(illness) and for the test accuracy to feed into the Bayes formula above, and we immediately stumble into several obstacles:
For simplicity, I equate the false positive rate with (100 - accuracy)%, which is not necessarily true, but this is just a paper exercise.
Taking a middle estimate (1 in 100 people infected and 90% accuracy), a single positive test translates to only about an 8% chance of a real Covid infection. So counting single-test positives as "daily Covid infections" grossly exaggerates the real extent of the disease, and quarantining people on the evidence of a single test is equally unwarranted. Health authorities are aware of the problem, but booksellers-turned-politicians won't take heed.
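Plugging that middle estimate straight into the Bayes formula above confirms the figure:

$$P(\text{ill} \mid \text{positive}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} = \frac{0.009}{0.108} \approx 8.3\%$$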
Perhaps they have confused real-time control with statistical sampling. Real-time control of the spread of a disease is infeasible; it would require testing people continuously, 24 hours a day.
If you have read any basic quality control or elementary statistical sampling, you know that you don't need to test everyone to assess a situation. You don't ask everybody how they intend to vote in an election poll; you choose a small sample. You don't need to taste all the pasta in the boiling pot to see whether it is cooked; a single strand of spaghetti will do.
Lately we hear that they are thinking of introducing mandatory weekly tests for all elementary school kids. A classroom is anything but an independent sample for disease: the kids spend 6-7 hours a day in close contact, so if one catches something the rest will show it quickly. You don't need to test everyone, and certainly not with an inaccurate, painful nasal test kit. Teachers are already tested weekly, and that is probably enough of a statistical indicator for the entire class.
Unless, of course, somebody is making money selling test kits, in which case it would all make sense!
Let's hope that with mass vaccinations under way (good luck with the beta testing :), our Experts will calm down and propose reasonable, feasible measures.
You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.
Abraham Lincoln