Translation: I got $300 per hour to pick apart a test that works completely fine. I equate suboptimal to unreliable and engage in other forms of sophistry that are not befitting a science so skilled in the art….
Who has deemed your skills worth paying you to oversee such a test? Classic social media...
Perhaps you were wearing diapers and don't remember, but USADA was thoroughly embarrassed by the Lagat case, which proved that their test did NOT work fine, beyond just the transport-induced degradation. And I do consider it suboptimal to decline, for no clear reason other than convenience or laziness, to use available techniques to ensure sample integrity, particularly when appropriate controls are not used to demonstrate that sample stability is not being compromised.
Rather than addressing the issues, USADA limited observer mobility within the lab, put a gag on the observer under penalty of being thrown out, refused to allow them to ask questions until the end, and then conveniently made the lab head unavailable to answer questions as promised...or at any point after.
But yeah...totally a process on the up-and-up.
Jeez I was busting balls, I got ratioed. 😭
What you wrote does sound horrible and I hate when institutions get away with that kind of crap. It was a long time ago, so I don’t remember if heads rolled, but they should have.
Four professors of biochemistry and molecular biology from Norway are worried by the Peter Bol doping case, saying analysis of EPO test results is subjective.
Doping tests are like the Covid tests we've all seen; both are completely useless. How many times have you seen someone say they tested negative for Covid but were sure they had it, so they tested themselves 10 more times?
Exactly. Thank you. I posted the same thing and my post was removed.
Probably because you clearly do not understand science my friend.
I want someone to explain the science on this one. Explain both circumstances: an athlete is using EPO and an athlete that is not using EPO.
The main reasons are not so much "scientific" as they are human, and human errors can be unintentional or intentional. From the moment the samples are collected until the results are interpreted by an "expert", there are dozens of steps requiring human involvement, each of which can introduce failures due to incompetence, negligence, subjectivity, and potentially malice and ill will arising from conflicting interests, priorities, agendas, and pre-judgements.
As we learned way back in 2003 with Lagat, the urine-based EPO test is not a simple procedure, and the interpretation of results is subjective, made worse by the difficulty of, or sometimes the complete lack of, independent corroboration in a process carefully designed to protect WADA labs, sometimes at the expense of accused athletes.
Many of the issues related to EPO positives have been made public in several cases, e.g. those of Norwegian race walker Erik Tysse, Czech triathlete Vojtech Sommer, German runner Benedikt Karus, and Irish sprinter Steven Colvert.
Here is a link detailing many of the issues with the urine EPO test, as interpreted by the WADA labs, and adjudicated:
"It is worrisome that the outcome of an athlete’s doping test can be determined inside the heads of a few people and without objective and robust criteria."
"After examining the doping case against Erik Tysse ... and now the case against (Steven) Colvert, it is our opinion that some WADA-accredited laboratories and also sports judges do not recognise such ambiguities and base their conclusions and verdicts on uncertain and inconsistent results and interpretations."
Jon Nissen-Meyer, Erik Boye, Bjarne Osterud and Tore Skotland
Some WADA-accredited laboratories and also sports judges base their conclusions and verdicts on uncertain, inconsistent results and interpretations. That’s fatal for those individual athletes who are innocent and for the cred...
I don't have the answer. But with all the impact this could have, they should have a rule to throw out both conflicting samples and retest the athlete. It's not worth the hassle these days. Both samples need to corroborate each other.
Well before EPO came on the scene in the 90's, no Kenyan had run faster than 1:43.5, 3:32.5, or 13:06, so we can safely rule out all the Kenyans on the sub 3:30 list.
Credit to "Thoughtsleader" for finding this recent study in another thread, further confirming that EPO testing is not 100% reliable: In this study 4-6% of A-sample tests were "false presumptive findings" not reproduced in B-sample tests:
Well before EPO came on the scene in the 90's, no Kenyan had run faster than 1:43.5, 3:32.5, or 13:06, so we can safely rule out all the Kenyans on the sub 3:30 list.
Of the 12 athletes who had run faster than 1:43.50 by the end of '91, 5 were Kenyans. And as of the end of '91, all the reigning Olympic champions at your listed distances were from Kenya. What exactly is your point, Coecheatett?
There are two parts to this answer. One is statistics, the other one is chemistry/medicine.
Statistics: a test for a substance relies on a certain sequence of processes, materials, human labor, etc. Along the way, things can become contaminated, not only by manipulation as suggested by others here, but also simply as part of the process. Think about wearing a white shirt: maybe during the day you somehow got a stain on it, and when they hold that small piece of your shirt under the microscope, they say "hey, this is brown, not white" (positive A sample); then someone checks a different part of your shirt without the stain and says "nvm, it's white" (negative B sample). There are a million ways the stain could have gotten there. So the test relies on probability: it says "ok, given what we know about white shirts, our process is so good that in 99% of cases we correctly identify a white shirt." To a scientist, that's fine. To the very, very rare athlete with the small stain, it's tough. And this goes both ways: there can be false positives or false negatives. The more sensitive you make the test to detect stains (i.e., get a crazy good microscope), the more likely you are to find even the tiniest stains (traces of doping). But the more sensitive it is, the more likely it becomes that you zero in on a tiny, tiny stain that really does not say much about the shirt as a whole. You can look this up in scientific terms: sensitivity and specificity.
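The shirt-stain analogy maps directly onto the standard test metrics. Here's a minimal sketch, with made-up illustrative numbers (these are NOT real EPO-test figures), of how sensitivity and specificity turn into expected false calls across a pool of samples:

```python
# Hypothetical numbers for illustration only -- not real EPO-test statistics.
sensitivity = 0.99   # P(test positive | athlete is doping)
specificity = 0.99   # P(test negative | athlete is clean)

n_dopers = 20        # assumed true dopers in a testing pool of 1000
n_clean = 980        # assumed clean athletes in the same pool

expected_true_positives = n_dopers * sensitivity          # ~19.8
expected_false_negatives = n_dopers * (1 - sensitivity)   # ~0.2
expected_false_positives = n_clean * (1 - specificity)    # ~9.8

print(f"Expected true positives:  {expected_true_positives:.1f}")
print(f"Expected false positives: {expected_false_positives:.1f}")
```

Note that even with a "99% accurate" test, the clean majority generates almost as many false positives as the dopers generate true positives.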
The second is medical science and depends on the substance: often, the test does not detect the actual substance; it detects certain proteins or other biochemical structures that are associated with that substance and concludes that it must be the substance. EPO tests have been criticized for this by cyclists (cycling is a dirty sport, so take it with a grain of salt, but there is at least some scientific evidence from the lab to back it up). And there can be other substances that shouldn't be in a human body but that carry the same or very similar proteins as the doping substance. We don't eat clean, our food and water are treated with chemicals, etc., so the test might not be able to distinguish between different sources of the particular thing it's looking for (not "monospecific" is the term for it). And sometimes they use more or different testing methods on the B sample to rule out that this happened, and find that the other method for the same substance comes back negative. So yeah, it can be legit.
To add to this, 99% accuracy is going to result in many false positives when testing hundreds or especially thousands of samples (presumably, about 1 per 100, right?).
The key concept is "positive predictive value"--the likelihood that a "positive" result really is a positive. This in turn is related to the sensitivity and specificity of the test as well as the proportion of "true positives" in the population being tested (you can Google the exact formula). Positive predictive value can be quite low when the proportion of "true positives" in a group is low. Think about it this way: if no one was using EPO, every single positive result would be a false positive, right?
In principle, you could make a test more stringent (higher concentration needed to make a "positive" call) to enhance specificity. That would increase the likelihood of positive calls being true positives, at the cost of missing some true positives. But since we don't know in advance the proportion of dopers in a group, it's hard to optimize the assay.
TLDR: Since the likelihood that an A "positive" is a "true positive" depends on the unknown proportion of true positives, the B sample is needed to ensure that positives are really positive.
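The PPV relationship described above can be written out directly from Bayes' rule. A sketch with illustrative sensitivity/specificity values (the real EPO-test figures aren't public, so these are assumptions):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(truly doping | test positive), by Bayes' rule."""
    true_pos = sensitivity * prevalence                 # doping AND flagged
    false_pos = (1 - specificity) * (1 - prevalence)    # clean AND flagged
    return true_pos / (true_pos + false_pos)

# If almost nobody is doping, most positives are false positives:
print(positive_predictive_value(0.99, 0.99, 0.001))  # ~0.09
# If 10% of the pool is doping, a positive is far more trustworthy:
print(positive_predictive_value(0.99, 0.99, 0.10))   # ~0.92
```

This is exactly why the prevalence term matters: the same assay produces mostly false positives in a clean population and mostly true positives in a dirty one.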
I don't have the answer. But with all the impact this could have, they should have a rule to throw out both conflicting samples and retest the athlete. It's not worth the hassle these days. Both samples need to corroborate each other.
And any "leaked" A-sample "result" released before confirmation should render both samples null and void, with a requirement that the athlete provide additional samples for a certain period of time, such as a competition season.
The leaker should be found and then permanently banned from anything to do with testing. Leakers declare themselves judge and jury, and often make decisions based on how they feel about an individual. They might even tamper with a sample because they don't like an athlete.
To add to this, 99% accuracy is going to result in many false positives when testing hundreds or especially thousands of samples (presumably, about 1 per 100, right?).
The key concept is "positive predictive value"--the likelihood that a "positive" result really is a positive. This in turn is related to the sensitivity and specificity of the test as well as the proportion of "true positives" in the population being tested (you can Google the exact formula).
The way around looking too closely at wiggly lines on a chart (the 1%) is to not act on such results, and instead test that athlete more frequently for several months. But certain people won't do that, the worst being Travesty "Travis T." Tygart, who is more interested in putting notches on his belt than in being unbiased. His personal bias is all over his notches. He has granted immunity to a lot of cheaters to catch the one person he hated, like Lance Armstrong. I have no issue with Armstrong being banned, but I do have an issue with all the cheaters Tygart let off the hook.
If the testing was done using isoelectric focusing (IEF), as some have suggested, then it is well known that the sensitivity of EPO detection using this method is low. One study showed it's only 58%.
That means, statistically, you can expect that out of 100 samples that actually contain EPO, around 42 may come back as false negatives. In the case of sample A and sample B, although they come from the same urine specimen, if the levels were low then it's very possible that one of the samples was negative. That's why you need to use another method to confirm.
Why WADA would allow a method with such poor sensitivity to be used to screen athletes is beyond me. In the diagnostic world, the FDA would never approve a test with a sensitivity lower than 90%.
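If that 58% sensitivity figure is roughly right, the chance of a discordant A/B pair for an athlete who really used EPO is easy to estimate. A sketch that assumes, for simplicity, that the A and B analyses behave like independent trials (in reality they come from the same specimen, so the results are correlated and the true discordance rate would differ):

```python
# Simplified model: treat A and B analyses of a doped sample as
# independent trials, each with the 58% sensitivity quoted above.
sensitivity = 0.58

# For a sample that genuinely contains EPO:
p_both_positive = sensitivity * sensitivity       # ~0.34
p_a_pos_b_neg = sensitivity * (1 - sensitivity)   # ~0.24 (the "conflicting" case)
p_both_negative = (1 - sensitivity) ** 2          # ~0.18 (doper walks free)

print(f"A+ and B+: {p_both_positive:.3f}")
print(f"A+ but B-: {p_a_pos_b_neg:.3f}")
print(f"A- and B-: {p_both_negative:.3f}")
```

Under this crude independence assumption, roughly a quarter of genuinely doped samples would produce the exact A-positive/B-negative conflict discussed in this thread, which supports the point about needing a confirmatory method.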