Despite the primacy you attach to mathematics, virtually all of your comments about it require the use of English - so language, not mathematics, is the vital tool of communication about the study - or, in your case, miscommunication.
This study has been out for a decade. And here you are, relentlessly still trying to reinterpret it so it doesn't say what it clearly says. To debate with you about its contents is to concede you have an argument. You don't. So debate is not worth the time or effort. You are a fanatical bore.
Not quite a decade. A preliminary paper was unofficially made public in 2015, and only published in 2017.
Curious you say genuine debate is not worth the time or effort, because you seem to spend a lot of time and effort not debating me. Is that worth more to you? Is that less boring?
The English words are no good when they are unsupported by the math. Sure, a study clearly says what it clearly says, but before you can comment on what it says, you have to understand the math and its limitations. Usually, limitations can be expressed in a few paragraphs, but here the authors spent one page of the main paper, and seven pages (out of 14) of the Appendix, discussing all the various confounders, and concluded by urging continued use and refinement. (Notably, they have since done neither.)
You keep falsely attributing these English words and interpretations to me, but numbers and phrases like "43.6%", "at least 30%", "Effects of Various Possible Forms of Noncompliance by Survey Respondents", "Distortion", "hasty responding", "serious problem", and "strongly overestimate" come directly from the researchers in their peer-reviewed paper.
The researchers attempted to address all these potential distortions, often using English words like "plausible", and "implausible", and "almost certainly", and "likely". These are statements of belief and believability, unbecoming of scientists purporting to provide a reliable prevalence estimate that has eluded all scientists before them. In science, you cannot pray the limitations away.
Here are some English words the study never says: "nearly 1 in 2" and "likely more than 44%".
I hadn't really noticed before, but in Table 14, for completeness, they failed to envision two more scenarios, and to publish their respective models, which would cause significant over-estimates: "Over-reporting Doping" and "Cheating with 'yes'".
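For anyone curious what such a scenario would do to the numbers, here is a minimal sketch of the generic unrelated-question estimator with a share of blanket-"yes" respondents mixed in. All parameters (the routing probability, the innocuous "yes" rate, the true prevalence) are illustrative assumptions of mine, not the study's actual design values:

```python
import random

# Minimal sketch only: these parameters are illustrative assumptions,
# not the study's actual randomized-response design values.
P_SENSITIVE = 2 / 3   # assumed chance a respondent is routed to the doping question
Q_INNOCUOUS = 0.25    # assumed known "yes" rate of the unrelated innocuous question
TRUE_PREVALENCE = 0.30
N = 200_000

def uqm_estimate(yes_rate, p=P_SENSITIVE, q=Q_INNOCUOUS):
    # Generic unrelated-question estimator: lambda = p*d + (1-p)*q, solved for d.
    return (yes_rate - (1 - p) * q) / p

def simulate(yes_cheater_share):
    # A fraction of respondents answers "yes" regardless of instructions.
    yes = 0
    for _ in range(N):
        if random.random() < yes_cheater_share:
            yes += 1  # noncompliant: blanket "yes"
        elif random.random() < P_SENSITIVE:
            yes += random.random() < TRUE_PREVALENCE  # honest doping answer
        else:
            yes += random.random() < Q_INNOCUOUS      # honest innocuous answer
    return uqm_estimate(yes / N)

for share in (0.00, 0.05, 0.10):
    print(f"blanket-'yes' share {share:4.0%} -> estimated prevalence {simulate(share):.3f}")
```

Under these toy numbers, even a 5% share of blanket-"yes" respondents inflates a true 30% prevalence to roughly 35%, which is the kind of over-estimate the scenario would produce.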
There have been years of discussion about this study. You lost the argument a long time ago. You should move on. Perhaps to a subject like immigrants eating pets - something you could similarly prove is contrary to what the rest of us understand to be the facts.
I find the poster with no arguments declaring past victory wholly unpersuasive. It's like the loser of an election, or a debate, declaring himself the winner without earning the victory on the merits. Only the most gullible and most faithful (or the dishonest) would fall for it.
"Discussion" does not mean the "argument" was ever resolved or concluded. That would require producing arguments and counter-arguments with tangible substance. The "big bang" has been discussed for years, and yet we are still collecting new data raising new questions about the old ideas.
The UQM survey estimates (pick your favorite one) are not reliable, not because I have argued it, but for all the reasons the researchers themselves have published. These limitations and confounders have never gone away, even after the "hasty" fast-responder deletions, but were rationalized away based on preconceived notions of plausibility and likelihood.
Here are some English words the study never says: "nearly 1 in 2" and "likely more than 44%".
Too funny. Here is what the study actually did say in its results section on page 1:
"The estimated prevalence of past-year doping was 43.6% (95% confidence interval 39.4–47.9) at WCA and 57.1% (52.4–61.8) at PAG. The estimated prevalence of past-year supplement use at PAG was 70.1% (65.6–74.7%). Sensitivity analyses, assessing the robustness of these estimates under numerous hypothetical scenarios of intentional or unintentional noncompliance by respondents, suggested that we were unlikely to have overestimated the true prevalence of doping."
---> Estimated prevalence of 43.6%, unlikely to be an overestimate, likely to be an underestimate. Straight from the horse's mouth. Not so hard to understand, but twist and detract away!
And independent of these theoretical models with all their built-in unknowns, the authors performed a "Primary Analysis" exercise to arrive at a more conservative "lower bound" estimate -- an estimate still subject to the same 9 theoretical models of distortion with 11 unknowns.
Freudian slip, or did you actually have a rare moment of honesty there? Yes indeed, the "more conservative lower bound estimate" of the "Primary Analysis" is.... 43.6%:
"3.1 Primary Analysis We obtained strikingly high estimates of the prevalence of past-year doping at both events: 43.6% (95% CI 39.4–47.9) at WCA and 57.1% (52.4–61.8) at PAG. The estimated prevalence of past-year supplement use at PAG was 70.1% (65.6–74.7). These estimates are markedly greater than the results obtained by biological testing at the two events. Specifically, at WCA, 440 athletes received biological testing, and only two (0.5%) were found positive. At PAG, 670 athletes were tested, of whom 24 (3.6%) were positive. Notably, the prevalence of positive analytical findings at PAG was significantly greater than that at WCA (p < 0.001, two-tailed; bootstrapping the statistics D = PPAG − PWCA, where PPAG and PWCA are the resampled prevalence rates for PAG and WCA, respectively; N = 100,000 bootstrap samples; the null hypothesis is mD = 0). Further details are provided in the Online Appendix, Section 3 (basic results)."
Look into Section 3, and you will see why it's indeed likely more than 43.6%.
Hahahahahaha - try to spin that fact away, after you finally admitted it!
You seem to have confronted rekrunner with "facts" and "data" - which he swears to live by. But not the facts and data he chooses to see.
You could be forgiven for being misled, if you had only read the Abstract, rather than the whole paper.
But no need for me to spin anything. These "horses" here are merely making a suggestion: "Sensitivity analysis, ..., suggested that we ...". The next step would be to collect data and make observations to confirm, or contradict, such speculative suggestions.
All my moments are honest, but they can include a rare mistake, e.g. misquoting a section title.
Again, no spinning is necessary when I can refer everyone directly to the published paper.
Under "3 Results", there are two sections: "3.1 Primary Analysis", where the authors did nothing intelligent to correct any of the potential errors identified later; and "3.2 Analysis Using Response Time", describing an exercise they performed, based on response times, to remove a hypothesized (and confirmed) over-estimate due to "carelessness" or "hasty responding".
Contrary to your suggestion, looking into Section 3 (in the Appendix) doesn't show us "why it's indeed likely more than 43.6%". Section 3 gives details of how the survey was conducted, and some of the things they looked at, and eventually points us to Table 4 for the summarized results, which lists no fewer than six (6) separate estimates of prevalence.
The most conservative estimate found in Table 4 is 29.9%, or 27.0% when considering the "standard error of estimate", or 24.1% when considering the 95% Confidence Interval.
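If those three figures relate the way I assume they do (point estimate, minus one standard error, minus 1.96 standard errors for the 95% bound - an assumption on my part, not stated in Table 4 itself), the arithmetic is roughly consistent:

```python
# Back-of-envelope check; the relationship between the three figures is my
# assumption, not something stated in Table 4 itself.
point = 29.9
se = point - 27.0                 # implies an SE of about 2.9 percentage points
lower_95 = point - 1.96 * se      # ~24.2%, close to the quoted 24.1%
print(f"SE ~ {se:.1f} pp, 95% lower bound ~ {lower_95:.1f}%")
```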
I do prefer "facts" and "data" over baseless speculations and conclusions. Here, I was confronted with a partial subset of the available "facts" and "data", followed by a "suggestion" supported not by "facts" and "data", but by theory and assumptions.
That was as meaningful as Trump's statement on childcare. You never realise how ridiculous you sound.
You never realize how boring and ignorant your 33,100 posts sound.
This looks like personal projection again, coupled with a weird analogy.
If you don't like the way I sound, just read the paper for yourself: look for all the "facts" and "data", and look for the researchers' statements of faith that are supported by no "facts" or "data" - statements which cannot substitute for "conclusions" or "facts".
Stop relying on middlemen to give you flawed interpretations based on half the facts, simply because you lack the knowledge and skills to figure out who is lying to you.
No need for you to spin? Then why do you move the goalposts, and why do you pretend I was misled by the authors? The misleading is all yours.
No one ever suggested that all these "sensitivity analyses" with their "numerous hypothetical scenarios" proved anything, but they do show that it's unlikely that less than 43.6% doped in the last 12 months before Worlds 2011.
More spinning. "a rare mistake" from you that was ever so coincidentally fully in line with your misleading propaganda, sure....
"to remove a hypothesized (and confirmed), over-estimate, due to "carelessness" or "hasty responding" - more spinning including another bold lie. That's a sheer hypothesis, nowhere "confirmed" that is was due to carelessness.
What do the authors actually write in 3.2?
... suggesting possible...
... possibly as an artifact ...
... might have ...
"The next step would be to collect data and make observations to confirm, or contradict, such speculative suggestions." Hahahahaha, you are quite the troll.
Speaking of the last 12 months: the number of athletes who doped at some point in their career is evidently higher than 43.6%, for the following reasons:
1) All athletes who doped "only" prior to August 2010 are not included in the 43.6%.
2) All athletes who started doping "only" after August 2011 are not included in the 43.6%.
3) All athletes who were doped by their coaches/handlers/... without their knowledge are not included in the 43.6%.
That much is obvious. How many such athletes are there? Well, you argued a lot about point 3 in the past - so, a lot, according to your earlier statements.
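To illustrate the additivity of those groups, here is a toy calculation; the three group shares are entirely made-up numbers (not data from the study or from anyone's statements), used solely to show how disjoint groups stack on top of the past-year figure:

```python
# Toy illustration only: the three group shares below are made-up numbers,
# used solely to show that disjoint groups add on top of the past-year rate.
past_year    = 0.436  # study's past-year estimate at WCA
only_before  = 0.05   # hypothetical: doped only before August 2010
only_after   = 0.03   # hypothetical: started doping only after August 2011
unknowing    = 0.02   # hypothetical: doped without their knowledge
career = past_year + only_before + only_after + unknowing
print(f"career prevalence would be at least {career:.1%}")  # 53.6% here
```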