I think arguing over whether the wind was worth 2 or 3 minutes is ridiculous, so I'm trying hard not to do that.
Let me tell you a little more about me. I am not a world record holder in any event. I have never been to Boston to experience the weather, let alone the pleasure and pain of the uphills and downhills of the Boston marathon. But I do have some strong grounding in mathematics, statistics, and probability, even if some experts get probability wrong.
What I see, though, is a slight injustice. David Monti writes an article saying that the ARRS declared Boston should not be considered "excessively wind aided" because they consider the results "statistically valid". People are attacking Ken Young and the ARRS for David Monti's interpretation, but in my opinion, David Monti gives the wrong impression. For example, the sports scientists look at it almost purely from a world record point of view, but a quick visit to the website reveals that the ARRS does not consider it a world record, or treat it as such.
You ask the right questions: What do the vague terms "excessively aided" and "statistically valid" mean? What did the ARRS really say, and what did they mean by it? Do they deserve the reaction they are getting from both anonymous letsrun posters and science bloggers?
People are asking specifically for the algorithms, but I insist they do not need the algorithms to understand that no one at the ARRS is lobbying for world record status, or even suggesting that the wind was not a big factor. I also wonder whether these same people demand the same level of algorithmic and data transparency for whatever generated the "back of the envelope" 3-4 minute wind estimates.
But let's see to what extent the questions are answered at the website:
1) Which athletes, which events, over what period:
2) Whose data is being analyzed? (Huh?) What is the methodology?
I don't know. I can only guess it's something like scoring tables, corrected by estimated biases if applicable.
3) What is the rationale behind the choice of data set and the methods used to analyze the data?
How is this different from 1)?
4) How is the conclusion related to, and supported by, the data?
What do you mean by "the conclusion"? Which conclusion exactly? They make world ranking lists based on times corrected by estimated biases.
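To be clear about what "times corrected by estimated biases" could even mean: the ARRS does not publish its algorithm, so everything in this sketch, the bias values, the correction model, the function names, is my own invention for illustration, not the ARRS's actual method.

```python
# Hypothetical sketch only: the ARRS algorithm is not public, so the
# per-course bias estimates and the correction model below are invented.

# Estimated bias in seconds attributable to course and conditions
# (negative = the race ran "fast"). These numbers are made up.
course_bias = {
    "Boston 2011": -180.0,  # e.g. a strong tailwind assumed worth ~3 minutes
    "London 2011": 0.0,
}

def corrected_time(raw_seconds: float, race: str) -> float:
    """Adjust a raw finish time by the estimated bias for that race,
    so performances on different courses can be ranked together."""
    return raw_seconds - course_bias.get(race, 0.0)

# Under these invented numbers, a Boston 2:03:02 (7382 s) would rank
# as if it were a 2:06:02 (7562 s).
print(corrected_time(7382.0, "Boston 2011"))
```

The point of the sketch is only that "statistically valid for ranking purposes" and "world record eligible" are separate judgments: a time can enter the corrected ranking lists while still being excluded from record lists.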
There are also too many links to include here, each with incremental information about how to interpret the presented data. For example, an explanation of competitive rankings:
The IAAF also uses the impermissible Boston data: www.iaaf.org/statistics/toplis...etail.html
malmo wrote: "Taking out the fact that ARRS, a self-appointed authority on what is aided, uses an elevation drop that is FIVE TIMES more generous than what is permitted by the IAAF."
What does the ARRS use the data for? What does the IAAF use the data for? What are the main differences?
According to these links, the ARRS criteria for a world record are stricter than the IAAF's, because the ARRS refused to adopt a new "relaxed" standard that the IAAF arbitrarily adopted:
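For context on the elevation-drop point, here is the back-of-the-envelope arithmetic. The 1.0 m/km limit is the IAAF's published record-eligibility rule; Boston's roughly 139 m net drop over 42.195 km is a commonly cited approximation, not an official survey figure I can vouch for.

```python
# Back-of-the-envelope check of the elevation-drop argument.
# 1.0 m/km is the IAAF record-eligibility limit; the ~139 m net drop
# for Boston is a commonly cited approximation.

IAAF_LIMIT_M_PER_KM = 1.0

def drop_per_km(net_drop_m: float, distance_km: float) -> float:
    """Average net elevation drop per kilometre of the course."""
    return net_drop_m / distance_km

boston = drop_per_km(139.0, 42.195)  # roughly 3.3 m/km
print(f"Boston: {boston:.2f} m/km; "
      f"record-eligible under IAAF limit? {boston <= IAAF_LIMIT_M_PER_KM}")
```

Whatever threshold the ARRS uses internally for its own lists, Boston is point-to-point and well over the IAAF's 1 m/km limit either way, which is exactly why neither body treats a Boston time as a world record.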
I'm not so much "covering" for Ken Young, or the ARRS, as just relaying the tiny bit of research I was able to do with the information at their website, which seems to paint a very different picture than the one people (such as the sports scientists) are reacting to.
What makes you think they are doing more than that? I read the links at the website, and I think they are still only doing that. The Boston times will appear in the all-time lists (not unlike the IAAF's, though in a different format). They will not be included in world record lists. I think the IAAF will treat the times similarly.
The ARRS should simply stick to what they do best: collect results from events around the world, and create updated all-time and record progression lists for categories like "World", "Country", "Year", and "Event".
That's what they do best.
What is it that the ARRS is doing now differently than before?