Willow Lane was speed rated by my standard process … When possible, I initially compare the race profile (plot of finish position vs race time for the top 50-60% of finishers) to the same race in recent years to see the differences … Often (but not always) it is apparent that the race ran about the same (or 5-10 seconds faster or slower) than the same race in prior years. If the quality of the race is roughly the same, then I use race adjustments from prior years (used to derive those speed ratings) to derive tentative speed ratings for the race being evaluated.
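To make the profile comparison concrete, here is a rough sketch of what that kind of check boils down to (illustrative only — the function names, the 60% cutoff handling, and the position-by-position matching are assumptions, not the actual procedure):

```python
# Sketch of a race-profile comparison: finish time vs. finish place for the
# top ~60% of finishers, this year vs. a prior year of the same race.
from statistics import median

def profile(times_sec, fraction=0.6):
    """Return sorted finish times (seconds) for the top `fraction` of finishers."""
    times = sorted(times_sec)
    cutoff = max(1, int(len(times) * fraction))
    return times[:cutoff]

def profile_offset(this_year, prior_year, fraction=0.6):
    """Median time difference (seconds) at matching finish positions.

    Near 0 means the race ran about the same as the prior year;
    +5 to +10 means it ran roughly 5-10 seconds slower, and so on."""
    a = profile(this_year, fraction)
    b = profile(prior_year, fraction)
    n = min(len(a), len(b))              # crude handling of different field sizes
    return median(a[i] - b[i] for i in range(n))
```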
Since I have never speed rated Willow Lane, I initially used the profile of a composite baseline derived for a decent-quality high school invitational … I assumed the Willow Lane race was a decent-quality race (not Sweepstakes quality, low quality, or one of the other quality baselines in use) … I also apply this "decent quality" baseline to many races I speed rate, as a second opinion at the initial stage … For Willow Lane, this approximated a race adjustment of roughly 78-84 seconds (meaning I add that amount of time to the final times to calculate the speed ratings).
I used 81 seconds to calculate the tentative speed ratings for Willow Lane (for Woodbridge I used 156 seconds).
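As an illustration of how a flat race adjustment turns final times into tentative ratings, here is a minimal sketch. The reference time below is a made-up placeholder, and the exact form of the conversion is an assumption on my part; the roughly 3-seconds-per-point scale is only implied further down, where a 10.36-point median increase is described as "over 30 seconds":

```python
# Illustration only: add the race adjustment to each final time, then convert
# the adjusted time to a rating. The reference time and the conversion form
# are placeholders, not the actual baseline.
SECONDS_PER_POINT = 3.0      # implied below: 10.36 points ~ "over 30 seconds"
REFERENCE_SEC = 1380.0       # hypothetical reference time, purely for illustration

def tentative_rating(final_time_sec, race_adjustment_sec):
    adjusted = final_time_sec + race_adjustment_sec   # +81 for Willow Lane, +156 for Woodbridge
    return (REFERENCE_SEC - adjusted) / SECONDS_PER_POINT

# e.g. tentative_rating(18 * 60 + 5, 81) for an 18:05 finisher at Willow Lane
```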
I then compare the tentative speed ratings for all individual runners to their prior ratings (first for the current season and then for last year when needed) … To do this, I take the results with tentative speed ratings (looking exactly like the results I post on the web) and drop them into a program I wrote … the program extracts each runner’s speed ratings from my database and appends them to the tentative results so I can evaluate them visually, and then statistically if needed.
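The actual program, file formats, and database layout are not shown here, but the merge step is roughly of this form (table and column names are invented for the sketch):

```python
# Rough sketch of appending each runner's prior ratings to the tentative results.
import sqlite3

def append_prior_ratings(tentative_results, db_path="ratings.db"):
    """tentative_results: list of (runner_name, school, tentative_rating).

    Returns the same rows with that runner's prior ratings attached so the
    two can be compared side by side."""
    conn = sqlite3.connect(db_path)
    out = []
    for name, school, tentative in tentative_results:
        rows = conn.execute(
            "SELECT rating FROM speed_ratings WHERE runner = ? AND school = ? "
            "ORDER BY race_date DESC",
            (name, school),
        ).fetchall()
        priors = [r[0] for r in rows]     # current season first, then last year
        out.append((name, school, tentative, priors))
    conn.close()
    return out
```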
For Willow Lane, a quick fit showed the tentative ratings agreed with the prior ratings reasonably well (within plus or minus 2-4 points) … so I used them … They are uncertain because they are based mostly on uncertain prior ratings, and I have no prior Willow Lane data.
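A "quick fit" of that sort could be as simple as looking at how the tentative-minus-prior differences cluster; this is just one way to do it, not necessarily the method used:

```python
# Sketch of a quick fit check on runners who already have a prior rating.
from statistics import median

def fit_check(pairs):
    """pairs: list of (tentative_rating, prior_rating) for runners with history."""
    diffs = sorted(t - p for t, p in pairs)
    return median(diffs), diffs[0], diffs[-1]   # typical shift plus the extremes

# If the differences sit within roughly +/- 2-4 points of zero, the tentative
# race adjustment is kept; a consistent offset would instead suggest revising
# the adjustment (at roughly 3 seconds per rating point).
```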
The questions about the Eagle ID girls and their Woodbridge ratings demonstrate some early-season considerations … The Eagle girls ran in the Varsity Blue race at Woodbridge … In the Varsity Blue Girls A and B races, my database has prior seasonal ratings for 121 girls (mostly just one rating, most from California girls who raced at Cool Breeze or Great Cow) … 103 girls had higher speed ratings at Woodbridge than their first rating, while only 10 girls had lower ratings … the median increase was 10.36 points (over 30 seconds of relative improvement).
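Those Woodbridge numbers are the kind of summary that falls out of a few lines of code over the matched pairs (illustrative only; the field names are invented):

```python
# Summarizing how many matched girls rated higher or lower at Woodbridge
# than in their first race, and the median change.
from statistics import median

def improvement_summary(pairs):
    """pairs: list of (first_rating, woodbridge_rating) for the matched girls."""
    higher = sum(1 for first, wb in pairs if wb > first)
    lower = sum(1 for first, wb in pairs if wb < first)
    med = median(wb - first for first, wb in pairs)
    return higher, lower, med

# For the Varsity Blue A and B girls this summary is 103 higher, 10 lower,
# and a median increase of 10.36 points (over 30 seconds at ~3 sec per point).
```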
The big improvement does not bother me … Woodbridge is a race where kids come to run fast … In their first race earlier in the season, many kids are just starting to get fit, some ran it as a training exercise (this is becoming more common), and many likely did not run all-out … I assume many ran close to all-out at Woodbridge.
Also, since Woodbridge is so fast, it would probably be more accurate to "scale" the speed ratings using a method similar to the one I use for 2.5 mile & 4K races in NY … At Woodbridge, this scaling would have minimal effect on the faster runners … It would affect the slower runners the most (the reduction increasing as times get slower) … I’m guessing a girl with an 80 speed rating would get scaled back to about 77, and a girl with a 50 rating would be scaled back to 43-44 … This does not bother me.
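Purely as an illustration of what such a scaling might look like: the pivot and slope below were picked only so the two example numbers above roughly fall out, and do not describe the actual NY 2.5-mile / 4K scaling method:

```python
# Hypothetical scaling: leave the fastest runners essentially alone and pull
# lower ratings down by an increasing amount. Pivot and slope are invented.
def scale_rating(rating, pivot=105.0, slope=0.115):
    if rating >= pivot:
        return rating                          # minimal effect on the fastest runners
    return rating - slope * (pivot - rating)   # larger reduction as ratings get lower

# scale_rating(80) -> ~77.1, scale_rating(50) -> ~43.7
```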
Speed ratings are a runner-to-runner measurement ... In shorter, faster races, the faster runners may not separate themselves from the other runners by as much time ... and that makes a difference in the speed ratings.