These types of case studies are super valuable. Chapeau to Malindi for doing her due diligence before signing a contract, and to Alex for writing an excellent summary of it. As mentioned above, there’s such a paucity of information out there on these new shoes that any well-controlled experiment, even with a sample size of one, is still informative. That said, Rojo also correctly identified the qualification you have to attach to it - her response cannot necessarily be extrapolated to others, given that we know the response to these shoes is variable.
The lack of info comes from two issues:
1 - Of the shoe companies that do test the economy of their shoes against competitors (and the ones that do, and do it robustly, are fewer than you’d think - until recently this was by no means standard practice), they obviously won’t trumpet findings showing their shoe is inferior. Both public pieces on Saucony’s shoes touted equivocal findings: their own media story about Ward, and then this one, which, as Alex mentioned above, had to be approved by Saucony before it could be discussed publicly. That’s not to say they wrote it or pushed it, but we probably wouldn’t be reading about it if the shoes had tested worse - though maybe we would, as Malindi wouldn’t have signed with them!
2 - A university research lab is where you could run a well-controlled, unbiased comparative study. Done properly, this provides the gold standard of information. However, it’s expensive - the cost of the shoes (not chump change, especially with these) is small compared to the equipment charge rates, staff time, and institutional overhead if it were a traditional research contract. These costs can be flexible depending on the university, the researchers doing the study, and their resources. But who provides the funding? Probably not a traditional funding source (a grant). A shoe company would likely only commission one if they knew from their own testing that their shoe was at least equivocal, if not likely better. It’s a big investment, and it would likely be confirming “pilot” data they had already been collecting. In the case of the VaporFly, Nike had already prototyped and developed the shoe, and knew it had a substantial benefit, before they started the first study at Colorado. It’s awesome that they commissioned that study (and a genius marketing move) - it was very well executed and helped us understand the shoe in a way we rarely get to with new products. But we never saw any studies on any of the Zoom Streak iterations (aside from the VaporFly study!). Otherwise, it’s up to researchers to piece together independent funding sources (or fold it into another project) to cover the materials and any help.

Moreover, in addition to the cost, it can be a slow process. Running the tests is just the first step (actually the second - recruiting enough runners who can do the tests, and scheduling them, comes first, and that’s no small feat!). Then come the data analysis and writing, and then peer review, which can be painfully prolonged. The results could reach the public a year (or more!) after the final tests in the lab. Depending on how the research team prioritizes the project, you could certainly speed this up.
However, we really do need more studies characterizing the differences (or lack thereof) between the shoes, and maybe most importantly, studies characterizing individual responses to the shoes.
For us as athletes and consumers, we want to know which shoe is the most efficient overall - and which is the most efficient for us.
For us as fans of the sport, we want to know if the guys and gals racing each other are racing on a level (or, perhaps, slightly-curved-underneath-the-metatarsals) playing field, and what the magnitude of the benefit is so we can understand the context of the performance against history. 2:06 post-2017 is not 2:06 pre-2017.
This isn’t a stance for or against the shoes; it’s just necessary translation for the transition period we’re in.
Now, imagine we could do another comparative study. One of the tricky pieces: which shoes do we test?
Nike Next%
Adidas Adios Pro
Saucony Endorphin Pro
Those would be the immediate ones I’d like to see.
However, I’d really like to know how the New Balance FuelCell Elite and the new Brooks Hyperion 2.0 stack up, and I’d like the other plated shoes included just to see if there’s any benefit at all.
Also, do you test an old racing flat as a control?
This becomes a lot of trials in the lab, as ideally you test each shoe twice at a given speed. With 5 or 6 shoes and 4-minute trials, that’s 40-48 minutes of running, but a few hours in the lab with a lot of tying and untying. The large number of trials also introduces more opportunity for measurement and experimental noise, whether in the set-up or within the athlete. Not too bad, but you’d probably want to spread it over several days so you can test different speeds as well.
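To make the lab-time numbers concrete, here’s a quick back-of-the-envelope sketch. It only encodes the assumptions already stated above (two trials per shoe at a given speed, 4-minute trials); the changeover-time figure is my own rough guess, not anything from a real protocol.

```python
# Rough lab-time estimate for a shoe-comparison protocol.
# Assumptions from the discussion: each shoe is tested twice at each
# speed, each trial lasts 4 minutes. The per-trial changeover time
# (shoe swap, tying/untying, rest) is a made-up illustrative number.

def running_minutes(n_shoes, trials_per_shoe=2, trial_min=4):
    """Minutes of actual running for one speed."""
    return n_shoes * trials_per_shoe * trial_min

def session_minutes(n_shoes, trials_per_shoe=2, trial_min=4, changeover_min=8):
    """Total session length once changeovers are included (guessed value)."""
    n_trials = n_shoes * trials_per_shoe
    return n_trials * (trial_min + changeover_min)

for n in (5, 6):
    print(f"{n} shoes: {running_minutes(n)} min running, "
          f"~{session_minutes(n) / 60:.1f} h in the lab per speed")
```

This reproduces the 40-48 minutes of running quoted above, and shows why a session balloons to a few hours - the running itself is the smallest part of the visit.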
The other piece that hasn’t been characterized is how any of these benefits change (or don’t) over long miles out on the roads. Do some shoes hold their benefit better (or offset a decline better)? Again, that’s really important to know, both for selecting shoes and for watching racing.
I’d love to see WA fund something like this (maybe they are? I’ve heard equipment research might be one of their new directives) because, as I said, understanding how all this equipment affects performance - both relative to other current athletes and relative to past athletes - is critical to enabling fans to fully engage with and understand a race.