So Inside Triathlon has published a new aero test of some big-name aero bikes... and concluded that one particular model is 53, 81 or 153 seconds faster than its competitors over a 70.3 bike leg.
What's interesting and slightly confusing is their statement immediately following the results:
"As stated in the printed article, the test is imperfect, as are all bike
aerodynamic tests. Several factors—most notably the influence of a
rider—could make the real life performance of these bikes different than
the results of Inside Triathlon’s test."
Er, well, yes: next time I want to send a bike round the course by itself I'll bear these tests in mind... so why are we testing like this again? Tunnel time isn't cheap, and sweeping through yaw complicates things further, particularly when drive-side and non-drive-side aerodynamics differ.
Real-world aero tests, where n=1 and n=me, obviously do have a lot of benefit and should, of course, not be lumped in with "all bike aerodynamic tests".
Perhaps they're still trying to figure out how to use and interpret aero tests that actually have meaning... kudos for doing that, but there are other ways to design an aero study.
The drag figures they recorded did vary at zero yaw: measurements taken after sweeping from +20 to -20 degrees and back to 0 differed from the initial zero-yaw runs. A better way of presenting the results would have been to report the error associated with each measurement, for example:
Trek Speed Concept 9 Series: 435 grams +/- 18 grams
Cervélo P5 Six: 495 grams +/- 10 grams
Specialized S-Works Shiv: 525 grams +/- 28 grams
Orbea Ordu GDi2: 606 grams +/- 19.5 grams
Even better would be some statistical analysis of the results: a standard deviation, a standard error, or better still a confidence interval ("we're 99% confident the true value lies within this range"). It's unclear how many measurements were actually taken.
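To make the point concrete, here's a minimal sketch of how a confidence interval could be reported for each frame. The drag runs below are made up for illustration, since the article doesn't publish its raw data:

```python
import statistics
from math import sqrt

# Hypothetical repeated zero-yaw drag runs for one frame (grams of drag force).
runs = [438.0, 431.0, 442.0, 429.0, 435.0]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)   # sample standard deviation
se = sd / sqrt(len(runs))     # standard error of the mean

# 99% confidence interval using the t critical value for n-1 = 4 degrees of freedom.
t_99 = 4.604                  # t(0.995, df=4), from standard tables
ci = (mean - t_99 * se, mean + t_99 * se)

print(f"{mean:.1f} g, SD {sd:.1f} g, SE {se:.1f} g, "
      f"99% CI {ci[0]:.1f} to {ci[1]:.1f} g")
```

With only a handful of runs the interval is wide, which is exactly the information a reader needs before trusting a 20-gram difference between bikes.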
Secondly, they've weighted the yaw results *again*. (Sigh).
"In addition to the amount of wind resistance, yaw angle also changes
with rider speed (faster rider, shallower yaw; faster wind, wider yaw),
so we calculated the fraction of time a rider would spend in various yaw
angle ranges when riding at 23 miles per hour and weighted the drag
created at each yaw angle accordingly. These were the results for a
rider traveling at 23mph in 8.1mph wind."
Er, does this mean you have a course where the rider spends equal time heading in every compass direction? I don't know about you, but race courses are often out-and-back on a single stretch of road, which concentrates a large amount of time in particular yaw directions. They collected met data from 49 major cities from Seattle to Miami and arrived at an average wind speed of 8.1mph... perhaps they should also have analyzed a variety of courses and derived average yaw percentages from typical conditions at those sites.
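For reference, the yaw angle the bike actually sees follows from simple vector addition of rider speed and wind speed. A quick sketch using their 23mph rider and 8.1mph wind (the sweep over wind directions is my own assumption, not a calculation from the article):

```python
from math import atan2, cos, degrees, radians, sin

def apparent_yaw(rider_mph: float, wind_mph: float, wind_angle_deg: float) -> float:
    """Yaw angle (degrees) seen by the bike for wind arriving at
    wind_angle_deg measured from the direction of travel (0 = headwind)."""
    theta = radians(wind_angle_deg)
    cross = wind_mph * sin(theta)             # crosswind component
    head = rider_mph + wind_mph * cos(theta)  # along-track airspeed
    return degrees(atan2(cross, head))

# Sweep wind directions in 15-degree steps for a 23mph rider in 8.1mph wind.
yaws = [apparent_yaw(23.0, 8.1, a) for a in range(0, 181, 15)]
print(max(yaws))  # peak yaw stays near 20 degrees, reached before a pure 90-degree crosswind
```

The point being: which of those yaw angles dominate depends entirely on the course geometry relative to the wind, not just on average wind speed.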
Weighting the results completely obfuscates the value of testing at yaw. Objects that perform well at high yaw may not perform as well at low yaw, and vice versa. Once combined, the final figures may sit closer together simply by the nature of weighted averaging, yet not be representative of real-world experience. It would be better to present separate cases for zero, low and high yaw and let people have the data that might influence their choice in those conditions... you're unlikely to swap bikes for a low- or high-yaw race if you only own one, but other choices, like helmets or wheels, can easily change with conditions.
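A toy example of how weighting can hide a real difference. All drag numbers and time weights here are invented for illustration:

```python
# Two wheels whose drag curves cross at moderate yaw: a time-weighted
# average can rank them nearly identical even though the right choice
# differs sharply between low-yaw and high-yaw courses.
yaw_bins = [0, 5, 10, 15, 20]                  # degrees
time_weights = [0.30, 0.35, 0.20, 0.10, 0.05]  # assumed fraction of race time per bin

drag_a = [500, 490, 470, 450, 440]  # grams: strong at high yaw
drag_b = [470, 475, 485, 505, 530]  # grams: strong at low yaw

avg_a = sum(w * d for w, d in zip(time_weights, drag_a))
avg_b = sum(w * d for w, d in zip(time_weights, drag_b))
print(round(avg_a, 2), round(avg_b, 2))  # the weighted averages land close together
```

The weighted averages differ by barely a gram, yet at 20 degrees of yaw the two wheels are 90 grams apart. That per-yaw detail is exactly what the weighting throws away.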
"Ideally, an athlete would measure real wind speed on each individual
racecourse and run a calculation to find the drag they expect to face
during a race then pick accordingly from their stockpile of race wheel.
This is, of course, not practical for most athletes so we use this