Representatives of Mips, the Swedish company behind the patented MIPS (multi-directional impact protection system) technology used in many modern sports helmets, have issued a statement questioning the testing methods behind riding helmet rankings released last week.
The rankings, produced by the Virginia Tech Helmet Lab as the result of a three-year research initiative, tested and scored 40 riding helmets—including 13 models with the MIPS system—based on their ability to reduce concussion risk. In Virginia Tech’s testing, the MIPS helmets landed throughout the rankings, from the top spot to several near the bottom and many in between, mixed with traditional helmets.
“At Mips, we welcome the new benchmark initiative to evaluate equestrian helmets, yet aspects of the test and rating methods leave room for improvement,” the Dec. 8 statement from the company read. “Mips … provides the MIPS safety system, which is intended to help reduce harmful rotational motion that might be otherwise transferred to the user’s head for certain impacts.
“After carefully evaluating the test method and ratings process, Mips believes that the STAR ratings system should adopt additional testing methods.”
In the statement, Peter Halldin, Mips co-founder and chief science officer, said Virginia Tech’s pendulum testing rig focused too heavily on vertical velocity, such as that created by a rider falling straight toward the ground headfirst, and did not adequately account for oblique and rotational impacts.
“If a horse and its rider have a speed forward during their fall, there will be both vertical and horizontal velocity relative to the ground, and rotation could also be induced at the initial contact with the ground due to tangential force,” the statement said. “To be able to replicate this phenomenon, another test method is required.”
Back on Track, the makers of Trauma Void helmets, which use MIPS technology, issued a statement echoing Mips’ concerns while also calling independent third-party testing a “critical step in building customer confidence and instilling safety in the culture of the industry.”
“Direct impact testing—be it tangential (rotational), straight, low-velocity, or high—all play a part in keeping our customers safe and confident that they have made a correct choice in helmet safety,” the statement read. “The science and process of testing, and the results therefrom are conditional and are a guide only to the safety of the helmet being tested. The proper helmet size and fit, characteristic of head shape (round or oblong) in relationship to the helmet chosen cannot be underestimated and is most important to ensure that your helmet investment protects in the event of an impact.”
While Virginia Tech, in its study literature, says that its pendulum rig measured both “linear and rotational acceleration for each impact, which are correlated to concussion risk,” Mips suggested the absence of a different rotational test could be why MIPS helmets did not perform as well in Virginia Tech’s testing as they have in tests performed by Folksam, a Swedish insurance company that tests a variety of sports helmets. Folksam tests helmets produced to European standards and includes in its protocols an oblique impact test, in which helmets are dropped onto a sandpaper-covered block with a 45-degree angled surface rather than a flat one.
Among 13 equestrian helmets Folksam tested last year, three of its five “recommended” helmets—those that scored 15% better than the median in its testing—were European versions of models Virginia Tech also tested: the Uvex Exxential II MIPS, the Charles Owen My PS and the One K Avance MIPS. While both Folksam and the Helmet Lab rated the Uvex Exxential II MIPS highly (it was ranked 11th and earned four stars from Virginia Tech), the other two helmets that earned recommendations from Folksam received just one star each from Virginia Tech’s scientists.
“The Folksam rating program includes impacts that have a tangential force acting on the helmet, which we suggest Virginia Tech implements to complement their current test method,” Mips’ statement said.
Mips also questioned whether Virginia Tech’s scoring system—a numerical value indicating the number of concussions a user would get in 30 lab-created impacts—put undue emphasis on frontal, low-velocity impacts over high-energy falls.
Virginia Tech tested each helmet’s performance in both low- and high-energy impacts, and among the 40 helmets tested, every model except the top-scoring one performed better at preventing concussions during high-energy impacts. The gap between low- and high-energy performance generally widened toward the bottom of the list, with low-energy performance in particular worsening among lower-scoring helmets. The spread in helmet performance was also much wider for low-energy falls, where scores ranged from 0.6 to 10, than for high-energy falls, where most of the 40 helmets scored between about 1 and 2. A Mips representative said both velocities are important test points but that they should be weighted equally to reflect real-world conditions.
Learn more about Mips from our May 2020 story on the company.