It’s common for savvy online shoppers to check third-party reviews before making a purchasing decision. That’s smart, but testing the efficacy of security software is a bit more difficult than determining whether a restaurant had decent service or a clothing brand’s products run true to size.

So, given the arguably higher stakes of antimalware testing, how can shoppers be sure that the product they choose is up to the task of protecting their family from malware? Which reviews are worthy of trust, and which are just fluff?

Red flags in antimalware testing

Grayson Milbourne is the security intelligence director at Webroot and is actively involved in improving the fairness and reliability of antimalware testing. While he acknowledges that determining the trustworthiness of any single test is difficult, some factors should sound alarm bells for anyone looking to honestly evaluate antimalware products.

These include:

The pay-to-perform model

In any test, the humans behind the product being evaluated have a vested interest in its performance. How far they go to influence the results, however, varies. One extreme way to influence results positively is to fund a test designed for your product to succeed. Often, the platform on which a test appears can be a sign of whether this is the case.

“YouTube tests are almost always commissioned,” warns Milbourne. “So, if you see things on YouTube, know that there is almost always someone paying for the test who’s [influencing] the way the test comes out. I try to avoid those.”

If only one product aces a test, that’s another sign it may have been designed unfairly, perhaps with the undisputed winner’s strengths in mind.

Every vendor acing a test

Tests in which all participants receive high scores can be useless in evaluating product efficacy. Because we know catching malware is difficult, and no single product is capable of doing it effectively 100 percent of the time, tests where every product excels are cause for concern.

“If every product aces the test, maybe that test is a little too easy,” says Milbourne. No product is perfect, so be wary of results that suggest so.

Failing to test the big picture

No one piece of software can stop all the threats a user may face all of the time. But many vendors layer their preventative technologies—like network, endpoint and user-level protection—to most effectively protect against cyberthreats.

“Testers are still very worried about what happens when you encounter a specific piece of malware,” says Milbourne. “But there’s a lot of technology focused on preventing that encounter, and reactive technology that can limit what malware can do, if it’s still unknown, to prevent a compromise.”

In addition to how well a product protects an endpoint from malware, it’s also important to test the preventative layers of protection, something that’s largely missing from third-party testing today.

The problem with the antimalware testing ecosystem

For Milbourne, the fact that so few organizations dedicated to efficacy testing exist, while the number of vendors continues to grow, is problematic.

“There are about five well-established third-party testers and another five emerging players,” he says. “But there are well over a hundred endpoint security players, and that number is growing.”

These lopsided numbers can mean that innovation in testing is unable to keep up with both innovation in security products and the ever-changing tactics hackers and malware authors use to distribute their threats. Testing organizations are simply unable to replicate the realities of conditions “out in the wild.”

“When security testing was first being developed in the early 2000s, many of the security products were almost identical to one another,” says Milbourne. “So, testers were able to create and define a methodology that fit almost every product. But today, products are very different from each other in terms of the strategies they take to protect endpoints, so it’s more difficult to create a single methodology for testing every endpoint product.”

Maintaining relationships in such a small circle was also problematic. Personal relationships could easily be strained by a bad test score, and a shortage of talent meant that vendors and testers bounced between these different “sides of the aisle” with some frequency.

Recognizing this problem in 2008, antimalware vendors and testing companies came together to create an organization dedicated to standardizing testing criteria, so that no vendor is caught off guard by the performance metrics being tested.

The Anti-Malware Testing Standards Organization (AMTSO) describes itself as “an international non-profit association that focuses on addressing the global need for improvement in the objectivity, quality and relevance of anti-malware testing methodologies.”

Today, its members include a number of antivirus and endpoint security vendors and testers, normally in competition with one another but collaborating here in the interest of developing more transparent and reliable testing standards for the fair evaluation of security products.

“Basically, the organization was founded to answer questions about how you test a product fairly,” says Milbourne.

Cutting through the antimalware testing hype

Reputation within the industry may be the single most important determinant of a performance test’s trustworthiness. AMTSO, which has been working toward its mission for more than a decade now, is a prime example. Its members include some of the most trusted names in internet security, and its board of directors and advisory board are made up of seasoned industry professionals who have spent entire careers building their reputations.

While none of this is to say there can’t be new and innovative testing organizations hitting the scene, there’s simply no substitute for paying dues.

“There are definitely some new and emerging testers which I’ve been engaging with, and I’m happy to see new methodologies and creativity come into play,” says Milbourne, “but it does take some time to build up a reputation within the industry.”

For vendors, testing criteria should be clearly communicated, and performance expectations plainly laid out in advance. Being asked to hit an invisible target is neither reasonable nor fair.

“Every organization should have the chance to look at and provide feedback on a test’s methodology, because malware is not a trivial thing to test and no two security products are exactly alike,” says Milbourne. “Careful review of how a test is meant to take place is crucial for understanding the results.”

Ultimately, the most accurate evaluation of any antimalware product will be informed by multiple sources. Just as reviews of almost any other product are considered in aggregate, customers should take a mental average of all the trustworthy reviews they can find when making a purchasing decision.

“Any one test is just one test,” reminds Milbourne. “We know testing is far from perfect and we also know products are far from perfect. So, my advice would be not to put too much stock into any one test, but to look at a couple of different opinions and determine which solution or set of solutions will strengthen your overall cyber resilience.”

About the Author

Kyle Fiehler

Copywriter

Kyle Fiehler is a writer and brand journalist for Webroot. For over five years he’s written and published custom content for the tech, industrial, and service sectors. He now focuses on articulating the Webroot brand story through collaboration with customers, partners, and internal subject matter experts.
