We will never know what is really going on, because an important part of their selection process and methodology is simply not revealed. As such, there is no way to verify or falsify their claims, which arguably pushes this into the realm of pseudoscience.
One can also question the objectivity of NSS Labs when they make statements like this:
> We were impressed by the stability of IE8 (RC1).
An interesting observation is that the report is dated March 12th, 2009. They claim to have done 24/7 testing for 12 days, which means they must have started testing before Opera 9.64 was even released, and yet that very version is listed in their report!
There are other problems as well:
- Safari 4 and Firefox 3.1 (both available in beta at the time) were left out, while IE8 RC1, an equally non-final version, was included
- The report says that 7% of the threats were blocked by all browsers, yet Opera is claimed to have blocked only 5%; if every browser blocked those 7%, no browser's total can fall below 7%
- They started out with 150 000 URLs, but ended up with only 492 in the final test
- Out of the 492 final URLs, a single site could account for up to 10% of them, meaning that in a "worst case scenario" the entire test covered as few as 10 unique sites (the report mentions that a number of sites were pruned after reaching their limit). If a browser did particularly well on one site contributing its full 10% quota, its score would obviously be inflated; see the back-of-the-envelope sketch after this list
- According to the "Malware URL Response" table on page 3, Opera catches 15% at hour 0 and 33% after 5 days, and yet the final rate was set to only 5%
- According to the same table, Chrome consistently catches 25% or more, but the final score is only 16%
- The same table shows that IE8 never reaches 69% at any point, and yet its final score is raised to 69%
- On the other hand, IE7 has a total score of 17% in the table, but the final score is lowered to 4%
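As a quick sanity check on the sample-size problem in the list above, here is a back-of-the-envelope sketch in Python, using only the two numbers the report itself gives (492 URLs, a 10% per-site cap); the variable names are my own:

```python
import math

TOTAL_URLS = 492  # final sample size stated in the report
SITE_CAP = 0.10   # at most 10% of the URLs may come from one site

# If every site contributed its full quota, the entire test could
# rest on this few unique sites...
min_unique_sites = math.ceil(1 / SITE_CAP)             # 10 sites

# ...and a single site could contribute this many URLs:
max_urls_per_site = math.floor(TOTAL_URLS * SITE_CAP)  # 49 URLs

print(f"Worst case: {min_unique_sites} unique sites, "
      f"with up to {max_urls_per_site} URLs from a single site")
```

In other words, one site could supply roughly 49 of the 492 URLs, so a browser's result on a single site could swing its final score by up to ten percentage points.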
One could almost get the impression that NSS Labs is setting the test up in a very specific way to exaggerate the results in a certain direction:
The computations are based on this statement from the report: "So if it is blocked early on, it will improve the score. If it continues to be missed, it will detract from the score." This sounds like a typical statistical trick to exaggerate the differences found between browsers: those that do well early have their scores further improved, while those that do less well have their scores further decreased. With a suitable choice of weighting algorithm, one can maximise the resulting difference. This is how an absolute catch rate of 33% for Opera can be turned into a final score of 5% after the statistical manipulation.
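To illustrate the effect, here is a minimal sketch in Python. To be clear, NSS Labs has not published its actual algorithm: the scoring function, the decay value, and the intermediate catch rates below are all my own assumptions for illustration; only Opera's hour-0 (15%) and day-5 (33%) figures come from the report's table.

```python
def plain_mean(rates):
    """Unweighted average catch rate over all measurement points."""
    return sum(rates) / len(rates)

def front_loaded_score(rates, decay=0.3):
    """Hypothetical weighting matching the report's description:
    blocking early counts heavily, while a URL that keeps being
    missed keeps dragging the score down at every later point.
    NOT NSS Labs' actual (unpublished) algorithm."""
    weights = [decay ** t for t in range(len(rates))]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# Illustrative catch-rate curves over six measurement points
# (hour 0 through day 5). Opera's two endpoints are from the
# report's table; everything else is made up for illustration.
opera = [0.15, 0.20, 0.25, 0.28, 0.31, 0.33]      # starts low, climbs
flat_high = [0.60, 0.62, 0.63, 0.64, 0.65, 0.65]  # starts high, stays flat

for name, rates in [("Opera", opera), ("Flat-high browser", flat_high)]:
    print(f"{name}: plain mean {plain_mean(rates):.0%}, "
          f"front-loaded {front_loaded_score(rates):.0%}")
```

With this weighting, Opera's plain average of 25% drops to 17%, while the flat-high browser barely moves (63% to 61%): a browser whose protection improves over time is punished for its slow start, and an aggressive enough decay can widen the gap arbitrarily without any browser actually changing its behaviour.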
As mentioned above, these anomalies affect not only Opera, but all the browsers in the test. What could in reality be a tiny and insignificant difference is turned into what looks like a huge gap in the final report. Do the other browsers really block less than half of what IE8 does?
The test also measures success at preventing malicious pages from being downloaded, but if Opera only shows its warning after the page has been downloaded, it will automatically fail a lot of tests even though the user is actually warned. It is not the download itself that is dangerous; the harm comes from what happens with the page afterwards, but the report does not take that into consideration.
This report is receiving quite a bit of attention from the media. From a quick glance at the numbers, my preliminary conclusion is that this is just another Microsoft marketing trick. By carefully manipulating methods and statistics, you can make a set of numbers show just about anything.
I wonder if the other browser vendors have investigated the report, and if they plan to respond in an official manner. I don't know if we will offer any official statements on this, but I don't think what appears to be rather obvious manipulation of the numbers to exaggerate differences should go unquestioned.
It does seem that I am not the only person who is not convinced. Are there any other reports out there that don't simply repeat Microsoft's claims without question?
In order to look more closely into the claims in this report, I have mailed NSS Labs and requested the URL list. Check back later for any updates. (Update: They never sent any URL lists.)