Updated: February 1, 2008
Design of Experiments (DoE) is a formal mathematical methodology aimed at explaining the interactions between different elements of an experiment through the careful use of precise rules and statistics. If you are interested, you can read more about Design of Experiments on Wikipedia.
You may ask yourself: how difficult can this be? The answer: very few engineers in the world have the knowledge required to properly run experiments. That is why statistical experts are quite often hired as consultants to help companies design and run their experiments. I hope to demonstrate this intriguing topic through a computer-related subject - the effect of security software on the performance of low-end machines.
You must all have friends who, once upon a time, asked for your recommendation about a particular security setup - programs A, B, C versus X, Y, Z - to be run on a PC that is not exactly in the prime of its life. While we all possess enough experience to roughly estimate the right combination of programs, we do not really have mathematical proof to support our claim.
This is much harder than it seems, and it has plagued engineers and scientists for generations. When a single factor influences the output, the experiment is very simple - just measure the response. But what happens when you have 3-4 factors, each with several levels, interacting? How can we assess such a situation? What must our experiment do to gather enough data to support our conclusions?
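To get a feel for why multiple interacting factors complicate things, consider a full factorial design, where every combination of factor levels is tested. The sketch below is purely illustrative - the factor names and levels are hypothetical, not the ones used in this experiment - but it shows how quickly the number of required runs grows:

```python
# A minimal sketch of a full factorial design.
# Factor names and levels are hypothetical, chosen only for illustration.
from itertools import product

# Three factors, each with two levels.
factors = {
    "antivirus": ["ProgramA", "ProgramB"],
    "firewall": ["On", "Off"],
    "ram_mb": [256, 512],
}

# Each run in a full factorial design is one combination of levels.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(runs))  # 2 * 2 * 2 = 8 runs
for run in runs:
    print(run)
```

With just three two-level factors you already need 8 runs; add a fourth factor with three levels and the count jumps to 24. DoE offers ways to reduce this burden (fractional factorial designs, for example) while still capturing the interactions that matter.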
Design of Experiments has a very precise methodology that helps answer these questions. This article is available as a PDF file. See the link below in the download section.
It is quite possible that you may dislike or disagree with the results of this experiment, perhaps because your favorite program was included and found less "successful" than some other program.
You should remember that this experiment was conducted for fun and education - not to bash products or to serve as an "I told you so, X or Y program rulez" kind of argument between security geeks. The programs were chosen simply because they fit the needs of the experiment. A similar experiment can be performed with any other combination of programs - the number of possibilities is endless.
Furthermore, I am not affiliated with any of the companies mentioned in the article. Finally, the experiment is by NO means an official, standardized benchmarking test for software performance. All that said, I hope you will learn something new and quite interesting today. Of course, your comments and suggestions are always welcome. And you might even create an experiment of your own.