It's a fact: when dealing with commercial Internet servers (like Zeus Web Server, Netscape Directory or Microsoft Exchange), you can nearly always find detailed benchmarks produced by specialized companies or dedicated technical writers.
However, in the Open Source world, things look quite different. We all know that Open Source has plenty of advantages over commercial software, but performance is generally not perceived to be among them. Nevertheless, nowadays big computer companies (SGI, IBM, Compaq or HP, to name only a few) and Linux distribution companies (like RedHat, SuSe or TurboLinux) are putting their knowledge and human resources into making Open Source software run as fast as the quickest commercial software. So why do people continue to feel that an Open Source server can be secure, stable and well-behaved, yet remain unsure about its performance?
Well, in my opinion, Open Source has always had a tradition of powering the computers of individuals and small companies that couldn't afford the high prices of large computer facilities from well-known brands, and that could be the origin of such a feeling. Besides, Open Source hackers are normally busy adding capabilities and stability to their products, and usually can't afford the time to speed up their services. Why? I think this is because, as a rule of thumb for general services, it's more important to have a flexible server with lots of capabilities, even at the expense of speed, than a very fast server with little flexibility or stability. For example, Microsoft IIS's supposed superiority in performance over Apache in recent years hasn't helped it much to climb in popularity, and Zeus, which is apparently the fastest Web server, holds a rather small share of the market (around 2%). I'm sure that Apache's stability and, especially, its flexibility are key to explaining this.
Another factor that could contribute to this opinion is that really powerful benchmarking tools are normally commercial, and thus not available to Open Source developers; this could explain why there are relatively few good benchmarks of Open Source servers. They do exist, of course, but most offer numbers for some particular capability while neglecting to measure other important ones, or go for months or years without being updated. Open Source is a very dynamic world, evolving quickly, and a benchmark comparing two servers made only one year ago is, most probably, no longer valid for the current versions.
As an example, some time ago a large company contracted the company I was working for to deploy an e-mail service for all of its 10,000 employees. I knew Open Source servers were a good option for them, but what was the minimum power for a machine that would support such a load? The candidates were Qmail and Sendmail, and I started browsing the Web looking for a serious comparison: I didn't manage to find one! All I was able to find were subjective impressions from e-mail administrators, and some of them thought that Qmail was faster than Sendmail just because its author claimed it was faster when he released version 1.0, three years earlier. Since then, Sendmail has introduced many changes that may have improved its performance well beyond Qmail's (but I still don't know for sure, because I couldn't find a good, recent comparison).
On the other hand, as the Open Source movement matures, performance figures and comparisons between servers will become more and more important, especially for capacity planning in large service provider companies (but also in small ones!) that eventually have to decide whether or not they can adopt an Open Source solution.
And last, but not least, everybody knows how strong the pressure from big software companies can be to get independent evaluators to overpraise their products (see the controversial IIS/Apache comparison conducted by Mindcraft and sponsored by Microsoft). So there is no doubt: to measure the potential of Open Source servers (and even commercial ones!), an Open Source tool is advisable. That way, reproducibility and verification of the benchmark code are guaranteed.
OpenLC tries to fill this (very important) gap.