
Chapter 1: Introduction

I've made this longer than usual because I lack the time to make it shorter.
—Blaise Pascal

1.1 Description

OpenLC is a set of software tools designed to facilitate benchmarking and stress testing of a wide variety of information servers (such as web, email, FTP, LDAP, databases, and so on). The package is built around a microkernel that contains basic routines for benchmarking tasks, such as accessing intermediate results in real-time (spying the run data), setting up simulated clients, defining scenarios, handling database calls, comparing results of different runs, summarizing data, etc.

OpenLC also offers an API for developers interested in creating clients (called commanders, to distinguish them from the simulated client inside the microkernel) that query services provided within the microkernel.

1.2 Architecture

OpenLC is designed around an open client-server architecture. It is open in that communication between the server and the different clients conforms to the XML-RPC protocol (http://www.xml-rpc.org). In principle, OpenLC uses XML whenever data is exchanged between server and client. This means that any language that supports XML-RPC can be used to develop a commander (see section 1.2.1 for the meaning of the term commander in the OpenLC context).

Nevertheless, in the case of internal servers, modules written in Python (possibly with C extensions if performance is a must) are preferred, mainly because of the convenience of using the same language to access the microkernel services.
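To illustrate the XML-RPC communication described above, here is a minimal, self-contained round trip: a toy server stands in for the OpenLC external server, and a client plays the role of a commander. The method name (echo_scenario) is invented for illustration; the real microkernel API is documented in doc/microkernel-API.txt.

```python
# Minimal XML-RPC round trip using only the Python standard library.
# The method name here is illustrative, not the real OpenLC API.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an ephemeral port on localhost.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda name: "received scenario: %s" % name,
                         "echo_scenario")

# Serve a single request in the background.
thread = threading.Thread(target=server.handle_request, daemon=True)
thread.start()

# The "commander" side: call the server exactly as any XML-RPC client would.
proxy = ServerProxy("http://localhost:%d" % port)
reply = proxy.echo_scenario("config-http.xml")
print(reply)
```

Because XML-RPC is language-neutral, the client half of this sketch could equally be written in any language with an XML-RPC library.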

Figure 1.1 shows the OpenLC architecture. The next sections describe the different components.

Figure 1.1: Diagram showing the OpenLC architecture

1.2.1 Commander

The commander is a client that allows the end user to define run scenarios and to execute commands that control the run. It may be written to use any communication protocol supported by the different external servers. A developer may choose to use the current XML-RPC external server, and design commanders that use the already-defined API (see doc/microkernel-API.txt) to talk with the microkernel via this external server.

The scenario definition file (SDF) is written as an XML configuration file (see client/config-http.xml for an example), but a commander could offer a graphical interface to ease data entry.
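A commander that reads an SDF can parse it with the standard library. The element and attribute names in this sketch are invented for illustration only; see client/config-http.xml for the real schema.

```python
# Sketch of parsing a scenario definition file with xml.etree.
# The schema below is an assumption, not the actual OpenLC format.
import xml.etree.ElementTree as ET

sdf = """\
<scenario name="demo-run">
  <clients count="10"/>
  <task protocol="http" url="http://localhost/index.html"/>
  <task protocol="http" url="http://localhost/about.html"/>
</scenario>"""

root = ET.fromstring(sdf)
n_clients = int(root.find("clients").get("count"))
urls = [task.get("url") for task in root.findall("task")]
print(n_clients, urls)
```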

Included with OpenLC there's a test commander (OLCCommander) that connects to the server and performs several tasks, such as sending a run scenario, starting and stopping a run, sampling real-time data, requesting run statistics or plotting the final results. Read the source code of this commander to see a sample implementation of the API.

Anyone interested in contributing to the OpenLC project can start by writing new commanders that add functionality and use the API in new and interesting ways. Right now, commanders can be written in any language, provided it has an XML-RPC library and the commander conforms to the API.

Now, let's look at the different OpenLC server components in more depth. This section is completely optional; you can skip it and go directly to the installation instructions in chapter 2.

1.2.2 External server

The XML-RPC external server (server/controlCore.py and server/baseCore.py) is the primary point of contact for every client. It exposes an XML-RPC API to clients and coordinates all traffic between clients and the OpenLC microkernel. In short, the external server is the interface that connects the microkernel to the world (via the microkernel API).

Currently there is only one external server, which exposes the microkernel API over the XML-RPC communication protocol. There may be (and will be) more than one external server. However, the XML-RPC server will remain (hopefully for a long time) a central way to access the microkernel functionality. In the future, I may add a SOAP external server.

1.2.3 Microkernel

The microkernel (server/Core.py) is the heart of OpenLC: it contains all the logic necessary for easily building extensible benchmarking tools.

The microkernel is in charge of parsing the scenario (sent by the commander), setting up the simulated service clients (currently implemented as threads, though that may change in the future), starting and stopping them, and then collecting the runtime data and sending it to the external server and the database management module. It coordinates all the actions needed to serve requests coming from commanders. In addition, it will be in charge of logging all requests and actions to ease debugging.
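The thread-per-simulated-client scheme described above can be sketched in a few lines: worker threads each measure their own "requests" and push the timings onto a shared queue, and the coordinator collects them once all workers finish. The names and the fake workload are illustrative, not OpenLC internals.

```python
# Sketch of thread-based simulated clients feeding a shared result queue.
# The sleep() call is a stand-in for a real protocol request.
import queue
import threading
import time

results = queue.Queue()

def simulated_client(client_id, n_requests):
    """Each simulated client times its requests and reports them."""
    for _ in range(n_requests):
        start = time.perf_counter()
        time.sleep(0.001)  # fake workload instead of a network request
        elapsed = time.perf_counter() - start
        results.put((client_id, elapsed))

threads = [threading.Thread(target=simulated_client, args=(cid, 3))
           for cid in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

samples = [results.get() for _ in range(results.qsize())]
print(len(samples))  # 4 clients x 3 requests = 12 samples
```

Using a thread-safe queue keeps the collection step free of explicit locking, which matches the "collect runtime data centrally" role the microkernel plays.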

1.2.4 Database management module

The database management module (server/manageData.py) provides simple methods to handle the data gathered by the service clients: putData and getData, to respectively save and retrieve data objects. Currently the data repository is built on top of a mixture of XML (for Python objects) and NetCDF and HDF5 (for numeric data) databases. This combination forms a very powerful object-oriented database with a high degree of transparency (from the programmer's point of view), robustness, and efficiency.
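The shape of that interface can be sketched with an in-memory stub. The real module persists to XML/NetCDF/HDF5 backends; this stub only mirrors the putData/getData calling convention, and the key format used below is an assumption.

```python
# In-memory stub mirroring the putData/getData interface described above.
# Only the calling convention is modeled; persistence is omitted.
class DataStore:
    def __init__(self):
        self._objects = {}

    def putData(self, key, obj):
        """Save a data object under the given key."""
        self._objects[key] = obj

    def getData(self, key):
        """Retrieve a previously saved data object."""
        return self._objects[key]

store = DataStore()
store.putData("run-001/response-times", [0.12, 0.09, 0.15])
print(store.getData("run-001/response-times"))
```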

1.2.5 Internal server

Finally, the internal server interprets the commands in the run scenario and does the real work of generating load on the target systems. There will be different internal servers (one or several, depending on the goals of the desired load) for each type of service whose load is generated. For example, there is already an internal server for the HTTP and FTP protocols and another for IMAP4. The purpose of an internal server is mainly to encapsulate functionality and to protect the microkernel from accumulating too much complexity. This also allows new internal servers to be written without interfering with the existing ones.

In this version, three internal servers are provided:

  • Local: Only simulates the response times of a hypothetical server, using a predetermined function that returns synthetic response times. There is no need to load an external server, so it is perfect for running local tests and checking that everything in the OpenLC machinery is sane and working. This mode is also useful during commander development, where synthetic times are enough (and sometimes preferred) to test client functionality. More information in section 4.2.1.
  • HTTP: Simulates real HTTP 1.0 clients (and shortly 1.1) by mapping a thread to each simulated client. Each thread requests the different URLs as specified in the scenario input file and returns the response times. If Python is configured with OpenSSL support, secure HTTP (https://) is also supported. In addition, the FTP, Gopher, and file:// protocols are supported by this internal server. See section 4.2.2 for details.
  • IMAP4: Simulates IMAP4rev1 clients by mapping a thread to each simulated client. At the moment, not all IMAP4 commands are supported. Please see section 4.2.3.

All the internal servers read their configuration and task list from the scenario provided by the commander (through the microkernel).
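The Local internal server's idea of "synthetic response times from a predetermined function" can be sketched as follows. The function name, the Gaussian distribution, and its parameters are illustrative assumptions, not the function OpenLC actually uses.

```python
# Sketch of a Local-style synthetic time generator: no real server is
# contacted; timings are drawn from a fixed distribution instead.
import random

def synthetic_response_times(n, mean=0.2, stddev=0.05, seed=42):
    """Return n fake response times in seconds, clamped to be positive."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    return [max(0.001, rng.gauss(mean, stddev)) for _ in range(n)]

times = synthetic_response_times(5)
print(len(times), all(t > 0 for t in times))
```

A generator like this lets the rest of the pipeline (data collection, statistics, plotting) be exercised end to end without any external server.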

In the coming releases I'll be adding more internal servers, which will provide support for protocols like SMTP, POP3, LDAP, and so on.

1.3 Final words (or Call for hackers)

Despite its alpha status, we think OpenLC is quite easy to use and to extend. The goal of the OpenLC project is to develop a lightweight, powerful, and stable load-testing microkernel with a flexible API (see doc/microkernel-API.txt) that can be easily adapted to a wide variety of environments, users, services, and needs.

If you have any suggestions, bug reports, or anything else related to OpenLC, I'll be happy to hear from you!
