Stress Testing with The Grinder and Cactus
The Grinder is an open source project developed by Paco Gómez and maintained by Philip Aston, two engineers from BEA (Weblogic). It is straightforward to use and enables developers to perform white box stress testing; that is, testing where the developers have knowledge of the internal workings of the code under test. It is positioned between black box stress testing tools such as Mercury LoadRunner or OpenSTA and unit testing tools such as Cactus. You can look upon The Grinder as a stress test framework that manages test cases developed by the user.
I recently had a requirement to load test an application for which the client had already developed a number of unit test cases using Cactus. I had a day to develop the test suite and get some useful results. Mission impossible? Well, I knew that The Grinder can run JUnit test cases, and as Cactus inherits from this library, in theory I had a solution.
The Grinder consists of two parts: a collector console that listens on a multicast address for statistics generated by Grinder agents, and the agents themselves. The agents can be distributed over a number of hosts in order to better simulate real-world loads, while the console is launched on a single host. To run an agent it is sufficient to copy grinder.jar, together with any libraries required by the test cases, to the target machine. I normally place all the libraries under a single directory and write a batch (shell) script that automatically builds the classpath from the .jar files in the Grinder lib directory and then runs the agent. That way I can drop in new jars as and when needed without fiddling with the classpath.
The following snippet gives an idea of the code to do this:
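A minimal sketch of such a launcher, assuming a Unix shell, a ./lib directory holding the jars, and the standard net.grinder.Grinder entry point (adapt the paths and main class to your installation):

```shell
#!/bin/sh
# build_classpath DIR: print a colon-separated list of every .jar in DIR
build_classpath() {
    cp=""
    for jar in "$1"/*.jar; do
        [ -e "$jar" ] || continue          # glob matched nothing: skip
        cp="${cp:+$cp:}$jar"
    done
    printf '%s\n' "$cp"
}

GRINDER_LIB=${GRINDER_LIB:-./lib}
CP=$(build_classpath "$GRINDER_LIB")
echo "Using classpath: $CP"

# Launch the agent (uncomment once the jars are in place):
# java -classpath "$CP" net.grinder.Grinder grinder.properties
```

Dropping a new jar into the lib directory is then all that is needed; the next launch picks it up automatically.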
Grinder Properties File
The Grinder properties file (grinder.properties) is used to configure the stress tool. As unit tests based on Cactus have already been developed, we would like to reuse them. However, it is important to remember that Cactus is designed for unit testing, and this poses some problems. In a previous paper (Unit Testing Servlets with Weblogic and Cactus) we discussed how we used Cactus to test a Web-based system in which data from one response was used as input to subsequent requests: cookies are not supported, and we are essentially testing a conversation.
A simple solution is to parse the response with HttpUnit in the endXXX method and store the returned data in static class variables in the test case. These can then be read directly by the beginXXX methods of other test cases. It is important to remember that both of these methods execute on the client side and so have no access to server session data. Bear in mind also that class-level data is shared by all threads. This is not a problem when you are using Cactus to run a series of unit tests, as there is only a single thread, but it does cause problems when a Grinder agent uses multiple threads to simulate different users. As a side note, on the last three J2EE developments I have worked on, one of the major headaches has been developers using class-level data in a servlet environment: exactly the same issue we face with our Cactus tests.
There is a better solution: if you have some unique key that identifies a session, you can maintain client-side properties. In our case, connecting to the Web server generates a session key called a Zone, which is passed to and fro with each test. If you are using the Web server to maintain session state, either through cookies or URL rewriting, then you can access the session id through the Cactus session object.
The best strategy is to create a singleton class that maintains a Hashtable of HashMaps. After obtaining an instance of this class, the session id is used as a key into the Hashtable to return that session's properties as a HashMap. I used a Hashtable to store the references to the shared session objects because this access needs to be synchronized. The session properties themselves are not shared between threads and can use the faster HashMap implementation.
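A sketch of such a registry might look like the following; the class and method names (SessionRegistry, getSession) are my own, not taken from the article:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// Singleton registry mapping a session key (the Zone in our case) to that
// session's private property map.
public class SessionRegistry {
    private static final SessionRegistry INSTANCE = new SessionRegistry();

    // Hashtable is synchronized, so concurrent Grinder threads can safely
    // look up their own session map.
    private final Hashtable<String, Map<String, Object>> sessions =
        new Hashtable<String, Map<String, Object>>();

    private SessionRegistry() {}

    public static SessionRegistry getInstance() {
        return INSTANCE;
    }

    // Return the per-session property map, creating it on first use.
    // The inner HashMap is unsynchronized: only one thread (one simulated
    // user) ever touches a given session's properties.
    public synchronized Map<String, Object> getSession(String sessionId) {
        Map<String, Object> props = sessions.get(sessionId);
        if (props == null) {
            props = new HashMap<String, Object>();
            sessions.put(sessionId, props);
        }
        return props;
    }
}
```

An endXXX method can then store parsed response data under its Zone key, and the next test's beginXXX method retrieves it with the same key, so threads never trample each other's state.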
As many applications also need to read login and password data, I could centralize access to this information through the same class. For most users it is probably easiest to maintain this data in a file: the class could open the file and implement a synchronized getLogin() method returning a String containing the next login and password.
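A minimal sketch of such a credential feed; the CredentialFeed class name is illustrative, and it reads from a Reader rather than opening a file directly so the example stays self-contained (pass a FileReader in practice):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Hands out one "login:password" line per call so that each simulated
// user receives distinct credentials.
public class CredentialFeed {
    private final BufferedReader reader;

    public CredentialFeed(Reader source) {
        this.reader = new BufferedReader(source);
    }

    // Synchronized so concurrent Grinder threads never get the same line.
    public synchronized String nextLogin() {
        try {
            return reader.readLine();   // null once the file is exhausted
        } catch (IOException e) {
            throw new RuntimeException("could not read credentials", e);
        }
    }
}
```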
But remember, I only had a day to implement all this, so these ideas would have to wait. I stuck with the class-level data and relied on the fact that The Grinder can run each test process in its own Java virtual machine. As the tests are fairly lightweight, I hoped the memory footprint would be reasonable, allowing me to run many tests on the same machine.
The following is a sample configuration file:
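The fragment below is an illustrative reconstruction rather than the original file: the key names follow The Grinder 2 conventions (check them against your version's documentation), the process, thread, and cycle counts mirror the figures used in these tests, and the classpath entries, console address, and test suite class are placeholders:

```properties
# 20 worker JVMs, one thread each, five cycles of the test suite
grinder.processes=20
grinder.threads=1
grinder.cycles=5

# Keep each worker JVM small so many can run in parallel
grinder.jvm.arguments=-Xms16m -Xmx32m

# Cactus test classes plus the Weblogic auxiliary classes
grinder.jvm.classpath=/test/classes:/weblogic/lib/weblogic.jar

# Where the console is listening for statistics
grinder.consoleAddress=192.168.1.10
grinder.consolePort=6372

# Run a JUnit (Cactus) test suite
grinder.plugin=net.grinder.plugin.junit.JUnitPlugin
grinder.plugin.parameter.testSuite=com.example.AllTests
```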
Each JVM process is launched with a small amount of memory; this allows us to run more JVMs in parallel, as the tests themselves have a small memory footprint.
The location of the Cactus unit test classes, which are part of the application under test, and the Weblogic auxiliary classes (principally the javax packages) need to be added to the classpath.
For our tests, 20 Grinder processes are run in parallel (this takes around 400 megabytes of virtual memory). Each JVM runs a single thread (running more causes errors for the reasons outlined above), and we repeat the tests five times; the first cycle can really be viewed as a warm-up. It is also necessary to specify the address and port of the Grinder console that will collate our test results. If Grinder agents are run on different hosts, which is a good idea when simulating real test loads, then this address must correspond to the console host.
Finally, The Grinder needs to be told that it is running a JUnit test suite, and we give the location of the test suite class.
We also need to configure Cactus through cactus.properties, which is located in the directory from which the tests are launched:
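An illustrative cactus.properties, using property names from the Cactus documentation; the host and application names come from the tests described here:

```properties
# Base URL of the application under test
cactus.contextURL=http://hyperion:7001/esweb
# Name of the Cactus servlet redirector deployed in the web application
cactus.servletRedirectorName=ServletRedirector
```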
This enables us to specify the target host and application. In this case the agents target the host hyperion on port 7001 and exercise the application esweb; this is where the Weblogic server running the application under test is located.
The Grinder Console
The Grinder console should be started before the agents. Once the console and agents are running, click Action->Start Processes to start the tests. As can be seen in the screen shot, the console recognises all the standard Cactus tests that have been developed and displays real-time statistics for each test.
The following screen shots show the results of running our tests.
The Graphs tab shows that 38 samples have been collected and that four Cactus test cases are currently running. The Grinder gives us useful throughput figures in the left-hand panel: the average response time is 7.8 seconds, and we are processing 2.46 transactions per second. The application actually connects to an application located on an off-site mainframe, so this sort of response time is normal in this case. Of more interest to us is how scalable the application is and whether any errors occur when running the tests.
The Results tab shows the final results from the tests. We have run each test 100 times with 20 users in parallel. It should be noted that these tests were run between two NT boxes, and the box under test was also executing a number of other operations, so the results are not representative of our target deployment architecture.
The Grinder already understands JUnit test cases, and because Vincent Massol, the author of Cactus, chose to maintain this interface, configuration was simply a case of specifying the test suite to run in the grinder.properties file. The main choices are how many agents to run and finding hosts to run them on. All that is needed on each host is a Java virtual machine and the libraries used by The Grinder and the test cases.
The Grinder gives useful test results at the same level as the unit test cases themselves. While perhaps not a substitute for black box testing, it fills a need currently occupied by tools such as Wily Introscope and Kimble xLink.
About The Author
David George is an independent software engineer located in Paris, France. He specialises in Java Performance Tuning and Consulting on J2EE project management.
©1994-2006 All text and images copyright: www.abcseo.com