DB (SQL) automated stress/load tools?
I want to measure the performance and scalability of my DB application. I am looking for a tool that would allow me to run many SQL statements against my DB, taking the DB and a script (SQL) file as arguments (plus the necessary connection details, e.g. host name, port, login...).

Ideally it should let me control parameters such as the number of simulated clients and the duration of the test, and let me randomize variables or select them from a list (e.g. SELECT FROM ... WHERE value = @var, where var is read from the command line or randomized per execution). I would like the test results to be saved as a CSV or XML file that I can analyze and plot. And of course, in terms of pricing, I prefer "free" or "demo" :-)

Surprisingly (for me at least), while there are dozens of such tools for web application load testing, I couldn't find any for DB testing! The ones I did see, such as pgbench, use a built-in DB based on some TPC scenario, so they help test the DBMS configuration and H/W, but I cannot test MY DB! Any suggestions?

Specifically, I use Postgres 8.3 on Linux, though I could use any DB-generic tool that meets these requirements. The H/W has 32 GB of RAM, while the size of the main tables and indexes is ~120 GB. Hence there can be a 1:10 response-time ratio between cold-cache and warm-cache runs (I/O vs. RAM). Realistically I expect requests to be spread evenly, so it's important for me to test queries against different pieces of the DB.
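For illustration, this is the kind of randomized query I have in mind (a hypothetical sketch; the table name and id range are made up):

-- Pick a random primary key on each execution so successive runs
-- hit different pieces of the table (and both cold and warm cache).
-- random() and trunc() are available in Postgres 8.3.
SELECT * FROM items WHERE id = (trunc(random() * 1000000) + 1)::int;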

Feel free to also contact me via email. Thanks!

-- Shaul Dar ([email protected])

Temptation answered 20/5, 2009 at 11:28 Comment(1)
Asked a sub-question about an SQL Server database stress-loading tool (possibly without test support): question #4123610 -- Swim

JMeter from Apache can handle different server types. I use it for load tests against web applications; others on the team use it for DB calls. It can be configured in many ways to generate the load you need, it can be run in console mode, and it can even be clustered across different client machines to minimize client-side overhead (which would otherwise skew the results).

It's a Java application and a bit complex at first sight. But still we love it. :-)
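For the DB case, the usual JMeter setup is a JDBC Connection Configuration plus a JDBC Request sampler, with a CSV Data Set Config feeding a fresh parameter value on every iteration. A sketch of the relevant settings (the query, column, and file names here are made up):

JDBC Request (Query Type: Prepared Select Statement)
  Query:            SELECT * FROM items WHERE customer_id = ?
  Parameter values: ${cust_id}
  Parameter types:  INTEGER

CSV Data Set Config
  Filename:       cust_ids.csv   (one value per line)
  Variable Names: cust_id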

Cellophane answered 20/5, 2009 at 11:49 Comment(1)
Thanks Rudolfson. I've indeed spent several days working with JMeter, starting before my post, and have now reached a point where it's working. As you say, it can be annoying to set up and figure out (I've struggled with how to pass parameters so I can run different DB queries, and also had issues with it freezing up due to memory consumption). But the investment was well worth it. JMeter is really well thought out, extensive and slick. -- Shaul (Temptation)

k6.io can stress test a few relational databases with the xk6-sql extension.

For reference, a test script could be something like:

import sql from 'k6/x/sql';

// open (or create) a local SQLite database for the test
const db = sql.open("sqlite3", "./test.db");

export function setup() {
  // create the table once, before any virtual users start iterating
  db.exec(`CREATE TABLE IF NOT EXISTS keyvalues (
           id integer PRIMARY KEY AUTOINCREMENT,
           key varchar NOT NULL,
           value varchar);`);
}

export function teardown() {
  // release the connection when the test finishes
  db.close();
}

export default function () {
  // each virtual-user iteration inserts a row, then reads the table back
  db.exec("INSERT INTO keyvalues (key, value) VALUES('plugin-name', 'k6-plugin-sql');");

  let results = sql.query(db, "SELECT * FROM keyvalues;");
  for (const row of results) {
    console.log(`key: ${row.key}, value: ${row.value}`);
  }
}
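To run this you need a k6 binary with the extension compiled in; something like the following should work (assuming Go and the xk6 builder are installed; the VU count and duration are arbitrary):

# build a k6 binary that bundles the xk6-sql extension
xk6 build --with github.com/grafana/xk6-sql

# run the script with 10 concurrent virtual users for 30 seconds
./k6 run --vus 10 --duration 30s script.js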

Read more in this short tutorial.

Bozovich answered 19/7, 2021 at 10:21 Comment(0)

The SQL Load Generator is another such tool:

http://sqlloadgenerator.codeplex.com/

I like it, but it doesn't yet have an option to save the test setup.

Wee answered 22/6, 2012 at 18:14 Comment(0)

We never really found an adequate solution for stress testing our mainframe DB2 database, so we ended up rolling our own. It actually just consists of a bank of 30 PCs running Linux with DB2 Connect installed.

29 of the boxes run a script that simply waits for a starter file to appear on an NFS mount and then starts executing fixed queries based on the data. The fact that these queries (and the data in the database) are fixed means we can easily compare against previous successful runs.

The 30th box runs two scripts in succession (the second is the same one all the other boxes run). The first empties and then repopulates the database tables with our known data, and then creates the starter file to allow all the other machines (and itself) to continue.

This is all done with bash and DB2 Connect, so it's fairly easy to maintain (and free).
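The core of the waiter script is tiny; a minimal sketch (hypothetical paths and names; the real scripts also record timings for comparison against the baseline):

#!/bin/bash
# Block until the coordinator box creates the starter file on the NFS mount.
while [ ! -f /nfs/loadtest/start.flag ]; do
  sleep 1
done

# Run the fixed query set through DB2 Connect and capture the output.
db2 connect to PRODDB user loadtest using "$DB2_PASSWORD"
db2 -tf fixed_queries.sql > "results.$(hostname).out"
db2 connect reset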

We also have another variant that runs random queries based on analysis of production information collected over many months. It's harder to check the output against a known-good baseline, but in that circumstance we're only looking for functional and performance problems (so we check for errors and queries that take too long).

We're currently examining whether we can consolidate all those physical servers into virtual machines, both on the mainframe running zLinux (which would use the shared-memory HyperSockets for TCP/IP, basically removing the network delays) and on Intel platforms with VMware, to free up some of that hardware.

It's an option you should examine if you don't mind a little bit of work up front, since it gives you a great deal of control down the track.

Ube answered 23/5, 2009 at 9:45 Comment(1)
Wow, is this an illustration of the horrors of not finding a testing tool? I have two questions: 1) Why 30 PCs and not 12 or 48? 2) Should I understand that there is no RDBMS testing tool that is independent of the specific RDBMS? 2a) May I open a question about SQL Server database testing tools/techniques, or would it be considered a duplicate? -- Swim

Did you check Bristlecone, an open-source tool from Continuent? I don't use it myself, but it works for Postgres and seems to be able to do the things you request. (Sorry, as a new user I cannot give you a direct link to the tool page, but Google will get you there ;o])

Thornton answered 27/5, 2009 at 5:59 Comment(0)
