How to set a maximum execution time for a MySQL query?

I would like to set a maximum execution time for SQL queries, like set_time_limit() in PHP. How can I do this?

Chirrup asked 6/1, 2009 at 9:15 Comment(1)
Possible duplicate of MySQL - can I limit the maximum time allowed for a query to run? – Halberd

I thought it had been around a little longer, but according to this,

MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements.

SELECT
/*+ MAX_EXECUTION_TIME(1000) */ -- in milliseconds
*
FROM table;

Note that this only works for read-only SELECT statements.

Update: This variable was added in MySQL 5.7.4 and renamed to max_execution_time in MySQL 5.7.8. (source)
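
Since the question asks about doing this from PHP, here is a minimal sketch of applying the limit per connection with the max_execution_time session variable (assuming MySQL 5.7.8+ and the mysqli extension; the credentials and table name are placeholders):

<?php

// Placeholder credentials.
$mysqli = mysqli_connect('localhost', 'root', '');

// Limit every top-level read-only SELECT on this connection to 1000 ms.
// (On MySQL 5.7.4 through 5.7.7 the variable was still called max_statement_time.)
$mysqli->query('SET SESSION max_execution_time = 1000');

$result = $mysqli->query('SELECT * FROM some_large_table'); // placeholder table
if ($result === false) {
    // The server aborts the statement once the limit is exceeded,
    // and the client gets an error instead of a result set.
    echo $mysqli->error . "\n";
}

// Setting the variable back to 0 removes the limit.
$mysqli->query('SET SESSION max_execution_time = 0');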

Elayne answered 27/4, 2014 at 3:51 Comment(5)
Will it be active only for this query, or for all following queries sharing the same session (process id)? – Document
It only applies to the query in which it is specified. If you go to my cited source, you can set a global or session max_execution_time. – Elayne
Yes, I read that, but there is no explanation of what session means, because, for example, KILL explicitly needs the modifier QUERY: dev.mysql.com/doc/refman/5.7/en/kill.html And in this answer session was explained as another word for connection: https://mcmap.net/q/355791/-mysql-concepts-session-vs-connection – Document
I think there is an error in this answer. When I try putting MAX_EXECUTION_TIME=1000 inside the SELECT itself, that yields ERROR 1064 (42000): You have an error in your SQL syntax. Running set MAX_EXECUTION_TIME=1000; first, then the SELECT statement, works. Now it is set in the session, and probably needs to be cleared again with set MAX_EXECUTION_TIME=0; Also, it doesn't look like that syntax is supported in the SELECT syntax documentation. – Mangrove
A similar feature was introduced in Percona Server 5.6.13 (a branch based on MySQL 5.6): percona.com/doc/percona-server/5.6/management/… – Perutz

If you're using the MySQL native driver (mysqlnd, common since PHP 5.3) and the mysqli extension, you can accomplish this with an asynchronous query:

<?php

// Here's an example query that will take a long time to execute.
$sql = "
    select *
    from information_schema.tables t1
    join information_schema.tables t2
    join information_schema.tables t3
    join information_schema.tables t4
    join information_schema.tables t5
    join information_schema.tables t6
    join information_schema.tables t7
    join information_schema.tables t8
";

$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;

// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;

$timeStart = microtime(true);

if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
    echo "query finished executing. now we start fetching the data rows over the network...\n";
    $result = $mysqli->reap_async_query();
    if ($result) {
        while ($row = $result->fetch_row()) {
            // print_r($row);
            if (microtime(true) - $timeStart > 1.5) {
                // we exceeded our time limit in the middle of fetching our result set.
                echo "timed out while fetching results\n";
                var_dump($mysqli->close());
                break;
            }
        }
    }
} else {
    echo "timed out while waiting for query to execute\n";

    // kill the thread to stop the query from continuing to execute on 
    // the server, because we are abandoning it.
    var_dump($mysqli->kill($mysqli->thread_id));
    var_dump($mysqli->close());
}

The flags I'm giving to mysqli_query accomplish important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code, but lets us use a timeout (and also issue concurrent queries if you want!). The other flag, MYSQLI_USE_RESULT, tells the client not to buffer the entire result set into memory.

By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before it lets your PHP code start accessing rows in the result. This can take a long time for a large result. We disable that buffering; otherwise we risk timing out while waiting for it to complete.

Note that there are two places where we need to check for exceeding a time limit:

  • the actual query execution
  • fetching the results (data)

You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't put a timeout on the query execution itself. However, they do support unbuffered result sets, so you can at least implement a timeout on fetching the data.
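
For PDO, a rough sketch of that fetch-side timeout might look like the following (the DSN, credentials and table name are placeholders; this only bounds the fetching and does not stop the query on the server):

<?php

// Disable buffering so rows are streamed instead of being read into memory first.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', '', [
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
]);

$timeStart = microtime(true);
$stmt = $pdo->query('SELECT * FROM some_large_table'); // placeholder table

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // process $row ...
    if (microtime(true) - $timeStart > 1.5) {
        echo "timed out while fetching results\n";
        break; // the query may keep running on the server after this
    }
}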

For many queries, MySQL is able to start streaming the results to you almost immediately, so unbuffered queries alone will let you implement reasonably effective timeouts on certain queries. For example, a

select * from tbl_with_1billion_rows

can start streaming rows right away, but,

select sum(foo) from tbl_with_1billion_rows

needs to process the entire table before it can start returning the first row to you. This latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other stuff.

ps - I didn't include any timeout logic on the connection itself.
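If you want one, here is a rough sketch of a connect-level timeout using mysqli_options() with MYSQLI_OPT_CONNECT_TIMEOUT (credentials are placeholders):

<?php

$mysqli = mysqli_init();

// Give up on the connection attempt after 5 seconds.
$mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 5);

// Placeholder credentials.
if (!$mysqli->real_connect('localhost', 'root', '')) {
    die('connect failed: ' . mysqli_connect_error() . "\n");
}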

Macpherson answered 27/4, 2014 at 5:38 Comment(1)
This won't stop a MySQL query that's in progress on the SQL server. An excellent solution to a different question, though. – Bhatt

Please rewrite your query like this:

select /*+ MAX_EXECUTION_TIME(1000) */ * from table


This hint makes the server abort the query after the specified time (in milliseconds).

Giblet answered 10/1, 2018 at 15:25 Comment(2)
Can you explain what this does? Does it kill the query after 1 second, returning nothing, or is it nice and returns the partial result set after 1 second? – Ervinervine
@Ervinervine it kills the query and the client will error out. Note that this is only "execution time"; if the query has finished executing inside MySQL and is currently only sending results to the client, it will run for as long as it wants. – Pincushion

pt-kill has an option for this, but it is on-demand, not continuous monitoring. It does what @Rafa suggested. However, see --sentinel for a hint of how to come close with cron.

Simper answered 7/7, 2016 at 2:27 Comment(0)

You can run the following one-liner from the Linux command line, and even stick it into your crontab to run at regular intervals.

This example will instantly kill all SELECT queries that have been running for at least 1 hour (3600 seconds):

mysql -e "SELECT group_concat(concat('KILL ',id,';') separator ' ') AS cmd FROM information_schema.processlist WHERE user NOT IN ('slave','replica') AND info LIKE 'SELECT %' AND time > 3600;" | grep KILL | mysql

From the above, you may want to replace:

NOT IN (["safe list" of mysql users you NEVER want to touch; e.g. replication users])

time > [N seconds for slow queries you want to target]

Note: blindly running KILL commands can have unintended effects. Use with extreme caution!

Merkel answered 11/5, 2023 at 17:33 Comment(0)
