I would like to set a maximum execution time for SQL queries, like set_time_limit() in PHP. How can I do that?
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.
SELECT
/*+ MAX_EXECUTION_TIME(1000) */ -- in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Update: This variable was added in MySQL 5.7.4 and renamed to max_execution_time
in MySQL 5.7.8. (source)
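If you are on MySQL 5.7.8 or later and don't want to hint every single query, you can apply the same limit to the whole connection via the session variable instead. A minimal sketch in PHP (the table name and connection details are placeholders):
<?php
// Sketch: limit all read-only SELECTs on this connection to 2 seconds
// (2000 ms) via the session variable instead of a per-query hint.
$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query("SET SESSION max_execution_time = 2000");
$result = $mysqli->query("SELECT * FROM some_table"); // aborted with an error if it runs longer than 2000 ms
// Reset to the default (0 = no limit) when you no longer want the cap.
$mysqli->query("SET SESSION max_execution_time = 0");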
It's not entirely clear what session means here, because KILL, for example, explicitly needs the QUERY modifier: dev.mysql.com/doc/refman/5.7/en/kill.html. And in this answer, session was explained as another word for connection: https://mcmap.net/q/355791/-mysql-concepts-session-vs-connection –
Putting MAX_EXECUTION_TIME=1000 inside the SELECT itself yields an ERROR 1064 (42000): You have an error in your SQL syntax error. Running set MAX_EXECUTION_TIME=1000; first, and then the SELECT statement, works. Now it is set in the session, and probably needs to be cleared again with set MAX_EXECUTION_TIME=0;. Also, it doesn't look like that syntax is supported in the select syntax documentation – Mangrove
If you're using the mysql native driver (mysqlnd, common since PHP 5.3) and the mysqli extension, you can accomplish this with an asynchronous query:
<?php
// Here's an example query that will take a long time to execute.
$sql = "
select *
from information_schema.tables t1
join information_schema.tables t2
join information_schema.tables t3
join information_schema.tables t4
join information_schema.tables t5
join information_schema.tables t6
join information_schema.tables t7
join information_schema.tables t8
";
$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;
// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;
$timeStart = microtime(true);
if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
echo "query finished executing. now we start fetching the data rows over the network...\n";
$result = $mysqli->reap_async_query();
if ($result) {
while ($row = $result->fetch_row()) {
// print_r($row);
if (microtime(true) - $timeStart > 1.5) {
// we exceeded our time limit in the middle of fetching our result set.
echo "timed out while fetching results\n";
var_dump($mysqli->close());
break;
}
}
}
} else {
echo "timed out while waiting for query to execute\n";
// kill the thread to stop the query from continuing to execute on
// the server, because we are abandoning it.
var_dump($mysqli->kill($mysqli->thread_id));
var_dump($mysqli->close());
}
The flags I'm giving to mysqli_query accomplish important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code but lets us use a timeout (and also issue concurrent queries if you want!). MYSQLI_USE_RESULT tells the client not to buffer the entire result set into memory.
By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before it lets your PHP code start accessing rows in the result. This can take a long time for a large result. We disable it; otherwise we risk timing out while waiting for the buffering to complete.
Note that there are two places where we need to check for exceeding the time limit:
- the actual query execution
- fetching the result rows (data)
You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't set a timeout on the query execution time. However, they do support unbuffered result sets, so you can at least implement a timeout on the fetching of the data.
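For example, here is a rough sketch of that fetch-side timeout with PDO (the DSN, credentials, and table name are placeholders):
<?php
// Sketch: unbuffered PDO query with a time limit enforced while fetching rows.
// PDO cannot interrupt the query execution itself; the limit only kicks in
// once rows start streaming back.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', '');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // stream rows instead of buffering
$stmt = $pdo->query('SELECT * FROM some_big_table');
$timeStart = microtime(true);
while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    if (microtime(true) - $timeStart > 1.5) {
        echo "timed out while fetching results\n";
        break;
    }
    // process $row ...
}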
For many queries, mysql is able to start streaming the results to you almost immediately, and so unbuffered queries alone will allow you to somewhat effectively implement timeouts on certain queries. For example, a
select * from tbl_with_1billion_rows
can start streaming rows right away, but,
select sum(foo) from tbl_with_1billion_rows
needs to process the entire table before it can start returning the first row to you. This latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other stuff.
P.S. I didn't include any timeout logic on the connection itself.
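If you want one, a simple option is MYSQLI_OPT_CONNECT_TIMEOUT; here's a minimal sketch (the 2-second value is an arbitrary example):
<?php
// Sketch: cap the time spent establishing the connection itself.
$mysqli = mysqli_init();
$mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); // seconds
if (!$mysqli->real_connect('localhost', 'root', '')) {
    die('connect failed or timed out: ' . mysqli_connect_error());
}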
Please rewrite your query like:
select /*+ MAX_EXECUTION_TIME(1000) */ * from table
This statement will kill your query after the specified time (in milliseconds).
pt_kill has an option for such. But it is on-demand, not continually monitoring. It does what @Rafa suggested. However, see --sentinel for a hint of how to come close with cron.
You can run the following one-liner from the Linux command line, and even stick it into your crontab to run at regular intervals.
This example will instantly kill all SELECT queries which are running for at least 1 hour (3600 sec):
mysql -e "SELECT group_concat(concat('KILL ',id,';') separator ' ') AS cmd FROM information_schema.processlist WHERE user NOT IN ('slave','replica') AND info LIKE 'SELECT %' AND time > 3600;" | grep KILL | mysql
From the above, you may want to replace:
- NOT IN (["safe list" of mysql users you NEVER want to touch; e.g. replication users])
- time > [N seconds for slow queries you want to target]
Note: blindly running KILL commands can lead to errant effects. Use with extreme caution!
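If you'd rather do the same thing from PHP instead of a shell one-liner, a rough equivalent looks like this (a sketch only; it assumes the connecting user has the privileges needed to see and kill other users' threads, and the credentials are placeholders):
<?php
// Sketch: find SELECTs running longer than an hour and kill them,
// mirroring the shell one-liner above.
$mysqli = mysqli_connect('localhost', 'root', '');
$result = $mysqli->query(
    "SELECT id FROM information_schema.processlist
     WHERE user NOT IN ('slave','replica')
       AND info LIKE 'SELECT %'
       AND time > 3600"
);
while ($row = $result->fetch_row()) {
    $mysqli->query('KILL ' . (int) $row[0]);
}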