PDOException: Packets out of order. Expected 0 received 1. Packet size=23

I have a Laravel Spark project that uses Horizon to manage a job queue with Redis.

Locally (on my Homestead box, macOS), everything works as expected, but on our new DigitalOcean (Forge-provisioned) droplet, a memory-optimized VPS with 256GB RAM, 32 vCPUs, 10TB transfer, and 1x 800GB SSD, I keep getting the error:

PDOException: Packets out of order. Expected 0 received 1. Packet size=23

Or some variation of that error, where the packet size info may be different.

After many hours/days of debugging and research, I have come across many posts on Stack Overflow and elsewhere that seem to indicate this can be fixed by doing a number of things, listed below:

  1. Set PDO::ATTR_EMULATE_PREPARES to true in my database.php config (see the config sketch after this list). This has absolutely no effect on the problem, and actually introduces another issue whereby integers are cast as strings.

  2. Set DB_HOST to 127.0.0.1 instead of localhost, so that it uses TCP instead of a UNIX socket. Again, this has no effect.

  3. Set DB_SOCKET to the socket path reported by MySQL (MariaDB): logging into MySQL and running show variables like '%socket%'; lists the socket path as /run/mysqld/mysqld.sock. I also leave DB_HOST set to localhost. This has no effect either. One thing I did note was that the pdo_mysql.default_socket variable is set to /var/run/mysqld/mysqld.sock; I'm not sure if this is part of the problem.

  4. I have massively increased the MySQL configuration settings found in /etc/mysql/mariadb.conf.d/50-server.cnf to the following:

    • key_buffer_size = 2048M
    • max_allowed_packet = 2048M
    • max_connections = 1000
    • thread_concurrency = 100
    • query_cache_size = 256M

I must admit that changing these settings was a last-resort, clutching-at-straws type of scenario. However, it did alleviate the issue to some degree, but it did not fix it completely, as MySQL still fails 99% of the time, albeit at a later stage.
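For reference, items 1 and 3 boil down to something like this in config/database.php (a simplified sketch of the mysql connection only; credentials and unrelated keys omitted):

// config/database.php (simplified sketch)
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    // Item 3: UNIX socket path reported by MariaDB (show variables like '%socket%';)
    'unix_socket' => env('DB_SOCKET', '/run/mysqld/mysqld.sock'),
    'options' => [
        // Item 1: emulated prepares (no effect here, and integers come back as strings)
        PDO::ATTR_EMULATE_PREPARES => true,
    ],
],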

In terms of the queue, I have a total of 1,136 workers split across 6 supervisors/queues, and it's all handled via Laravel Horizon, which is run as a daemon.

I am also using the Laravel Websockets PHP package for broadcasting, which is also run as a daemon.

My current environment configuration is as follows (sensitive info omitted).

APP_NAME="App Name"
APP_ENV=production
APP_DEBUG=false
APP_KEY=thekey
APP_URL=https://appurl.com
LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=database
DB_USERNAME=username
DB_PASSWORD=password

BROADCAST_DRIVER=pusher
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120

REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379

MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
[email protected]
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
[email protected]
MAIL_FROM_NAME="${APP_NAME}"

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION="us-east-1"
AWS_BUCKET=

PUSHER_APP_ID=appid
PUSHER_APP_KEY=appkey
PUSHER_APP_SECRET=appsecret
PUSHER_APP_CLUSTER=mt1

MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"

AUTHY_SECRET=

CASHIER_CURRENCY=usd
CASHIER_CURRENCY_LOCALE=en
CASHIER_MODEL=App\Models\User
STRIPE_KEY=stripekey
STRIPE_SECRET=stripesecret

# ECHO SERVER
LARAVEL_WEBSOCKETS_PORT=port

The server setup is as follows:

  • Max File Upload Size: 1024
  • Max Execution Time: 300
  • PHP Version: 7.4
  • MariaDB Version: 10.3.22

I have checked all logs (see below) at the time the MySQL server crashes/goes away, and there is nothing in the MySQL logs at all. No error whatsoever. I also don't see anything in:

  • /var/log/nginx/error.log
  • /var/log/nginx/access.log
  • /var/log/php7.4-fpm.log

I'm currently still digging through and debugging, but right now, I'm stumped. This is the first time I've ever come across this error.

Could this be down to hitting the database (read/write) too fast?

A little information on how the queues work.

  1. I have an initial controller that dispatches a job to the queue.
  2. Once this job completes, it fires an event, which then starts a sequence of several other listeners/events, all of which depend on the previous jobs completing before new events are fired and new listeners/jobs take up the work.
  3. In total, there are 30 events that are broadcast.
  4. In total, there are 30 listeners.
  5. In total there are 5 jobs.

These all work sequentially based on the listener/job that was run and the event that it fires.
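To give a rough idea of that flow in code (class names here are made up for illustration, not the actual ones from the project):

use App\Events\StepCompleted;          // illustrative event name
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

class ProcessStep implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function handle()
    {
        // ...do the work for this step...

        // Fire an event; its listener dispatches the next job in the chain.
        event(new StepCompleted());
    }
}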

I have also monitored laravel.log live, and when the crash occurs, nothing is logged at all. I do occasionally get production.ERROR: Failed to connect to Pusher., whether MySQL crashes or not, so I don't think that has any bearing on this problem.

I even noticed that the Laravel API rate limit was being hit, so I made sure to drastically increase it from 60 to 500. Still no joy.
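(For what it's worth, that change was just bumping the throttle middleware on the API routes, roughly as below, assuming the stock Laravel setup rather than a custom rate limiter.)

// app/Http/Kernel.php (sketch, assuming the default 'api' middleware group)
protected $middlewareGroups = [
    'api' => [
        'throttle:500,1', // previously 'throttle:60,1'
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];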

Lastly, it doesn't seem to matter which event, job, or listener is running, as the error occurs on random ones. So I'm not sure it's code-specific, although it may well be.

Hopefully, I've provided enough background and detailed information to get some help with this, but if I've missed anything, please do let me know and I'll add it to the question. Thanks.

Subequatorial answered 7/8, 2020 at 12:5 Comment(6)
Additional information request. Any SSD or NVMe devices on the MySQL host server? Post on pastebin.com and share the links. From your SSH login as root, post the text results of: B) SHOW GLOBAL STATUS; after a minimum of 24 hours uptime, C) SHOW GLOBAL VARIABLES;, D) SHOW FULL PROCESSLIST;, F) SHOW ENGINE INNODB STATUS;. Optional but very helpful information, if available, includes: htop or top for the most active apps, ulimit -a for a Linux/Unix list of limits, and iostat -xm 5 3 for IOPS by device and core/CPU count, for server workload tuning analysis to provide suggestions. – Corri
Hey @WilsonHauck, thanks for the info. Yes, there is an 800GB SSD. Here are the dumps you requested: B: pastebin.com/CMiF24Ns, C: pastebin.com/kdVX5Rvv, D: pastebin.com/JJQQieP1, F: pastebin.com/7ACQmdc9, htop: imgur.com/a/DH5OQqj, ulimit: pastebin.com/Z180YAv0, iostat: pastebin.com/38RYuUBn. Thanks for taking a look. – Subequatorial
Gareth, the SHOW GLOBAL VARIABLES; you posted only has the last ~120 lines of global variables; there are usually more than 400 lines. The iostat was also only a single set of output; iostat -xm 5 3 usually has a summary and two more cycles at 5-second intervals. htop indicates MEM is ~96M. Thanks. – Corri
Hey Wilson, here is the iostat output: pastebin.com/jDwXMje6. I'm unsure how to output all the variables from SHOW GLOBAL VARIABLES, as the command line in MySQL seems to truncate it and only show the last 120 or so. – Subequatorial
Gareth, you could pick them up in 2 steps: run SHOW GLOBAL VARIABLES WHERE variable_name < "m%"; and SHOW GLOBAL VARIABLES WHERE variable_name > "l%";, copy each to the clipboard and write to TXT files, then share the 2 pastebin.com links. Thanks. – Corri
I am having an issue in a similar scenario, and none of the answers below fits as a solution. But my PHP version is 8.1. I found that there was a mysqlnd bug, but upgrading to the latest did not fix the problem. Sentry is still catching: CDbCommand failed to execute the SQL statement: Packets out of order. Expected 1 received 0. Packet size=145. The SQL statement executed was: SELECT wq.id AS queueId, wq.context, wt.id AS triggerId, wt.model, wt.method, .... = wt.fk_workflows_id ORDER BY wq.created ASC. I am using a Yii 1.1 nightly and Laravel 10 hybrid over here. :-) – Joslin

For me what fixed it was increasing the max packet size.

In my.cnf, I added:

max_allowed_packet=200M

And then service mysql stop, service mysql start, and it worked :)
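If you want to double-check that the new value actually took effect after the restart (just a sanity check, not strictly necessary), you can log in to MySQL and run:

SHOW VARIABLES LIKE 'max_allowed_packet';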

Perambulate answered 12/8, 2021 at 15:2 Comment(2)
Thank you. It was an issue on MySQL. Stopping and restarting the MySQL service worked for me!! – Succubus
This was also the issue for me. I was just trying to import a large DB file, so it was a one-off issue. NB: you can temporarily increase the packet limit by logging in (e.g. mysql -u root -p) and running the following query: SET GLOBAL max_allowed_packet=2G; doing this then allowed me to run my import SQL query. – Superbomb

We were getting a similar PHP warning about packets out of order. What solved it for us was increasing max_connections in the MySQL my.cnf. Your current max_connections is probably 1024. We increased ours to 4096 and the warning went away. In MySQL you can see your current max_connections with this command:

SHOW VARIABLES LIKE "%max_connections%";
or
mysqladmin variables | grep max_connections
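To raise it, add (or update) the setting in the [mysqld] section of my.cnf and restart MySQL, for example:

[mysqld]
max_connections = 4096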
Cabretta answered 27/11, 2020 at 16:33 Comment(2)
That worked for me too, even if I have to admit I am not sure why this was required. – Dressing
That makes no sense; if max_connections had been reached, the connection would not have been allowed in the first place, and you would get a different error message. I suspect it's just restarting MySQL that solved it. – Gaskins

I hit a similar issue that was reproducible; it turned out to be a programming error:
I was using an unbuffered database cursor and did not close the cursor before firing off other DB operations. The exact error thrown was Packets out of order. Expected 1 received 2.
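In PDO terms, the shape of the bug looked roughly like this (a simplified sketch with made-up table/column names, not my actual code):

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass', [
    // Unbuffered mode: rows stay pending on the connection until they are
    // fully read or the cursor is closed.
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
]);

$stmt = $pdo->query('SELECT id, payload FROM jobs');
$first = $stmt->fetch(); // the result set is still open at this point

// Bug: firing another statement while the unbuffered result set is still
// pending can desynchronise the connection and produce errors like this one.
// $pdo->exec('UPDATE jobs SET processed = 1 WHERE id = ' . (int) $first['id']);

// Fix: finish reading, or explicitly close the cursor, before the next query.
$stmt->closeCursor();
$pdo->exec('UPDATE jobs SET processed = 1 WHERE id = ' . (int) $first['id']);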

Housewarming answered 8/12, 2021 at 17:27 Comment(1)
What would this look like in PHP code? I may have a similar issue. – Apulia

The first thing to check is the wait_timeout of the MySQL server, in relation to the time that your application takes between queries. I'm able to recreate this error consistently by sleeping longer than wait_timeout seconds between SQL queries.

If your application performs a query, then does something else for a while that takes longer than that period, the MySQL server terminates the connection, but your PHP code may not be aware that the server has disconnected. If the PHP application then tries to issue another query using the closed connection, it will generate this error (in my tests, consistently with Expected 0 received 1).

You could fix this by:

  • Extending the wait_timeout, either globally on the server, or on a per-session basis using the command SET session wait_timeout=<new_value>;
  • Catching the error and retrying once
  • Preemptively reconnecting to the server when you know that more than wait_timeout seconds have elapsed between queries.

This error could probably occur because of other problems as well. I would check that you are using a persistent connection and not connecting to the server over and over again. Sometimes the connection process, especially with many simultaneous workers, causes a lot of network overhead that could cause a problem such as this.

Also, sometimes on a production, high-transaction-volume server, weird network things happen, and this may just occur occasionally, even, it seems, over the loopback interface in your case.

In any case, it is best to write your code so that it can gracefully handle errors and retry. Often, you could wrap your SQL query in a try..catch to catch this error when it happens and try again.
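As a rough sketch of that idea with plain PDO (the connection details and query are placeholders):

// Reconnect and retry once if the query fails because the server dropped
// the connection (e.g. wait_timeout expired between queries).
function queryWithRetry(callable $connect, string $sql): array
{
    $pdo = $connect();
    try {
        return $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    } catch (PDOException $e) {
        $pdo = $connect();                    // fresh connection
        return $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    }
}

// Usage:
$rows = queryWithRetry(
    fn () => new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass'),
    'SELECT NOW() AS server_time'
);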

Bulla answered 3/1, 2023 at 23:35 Comment(0)

Try upgrading to the latest Laravel, which is 10 at the moment I am writing this. It seems that this error is already handled there. That is why, in my case, only Yii 1.1's CDbConnection is having this issue; I should try to implement the same thing in the Yii 1.1 driver as well. Here is how Laravel handles connection problems:

  1. run() is executed on every SQL statement execution: https://github.com/laravel/framework/blob/10.x/src/Illuminate/Database/Connection.php#L741-L770
  2. When the error above occurs, run() calls this function: https://github.com/laravel/framework/blob/10.x/src/Illuminate/Database/Connection.php#L924-L933
  3. Which in turn runs this one: https://github.com/laravel/framework/blob/10.x/src/Illuminate/Database/Connection.php#L946-L955
  4. And that one matches the error message against the list in this function: https://github.com/laravel/framework/blob/10.x/src/Illuminate/Database/DetectsLostConnections.php#L38

As a result of this sequence, it reconnects and tries to run the same query again, because the failure is recognized as a disconnection.

But the problem here, again, is that it only retries once. There should be some configuration for setting more retries if the problem reoccurs.
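As a stopgap in application code, you can also wrap a query yourself with more attempts; a rough sketch using Laravel's retry() helper (the attempt count, table name, and message checks are only illustrative):

use Illuminate\Support\Facades\DB;

$rows = retry(3, function () {
    return DB::select('SELECT * FROM workflow_queue ORDER BY created ASC');
}, 250, function ($e) {
    // Only retry when the failure looks like a lost connection.
    DB::reconnect();
    return str_contains($e->getMessage(), 'Packets out of order')
        || str_contains($e->getMessage(), 'server has gone away');
});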

Joslin answered 19/8, 2023 at 14:51 Comment(0)

This is a PDOException; in my case it occurred because I set connections to the database to remain persistent:

$dbh= new PDO(
    'mysql:dbname=database;host=localhost', 'user', 'pass', 
    array(
      PDO::ATTR_PERSISTENT => true // <== Here was the problem
    )
);

I thus solved it by taking out the persistent connection option:

$dbh= new PDO(
    'mysql:dbname=database;host=localhost', 'user', 'pass'
);
Secede answered 29/2 at 20:27 Comment(0)

MySQL 8: in mysql.cnf, I disabled (commented out) all of this:

# For error - ( MySQL server has gone away )
#wait_timeout=90
#net_read_timeout=90
#net_write_timeout=90
#interactive_timeout=300

and it looks like that helped me.

Gooch answered 10/7, 2021 at 16:14 Comment(1)
I don't know why this reply has been downvoted. I am experiencing that problem and I know for sure it's a problem with the connection timeout from PHP to the MySQL daemon. In fact, I also receive a "Fatal error: Uncaught mysqli_sql_exception: MySQL server has gone away in /var/www/html/....." error. – Bespangle
