In MySQL 5.7, when a new TCP/IP connection reaches the server, the server performs several checks, implemented in sql/sql_connect.cc in the function check_connection(). One of these checks is to obtain the IP address of the client side of the connection, as in:
static int check_connection(THD *thd)
{
  ...
  if (!thd->m_main_security_ctx.host().length)   // If TCP/IP connection
  {
    ...
    peer_rc= vio_peer_addr(net->vio, ip, &thd->peer_port, NI_MAXHOST);
    if (peer_rc)
    {
      /*
        Since we can not even get the peer IP address,
        there is nothing to show in the host_cache,
        so increment the global status variable for peer address errors.
      */
      connection_errors_peer_addr++;
      my_error(ER_BAD_HOST_ERROR, MYF(0));
      return 1;
    }
    ...
  }
Upon failure, the status variable connection_errors_peer_addr is incremented, and the connection is rejected. vio_peer_addr() is implemented in vio/viosocket.c (code simplified to show only the important calls):
my_bool vio_peer_addr(Vio *vio, char *ip_buffer, uint16 *port,
                      size_t ip_buffer_size)
{
  if (vio->localhost)
  {
    ...
  }
  else
  {
    /* Get sockaddr by socked fd. */
    err_code= mysql_socket_getpeername(vio->mysql_socket, addr, &addr_length);
    if (err_code)
    {
      DBUG_PRINT("exit", ("getpeername() gave error: %d", socket_errno));
      DBUG_RETURN(TRUE);
    }

    /* Normalize IP address. */
    vio_get_normalized_ip(addr, addr_length,
                          (struct sockaddr *) &vio->remote, &vio->addrLen);

    /* Get IP address & port number. */
    err_code= vio_getnameinfo((struct sockaddr *) &vio->remote,
                              ip_buffer, ip_buffer_size,
                              port_buffer, NI_MAXSERV,
                              NI_NUMERICHOST | NI_NUMERICSERV);
    if (err_code)
    {
      DBUG_PRINT("exit", ("getnameinfo() gave error: %s",
                          gai_strerror(err_code)));
      DBUG_RETURN(TRUE);
    }
    ...
  }
  ...
}
In short, the only failure path in vio_peer_addr() happens when a call to mysql_socket_getpeername() or vio_getnameinfo() fails. mysql_socket_getpeername() is just a wrapper on top of getpeername(). The man 2 getpeername manual lists the following possible errors:
NAME
       getpeername - get name of connected peer socket

ERRORS
       EBADF     The argument sockfd is not a valid descriptor.
       EFAULT    The addr argument points to memory not in a valid part of the process address space.
       EINVAL    addrlen is invalid (e.g., is negative).
       ENOBUFS   Insufficient resources were available in the system to perform the operation.
       ENOTCONN  The socket is not connected.
       ENOTSOCK  The argument sockfd is a file, not a socket.
Of these errors, only ENOBUFS is plausible. As for vio_getnameinfo(), it is just a wrapper on getnameinfo(), which, according to the man 3 getnameinfo manual, can fail for the following reasons:
NAME
       getnameinfo - address-to-name translation in protocol-independent manner

RETURN VALUE
       EAI_AGAIN     The name could not be resolved at this time. Try again later.
       EAI_BADFLAGS  The flags argument has an invalid value.
       EAI_FAIL      A nonrecoverable error occurred.
       EAI_FAMILY    The address family was not recognized, or the address length was invalid for the specified family.
       EAI_MEMORY    Out of memory.
       EAI_NONAME    The name does not resolve for the supplied arguments. NI_NAMEREQD is set and the host's name cannot be located, or neither hostname nor service name were requested.
       EAI_OVERFLOW  The buffer pointed to by host or serv was too small.
       EAI_SYSTEM    A system error occurred. The error code can be found in errno.

       The gai_strerror(3) function translates these error codes to a human readable string, suitable for error reporting.
Here, many failures can happen, essentially due to heavy system load or to problems on the network.
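To make these two failure points concrete, here is a minimal, self-contained sketch (an illustration only, not the MySQL implementation) of the same pair of calls vio_peer_addr() relies on, applied to an already-connected socket descriptor:

/*
  Minimal sketch (not the MySQL source): resolve the peer of an
  already-connected socket fd into a numeric IP string and port, the same
  way vio_peer_addr() does via getpeername() followed by
  getnameinfo(NI_NUMERICHOST | NI_NUMERICSERV).
  Returns 0 on success, -1 on the kind of failure that the server counts
  in Connection_errors_peer_address.
*/
#include <stdio.h>
#include <sys/socket.h>
#include <netdb.h>

static int peer_to_numeric(int fd, char *ip, size_t ip_len,
                           char *port, size_t port_len)
{
  struct sockaddr_storage addr;
  socklen_t addr_len = sizeof(addr);
  int rc;

  /* Step 1: ask the kernel for the peer address of the connected socket. */
  if (getpeername(fd, (struct sockaddr *) &addr, &addr_len) != 0)
  {
    perror("getpeername");                /* e.g. ENOTCONN, ENOBUFS */
    return -1;
  }

  /* Step 2: convert the sockaddr to numeric host and port strings. */
  rc = getnameinfo((struct sockaddr *) &addr, addr_len,
                   ip, ip_len, port, port_len,
                   NI_NUMERICHOST | NI_NUMERICSERV);
  if (rc != 0)
  {
    fprintf(stderr, "getnameinfo: %s\n", gai_strerror(rc));
    return -1;
  }
  return 0;
}

Which of the two calls failed, and with which errno or EAI_ code, is exactly the detail the server does not log, which is why the monitoring guidelines further below matter.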
To understand the process behind this code: what the MySQL server is essentially doing is a reverse DNS lookup, to:
- find the hostname of the client,
- find the IP address corresponding to this hostname,
and later convert this IP address back to a hostname again (see the call to ip_to_hostname() that follows).
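That later step is where DNS actually gets involved. As a rough illustration of the reverse-then-forward cycle just described (a sketch only, not the actual ip_to_hostname() code; the function name reverse_then_forward and the confirmation loop are mine), note that every DNS round trip below is an opportunity for the transient failures discussed next:

/*
  Illustration only (not the ip_to_hostname() implementation): the classic
  reverse-then-forward DNS cycle performed for a client address.
  'client' is assumed to be the sockaddr already obtained from getpeername().
*/
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

static int reverse_then_forward(const struct sockaddr *client,
                                socklen_t client_len,
                                char *host, size_t host_len)
{
  struct addrinfo hints, *res, *cur;
  char numeric[NI_MAXHOST];
  int rc, confirmed = 0;

  /* Reverse lookup: IP -> hostname (this is where DNS load/latency hurts). */
  rc = getnameinfo(client, client_len, host, host_len, NULL, 0, NI_NAMEREQD);
  if (rc != 0)
  {
    fprintf(stderr, "reverse lookup failed: %s\n", gai_strerror(rc));
    return -1;
  }

  /* Keep a numeric form of the client address for the comparison below. */
  if (getnameinfo(client, client_len, numeric, sizeof(numeric),
                  NULL, 0, NI_NUMERICHOST) != 0)
    return -1;

  /* Forward lookup: hostname -> IP list; confirm the client IP is in it. */
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = client->sa_family;
  rc = getaddrinfo(host, NULL, &hints, &res);
  if (rc != 0)
  {
    fprintf(stderr, "forward lookup failed: %s\n", gai_strerror(rc));
    return -1;
  }
  for (cur = res; cur != NULL; cur = cur->ai_next)
  {
    char candidate[NI_MAXHOST];
    if (getnameinfo(cur->ai_addr, cur->ai_addrlen, candidate,
                    sizeof(candidate), NULL, 0, NI_NUMERICHOST) == 0 &&
        strcmp(candidate, numeric) == 0)
    {
      confirmed = 1;
      break;
    }
  }
  freeaddrinfo(res);
  return confirmed ? 0 : -1;
}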
Overall, failures counted by Connection_errors_peer_address can be due to system load (causing transient failures such as out of memory) or to network issues affecting DNS.
Disclosure: I happen to be the person who implemented this Connection_errors_peer_address
status variable in MySQL, as part of an effort to have better visibility / observability in this area of the code.
[Edit] To follow up with more details and/or guidelines:
- When Connection_errors_peer_address is incremented, the root cause is not printed in the logs. That is unfortunate for troubleshooting, but it also avoids flooding the logs and causing even more damage; there is a tradeoff here. Keep in mind that anything that happens before login is very sensitive ...
- If the server really goes out of memory, it is very likely that many other things will break, and that the server will go down very quickly. By monitoring the total memory usage of mysqld, and monitoring the uptime, it should be fairly easy to determine whether the failure "only" caused connections to be closed with the server staying up, or whether the server itself failed catastrophically.
- Assuming the server stays up on failure, the more likely culprit is then the second call, to getnameinfo().
- Using skip-name-resolve will have no effect, as this check happens later (see specialflag & SPECIAL_NO_RESOLVE in the code of check_connection()).
- When the check behind Connection_errors_peer_address fails, note that the server cleanly returns the error ER_BAD_HOST_ERROR to the client, and then closes the socket. This is different from abruptly closing a socket (as in a crash): the former should be reported by the client as "Can't get hostname for your address", while the latter is reported as "MySQL has gone away".
- Whether the client connector actually treats ER_BAD_HOST_ERROR and a closed socket differently is another story (see the client-side sketch after this list).
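As an illustration of the last two points, here is a hedged sketch using the MySQL C API; the connection parameters are placeholders, and whether a given connector surfaces the same error numbers (1042 for ER_BAD_HOST_ERROR, 2006 for CR_SERVER_GONE_ERROR) should be checked against its documentation:

/*
  Sketch of how a client built on libmysqlclient can tell the two cases
  apart; "db-host", "user" and "password" are placeholders.
*/
#include <stdio.h>
#include <mysql.h>
#include <mysqld_error.h>   /* ER_BAD_HOST_ERROR (1042) */
#include <errmsg.h>         /* CR_SERVER_GONE_ERROR (2006) */

int main(void)
{
  MYSQL *conn = mysql_init(NULL);

  if (mysql_real_connect(conn, "db-host", "user", "password",
                         NULL, 3306, NULL, 0) == NULL)
  {
    if (mysql_errno(conn) == ER_BAD_HOST_ERROR)
      /* The server rejected the connection cleanly after the peer-address
         check failed: "Can't get hostname for your address". */
      fprintf(stderr, "rejected by peer-address check: %s\n",
              mysql_error(conn));
    else
      fprintf(stderr, "connect failed (%u): %s\n",
              mysql_errno(conn), mysql_error(conn));
    mysql_close(conn);
    return 1;
  }

  /* Later, on an established session that the server silently closed
     (idle timeout, crash, ...), the next query fails differently: */
  if (mysql_query(conn, "SELECT 1") != 0 &&
      mysql_errno(conn) == CR_SERVER_GONE_ERROR)
    fprintf(stderr, "MySQL server has gone away\n");

  mysql_close(conn);
  return 0;
}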
Given that this failure overall seems related to DNS lookups, I would check the following items:
- See how many rows are in the performance_schema.host_cache table.
- Compare this with the size of the host cache; see the host_cache_size system variable.
- If the host cache appears full, consider increasing its size: this will reduce the number of DNS calls overall, relieving pressure on DNS, in the hope (admittedly, this is just a shot in the dark) that the transient DNS failures will disappear.
- 323 failures out of 55 million connections does indeed seem transient. Assuming the monitoring client does sometimes connect properly, inspect the row in the host_cache table for this client: it may contain other reported failures. (A combined sketch of these checks follows this list.)
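For reference, a small sketch of these checks issued through the C API; the connection parameters and the 192.0.2.10 client address are placeholders, and the statements themselves are the point:

/*
  Sketch of the checks suggested above, issued through the C API.
*/
#include <stdio.h>
#include <mysql.h>

static void run_and_print(MYSQL *conn, const char *sql)
{
  MYSQL_RES *res;
  MYSQL_ROW row;
  unsigned int i, cols;

  printf("-- %s\n", sql);
  if (mysql_query(conn, sql) != 0)
  {
    fprintf(stderr, "error: %s\n", mysql_error(conn));
    return;
  }
  res = mysql_store_result(conn);
  if (res == NULL)
    return;
  cols = mysql_num_fields(res);
  while ((row = mysql_fetch_row(res)) != NULL)
  {
    for (i = 0; i < cols; i++)
      printf("%s\t", row[i] ? row[i] : "NULL");
    printf("\n");
  }
  mysql_free_result(res);
}

int main(void)
{
  MYSQL *conn = mysql_init(NULL);
  if (mysql_real_connect(conn, "db-host", "user", "password",
                         NULL, 3306, NULL, 0) == NULL)
  {
    fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
    return 1;
  }

  /* How full is the host cache? */
  run_and_print(conn, "SELECT COUNT(*) FROM performance_schema.host_cache");
  /* How big is it allowed to be? */
  run_and_print(conn, "SHOW GLOBAL VARIABLES LIKE 'host_cache_size'");
  /* Per-address error counters, including the monitoring client's row
     ('192.0.2.10' is a placeholder IP). */
  run_and_print(conn,
      "SELECT * FROM performance_schema.host_cache WHERE ip = '192.0.2.10'");
  /* Global counter discussed in this answer. */
  run_and_print(conn,
      "SHOW GLOBAL STATUS LIKE 'Connection_errors_peer_address'");

  mysql_close(conn);
  return 0;
}

The same statements can of course be run directly from the mysql command-line client.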
Table performance_schema.host_cache documentation: https://dev.mysql.com/doc/refman/5.7/en/host-cache-table.html
Further reading:
http://marcalff.blogspot.com/2012/04/performance-schema-nailing-host-cache.html
[Edit 2] Based on the new data available:
The Aborted_clients status variable shows some connections forcefully closed by the server. This typically happens when a session is idle for a very long time.
A typical scenario for this to happen is:
1. A client opens a connection, and sends some queries.
2. Then the client does nothing for an extended amount of time (greater than net_read_timeout).
3. Due to lack of traffic, the server closes the session, and increments Aborted_clients.
4. The client then sends another query, sees a closed connection, and reports "MySQL has gone away".
Note that a client application that forgets to cleanly close sessions will execute steps 1-3; this could be the case for Aborted_clients on the master. Some cleanup to fix the client applications using the master would help to decrease resource consumption, as leaving 151650 sessions open to die on timeout has a cost.
A client application executing steps 1-4 can cause Aborted_clients on the server and "MySQL has gone away" on the client. The client application reporting "MySQL has gone away" is most likely the culprit here.
If a monitoring application, say, checks the server every N seconds, then make sure the timeouts (here 30 and 60 sec) are significantly greater than N, or the server will kill the monitoring session.
Comments:
- wait_timeout and max_allowed_packet in my.cnf on the master node? – Changchun
- wait_timeout = 600 and max_allowed_packet = 100M – Coherent