How to access JMX interface in docker from outside?
I am trying to remotely monitor a JVM running in docker. The configuration looks like this:

  • machine 1: runs a JVM (in my case, Kafka) in Docker on an Ubuntu machine; the IP of this machine is 10.0.1.201; the application running in Docker is at 172.17.0.85.

  • machine 2: runs JMX monitoring

When I run JMX monitoring from machine 2, it fails with a version of the following error (the same error occurs whether I use jconsole, jvisualvm, jmxtrans, or node-jmx/npm:jmx):

java.rmi.ConnectException: Connection refused to host: 172.17.0.85; nested exception is
    java.net.ConnectException: Operation timed out
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
    (followed by a large stack trace)

Now the interesting part: when I run the same tools (jconsole, jvisualvm, jmxtrans, and node-jmx/npm:jmx) on the machine that is running Docker (machine 1 above), JMX monitoring works properly.

I think this suggests that my JMX port is active and working properly, but that when I execute JMX monitoring remotely (from machine 2), the JMX tool does not recognize the internal Docker IP (172.17.0.85).

Below are the relevant (I think) network configuration elements on machine 1, where JMX monitoring works (note the Docker IP, 172.17.42.1):

docker0   Link encap:Ethernet  HWaddr ...
      inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
      inet6 addr:... Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:6787941 errors:0 dropped:0 overruns:0 frame:0
      TX packets:4875190 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:1907319636 (1.9 GB)  TX bytes:639691630 (639.6 MB)

wlan0     Link encap:Ethernet  HWaddr ... 
      inet addr:10.0.1.201  Bcast:10.0.1.255  Mask:255.255.255.0
      inet6 addr:... Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:4054252 errors:0 dropped:66 overruns:0 frame:0
      TX packets:2447230 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:2421399498 (2.4 GB)  TX bytes:1672522315 (1.6 GB)

And these are the relevant network configuration elements on the remote machine (machine 2) from which I am getting the JMX errors:

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
    options=3<RXCSUM,TXCSUM>
    inet6 ::1 prefixlen 128 
    inet 127.0.0.1 netmask 0xff000000 
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
    nd6 options=1<PERFORMNUD>

en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    ether .... 
    inet6 ....%en1 prefixlen 64 scopeid 0x5 
    inet 10.0.1.203 netmask 0xffffff00 broadcast 10.0.1.255
    nd6 options=1<PERFORMNUD>
    media: autoselect
    status: active
Mistletoe answered 7/7, 2015 at 0:40 Comment(1)
I created a GitHub project that contains a ready-to-go implementation of JMX from a Docker container. It contains a Dockerfile with a proper entrypoint.sh, and a docker-compose.yml for easy deployment. – Faery
For completeness, the following solution worked. The JVM must be started with these parameters to enable remote JMX monitoring of the Docker container:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=<PORT>
-Dcom.sun.management.jmxremote.rmi.port=<PORT>
-Djava.rmi.server.hostname=<IP>

where:

  • <IP> is the IP address of the host where you executed docker run.
  • <PORT> is the port that must be published from Docker and on which the JVM's JMX port is configured (for example, docker run --publish 7203:7203, where <PORT> is 7203). Both port and rmi.port can be the same.
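Putting the flags and the port publishing together, a minimal sketch (the IP, port, and image name are hypothetical examples; KAFKA_JMX_OPTS is the variable Kafka's start scripts read, so adjust for other applications):

```shell
# Sketch only: substitute your own host IP, port, and image.
HOST_IP=10.0.1.201   # IP of the machine where you execute `docker run`
JMX_PORT=7203        # port published from Docker for JMX

JMX_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Djava.rmi.server.hostname=${HOST_IP}"

# Print the resulting command for review instead of running it here:
echo docker run --publish "${JMX_PORT}:${JMX_PORT}" \
  --env KAFKA_JMX_OPTS="${JMX_OPTS}" your-kafka-image
```

The important invariant is that the same number appears in jmxremote.port, jmxremote.rmi.port, and both sides of --publish.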

Once this is done you should be able to execute JMX monitoring (jmxtrans, node-jmx, jconsole, etc) from either a local or remote machine.

Thanks to @Chris-Heald for making this a really quick and simple fix!

Mistletoe answered 7/7, 2015 at 18:16 Comment(7)
This worked, but to connect to a Docker container I also had to add -Dcom.sun.management.jmxremote.port=1098 and connect to that port instead of rmi.port. – Eastsoutheast
-Djava.rmi.server.hostname=<IP> was the important part for me. Thank you! – Heiser
Do you have a best-practice approach for achieving this if I don't want to wire a fixed IP into my Dockerfile? The problem is, I'm using an image that spools up an application using 'CMD java -jar ...' and I cannot rely on a fixed IP for the host. – Vermiculate
It's super IMPORTANT that the same port that is passed to -Dcom.sun.management.jmxremote.rmi.port is used by the client (i.e. VisualVM) to connect, even when using SSH port forwarding. For example, if you set -Dcom.sun.management.jmxremote.rmi.port=8888 and you do docker run -p25000:8888 ..., then you SSH with port forwarding ssh -L 8888:localhost:25000 your-docker-host.com, and then you can connect with VisualVM using localhost:8888. – Suppress
@MihaiTodor is right! After long testing I found my remote JMX configuration trying to connect to some IP number even though the JMX URL was localhost. Apparently the JMX client reads java.rmi.server.hostname from the remote side, which in my case resulted in an IP number from Docker. So here is how I got it to work: a) use the same port everywhere, including the SSH port forward; b) use -Djava.rmi.server.hostname=localhost for the Java app you start in the container. – Bornu
@Vermiculate You don't need to specify your host's IP if you use -Djava.rmi.server.hostname=0.0.0.0. – Faery
@Faery I haven't tried this, but as chances are high that somebody is using Docker/K8s, using $(POD_IP) might also be a neat solution. – Vermiculate
For a dev environment, you can set java.rmi.server.hostname to the catch-all IP address 0.0.0.0.

Example:

-Djava.rmi.server.hostname=0.0.0.0 \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false
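With the catch-all hostname, the only remaining requirement is that the container publishes the JMX port under the same number on the host. A sketch (the port and image name are hypothetical):

```shell
# JMX_PORT feeds the ${JMX_PORT} placeholders in the flags above; the same
# number must be published on the host side (9010:9010, not e.g. 19010:9010).
JMX_PORT=9010
echo docker run -p "${JMX_PORT}:${JMX_PORT}" -e JMX_PORT="${JMX_PORT}" my-app-image
# then from another machine: jconsole <docker-host>:9010
```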
Shanleigh answered 16/3, 2018 at 12:50 Comment(2)
Does not work with Docker for Windows + kind + kubectl port-forward. – Pegpega
@RobinGreen It should work. What did you set as JMX_PORT? – Severable
I found that setting up JMX over RMI is a pain, especially because of -Djava.rmi.server.hostname=<IP>, which you have to specify at startup. We run our Docker images in Kubernetes, where everything is dynamic.

I ended up using JMXMP instead of RMI, as it needs only one open TCP port and no hostname.

My current project uses Spring, which can be configured by adding this:

<bean id="serverConnector"
    class="org.springframework.jmx.support.ConnectorServerFactoryBean"/>

(Outside Spring, you need to set up your own JMXConnectorServer to make this work.)

Along with this dependency (since JMXMP is an optional extension and not a part of the JDK):

<dependency>
    <groupId>org.glassfish.main.external</groupId>
    <artifactId>jmxremote_optional-repackaged</artifactId>
    <version>4.1.1</version>
</dependency>

And you need to add the same jar to your classpath when starting JVisualVM in order to connect over JMXMP:

jvisualvm -cp "$JAVA_HOME/lib/tools.jar:<your_path>/jmxremote_optional-repackaged-4.1.1.jar"

Then connect with the following connection string:

service:jmx:jmxmp://<url:port>

(Default port is 9875)

Marikomaril answered 19/12, 2016 at 19:57 Comment(0)
After digging around quite a lot, I found this configuration:

-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=1098
-Dcom.sun.management.jmxremote.rmi.port=1098
-Djava.rmi.server.hostname=localhost
-Dcom.sun.management.jmxremote.local.only=false

The difference from the answers above is that java.rmi.server.hostname is set to localhost instead of 0.0.0.0.

Purgatorial answered 27/7, 2019 at 18:1 Comment(1)
That was the correct solution for me. In case you use SSH port forwarding, also use the same port, 1098, on your local machine. The connection string is then service:jmx:rmi:///jndi/rmi://localhost:1098/jmxrmi – Bornu
To add some additional insight: I had some Docker port mappings in use, and none of the previous answers worked directly for me. After investigating, I found the answer here: How to connect with JMX from host to Docker container in Docker machine?, which provided the required insight.

This is what I believe happens:

I set up JMX as suggested in other answers here:

-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=1098
-Dcom.sun.management.jmxremote.rmi.port=1098
-Djava.rmi.server.hostname=localhost
-Dcom.sun.management.jmxremote.local.only=false

Program flow:

  • I run the Docker container and expose/map the port from host to container. Say I map port host:1099->container:1098 in Docker.
  • I run the JVM inside the docker with the above JMX settings.
  • The JMX agent inside the Docker container now listens to the given port 1098.
  • I start JConsole on the host (outside Docker) with URL localhost:1099. I use 1099, since I used host:docker port mapping of 1099:1098.
  • JConsole connects fine to the JMX agent inside Docker.
  • JConsole asks JMX where to read the monitoring data.
  • JMX agent responds with the configured information and address: localhost:1098
  • JConsole now tries to connect to the given address localhost:1098
  • This fails since port 1098 on localhost (outside Docker) is not listening. Port 1099 was mapped to Docker:1098. Instead of localhost:1098, JMX should tell JConsole to read monitoring information from localhost:1099, since 1099 was the port mapped from host to 1098 inside Docker container.

As a fix, I changed my host:docker port mapping from 1099:1098 to 1098:1098. Now, JMX still tells JConsole to connect to localhost:1098 for monitoring information. But now it works since the outside port is the same as advertised by JMX inside Docker.
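The working setup can be sketched as follows (the image name is hypothetical; the key invariant is that the published host port equals the port in the JMX flags):

```shell
# JMX advertises localhost:1098 (from the flags above), so the host-side
# port must also be 1098: publish 1098:1098, not 1099:1098.
JMX_PORT=1098
echo docker run -p "${JMX_PORT}:${JMX_PORT}" my-jvm-image
# JConsole on the host then connects to localhost:1098
```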

I expect the same applies also for SSH tunnels and similar scenarios. You have to match what you configure JMX to advertise and what JConsole sees as the address space on the host where you run it.

Maybe it is possible to play a bit with the jmxremote.port, jmxremote.rmi.port, and hostname attributes to make this work with different port mappings. But I had the opportunity to use the same ports, and doing so simplified things, and it works (for me).

Zibeline answered 4/9, 2020 at 11:49 Comment(0)
Solution for the cloud, e.g. AWS ECS

The main challenge is that the JMX/RMI protocol requires both host and port to correspond between the server (your JVM app) and the client (e.g. VisualVM) that connects to it. In other words, if either of these parameters does not match, there is no way to establish the connection.

Consequently, for a containerised application, the JMX/RMI configuration requires a predefined/static port for the JVM app, and that port must be mapped from outside the container onto the same port inside the container. This is the only way to get it to work.

So the main question I aim to answer is: how do you connect to a JVM app that runs in the cloud, behind a private network, and is exposed only via a dynamic port, when that port is managed by the cloud and not by us?

A solution exists! It requires a somewhat cunning infrastructure approach. Let's look at the diagram.

(Diagram: a JMX router container sits in front of the JVM app and forwards incoming JMX/RMI traffic to the JVM's JMX port.)

  • So basically we want to run our JMX router container first, as part of our service. The purpose of that container is to redirect incoming traffic to the JVM port we will use for the JMX/RMI connection.
  • The port we will use for JMX is the dynamic port mapped onto the static inbound port of the JMX router container.
  • As soon as we obtain the dynamic port (i.e. once the router container is launched), we use it to launch our JVM application.

To build our JMX router we use HAProxy. To build the image we need this Dockerfile:

FROM haproxy:latest


USER root

RUN apt update && apt -y install curl jq

COPY ./haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

where entrypoint.sh:

#!/bin/bash
set -x

port=$(curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq '.Containers | .[] | select(.Name=="haproxy-jmx") | .Ports | .[] | select(.ContainerPort==9090) | select(.HostIp=="0.0.0.0") | .HostPort')

while [ -z "$port" ]; do
    echo "Empty response, waiting 1 second and trying again..."
    sleep 1

    port=$(curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq '.Containers | .[] | select(.Name=="haproxy-jmx") | .Ports | .[] | select(.ContainerPort==9090) | select(.HostIp=="0.0.0.0") | .HostPort')
done

echo "Received port: $port"

sed -i "s/\$ECS_HOST_PORT/$port/" /usr/local/etc/haproxy/haproxy.cfg

haproxy -f /usr/local/etc/haproxy/haproxy.cfg

with haproxy.cfg:

defaults
    mode tcp

frontend service-jmx
    bind :9090
    default_backend service-jmx

backend service-jmx
    server jmx app:$ECS_HOST_PORT
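The jq filter used in entrypoint.sh can be exercised against a mock of the ECS task-metadata response (the JSON below is a trimmed, hypothetical version of what ${ECS_CONTAINER_METADATA_URI_V4}/task returns; requires jq):

```shell
# Mock metadata: one container named haproxy-jmx with 9090 mapped to host port 32768.
mock='{"Containers":[{"Name":"haproxy-jmx","Ports":[{"ContainerPort":9090,"HostIp":"0.0.0.0","HostPort":32768}]}]}'

# Same filter as in entrypoint.sh: pick the host port mapped to container port 9090.
port=$(echo "$mock" | jq '.Containers | .[] | select(.Name=="haproxy-jmx") | .Ports | .[] | select(.ContainerPort==9090) | select(.HostIp=="0.0.0.0") | .HostPort')
echo "$port"
```

This is the value that gets substituted for $ECS_HOST_PORT in haproxy.cfg.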

After our JMX router image is ready (published to our registry), we can use it in our task definition as one of the container definitions, e.g.:

{
      "name": "haproxy-jmx",
      "image": "{IMAGE_SOURCE_FROM_YOUR_REGISTRY}",
      "logConfiguration": {
        "logDriver": "json-file",
        "secretOptions": null,
        "options": {
          "max-size": "50m",
          "max-file": "1"
        }
      },
      "portMappings": [
        {
          "hostPort": 0,
          "protocol": "tcp",
          "containerPort": 9090
        }
      ],
      "cpu": 0,
      "memoryReservation": 32,

      "links": [
        "${name}:app"
      ]
    }

Here we define our static JMX port as 9090. You can pick any port you are allowed to use; whichever port you choose is exactly the one we will use to look up the dynamic port that ECS maps to it when launching our JVM app.

So now the only thing left is to get the dynamic port assigned to our JMX router and use it as the RMI port for our JVM app. For that, the entrypoint.sh of our JVM app image contains the following:

#!/usr/bin/env sh

# We set here our initial JVM settings
JAVA_OPTS="-Dserver.port=8080 \
           -Djava.net.preferIPv4Stack=true"

#If we want to enable JMX for the app we will pass JMX_ENABLE env as true
if [ "${JMX_ENABLE}" = "true" ]; then
  
  #we get EC2 instance IP to use as server host
  HOST_SERVER_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

  # Get a dynamic ECS host port by agreed JMX static port
  JMX_PORT=$(curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq '.Containers | .[] | select(.Name=="haproxy-jmx") | .Ports | .[] | select(.ContainerPort==9090) | select(.HostIp=="0.0.0.0") | .HostPort')
  
  #it might take sometime to get the router container started, let's wait a bit if needed
  while [ -z "$JMX_PORT" ]; do
    echo "Empty response, waiting 1 second and trying again..."
    sleep 1

    JMX_PORT=$(curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq '.Containers | .[] | select(.Name=="haproxy-jmx") | .Ports | .[] | select(.ContainerPort==9090) | select(.HostIp=="0.0.0.0") | .HostPort')
  done

  echo "Received port: $JMX_PORT"
  
  #JMX/RMI configuration you've already seen 
  JMX_OPTS="-Dcom.sun.management.jmxremote=true \
            -Dcom.sun.management.jmxremote.local.only=false \
            -Dcom.sun.management.jmxremote.authenticate=false \
            -Dcom.sun.management.jmxremote.ssl=false \
            -Djava.rmi.server.hostname=$HOST_SERVER_IP \
            -Dcom.sun.management.jmxremote.port=$JMX_PORT \
            -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT \
            -Dspring.jmx.enabled=true"

  JAVA_OPTS="$JAVA_OPTS $JMX_OPTS"
else
  echo "JMX disabled"
fi

#launching our app from working dir
java ${JAVA_OPTS} -jar /opt/workdir/*.jar

So now, as soon as both containers are up and running, use HOST_SERVER_IP and JMX_PORT to connect to your JVM application inside the ECS cluster.

Tested and working for us. I hope it is helpful to others as well.

Claybourne answered 21/3, 2023 at 20:40 Comment(0)
This one worked for me.

Dockerfile

FROM openjdk:17
EXPOSE 8080 48080
ARG JAR_FILE=target/docker-test-1.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=48080","-Dcom.sun.management.jmxremote.rmi.port=48080","-Dcom.sun.management.jmxremote.ssl=false","-Dcom.sun.management.jmxremote.authenticate=false","-Djava.rmi.server.hostname=192.168.1.6", "-jar", "/app.jar"]
  • 192.168.1.6 is the IP of the Docker host
  • 48080 is my JMX port
  • 8080 is the web port of the inbuilt tomcat

I opened the JMX port in the Linux (Docker host) firewall so that it can be reached:

# firewall-cmd --zone=public --add-port=48080/tcp --permanent
# firewall-cmd --reload

JConsole worked with the JMX port and the RMI URL.
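The connection can be spelled out as a full RMI service URL, using the example values from the Dockerfile above (192.168.1.6 and 48080 are that example's host IP and JMX port):

```shell
# Compose the JMX service URL that JConsole accepts.
HOST_IP=192.168.1.6
JMX_PORT=48080
JMX_URL="service:jmx:rmi:///jndi/rmi://${HOST_IP}:${JMX_PORT}/jmxrmi"
echo "$JMX_URL"
# then: jconsole "$JMX_URL"
```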

I summarized the steps here.

https://sredigest.com/2024/02/13/how-to-access-jmx-port-of-docker-container-from-outside/

Homemaking answered 13/2 at 22:20 Comment(0)