Container running on Docker Swarm not accessible from outside

I am running my containers on Docker Swarm. The asset-frontend service is my frontend application, which runs Nginx inside the container and exposes port 80. Now if I do

curl http://10.255.8.21:80

or

curl http://127.0.0.1:80

from the host where I am running these containers, I can see my asset-frontend application, but it is not accessible from outside the host: I cannot access it from another machine. My host machine's operating system is CentOS 8.

This is my docker-compose file:

version: "3.3"
networks:
  basic:
services:
  asset-backend:
    image: asset/asset-management-backend
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic
  asset-mongodb:
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "27017:27017"
    volumes:
      - $HOME/asset/mongodb:/data/db
    networks:
      - basic
  asset-postgres:
    image: asset/postgresql
    restart: always
    env_file: .env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=asset-management
    volumes:
      - $HOME/asset/postgres:/var/lib/postgresql/data
    networks:
      - basic
  asset-frontend:
    image: asset/asset-management-frontend
    restart: always
    ports:
      - "80:80"
    environment:
      - ENV=dev
    depends_on:
      - asset-backend
    deploy:
      replicas: 1
    networks:
      - basic
  asset-autodiscovery-cron:
    image: asset/auto-discovery-cron
    restart: always
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic

This is the output of docker service ls:

ID                  NAME                                       MODE                REPLICAS            IMAGE                                         PORTS
auz640zl60bx        asset_asset-autodiscovery-cron   replicated          1/1                 asset/auto-discovery-cron:latest         
g6poofhvmoal        asset_asset-backend              replicated          1/1                 asset/asset-management-backend:latest    
brhq4g4mz7cf        asset_asset-frontend             replicated          1/1                 asset/asset-management-frontend:latest   *:80->80/tcp
rmkncnsm2pjn        asset_asset-mongodb              replicated          1/1                 mongo:latest                                  *:27017->27017/tcp
rmlmdpa5fz69        asset_asset-postgres             replicated          1/1                 asset/postgresql:latest                  *:5432->5432/tcp

Port 80 is open in the firewall; the following is the output of firewall-cmd --list-all:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

If I inspect the created network, the output is the following:

[
    {
        "Name": "asset_basic",
        "Id": "zw73vr9xigfx7hy16u1myw5gc",
        "Created": "2019-11-26T02:36:38.241352385-05:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
                "Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
                "EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
                "MacAddress": "02:42:0a:00:03:08",
                "IPv4Address": "10.0.3.8/24",
                "IPv6Address": ""
            },
            "943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
                "Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
                "EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
                "MacAddress": "02:42:0a:00:03:04",
                "IPv4Address": "10.0.3.4/24",
                "IPv6Address": ""
            },
            "afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
                "Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
                "EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
                "MacAddress": "02:42:0a:00:03:0a",
                "IPv4Address": "10.0.3.10/24",
                "IPv6Address": ""
            },
            "f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
                "Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
                "EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
                "MacAddress": "02:42:0a:00:03:0c",
                "IPv4Address": "10.0.3.12/24",
                "IPv6Address": ""
            },
            "f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
                "Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
                "EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
                "MacAddress": "02:42:0a:00:03:06",
                "IPv4Address": "10.0.3.6/24",
                "IPv6Address": ""
            },
            "lb-asset_basic": {
                "Name": "asset_basic-endpoint",
                "EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
                "MacAddress": "02:42:0a:00:03:02",
                "IPv4Address": "10.0.3.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": {
            "com.docker.stack.namespace": "asset"
        },
        "Peers": [
            {
                "Name": "8170c4487a4b",
                "IP": "10.255.8.21"
            }
        ]
    }
]
Malachite answered 23/11, 2019 at 12:45 Comment(17)
If you're deploying these services on a cloud provider, make sure your instance configuration allows port 80, e.g. add port 80 to the security group of your EC2 instance to make it accessible from the internet.Ingrowth
@ToanQuocHo no, I am deploying these services on Hyper-V VMs.Malachite
So did you map the port from your outer host machine into your Docker host machine? :D I ask because it seems you're using Windows to set up a Hyper-V VM, and inside that VM you install Docker and set up Docker Swarm for your app. In that case you also have to map a port from your Windows machine to the corresponding port on your Hyper-V VM. After that you can connect from outside.Ingrowth
That is already done, because previously on the same VM I was running my frontend server with Nginx exposed on port 80 without Docker, and I was able to access it.Malachite
So you'd already set it up on the VM without Docker and other machines could connect, but now after setting it up via Docker it's not working? It's supposed to work if you are able to connect from your Docker host.Ingrowth
yes that is the issue :-(Malachite
docker logs opsalliant_opsalliant-frontend ?Johnsten
@Johnsten it shows nothing. I went inside the Docker container and checked access.log and error.log; those are also empty.Malachite
And when I curl from the host machine itself I can see the page, and docker logs shows 10.255.0.2 - - [26/Nov/2019:06:18:39 +0000] "GET / HTTP/1.1" 200 910 "-" "curl/7.61.1" "-"Malachite
And is 10.255.8.21 routable in your network? If you try traceroute 10.255.8.21, does it take the right path? If so, it is 100% a firewall issue.Johnsten
The output of traceroute 10.255.8.21 is traceroute to 10.255.8.21 (10.255.8.21), 64 hops max 1 192.168.1.1 0.699ms 0.613ms 0.553ms 2 192.168.37.1 1.068ms 0.989ms 1.194ms 3 10.10.10.1 1.747ms 3.600ms 1.687ms 4 10.255.8.21 2.240ms !X 1.287ms !X 2.243ms !XMalachite
Try this on the host where Docker is running: firewall-cmd --zone=public --permanent --add-service=httpJohnsten
Still not accessible :( The output of netstat -tulpn is tcp6 0 0 :::80 :::* LISTEN 26978/dockerdMalachite
@Johnsten after running firewall-cmd --zone=public --permanent --add-service=http, if I do curl http://10.255.8.21:80 it shows no output and hangs, while previously it said No route to host.Malachite
Try deleting the old container and setting the ports section to "10.255.8.21:80:80", and run firewall-cmd --reload on the host if you did not do that after adding the rule.Johnsten
When I did docker stack deploy it said WARN[0000] ignoring IP-address (10.255.8.21:80:80/tcp) service will listen on '0.0.0.0'Malachite
Let us continue this discussion in chat.Malachite
14

Ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. You can verify this using docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
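For example, to print just the ingress subnet for comparison (docker network inspect accepts a Go template via --format):

docker network inspect ingress --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'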

To fix it, you can update the configuration of the ingress network as described in Customize the default ingress network; in summary:

  1. Remove services that publish ports
  2. Remove existing network: docker network rm ingress
  3. Recreate it using a non-conflicting subnet (172.16.0.0/16 below is just an example; use whatever subnet you want):
    docker network create \
        --driver overlay \
        --ingress \
        --subnet 172.16.0.0/16 \
        --gateway 172.16.0.1 \
        ingress
  4. Restart services
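For step 4, redeploying this question's stack would look roughly like this (assuming the compose file shown above is named docker-compose.yml; the stack name asset comes from the docker service ls output):

docker stack deploy -c docker-compose.yml asset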

You can avoid a clash to begin with by specifying the default subnet pool when initializing the swarm using the --default-addr-pool option.
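For example, a minimal sketch of initializing a swarm with a non-conflicting pool (172.16.0.0/16 is only an illustration; pick a range that does not overlap your LAN):

docker swarm init --default-addr-pool 172.16.0.0/16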

Actium answered 14/8, 2020 at 2:10 Comment(6)
Thank you, Sir! This is the only answer on the whole internet that fixed my problem. I would like to buy you a coffee sometime.Goy
This is the answerWaites
Nice one, lifesaver!!Providence
Still not working for me. I recreated ingress (10.10.0.0/16) since it was conflicting with my LAN (10.0.0.0/24), and my bookstack stack is still not accessible from the LAN on exposed port 6875. Any other suggestions?Fibroid
Fixed my issue. Recreating the ingress network did not fix the local LAN conflict, because Default Address Pool: 10.0.0.0/8 was still inside Docker (run docker info). I had to recreate the swarm: docker swarm leave --force, then docker swarm init --default-addr-pool 11.0.0.0/8 --advertise-addr 10.0.0.160 (I used 11.0.0.0/8 so it does not conflict with my 10.0.0.0 network; 10.0.0.160 is the Docker host IP).Fibroid
thank you so much, fixed my issues after several hours inspecting.Klepac
2

I ran into this same issue. It turned out my iptables filter rules were causing external connections to fail.

In Docker swarm mode, Docker creates a virtual bridge device, docker_gwbridge, to reach the overlay network. My iptables configuration had the following line, which drops forwarded packets:

:FORWARD DROP

That prevents packets arriving on the physical NIC from reaching the Docker ingress network, so my Docker service only worked on localhost.

Change the iptables policy to

:FORWARD ACCEPT

And the problem was solved without touching Docker.
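If you would rather flip the policy at runtime than edit the saved rules file, a minimal sketch (the persistence step varies by distribution):

iptables -P FORWARD ACCEPT                # set the filter table's FORWARD policy at runtime
iptables-save > /etc/sysconfig/iptables   # one way to persist it on CentOS/RHEL; the path differs elsewhere

Note that an ACCEPT policy forwards everything; if that is too broad for your environment, a narrower rule that only accepts traffic on docker_gwbridge is tighter.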

Topside answered 15/11, 2022 at 22:3 Comment(0)
1

My particular problem was that the hostname was resolving to an IPv6 address of the Docker host, and the iptables rules automagically installed by Docker Swarm are IPv4-only.

To diagnose:

  • iptables-save > rules.txt # inspect the rules to make sure everything is in order for the :FORWARD chain
  • netcat -vz myhost 80 # connected with no problems
  • wget http://myhost # resolved to an IPv6 address and just hung
  • wget http://10.20.30.40 # brought back my web-facing pod's port 80 response instead of the packets getting dropped

To resolve:

My clients were using IPv6 by default, so using a modified /etc/hosts or connecting directly via the IPv4 address worked. I then redid the iptables rules for ip6tables, and all is good!
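A quick way to confirm this failure mode, reusing the placeholder hostname myhost from above (standard tools, shown only as a diagnostic sketch):

getent ahosts myhost   # lists resolved addresses in order; an IPv6 address first reproduces the issue
wget -4 http://myhost  # forces IPv4; if this works while plain wget hangs, the IPv6 path is the culprit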

Many thanks to @suyuan in the previous response for the suggestion to look at the forwarding rules in iptables.

Stuffed answered 15/5, 2023 at 13:5 Comment(0)
0
You can publish ports by updating the service:

docker service update your-service --publish-add 80:80
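Per the comments below, if the port is already published from the stack file this fails with "duplicate published ports provided"; removing the old mapping in the same update avoids that. A sketch against the question's service name:

docker service update asset_asset-frontend --publish-rm 80 --publish-add published=80,target=80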

Outfitter answered 23/11, 2019 at 13:34 Comment(4)
I tried this; the response is Error response from daemon: rpc error: code = InvalidArgument desc = EndpointSpec: duplicate published ports providedMalachite
I removed the stack and deployed it again, then updated the service using the command you gave. Now the output is a success, but it is still not accessible from outside: opsalliant_opsalliant-frontend overall progress: 1 out of 1 tasks 1/1: running [==================================================>] verify: Service convergedMalachite
What IP did you use to access it?Outfitter
The host machine IP, e.g. 10.255.8.21.Malachite
0

Can you try the hostname host.docker.internal instead of the IP address? So something like http://host.docker.internal:80

Airburst answered 2/12, 2019 at 14:49 Comment(0)
0

I suggest you verify the "right" behavior using docker-compose first. Then try docker swarm without a network specification, just to verify there are no network interface problems.

Also, you can use the command below to check your LISTEN ports:

netstat -tulpn
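On CentOS 8, net-tools (which provides netstat) may not be installed by default; ss from iproute2 reports the same listening sockets:

ss -tulpn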

EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.

Rainstorm answered 5/6, 2020 at 6:39 Comment(0)
0

When running Docker, provide a port mapping, like:

docker run -p 8081:8081 your-docker-image

Or provide the port mapping in Docker Desktop while starting the container.

Greyso answered 11/6, 2022 at 12:21 Comment(0)
