OpenShift and hostnetwork=true
I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine since they can discover each other using the node IP.

When the pods are deployed on different OpenShift nodes, they can't discover each other; I get "no route to host" if I try to point one pod to the other using the node IP. How can I fix this?

Pickings answered 13/10, 2017 at 21:17 Comment(13)
Why do you need to set hostnetwork to true in the first place? Any pods in the same project should be able to talk to any other pods in the same project by default, even if on different nodes. This is because each pod has its own IP address and access will be set up to allow connections.Tatia
You really shouldn't be using the node IP as the addressing mechanism for the other pods. Use the name of the pod as a hostname, or better still, use the service name as the hostname and trust the internal routing to send it to one of the pods for that service. IOW, there should be no need to use IPs anyway, as there is an internal DNS which maps pod names and service names to IPs for you.Tatia
@Graham Dumpleton I need to use hostnetwork for a Redis cluster setup. Redis requires it in order to work on Docker. That's what they state in the official Redis cluster documentation.Pickings
What is the link to the documentation? Running stuff in plain Docker is going to be different from running under OpenShift/Kubernetes. If port assignments are known, you shouldn't need host networking enabled.Tatia
You might also look at the following example for OpenShift: github.com/openshift/origin/tree/master/examples/statefulsets/…Tatia
If you search on Google for 'redis cluster kubernetes' you will also find various examples. I would suggest looking at material on running it in Kubernetes rather than trying to work it out based on how it runs on a normal Docker host. Often the official Docker images aren't built to best practices and will not run in container environments with more stringent security in place.Tatia
This is the link to the documentation: redis.io/topics/cluster-tutorial I need to use the official Redis image, and I don't see any reason why I should not use it. Everything works fine with that image except the cluster configuration, where the cluster meet command fails for instances running on different nodes... The only thing missing now is the ability to look up a pod from a pod running on a different node. Discovery through services doesn't work.Pickings
Okay, so when you say 'official', you mean from Redis. Unfortunately Docker Inc. labels their images as 'official' as well, e.g. hub.docker.com/_/redis. It is those from Docker Inc. you sometimes have to be careful of, as they usually expect to run as root and so will not work in environments where running as root is not the default.Tatia
Can you explain more how you are doing dynamic discovery through services?Tatia
Yes. For a Redis cluster, to join two nodes the cluster meet command is issued with two arguments: the IP of the Redis instance and the port on which it is listening. So if I want to do that with services, I use the service IP.Pickings
If you really need the IPs, you need to get the IPs for the pods which are listed as endpoints against the service. So you are querying the pods behind the service?Tatia
Well, yes... Behind every service there is only one pod. Why should I use pod IPs and not service IPs? Pod IPs are changeable, and service IPs should be persistent...Pickings
My mistake. I assumed you had multiple replicas behind the one service, effectively using the service as a registration list. By using the pod IP you would sidestep the iptables mapping in the kernel, but the difference shouldn't be noticeable.Tatia
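The cluster meet flow discussed in these comments can be sketched as follows. This is only an illustration: the IPs are made up, and in practice they would come from the pod IPs listed as endpoints behind each service.

```python
# Sketch of bootstrapping a Redis cluster by hand: build the redis-cli
# invocations that join every instance to the first (seed) instance.
# The addresses below are hypothetical placeholders.

def build_meet_commands(endpoints):
    """endpoints: list of (ip, port) tuples, one per Redis instance.

    Returns the redis-cli command lines that issue 'cluster meet'
    from each remaining instance toward the seed instance.
    """
    seed_ip, seed_port = endpoints[0]
    commands = []
    for ip, port in endpoints[1:]:
        # Each instance is told to meet the seed node.
        commands.append(
            f"redis-cli -h {ip} -p {port} cluster meet {seed_ip} {seed_port}"
        )
    return commands

print(build_meet_commands([("10.1.0.5", 6379), ("10.1.1.7", 6379)]))
```

This is exactly the step that fails across nodes when the meet target is a node IP that the other node cannot route to, which is the symptom described in the question.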
The uswitch/kiam (https://github.com/uswitch/kiam) service is a good example of a use case.

It has an agent process that runs on the host network of all worker nodes, because it modifies a firewall rule to intercept API requests (from containers running on the host) to the AWS API.

It also has a server process that runs on the host network to access the AWS API, since the AWS API is on a subnet that is only available to the host network.

Finally, the agent talks to the server using gRPC, which connects directly to one of the IP addresses returned when looking up the kiam-server.
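The lookup step can be sketched with a plain DNS resolution. The service name here is illustrative; kiam itself does this through gRPC's resolver rather than by hand, but the effect is the same: one name fans out to the pod IPs behind it.

```python
import socket

def resolve_server_ips(hostname, port):
    """Resolve a service name to the set of IPs behind it, the way a
    client like the kiam agent would before dialing one of them."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # getaddrinfo returns one entry per (family, address); collect the
    # unique IP strings from the sockaddr tuples.
    return sorted({info[4][0] for info in infos})

# Inside a cluster, resolve_server_ips("kiam-server", 443) would return
# the pod IPs registered behind that service; here we use localhost so
# the sketch runs anywhere.
print(resolve_server_ips("localhost", 443))
```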

So you have pods of the agent deployment running on the host network of node A trying to connect to the kiam server running on the host network of node B... which just does not work.

Furthermore, this is a private service; it should not be available from outside the network.

Bigmouth answered 7/8, 2020 at 4:5 Comment(0)
If you want the two containers to share the same physical machine and take advantage of loopback for quick communication, then you would be better off defining them together as a single Pod with two containers.
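As an illustration of the single-Pod approach, a two-container Pod manifest could look like the following, expressed as the Python dict you would hand to a Kubernetes client. The names and images are made up for the example.

```python
# A hypothetical two-container Pod: both containers share one network
# namespace, so "sidecar" can reach "redis" on 127.0.0.1:6379 without
# hostNetwork or any service discovery.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "redis-with-sidecar"},
    "spec": {
        "containers": [
            {
                "name": "redis",
                "image": "redis",
                "ports": [{"containerPort": 6379}],
            },
            {
                "name": "sidecar",
                "image": "busybox",
                "command": ["sh", "-c", "sleep infinity"],
            },
        ],
    },
}

print([c["name"] for c in pod["spec"]["containers"]])
```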

If the two containers are meant to float over a larger cluster and be more loosely coupled, then I'd recommend taking advantage of the Service construct within Kubernetes (under OpenShift) and using that for the appropriate discovery.

Services are documented at https://kubernetes.io/docs/concepts/services-networking/service/. Along with an internal DNS service (if implemented - common in Kubernetes 1.4 and later), they provide a means to let Kubernetes manage where things are, updating an internal DNS entry in the form of <servicename>.<namespace>.svc.cluster.local. So for example, if you set up a Pod with a service named "backend" in the default namespace, the other Pod could reference it as backend.default.svc.cluster.local. The Kubernetes documentation on the DNS portion of this is available at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
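The naming scheme above can be sketched as a small helper; the service and namespace names are just the examples from this answer.

```python
def service_fqdn(service, namespace="default", zone="cluster.local"):
    """Build the in-cluster DNS name Kubernetes publishes for a service:
    <servicename>.<namespace>.svc.<cluster zone>."""
    return f"{service}.{namespace}.svc.{zone}"

print(service_fqdn("backend"))  # backend.default.svc.cluster.local
```

A client in any namespace of the same cluster can use that name instead of a pod or node IP, and the cluster DNS keeps it pointing at live endpoints.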

This also avoids the "hostnetwork=true" complication, and lets OpenShift (or specifically Kubernetes) manage the networking.

Unsaid answered 23/10, 2017 at 0:44 Comment(2)
Unfortunately, as I explained in the comments above, I must use host networking. That is a limitation of the application which I'm deploying on OpenShift.Pickings
I don't know what the application is (obviously), but if it's requiring hostnetwork=true, then deploying within Kubernetes/OpenShift may be a poor choice. They support an abstraction to keep two containers tightly coupled (network & storage wise) with the "N-containers per Pod" concept, but otherwise expect the network to be abstracted more significantly than this application may allow.Unsaid

If you absolutely have to use hostnetwork, you should create a router and then use those routers for communication between pods. You can create an HAProxy-based router in OpenShift; see https://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html

Canute answered 24/10, 2017 at 11:36 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.