Question: "I am trying to make my kube-cluster pull from a registry running inside itself." (Note: I plan to edit the title of your question to clarify it slightly / make it easier to search.)
Short Answer: You can't*
*Nuanced Answer: Technically it is possible with hacks and a solid understanding of Kubernetes fundamentals. You'll probably want to avoid doing this unless you have a really good reason and fully understand the fundamental issue and the workarounds, as this is an advanced use case that will require debugging to force it to work. It's complicated and nuanced enough that step-by-step directions are difficult, but I can give you a solid idea of the fundamental issue you ran into that makes this challenging, plus high-level guidance on how to pull off what you're trying to do anyway.
Why you can't / the fundamental issue you ran into:
In Kubernetes land 3 networks tend to exist: Internet, LAN, and Inner Cluster Network.
(Resource that goes into more depth: https://oteemo.com/kubernetes-networking-and-services-101/)
AND these 3 networks each have their own DNS / there's 3 layers of DNS.
- Internet DNS: 8.8.8.8, 1.1.1.1, 9.9.9.9 (google, cloudflare, quad9 or whatever public internet DNS the router is configured to point to.)
- LAN DNS: 192.168.1.1 (LAN DNS hosted on your router)
- CoreDNS: 10.43.0.10 (10th IP of the CIDR range of the inner cluster network)
Here's the gotcha you're running into:
- A pod can resolve DNS entries hosted at any of these 3 levels of DNS.
- The OS hosting the Kubernetes Cluster can only resolve DNS entries hosted on LAN DNS or Internet DNS. (the OS isn't scoped to have visibility into the existence of CoreDNS/Inner Cluster Network.)
- kubelet + docker/containerd/cri-o/another container runtime are responsible for pulling images from registries, and these exist at the OS level in the form of systemd services, so they don't have scope to Inner Cluster DNS names. This is why what you're trying to do is failing.
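You can see the 3-layers-of-DNS split for yourself by comparing the resolver config a pod sees with the one the node OS sees. This is a hedged sketch: the pod name, the 10.43.x.x CoreDNS IP (k3s-style default), and the 192.168.1.1 LAN DNS are placeholders matching the examples above; your values will differ.

```shell
# Inside a pod: the nameserver is CoreDNS on the inner cluster network
kubectl exec mypod -- cat /etc/resolv.conf
# nameserver 10.43.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local

# On the node OS itself: the nameserver is LAN/Internet DNS only
cat /etc/resolv.conf
# nameserver 192.168.1.1

# Consequence: an inner-cluster service name resolves inside a pod,
# but NOT on the node, which is where kubelet + the runtime pull images
kubectl exec mypod -- nslookup kube-registry.kube-system.svc.cluster.local  # resolves
nslookup kube-registry.kube-system.svc.cluster.local                        # fails on the node
```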
Workaround options / hacks and nuances you can do to force what you're trying to do to work:
Option 1.) (I don't suggest this; it has extra-difficult chicken-and-egg issues, sharing for information purposes only)
Host an additional instance of CoreDNS as a LAN-facing DNS instance on Kubernetes. Expose the registry and the 2nd instance of CoreDNS to the LAN via explicit NodePorts (using static service manifests so they'll come up with predictable/static NodePorts, vs random NodePorts in the range 30000-32767), so they're routable from the LAN (I suggest NodePorts over LBs here as one less dependency/thing that can go wrong). Have the 2nd instance of CoreDNS use your LAN router/LAN DNS as its upstream DNS server. Reconfigure the OS to use the LAN-facing CoreDNS as its DNS server.
Option 2.) More reasonable, and what Trow does:
Pay ~$12 for a domain, some-dns-name.tld
Use the Cert-Manager Kubernetes Operator, or a standalone Certbot docker container plus proof you own the domain, to get an HTTPS cert for registry.some-dns-name.tld from Let's Encrypt for free. Then configure your inner-cluster-hosted registry to use this HTTPS cert.
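The cert-manager route looks roughly like the following. This is a hedged sketch, not a drop-in config: the domain, email, issuer/secret names, and the Cloudflare DNS-01 solver are all placeholder assumptions (any DNS-01 provider cert-manager supports works; DNS-01 is assumed because an internal registry usually can't answer an HTTP-01 challenge from the internet).

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@some-dns-name.tld            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - dns01:
        cloudflare:                         # example provider; swap for yours
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-tls
  namespace: kube-system
spec:
  secretName: registry-tls                  # mount this secret into the registry pod
  dnsNames:
  - registry.some-dns-name.tld
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
EOF
```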
Expose the registry hosted in the cluster to the LAN using a NodePort service with an explicitly pinned, convention-based port number, like 32443.
Why NodePort and not a LB? There are 3 reasons NP is better than LB for this scenario:
1.) Service type LB's implementation differs between deployment environment and Kubernetes distribution, while type NodePort is universal.
2.) If the LB changes you have to update every node's /etc/hosts file to point to "LB_IP registry.some-dns-name.tld", AND you have to know the LB IP, which isn't always known in advance, meaning you'd have to follow some order of operations. If you use service type NodePort you can add the localhost IP entry to every node's /etc/hosts, so it looks like "127.0.0.1 registry.some-dns-name.tld"; it's well known, reusable, and simplifies order of operations.
3.) If you ever need to change where your cluster is hosted, you can arrange it so you can make the change in 1 centralized location, even in scenarios where you have no access to or control over LAN DNS. You can craft services that point to a statically defined IP or external name (which could exist outside the cluster) and have the NodePort service point to the statically defined service.
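A statically pinned NodePort service for the registry might look like this. Hedged sketch: the namespace, selector label, and container port 5000 (the registry:2 default) are assumptions you'd adjust to match your registry deployment; only the 32443 convention comes from the text above.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: registry          # must match your registry pod's labels
  ports:
  - port: 443
    targetPort: 5000       # registry container's listening port
    nodePort: 32443        # explicitly pinned, instead of a random 30000-32767
EOF
```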
Add "127.0.0.1 registry.some-dns-name.tld" to /etc/hosts of every node in the cluster.
Set your yaml manifests to pull from registry.some-dns-name.tld:32443, or configure containerd/cri-o's registry mirroring feature to map registry.some-dns-name.tld:32443 to whatever entries are being mirrored on your local registry.
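For the containerd mirroring variant, the per-registry hosts.toml mechanism looks roughly like this. Hedged sketch: it assumes containerd's registry `config_path` is set to `/etc/containerd/certs.d` (the common default location), and mirrors docker.io as an example upstream.

```shell
# Mirror docker.io pulls through the in-cluster registry (run on every node)
mkdir -p /etc/containerd/certs.d/docker.io
cat <<'EOF' > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://registry.some-dns-name.tld:32443"]
  capabilities = ["pull", "resolve"]
EOF
```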
There are 2 more solvable chicken-and-egg problems to deal with. The 1st chicken-and-egg problem is that Kubernetes and the registry will both likely need access to container images to even get this far.
- If you have internet access and your internal registry is just for caching purposes, this probably isn't a big deal.
- If you don't have internet access you'd need to .tar up the images needed by your kube distro and registry and "preload" them into the spot docker/containerd/cri-o expects.
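The preloading step above can be sketched like this (image names are examples; the `k8s.io` namespace is what containerd-backed Kubernetes distros conventionally pull into):

```shell
# On a machine WITH internet access: save the needed images to tarballs
docker pull registry:2
docker save registry:2 -o registry.tar

# Move registry.tar to the air-gapped node, then import it into the runtime:
ctr -n k8s.io images import registry.tar   # containerd-based clusters
# (cri-o setups instead preload via their containers/storage, e.g. podman load)
```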
- If you don't have internet access you could alternatively have another /etc/hosts entry or LAN DNS entry for a non HA temporary docker compose based registry used for initial bootstrapping or one hosted outside the cluster.
- 2nd chicken egg problem is the registry hosted on your cluster will need some way of being seeded with images.
- If you have internet access, this should be easy to figure out and script.
- If no internet access you may need to come up with some kind of DR solution for backup and restoration of the registry's backend persistent storage.
- If no internet access you could alternatively use an "ephemeral transport registry" for seeding purposes. Basically use docker compose (or docker run) to spin up a non-HA registry:2 image with filesystem backing, use skopeo to seed that, tar up the filesystem backing and move it to another computer, restore the filesystem backing, reload the non-HA registry:2 from the pre-populated filesystem backing, then use skopeo to copy from your "ephemeral transport registry" to the registry hosted in the cluster.
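The ephemeral-transport-registry flow above can be sketched as follows. Hedged sketch: nginx:1.25 stands in for whatever images you need, paths are placeholders, and the `--*-tls-verify=false` flags are there because the transport registry is plain HTTP on localhost.

```shell
# 1) On the internet-connected machine: non-HA registry with filesystem backing
docker run -d --name transport -p 5000:5000 \
  -v "$PWD/registry-data:/var/lib/registry" registry:2

# 2) Seed it with skopeo
skopeo copy --dest-tls-verify=false \
  docker://docker.io/library/nginx:1.25 \
  docker://localhost:5000/library/nginx:1.25

# 3) Tar up the backing store and carry it across the air gap
docker stop transport
tar czf registry-data.tgz registry-data

# 4) On the air-gapped machine: restore the backing store, relaunch registry:2
tar xzf registry-data.tgz
docker run -d --name transport -p 5000:5000 \
  -v "$PWD/registry-data:/var/lib/registry" registry:2

# 5) Copy from the transport registry into the in-cluster registry
skopeo copy --src-tls-verify=false \
  docker://localhost:5000/library/nginx:1.25 \
  docker://registry.some-dns-name.tld:32443/library/nginx:1.25
```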