What are good practices for creating monorepo Kubernetes infrastructure?

I'm having a really hard time trying to construct a workflow with k8s that would include:

  • Having a monorepo for multiple microservices
  • Having a single command to start all of them and begin local development
  • Having a Docker-like experience of installing the entire infrastructure on another machine that has no k8s installed on it (for local development). The goal here would be: 1. git pull, 2. start k8s, 3. wait, 4. ping localhost:3000.
  • Being able to have changes in my local files applied to services instantly, without rebuilding images etc. (something similar to Docker volumes, I guess)
  • Having modular config files, where one root config file for the infrastructure references the services' smaller configs (see the sketch below for the kind of layout I mean)
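
To illustrate that last point, I imagine something like the layout below (purely a sketch on my part; the file and service names are made up, and kustomize-style overlays are just one approach that seems to fit):

# infrastructure/kustomization.yaml — hypothetical root config that pulls in
# each microservice's own, smaller config (each directory would hold its own
# kustomization.yaml plus manifests)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - services/auth
  - services/billing
  - services/frontend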

I have looked hard for an example or guide on constructing such a system, without luck.

Am I missing something important about k8s design that makes me look for something that isn't really possible with k8s?

Why I think such question should not be closed

  • There are many developers without dev-ops experience trying their best with microservices, and I've found a lack of solid guidance for this (very common) use case.

  • There is no clear guide to a smooth local development experience with a rapid feedback loop when it comes to k8s.

  • While it's opinion-based, I find this question focused more on the general directions that would lead to such a developer experience than on exact steps.

    I'm not even sure (and I have been trying to find out) whether this is considered good practice in professional dev-ops. I have no idea how big infrastructures (tens or hundreds of microservices) are managed. Is it possible to run them all on a single machine? Is it even desirable?

Danuloff answered 7/11, 2017 at 17:24

I built something similar to what you're asking about before. I ran hyperkube manually, which is hardly recommended, but it did the trick for local development. In my case this was all running inside Vagrant for team uniformity.

# Run the kubelet (hyperkube) as a privileged container on the host network;
# it picks up static pod manifests from /etc/kubernetes/manifests.
docker run -d --name=kubelet \
        --volume=/:/rootfs:ro \
        --volume=/sys:/sys:ro \
        --volume=/var/lib/docker/:/var/lib/docker:rw \
        --volume=/var/lib/kubelet/:/var/lib/kubelet:slave \
        --volume=/var/run:/var/run:rw \
        --net=host \
        --pid=host \
        --privileged \
        --restart=always \
        gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
        /hyperkube kubelet \
            --containerized \
            --hostname-override=127.0.0.1 \
            --api-servers=http://localhost:8080 \
            --cluster-dns=10.0.0.10 \
            --cluster-domain=cluster.local \
            --allow-privileged --v=2 \
            --image-gc-high-threshold=50 \
            --image-gc-low-threshold=40 \
            --kube-api-qps 1000 --kube-api-burst=2000 \
            --pod-manifest-path=/etc/kubernetes/manifests

On top of this, I had build scripts that used YAML mustache templates and knew where things were being deployed. When deploying locally, every pod had the source code mounted as a volume so I could auto-reload it.
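
As a rough illustration (not my actual manifests; the service name and paths below are invented), the locally rendered pod spec ended up with something along these lines, shadowing the image's code directory with the checked-out source:

# Fragment of a Deployment pod spec as rendered for local development.
# The container's code directory is overridden by a hostPath mount, so a
# file watcher inside the container can reload the service on change.
containers:
  - name: my-service              # hypothetical service name
    image: my-service:dev
    volumeMounts:
      - name: src
        mountPath: /app           # wherever the image expects its code
volumes:
  - name: src
    hostPath:
      path: /vagrant/services/my-service   # shared folder inside the Vagrant VM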

The same scripts were able to deploy to production, thanks to everything being based on mustache templates. I even had multiple configuration files that applied different template values for different environments.
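
Roughly, the idea was one template per resource, rendered with a different values file per environment (again, the fields and names below are only illustrative, not my real templates):

# deployment.yaml.mustache — one template, rendered per environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: {{replicas}}          # e.g. 1 locally, 3+ in production
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: {{image}}        # locally built tag vs. registry tag

A local config would then set replicas to 1 and point image at a locally built tag, while the production config pointed at the registry.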

The build script would render the YAML templates, build whatever images it needed to build, apply everything to Kubernetes, and from there it would just auto-reload. It was a semi-nice user experience. My main issue was sluggishness when it came to file updates, because everything was running inside Docker inside Vagrant. There was no file-sharing type that offered good performance for both client and server while also allowing file watching (inotify didn't work with most share types, and NFS/SMB was too slow for IDEs).

It was my first Kubernetes experience, so I doubt it's the "recommended way", but it worked. There was a lot of scripting involved, so there are probably better ways to do this today.

Stimulate answered 7/11, 2017 at 17:37
