Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self hosted stuff? For those who've tried it and went back to Docker, why?
I'm on my 3rd rebuild of a K8s cluster, starting fresh after learning from things I'd done wrong. Back when I was enhancing my Docker setup and deciding between K8s and Docker Swarm, I picked K8s for the learning opportunities and for how it could help me at work.
Kubernetes is useful if you have gone full cattle over pets. And that is very uncommon in home setups.
If you only own one or two small machines you can't destroy and rebuild infra easily in a "cattle" way, and the bloat that comes with Kubernetes doesn't help you either.
In homelabs and home servers the pros of Kubernetes (high availability, auto-scaling, GitOps integrations, etc.) are not very useful. Why would you need autoscaling and HA for an SFTP server used only by you? Instead you write a docker-compose.yml and call it a day.
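To make that concrete, "call it a day" looks roughly like this (a minimal sketch; atmoz/sftp is just a stand-in image, and the user/password/paths are placeholders):

```yaml
# docker-compose.yml -- single-user SFTP, no HA, no autoscaling
services:
  sftp:
    image: atmoz/sftp:alpine
    ports:
      - "2222:22"             # SFTP exposed on host port 2222
    volumes:
      - ./data:/home/me/upload
    command: me:changeme:1001 # user:password:uid (placeholders)
    restart: unless-stopped   # the only "self-healing" most home services need
```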
The one exception to this is if you're using your homelab to learn kubernetes.
That was the only time I used K8s and k3s on my homelab.
And for anything that I do want to set up in a HA/cattle kind of way, I use Docker Swarm, as it feels like a more comfortable extension of docker compose.
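For what it's worth, the "extension of docker compose" bit is pretty literal: a compose file becomes a Swarm stack mostly by adding a deploy: block. A rough sketch (traefik/whoami is just a demo image, and the placement constraint assumes you actually have worker nodes):

```yaml
# stack.yml -- deploy with: docker stack deploy -c stack.yml mystack
version: "3.8"
services:
  whoami:
    image: traefik/whoami        # tiny demo web service, stands in for a real app
    ports:
      - "8080:80"
    deploy:                      # Swarm-only section; plain `docker compose up` ignores most of it
      replicas: 2
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker  # drop this on a single-node swarm
```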
I think the biggest reasons for me have been growth and professional development. I started my home cluster 8 years ago as a single node, basically just running the hack/ scripts on my Linux desktop. I've been able to grow that same cluster to 6 hosts as I've replaced desktops and got a bit into the used enterprise server scene. I've replaced multiple routers and moved behind Cloudflare, added a private CA a few times, added solid persistence with Rook+Ceph, built my ideal telemetry stack, added Velero backups into Backblaze B2, and probably a lot more I'm not thinking of.
That whole time, I've had to do almost zero maintenance or upgrades on the side projects I've built over the years, or on the self-hosted services I've run. That's if you ignore the day or so a year I've spent cursing my propensity to upgrade a tad too early and hit snags, though I've just about always been able to resolve those pretty quickly and have learned even more from them.
And on top of that, I get to take a lot of that expertise to work where it happens to pay quite well. And I've spent some time working towards building the knowledge into a side gig. Maybe someday that'll pay the bills too.
While you're probably right overall, there are many good reasons to use k8s. The API provides all sorts of benefits: kubectl, k9s, and other operational UIs; good deployment models and tools like Argo; loads of helm charts that are (theoretically) ready to use.
No, those things aren't free. There's a lot of overhead to running k8s.
I'm not very familiar with kubernetes or k3s but I thought it was a way to manage docker containers. Is that not the case? I'm considering deploying a k3s cluster in my proxmox environment to test it out.
Kubernetes is abbreviated K8s (because there are 8 letters between the "k" and the "s"). K3s is a "lite" version. Generally speaking, Kubernetes manages your containers: you basically tell K8s what the state should be, and it does whatever it needs to do to get the environment as you've declared. It'll check and start or restart services, and start containers on a node that can run them (e.g. one with enough RAM available). There's a lot more, but that's the general idea.
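If it helps, "declare the state" looks roughly like this in practice (a minimal sketch; the names and numbers are made up):

```yaml
# deployment.yaml -- "keep 2 copies of this running, wherever they fit"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2                # desired state; K8s restarts/reschedules pods to keep this true
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          resources:
            requests:
              memory: "32Mi" # the scheduler only places this on a node with that much free
```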
Oh, it is not that much: I run AdGuard DNS with adblocking, SearXNG as my search engine, and Vaultwarden as my password manager. All combined with Argo CD as the GitOps engine, nginx ingress with cert-manager for Let's Encrypt certificates, Longhorn as the storage layer, and MetalLB as the load balancer solution. I am planning to completely replace my current setup (an old Sandy Bridge powered HP MicroServer) with a Turing Pi 2 cluster board with 4 RPi4 compute modules as soon as they get cheaper.
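If anyone's curious, the cert-manager piece is just one small manifest, something like this (the email and secret name are placeholders):

```yaml
# clusterissuer.yaml -- tells cert-manager how to get Let's Encrypt certs
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder; used for expiry notices
    privateKeySecretRef:
      name: letsencrypt-account-key   # where the ACME account key gets stored
    solvers:
      - http01:
          ingress:
            class: nginx              # matches the nginx ingress mentioned above
```

After that, annotating an Ingress with `cert-manager.io/cluster-issuer: letsencrypt-prod` is enough to get certificates issued and renewed automatically.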
I manage like 200 servers in Google Cloud k8s, but I don't think I'd do that for home use. The core purpose is to manage multiple servers and assign processes between them: auto-scaling, a cluster-internal network, and so on. Running docker containers for single-instance apps for personal use doesn't require this kind of complexity.
My NAS software has a docker thing just built into it. I can upload or specify a package and it just runs it on the local hardware. If you have a Linux shell, I guess all you really have to do is run dockerd to start the daemon, make sure your network config allows connections, and upload your docker containers to it for running
My thinking is the same. I see lots of k8s mentions on here and from coworkers, but at home all I use is docker and VMs, because I don't want all that complexity I have to deal with at work.
Kubernetes is great if you run lots of services and/or already use Kubernetes at work. I use it all the time and I've learned a lot on my personal cluster that I've taken to work to improve their systems. If you're used to managing infra already then it's not that much more work, and it's great to be able to shut down a server for maintenance and not have to worry about more than a brief blip on your home services.
I use k8s at work and have built a k8s cluster in my homelab... but I did not like it. I tore it down and am currently using podman, and I don't think I'd go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I've settled on podman myself).
K8s itself is quite resource-consuming, especially on RAM. My homelab is built on old/junk hardware from retired workstations, and I don't want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but then it's not quite k8s either. If I'm going to start trimming off the parts of k8s I don't need, I end up going all the way to single-node podman/docker, not stopping at the halfway point that is k3s.
If you don't use host networking, the k8s model of routing traffic only within the cluster except at explicit egress points is pure overhead. It's totally necessary when you have a thousand engineers slinging services around your cluster, but there's no benefit to that level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
Podman accepts a subset of k8s resource YAML as a docker-compose-like config interface, which lets me use my familiarity with k8s configs in my podman setup.
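Concretely, podman can start a pod straight from a k8s-style manifest, something like this (the image and host path are just illustrative placeholders):

```yaml
# pod.yaml -- run with: podman kube play pod.yaml
#            (on older podman versions the subcommand is `podman play kube`)
apiVersion: v1
kind: Pod
metadata:
  name: vaultwarden
spec:
  containers:
    - name: vaultwarden
      image: docker.io/vaultwarden/server:latest
      ports:
        - containerPort: 80
          hostPort: 8080         # published on the host, compose-style
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /srv/vaultwarden   # placeholder host directory
```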
Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraint that k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively... but I don't have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s' solutions to them is annoying there.
I'd suggest Podman over docker if someone is starting fresh. I like Podman running rootless, but moving an existing docker setup to Podman was a pain. Since the initial docker setup was also a pain, I'd rather have only done it once :/
For me, K8s only makes sense for large use cases (in terms of volume of traffic and users). Docker/Podman is sufficient to self-host something small.
I find mine useful as both a learning process and as a thing I need. I don't like using cloud services where possible, so I set things up to replace having to rely on those: Nextcloud for storage, Plex and some *arr servers for media, etc. And once you weigh the hardware and power costs against what I'd pay for all the subs (particularly cloud storage), it comes out cheaper, at least with the hardware I'm using.
A lot of people thought this was the case for VMs and docker as well, and now it seems to be the norm.
Yes, but docker does provide features that are useful at the level of a hobbyist self-hosting a few services for personal use (e.g. reproducibility).
I like using docker and ansible to set up my systems, as I can painlessly reproduce everything or migrate to a different VPS in a few minutes.
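Roughly like this, for the curious (a sketch with placeholder names; it assumes the community.docker collection is installed):

```yaml
# site.yml -- run with: ansible-playbook -i inventory site.yml
- hosts: vps
  become: true
  tasks:
    - name: Ensure app directory exists
      ansible.builtin.file:
        path: /opt/myapp
        state: directory

    - name: Copy the compose file over
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /opt/myapp/docker-compose.yml

    - name: Bring the stack up
      community.docker.docker_compose_v2:   # assumes docker + the compose plugin on the host
        project_src: /opt/myapp
```

Pointing the inventory at a new VPS and re-running the playbook is basically the whole migration.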
But kubernetes seems overkill. None of my services have enough traffic to justify replicas, I'm the only user.
Besides learning (which is a valid reason), I don't see why one would bother setting it up at home. Unless there's a very specific use-case I'm missing.
For me, I find that I learn more effectively when I have a goal. Sure, it's great to follow somebody's "Hello World" web site tutorial, but the real learning comes when I start to extend it to include CI/CD for example.
As far as a use case, I'd say that learning IS the use case.
Kubernetes is awesome for self-hosting, but tbh its superpower isn't the multi-node/scalability/clustering shenanigans. It's that because every bit of configuration is just an object in the API, you can really easily version control everything: charts and config in git, tools like Helm making changes super easy to apply, Renovate doing automatic updates, your CI tool of choice deploying on commit. Leverage your hobby into a DevOps role, profit.
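As a sketch of the "deploy on commit" part: one Argo CD Application pointing at your repo, and Argo keeps the cluster synced to whatever is in git (repo URL and paths here are placeholders):

```yaml
# application.yaml -- Argo CD watches the repo and applies what it finds
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/you/homelab.git  # placeholder repo
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc       # the cluster Argo CD runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from git
      selfHeal: true   # revert manual drift back to what git says
```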
Usually though it's because I run most stuff bare metal anyway so LXC is for temporary or random cases where I need a weird dependency or I want to run a niche service.
I only use docker when I actually want a faster setup, like docker-osx, which does all the VM stuff for running a virtual Mac for you.
I don't really mind docker, but for the homelab I just find myself rewriting the Dockerfile any time I want to change something, which I don't really need to do if I'm not publishing it or even reusing it.
Kubernetes is really more effective for services under actual load, which you never need in a homelab lol. It's great for learning to run a k8s cluster, but the resources get eaten fast.
It's a damn shame it's moving away from free open source. I just switched my lab over to Nomad and Consul last year and it has been incredibly smooth sailing.
I've been reading into k3s out of curiosity, which as I understand it is supposed to be one of the simpler ones, and even as someone who works as a developer and maintains a small homelab, it just makes me feel utterly clueless lol. Which is to say, I'll definitely be giving Nomad a good look.
Oh and if you do happen to have any other more newbie friendly suggestions, I'd love to hear about them!
Seriously though I changed to nomad/consul/gluster and it’s been wonderful. I still have some other things running on my nas software like Jellyfin and audiobookshelf, but that’s just for performance and simplicity.
I was a bit put off by HashiCorp's license change, but I don't think I'm changing back to k3s anytime soon. Nomad is just so nice and easy.
Helm is one of the reasons I became interested in Kubernetes. I really like the idea of a package where all I have to do is provide my preferences in a values file. Before Swarm was mature, I was managing my containers with complicated shell scripts to bring stuff up in the right order, and it became fragile and unmaintainable.
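That values-file workflow is basically this (a hypothetical chart; real key names vary, so check the chart's own values.yaml for what it supports):

```yaml
# values.yaml -- my preferences; everything else comes from the chart's defaults
replicaCount: 1
image:
  tag: "1.2.3"                       # pin a version instead of floating on latest
persistence:
  enabled: true
  size: 10Gi
ingress:
  enabled: true
  hosts:
    - host: app.home.example.com     # placeholder hostname
```

Then `helm install myapp somerepo/somechart -f values.yaml` renders and applies the whole stack, which is exactly the part my shell scripts used to do badly.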
I run a 2-node k3s cluster. There are a few small advantages over Docker Swarm, built-in network policies to lock down my VPN/torrent pod being the main one.
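For reference, that lockdown is roughly the following (labels, addresses, and ports are placeholders): the pod's egress is denied by default, and only the VPN endpoint and DNS are allowed out.

```yaml
# networkpolicy.yaml -- torrent pod may only talk to the VPN endpoint (and DNS)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: torrent-vpn-only
spec:
  podSelector:
    matchLabels:
      app: torrent                  # placeholder label on the VPN/torrent pod
  policyTypes:
    - Egress                        # everything not matched below is dropped
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # placeholder VPN endpoint address
      ports:
        - protocol: UDP
          port: 51820               # e.g. WireGuard
    - ports:                        # still allow DNS lookups
        - protocol: UDP
          port: 53
```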
Other than that, writing Kubernetes YAML files is a lot more verbose than docker-compose. Helm does make it bearable, though.
Due to real life, my migration to the cluster is real slow, but the goal is to move all my services over.
It's not "better" than compose but I like it and it's nice to have worked with it.
I love Kubernetes. At the start of the year I installed k3s on my VPS and moved over all my services. It was a great learning opportunity that also helped immensely for my job.
It works just as well as my old docker compose setup, and I love how everything is contained in one place in the manifests. I don't need to log in to the server and issue docker commands anymore (or write scripts / CI stages that do so for me).
Love is a strong word, but Kubernetes is definitely interesting. I'm finishing up a migration of my homelab from a docker host running in a VM managed with Portainer to one smaller VM and three refurbished Lenovo mini PCs running Rancher. It hasn't been an easy road, but I chose to go with Rancher and k3s since it seemed to handle my use case better than Portainer and Docker Swarm could. I can't pass up those cheap mini PCs.
Does Rancher connect the PCs together? I have like 3 mini PCs sitting around, and I've always wanted to kinda combine them somehow.
Like being able to combine CPU power or something. Idk if this is possible without getting a mobo with multiple CPU sockets, but if it is, I'd love to learn!
Yeah, Kubernetes is designed to run in a cluster, so you can pool the processing power and memory of multiple devices (each container still runs on a single node, but the scheduler spreads your workloads across all of them). I banged my head against the wall for hours trying to figure out how to set up a cluster by hand, but then discovered that if you install Rancher in a regular docker container, it can handle all of that for you.
I have a K3OS cluster built out of a bunch of Raspberry Pis, and it works well.
The big reason I like Kubernetes is that once it is up and running with GitOps-style management, adding another service becomes trivial.
I just copy-paste one of my existing services, tweak the names/namespaces, and then change the specifics for the pods to match what their docker configuration needs, i.e. what folders need mounting and any other secrets or configs.
I then just commit the changes to github and apply them to the cluster.
The process of being able to roll back changes via git is awesome.
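In case it's useful, the copy/tweak step is basically this shape. One way to organize it is with kustomize, where each service is a folder of manifests plus a small index (all names here are placeholders):

```yaml
# apps/myservice/kustomization.yaml -- copied from an existing service, then tweaked
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myservice        # the rename step: everything below lands in this namespace
resources:
  - namespace.yaml
  - deployment.yaml         # edit image, volume mounts, env, and secrets here
  - service.yaml
  - ingress.yaml
```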
I'd love to hear more about your GitHub to K8s setup. I've been thinking about doing something similar, but I'm not sure how to keep my public stuff public while injecting my personalized (private) configuration during deployment.
AKS is a shame. Most of Azure, actually. I do my best to find ways around the insanity, but it always seems to leak back in with something insane they chose to do for whatever Microsoft reason they have.
I've spent the last two weeks getting a k3s cluster working and I've had nothing but problems, but it has been a great catalyst for learning new tools like Ansible and load balancers. I finally got the cluster working last night. If anyone else is having weird issues with the cluster timing out: etcd needs fast storage. Moving my VMs from spinning rust to a cheap SSD fixed all my problems.
Here's a slightly different story: I run OpenBSD on 2 bare-metal machines in 2 different physical locations. I used k8s at work for a bit until I steered my career more towards programming. Having k8s knowledge handy doesn't really help me so much now.
On OpenBSD there is no Kubernetes. Because I've got just two hosts, I've managed them with plain SSH and the default init system for 5+ years without any problems.
Running an RKE cluster as VMs on my ceph+proxmox cluster. Using Rook and external ceph as my storage backend and loving it. I haven't fully migrated all of my services, but thus far it's working well enough for me!
I feel like it took me quite a while to get the hang of Docker, and Kubernetes on a general look seems all that much more daunting! Hopefully one day I can break it down into smaller pieces so I can get started with it!
My homelab is a 2 node Kubernetes cluster (k3s, raspberry pis), going to scale it up to 4 nodes some day when I want a weekend project.
Built it to learn Kubernetes while studying for the CKA/CKAD certifications for work, where I design, implement, and maintain service architectures running in Kubernetes/OpenShift environments every day. It's relatively easy for me to manage Kubernetes for my home lab, but it's a bit heavy and has a steep learning curve if you are new to it, which (understandably) puts people off it, I think, especially for homelab/selfhosting use cases. It's a very valuable (literally $$$) skill if you are in that enterprise space though.
I was looking into converting my docker services into a cluster to get high availability and to learn it for work, but while investigating I read that Kubernetes is really meant for scalability, essentially one service scaled out per cluster.
I also read that Docker Swarm is what's actually recommended for my homelab use case, so I'm now on my way to converting everything to docker stacks. What do you think?
Of course high availability always requires multiple nodes.
It's just that while choosing how to set up my cluster I looked up several options (Proxmox, Swarm, Kubernetes...) and I noticed that Kubernetes is generally meant for bigger deployments.
I only need a single replica of each of my containers and they can all run on a single node, so Kubernetes is overkill just to get high availability for my use case.