Harbormaster: The anti-Kubernetes for your personal server
debarshri 2021-08-19 11:36:05 +0000 UTC [ - ]
Single machine deployments are generally easy; you can do them DIY. The complexity arises the moment you have another machine in the setup: scheduling workloads, networking, and setup, to name a few, start becoming complicated.
From my perspective, Kubernetes was designed for multiple teams working on multiple services and jobs, making operations kind of self-service. So I can understand the anti-Kubernetes sentiment.
There is a gap in the market between VM-oriented simple deployments and Kubernetes-based setups.
SOLAR_FIELDS 2021-08-19 12:35:53 +0000 UTC [ - ]
Anecdotal, but anyone I know running home lab setups that aren’t software guys are doing vSphere or Proxmox or whatever equivalent for their home usecases. But I know a lot of old school sysadmin guys, so YMMV.
thefunnyman 2021-08-19 15:52:48 +0000 UTC [ - ]
debarshri 2021-08-19 13:01:24 +0000 UTC [ - ]
You cannot avoid learning k8s; you will end up encountering it everywhere, whether you like it or not. It has been the tech buzzword for the past few years, along with cloud native and DevOps.
I really think that if you wish to be a great engineer and truly appreciate newer tools, you have to go the route of setting up a Proxmox cluster, loading images, building those VM templates, etc. Jumping directly to containers and the cloud, you kind of skip steps. That is not bad, but you do miss out on a few foundational concepts around networking, operating systems, etc.
The way I would put it is: a chef who also farms their own vegetables (setting up your own clusters and deploying your apps) vs. a chef who goes to a high-end wholesaler to buy premium vegetables and does not care how they are grown (developers using Kubernetes, container orchestration, PaaS).
stavros 2021-08-19 11:44:14 +0000 UTC [ - ]
The single-file YAML config (so it's easy to discover exactly what's running on the server), the separated data/cache/archive directories, the easy updates, the fact that it doesn't need pre-built images but builds them on the fly, those are the big advantages, rather than the actual `docker-compose up`.
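For a sense of what that looks like, a config along these lines describes the whole server (an illustrative sketch only; the app names and exact field names here are made up, so check the README for the real schema):

    apps:
      plex:
        url: https://gitlab.com/example/plex-app.git
        branch: main
      grafana:
        url: https://gitlab.com/example/grafana-app.git

Each entry points at a repo containing a Compose file, which gets pulled and run.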
debarshri 2021-08-19 12:06:16 +0000 UTC [ - ]
stavros 2021-08-19 12:08:06 +0000 UTC [ - ]
GordonS 2021-08-19 13:09:03 +0000 UTC [ - ]
stavros 2021-08-19 13:10:33 +0000 UTC [ - ]
debarshri 2021-08-19 12:11:18 +0000 UTC [ - ]
stavros 2021-08-19 12:13:02 +0000 UTC [ - ]
KronisLV 2021-08-19 15:34:52 +0000 UTC [ - ]
In my experience, there are actually two platforms that do this pretty well.
First, there's Docker Swarm ( https://docs.docker.com/engine/swarm/ ) - it comes preinstalled with Docker, can handle either single machine deployments or clusters, even multi-master deployments. Furthermore, it just adds a few values to Docker Compose YAML format ( https://docs.docker.com/compose/compose-file/compose-file-v3... ) , so it's incredibly easy to launch containers with it. And there are lovely web interfaces, such as Portainer ( https://www.portainer.io/ ) or Swarmpit ( https://swarmpit.io/ ) for simpler management.
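For a taste (a minimal sketch; the service and image are placeholders), the Swarm-specific part is mostly a deploy: section in an otherwise ordinary Compose file, which plain docker-compose simply ignores:

    version: "3.8"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
        deploy:                  # only honored by Swarm (docker stack deploy)
          replicas: 2
          restart_policy:
            condition: on-failure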
Secondly, there's also Hashicorp Nomad ( https://www.nomadproject.io/ ) - it's a single executable package which allows similar setups to Docker Swarm, integrates nicely with service meshes like Consul ( https://www.consul.io/ ), and also allows non-containerized deployments to be managed, such as Java applications and others ( https://www.nomadproject.io/docs/drivers ). The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.
There are also some other tools, like CapRover ( https://caprover.com/ ) available, but many of those use Docker Swarm under the hood and I personally haven't used them. Of course, if you still want Kubernetes but implemented in a slightly simpler way, then there's also the Rancher K3s project ( https://k3s.io/ ) which packages the core of Kubernetes into a smaller executable and uses SQLite by default for storage, if I recall correctly. I've used it briefly and the resource usage was indeed far more reasonable than that of full Kubernetes clusters (like RKE).
hamiltont 2021-08-19 16:55:01 +0000 UTC [ - ]
When migrating from a non-containerized deployment process to a containerized one, there are a lot of new skills the employees have to learn. We've had 40+ employees, all of whom basically have full plates, and the mandate comes down to containerize, and all of these old-school RPM/DEB folks suddenly need to start doing Docker. No big deal, right? Except... half the stuff does not dockerize easily and requires some slightly-more-than-beginner Docker skills. People will struggle and be frustrated.

Folks start with running one container manually, and quickly outgrow that to use compose. They almost always eventually use compose to run stuff in prod at some point, which works, but eventually that one server is full. This is the value of Swarm: letting people expand to multi-server and get a taste of orchestration, without needing them to install new tools or learn new languages. Swarm adds just one or two small new concepts (stack and service) on top of everything they have already learned. It's a godsend to tell a team they can just run swarm init, use their existing YAML files, and add a worker to the cluster.

Most folks then start to learn about placement constraints, deployment strategies, dynamic infrastructure like reverse proxies or service meshes, etc. After a bit of comfort and growth, a switch to k8s is manageable and the team is excited about learning it instead of overwhelmed. A lot (all?) of the concepts in Swarm are readily present in k8s, so the transition is much simpler.
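The on-ramp really is just a couple of commands (hostnames are placeholders):

    # on the first server
    docker swarm init

    # on each additional server, using the join token printed by init
    docker swarm join --token <token> manager-host:2377

    # deploy an existing compose file as a stack
    docker stack deploy -c docker-compose.yml myapp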
proxysna 2021-08-19 16:18:57 +0000 UTC [ - ]
> The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.
1. IIRC you can run jobs directly from the UI now, but IMO this is kinda useless. Running a job is as simple as 'nomad run jobspec.nomad'. You can also run a great alternative UI ( https://github.com/jippi/hashi-ui ).
2. IMO HCL > YAML for job definitions. I've used both extensively and HCL always felt much more human-friendly. The way K8s uses YAML looks to me like stretching it to its limits, and it's barely readable at times with templates.
One thing that makes nomad a go-to for me is that it is able to run workloads pretty much anywhere. Linux, Windows, FreeBSD, OpenBSD, Illumos and ofc Mac.
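For a feel of what HCL job specs look like, here's a rough sketch of a minimal Docker job (based on the Nomad docs' Docker example; the names and image are placeholders):

    job "web" {
      datacenters = ["dc1"]

      group "app" {
        count = 2

        network {
          port "http" { to = 80 }   # map container port 80
        }

        task "nginx" {
          driver = "docker"

          config {
            image = "nginx:alpine"
            ports = ["http"]
          }

          resources {
            cpu    = 100   # MHz
            memory = 128   # MB
          }
        }
      }
    }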
rcarmo 2021-08-19 14:22:29 +0000 UTC [ - ]
Right now I have a deployment hook that can propagate an app to more machines also running Piku after the deployment finishes correctly on the first one, but stuff like blue/green deployments and database migrations is a major pain and requires more logic.
imachine1980_ 2021-08-19 11:42:50 +0000 UTC [ - ]
debarshri 2021-08-19 12:04:38 +0000 UTC [ - ]
zie 2021-08-19 14:28:23 +0000 UTC [ - ]
If you decide to grow past one node, it's a little more complex, but not by a lot the way k8s is.
globular-toast 2021-08-19 14:18:23 +0000 UTC [ - ]
What's wrong with Ansible? You can deploy docker containers using a very similar configuration to docker-compose.
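For example, a playbook task with the community.docker collection reads much like a compose service (a sketch; the name, image, and paths are made up):

    - name: Run the app container
      community.docker.docker_container:
        name: myapp
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          - /srv/myapp/data:/data
        restart_policy: unless-stopped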
willvarfar 2021-08-19 11:42:59 +0000 UTC [ - ]
FunnyLookinHat 2021-08-19 13:06:40 +0000 UTC [ - ]
At this point, I think Juju is most likely used in place of other metal or VM provisioning tools (like Chef or Ansible) so that you can automatically provision and scale a system as you bring new machines online.
werewolf 2021-08-19 15:10:44 +0000 UTC [ - ]
mradmin 2021-08-19 11:20:27 +0000 UTC [ - ]
I created a shell script to easily set this up: https://github.com/badsyntax/docker-box
GordonS 2021-08-19 13:12:34 +0000 UTC [ - ]
The only things I'd change are switching to Caddy instead of Traefik (because Traefik 2.x config is just so bewilderingly complex!), and I'm not convinced Portainer is really adding any value.
Appreciate you sharing your setup script too.
mradmin 2021-08-19 14:00:33 +0000 UTC [ - ]
GordonS 2021-08-19 15:07:55 +0000 UTC [ - ]
mradmin 2021-08-19 16:40:36 +0000 UTC [ - ]
dneri 2021-08-19 13:42:44 +0000 UTC [ - ]
kawsper 2021-08-19 11:23:06 +0000 UTC [ - ]
stavros 2021-08-19 11:36:08 +0000 UTC [ - ]
heipei 2021-08-19 11:43:31 +0000 UTC [ - ]
stavros 2021-08-19 11:47:55 +0000 UTC [ - ]
kawsper 2021-08-19 12:30:36 +0000 UTC [ - ]
There are different kinds of workloads. I use Docker containers the most, but jobs can also run at the system level. There are also different operating modes: some jobs can be scheduled like cron, while other jobs just expose a port and want to be registered in Consul's service mesh.
A job can also consist of multiple subtasks; an example could be nginx + django/rails subtasks that are deployed together.
You can see an example of a Docker job here: https://www.nomadproject.io/docs/job-specification#example
With a few modifications you can easily allow for blue/green-deployments.
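Concretely, the blue/green behaviour mostly comes down to the job's update stanza; something along these lines (a sketch; the numbers are illustrative):

    update {
      max_parallel = 1
      canary       = 1      # for full blue/green, set this equal to the group's count
      auto_promote = false  # promote the new version manually, or roll it back
    }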
GordonS 2021-08-19 13:23:05 +0000 UTC [ - ]
mrweasel 2021-08-19 14:02:35 +0000 UTC [ - ]
My problem is in the two-to-eight-server space, but networking is already externally managed and I have a load balancer. It's in this space that I feel we're lacking a good solution. The size is too small to justify taking out nodes for a control plane, but big enough that Ansible feels weird.
rcarmo 2021-08-19 10:57:55 +0000 UTC [ - ]
(You can use docker-compose with it as well, but as a deployment step — I might bake in something nicer if there is enough interest)
uniqueuid 2021-08-19 11:03:52 +0000 UTC [ - ]
Thanks for that, Rui!
rcarmo 2021-08-19 14:17:48 +0000 UTC [ - ]
It has also been deployed on all top 5 cloud providers via cloud-init (and I'm going back to AWS plain non-Ubuntu AMIs whenever I can figure out the right packages).
stavros 2021-08-19 11:13:12 +0000 UTC [ - ]
rcarmo 2021-08-19 14:16:37 +0000 UTC [ - ]
My original use case was _exactly_ that (MQTT services).
nijave 2021-08-19 16:38:01 +0000 UTC [ - ]
It was using GitHub, so it just needed a read-only key, and it could be bootstrapped by connecting to the server directly and running the playbook once.
In addition, it didn't need any special privileges or permissions. The playbook set up remote logging (shipping to CloudWatch Logs, since we used AWS heavily) along with some basic metrics, so the whole thing could be monitored. Plus, you can get a cron email as basic monitoring to know if it failed.
Imo it was a pretty clever way to do continuous deploy/updates without complicated orchestrators, management servers, etc
mafro 2021-08-19 11:08:56 +0000 UTC [ - ]
That said, the project looks pretty good! I'll have a tinker and maybe I'll be converted
stavros 2021-08-19 11:10:59 +0000 UTC [ - ]
With Harbormaster, I just copy one YAML file and the `data/` directory and I'm done. It's extremely convenient.
uniqueuid 2021-08-19 11:18:56 +0000 UTC [ - ]
scandinavian 2021-08-19 11:43:26 +0000 UTC [ - ]
Isn't that exactly what watchtower does?
https://github.com/containrrr/watchtower
It works great on my mediacenter server running deluge, plex, sonarr, radarr, jackett and OpenVPN in docker.
uniqueuid 2021-08-19 12:50:41 +0000 UTC [ - ]
Aeolun 2021-08-19 12:55:40 +0000 UTC [ - ]
My server was much more stable after it didn’t try to update all the time any more.
I wonder if I can set a minimum timeout.
stavros 2021-08-19 11:52:10 +0000 UTC [ - ]
Then again, Harbormaster doesn't do that either unless the upstream git repo changes.
NortySpock 2021-08-19 13:56:43 +0000 UTC [ - ]
I want to "read all available updates" at my convenience, not get alerts reminding me to update my server.
Maybe I need to write some sort of plugin for Diun that appends to a text file or web page or SQLite db... Hm.
andrewkdinh 2021-08-19 15:17:22 +0000 UTC [ - ]
Personally, since I’m a big fan of RSS, I’d set up email in Diun and send it to an email generated by https://kill-the-newsletter.com/
hardwaresofton 2021-08-19 11:17:24 +0000 UTC [ - ]
wilsonfiifi 2021-08-19 12:20:22 +0000 UTC [ - ]
Currently, if you want to scale Dokku horizontally and aren't ready to take the Kubernetes plunge, you have to put a load balancer in front of your multiple VMs running Dokku, and that comes with its own headaches.
proxysna 2021-08-19 13:54:29 +0000 UTC [ - ]
stavros 2021-08-19 11:21:51 +0000 UTC [ - ]
conradfr 2021-08-19 13:15:33 +0000 UTC [ - ]
My biggest complaint would be the downtime when the docker script runs after each deployment.
corndoge 2021-08-19 15:27:00 +0000 UTC [ - ]
nonameiguess 2021-08-19 11:11:29 +0000 UTC [ - ]
It's kind of abandonware because it was the developer's PhD project and he graduated, but it is rather unfortunately widely used in one of the largest GEOINT programs in the US government right now because it was the only thing that offered this capability 5 years ago. Raytheon developers have been begging to fork it for a long time so they can update and make bug fixes, but Raytheon legal won't let them fork a GPL-licensed project.
aidenn0 2021-08-19 16:37:28 +0000 UTC [ - ]
ThaJay 2021-08-19 12:28:00 +0000 UTC [ - ]
"Someone forked it so now our fixes can get merged! :D"
nonameiguess 2021-08-19 12:37:07 +0000 UTC [ - ]
vonmoltke 2021-08-19 13:26:10 +0000 UTC [ - ]
I don't think that is the reason. When Raytheon or other contractors perform software work under a DOD contract (i.e., they charge the labor to a contract) the government generally gets certain exclusive rights to the software created. Raytheon is technically still the copyright holder, but effectively is required to grant the US government an irrevocable license to do whatever they want with the source in support of government missions if the code is delivered to the government. Depending on the contract, such code may also fall under blanket non-disclosure agreements. I believe both of these are incompatible with the GPL, and the latter with having a public fork at all.
The company could work this out with the government, but it would be an expensive and time-consuming process because government program offices are slow, bureaucratic, and hate dealing with small exceptions on large contracts. They might even still refuse to make the contract mods required at the end simply because they don't understand it or they are too risk averse. Legal is likely of the opinion that it isn't worth trying, and the Raytheon program office likely won't push them unless they can show a significant benefit for the company.
stavros 2021-08-19 11:47:10 +0000 UTC [ - ]
adamddev1 2021-08-19 11:07:04 +0000 UTC [ - ]
jrockway 2021-08-19 16:46:21 +0000 UTC [ - ]
systemd has "slice units" that are implemented very similarly to Docker containers, and it's basically the default on every Linux system from the last few years. It's underdocumented but you can read a little about it here: https://opensource.com/article/20/10/cgroups
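As a rough illustration (the unit names are made up), a slice plus a couple of cgroup properties caps a group of services much like container resource limits:

    # /etc/systemd/system/myapps.slice
    [Slice]
    MemoryMax=512M
    CPUQuota=50%

    # /etc/systemd/system/myservice.service
    [Service]
    Slice=myapps.slice
    ExecStart=/usr/local/bin/myservice

The same can be done ad hoc with systemd-run --slice=myapps.slice <command>.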
stavros 2021-08-19 11:10:06 +0000 UTC [ - ]
reddec 2021-08-19 10:54:05 +0000 UTC [ - ]
uniqueuid 2021-08-19 11:06:46 +0000 UTC [ - ]
Especially the backup and Let's Encrypt elements are great. And it handles docker networks, which makes it very flexible.
Will definitely check it out.
mnahkies 2021-08-19 13:40:51 +0000 UTC [ - ]
I made a similar thing recently as well, although with the goal of handling ingress and monitoring out of the box, whilst still being able to run comfortably on a small box.
I took a fairly similar approach, leveraging docker-compose files and using a single data directory for ease of backup (although it's on my to-do list to split out conf/data).
If there was a way to get a truly slim and easy-to-set-up k8s-compatible environment I'd probably prefer that, but I couldn't find anything that wouldn't eat most of my small server's RAM.
https://github.com/mnahkies/shoe-string-server if you're interested
stavros 2021-08-19 13:45:53 +0000 UTC [ - ]
I'll try to rework the README to hopefully make it more understandable, but looking at your project's README I get as overwhelmed as I imagine you get looking at mine. It's a lot of stuff to explain in a short page.
debarshri 2021-08-19 13:43:53 +0000 UTC [ - ]
mnahkies 2021-08-19 14:16:22 +0000 UTC [ - ]
debarshri 2021-08-19 14:24:20 +0000 UTC [ - ]
sgentle 2021-08-19 15:11:10 +0000 UTC [ - ]
Instead of deploying changes as git commits, you deploy them as container image updates. I'm not going to call it a good solution, exactly, but it meant I could just use one kind of thing to solve my problem, which is a real treat if you've spent much time in the dockerverse.
stavros 2021-08-19 15:56:33 +0000 UTC [ - ]
devmor 2021-08-19 16:14:09 +0000 UTC [ - ]
uniqueuid 2021-08-19 11:01:15 +0000 UTC [ - ]
What I couldn't immediately see from skimming the repo is:
How hard would it be to use a docker-based automatic https proxy such as this [1] with all projects?
I've had a handful of docker-based services running for many years and love the convenience. What I'm doing now is simply wrapping the images in a bash script that stops the containers, snapshots the ZFS volume, pulls newer versions and re-launches everything. That's then run via cron once a day. Zero issues across at least five years.
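A sketch of that kind of script (the dataset and path are placeholders):

    #!/bin/sh
    set -e
    cd /srv/services
    docker-compose down                       # stop the containers
    zfs snapshot tank/services@$(date +%F)    # point-in-time rollback target
    docker-compose pull                       # fetch newer images
    docker-compose up -d                      # relaunch everything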
stavros 2021-08-19 11:12:22 +0000 UTC [ - ]
Sounds like a very good ingress solution, I'll try it for myself too, thanks! I use Caddy now but configuration is a bit too manual.
uniqueuid 2021-08-19 11:16:06 +0000 UTC [ - ]
One thing to note is that you'll need to make sure that all the compose bundles are on the same network.
I.e. add this to all of them:
    networks:
      default:
        external:
          name: nginx-proxy
stavros 2021-08-19 11:24:12 +0000 UTC [ - ]
I already added a config for Plex in the Harbormaster repo, but obviously it's better if the upstream app itself has it:
https://gitlab.com/stavros/harbormaster/-/blob/master/apps/p...
3np 2021-08-19 12:59:57 +0000 UTC [ - ]
Traefik can be a bit hairy in some ways, but for anything you'd run Harbormaster for it should be a good fit.
Right now I have some Frankenstein situation with all of Traefik, Nginx, HAProxy, Envoy (though this is inherited from Consul Connect) at different points... I keep thinking about replacing Traefik with Envoy, but the docs and complexity are a bit daunting.
GekkePrutser 2021-08-19 13:51:25 +0000 UTC [ - ]
It sounds like a good option too; I don't want all the complexity of Kubernetes at home. If I worked on the cloud team at work I might use it at home, but I don't.
3np 2021-08-19 12:53:52 +0000 UTC [ - ]
I could also see this being great for a personal lab/playground server. Or for learning/workshops/hackathons. Super easy to get people running from 0.
If I ever run a class or workshop that has some server-side aspect to it, I'll keep this in mind for sure.
nixgeek 2021-08-19 14:19:45 +0000 UTC [ - ]
Simple one-time setup and then everything is a container?
If that's interesting to OP, then I might look into that one weekend soon.
stavros 2021-08-19 14:43:30 +0000 UTC [ - ]
dneri 2021-08-19 13:46:54 +0000 UTC [ - ]
selfhoster11 2021-08-19 11:23:54 +0000 UTC [ - ]
aae42 2021-08-19 12:00:02 +0000 UTC [ - ]
I just recently decided to graduate from just `docker-compose up` running inside tmux to a more fully fledged system myself...
Since I know Chef quite well, I just decided to use Chef in local mode with the docker community cookbook.
I also get the nice tooling around testing changes to the infrastructure in Test Kitchen.
If this had existed before I made that switch, I might have considered it. Nice work!
tkubacki 2021-08-19 12:24:22 +0000 UTC [ - ]
https://wickedmoocode.blogspot.com/2020/09/simple-way-to-dep...
gentleman11 2021-08-19 13:37:03 +0000 UTC [ - ]
debarshri 2021-08-19 13:41:44 +0000 UTC [ - ]
If it is a startup, use some buzzwords like cloud native, DevOps, etc. Check their sentiment towards Kubernetes.
On a serious note, you might have to jump on the Kubernetes bandwagon whether you like it or not, as many companies are seriously investing their resources in it. That said, having spoken to various companies from series A to enterprise, I see that Kubernetes adoption is actually not as high as I would have imagined based on the hype.
P.S. The discussion of Kubernetes or not Kubernetes was recently accelerated by a post from Ably [1]
p_l 2021-08-19 14:16:59 +0000 UTC [ - ]
proxysna 2021-08-19 13:43:33 +0000 UTC [ - ]
zeckalpha 2021-08-19 11:44:27 +0000 UTC [ - ]
stavros 2021-08-19 11:46:24 +0000 UTC [ - ]
I kind of punted on the decision of how to run the top layer (i.e. have Harbormaster be a daemon that auto-pulls its config), but it's very simple to add a cronjob that does `git pull; harbormaster` (and that's more composable), so I didn't do any more work in that direction.
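Something like this in the crontab does the job (the config repo path and five-minute interval are arbitrary):

    */5 * * * * cd /srv/harbormaster-config && git pull -q && harbormaster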
sandGorgon 2021-08-19 13:32:12 +0000 UTC [ - ]
stavros 2021-08-19 13:34:06 +0000 UTC [ - ]
sandGorgon 2021-08-19 14:35:53 +0000 UTC [ - ]
stavros 2021-08-19 14:46:49 +0000 UTC [ - ]
stavros 2021-08-19 10:48:25 +0000 UTC [ - ]
This also worked very well for work, where we have some simple services and scripts that run constantly on a micro AWS server. It's made deployments completely automated and works really well, and now people can deploy their own services just by adding a line to a config instead of having to learn a whole complicated system or SSH in and make changes manually.
I thought I'd share this with you, in case it was useful to you too.
fenollp 2021-08-19 13:37:47 +0000 UTC [ - ]
Not saying this would at all replace Harbormaster, but with DOCKER_HOST or `docker context` one can easily run docker and docker-compose commands without "ever logging in to the machine". Well, it does use SSH under the hood but this here seems more of a UX issue so there you go.
Discovering the DOCKER_HOST env var (changes the daemon socket) has made my usage of docker stuff much more powerful. Think "spawn a container on the machine with bad data" à la Bryan Cantrill at Joyent.
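For reference (user and host are placeholders), both styles look like this:

    # one-off: point the docker CLI at a remote daemon over SSH
    DOCKER_HOST=ssh://user@remote-host docker ps

    # recent docker-compose versions respect the same variable
    DOCKER_HOST=ssh://user@remote-host docker-compose up -d

    # or persistently, via a named context
    docker context create remote --docker "host=ssh://user@remote-host"
    docker context use remote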
stavros 2021-08-19 13:40:25 +0000 UTC [ - ]
nanis 2021-08-19 16:01:20 +0000 UTC [ - ]
https://docs.chef.io/chef_solo/
revscat 2021-08-19 17:01:36 +0000 UTC [ - ]
I have never used Chef. This is babble to me.
inetknght 2021-08-19 14:35:23 +0000 UTC [ - ]
You could put your SSH server configuration in a repo. You could put your SSH authorization key in a repo. You could even put your private key in a repo if you really wanted.
stavros 2021-08-19 14:40:17 +0000 UTC [ - ]
inetknght 2021-08-19 17:00:37 +0000 UTC [ - ]
You run what's supposed to run the same way you would anything else. It's the same for the environment variables.
How would you track what's supposed to run and what's not for Docker? Using the `DOCKER_HOST` environment variable to connect over SSH is the exact same way.
stavros 2021-08-19 17:01:39 +0000 UTC [ - ]
gibs0ns 2021-08-19 15:36:30 +0000 UTC [ - ]
While I haven't used it personally, there is [0] Watchtower which aims to automate updating docker containers.
[0] https://github.com/containrrr/watchtower
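Roughly per its README, it's a single container watching the rest over the Docker socket:

    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower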
electroly 2021-08-19 15:08:46 +0000 UTC [ - ]
stavros 2021-08-19 15:55:34 +0000 UTC [ - ]
mixedCase 2021-08-19 14:25:00 +0000 UTC [ - ]
stavros 2021-08-19 14:41:52 +0000 UTC [ - ]
mixedCase 2021-08-19 15:02:49 +0000 UTC [ - ]
Agreed on it being a bit too heavy-handed, and the tooling isn't very helpful for dealing with it unless you're neck-deep into the ecosystem already.
thor_molecules 2021-08-19 14:50:06 +0000 UTC [ - ]
e12e 2021-08-19 16:25:14 +0000 UTC [ - ]
"Debugging Under Fire: Keep your Head when Systems have Lost their Mind • Bryan Cantrill • GOTO 2017" https://youtu.be/30jNsCVLpAE
Ed: oh, here we go I think?
> Running Aground: Debugging Docker in Production, Bryan Cantrill (talk originally given at DockerCon '15, uploaded 16 Jan 2018), which (despite being a popular presentation and still broadly current) Docker Inc. has elected to delist.
https://www.youtube.com/watch?v=AdMqCUhvRz8
zdragnar 2021-08-19 16:11:48 +0000 UTC [ - ]
He has a lot to say about zones and jails and chroot predating docker, and why docker and co. "won" so to speak.
dutchmartin 2021-08-19 14:29:28 +0000 UTC [ - ]
c17r 2021-08-19 16:08:21 +0000 UTC [ - ]
stavros 2021-08-19 14:45:38 +0000 UTC [ - ]