Hugo Hacker News

Harbormaster: The anti-Kubernetes for your personal server

stavros 2021-08-19 10:48:25 +0000 UTC [ - ]

Hey everyone! I have a home server that runs some apps, and I've been installing them directly, but they kept breaking on every update. I wanted to Dockerize them, but I needed something that would manage all the containers without me having to ever log into the machine.

This also worked very well for work, where we have some simple services and scripts that run constantly on a micro AWS server. It's made deployments completely automated and works really well, and now people can deploy their own services just by adding a line to a config instead of having to learn a whole complicated system or SSH in and make changes manually.

I thought I'd share this with you, in case it was useful to you too.

fenollp 2021-08-19 13:37:47 +0000 UTC [ - ]

> I needed something that would manage all the containers without me having to ever log into the machine.

Not saying this would replace Harbormaster at all, but with DOCKER_HOST or `docker context` one can easily run docker and docker-compose commands without "ever logging in to the machine". Well, it does use SSH under the hood, but this seems to be more of a UX issue, so there you go.

Discovering the DOCKER_HOST env var (changes the daemon socket) has made my usage of docker stuff much more powerful. Think "spawn a container on the machine with bad data" à la Bryan Cantrill at Joyent.
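
For example (a sketch; the host and user names are made up):

  # Point the local Docker CLI at a remote daemon over SSH
  export DOCKER_HOST=ssh://user@myserver.example.com
  docker ps              # lists containers on the remote host
  docker-compose up -d   # runs the Compose project on the remote host

  # Or, equivalently, with a named context:
  docker context create myserver --docker "host=ssh://user@myserver.example.com"
  docker context use myserver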

stavros 2021-08-19 13:40:25 +0000 UTC [ - ]

Hmm, doesn't that connect your local Docker client to the remote Docker daemon? My goal isn't "don't SSH to the machine" specifically, but "don't have state on the machine that isn't tracked in a repo somewhere", and this seems like it would fail that requirement.

nanis 2021-08-19 16:01:20 +0000 UTC [ - ]

> "don't have state on the machine that isn't tracked in a repo somewhere"

https://docs.chef.io/chef_solo/

revscat 2021-08-19 17:01:36 +0000 UTC [ - ]

> chef-solo is a command that executes Chef Infra Client in a way that does not require the Chef Infra Server in order to converge cookbooks.

I have never used Chef. This is babble to me.

inetknght 2021-08-19 14:35:23 +0000 UTC [ - ]

What do you think isn't getting tracked?

You could put your SSH server configuration in a repo. You could put your SSH authorization key in a repo. You could even put your private key in a repo if you really wanted.

stavros 2021-08-19 14:40:17 +0000 UTC [ - ]

How do you track what's supposed to run and what's not, for example? Or the environment variables, or anything else you can set through the CLI?

inetknght 2021-08-19 17:00:37 +0000 UTC [ - ]

What do you mean?

You run what's supposed to run the same way you would anything else. It's the same for the environment variables.

How would you track what's supposed to run and what's not for Docker? Using the `DOCKER_HOST` environment variable to connect over SSH is the exact same way.

stavros 2021-08-19 17:01:39 +0000 UTC [ - ]

I wouldn't. That's why I wrote Harbormaster, so I can track what's running and what isn't.

gibs0ns 2021-08-19 15:36:30 +0000 UTC [ - ]

For me, I don't define any variables via the CLI; I put them all in the docker-compose.yml or an accompanying .env file, so that deploying is a simple `docker-compose up`. Then I can track these files via git, and deploy to remote Docker hosts using docker-machine, which effectively sets the DOCKER_HOST env var.
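
Something like this, as a minimal sketch (the image and variable names are made up):

  # .env
  WIDGET_API_KEY=changeme

  # docker-compose.yml
  version: "3"
  services:
    widget:
      image: acme/widget:1.2.3
      restart: always
      environment:
        - WIDGET_API_KEY=${WIDGET_API_KEY}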

While I haven't used it personally, there is [0] Watchtower which aims to automate updating docker containers.

[0] https://github.com/containrrr/watchtower

electroly 2021-08-19 15:08:46 +0000 UTC [ - ]

Docker Compose is designed for this.

stavros 2021-08-19 15:55:34 +0000 UTC [ - ]

Yep, that's why Harbormaster uses it.

mixedCase 2021-08-19 14:25:00 +0000 UTC [ - ]

Have you tried NixOS?

stavros 2021-08-19 14:41:52 +0000 UTC [ - ]

I have, and it's really good, but it needs some investment in creating packages (if they don't exist) and has some annoyances (e.g. you can't talk to the network, to preserve determinism). It felt a bit too heavy-handed for a few processes. We also used to use it at work extensively for all our production, but migrated off it after various difficulties (not bugs, just things like it having its own language).

mixedCase 2021-08-19 15:02:49 +0000 UTC [ - ]

You can talk to the network, either through the escape hatch or provided fetch utilities, which tend to require checksums. But you do have to keep the result deterministic.

Agreed on it being a bit too heavy-handed, and the tooling isn't very helpful for dealing with it unless you're neck-deep into the ecosystem already.

thor_molecules 2021-08-19 14:50:06 +0000 UTC [ - ]

What is the "Bryan Cantrill at Joyent" you're referring to?

e12e 2021-08-19 16:25:14 +0000 UTC [ - ]

Not (I think) the exact talk/blog post GP was thinking of, but worth watching IMNHO:

"Debugging Under Fire: Keep your Head when Systems have Lost their Mind • Bryan Cantrill • GOTO 2017" https://youtu.be/30jNsCVLpAE

Edit: oh, here we go, I think?

> Running Aground: Debugging Docker in Production - Bryan Cantrill, 16 Jan 2018. Talk originally given at DockerCon '15, which (despite being a popular presentation and still broadly current) Docker Inc. has elected to delist.

https://www.youtube.com/watch?v=AdMqCUhvRz8

zdragnar 2021-08-19 16:11:48 +0000 UTC [ - ]

The technology being alluded to is Manta, which Bryan covers in at least one if not several popular talks on YouTube, in particular about containerization.

He has a lot to say about zones and jails and chroot predating docker, and why docker and co. "won" so to speak.

dutchmartin 2021-08-19 14:29:28 +0000 UTC [ - ]

Interesting case. But did you look at other systems before this? I myself use CapRover [1] for a small server deployment. 1: https://caprover.com/

c17r 2021-08-19 16:08:21 +0000 UTC [ - ]

I use caprover on my DO instance and it works great. Web apps, twitter/reddit bots, even ZNC.

stavros 2021-08-19 14:45:38 +0000 UTC [ - ]

I have used Dokku, Kubernetes, a bit of Nomad, some Dokku-alikes, etc, but none of them did things exactly like I wanted (the single configuration file per server was a big requirement, as I want to know exactly what's running on a server).

debarshri 2021-08-19 11:36:05 +0000 UTC [ - ]

I have been knee-deep in the deployment space for the past 4 years. It is a pretty hard problem to solve to the n-th level. Here's my 2 cents.

Single-machine deployments are generally easy; you can DIY them. The complexity arises the moment you have another machine in the setup: scheduling workloads, networking, and setup, to name a few, start becoming complicated.

From my perspective, Kubernetes was designed for multiple teams working on multiple services and jobs, making operations kind of self-serviced. So I can understand the anti-Kubernetes sentiment.

There is a gap in the market between VM-oriented simple deployments and Kubernetes-based setups.

SOLAR_FIELDS 2021-08-19 12:35:53 +0000 UTC [ - ]

IMO the big draw of running K8S on my home server is the unified API. I can take my Helm chart, move it to whatever cloud super easily, and tweak it for scaling in seconds. This solution from the post is yet another config system to learn, which is fine, but is sort of the antithesis of why I like K8S. I could see it being theoretically useful for someone who will never use K8S (e.g. not a software engineer by trade, so will never work a job that uses K8s), but IMO those people are probably running VMs on their home servers instead: how many non-software-engineers are going to learn and use docker-compose but not K8S?

Anecdotal, but everyone I know running home-lab setups who isn't a software guy is doing vSphere or Proxmox or whatever equivalent for their home use cases. But I know a lot of old-school sysadmin guys, so YMMV.

thefunnyman 2021-08-19 15:52:48 +0000 UTC [ - ]

I’ve been working on using k3s for my home cluster for this exact reason. I run it in a vm on top of proxmox, using packer, terraform, and ansible to deploy. My thought process here is that if I ever want to introduce more nodes or switch to a public cloud I could do so somewhat easily (either with a managed k8s offer, or just by migrating my VMs). I’ve also toyed with the idea of running some services on public cloud and some more sensitive services on my own infra.

debarshri 2021-08-19 13:01:24 +0000 UTC [ - ]

I agree with you. It is an antithesis; that is why it is marketed as an anti-Kubernetes toolset.

You cannot avoid learning k8s; you will end up encountering it everywhere, whether you like it or not. It has been the tech buzzword for the past few years, followed by cloud native and devops.

I really think that if you wish to be a great engineer and truly appreciate the new generation of tools, you have to go the route of setting up a Proxmox cluster, loading images, building those VM templates, etc. Jumping directly to containers and the cloud, you kind of skip steps. That is not bad, but you do miss out on a few foundational concepts around networking, operating systems, etc.

The way I would put it: a chef who also farms their own vegetables (setting up your own clusters and deploying your apps) vs. a chef who goes to a high-end wholesaler to buy premium vegetables and does not care how they are grown (developers using Kubernetes, container orchestration, and PaaS).

stavros 2021-08-19 11:44:14 +0000 UTC [ - ]

Agreed, but I made this because I couldn't find a simple orchestrator that used some best practices even for a single machine. I agree the problem is not hard (Harbormaster is around 550 lines), but Harbormaster's value is more in the opinions/decisions than the code.

The single-file YAML config (so it's easy to discover exactly what's running on the server), the separated data/cache/archive directories, the easy updates, the fact that it doesn't need pre-built images but builds them on the fly: those are the big advantages, rather than the actual `docker-compose up`.

debarshri 2021-08-19 12:06:16 +0000 UTC [ - ]

What is your perspective on multiple Docker Compose files, where you can do `docker-compose -f <file name> up`? You could organize it in a way that all the files are in the same directory. Just wondering.

stavros 2021-08-19 12:08:06 +0000 UTC [ - ]

That's good too, but I really like having the separate data/cache directories. Another issue I had with the multiple Compose files is that I never knew which ones I had running and which ones I decided against running (because I shut services down but never removed the files). With the single YAML file, there's an explicit `enabled: false` line with a commit message explaining why I stopped running that service.
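
Something roughly like this (a simplified sketch, with made-up URLs; see the README for the exact schema):

  apps:
    plex:
      url: https://gitlab.com/myuser/plex-app.git
    oldservice:
      url: https://gitlab.com/myuser/oldservice.git
      enabled: false  # stopped 2021-08, superseded by newservice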

GordonS 2021-08-19 13:09:03 +0000 UTC [ - ]

Might be I'm missing something, but I often go the route of using multiple Compose files, and haven't had any issue with using different data directories; I just mount the directory I want for each service, e.g. `/opt/acme/widget-builder/var/data`

stavros 2021-08-19 13:10:33 +0000 UTC [ - ]

Harbormaster doesn't do anything you can't otherwise do, it just makes stuff easy for you.

debarshri 2021-08-19 12:11:18 +0000 UTC [ - ]

I understand your problem. I have seen people solve that with docker_compose_$ENV.yaml. You could set the ENV variable and then the appropriate file would be used.

stavros 2021-08-19 12:13:02 +0000 UTC [ - ]

Hmm, what did you set the variable to? Prod/staging/etc? I'm not sure how that documents whether you want to keep running the service or not.

KronisLV 2021-08-19 15:34:52 +0000 UTC [ - ]

> There is gap in the market between VM oriented simple deployments and kubernetes based setup.

In my experience, there are actually two platforms that do this pretty well.

First, there's Docker Swarm ( https://docs.docker.com/engine/swarm/ ) - it comes preinstalled with Docker, can handle either single machine deployments or clusters, even multi-master deployments. Furthermore, it just adds a few values to Docker Compose YAML format ( https://docs.docker.com/compose/compose-file/compose-file-v3... ) , so it's incredibly easy to launch containers with it. And there are lovely web interfaces, such as Portainer ( https://www.portainer.io/ ) or Swarmpit ( https://swarmpit.io/ ) for simpler management.
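
For example, an existing Compose file can be deployed to a Swarm with just a couple of commands (a sketch; the stack name is arbitrary):

  docker swarm init                                  # turn this node into a one-node cluster
  docker stack deploy -c docker-compose.yml mystack  # deploy the Compose file as a stack
  docker service ls                                  # list the running services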

Secondly, there's also HashiCorp Nomad ( https://www.nomadproject.io/ ) - it's a single-executable package which allows setups similar to Docker Swarm, integrates nicely with service meshes like Consul ( https://www.consul.io/ ), and also allows non-containerized deployments to be managed, such as Java applications and others ( https://www.nomadproject.io/docs/drivers ). The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.

There are also some other tools, like CapRover ( https://caprover.com/ ), available, but many of those use Docker Swarm under the hood and I personally haven't used them. Of course, if you still want Kubernetes but implemented in a slightly simpler way, then there's also the Rancher K3s project ( https://k3s.io/ ), which packages the core of Kubernetes into a smaller executable and uses SQLite by default for storage, if I recall correctly. I've used it briefly and the resource usage was indeed far more reasonable than that of full Kubernetes clusters (like RKE).

hamiltont 2021-08-19 16:55:01 +0000 UTC [ - ]

Wanted to second that Docker Swarm has been an excellent "middle step" for two different teams I've worked on. IMO too many people disregard it right away, not realizing that it is a significant effort for the average dev to learn containerization and k8s at the same time, and that it's impossible to do that on a large dev team without drastically slowing your dev cycles for a period.

When migrating from a non-containerized deployment process to a containerized one, there are a lot of new skills the employees have to learn. We've had 40+ employees, all of whom are basically full of work, and the mandate comes down to containerize, and all of these old-school RPM/DEB folks suddenly need to start doing Docker. No big deal, right? Except... half the stuff does not dockerize easily and requires some slightly-more-than-beginner Docker skills. People will struggle and be frustrated.

Folks start with running one container manually, and quickly outgrow that to use Compose. They almost always eventually use Compose to run stuff in prod at some point, which works, but eventually that one server is full. This is the value of Swarm: letting people expand to multi-server and get a taste of orchestration, without needing them to install new tools or learn new languages. Swarm adds just one or two small new concepts (stack and service) on top of everything they have already learned. It's a godsend to tell a team they can just run swarm init, use their existing YAML files, and add a worker to the cluster.

Most folks then start to learn about placement constraints, deployment strategies, dynamic infrastructure like reverse proxies or service meshes, etc. After a bit of comfort and growth, a switch to k8s is manageable, and the team is excited about learning it instead of overwhelmed. A lot (all?) of the concepts in Swarm are readily present in k8s, so the transition is much simpler.

proxysna 2021-08-19 16:18:57 +0000 UTC [ - ]

Nomad also scales really well. In my experience Swarm had a lot of issues with going above 10 machines in a cluster: stuck containers, containers that are there but Swarm can't see, and more. But still, I loved using Swarm with my 5-node ARM cluster; it is a good place to start when you hit the limits of a single node.

> The only serious downsides is having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read only in the last versions that i checked.

1. IIRC you can run jobs directly from the UI now, but IMO this is kinda useless. Running a job is as simple as `nomad run jobspec.nomad`. You can also run a great alternative UI ( https://github.com/jippi/hashi-ui ).

2. IMO HCL > YAML for job definitions. I've used both extensively and HCL always felt much more human-friendly. The way k8s uses YAML looks to me like stretching it to its limits, and it's barely readable at times with templates.

One thing that makes nomad a go-to for me is that it is able to run workloads pretty much anywhere. Linux, Windows, FreeBSD, OpenBSD, Illumos and ofc Mac.

rcarmo 2021-08-19 14:22:29 +0000 UTC [ - ]

I have been toying with the notion of extending Piku (https://github.com/piku) to support multiple machines (i.e., a reasonable number of them) behind the initial deploy target.

Right now I have a deployment hook that can propagate an app to more machines also running Piku after the deployment finishes correctly on the first one, but stuff like blue/green deployments and database migrations is a major pain and requires more logic.

imachine1980_ 2021-08-19 11:42:50 +0000 UTC [ - ]

I'd ask whether you know Nomad (I haven't used it, but co-workers say it was easier to deploy).

debarshri 2021-08-19 12:04:38 +0000 UTC [ - ]

Yes, I did look into Nomad. Again, the specification of the application to deploy is much simpler than with Kubernetes. But I think from an operational point of view you still have the complexity: it has concepts and abstractions similar to Kubernetes when you operate a Nomad cluster.

zie 2021-08-19 14:28:23 +0000 UTC [ - ]

For a single machine, you don't need to operate a nomad cluster: `nomad agent -dev` instantly gives you a 1-node cluster ready to go.

If you decide to grow past 1 node, it's a little more complex, but not by a lot, unlike k8s.

globular-toast 2021-08-19 14:18:23 +0000 UTC [ - ]

> There is gap in the market between VM oriented simple deployments and kubernetes based setup.

What's wrong with Ansible? You can deploy docker containers using a very similar configuration to docker-compose.
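
For instance, a minimal sketch using the community.docker collection (the host and names are made up):

  # playbook.yml
  - hosts: homeserver
    tasks:
      - name: Run the widget container
        community.docker.docker_container:
          name: widget
          image: acme/widget:1.2.3
          restart_policy: always
          env:
            WIDGET_API_KEY: changeme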

willvarfar 2021-08-19 11:42:59 +0000 UTC [ - ]

Juju perhaps?

FunnyLookinHat 2021-08-19 13:06:40 +0000 UTC [ - ]

I think Juju (and Charms) really shine more with bare-metal or VM management. We looked into trying to use this for multi-tenant deployment scenarios a while ago (when it was still quite popular in the Ubuntu ecosystem) and found it lacking.

At this point, I think Juju is most likely used in place of other metal or VM provisioning tools (like chef or Ansible) so that you can automatically provision and scale a system as you bring new machines online.

werewolf 2021-08-19 15:10:44 +0000 UTC [ - ]

Sadly there is very little activity aimed at bare metal and VMs nowadays. If you look at the features presented during the past couple of months, you will find mainly Kubernetes: switching from charms to operators. But kudos to the OpenStack charmers holding on and doing great work.

debarshri 2021-08-19 12:08:23 +0000 UTC [ - ]

Are you talking about this[1]?

[1] https://juju.is/

mradmin 2021-08-19 11:20:27 +0000 UTC [ - ]

My anti-Kubernetes setup for small single servers is Docker Swarm, Portainer & Traefik. It's a setup that works well on low-powered machines, gives you TLS (Let's Encrypt), and Traefik takes care of the complicated network routing.

I created a shell script to easily set this up: https://github.com/badsyntax/docker-box

GordonS 2021-08-19 13:12:34 +0000 UTC [ - ]

This is exactly how I deployed my last few projects, and it works great!

The only things I'd change are switching to Caddy instead of Traefik (because Traefik 2.x config is just so bewilderingly complex!), and I'm not convinced Portainer is really adding any value.

Appreciate you sharing your setup script too.

mradmin 2021-08-19 14:00:33 +0000 UTC [ - ]

Agree the Traefik config is a little complex, but otherwise it works great for me. About using Portainer: it's useful for showing a holistic view of your containers and stacks, but I also use it for remote deployment of services (e.g. as part of CI/CD). I'll push a new Docker image version, then I'll use the Portainer webhooks to redeploy the service, and then Docker Swarm takes over.

GordonS 2021-08-19 15:07:55 +0000 UTC [ - ]

Ah, I wasn't aware of the web hooks, that sounds useful :)

mradmin 2021-08-19 16:40:36 +0000 UTC [ - ]

Here's an example using GitHub Actions: https://github.com/badsyntax/docker-box/tree/master/examples...

dneri 2021-08-19 13:42:44 +0000 UTC [ - ]

Absolutely agree. I switched to Caddy recently and the configuration is considerably easier than Traefik's. Very simple TLS setup (including self-signed certificates).

kawsper 2021-08-19 11:23:06 +0000 UTC [ - ]

I have a similar setup, but with Nomad (in single server mode) instead of docker swarm and portainer. It works great.

stavros 2021-08-19 11:36:08 +0000 UTC [ - ]

What does Nomad do for you, exactly? I've always wanted to try it out, but I never really got how it works. It runs containers, right? Does it also do networking, volumes, and the other things Compose does?

heipei 2021-08-19 11:43:31 +0000 UTC [ - ]

What I like about Nomad is that it allows scheduling non-containerized workloads too. What it "does" for me is that it gives me a declarative language to specify the workloads, has a nice web UI to keep track of the workloads and allows such handy features as looking at the logs or exec'ing into the container from the web UI, amongst other things. Haven't used advanced networking or volumes yet though.

stavros 2021-08-19 11:47:55 +0000 UTC [ - ]

So do you use it just for scheduling commands to run? I.e. do you use `docker-compose up` as the "payload"?

kawsper 2021-08-19 12:30:36 +0000 UTC [ - ]

You send a job-specification to the Nomad API.

There are different kinds of workloads. I use Docker containers the most, but jobs can also run at a system level. There are also different operating modes: some jobs are scheduled like cron, while other jobs just expose a port and want to be registered in Consul's service mesh.

A job can also consist of multiple subtasks; an example could be nginx + Django/Rails subtasks that are deployed together.

You can see an example of a Docker job here: https://www.nomadproject.io/docs/job-specification#example

With a few modifications you can easily allow for blue/green-deployments.

stavros 2021-08-19 12:32:02 +0000 UTC [ - ]

This is very interesting, thanks! I'll give it a go.

GordonS 2021-08-19 13:23:05 +0000 UTC [ - ]

Don't suppose you're able to point to a simple Nomad config for a dockerised web app, with a proxy and Let's Encrypt?

kawsper 2021-08-19 14:34:07 +0000 UTC [ - ]

I will see if I can write up a simple example, do you have anywhere I can ping you?

GordonS 2021-08-19 15:09:19 +0000 UTC [ - ]

That would be great, thanks!

I'm at: gordon dot stewart 333 at gmail dot com

mrweasel 2021-08-19 14:02:35 +0000 UTC [ - ]

That’s still a bit more than I feel is required.

My problem is in the two-to-eight-server space, where networking is already externally managed and I have a load balancer. It's in this space that I feel we're lacking a good solution: the size is too small to justify taking out nodes for a control plane, but big enough that Ansible feels weird.

rcarmo 2021-08-19 10:57:55 +0000 UTC [ - ]

This looks great. But if you don’t need containers or are using tiny hardware, consider trying out Piku:

https://github.com/piku

(You can use docker-compose with it as well, but as a deployment step — I might bake in something nicer if there is enough interest)

uniqueuid 2021-08-19 11:03:52 +0000 UTC [ - ]

+1 for Piku, which is one of my favorite examples of "right abstraction, simple, just works, doesn't re-invent the architecture every 6 months".

Thanks for that, Rui!

rcarmo 2021-08-19 14:17:48 +0000 UTC [ - ]

Well, I am thinking of reinventing around 12 lines of it to add explicit Docker/Compose support, but it's been a year or so since any major changes other than minor tweaks :)

It has also been deployed on all top 5 cloud providers via cloud-init (and I'm going back to plain non-Ubuntu AWS AMIs whenever I can figure out the right packages).

stavros 2021-08-19 11:13:12 +0000 UTC [ - ]

That looks nice, isn't it kind of like Dokku? It's a nice option but not a very good fit if you don't need ingress/aren't running web services (most of my services were daemons that connect to MQTT).

rcarmo 2021-08-19 14:16:37 +0000 UTC [ - ]

You can have services without any kind of ingress. It’s completely optional to use nginx, it just gets set up automatically if you want to expose a website.

My original use case was _exactly_ that (MQTT services).

nijave 2021-08-19 16:38:01 +0000 UTC [ - ]

At a previous place I worked, someone set up something similar with `git pull && ansible-playbook` on a cron.

It was using GitHub, so it just needed a read-only key, and could be bootstrapped by connecting to the server directly and running the playbook once.

In addition, it didn't need any special privileges or permissions. The playbook set up remote logging (shipping to CloudWatch Logs, since we used AWS heavily) along with some basic metrics, so the whole thing could be monitored. Plus, you can get a cron email as basic monitoring to know if it failed.

IMO it was a pretty clever way to do continuous deploys/updates without complicated orchestrators, management servers, etc.
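
The whole thing boils down to one cron entry, something like (a sketch; user and paths are made up):

  # /etc/cron.d/converge -- pull the repo and re-run the playbook every 15 minutes;
  # cron mails any failure output to MAILTO as basic monitoring
  MAILTO=ops@example.com
  */15 * * * * deploy cd /opt/infra && git pull -q && ansible-playbook site.yml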

mafro 2021-08-19 11:08:56 +0000 UTC [ - ]

So far I've found that "restart: always" in the compose.yml is enough for my home server apps. In the rare case that one of the services is down, I can SSH in and have a quick look - after all it's one of my home servers, not a production pod on GKE :p
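
i.e. just (a sketch):

  services:
    myapp:
      image: myapp:latest
      restart: always  # Docker brings the container back up on crash or reboot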

That said, the project looks pretty good! I'll have a tinker and maybe I'll be converted

stavros 2021-08-19 11:10:59 +0000 UTC [ - ]

Agreed about restarting, but I hated two things: Having to SSH in to make changes, and having a bunch of state in unknowable places that made it extremely hard to change anything or migrate to another machine if something happened.

With Harbormaster, I just copy one YAML file and the `data/` directory and I'm done. It's extremely convenient.

uniqueuid 2021-08-19 11:18:56 +0000 UTC [ - ]

Just to add: it's definitely bad practice to never update your images, because the Docker images and their base images will accumulate security holes. There aren't many solutions around for automatically pulling and running new images.

scandinavian 2021-08-19 11:43:26 +0000 UTC [ - ]

>There aren't many solutions around for automatically pulling and running new images.

Isn't that exactly what watchtower does?

https://github.com/containrrr/watchtower

It works great on my mediacenter server running deluge, plex, sonarr, radarr, jackett and OpenVPN in docker.

uniqueuid 2021-08-19 12:50:41 +0000 UTC [ - ]

Well, yes. Curiously enough, (IIRC) watchtower started out automatically pulling new images when available. Then the maintainers found that approach to be worse than proper orchestration and disabled the pulling. Perhaps it's different now.

Aeolun 2021-08-19 12:55:40 +0000 UTC [ - ]

My experience with watchtower is that it kept breaking stuff (or maybe just pulling broken images?)

My server was much more stable after it didn’t try to update all the time any more.

I wonder if I can set a minimum timeout.

stavros 2021-08-19 11:52:10 +0000 UTC [ - ]

Watchtower runs the images if they update, but AFAIK it doesn't pull if the base image changes.

Then again, Harbormaster doesn't do that either unless the upstream git repo changes.

NortySpock 2021-08-19 13:56:43 +0000 UTC [ - ]

I've heard about Watchtower (auto-update) and Diun (Docker Image Update Notifier), but I haven't quite found something that will "just tell me what updates are available, on a static site".

I want to "read all available updates" at my convenience, not get alerts reminding me to update my server.

Maybe I need to write some sort of plugin for Diun that appends to a text file or web page or SQLite db... Hm.

andrewkdinh 2021-08-19 15:17:22 +0000 UTC [ - ]

Looks like https://crazymax.dev/diun/notif/script/ would be useful for that.

Personally, since I’m a big fan of RSS, I’d set up email in Diun and send it to an email generated by https://kill-the-newsletter.com/

hardwaresofton 2021-08-19 11:17:24 +0000 UTC [ - ]

Other uncomplicated pieces of software that manage dockerized workloads:

- https://dokku.com

- https://caprover.com

wilsonfiifi 2021-08-19 12:20:22 +0000 UTC [ - ]

I really wish Dokku would embrace Docker Swarm like CapRover. Currently they have a scheduler for Kubernetes, but the Docker Swarm scheduler is indefinitely delayed [0]. It's like the missing piece to making Dokku a real power tool for small teams.

Currently, if you want to scale Dokku horizontally and aren't ready to take the Kubernetes plunge, you have to put a load balancer in front of your multiple VMs running Dokku, and that comes with its own headaches.

[0] https://github.com/dokku/dokku/projects/1#card-59170273

proxysna 2021-08-19 13:54:29 +0000 UTC [ - ]

You should give Nomad a try; Dokku has a Nomad backend: https://github.com/dokku/dokku-scheduler-nomad

stavros 2021-08-19 11:21:51 +0000 UTC [ - ]

I use Dokku and love it! The use case is a bit different, as it's mostly about running web apps (it does ingress as well and is rather opinionated about the setup of its containers), but Harbormaster is just a thin management layer over Compose.

conradfr 2021-08-19 13:15:33 +0000 UTC [ - ]

I use CapRover and it mostly works.

My biggest complaint would be the downtime when the docker script runs after each deployment.

corndoge 2021-08-19 15:27:00 +0000 UTC [ - ]

Is there software with Compose-like simplicity that supports multiple nodes? I use k8s for an application that really needs multiple physical nodes to run containerized jobs, but k8s feels like overkill for the task, and I spend more time fixing k8s fuckups than working on the application. Is there anything in between Compose and k8s?

maltalex 2021-08-19 15:29:40 +0000 UTC [ - ]

Docker Swarm?

stavros 2021-08-19 15:57:15 +0000 UTC [ - ]

I hear Nomad mentioned a lot for this, yeah.

heipei 2021-08-19 15:31:09 +0000 UTC [ - ]

Hashicorp Nomad

nonameiguess 2021-08-19 11:11:29 +0000 UTC [ - ]

Beware that harbormaster is also the name of a program for adding RBAC to docker: https://github.com/kassisol/hbm

It's kind of abandonware because it was the developer's PhD project and he graduated, but it is rather unfortunately widely used in one of the largest GEOINT programs in the US government right now because it was the only thing that offered this capability 5 years ago. Raytheon developers have been begging to fork it for a long time so they can update and make bug fixes, but Raytheon legal won't let them fork a GPL-licensed project.

aidenn0 2021-08-19 16:37:28 +0000 UTC [ - ]

It's also the CI component of (the now unmaintained) Phabricator

ThaJay 2021-08-19 12:28:00 +0000 UTC [ - ]

One of them should fork it on their personal account and work on it during business hours. No liability and all the benefits. Don't tell legal, obviously.

"Someone forked it so now our fixes can get merged! :D"

nonameiguess 2021-08-19 12:37:07 +0000 UTC [ - ]

I've honestly considered this since leaving. Why not do my old coworkers a solid and fix something for them? But then I consider that I'd be doing free labor for a company not willing to let its own workers contribute to a project if they can't monopolize the returns from it.

vonmoltke 2021-08-19 13:26:10 +0000 UTC [ - ]

> I consider I'd be doing free labor for a company not willing to let its own workers contribute to a project if they can't monopolize the returns from it

I don't think that is the reason. When Raytheon or other contractors perform software work under a DOD contract (i.e., they charge the labor to a contract) the government generally gets certain exclusive rights to the software created. Raytheon is technically still the copyright holder, but effectively is required to grant the US government an irrevocable license to do whatever they want with the source in support of government missions if the code is delivered to the government. Depending on the contract, such code may also fall under blanket non-disclosure agreements. I believe both of these are incompatible with the GPL, and the latter with having a public fork at all.

The company could work this out with the government, but it would be an expensive and time-consuming process because government program offices are slow, bureaucratic, and hate dealing with small exceptions on large contracts. They might even still refuse to make the contract mods required at the end simply because they don't understand it or they are too risk averse. Legal is likely of the opinion that it isn't worth trying, and the Raytheon program office likely won't push them unless they can show a significant benefit for the company.

stavros 2021-08-19 11:47:10 +0000 UTC [ - ]

Yeah, there were a few projects named that :/ I figured none of them were too popular, so I just went ahead with the name.

adamddev1 2021-08-19 11:07:04 +0000 UTC [ - ]

I guess I'm one of those people mentioned in the rationale who keep little servers ($5-10 Droplets) and run a few apps on them (like a couple of Node/Go apps, a CouchDB, a Verdaccio server). I also haven't had issues with things breaking as I do OS updates. It seems like it would be nice, though, to just have a collection of Dockerfiles that could be used to deploy a new server automatically. My current "old-fashioned" way has been very doable for me, but my big question before jumping to some Docker-based setup is: does running everything on Docker take a huge hit on the performance/memory/capabilities of the machine? Could I still comfortably run 4-5 apps on a $5 Droplet, assuming I would have separate containers for each app? I'm having trouble finding info about this.

jrockway 2021-08-19 16:46:21 +0000 UTC [ - ]

"Docker containers" are Linux processes with maybe a filesystem, cpu/memory limits, and a special network; applied through cgroups. You can do all of those things without Docker, and there is really not much overhead.

systemd has "slice units" that are implemented very similarly to Docker containers, and it's basically the default on every Linux system from the last few years. It's underdocumented but you can read a little about it here: https://opensource.com/article/20/10/cgroups
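
For a taste of doing it by hand, something like this (the binary path and limits are made up):

  # Run a process in an ad-hoc cgroup scope with memory and CPU limits, no Docker:
  systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% -- /usr/local/bin/myapp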

stavros 2021-08-19 11:10:06 +0000 UTC [ - ]

I haven't noticed any performance degradation (though granted, these are small apps), and my home server is 10 years old (and was slow even then).

reddec 2021-08-19 10:54:05 +0000 UTC [ - ]

Looks nice. I did something similar not so long ago: https://github.com/reddec/git-pipe

uniqueuid 2021-08-19 11:06:46 +0000 UTC [ - ]

Wow this has a lot of great features baked in.

Especially the backup and Let's Encrypt elements are great. And it handles Docker networks, which makes it very flexible.

Will definitely check it out.

mnahkies 2021-08-19 13:40:51 +0000 UTC [ - ]

Interestingly this seems like a pretty popular problem to solve.

I made a similar thing recently as well, although with the goal of handling ingress and monitoring out of the box too, whilst still being able to run comfortably on a small box.

I took a fairly similar approach, leveraging docker-compose files, and using a single data directory for ease of backup (although it's on my to-do list to split out conf/data).

If there was a way to get a truly slim and easy-to-set-up k8s-compatible environment I'd probably prefer that, but I couldn't find anything that wouldn't eat most of my small server's RAM.

https://github.com/mnahkies/shoe-string-server if you're interested

stavros 2021-08-19 13:45:53 +0000 UTC [ - ]

Huh, nice! I think the main problem both your project and mine have is that they're difficult to explain, because they're more about the opinions they hold than about what they do.

I'll try to rework the README to hopefully make it more understandable, but looking at your project's README I get as overwhelmed as I imagine you get looking at mine. It's a lot of stuff to explain in a short page.

debarshri 2021-08-19 13:43:53 +0000 UTC [ - ]

It is quite slim and easy to set up a k8s environment, thanks to MicroK8s and k3s. MicroK8s comes with newer versions of Ubuntu; k3s is a single-binary installation.

mnahkies 2021-08-19 14:16:22 +0000 UTC [ - ]

Last I checked, k3s required a minimum of 512 MB of RAM, with 1 GB recommended. Is this not the case?

debarshri 2021-08-19 14:24:20 +0000 UTC [ - ]

Yes, it is. Docker's minimum requirement is 512 MB, with 2 GB recommended. containerd + k8s has almost the same requirements.

sgentle 2021-08-19 15:11:10 +0000 UTC [ - ]

I unironically solved this problem by running docker-compose in Docker. You can build an image that's just the official docker/compose image with your docker-compose.yml on top, mount /var/run/docker.sock into it, and then when you start the docker-compose container, it starts all your dependencies. If you run Watchtower as well, everything auto-updates.

Instead of deploying changes as git commits, you deploy them as container image updates. I'm not going to call it a good solution, exactly, but it meant I could just use one kind of thing to solve my problem, which is a real treat if you've spent much time in the dockerverse.
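
Concretely, something like this (a sketch of the idea; the image name and tag are made up):

  # Dockerfile
  FROM docker/compose:1.29.2
  WORKDIR /app
  COPY docker-compose.yml .

  # Build it, then run it with the host's Docker socket mounted; the image's
  # entrypoint is docker-compose, so "up -d" is passed straight through:
  #   docker build -t my-deployer .
  #   docker run --rm -v /var/run/docker.sock:/var/run/docker.sock my-deployer up -d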

stavros 2021-08-19 15:56:33 +0000 UTC [ - ]

Hmm, that's interesting, do you run Docker in Docker, or do you expose the control socket?

devmor 2021-08-19 16:14:09 +0000 UTC [ - ]

If this ever gets expanded to handle clustering, it'd be perfect for me. I use k8s on my homelab across multiple raspberry pis.

uniqueuid 2021-08-19 11:01:15 +0000 UTC [ - ]

This looks awesome!

What I couldn't immediately see from skimming the repo is:

How hard would it be to use a docker-based automatic https proxy such as this [1] with all projects?

I've had a handful of docker-based services running for many years and love the convenience. What I'm doing now is simply wrap the images in a bash script that stops the containers, snapshots the ZFS volume, pulls newer versions, and re-launches everything. That's then run via cron once a day. Zero issues across at least five years.

[1] https://github.com/SteveLTN/https-portal
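
The wrapper script is essentially just this (simplified; the dataset and paths are made up):

  #!/bin/sh
  # Stop, snapshot, pull, relaunch -- run daily from cron
  cd /opt/myservice || exit 1
  docker-compose down
  zfs snapshot tank/docker-data@backup-$(date +%Y%m%d)
  docker-compose pull
  docker-compose up -d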

stavros 2021-08-19 11:12:22 +0000 UTC [ - ]

Under the hood, all Harbormaster does is run `docker-compose up` on a bunch of directories. I'm not familiar with the HTTPS proxy, but it looks like you could just add it to the config and it'd auto-deploy and run.

Sounds like a very good ingress solution, I'll try it for myself too, thanks! I use Caddy now but configuration is a bit too manual.

uniqueuid 2021-08-19 11:16:06 +0000 UTC [ - ]

Thanks!

One thing to note is that you'll need to make sure that all the compose bundles are on the same network.

I.e. add this to all of them:

  networks:
    default:
      external:
        name: nginx-proxy

stavros 2021-08-19 11:24:12 +0000 UTC [ - ]

Ah yep, thanks! One thing that's possible (and I'd like to do) with Harbormaster is add configuration to the upstream apps themselves, so to deploy, say, Plex, all you need to do is add the Plex repo URL to your config (and add a few env vars) and that's it!

I already added a config for Plex in the Harbormaster repo, but obviously it's better if the upstream app itself has it:

https://gitlab.com/stavros/harbormaster/-/blob/master/apps/p...

3np 2021-08-19 12:59:57 +0000 UTC [ - ]

FWIW, Traefik is pretty easy to get running and configure based on container labels, which you can set in Compose files.

Traefik can be a bit hairy in some ways, but for anything you'd run Harbormaster for it should be a good fit.

Right now I have some Frankenstein situation with all of Traefik, Nginx, HAProxy, and Envoy (though this is inherited from Consul Connect) at different points... I keep thinking about replacing Traefik with Envoy, but the docs and complexity are a bit daunting.

GekkePrutser 2021-08-19 13:51:25 +0000 UTC [ - ]

Cool name. Reminds me of Dockmaster, which was an old NSA system (it was mentioned in Clifford Stoll's excellent book "The Cuckoo's Egg"). It was the one the German KGB hacker he caught was trying to get into.

It sounds like a good option too; I don't want all the complexity of Kubernetes at home. If I worked on the cloud team at work I might use it at home, but I don't.

3np 2021-08-19 12:53:52 +0000 UTC [ - ]

For users who are fine with the single-host scope, this looks great. Definitely easier than working with systemd+$CI if you don't need more (and for all the flame it gets, systemd is very powerful if you just spend the time to get into it; but then again, if you don't need it, you don't).

I could also see this being great for a personal lab/playground server. Or for learning/workshops/hackathons. Super easy to get people running from 0.

If I ever run a class or workshop that has some server-side aspect to it, I'll keep this in mind for sure.

nixgeek 2021-08-19 14:19:45 +0000 UTC [ - ]

Any chance this will get packaged up as a container instead of a "pipx install"? Then all the timers could just be in the container, and it could control Docker via the socket exposed to the container.

Simple one-time setup and then everything is a container?

If that's interesting to OP, then I might look into it one weekend soon.

stavros 2021-08-19 14:43:30 +0000 UTC [ - ]

Oh yeah, that's very interesting! That would be great, I forgot that you can expose the socket to the container. I'd definitely be interested in that, thanks!

dneri 2021-08-19 13:46:54 +0000 UTC [ - ]

This seems like a neat project! I run a homelab, and my container host runs Portainer & Caddy, which is a really clean and simple docker-compose deployment stack. This tool seems like it does less than Portainer, so I am not clear on why it would be preferable - just because it is even simpler?

selfhoster11 2021-08-19 11:23:54 +0000 UTC [ - ]

I have a similar DIY solution with an Ansible playbook that automatically installs or restarts docker-compose files. I am considering switching to Harbormaster, since it's much closer to what I wanted from the start.

aae42 2021-08-19 12:00:02 +0000 UTC [ - ]

This is nice; it should help a lot of people in that in-between space.

I just recently decided to graduate from just `docker-compose up` running inside tmux to a more fully-fledged system myself...

Since I know Chef quite well, I just decided to use Chef in local mode with the Docker community cookbook.

I also get the nice tooling around testing changes to the infrastructure in Test Kitchen.

If this had existed before I made that switch, I might have considered it. Nice work!

tkubacki 2021-08-19 12:24:22 +0000 UTC [ - ]

My simple solution for smaller projects is to SSH with a port forward to a Docker registry - I wrote a blog post on that topic:

https://wickedmoocode.blogspot.com/2020/09/simple-way-to-dep...

gentleman11 2021-08-19 13:37:03 +0000 UTC [ - ]

How am I supposed to know whether to jump on the kubernetes bandwagon when all these alternatives keep popping up? Kidding/not kidding

debarshri 2021-08-19 13:41:44 +0000 UTC [ - ]

Depends upon which job interview you are going to.

If it is a startup, use some buzzwords like cloud native, devops, etc. Check their sentiment towards Kubernetes.

On a serious note, you might have to jump on the Kubernetes bandwagon whether you like it or not, as many companies are seriously investing their resources in it. That said, having spoken to various companies from Series A to enterprise, I see that Kubernetes adoption is actually not as high as I would have imagined based on the hype.

P.S. The discussion of Kubernetes or not-Kubernetes was recently accelerated by a post from Ably [1].

[1] https://ably.com/blog/no-we-dont-use-kubernetes

p_l 2021-08-19 14:16:59 +0000 UTC [ - ]

What's missing from the conversation is that said blog post can be summarised as "we have money to burn".

proxysna 2021-08-19 13:43:33 +0000 UTC [ - ]

This is not an alternative, just a small personal project. Learn Docker, the basics of Kubernetes, and maybe Nomad.

pdimitar 2021-08-19 11:04:42 +0000 UTC [ - ]

This looks super, I'll try it on my NAS.

zeckalpha 2021-08-19 11:44:27 +0000 UTC [ - ]

If it can pull from git, why not have the YAML in a git repo, too?

stavros 2021-08-19 11:46:24 +0000 UTC [ - ]

That is, in fact, the recommended way to deploy it! If you look at the systemd service/timer files, that's what it does, except Harbormaster itself isn't aware of the repo.

I kind of punted on the decision of how to run the top layer (i.e. have Harbormaster be a daemon that auto-pulls its config), but it's very simple to add a cronjob that runs `git pull; harbormaster` (and it's more composable), so I didn't do any more work in that direction.
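
E.g. something like this in the crontab (a sketch; the config path is made up):

  # Refresh the config repo and let Harbormaster reconcile, every 5 minutes
  */5 * * * * cd /etc/harbormaster && git pull -q && harbormaster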

sandGorgon 2021-08-19 13:32:12 +0000 UTC [ - ]

You should check out k3s or k0s - single-machine Kubernetes.

stavros 2021-08-19 13:34:06 +0000 UTC [ - ]

I did, but even that was a bit too much when I don't really need to be K8s-compatible. Harbormaster doesn't run any extra daemons at all, so that was a better fit for what I wanted to do (I also want to run stuff on Raspberry Pis and other computers with low resources).

sandGorgon 2021-08-19 14:35:53 +0000 UTC [ - ]

Fair point. I have generally had a very cool experience running these single-daemon Kubernetes distros.

stavros 2021-08-19 14:46:49 +0000 UTC [ - ]

They look very very interesting for development and things like that, and I'm going to set one up locally to play with, they just seemed like overkill for running a bunch of Python scripts, Plex, etc.