After several years of running the majority of my web presence entirely on Kubernetes (shout-out to k3s for simplifying the task for smaller deployments), I decided to migrate to a new setup. This series will explore the migration and configuration of this new setup.
Following upgrades to our on-premises environment, and given that ultra-high availability wasn't a requirement, I decided to experiment with running the services on-premises but exposing them to the internet through a remote POP (point of presence). While there exist both turnkey commercial solutions (e.g. Cloudflare Tunnel) and self-hosted open-source solutions (e.g. Nginx Proxy Manager), I decided to build a solution by composing a number of existing projects to achieve the stated goal: on-premises services securely exposed at an external POP.
The components that will make up the stack are as follows. They were selected either due to existing familiarity, a desire to learn them, or their compatibility with other components.
Nomad as the orchestrator. It can orchestrate various types of applications, including Docker containers and plain binaries. This provides additional flexibility when developing and integrating workloads in the system.
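To make this concrete, here is a minimal sketch of what a Nomad job for one of these workloads might look like. The job name, Docker image, and service name are all hypothetical placeholders, not part of the actual setup described in this series:

```hcl
# Hypothetical Nomad job serving a static site via Docker (all names are placeholders).
job "example-site" {
  datacenters = ["dc1"]

  group "web" {
    network {
      # Map a dynamically allocated host port to port 80 inside the container.
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      # Register the task in Nomad's service catalog so other
      # components (such as a load balancer) can discover it.
      service {
        name = "example-site"
        port = "http"
      }
    }
  }
}
```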
Tailscale as the data layer. Tailscale uses WireGuard under the hood to build mesh networks that connect all the nodes to each other. This provides a fast and secure overlay network for all the nodes in the system.
Instead of using Tailscale's infrastructure as the control plane for the mesh network, I'm using headscale, an open-source, self-hosted implementation of the control server. Headscale has a smaller scope than the "real" Tailscale control server, but it still meets our needs and even simplifies the configuration.
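For illustration, joining a node to a headscale-controlled mesh amounts to pointing the stock Tailscale client at the self-hosted control server. The domain and user name below are hypothetical placeholders:

```shell
# On the headscale server: create a user and a pre-authentication key for it
# (the user name "homelab" is a placeholder).
headscale users create homelab
headscale preauthkeys create --user homelab

# On each node: join the mesh, pointing the client at the self-hosted
# control server instead of Tailscale's hosted one (placeholder URL).
tailscale up --login-server https://headscale.example.com --authkey <key-from-above>
```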
And finally Træfik as the front-end/load-balancer. Træfik can ingest Nomad service tags and automatically expose endpoints to the internet. Additionally, Træfik can automatically request and use Let's Encrypt certificates.
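A rough sketch of how this might be wired up: Træfik's static configuration enables the Nomad provider and an ACME certificate resolver. The Nomad API address, email, and storage path below are hypothetical placeholders:

```yaml
# Hypothetical Træfik static configuration (addresses and email are placeholders).
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

# Discover services from Nomad's service catalog.
providers:
  nomad:
    endpoint:
      address: "http://127.0.0.1:4646"

# Automatically obtain Let's Encrypt certificates.
certificatesResolvers:
  letsencrypt:
    acme:
      email: "admin@example.com"
      storage: "acme.json"
      httpChallenge:
        entryPoint: web
```

Routing rules then live on the Nomad side, as tags on each service registration, e.g. `traefik.enable=true` and `traefik.http.routers.example.rule=Host(`example.com`)`.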
With the components selected, next we turn to the topology of the setup.
- One machine at a hosting provider, serving as the POP for the setup.
- Three or more machines on-premises, serving as the core of the setup. These will host the actual websites and workloads to be exposed.
The next post in this series will begin exploring the details of the configuration necessary to run such a system.