Deploy your Lighthouse to Fly.io

A red and white striped lighthouse shining out against the sea backdropped by a purple sky filled with nebula.

This is a guide for setting up a Defined Networking managed lighthouse on Fly.io. The steps are broadly applicable to open-source Nebula as well, but on their own they arenʼt complete instructions for it.

If youʼre just here for the steps, jump to the prerequisites; otherwise, read ahead.

Why use Nebula?

Nebula is a peer-to-peer networking solution that allows you to make secure, encrypted connections between your backend services with minimal cost and downtime.

Managed Nebula is Defined Networkingʼs hosted service: it makes managing a Nebula network easier and gives you access to expert support for Nebula.

Nebula is different from a traditional on-site VPN, as it doesnʼt require sending traffic through a dedicated VPN server; hosts connect directly to each other! Using certificates, the Noise encryption protocol, and a firewall solution modeled after AWSʼs security groups, itʼs possible to allow secure access while preventing heavy traffic between two nodes from impacting connections between other nodes in your overlay network.

This means you can have servers on-prem, in Googleʼs GCP, and in Amazonʼs AWS and have fast and secure connections between them all, bridging the gap between on-prem and cloud and reducing cloud provider lock-in.

How does the network communicate?

The network is composed of lighthouses, relays, and regular hosts. Lighthouses are hosts with a static IP address; every host on your network reports its available routes, both IPv4 and IPv6, to them. In that sense a lighthouse is similar to a DNS server, providing the routes hosts need to find each other. The beauty of a system that relies on cryptographic security is that hosts donʼt need to trust the lighthouse: the Nebula handshake between hosts prevents man-in-the-middle attacks.

Relays allow you to connect your hosts when there are tricky networking issues. For example, when symmetric NAT is used, the IP address the lighthouse records for a host may not be the same IP address other hosts need to use to connect to it. A relay with a static internet IP can connect to both hosts and forward packets between them using AEAD (authenticated encryption with associated data), which ensures that it can cryptographically validate which packets are for which host without knowing the contents.

We recommend running lighthouses and relays as always-on servers from a cloud infrastructure provider. Because of their design, a high level of trust in the provider isnʼt required, and high availability can be achieved by running several lighthouse and relay instances to ensure connectivity in the event of a failure.

Why use Fly.io?

Fly.io lets us upload a Docker image which Fly then manages for us, running it while providing restart policies, secrets, networking, and other useful functionality. This differs from cloud providers like DigitalOcean in that deployment is a single command rather than directly managing a long-running machine that requires SSH for updates. Today weʼll go through the steps to deploy a lighthouse or relay to Fly.io.

Prerequisites

First things first, you need an account at Defined Networking, an account at Fly.io, and an installation of flyctl.
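
If you havenʼt installed flyctl yet, the official install script and a CLI login look roughly like this (a sketch; check Fly.ioʼs docs for the current instructions for your platform, or use a package manager such as Homebrew):

# Install flyctl using Fly.ioʼs install script
curl -L https://fly.io/install.sh | sh

# Authenticate the CLI with your Fly.io account
fly auth login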

Fly launch

Set up a new Fly.io project using the Defined Networking template. The command below will clone the template, copy the config, and validate that the Docker image is available for deployment. (Note: you may be prompted to authenticate your flyctl CLI.)

fly launch --from https://github.com/DefinedNet/fly-dot-io-lighthouse \
           --copy-config --auto-confirm --build-only
❯ fly launch --from https://github.com/DefinedNet/fly-dot-io-lighthouse \
           --copy-config --auto-confirm --build-only
Launching from git repo https://github.com/DefinedNet/fly-dot-io-lighthouse
Cloning into '.'...
remote: Enumerating objects: 8, done.
remote: Counting objects: 100% (8/8), done.
remote: Compressing objects: 100% (7/7), done.
Receiving objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 5 (delta 0), pack-reused 0
An existing fly.toml file was found for app fly-dn-lighthouse
Using build strategies '[the "definednet/dnclient:latest" docker image]'. Remove [build] from fly.toml to force a rescan
Creating app in /var/folders/45/8yp231n55w98bz9cwc0vd7_m0000gn/T/tmp.1RUnu4sXo7
Weʼre about to launch your app on Fly.io. Hereʼs what youʼre getting:

Organization: Caleb Jasik                           (fly launch defaults to the personal org)
Name:         fly-dn-lighthouse-red-wildflower-5406 (from your fly.toml)
Region:       Dallas, Texas (US)                    (from your fly.toml)
App Machines: shared-cpu-1x, 256MB RAM              (from your fly.toml)
Postgres:     <none>                                (not requested)
Redis:        <none>                                (not requested)
Sentry:       false                                 (not requested)

Created app 'fly-dn-lighthouse-red-wildflower-5406' in organization 'personal'
Admin URL: https://fly.io/apps/fly-dn-lighthouse-red-wildflower-5406
Hostname: fly-dn-lighthouse-red-wildflower-5406.fly.dev
Wrote config file fly.toml
Validating /var/folders/45/8yp231n55w98bz9cwc0vd7_m0000gn/T/tmp.1RUnu4sXo7/fly.toml
✓ Configuration is valid
==> Building image
Searching for image 'definednet/dnclient:latest' remotely...
image found: img_ox20przd80g34j1z

You should now see a file named fly.toml in your folder. Optionally, you can update the primary_region and PRIMARY_REGION values to a region close to you.

# fly.toml app configuration file generated for fly-dn-lighthouse on 2024-04-17T10:11:31-05:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#

app = 'fly-dn-lighthouse'
primary_region = 'dfw'

# Pull Defined Networkingʼs Docker image from the Docker Hub
[build]
  image = 'definednet/dnclient:latest'

# Optional: edit this to something more local to you;
# Iʼm choosing DFW since Iʼm writing this from Texas 🤠.
[env]
  PRIMARY_REGION = 'dfw'

# This configures an attached volume, so we can persist authentication
[[mounts]]
  source = 'defined'
  destination = '/etc/defined/'

[[services]]
  protocol = 'udp'
  internal_port = 4242

  [[services.ports]]
    port = 4242

[[vm]]
  memory = '256mb'
  cpu_kind = 'shared'
  cpus = 1

# This enables the built-in Prometheus support on Fly.io
[[metrics]]
  port = 9091
  path = '/metrics'
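
If youʼre not sure which region code to use, flyctl can list the available regions and their three-letter codes (a quick sketch; remember to change both primary_region and the PRIMARY_REGION value in [env] if you pick a different one):

# List Fly.io regions and their codes (e.g. dfw, iad, ams)
fly platform regions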

Allocate a dedicated IPv4 address

Next, we need a dedicated IPv4 address for our lighthouse. Currently, this costs $2 per month. Copy your IP address allocation for the next step.

fly ips allocate-v4
❯ fly ips allocate-v4
? Looks like youʼre accessing a paid feature. Dedicated IPv4 addresses now cost $2/mo.
Are you ok with this? Alternatively, you could allocate a shared IPv4 address with the --shared flag. Yes
VERSION	IP            	TYPE                     	REGION	CREATED AT
v4     	169.155.50.195	public (dedicated, $2/mo)	global	just now
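
If you ever need to look up the address again (or confirm the allocation), fly ips list shows every IP attached to the app:

# Show the IP addresses allocated to this app
fly ips list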

Create a Lighthouse

Now itʼs time to create a lighthouse in Managed Nebula. Copy your IP address from the previous step and create a new lighthouse with these settings:

General tab:

  • Public IP or hostname: Dedicated IPv4 from above
  • Role: “Lighthouse”

Advanced tab:

  • listen.host: set to fly-global-services (Fly.io requires this for UDP services.)

Fly.io runs a Prometheus and Grafana server for your apps. If you want custom metrics from the Nebula service, also set the following on the Advanced tab:

  • stats type: Prometheus
  • stats path: /metrics
  • stats namespace: nebula
  • stats interval: 60s

Once you have created the lighthouse, copy the enrollment code provided before moving on.

Save the enrollment code to fly secrets

Now we need to save the enrollment code to Fly secrets. Run the following command, replacing $CODE with your enrollment code. This exposes the DN_ENROLLMENT_CODE environment variable to the Docker image on first startup so it can enroll and authenticate with our API.

fly secrets set DN_ENROLLMENT_CODE=$CODE
❯ fly secrets set DN_ENROLLMENT_CODE=[~~cut enrollment token~~]
Secrets are staged for the first deployment
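
If you want to double-check that the secret was staged without exposing its value, you can list the appʼs secrets (Fly shows only names and digests, never values):

# List staged and deployed secrets for this app
fly secrets list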

Deploy!

Now weʼre ready to deploy! Fly.io will pull the dnclient Docker image, deploy it to the region we specified in ./fly.toml, and create the attached volume for persisting the certificates.

fly deploy
❯ fly deploy
==> Verifying app config
Validating /var/folders/45/8yp231n55w98bz9cwc0vd7_m0000gn/T/tmp.1RUnu4sXo7/fly.toml
✓ Configuration is valid
--> Verified app config
==> Building image
Searching for image 'definednet/dnclient:latest' remotely...
image found: img_ox20przd80g34j1z

Watch your deployment at https://fly.io/apps/fly-dn-lighthouse-red-wildflower-5406/monitoring

Creating a 3 GB volume named 'defined' for process group 'app'. Use 'fly vol extend' to increase its size
This deployment will:
 * create 1 "app" machine

No machines in group app, launching a new machine
Finished launching new machines
-------
 ✔ Machine 2875695b092e58 [app] update finished: success
-------

Checking our Lighthouseʼs status

Now that weʼve deployed, we can check out our lighthouse in several ways.

We can run fly logs to stream logs output to our terminal.
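
For example, using the app name from this walkthrough (substitute your own; the -a flag is only needed when running outside the project directory):

# Stream the lighthouseʼs logs
fly logs -a fly-dn-lighthouse-red-wildflower-5406

# Check the machineʼs state at a glance
fly status -a fly-dn-lighthouse-red-wildflower-5406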

We can view our deployment via the Fly.io dashboard.

We can view the lighthouses list in the Managed Nebula admin panel and see that thereʼs a green dot next to the lighthouse name, indicating that itʼs communicating with the Managed Nebula API.

If you set up the custom metrics when creating your lighthouse, you can also go to the Grafana website hosted by Fly.io to view the nebula stats output by the Prometheus reporter.

If you run into any issues, feel free to contact us.

Finishing up

Now that we have a deployment strategy, we can commit the fly.toml to a Git repo and save it for future deployments, such as when the dnclient Docker image updates. You can also hook into CI like GitHub Actions to manage these deployments, but Iʼll leave that as an exercise for the reader 😉.
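
If you do go the CI route, the CI side usually boils down to a deploy token and a non-interactive fly deploy. A rough sketch, where $FLY_DEPLOY_TOKEN is a hypothetical name for the CI secret holding the token:

# Run locally: create a deploy token, then store it as a secret in your CI system
fly tokens create deploy

# Run in CI: deploy non-interactively using that token
FLY_API_TOKEN=$FLY_DEPLOY_TOKEN fly deploy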

P.S. Deploying Relays

We can follow the same process for relays; we just need to create a new relay in the Managed Nebula admin panel and use its enrollment token instead. While relays donʼt inherently require a dedicated IPv4 address, Fly.io requires one for UDP services (such as Nebula), so you will need to allocate a dedicated IPv4 for each relay.

Extra Credit: Enroll and deploy a relay to Fly.io to improve your networkʼs reliability.
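
A condensed sketch of the relay flow, assuming you launch it as a separate Fly app and $RELAY_CODE is the enrollment code for the relay you create in the admin panel:

# Launch a second app from the same template for the relay
fly launch --from https://github.com/DefinedNet/fly-dot-io-lighthouse \
           --copy-config --auto-confirm --build-only

# Relays on Fly.io also need a dedicated IPv4 for the UDP service
fly ips allocate-v4

# Create the relay in the admin panel with that IP, then enroll and deploy
fly secrets set DN_ENROLLMENT_CODE=$RELAY_CODE
fly deploy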
