I suspect this is a very niche circumstance, but if you are running GitLab on a Tailscale network (henceforth a ‘tailnet’) and you want to add GitLab Pages to your GitLab installation and make it accessible on your tailnet, then there is some non-standard configuration required to make it work smoothly, which is not immediately obvious.

By way of background—for a while now I have run a personal GitLab CE installation in an LXC container on a Proxmox VE host. I have Tailscale running inside the LXC container with GitLab, and use tailscale serve to expose GitLab to my tailnet. This all works fine, with minimal additional configuration required. But, recently I decided that I wanted to get GitLab Pages running, mostly because it seemed like it would be a fun challenge.

There are various deployment options proposed for GitLab Pages in the documentation, but none of them particularly match what we can do with Tailscale. Specifically:

  • We can’t exactly “add another IP address” to the existing host with Tailscale. I’m not even sure if there is any reasonable way to run two tailscaled daemons on the same host (sure, you can do so in a container, or perhaps with userspace networking, but not in the sense of “I already have a tailscale0 interface; now I want to add tailscale1”).

  • Tailscale’s MagicDNS doesn’t support wildcard DNS, and consequently Tailscale’s Let’s Encrypt integration also does not support obtaining wildcard TLS certificates.

This leads to two inexorable conclusions: we are going to have to run GitLab Pages on another host, and we are going to have to leverage the new experimental support for running GitLab Pages without wildcard DNS. We will also have to configure GitLab Pages to allow tailscale serve to do the TLS termination, and that’s where an interesting wrinkle comes in! (We’ll get back to that later.)

To be clear, it is not strictly necessary to use tailscale serve here. We could allow GitLab’s NGINX instance to do the TLS termination, providing it with a certificate obtained from tailscale cert. But, then we’d have to set up a cron job to renew the certificate, whereas tailscale serve automatically takes care of certificate renewal.
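
For reference, that alternative would look roughly like the sketch below (the hostname and file paths are illustrative, and gitlab.rb would then point pages_nginx['ssl_certificate'] and pages_nginx['ssl_certificate_key'] at the resulting files):

# Obtain a certificate for this machine's tailnet hostname; --cert-file
# and --key-file control where the PEM files are written.
tailscale cert \
  --cert-file /etc/gitlab/ssl/gitlab-pages.tailnet-nnnn.ts.net.crt \
  --key-file /etc/gitlab/ssl/gitlab-pages.tailnet-nnnn.ts.net.key \
  gitlab-pages.tailnet-nnnn.ts.net

# Plus a cron job along these lines to renew the certificate and
# reload NGINX, e.g. weekly:
# 0 3 * * 0 tailscale cert --cert-file ... --key-file ... gitlab-pages.tailnet-nnnn.ts.net && gitlab-ctl hup nginx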

So, how do we do this? I began by preparing a new container¹ following my standard procedures, then installing GitLab CE from the Debian package repository. Next, I followed the documented steps for running GitLab Pages on a separate server, on both my existing GitLab server and the new GitLab Pages server, then merged in the configuration changes for running without wildcard DNS. Running GitLab Pages on a separate server from the main GitLab server requires² configuring object storage so that GitLab can share the Pages artifacts between the two, so I spun up an instance of Garage on a local Kubernetes cluster rather than depend on Amazon S3 or a similar cloud storage service. Finally, I ran tailscale serve --bg 80 and tried visiting a test GitLab Pages site in my browser.
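
Concretely, those final steps on the Pages server amounted to something like this (the serve configuration set with --bg persists until you remove it):

# Apply the gitlab.rb changes (full extract below).
sudo gitlab-ctl reconfigure

# Let Tailscale terminate TLS and proxy HTTPS traffic to NGINX on local port 80.
sudo tailscale serve --bg 80

# Sanity-check the proxy configuration.
tailscale serve status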

And…it didn’t work! After a fair amount of debugging, I found that the X-Forwarded-Host header set by tailscale serve was confusing the rest of the stack, so the NGINX frontend in front of the GitLab Pages server must suppress this header. This is easily done by setting the pages_nginx['proxy_set_headers'] parameter in the gitlab.rb file on the GitLab Pages server, then running gitlab-ctl reconfigure.
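
As an aside, the value we set the header to is the odd-looking string "''" (you’ll see it in the full extract below). That works because omnibus renders each entry of the proxy_set_headers hash verbatim into an NGINX directive, so the generated configuration contains, approximately:

proxy_set_header X-Forwarded-Host '';

NGINX does not pass along headers whose value is the empty string, so GitLab Pages never sees the header that tailscale serve injected.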

Putting it all together, here’s a lightly-redacted and heavily-commented extract from the gitlab.rb file on the GitLab Pages server:

# Note that on the GitLab Pages server, `external_url` is 
# the URL _for GitLab Pages_, not your GitLab instance.
external_url 'https://gitlab-pages.tailnet-nnnn.ts.net'

# Setting the role keeps the rest of the GitLab stack from being configured.
roles ['pages_role']

# We configure object storage for GitLab Pages, pointing at an
# S3-compatible storage server elsewhere on the tailnet. This could
# also use Amazon S3, any other S3-compatible service, or any other
# storage provider supported by GitLab.
gitlab_rails['pages_object_store_enabled'] = true
gitlab_rails['pages_object_store_remote_directory'] = "gitlab-pages"
gitlab_rails['pages_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'garage',
  'aws_access_key_id' => '',
  'aws_secret_access_key' => '',
  'aws_signature_version' => 4,
  'endpoint' => 'https://objects.tailnet-nnnn.ts.net',
  'path_style' => true 
}
gitlab_rails['pages_local_store_enabled'] = false

pages_external_url "https://gitlab-pages.tailnet-nnnn.ts.net"
gitlab_pages['enable'] = true
gitlab_pages['gitlab_server'] = "https://gitlab.tailnet-nnnn.ts.net"

# This block is optional, but access control isn't _much_ harder
# to get running on top of the base GitLab Pages configuration.
# We need to manually reconfigure the redirect URL to account for
# the use of `namespace_in_path`. The same needs to be applied to the
# OAuth application configuration on the GitLab server.
# We also want to reduce the scope of the GitLab Pages OAuth application
# to just `read_api` rather than the default of `api`.
gitlab_pages['access_control'] = true
gitlab_pages['auth_redirect_uri'] = "https://gitlab-pages.tailnet-nnnn.ts.net/projects/auth"
gitlab_pages['auth_scope'] = "read_api"

# This configures the (still-experimental) support for placing the
# namespace parameter in the URL rather than as a subdomain of the
# GitLab Pages server.
gitlab_pages['namespace_in_path'] = true

# In order for namespace_in_path to work, we _must_ run NGINX in front
# of the GitLab Pages server. We want to serve plain HTTP on port 80,
# which will allow Tailscale to terminate TLS for us, and we suppress 
# the X-Forwarded-Host header.
pages_nginx['enable'] = true
pages_nginx['redirect_http_to_https'] = true
pages_nginx['listen_port'] = 80
pages_nginx['listen_https'] = false
pages_nginx['proxy_set_headers'] = {
  "X-Forwarded-Host" => "''"
}

# Because we're using Tailscale as a reverse proxy in front of the
# GitLab Pages NGINX instance, we should get the client IP address
# from the X-Forwarded-For header. In addition, there is no need for
# us to listen on any interface other than the loopback interface.
pages_nginx['real_ip_trusted_addresses'] = ['127.0.0.1']
pages_nginx['real_ip_header'] = 'X-Forwarded-For'
pages_nginx['real_ip_recursive'] = nil
pages_nginx['listen_addresses'] = ['127.0.0.1', '[::1]']

# No sense in even _trying_ to procure TLS certificates with ACME,
# since we're going to allow Tailscale to terminate TLS and handle 
# certificates for us.
letsencrypt['enable'] = false

On the GitLab server itself, the main points are as follows (a consolidated sketch follows the list):

  • Configure object storage with the same parameters as the GitLab Pages server.
  • Set pages_external_url to the same value as on the GitLab Pages server.
  • Set gitlab_pages['enable'] = false (because we aren’t actually running GitLab Pages on that host) and gitlab_pages['namespace_in_path'] = true.
  • Enable access control, if you enabled it on the GitLab Pages server. Note the remarks in the GitLab documentation about reducing the scope of the GitLab Pages OAuth application; using read_api rather than the default api scope is recommended.
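
Put together, the relevant extract from the GitLab server’s gitlab.rb would look something like the following sketch. Take it as illustrative rather than verbatim; the object storage connection hash is the same one shown earlier.

# On the GitLab server, external_url is the GitLab URL, as usual.
external_url 'https://gitlab.tailnet-nnnn.ts.net'

# Same Pages URL and object storage parameters as on the Pages server.
pages_external_url "https://gitlab-pages.tailnet-nnnn.ts.net"
gitlab_rails['pages_object_store_enabled'] = true
gitlab_rails['pages_object_store_remote_directory'] = "gitlab-pages"
gitlab_rails['pages_object_store_connection'] = {
  # ...same connection hash as on the GitLab Pages server...
}

# Pages itself runs on the other host.
gitlab_pages['enable'] = false
gitlab_pages['namespace_in_path'] = true

# Only if access control was enabled on the GitLab Pages server.
gitlab_pages['access_control'] = true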

By the way, if you’re a Tailscale user and you have a GitLab instance on your tailnet, you might be wondering “say, can I use tailscale serve’s identity headers to automatically authenticate to GitLab based on Tailscale user and device identity?”. The answer is “yes, but not directly”. GitLab doesn’t support authenticating users via specially-named proxy headers the way Grafana does, but you can run the still-experimental tsidp and configure GitLab to use it as an OpenID Connect identity provider. (I have not actually tried this on my tailnet yet, but I don’t see any reason why it wouldn’t work.)
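
If you want to experiment with that, the GitLab side would presumably be an ordinary OmniAuth OpenID Connect provider entry along the lines of the sketch below; this is untested, and the tsidp issuer URL and client credentials are placeholders:

gitlab_rails['omniauth_providers'] = [
  {
    name: "openid_connect",
    label: "Tailscale",
    args: {
      name: "openid_connect",
      scope: ["openid", "profile", "email"],
      response_type: "code",
      issuer: "https://idp.tailnet-nnnn.ts.net",  # wherever tsidp is served
      discovery: true,
      pkce: true,
      client_options: {
        identifier: "CLIENT-ID-FROM-TSIDP",       # placeholder
        secret: "CLIENT-SECRET-FROM-TSIDP",       # placeholder
        redirect_uri: "https://gitlab.tailnet-nnnn.ts.net/users/auth/openid_connect/callback"
      }
    }
  }
]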

  1. I did this with LXC containers running Debian, but there’s no reason the same approach wouldn’t work using the GitLab Docker image with the Tailscale Docker container as a sidecar, or VMs, or even bare-metal hosts. 

  2. In theory this could be done without object storage, by bind-mounting a shared volume between the two containers, but getting Garage up and running seemed like less work than dealing with UID/GID mapping across two LXC containers. NFS might also be supported, but that seemed like even more of a potential waste of effort.