Digital Sovereignty

This page documents the external dependencies of this website and its underlying infrastructure, including vendors, jurisdictions, and legal frameworks.

The goal is not to claim independence from regulation or from the internet as a whole, but to be transparent about where control resides, how dependencies are distributed, and what the practical consequences would be if any single party were to become unavailable or intervene.

Modern internet services inevitably depend on third parties. The question is not whether dependencies exist, but where they exist, how concentrated they are, and under which jurisdictions they fall.

This page outlines how DNS, hosting, routing, e-mail, and domain registrations are structured, which legal frameworks apply, and how resilient the overall system is against vendor failure, legal pressure, or unilateral takedown.

The personal goal behind this design is to minimise single points of failure — technical, legal, or organisational.

In this context, sovereignty refers to the distribution of control rather than absolute independence. It describes how legal jurisdiction, operational responsibility, and technical authority are deliberately separated so that no single provider, registry, registrar, or legal framework can unilaterally disable or fundamentally control essential services.

This approach focuses on practical resilience and transparency: understanding where dependencies exist, which parties have influence over which components, and what the consequences are if any single dependency fails or intervenes.

Many internet services depend on third parties for critical functions such as DNS resolution, certificate issuance, and hosting. By documenting where control lies and how dependencies are structured, this page provides an engineering view of resilience and risk, not a claim of absolute independence.


DNS hosting and authority

Authoritative DNS for this site is intentionally split across multiple name servers, top-level domains, registries, registrars, hosting providers, and legal jurisdictions. The goal is to minimise single points of failure at both the technical and jurisdictional level.

Design goals

  • Avoid reliance on a single registry, registrar, or jurisdiction
  • Separate DNS authority (control plane) from DNS hosting (data plane)
  • Ensure that loss or intervention by any single party does not fully disable DNS resolution

Name server structure

Two groups of authoritative name servers are used:

  • US-registry–dependent name servers, using .com, .net, and .org domains registered with US-based registrars
  • EU-registry–dependent name servers, using .ch, .de and .eu domains registered with Swiss and European registrars operating under Swiss and EU legal frameworks

Both groups serve equivalent DNS data but are subject to different legal and organisational constraints.

None of these servers runs as the primary. Zone data originates from a hidden master hosted in the Netherlands, which holds the root DNSSEC keys in a physical HSM.

Group 1: Current authoritative name servers with (some small) ties to the US

Nameserver          Registry  Registrar  Hosting  Glue at TLD  DNSSEC
ns.mostertman.com   US        US         DE       ✅ Yes       ✅ Yes
ns.mostertman.net   US        US         CH       ✅ Yes       ✅ Yes
ns.mostertman.org   US        US         DE       ✅ Yes       ✅ Yes

A current known risk with this group is that all domains are registered through essentially the same registrar (Namecheap and Spaceship are owned by the same company).

Group 2: Current authoritative name servers with no ties to the US

Nameserver          Registry  Registrar  Hosting  Glue at TLD  DNSSEC
ns.mostertman.ch    CH        CH         CH       ✅ Yes       ✅ Yes
ns.mostertman.de    DE        CH         DE       ✅ Yes       ✅ Yes
ns.mostertman.eu    EU        CH         DE       ✅ Yes       ✅ Yes

ns.mostertman.ch is pending registration; for now ns.mostertman.net is used.

A current known risk with this group is that all domains are registered through the same registrar (Infomaniak).

Glue records

Glue records are a critical factor in DNS resilience. Where present, glue records allow resolvers to reach authoritative name servers without relying on additional DNS lookups, reducing circular dependencies and control-plane coupling.

Glue record availability varies by registry and registrar tooling.
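The condition that makes glue necessary can be sketched as a small check (a hypothetical helper for illustration, not part of any tooling used here): a name server whose hostname lies inside the zone it serves is "in-bailiwick", and resolving its name without glue would require the very zone being delegated.

```python
def needs_glue(zone: str, ns_host: str) -> bool:
    """Glue is required at the parent when the name server's hostname
    lies inside the zone it is authoritative for (in-bailiwick):
    resolving the NS name would otherwise depend on that same zone."""
    zone = zone.rstrip(".").lower()
    ns_host = ns_host.rstrip(".").lower()
    return ns_host == zone or ns_host.endswith("." + zone)

# ns.mostertman.com serves mostertman.com itself, so glue is required.
print(needs_glue("mostertman.com", "ns.mostertman.com"))  # True
# An out-of-bailiwick server can be resolved independently of the zone.
print(needs_glue("mostertman.com", "ns.mostertman.ch"))   # False
```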

DNSSEC

All authoritative zones are DNSSEC-signed, with a complete chain of trust configured end-to-end.

  • Each zone uses a Key Signing Key (KSK) generated and stored in a Hardware Security Module (HSM).
  • The KSK is published at the respective registries in the form of DS records.
  • Zones are signed using a software-based Zone Signing Key (ZSK).
  • ZSKs are rotated on a regular schedule (currently quarterly).

This approach separates long-term trust anchors from operational signing keys while limiting the impact of key compromise.
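The DS record that a registry publishes is a digest over the KSK's DNSKEY record. A minimal sketch of the SHA-256 digest computation (digest type 2, per RFC 4034/4509); the RDATA below is a made-up placeholder, not this zone's real key:

```python
import hashlib

def name_to_wire(name: str) -> bytes:
    """Encode a DNS owner name in wire format (length-prefixed labels)."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def ds_sha256_digest(owner: str, dnskey_rdata: bytes) -> str:
    """DS digest (type 2) = SHA-256(owner name wire format | DNSKEY RDATA)."""
    return hashlib.sha256(name_to_wire(owner) + dnskey_rdata).hexdigest().upper()

# Placeholder RDATA: flags=257 (KSK), protocol=3, algorithm=13, dummy key bytes.
fake_rdata = (257).to_bytes(2, "big") + bytes([3, 13]) + b"\x00" * 32
print(ds_sha256_digest("mostertman.com", fake_rdata))
```

The registry stores only this digest; the DNSKEY itself stays in the zone, which is why a KSK rollover also requires a DS update at the parent.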

Notes and limitations

  • The absence of glue records for some US-registered domains is a registrar limitation rather than a design choice.
  • Name servers without glue remain authoritative but introduce additional resolution dependencies.
  • Hosting locations and DNS software are independent of registry and registrar jurisdiction.
  • Registrar concentration for US-based TLDs is a known limitation and may be addressed in the future.

Future direction

The authoritative name server set may be further diversified across US- and EU-managed domains to reduce reliance on any single registry or legal framework during delegation and resolution.

Web hosting and traffic flow

The primary public entry point for all websites is www.mostertman.com. An alternative access path is available via www.mostertman.eu.

Both entry points ultimately serve the same content but are reachable through different DNS and jurisdictional paths.

Public entry points

  • Primary: www.mostertman.com (authoritative name servers under .com, .net, and .org)
  • Alternative: www.mostertman.eu (authoritative name servers under .ch, .de, and .eu)

This provides redundancy at the DNS and registry level, while keeping the application stack unified.
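The redundancy property can be stated as a simple invariant: the two entry points must not share any TLD registry in their delegation paths. The mapping below just restates the information above:

```python
# TLDs used by each entry point's authoritative name servers,
# as documented in the name server groups above.
DELEGATION_TLDS = {
    "www.mostertman.com": {"com", "net", "org"},
    "www.mostertman.eu": {"ch", "de", "eu"},
}

def shared_registries(a: str, b: str) -> set:
    """TLD registries that both delegation paths depend on;
    an empty set means no single registry can break both."""
    return DELEGATION_TLDS[a] & DELEGATION_TLDS[b]

print(shared_registries("www.mostertman.com", "www.mostertman.eu"))  # set()
```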

Global ingress and reverse proxy

All public web traffic is terminated on a VPS hosted at Hetzner in Germany.

This system acts as a global ingress node and performs:

  • TLS termination
  • Virtual host routing
  • Forwarding of traffic to the appropriate backend

Traffic is forwarded over an encrypted WireGuard tunnel to the actual backend systems.
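A sketch of what the ingress side of such a tunnel could look like; the interface name, addresses, port, and key placeholders are illustrative assumptions, not the actual configuration:

```ini
# /etc/wireguard/wg0.conf on the ingress VPS (placeholder values)
[Interface]
Address = 10.0.0.1/24
PrivateKey = <ingress-private-key>
ListenPort = 51820

[Peer]
# Home backend in the Netherlands
PublicKey = <backend-public-key>
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25
```

The keepalive keeps the tunnel traversable through NAT on the home side, so the ingress node can forward inbound connections at any time.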

Backend hosting and data location

All websites and their data are hosted on a server located at home in the Netherlands.

  • Location: Netherlands
  • Connectivity: 1000/1000 Mbit fibre
  • Provider: KPN
  • Connectivity model: Direct internet uplink

This backend is currently a single point of failure. Several mitigations are in place, however:

  • The server is on an uninterruptible power supply (UPS)
  • Data resides on storage with multiple layers of redundancy
  • Data is backed up
  • Services are designed to be easily redeployable to a new environment if necessary

A secondary internet connection (for example via satellite) is being evaluated as a future fallback.

TLS certificates

All certificates are automatically (re)issued, renewed, and rolled out. They all use ECDSA keys on the secp384r1 curve, a decent balance between performance and security.

TLS certificates are less exposed to jurisdictional and data-law risk than other dependencies: they contain no information beyond what is already publicly available, and issuance can be moved to another CA if needed.

Certificates are issued through three different root CA structures:

Internal Services

All internal services use certificates issued by an internal PKI for privacy and flexibility.

  • The key for the internal Root CA is stored in a physical HSM.
  • The key for the internal Intermediate CA is stored in a software HSM.

This PKI is set up in and operated from the Netherlands. I have sole access to this.

Let's Encrypt

Most public-facing services have been using certificates issued by Let’s Encrypt ever since its inception.

This project is looked after by the Linux Foundation and many well-known corporations, all of which benefit from its success. I have contributed to the project in a few ways myself and have always appreciated its transparency and efforts.

However, Let’s Encrypt operates under US jurisdiction, and it could still fail for any number of reasons, so I need a backup scenario.

ZeroSSL

ZeroSSL is my paid backup scenario. Originally an Austrian company, it is now owned by a multinational.

Ultimately, however, its certificates are cross-signed by USERTrust (a US company), more commonly known as Sectigo's old root CA.


E-mail infrastructure

  • E-mail for @mostertman.com, @mostertman.eu (and all other domains) is currently hosted on the same EU VPS that serves as the global ingress node.
  • E-mail for @mostertman.org is still being hosted at Google (US) but is slowly being phased out.

Mail routing

  • Mail domain: mostertman.email
  • MX records: mostertman.email & mx.mostertman.eu
  • The mail domain itself uses authoritative name servers under .com, .net, and .org

An EU-based alternative MX hostname (mx.mostertman.eu) was introduced to provide jurisdictional redundancy at the DNS level. It points to the same server.
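How sending mail servers pick between the two MX hosts can be sketched as follows; the preference values (10/20) are illustrative assumptions, not the published records:

```python
# Illustrative MX records for mostertman.email; both hostnames
# currently point at the same server, as noted above.
MX_RECORDS = [
    (10, "mostertman.email."),
    (20, "mx.mostertman.eu."),
]

def delivery_order(records):
    """Sending MTAs try MX hosts in ascending preference order,
    falling back to the next host when delivery fails."""
    return [host for _, host in sorted(records)]

print(delivery_order(MX_RECORDS))
# ['mostertman.email.', 'mx.mostertman.eu.']
```

Because both MX targets resolve to the same machine, this redundancy exists only at the DNS and jurisdictional level, not at the mail-server level, as the limitations below note.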

Mail service characteristics

  • The mail server software does not support clustering or high availability
  • The VPS hosting the mail service is therefore a single point of failure
  • TLS for mail services also relies on Let’s Encrypt
  • @mostertman.org still carries some legacy and Android vendor lock-in, but is being phased out

These limitations are known and accepted at this time.


Summary of current single points of failure

The following single points of failure are currently acknowledged:

  • Home backend server (hardware and connectivity)
  • Home internet uplink (single fibre connection)
  • Global ingress VPS (web and mail)
  • TLS certificate authority (Let’s Encrypt)
  • Non-clustered mail server software

All of the above are explicitly documented, monitored, and considered acceptable risks given current cost and complexity constraints.