Deploying a portfolio on GitHub Pages or Vercel — that takes five minutes. Still, I chose to run my own server. Not because it’s easier, but because the portfolio doesn’t just showcase the projects running on it — it is one itself. Server administration, network architecture, CI/CD, and security hardening are skills best demonstrated by applying them.
This article documents the infrastructure behind mathis-adler.dev: how the servers are set up, how traffic flows, and why certain decisions were made the way they were.
Two Servers, One Private Network
The infrastructure runs on two Hetzner Cloud servers in Nuremberg, connected through a private Hetzner network (10.0.0.0/16):
┌─────────────────────────────────────┐
│ Hetzner Private Network │
│ 10.0.0.0/16 │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Webserver │ │ VPN-Server │ │
│ │ 10.0.0.2 │ │ 10.0.0.3 │ │
│ │ │ │ │ │
│ │ Nginx │ │ WireGuard │ │
│ │ Astro │ │ dnsmasq │ │
│ │ go-redirect │ │ wireguard-ui│ │
│ │ once │ │ │ │
│ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────┘
Why two servers? Separation of concerns. The web server hosts the applications, the VPN server controls access. If the VPN server is compromised, the attacker has no direct access to the web applications — and vice versa. Additionally, both servers can be updated and restarted independently.
The Hetzner private network enables communication between the servers without traffic going over the public internet. No additional costs, no encryption needed (the traffic never leaves the data center), minimal latency.
VPN: WireGuard with Split Tunnel
Access to the portfolio site currently runs exclusively through WireGuard VPN. This is a deliberate decision: as long as the site is in development, it should not be publicly reachable. Once it’s finished, the VPN restriction will be removed — one line in the Nginx config.
The VPN configuration uses split tunneling: only traffic to the defined subnets (10.0.0.0/16 and 172.30.0.0/24) goes through the tunnel. Normal browsing, streaming, downloads — everything continues to go directly through the regular internet connection. No unnecessary detour through the VPN server, no reduced bandwidth.
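A WireGuard client config implementing this split tunnel might look like the following sketch — keys, addresses, and the endpoint are placeholders; only the routed subnets and the DNS address are taken from this article:

```ini
[Interface]
# Placeholder key and client address — illustrative values
PrivateKey = <client-private-key>
Address = 172.30.0.2/24
# Use the VPN server's WireGuard interface as resolver
DNS = 172.30.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = <vpn-server>:51820
# Split tunnel: route ONLY the private subnets through WireGuard;
# everything else bypasses the VPN
AllowedIPs = 10.0.0.0/16, 172.30.0.0/24
```

A full-tunnel setup would instead use `AllowedIPs = 0.0.0.0/0`; restricting it to the two subnets is what keeps regular traffic off the VPN.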
DNS in the VPN
One problem with VPN-protected web services: the browser resolves mathis-adler.dev via public DNS and gets the public IP 46.225.230.154. But Nginx on the web server only allows access from the private network and the VPN subnet. The request is blocked.
Solution: On the VPN server, dnsmasq runs as a DNS resolver that returns the private IP 10.0.0.2 for mathis-adler.dev and its subdomains:
address=/mathis-adler.dev/10.0.0.2
address=/go.mathis-adler.dev/10.0.0.2
address=/once.mathis-adler.dev/10.0.0.2
VPN clients use 172.30.0.1 (the WireGuard interface of the VPN server) as their DNS server. This way, the domain is correctly resolved to the private IP within the VPN context, and traffic flows through the Hetzner private network — without touching the public internet.
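The dnsmasq side of this could be sketched roughly as follows — the listen address matches the WireGuard interface from the article, while the upstream resolver is an assumption:

```ini
# /etc/dnsmasq.conf (sketch) — answer DNS only on the WireGuard interface
listen-address=172.30.0.1
# The address=/.../ overrides shown above also match all subdomains
# Ignore /etc/resolv.conf and forward everything else to an
# assumed public upstream resolver
no-resolv
server=1.1.1.1
```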
Nginx: More Than a Web Server
Nginx serves as the central entry layer for all services. Each service has its own Nginx site configuration:
| Domain | Backend | Access |
|---|---|---|
| mathis-adler.dev | Static files (/var/www/.../dist/) | VPN only |
| go.mathis-adler.dev | Docker -> 127.0.0.1:3001 | Public (API VPN only) |
| once.mathis-adler.dev | Docker -> 127.0.0.1:3002 | Public (Admin VPN only) |
SSL with Let’s Encrypt
All domains use Let’s Encrypt certificates that are automatically renewed via Certbot. HTTP is redirected to HTTPS, HSTS is enabled. No manual certificate management, no costs.
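The redirect-plus-HSTS portion of such a site config typically looks like this sketch (certificate paths follow Certbot's convention; the max-age value is an assumption, not copied from the live config):

```nginx
server {
    listen 80;
    server_name mathis-adler.dev;
    # Redirect all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name mathis-adler.dev;
    ssl_certificate     /etc/letsencrypt/live/mathis-adler.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mathis-adler.dev/privkey.pem;
    # HSTS: tell browsers to use HTTPS only (one year, assumed value)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```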
Access Control
Nginx handles access control at the network level:
# Portfolio: fully VPN-only
allow 10.0.0.0/16;
allow 172.30.0.0/24;
deny all;
For go.mathis-adler.dev and once.mathis-adler.dev, public access is allowed — the redirect links and reveal pages should be reachable without VPN, after all. But administrative endpoints (/api/ for go-redirect, /api/admin/ for once) are restricted to VPN access. This way, no outsider can create short links or manage secrets.
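Combined with the public default, the per-endpoint restriction could look roughly like this (paths and the proxy port follow the article; the exact location layout is an assumption):

```nginx
# go.mathis-adler.dev (sketch): public short links, VPN-only API
location /api/ {
    allow 10.0.0.0/16;    # Hetzner private network
    allow 172.30.0.0/24;  # WireGuard clients
    deny all;
    proxy_pass http://127.0.0.1:3001;
}

location / {
    # Short-link resolution stays publicly reachable
    proxy_pass http://127.0.0.1:3001;
}
```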
Rate Limiting
Publicly accessible services need protection against abuse. Nginx limits the request rate at two levels:
limit_req_zone $binary_remote_addr zone=go_general:10m rate=30r/m;
limit_req_zone $binary_remote_addr zone=go_api:10m rate=10r/m;
30 requests per minute for normal access, 10 for API endpoints. Rate limiting at the Nginx level rather than in the application has the advantage that requests never even reach the application server — less load, faster rejection.
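The zones defined above are then applied per location; a sketch, where the burst values are assumptions:

```nginx
location /api/ {
    # Stricter API limit: allow a small burst, reject the rest with 503
    limit_req zone=go_api burst=5 nodelay;
    proxy_pass http://127.0.0.1:3001;
}

location / {
    limit_req zone=go_general burst=10 nodelay;
    proxy_pass http://127.0.0.1:3001;
}
```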
The Three Services
Three independent services run on the web server:
mathis-adler.dev — The Portfolio
A static website built with Astro. No server-side rendering, no backend, no framework JavaScript in the bundle. Nginx serves the finished HTML, CSS, and JS files directly from /var/www/mathis-adler.dev/dist/. More on this in the Astro Architecture section.
go.mathis-adler.dev — Redirect Service
A self-hosted URL shortener with Node.js, Express, and SQLite. Runs as a Docker container on port 3001. Short links are managed via a CLI with an API key — no web interface, no registration form, no abuse potential. Details in the blog post about URL shorteners.
once.mathis-adler.dev — One-Time Secrets
A one-time secret service with end-to-end encryption. Also Node.js, Express, SQLite, Docker container on port 3002. The server never sees the plaintext — encryption and decryption happen in the browser. The technical details are described in the blog post about E2E one-time secrets.
Both services run in Docker containers to isolate them from the host system. SQLite databases and .env files reside outside the container on the host and are mounted via volumes. This way, data survives a container restart, and secrets never end up in the Git repository.
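A compose file implementing this layout might look like the following sketch — the service name and paths are illustrative, not taken from the actual repository:

```yaml
# docker-compose.yml (sketch): data and secrets live on the host
services:
  go-redirect:
    build: .
    ports:
      - "127.0.0.1:3001:3001"   # only reachable via Nginx on localhost
    env_file: .env               # configured on the server, never in Git
    volumes:
      - ./data:/app/data         # SQLite database survives rebuilds
    restart: unless-stopped
```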
CI/CD: Push and Done
Each of the three projects has a GitHub Actions pipeline that automatically deploys on push to main. The flow is the same for all:
Push to main
│
├─► Portfolio: npm ci → npm run build → rsync dist/ → done
│
├─► go-redirect: rsync project files → docker compose up -d --build
│
└─► once: rsync project files → docker compose up -d --build
All three pipelines use the same deploy key (Ed25519) and the same GitHub Secrets. rsync copies only changed files — a typical deployment takes just a few seconds.
Important: The rsync commands for go-redirect and once exclude .env and data/. Environment variables (API keys, secrets) are configured once on the server and are never overwritten by the pipeline. Databases remain untouched as well.
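Put together, the go-redirect pipeline could be sketched as a workflow along these lines — action versions, secret names, and server paths are assumptions, but the exclude flags reflect what the article describes:

```yaml
# .github/workflows/deploy.yml (sketch)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync project files (never .env or data/)
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          rsync -avz -e "ssh -i key" \
            --exclude '.env' --exclude 'data/' \
            ./ root@mathis-adler.dev:/opt/go-redirect/
      - name: Rebuild container
        run: ssh -i key root@mathis-adler.dev "cd /opt/go-redirect && docker compose up -d --build"
```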
If the pipeline fails for any reason, manual deployment also works with two commands:
npm run build
rsync -avz --delete dist/ root@mathis-adler.dev:/var/www/mathis-adler.dev/dist/
Astro Architecture
The portfolio uses Astro 5 with Static Site Generation — all pages are compiled to HTML at build time. No Node.js process on the server, no database, no server-side rendering. This minimizes the attack surface and simplifies hosting: Nginx serves static files, done.
Content Collections
Blog posts and projects are managed as MDX files in content collections. Astro validates the frontmatter of each file with Zod schemas:
const blog = defineCollection({
schema: z.object({
title: z.string(),
description: z.string(),
pubDate: z.coerce.date(),
tags: z.array(z.string()).default([]),
glossary: z.array(z.string()).default([]),
draft: z.boolean().default(false),
}),
});
Draft posts are filtered during the build and don’t appear on the site. Type errors in frontmatter — wrong date, missing title — are caught at build time, not in production.
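The filter-and-sort step can be sketched in plain JavaScript — the helper name mirrors getPublishedPosts from the project structure, but the logic here is an illustration, not the actual implementation:

```javascript
// Sketch of a collection helper: hide drafts, newest post first
function getPublishedPosts(posts) {
  return posts
    .filter((post) => !post.data.draft)
    .sort((a, b) => b.data.pubDate - a.data.pubDate);
}

// Example entries with the frontmatter shape validated by the Zod schema
const posts = [
  { data: { title: "Old post", pubDate: new Date("2024-01-01"), draft: false } },
  { data: { title: "Draft", pubDate: new Date("2024-06-01"), draft: true } },
  { data: { title: "New post", pubDate: new Date("2024-03-01"), draft: false } },
];

console.log(getPublishedPosts(posts).map((p) => p.data.title));
// → [ 'New post', 'Old post' ]
```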
Glossary System
Each blog post and project can reference glossary terms in the frontmatter. In the text, they are marked with a <Term> component that renders the term as a clickable reference. On desktop, a sidebar displays all definitions; on mobile, a dialog appears via a button in the toolbar.
The glossary is a simple TypeScript file with key-value pairs — no CMS, no headless backend. A new term is one line of code.
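Such a glossary module can be sketched as a plain object lookup — the entries and helper below are invented examples, not the site's actual definitions:

```javascript
// Sketch of src/data/glossary.ts as plain key-value pairs (example entries)
const glossary = {
  "split-tunnel":
    "VPN mode in which only selected subnets are routed through the tunnel.",
  hsts: "HTTP header instructing browsers to connect via HTTPS only.",
};

// A <Term> component would resolve its definition by key like this
function getDefinition(key) {
  return glossary[key] ?? null;
}

console.log(getDefinition("hsts"));
```

Adding a new term really is one line: a new key-value pair in the object.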
Project Structure
src/
├── components/ ContentDetail, GlossarySidebar, Term, WikiQrShowcase
├── content/      MDX files (blog/ + projects/)
├── data/         Glossary entries (glossary.ts)
├── layouts/      Base.astro (nav, toolbar, theme init)
├── pages/        7 routes (index, blog, projekte, kontakt, qr-generator)
├── scripts/      toolbar, mobile-menu, color-picker, scroll-reveal
├── styles/       7 CSS modules (variables, base, nav, components, toolbar, prose, animations)
└── utils/        Collection helpers (getPublishedPosts, getPublishedProjects)
Blog and project detail pages share ContentDetail.astro — the respective slug files are only ~15 lines long. DRY, without abstraction for abstraction’s sake.
Security Decisions at a Glance
Security is not a feature you bolt on at the end. It runs through every layer of the architecture:
- Network: Two separate servers, private network, VPN for admin access
- Transport: TLS everywhere, HSTS, automatic certificate renewal
- Access: Nginx ACLs by IP ranges, API endpoints VPN-only
- SSH: Password auth disabled, fail2ban, MaxStartups hardened — details in the blog post about SSH brute force
- Rate limiting: At the Nginx level, before requests reach the application
- Application: Content Security Policy, no inline JavaScript, no tracking
- Deployment: Deploy key with minimal permissions, secrets never in the repository
- Data: SQLite databases and .env files outside the containers, excluded from deployment
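The SSH-related measures, for instance, correspond to sshd_config settings along these lines — the values are assumptions; the details are in the linked blog post:

```
# /etc/ssh/sshd_config (sketch)
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
# Throttle unauthenticated connections: start dropping at 10, drop all at 60
MaxStartups 10:30:60
```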
None of these measures alone makes the system secure. Together, they form a defense-in-depth strategy where each layer compensates for the weaknesses of the others.
Conclusion
The infrastructure behind this portfolio is deliberately kept simple — two servers, one VPN, one reverse proxy, three services. No Kubernetes clusters, no microservice architecture, no over-engineering. The complexity lies in the details: how DNS works within the VPN, why rate limiting runs at the Nginx level rather than in the application, why .env files are excluded from deployment.
This is not a setup for an application with thousands of users. It’s a setup for a portfolio that demonstrates how I make infrastructure decisions — and that I can execute on them.