Why I run SaaS on my own server instead of the cloud
Many developers reflexively reach for AWS, Google Cloud, or Azure. I don't. For years I've been running my SaaS products on my own servers — saving money, but also gaining control, performance, and legal clarity.
The numbers speak for themselves
My Hetzner CPX22 costs €6.19 per month: 3 vCPU, 4 GB RAM, 80 GB SSD, 20 TB traffic. The equivalent AWS setup (t3.medium + EBS + transfer) realistically costs €70–100 per month. That's 12–16x the price.
On this single server I currently run: Baseloq (SaaS), AllesWurst dev environment, this website, a vacation rental booking system, Freqtrade (crypto bot), a stock trading app, and several smaller projects. Nginx as reverse proxy, PM2 for Node processes, PostgreSQL, Redis — all cleanly isolated.
Control isn't a luxury
In the cloud, I'm a tenant. Prices can change, services get deprecated, APIs disappear. AWS, for example, has introduced new charges in recent years — the per-hour fee for public IPv4 addresses is one. On my own server, I'm the owner. I decide what runs, how it's configured, and when updates are applied.
That also means: no vendor lock-in. If Hetzner doubles prices tomorrow, I migrate to Netcup, Contabo, or another provider in an hour. With AWS, where everything is linked through proprietary services (RDS, Lambda, S3, SQS), migration is a multi-month project.
GDPR without headaches
Hetzner has data centers in Germany and Finland. My European customers' data never leaves the EU. No Privacy Shield chaos, no standard contractual clause juggling, no data transfers to the US. Baseloq processes employee data — GDPR compliance here isn't a nice-to-have, it's mandatory.
AWS tooling defaults to us-east-1 in many places, and several "global" services are anchored in US regions — if you're not careful, data ends up in the US. And even EU regions are subject to the US CLOUD Act — an often underestimated legal risk for European businesses.
Performance: Surprisingly good
Hetzner servers are in Nuremberg and Helsinki — for Austrian and German users that's 10–25ms latency. AWS Frankfurt would be similar, but costs multiples more. And since I configure everything myself, there's no cold-start problem like with Lambda functions, no auto-scaling latency, no traffic spike surprises.
When does cloud still make sense?
I'm not anti-cloud on principle. There are legitimate scenarios:
- Global traffic: If you have users on 5 continents and truly need low latency, a CDN or multi-region deployment makes sense.
- Extreme scalability: If your traffic can explode from 100 to 100,000 requests in seconds (viral product, ticket sales), auto-scaling is a real advantage.
- Specialized services: For machine learning (GPU instances), speech-to-text, or specialized databases, there are cloud services without reasonable self-hosted alternatives.
- Enterprise compliance: Some enterprise customers require certified cloud infrastructure (ISO 27001, SOC 2).
For 90% of early-stage SaaS products — the first 1,000 customers, initial scaling phase — a dedicated or virtual server at Hetzner, Netcup, or Contabo is the more economical and often technically superior choice.
My setup in detail
To keep things concrete, here's my actual setup: The Hetzner CPX22 runs Ubuntu as its operating system. Nginx serves as web server and reverse proxy, routing all incoming requests to the various projects. Each project has its own Nginx server block with its own domain and its own configuration.
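A per-project server block looks roughly like this — a minimal sketch, where the domain, port, and paths are placeholders, not my actual config:

```shell
# Create a server block that proxies one domain to one local Node process
cat > /etc/nginx/sites-available/example.com <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;   # the project's local port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx   # validate config before reloading
```

Because each project gets its own file under sites-available, adding or removing a project never touches the others' configuration.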
For Node.js process management I use PM2. Each project — Baseloq, AllesWurst, the booking platform — runs as an independent PM2 process. That means: if one project crashes, the others are unaffected. PM2 automatically restarts crashed processes and provides built-in monitoring with CPU and RAM usage per process.
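In practice that isolation boils down to a few commands — the names and paths below are illustrative:

```shell
# One named PM2 process per project
pm2 start /srv/baseloq/server.js --name baseloq
pm2 start /srv/booking/server.js --name booking

pm2 startup systemd   # generate a boot hook so processes survive reboots
pm2 save              # persist the current process list for resurrection

pm2 list              # status of all processes
pm2 monit             # live CPU/RAM per process
```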
Databases run on PostgreSQL — each project has its own database with its own user and its own permissions. Redis handles session management and caching, which drastically reduces response times for recurring requests.
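The per-project database isolation is a few `psql` statements — a sketch with placeholder names and a placeholder password:

```shell
# One database and one least-privilege user per project
sudo -u postgres psql <<'EOF'
CREATE USER baseloq_app WITH PASSWORD 'change-me';
CREATE DATABASE baseloq OWNER baseloq_app;
REVOKE ALL ON DATABASE baseloq FROM PUBLIC;  -- other projects' users can't even connect
EOF
```

A compromised or buggy project can therefore never read another project's data.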
SSL certificates come from Let's Encrypt via Certbot and are automatically renewed. And the best part: all these components — Nginx, PM2, PostgreSQL, Redis, Certbot — are open source and free. The only recurring cost is the €6.19 for the server itself.
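Issuing a certificate is a one-liner with Certbot's Nginx plugin (the domain is a placeholder); renewal then runs unattended:

```shell
certbot --nginx -d example.com   # obtain cert and patch the server block
certbot renew --dry-run          # verify the automatic renewal will work
```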
Automated backups round out the setup. Every night, PostgreSQL dumps are created and synced to a separate storage server. In a worst-case scenario, no more than 24 hours of data can be lost.
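The nightly backup can be sketched as two cron entries — database names, paths, and the storage host are placeholders:

```shell
# /etc/cron.d/pg-backup
# 03:00 — compressed dump per database; 03:30 — sync dumps off-server
0 3 * * *  postgres  pg_dump -Fc baseloq > /var/backups/pg/baseloq_$(date +\%F).dump
30 3 * * * root      rsync -az /var/backups/pg/ backup@storage.example:/backups/pg/
```

Note the escaped `\%F`: unescaped percent signs have special meaning in crontab lines.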
Cost comparison: Own server vs. cloud — the full picture
In the earlier section I mentioned the rough factor. Here's the detailed breakdown of what my setup would cost on AWS:
- Hetzner CPX22: €6.19/month — everything included (compute, storage, 20 TB traffic)
- AWS EC2 t3.medium (comparable): ~€35/month
- AWS EBS 80 GB SSD: ~€8/month
- AWS Data Transfer 1 TB/month: ~€90/month (after free tier)
- AWS RDS PostgreSQL (db.t3.micro): ~€30/month
- AWS ElastiCache Redis (cache.t3.micro): ~€15/month
AWS total: ~€180/month — that's a factor of 29 compared to Hetzner. Over three years: Hetzner ~€223 vs. AWS ~€6,480. Savings of over €6,250.
Even if I factor in 2–3 hours per month for server administration (which is realistic once the setup is in place), at an internal hourly rate of €80 that's about €200/month. Combined with the €6.19 server cost, I'm at ~€206/month — roughly on par with the bare AWS infrastructure, before counting a single minute of AWS administration time, which isn't zero either.
Security and monitoring
The most common objection to self-hosting: "But what about security!" In reality, a well-configured own server is no less secure than a cloud instance — often even more secure because the attack surface is smaller.
My security setup includes:
- SSH key-only authentication: Password login is completely disabled. Brute-force attacks on SSH go nowhere.
- fail2ban: Automatically blocks IP addresses after multiple failed login attempts — not just for SSH, but also for Nginx.
- UFW firewall: Only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) are open. Everything else is blocked.
- Unattended upgrades: Security updates are installed automatically, without manual intervention.
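On Ubuntu, the whole hardening above comes down to a handful of commands — a sketch of the idea, not a copy-paste script:

```shell
# Firewall: deny everything except SSH, HTTP, HTTPS
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# SSH: keys only, no passwords, no root login
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh

# fail2ban (SSH jail is active by default on Ubuntu; Nginx jails go in jail.local)
# and automatic security updates
apt install -y fail2ban unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```

Add your SSH key before disabling password login, or you'll lock yourself out.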
For monitoring I use PM2's built-in supervision: CPU usage, RAM consumption, and automatic restart on crashes. On top of that, simple uptime monitoring notifies me via email when a service becomes unreachable. PostgreSQL backups run automatically every night via pg_dump and a cron job, with dumps synced to external storage.
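The uptime check doesn't need a SaaS product — a cron-driven script like this is enough (URLs and the mail address are placeholders):

```shell
#!/bin/sh
# /usr/local/bin/uptime-check.sh — run from cron every 5 minutes;
# mails an alert for every endpoint that fails to respond in time
for url in https://example.com/health https://booking.example.com/health; do
    if ! curl -fsS --max-time 10 "$url" > /dev/null; then
        echo "DOWN: $url at $(date)" | mail -s "uptime alert" ops@example.com
    fi
done
```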
Migration: Moving is surprisingly easy
Another advantage of my setup: migrating to a new server takes 1–2 hours. All configurations are either version-controlled as code or easily reproducible. The process is simple:
- PostgreSQL: pg_dump on the old server, pg_restore on the new one — database migration in minutes.
- Files: rsync copies all project files, configurations, and uploads in one go.
- Nginx & PM2: Copy configuration files, start PM2 processes — done.
- DNS: Change the A record to the new IP, wait for TTL — migration complete.
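The first three steps can be sketched in a few commands — hosts and paths below are placeholders, and the PM2 step assumes the saved process list (`pm2 save`) was synced along with the files:

```shell
# 1. Database: dump on the old box, restore on the new one
pg_dump -Fc baseloq > baseloq.dump
scp baseloq.dump new.example:/tmp/
ssh new.example "pg_restore -d baseloq --clean --if-exists /tmp/baseloq.dump"

# 2. Files: project code, uploads, and Nginx configs in one go
rsync -az /srv/ new.example:/srv/
rsync -az /etc/nginx/sites-available/ new.example:/etc/nginx/sites-available/

# 3. Processes: restore the saved PM2 process list
ssh new.example "pm2 resurrect"
```

Lowering the DNS TTL a day before the move keeps the final cutover window short.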
Compare that with an AWS migration: Lambda functions, IAM roles and policies, VPC configurations, security groups, S3 buckets, RDS instances, ElastiCache clusters, CloudWatch alarms — everything needs to be individually recreated or migrated via CloudFormation/Terraform. That's not an afternoon project, it's a multi-week undertaking. My setup? A new server, a few commands, DNS switch — done.
Conclusion
Running your own server isn't a retro move. It's the pragmatic decision for cost, control, and compliance. Cloud providers have excellent marketing — but Hetzner CPX22 at €6/month beats AWS t3.medium at €80/month on price-performance by a massive margin. As long as my setup meets the requirements, I'll stick with it.
Questions or feedback? office@markusstoeger.com