Five days. That’s how long it took until a fresh Ubuntu server with an open SSH port was hit by nearly 19,000 login attempts per day. Not because someone was specifically targeting my portfolio — but because automated scanners continuously sweep the entire IPv4 internet for open SSH ports.
That it would happen was expected. How quickly it escalated, and what concrete effects it had, was still instructive. This article documents the progression based on real logs, along with the three measures that solved the problem.
The Trigger
A GitHub Actions pipeline that had been running reliably for days suddenly failed:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 10.0.0.2 port 22
SSH was reachable, the server was running, no firewall rule was blocking. Yet connections were being rejected. A look at the sshd logs revealed the cause:
error: beginning MaxStartups throttling
drop connection #10 from [167.99.38.221]:53320 past MaxStartups
sshd had reached the default limit of 10 concurrent unauthenticated connections — and was indiscriminately dropping new connections after that. Including those from GitHub Actions.
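If you suspect the same failure mode, the throttling events can be counted straight from the journal. A minimal sketch; the three sample lines here are hypothetical stand-ins for live journal output (in practice, pipe `journalctl -u ssh` into the same filter; the unit name `ssh` is Ubuntu's, other distros use `sshd`):

```shell
# Hypothetical journal excerpt; replace with: journalctl -u ssh
sample='error: beginning MaxStartups throttling
drop connection #10 from [167.99.38.221]:53320 past MaxStartups
Accepted publickey for deploy from 203.0.113.7 port 41234'

# Count lines mentioning MaxStartups (each one is a throttling event
# or a dropped connection)
printf '%s\n' "$sample" | grep -c "MaxStartups"
# -> 2
```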
The Escalation
A look at the journal logs from the first week shows how quickly this escalated:
| Day | Failed attempts | Note |
|---|---|---|
| Feb 26 | 1,367 | Server goes online, scanning begins |
| Feb 27 | 10,341 | IP added to scan lists |
| Feb 28 | 13,449 | Increase |
| Mar 1 | 18,948 | Peak — nearly 19,000 attempts in one day |
| Mar 2 | 9,193 | fail2ban active from 10:36, sharply reduced after |
Within two days of the SSH port going public, the IP was on the scan lists. By the fourth day, nearly 19,000 attempts. This is not an edge case; it is the normal state for any server with port 22 open to the internet.
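The per-day counts in the table come from nothing more exotic than grouping failed-login lines by date. A sketch of the pipeline, run here against hypothetical auth-log lines instead of the real journal:

```shell
# Hypothetical log lines; replace with: journalctl -u ssh | grep "Failed password"
sample='Feb 27 01:12:03 host sshd[811]: Failed password for root from 134.199.157.148 port 51000 ssh2
Feb 27 03:40:19 host sshd[902]: Failed password for invalid user admin from 46.225.10.160 port 40222 ssh2
Feb 28 02:05:44 host sshd[977]: Failed password for root from 134.199.157.148 port 51872 ssh2'

# Group by "Month Day" (fields 1 and 2) and count per day
printf '%s\n' "$sample" \
  | awk '{ count[$1" "$2]++ } END { for (d in count) print d, count[d] }' \
  | sort
# -> Feb 27 2
#    Feb 28 1
```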
The Attackers
On the day of the failed deployment, I analyzed the logs in detail:
- 20+ different attacker IPs active
- 213 MaxStartups throttling events, i.e. 213 connections dropped indiscriminately, legitimate ones included
- Peaks of 40 attempts per minute during the night
Origin
| IP | Attempts | Origin |
|---|---|---|
| 134.199.157.148 | 2,332 | DigitalOcean |
| 46.225.10.160 | 1,833 | Hetzner Cloud |
| 165.245.137.142 | 1,360 | DigitalOcean |
| 134.199.156.126 | 855 | DigitalOcean |
| 209.97.130.120 | 715 | DigitalOcean |
Almost exclusively cloud VMs from DigitalOcean and Hetzner: compromised or purpose-rented instances used for scanning. One IP even came from the same Hetzner data center as my server.
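The IP ranking is a standard sort/uniq pipeline over the source addresses in the failure lines. A sketch with a hypothetical sample; the real input is the journal:

```shell
# Hypothetical failure lines; real source: journalctl -u ssh
sample='Failed password for root from 134.199.157.148 port 51000 ssh2
Failed password for root from 134.199.157.148 port 51872 ssh2
Failed password for invalid user admin from 46.225.10.160 port 40222 ssh2'

# Extract the source IP after "from", then count and rank descending
printf '%s\n' "$sample" \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c | sort -rn | awk '{print $1, $2}'
# -> 2 134.199.157.148
#    1 46.225.10.160
```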
Attempted Usernames
3,531 root (38%)
792 admin
410 postgres
408 test
369 oracle
308 ubuntu
230 hadoop
216 git
200 mysql
Classic dictionary scanning against standard service users. root, admin, postgres, mysql, jenkins, docker, elasticsearch — the bots try every username that might exist on a typical server installation. No targeted attack, pure spray-and-pray. But in volume, it’s enough to cripple an unprotected sshd.
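The username list above reads exactly like `sort | uniq -c | sort -rn` output, and that is essentially how it was produced. A sketch of extracting the targeted user from each failure line; the sample input is hypothetical, and the field positions assume the two standard sshd message shapes (`Failed password for USER ...` and `Failed password for invalid user USER ...`):

```shell
# Hypothetical failure lines; real source: journalctl -u ssh
sample='Failed password for root from 134.199.157.148 port 51000 ssh2
Failed password for root from 134.199.157.148 port 51872 ssh2
Failed password for invalid user admin from 46.225.10.160 port 40222 ssh2'

# Field 4 is the user, unless the line says "invalid user", in which
# case the user is field 6
printf '%s\n' "$sample" \
  | awk '{ u = ($4 == "invalid") ? $6 : $4; print u }' \
  | sort | uniq -c | sort -rn | awk '{print $1, $2}'
# -> 2 root
#    1 admin
```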
The Countermeasures
1. Password Authentication Disabled
PasswordAuthentication no
The most obvious measure: password brute-force attacks have nothing left to attack, because only public key authentication is allowed. The attackers waste their time, but they still burden the sshd process with connection establishment, which is why this measure alone is not enough.
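In sshd_config terms this is a couple of lines. A minimal sketch; the drop-in file path and the two directives beyond `PasswordAuthentication` are common hardening companions I'm adding as assumptions, not settings taken from the server above:

```
# /etc/ssh/sshd_config.d/10-hardening.conf  (drop-in path is an assumption)
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
```

After editing, `sshd -t` validates the configuration before a reload, so a typo cannot lock you out of the running daemon.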
2. fail2ban
fail2ban monitors the sshd logs and bans IPs after repeated failed attempts via firewall rules:
[sshd]
enabled = true
backend = systemd
maxretry = 3
findtime = 600
bantime = 86400
3 failed attempts within 10 minutes -> IP is blocked for 24 hours. Not just for SSH, but at the firewall level — packets are dropped before they reach sshd.
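The ban decision itself is just a sliding-window count. The following is a sketch of the maxretry/findtime rule with hypothetical failure timestamps (in seconds), to illustrate the semantics, not fail2ban's actual implementation:

```shell
# Ban as soon as any 3 failures from one IP fall within a 600 s window
# (maxretry = 3, findtime = 600); timestamps below are hypothetical
printf '%s\n' 100 250 400 9000 | awk '
  { t[NR] = $1 }
  END {
    for (i = 3; i <= NR; i++)          # need maxretry = 3 failures
      if (t[i] - t[i-2] <= 600) {      # all within findtime = 600 s
        print "ban at t=" t[i]; exit
      }
    print "no ban"
  }'
# -> ban at t=400
```

The failures at t=100, 250, and 400 fall inside one 600-second window, so the third one triggers the ban; the straggler at t=9000 never matters.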
The effect was immediately measurable: from ~30 failed attempts per minute down to under 5. Within the first minutes, 15 IPs were banned. After one hour, SSH traffic had fallen to a tenth of the previous level.
3. MaxStartups Increased
MaxStartups 30:50:100
The format is start:rate:full: once 30 unauthenticated connections are pending, sshd rejects new connections with a probability that starts at 50% and rises linearly; at 100 pending connections, every new connection is rejected. The default of 10:30:100 was too low; even a moderate scan reached the limit.
Combined with fail2ban, this limit is no longer reached in practice. The attackers are banned before they can build up enough concurrent connections.
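The ramp between start and full is a linear interpolation from rate% up to 100%, as described in the sshd_config man page. A sketch of that formula for 30:50:100, evaluated at a hypothetical 65 pending unauthenticated connections:

```shell
# P(drop) ramps linearly from rate% at "start" to 100% at "full";
# n=65 pending connections is a made-up example value
awk -v n=65 -v start=30 -v rate=50 -v full=100 'BEGIN {
  p = rate + (100 - rate) * (n - start) / (full - start)
  print p
}'
# -> 75
```

So halfway through the ramp, three out of four new connections are already being rejected, which is why a sustained scan against the default 10:30:100 starves legitimate clients so quickly.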
The Effect
After activating all three measures:
- Failed attempts: From ~30/minute down to under 5/minute
- MaxStartups throttling: Zero events since activation
- Banned IPs: Continuously rising, currently over 15 per server
- Legitimate connections: Not a single drop since
The CI/CD pipeline has been running reliably ever since. The actual problem — the failed deploy — was resolved within minutes.
Conclusion
A server with an open port 22 will be attacked. Not maybe, not eventually — immediately and permanently. The question is not whether but how quickly the scan bots find the IP. In my case: two days.
The three measures — disabling password auth, fail2ban, hardening MaxStartups — take less than five minutes to set up combined. Without them, my deploy workflow would still be unreliable, and sshd would be wasting a significant portion of its resources processing bot traffic.
SSH hardening doesn’t belong on the “I’ll get to it eventually” list. It belongs in the first hour after server installation.