Protecting your website from Layer 7 DoS attacks



There was a time, when Metin2 was still a recent game, when it was fairly easy to perform Layer 7 attacks on FreeBSD servers, or even hack into them. Much software shipped with insecure defaults, and users were expected to secure it themselves.

This has changed: MySQL now listens only on localhost by default, Apache is for the most part an unnecessary relic of the past, the root user cannot log in over SSH, and so on. But one part of the stack has always been extremely vulnerable: the website, particularly the cheap webhosts many people opt for when they need to run certain poorly written CMS or forum software that doesn't play well with Nginx.

Since the needs of a game server (payments, voting and so on) can hardly be covered by any off-the-shelf solution, there will often be some PHP script pulling data directly from the database to show a player ranking or similar. Such a script can be hit repeatedly by one or more IP addresses until it overloads the MySQL database, which your game happens to use as well; eventually, both your game server and your website go down. And no, Cloudflare will not help you unless you pay money and/or configure it extensively and properly, a process I may explain some other time.
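To make the threat concrete, here is a sketch of the kind of script we are talking about - the credentials, table and column names are made up for illustration. Every single hit opens a connection and runs a fresh query:

<?php
// ranking.php - illustrative sketch; credentials and schema are hypothetical.
// Every request goes straight to MySQL: trivially floodable.
$pdo = new PDO('mysql:host=localhost;dbname=player', 'website', 'somepassword');
foreach ($pdo->query('SELECT name, level FROM player ORDER BY level DESC LIMIT 10') as $row) {
	echo htmlspecialchars($row['name']) . ' - level ' . (int)$row['level'] . "<br>\n";
}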

Today we are going to introduce two extremely easy mitigations for the sort of attack I just described, with the help of nginx and a bit of MySQL.

 

Two-stage rate limiting

The first technique is rate limiting. It involves throttling repeated requests from the same IP, particularly to .php files, which consume by far the most server resources. Hitting anything else is unlikely to cause any harm.

In order to enable rate limiting, we must first add a "zone" in the http context where client IPs are tracked:

limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;

This creates a 10 MB memory zone to keep track of client connections; any IP exceeding 5 requests per second will be refused with a 503 error (the default). But for this to actually work we must add a couple of extra lines to the php location of the server context - just mind the first two lines here; the rest are only shown for context:

location ~ \.php$ {
	limit_req zone=one burst=20 delay=10;
	limit_req_status 444;
	try_files $uri =404;
	fastcgi_pass   unix:/var/run/php-fpm.sock;
	fastcgi_index  index.php;
	include        fastcgi_params;
}

Besides specifying where to perform the rate limiting (.php files), the settings here make the experience a bit smoother for legitimate visitors: burst=20 allows a client to queue up to 20 requests above the limit before nginx starts refusing them outright, and delay=10 means the first 10 of those excess requests are served without delay, while the rest are slowed down so they conform to the 5r/s rate.

The limit_req_status line tells nginx to answer excess connections with 444 - a special nginx status code that simply closes the connection without sending any response - instead of the default 503 error, slightly reducing the server resources spent on the presumed attack.

 

FastCGI cache: serving stale content

Now this is all well and good, but what happens if we are attacked from multiple IPs? The feared DDoS!

Well, it depends on what our hypothetical PHP script is actually doing.

If it's simply pulling data from the database, we can use the FastCGI cache to make nginx serve such pages from a cache and avoid making repeated connections to the database. Let's define our cache in the http context:

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=mycache:10m inactive=10m;
fastcgi_cache_key "$request_method$host$uri";
fastcgi_cache_use_stale updating;

The first line sets up the cache: cached responses are stored under /var/run/nginx-cache, while a 10 MB shared memory zone named mycache keeps track of the keys; the inactive=10m parameter means entries that receive no requests for 10 minutes are removed, regardless of whether they are still valid.

The second line specifies which variables are used to build the key (a sort of hash) identifying each request in the cache. In practice this means that two requests differing only in their query string (say, /ranking.php?page=1 and /ranking.php?page=2) are treated as the same request for caching purposes, since the query string is not part of this "key" - so an attacker cannot bypass the cache by appending random query arguments. The flipside is that scripts which genuinely depend on their query arguments would all be served the same cached page; use $request_uri instead of $uri in the key if you need to tell them apart.

And the third and most relevant line means that while the cache is being updated, the client will not wait for said update to finish; instead it will be served the outdated ("stale") version of the page. Since normally every request past the validity period triggers a cache update, this technique enormously reduces the number of times the PHP script is actually executed during a flood.

(On a side note, this setting also allows us to serve stale content when the backend is not responding at all, which is exactly what Cloudflare does with its "Always Online" feature. We can enable this behavior by adding further triggers:)

fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;

Finally, to use the cache we defined, we add the following to a location - in this case it must be the php location, since it is a FastCGI cache. The second line specifies for how long a 200 OK response remains valid; other response codes will not be cached:

fastcgi_cache mycache;
fastcgi_cache_valid 200 5m;
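
Putting both techniques together, the complete php location ends up looking something like this (same zone, cache and socket paths as in the snippets above):

location ~ \.php$ {
	limit_req zone=one burst=20 delay=10;
	limit_req_status 444;
	fastcgi_cache mycache;
	fastcgi_cache_valid 200 5m;
	try_files $uri =404;
	fastcgi_pass   unix:/var/run/php-fpm.sock;
	fastcgi_index  index.php;
	include        fastcgi_params;
}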

 

The icing on the cake: limit MySQL connections by user

What about scripts that UPDATE the database? Things can get nasty here: since there is no cache to speak of, all we can do is limit the total number of requests that can reach the backend and the database. In this case nginx is not going to help; instead, create a dedicated MySQL user for your website, separate from the game user, and set a strict connection limit on it. This way such an attack cannot take down the database.

create user 'website'@'someip' identified by 'somepassword';
grant usage on account.* to 'website'@'someip' with max_user_connections 10;
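
Note that usage on its own grants no actual privileges, so you will still have to grant the website user whatever select/update rights its scripts need. Also be aware that MySQL 8.0 removed the with ... resource-limit clause from grant; on recent versions, set the limit through alter user instead:

alter user 'website'@'someip' with max_user_connections 10;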

As an extra measure you can set stricter rate limits in nginx for the most vulnerable POSTing scripts (register, login...). Be aware that the generic php block above is a regex location, which a plain prefix location can never override no matter where it is placed in the config; use an exact match (note the = below), which always takes precedence:

location = /login.php {
    limit_req zone=one burst=5 delay=2;
    limit_req_status 444;
    fastcgi_pass   unix:/var/run/php-fpm.sock;
    include        fastcgi_params;
}

And that is all you need to prevent your PHP scripts from being flooded. Of course, this is only one of the attack vectors, but it is the most overlooked, yet the easiest to fix. PHP programmers may also add extra checks in their code for repeated connections - memcached is your friend. A full treatment is outside our scope, but the basic idea is sketched below.
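A minimal sketch, assuming a local memcached instance and the php-memcached extension; the key prefix and the 30-requests-per-minute limit are arbitrary illustrative choices:

<?php
// Hypothetical per-IP throttle; place at the top of a sensitive script.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key = 'req:' . $_SERVER['REMOTE_ADDR'];

// add() only succeeds if the key does not exist yet; the entry expires
// after 60 seconds, giving a rolling one-minute window per IP.
if (!$mc->add($key, 1, 60)) {
	// Key already present: bump the counter and check the limit.
	if ($mc->increment($key) > 30) {
		http_response_code(429); // Too Many Requests
		exit;
	}
}
// ... normal script logic (database queries) continues below ...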
