Restrict node with CAPTCHA - drupal

Is there any quick solution to restrict access to a single node (page) with a CAPTCHA module (or some other, similar way)?

If you mean allowing a user to access a node only after passing a CAPTCHA, then there isn't any module for that.
If I understood correctly, the module should present a CAPTCHA, and if the answer is correct, the node should be shown.
You can create a custom module that uses the CAPTCHA module.

If your purpose is to block bots, try these:
ddos
botbouncer
I have used "ddos" before, just to block too many requests from one IP on a previous website. The usage is fairly simple:
In your app.js, add
var Ddos = require('ddos');
var ddos = new Ddos({ burst: 10, limit: 50, errormessage: 'Maximum number of requests exceeded from your system, please wait to regain access' });
app.use(ddos.express);
How ddos works is that it maintains an internal count of the number of requests it receives from each IP. For every request it receives, it increments that counter, and for every second that passes without a request, older entries are deleted.
Now, if the limit (here, 50) is exceeded for a certain IP, a 429 error is returned. From then on, every subsequent request increments the counter at the specified burst rate (here, 10) until the internal counter resets.
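As a rough illustration of that counting idea (this is not the ddos module's actual internals, just a sketch of the behaviour described above; the numbers mirror the burst and limit settings):

// A rough sketch of the per-IP counting idea, not the ddos module's real internals.
var counts = {};

function allowRequest(ip) {
  counts[ip] = (counts[ip] || 0) + 1;
  return counts[ip] <= 50;            // over the limit => caller answers with 429
}

// Decay: every second, shrink each counter and forget IPs that have gone quiet.
setInterval(function () {
  Object.keys(counts).forEach(function (ip) {
    counts[ip] -= 10;                 // decay step, chosen for illustration only
    if (counts[ip] <= 0) delete counts[ip];
  });
}, 1000);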
This is the next best thing to incorporating Cloudflare on your website. Hope that helps!

Related

How to secure Symfony app from brute force and malicious traffic

I've been looking around but I couldn't find anything useful. What would be the best practice for securing a Symfony app from brute force attacks? I looked into the SecurityBundle but I couldn't find anything.
Something I do for this is keep a log, using event subscribers, of the IP addresses and/or usernames attempting to log in. Then, if within x amount of time an IP/user has tried to log in and failed, I move that IP address/user to a ban list, and after that, any time that IP/user tries to log in, I deny it right away based on that ban list.
You can also play with the time between attempts and all those goodies inside the event subscriber.
Let me know if it makes sense.
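The question is about Symfony, but the ban-list idea itself is framework-agnostic; here is a rough sketch of the logic, written in JavaScript for consistency with the earlier example (in Symfony it would live in an event subscriber; the window and thresholds are assumptions):

// Framework-agnostic sketch of the ban-list idea described above.
var failures = {};   // ip or username -> array of failure timestamps
var banned = {};     // ip or username -> timestamp the ban expires at

function recordFailedLogin(key) {
  var now = Date.now();
  var recent = (failures[key] || []).filter(function (t) {
    return now - t < 15 * 60 * 1000;             // keep the last 15 minutes
  });
  recent.push(now);
  failures[key] = recent;
  if (recent.length >= 5) {                      // 5 failures => ban for 1 hour
    banned[key] = now + 60 * 60 * 1000;
  }
}

function isBanned(key) {
  var until = banned[key];
  if (until && Date.now() < until) return true;
  delete banned[key];
  return false;
}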
Use Cloudflare for DDoS attacks. However, it may be expensive.
You can prevent dictionary attacks using https://github.com/codeconsortium/CCDNUserSecurityBundle
Honestly, I do that at the web/cache-server level when I need to. I recently used Varnish Cache to do it with a module called vsthrottle (which is probably one of many things you can use at the server level). The advantage of doing it at the web-server level instead of in Symfony is that you are not even hitting the PHP layer and loading all the vendors just to end up rejecting a request, and you are not using a separate data store (be it MySQL or something fast like Memcached) to log every request and compare it on the next one. If the request reaches the PHP layer, it has already cost you some performance, and a DDoS of that type will still hurt you even if you return a rejection from Symfony, because it forces the server to run PHP and part of the Symfony code.
If you insist on doing it in Symfony, register a listener that listens on all requests, and parse the request headers for either the IP address or X-Forwarded-For (in case you are behind a load balancer, in which case only the load balancer's IP will show up with a regular IP check). Then find a suitable way to keep track of all requests up to a minute old (you could probably use Memcached for fast storage, with a smart way to increment counts for each IP), and if an IP hits you more than, say, 100 times in the last minute, return a Forbidden or Too Many Requests response instead of the usual one. But I do not recommend this, as already-built solutions (like the Varnish one I used) are usually better; in my case I could throttle specific routes and not others, for example.
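For the header-parsing part of that listener, picking the client IP when a load balancer sits in front boils down to something like this (a sketch in JavaScript for consistency with the earlier example; in Symfony you would read the same headers from the Request object):

// Take the left-most X-Forwarded-For entry if present, otherwise the socket address.
function clientIp(req) {
  var forwarded = req.headers['x-forwarded-for'];
  if (forwarded) {
    return forwarded.split(',')[0].trim();  // left-most entry is the original client
  }
  return req.socket.remoteAddress;
}

Only trust X-Forwarded-For when your own load balancer sets it; a client can forge the header otherwise.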

How to handle multiple update requests on load-balanced servers

I have a client application which adds items to a cart. An "add" operation fires an update request via an HTTP REST call to a remote endpoint. Please note: this request contains the complete cart as a whole, not just the item being added. The request is then load-balanced between two servers using a round-robin algorithm.
The problem I'm trying to tackle is that the client does not wait for an "add" request to return before launching another one, if the user adds again. This is good from an end-user perspective because the user doesn't have to wait, but it is a nightmare from a server perspective: you can't be sure in which order the requests will be processed because of the load balancer.
Here is an example:
The user adds item #1 to the cart. Request A is sent.
The user adds item #2 to the cart. Request B is sent. Note that request B is sent before request A has received a response.
Request A is load-balanced to server 1, and request B is load-balanced to server 2.
For some reason, server 1 is slower than server 2, so request B is processed first => the cart has items #1 and #2.
Server 1 then processes request A => the cart has item #1 only (reminder: each update request contains the whole cart).
I'm not so sure about how to handle this. So far, the possible solutions I can think of are:
Send a timestamp with the request and keep the timestamp in the database. Before updating the cart, check that the timestamp of the request is newer; otherwise drop the request. But this relies heavily on client-side behaviour.
Set a version number on the cart itself, and increment it at each update. This would force the client to wait for the response before sending another update. Not very satisfactory from an end-user perspective, because the user has to wait.
Set "session affinity" on the load balancer so that requests from a particular client are always routed to the same server. The problem is that it affects the balance of the server load.
I'm pretty sure I'm not the first one to face such an issue, but surprisingly I failed to find a similar case. I probably used the wrong question or keywords! Anyway, I'd be very interested in your thoughts and experience on this problem.
The easiest way would be to send the operation details (added #1, added #2…) so the servers can reconstruct the cart incrementally. With that information, you don't rely on the requests being processed in a specific order at all.
If you can't modify the API, your third solution (keeping a given session on the same server for its whole duration) would probably be the way to go, absent more information on your expected load per customer and customer count.
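A minimal sketch of the first suggestion, assuming a hypothetical Express endpoint and an in-memory store (in production the cart would live in a shared database): each request carries one operation instead of the whole cart, so adds are commutative and the processing order across servers stops mattering.

var express = require('express');
var app = express();
app.use(express.json());

var carts = {};   // cartId -> object of itemId -> true (in-memory, for illustration only)

// Hypothetical route: the body carries the operation, e.g. { "op": "add", "itemId": 2 }
app.post('/carts/:id/operations', function (req, res) {
  var cart = carts[req.params.id] || (carts[req.params.id] = {});
  if (req.body.op === 'add') cart[req.body.itemId] = true;
  if (req.body.op === 'remove') delete cart[req.body.itemId];
  res.json({ items: Object.keys(cart) });
});

app.listen(3000);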

GET, PUT, DELETE, HEAD: What do you mean by Idempotent

I was watching a video on YouTube about REST APIs; below is the link:
REST API
It is said that GET, PUT, DELETE, HEAD are idempotent operations, i.e. you can invoke them multiple times and get the same state on the server.
Would anybody please elaborate on this line?
Just what it says
No matter how many times a resource is requested with the exact same URL, repeating the request never changes the state on the server beyond what the first request did.
idempotent: denoting an element of a set that is unchanged in value when multiplied or otherwise operated on by itself
The point is multiple requests that are exactly the same:
So if you request an image from a server 1000 times with the same URL, nothing on the server is changed.
If you call DELETE multiple times on the same resource, the state on the server doesn't change after the first call. The first call removes the resource, and nothing else, with no other side effect. And if the resource is already gone, good, that is what we wanted, and nothing else on the server should be affected.
GET and HEAD should never have side effects; PUT and DELETE may change state, but repeating the exact same call must not change it any further.
Doing a GET should not cause side effects to alter the state of the server no matter how many times this exact URL is requested.
It is about repeated subsequent calls
Example:
Calling GET on a resource should NOT modify a database record or cause any other changes. If it does, it isn't following the rules.
If you call HEAD on a resource 1000 times in a row, the state on the server should not change. It might return different data because someone removed the resource separately, but the repeated calls should never themselves change anything on the server.
What is NOT idempotent
Example:
Calling GET multiple times causes a counter tracking that resource to increment every time you make the request with the exact same URL. This is not idempotent: there is a side effect, and the state of the server changes because of the request.
Idempotent means that no matter how many times you invoke the method (e.g. GET), you won't introduce additional side effects. For example, when you issue a GET request to a URL (navigating to http://www.google.com in a browser), you theoretically won't change the state of the web server no matter how many GET requests you issue.
As a real-world example, you shouldn't make database DELETE/INSERT operations accessible via an HTTP GET. There have been numerous incidents of the Google crawler accidentally deleting entities from databases while crawling (i.e. GETting) websites.
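To make the distinction concrete, here is a minimal sketch (hypothetical Express routes, not taken from the video) contrasting an idempotent DELETE with a GET that is not idempotent because it has a side effect:

var express = require('express');
var app = express();

var items = { 1: 'widget' };
var viewCount = 0;

// Idempotent: deleting the same id once or ten times leaves the server in the same state.
app.delete('/items/:id', function (req, res) {
  delete items[req.params.id];
  res.sendStatus(204);
});

// NOT idempotent as written: every GET changes server state (the counter).
app.get('/items/:id', function (req, res) {
  viewCount += 1;   // side effect, exactly what the answers above warn against
  res.json({ item: items[req.params.id], views: viewCount });
});

app.listen(3000);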

Tell if someone is accessing my HTTP resources directly?

Is there a way to find out if anyone is linking to an image hosted on my website directly from their own website?
I have a website and I just want to make sure no one is using my bandwidth.
Sure, there are methods, some of which can be trusted a little more than others.
Using the Referer header
There is an HTTP header named Referer which most often contains the URL of the page the user came from to reach the current request.
You can see it as an "I came from here" header.
If it were guaranteed to always exist, it would be a piece of cake to prevent people from leeching your bandwidth; since that is not the case, it's pretty much a gamble to rely on this value alone (it might be missing at times).
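A minimal sketch of such a Referer check, as hypothetical Express middleware in front of an image route (the host name and paths are assumptions):

var express = require('express');
var app = express();

// Reject image requests whose Referer points to a different site. Requests
// without a Referer are let through, which is exactly the weakness noted above.
app.use('/images', function (req, res, next) {
  var referer = req.get('Referer');
  if (referer && referer.indexOf('https://example.com/') !== 0) {
    return res.sendStatus(403);
  }
  next();
});

app.use('/images', express.static('images'));
app.listen(3000);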
Using Cookies
Another way of telling whether a user is a true visitor to your website is to use cookies: a user who doesn't have a cookie and tries to access a specific resource (such as an image) could get a message saying "sorry, only real visitors of example.com get access to this image".
Too bad nothing forces a client to implement and handle cookies.
Using links with a set expiration time [RECOMMENDED]
This is probably the safest option, though it's the hardest to implement.
Using links that are only valid for N hours makes it impossible to leech your bandwidth without going to the trouble of implementing some sort of crawler that regularly crawls your site and fetches the current access token required to reach a resource (such as an image).
When a user visits the site, a token valid for N hours is generated and appended to the path of every resource sent back to the visitor. This token is mandatory and only valid for N hours.
If the user tries to access an image with an invalid or missing token, you can send back either 404 or 403 as the HTTP status code (preferably the latter, since the request is effectively forbidden).
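One way to implement such expiring links, sketched with Node's built-in crypto module (the secret, path format, and lifetime are assumptions, not part of the answer):

var crypto = require('crypto');

var SECRET = 'replace-with-a-real-secret';   // assumption: kept server-side only

// Build a URL that embeds an expiry time and an HMAC over path + expiry.
function signPath(path, expiresAt) {
  var mac = crypto.createHmac('sha256', SECRET)
    .update(path + ':' + expiresAt)
    .digest('hex');
  return path + '?expires=' + expiresAt + '&token=' + mac;
}

// Check the token before serving the resource.
function verifyPath(path, expiresAt, token) {
  if (Date.now() / 1000 > Number(expiresAt)) return false;   // link has expired
  var expected = crypto.createHmac('sha256', SECRET)
    .update(path + ':' + expiresAt)
    .digest('hex');
  return expected === token;
}

// e.g. signPath('/images/photo.jpg', Math.floor(Date.now() / 1000) + 3600)
// yields a link that is valid for one hour.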
There are however some quirks worth mentioning:
Crawlers from search engines might not visit the whole site within a given N-hour window; make sure they can still access all of your site's content. Identify them by the value of the User-Agent header.
Don't be tempted to lower the lifespan of your token below any reasonable time; remember that some users are on slow connections, and a 5-second token might sound cool, but real users can get flagged erroneously.
Never put a token on a resource that people should be able to find from an external point (search engines, for one), such as the page containing the images you wish to protect.
If you do this by accident, you will mostly harm your site's reputation.
Additional thoughts...
Please remember that any method you implement to make it impossible for leechers to hotlink your resources should never result in true visitors being flagged as bandwidth leeches. You probably want to ease up on the restriction rather than make it stronger.
I'd rather have 10 normal visitors and 2 leechers than no leechers but only 5 normal users (because I accidentally flagged 5 of the real visitors as leechers without thinking too much).

Prevent brute force attack by blocking IP address

I'm building a site where users enter promo codes. There is no user authentication, but I want to prevent someone from entering promo codes by brute force. I'm not allowed to use a CAPTCHA, so I was thinking of an IP-address blocking process: the site would block a user's IP address for X amount of time if they had X failed attempts at entering the promo code.
Are there any glaring issues in implementing something like this?
Blocking IP addresses is a bad idea, because an IP address might belong to a corporate HTTP proxy server.
Most companies and institutions connect to the internet through a gateway. In that case, the IP address you see is the gateway's, and N users might be behind it. If you block this IP address because of the nuisance caused by one user on that network, IP-based blocking will also make your site unavailable to the other N users. The same is true wherever a group of computers is NATed behind a single router.
Scenario 2: what if, say, X users on that same network inadvertently provided an incorrect code within your limit of Y minutes? All users on that network again get blocked from entering any more codes.
You can use a cookie-based system, where you store the number of attempts in the past Y minutes in a cookie (or in a session variable on the server side) and validate it on each attempt. However, this isn't foolproof either, as a user who knows your implementation can circumvent it as well.
If you're on IIS 7, there's actually an extension that helps you do precisely what you're talking about.
http://www.iis.net/download/DynamicIPRestrictions
This could save you from trying to implement it through code. As for any "glaring issues", this sort of thing is done all the time to prevent brute-force attacks on web applications. I can't think of any reason why a real user would need to enter codes in the same manner as a computer issuing a brute-force attack. Testing any and all possible user experiences should get you past any issues that might pop up.
(If it's not linked to a shop, DO NOT CONSIDER THIS.)
Have you thought about placing a hidden tag on your orders? It's not 100% foolproof, but it will discourage some brute-force attempts.
All you have to check is whether the hidden tag shows up with tons of promo codes; if it does, you block the order.
I would still recommend setting up some kind of login.
I don't think there is one solution that will solve all your problems, but if you want to slow down a brute-force attack, just adding a delay of a few hundred milliseconds to the page load will do a lot!
You could also force the user to first visit the page where the code is entered; there you add a hidden field with a value and store the same value in the session, and when the user submits the code you compare the hidden field to the session value.
This way you force the attacker to make two requests instead of just one. You could also measure the time between those two requests, and if it's below a set amount of time, you can more or less guarantee it's a bot.
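A rough sketch of that last idea, assuming a hypothetical Express app with express-session (field names and the time threshold are assumptions):

var express = require('express');
var session = require('express-session');
var crypto = require('crypto');

var app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'replace-me', resave: false, saveUninitialized: true }));

// Step 1: serve the form with a hidden field, remembering its value and when it was issued.
app.get('/promo', function (req, res) {
  var token = crypto.randomBytes(16).toString('hex');
  req.session.formToken = token;
  req.session.formIssuedAt = Date.now();
  res.send('<form method="post" action="/promo">' +
    '<input type="hidden" name="formToken" value="' + token + '">' +
    '<input name="code"><button>Apply</button></form>');
});

// Step 2: validate the hidden field and the time between the two requests.
app.post('/promo', function (req, res) {
  var tooFast = Date.now() - (req.session.formIssuedAt || 0) < 2000;   // assumed threshold
  if (req.body.formToken !== req.session.formToken || tooFast) {
    return res.status(429).send('Please slow down.');
  }
  // ...check the promo code here...
  res.send('Code received.');
});

app.listen(3000);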
