I'm building a site where users enter promo codes. There is no user authentication, but I want to prevent someone from entering promo codes by brute force. I'm not allowed to use CAPTCHA, so I was thinking of using an IP address blocking process: the site would block a user's IP address for X amount of time if they had Y failed attempts at entering the promo code.
Are there any glaring issues in implementing something like this?
Blocking IP addresses is a bad idea, because that IP address might belong to a corporate HTTP proxy server.
Most corporations and institutions connect to the internet through a gateway. In such a case, the IP address you see is that of the gateway, and N users might be behind it. If you block this IP address because of nuisance caused by one user on that network, IP-based blocking will also make your site unavailable to the other N users. This is true wherever a bunch of computers are NATed behind a single router.
Scenario 2: What if, say, X users on that same network inadvertently provide an incorrect code within your limit of Y minutes? All users on that network again get blocked from entering any more codes.
You can use a cookie-based system, where you store the number of attempts in the past Y minutes in a cookie (or in a session variable on the server side) and validate it on each attempt. However, this isn't foolproof either, as a user who knows your implementation can circumvent it as well.
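A minimal sketch of the server-side session variant (the limit and window are placeholder values, and isValidPromoCode() stands in for your own lookup):

<?php
// Session-based attempt counter: a sketch, not hardened.
// A user who clears cookies gets a fresh session, so treat this
// as a speed bump rather than a real defence.
session_start();

const MAX_ATTEMPTS   = 5;    // X failed attempts...
const WINDOW_SECONDS = 600;  // ...within Y minutes (here 10)

// Reset the counter once the window has expired.
if (!isset($_SESSION['first_attempt']) ||
    time() - $_SESSION['first_attempt'] > WINDOW_SECONDS) {
    $_SESSION['first_attempt'] = time();
    $_SESSION['attempts'] = 0;
}

if ($_SESSION['attempts'] >= MAX_ATTEMPTS) {
    http_response_code(429);
    exit('Too many attempts. Try again later.');
}

if (!isValidPromoCode($_POST['promo'] ?? '')) {  // placeholder for your own validation
    $_SESSION['attempts']++;
}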
If you're on IIS 7, there's actually an extension that helps you do precisely what you're talking about.
http://www.iis.net/download/DynamicIPRestrictions
This could save you from trying to implement it yourself in code. As for any "glaring issues": this sort of thing is done all the time to prevent brute force attacks on web applications. I can't think of any reason why a real user would need to enter codes in the same manner as a computer issuing a brute force attack. Testing any and all possible user experiences should get you past any issues that might pop up.
- If it's not linked to a shop, DO NOT CONSIDER THIS -
Have you thought about placing a hidden tag on your orders? It's not 100% foolproof, but it will discourage some brute-forcers.
All you have to check is whether the hidden tag pops up with tons of promo codes; if it does, you block the order.
I would still recommend you set up some kind of login.
I don't think there is one solution that will solve all your problems, but if you want to slow down a brute force attack, just adding a delay of a few hundred milliseconds to the page load will do a lot!
You could also force them to first visit the page where you enter the code. There you could add a hidden field with a value and store the same value in the session; when the user submits the code, you compare the hidden field to the session value.
This way you force the attacker to make two requests instead of just one. You could also measure the time between those two requests, and if it's below a set amount of time, you can more or less guarantee it's a bot.
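A rough sketch of both checks combined, assuming plain PHP sessions (the two-second threshold is a guess):

<?php
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    // First request: render the form with a hidden token.
    $_SESSION['form_token'] = bin2hex(random_bytes(16));
    $_SESSION['form_time']  = microtime(true);
    echo '<form method="post">'
       . '<input type="hidden" name="token" value="' . $_SESSION['form_token'] . '">'
       . '<input name="promo"><button>Submit</button></form>';
    exit;
}

// Second request: verify the token and the elapsed time.
$tokenOk = isset($_SESSION['form_token'], $_POST['token'])
        && hash_equals($_SESSION['form_token'], $_POST['token']);
$tooFast = (microtime(true) - ($_SESSION['form_time'] ?? 0)) < 2.0;

unset($_SESSION['form_token']);      // single use either way

if (!$tokenOk || $tooFast) {
    usleep(300000);                  // the few-hundred-millisecond delay mentioned above
    http_response_code(400);
    exit('Invalid submission.');
}
// ... validate the promo code ...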
I've been looking around, but I couldn't find anything useful. What would be the best practice for securing a Symfony app against brute force attacks? I looked into the SecurityBundle, but I couldn't find anything there.
Something that I do for this is keep a log, using event subscribers, of the IP addresses and/or usernames attempting to log in. Then, if an IP/user has accumulated X failed login attempts within a given amount of time, I move that IP address/user to a ban list, and after that, any time that IP/user tries to log in, I deny it right away based on that ban list.
You can also play with the time between attempts, and all those goodies, inside the event subscriber.
Let me know if it makes sense.
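Here's a minimal sketch of that idea, assuming Symfony 5.2+'s LoginFailureEvent; the BanRepository service and its methods are hypothetical:

<?php
namespace App\EventSubscriber;

use App\Repository\BanRepository; // hypothetical service
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Security\Http\Event\LoginFailureEvent;

class LoginBanSubscriber implements EventSubscriberInterface
{
    private const MAX_FAILURES = 5;   // arbitrary threshold

    public function __construct(private BanRepository $bans) {}

    public static function getSubscribedEvents(): array
    {
        return [LoginFailureEvent::class => 'onLoginFailure'];
    }

    public function onLoginFailure(LoginFailureEvent $event): void
    {
        $ip = $event->getRequest()->getClientIp();

        $this->bans->recordFailure($ip);            // hypothetical
        if ($this->bans->failureCount($ip) >= self::MAX_FAILURES) {
            $this->bans->ban($ip);                  // hypothetical
        }

        // Deny banned IPs right away with a 403.
        if ($this->bans->isBanned($ip)) {
            $event->setResponse(new Response('Banned.', Response::HTTP_FORBIDDEN));
        }
    }
}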
Use Cloudflare for DDoS attacks. However, it may be expensive.
You can prevent dictionary attacks using https://github.com/codeconsortium/CCDNUserSecurityBundle
Honestly, I do that with my web/cache server when I need to. I recently used Varnish Cache to do it with a module called vsthrottle (which is probably one of many things you can use at the server level).
The advantage of doing it at the web server level instead of in Symfony is that you are not even hitting the PHP layer and compiling all the vendors just to end up rejecting a request, and you are not using a separate data store (be it MySQL or something fast like Memcached) to log every request and compare it on the next one. If the request reaches the PHP layer, it has already cost you some performance, and a DDoS of that type will still hurt you even if you return a rejection from Symfony, because it makes the server compile PHP and part of the Symfony code.
If you insist on doing it in Symfony, register a listener that listens on all requests, and parse the request headers for either the IP address or X-Forwarded-For (in case you are behind a load balancer, in which case only the load balancer IP will show up with a regular IP check). Then find a suitable way to keep track of all requests up to a minute old (you could probably use Memcached for fast storage, with a smart way to increment counts for each IP), and if an IP hits you more than, let's say, 100 times in the last minute, you return a Forbidden or Too Many Requests response instead of the usual one. But I do not recommend this, as pre-built solutions (like the Varnish module I used) are better; in my case I could throttle specific routes and not others, for example.
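For illustration, a fixed-window counter along those lines using the Memcached extension (the key scheme and server address are assumptions):

<?php
// Fixed-window rate limiter sketch. The 100-per-minute limit
// mirrors the figure above; everything else is an assumption.
function isRateLimited(string $ip, Memcached $m, int $limit = 100): bool
{
    // One counter per IP per minute-long window.
    $key = 'ratelimit:' . $ip . ':' . intdiv(time(), 60);

    // add() only succeeds if the key doesn't exist yet, so the
    // counter is created atomically; otherwise increment it.
    if (!$m->add($key, 1, 120)) {
        return $m->increment($key) > $limit;
    }
    return false; // first request in this window
}

$m = new Memcached();
$m->addServer('127.0.0.1', 11211);

// In a Symfony kernel.request listener you would read the IP from
// $event->getRequest()->getClientIp() and set a 429 response instead.
$ip = $_SERVER['HTTP_X_FORWARDED_FOR'] ?? $_SERVER['REMOTE_ADDR'];
if (isRateLimited($ip, $m)) {
    http_response_code(429);
    exit('Too many requests.');
}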
I have a website developed using WordPress.
For the last 3 months, someone has continuously been trying to log in to my WordPress admin panel, using a different IP address EVERY TIME. I think they are using a brute force attack.
For security purposes I am using the "Sucuri Security" plugin, which is installed on my site, and I have also installed "Limit Login Attempts".
From "Sucuri Security" plugin they send me a message after failed login :
Message is look like this :
1 failed login attempts (1 lockout(s)) from IP: 109.173.88.245
Last user attempted: administrator
1 failed login attempts (1 lockout(s)) from IP: 37.194.196.180
1 failed login attempts (1 lockout(s)) from IP: 83.174.209.143
Now, in the last 45 minutes, they have attempted to log in 31 times.
What should I do now?
The first and most obvious answer: Use a long, strong password (random letters, numbers, and preferably other characters too).
If you are doing so, the chance they will get access to your site in this particular way is close to zero.
Consider the number of login attempts on a per-minute, or per-year, basis:
31 logins / 45 mins ≈ 0.7 login attempts per minute.
Multiply that by the number of minutes in a year:
0.7 * 60 mins * 24 hours * 365 days ≈ 367,920 login attempts
in the space of a year.
For a good password, this number is so small that the "brute force attack" will be practically insignificant.
I won't blame you if you still want to improve your security though. If so, you should look into options for two factor authentication for WordPress (i.e. a system where some extra piece of information is required to confirm your identity before you can log in).
PS: I haven't tried any of these personally, but if I was in your shoes, I'd probably give the Google Authenticator plugin a shot.
Use strong passwords. :)
This type of thing happens all the time, and not just against websites; people try to brute force just about anything: SMTP, POP3 and IMAP servers, SSH servers, commonly used applications like WordPress, etc.
The most important thing is to make sure that, if password authentication is used, those passwords (all of them!) resist brute force attempts. This means having enough entropy, i.e. being long enough, drawing from multiple character classes, not using dictionary words, etc. A 10-character random password with lowercase letters, capital letters, and numbers is pretty much infeasible to brute force.
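To put rough numbers on that, using the attack rate estimated earlier on this page:

62^10 ≈ 8.4 * 10^17 possible 10-character passwords
8.4 * 10^17 / 367,920 attempts per year ≈ 2.3 * 10^12 years to exhaust them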
In addition to that, you can implement active monitoring of some sort: banning the user and/or the requesting IP address for a while after several unsuccessful password attempts, etc. These countermeasures raise the bar for an attacker, but a strong enough non-dictionary password is probably good enough anyway for a WordPress site. (So it's a risk-based thing: if the value you can lose is $10,000, you don't want to spend $50,000 to protect it.)
One of my sites is constantly bombarded with failed login attempts. The IP addresses keep changing - I do have a blacklist but I'm not sure how much value there is in maintaining that if the IP addresses are always changing.
I could whitelist the admin to allow access only from a specific IP range; I feel this is very restrictive for the nature of the website - a free service. Here's a screenshot of thousands of emails I get for this site regarding failed login attempts - consistent for well over a year if not more.
I got the same login-attack email notifications (almost 50 attacks in one day) and solved it by disabling xmlrpc.php.
Add this to your theme's or child theme's functions.php:
//disable XML-RPC
add_filter('xmlrpc_enabled', '__return_false');
I have some web services which are called by several clients, including mobile and web. I have no control over the clients' code.
But I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service and you can track their usage etc.
On every call to your service, the user would have to provide their key as a means of authorisation (not authentication). This sort of approach can generally help avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
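A minimal sketch of such a gate (the X-Api-Key header name and the api_keys table are invented for the example):

<?php
// API key gate sketch: look the key up, reject unknown or revoked keys.
$pdo = new PDO('mysql:host=localhost;dbname=api', 'user', 'pass');

$key = $_SERVER['HTTP_X_API_KEY'] ?? '';

$stmt = $pdo->prepare(
    'SELECT client_name, disabled FROM api_keys WHERE api_key = ?'
);
$stmt->execute([$key]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false || $row['disabled']) {
    http_response_code(403);   // unknown or revoked key
    exit('Invalid API key.');
}

// You now know exactly who is calling, and can track usage per client.
error_log('API call from ' . $row['client_name']);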
You should check your IIS logs; these will list all the requests made to your server (if you have logging turned on, which it is by default).
So search through the log for the URL of the service and check the logs around the time of requests you are having issues with and it will list the IP address.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck currently, you will need to turn logging on in IIS and then you will be able to check them after a few hours and start seeing where requests are coming from.
E.g. a sample from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
So the first item is the date and time, and the client IP address is the 27.x.x.x entry (partially redacted, as it's from a real log).
I have a DB with user accounts information.
I've scheduled a cron job which updates the DB with any new user data it fetches from their accounts.
I was thinking that this may cause a problem, since all requests come from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? Should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance that wherever you are getting this data from will block your access.
Edit
Okay, so if you have not got permission to do the scraping, you are obviously going to want to do it responsibly (no matter the site). Try to gather as much data as you can with as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to be low load. I wouldn't try to use a proxy; that wouldn't really help the remote server, but it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data, so all the traffic isn't coming from the same source IP. Just an idea; otherwise, just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site.
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to a URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
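Putting the first two tips into practice takes only a few lines with cURL. A sketch, with the URL, User-Agent string, and pacing interval as placeholders:

<?php
// "Polite scraper" request: gzip-compressed, self-identifying, paced.
function politeFetch(string $url): string
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        // Empty string = accept every encoding curl supports,
        // including gzip, and decode it transparently.
        CURLOPT_ENCODING       => '',
        // Identify yourself so the site sees more than a
        // "generic unknown anonymous scraper".
        CURLOPT_USERAGENT      =>
            'ExampleSyncBot/1.0 (+https://example.com/bot-info)',
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

$urls = ['https://example.com/export.json'];   // placeholder list
foreach ($urls as $url) {
    $data = politeFetch($url);
    // ... store $data ...
    sleep(60);   // pace requests; pick an interval the site owner would accept
}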
What I really want is to limit one vote per person, but the next best thing I can think of is to limit one vote per IP address, to prevent malicious users/hackers from severely tampering with my company's voting system. I was thinking of using a database to keep track of the IP addresses.
Update:
Sorry about not being clear the first time around. What I wanted to know is whether limiting one vote per IP address is a good strategy for limiting one vote per person. Basically, I wanted to know if one unique IP address is roughly equal to one person. People have already mentioned that proxies and routers reuse IP addresses, so unfortunately many people can be using the same IP address.
Thanks. I think, for my case, it'll be best NOT to limit votes to one per IP address.
I would suggest not going with the IP approach. When I looked at this before, some of the large ISPs reused IPs a lot (AOL...). But if you do use IP addresses, use a database to track them. A fast way to do it is to make the IP a unique key and to catch the duplicate-key exception as "already voted".
One good thing to add: don't show the user that their vote was not counted; just show the results, or thank them for voting. By not giving that specific error, the scheme is harder to probe, and sometimes your problem users won't even notice.
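A sketch of the unique-key-plus-silent-failure approach with PDO (table and column names are invented for the example):

<?php
// votes(ip) has a UNIQUE index, so a second vote from the same IP
// raises a duplicate-key error instead of needing a separate lookup.
$pdo = new PDO('mysql:host=localhost;dbname=polls', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $stmt = $pdo->prepare('INSERT INTO votes (ip, choice) VALUES (?, ?)');
    $stmt->execute([$_SERVER['REMOTE_ADDR'], $_POST['choice']]);
} catch (PDOException $e) {
    if ($e->getCode() === '23000') {   // SQLSTATE 23000: duplicate key
        // Already voted. Deliberately say nothing (see above):
        // just fall through to the results page.
    } else {
        throw $e;
    }
}
header('Location: /results');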
If you use IP addresses, then you'll be limiting most companies to only one vote, because they route all outbound internet traffic through a firewall or proxy server. We did this a couple of years ago and found that all AOL traffic came from only 5 IP addresses.
Generally, yes, what you would do is have a database table for the votes and simply store choice + IP address; then, when inserting, do a DB query to see if an entry already exists with the given IP.
The ideal solution would be to tie votes to user accounts which are in turn linked to more concrete presence (such as a credit card, cell phone, or other less-easily-multiplied identity source).
What exactly is the question you're asking?
The way I have always done it is to concatenate the user agent and IP address into an MD5 hash (in some cases this will allow people from the same IP to vote, as long as they are using different browsers), store that as a "fingerprint" for the vote in the database, and add a unique key to it. As IPX Ares said, from there you can catch the duplicate-key exception, and you should be good.
If you wanted to allow people to vote once a day, you could also append the Ymd date to that "fingerprint", or use other variations to allow X amount per hour or X amount per day.
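For illustration, the fingerprint might be built like this (the Ymd suffix gives one vote per day):

<?php
// Fingerprint = user agent + IP + date, hashed. Store it in a column
// with a UNIQUE index and catch the duplicate-key exception, as in
// the earlier unique-key example.
$fingerprint = md5(
    ($_SERVER['HTTP_USER_AGENT'] ?? '')
    . $_SERVER['REMOTE_ADDR']
    . date('Ymd')
);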
Yes, use a database. Don't rely on cookies; they can be easily deleted.
IMO, so far, IP-based vote limiting is the best option.
IP addresses have their limitations, as we have noted above, but there are many other characteristics a browser has which can hamper mischievous voters. The BrowserID, for example, is different for just about every browser. You could use a combination of BrowserID and IP address to create a unique ID.
Another way to help avoid cheating is to provide a one-time-use hash in the form, then check that it's valid before you count the vote.
For example:
When you create the voting form, generate a random hash, store it in the database, and put it in the form as a hidden field.
(You might want to add a date field to the hash table so you can clean up unused hashes.)
Then, when you get a vote POST request, you check whether the supplied hash is in the database, and remove it from the database so it can't be used again.
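A rough sketch of that flow (the vote_tokens table and its columns are invented for the example):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=polls', 'user', 'pass');

// --- When rendering the form ---
$hash = bin2hex(random_bytes(16));
$pdo->prepare('INSERT INTO vote_tokens (hash, created_at) VALUES (?, NOW())')
    ->execute([$hash]);
echo '<input type="hidden" name="token" value="' . $hash . '">';

// --- When handling the vote POST ---
// DELETE both checks and consumes the token in one atomic statement:
// rowCount() is 0 if the hash was missing or already used.
$del = $pdo->prepare('DELETE FROM vote_tokens WHERE hash = ?');
$del->execute([$_POST['token'] ?? '']);
if ($del->rowCount() === 0) {
    http_response_code(400);
    exit('Invalid or reused token.');
}
// ... count the vote ...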
CONS:
Might load the database with high IO if the voting page has high traffic.
Can't cache the page as plain HTML, so it puts more stress on the web app.