Limit 1 vote per IP Address? - networking

What I really want is to limit 1 vote per person, but the next best thing I can think of is to limit 1 vote per IP address, to prevent malicious users/hackers from severely tampering with my company's voting system. I was thinking of using a database to keep track of the IP addresses.
Update:
Sorry about not being clear the first time around. What I wanted to know is whether limiting 1 vote per IP address is a good strategy for limiting 1 vote per person. Basically, I wanted to know if 1 unique IP address is roughly equal to 1 person. People have already mentioned that proxies and routers re-use IP addresses, so unfortunately, many people can be using the same IP address.
Thanks. I think, for my case, it'll be best NOT to limit 1 vote per IP address.

I would suggest not going with the IP approach. When I looked at this before, some of the large ISPs reuse IPs a lot (AOL...). But if you do use IP addresses, use a database to track them. A fast way to do it is to make the IP column a unique key and catch the duplicate-key exception as "already voted".
One good thing to add is not to show a user that their vote was not counted; just show the results, or thank them for voting. Without that specific error, the limit is harder to probe and sometimes not even noticed by your problem users.
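A minimal sketch of that unique-key approach, assuming $pdo is a PDO connection in exception mode and a votes table with a unique key on its ip column (all names here are illustrative):
try {
    // The unique key on `ip` makes the database enforce one-vote-per-IP atomically.
    $stmt = $pdo->prepare('INSERT INTO votes (ip, choice) VALUES (?, ?)');
    $stmt->execute([$_SERVER['REMOTE_ADDR'], $choice]);
} catch (PDOException $e) {
    if ($e->getCode() == '23000') { // SQLSTATE 23000: integrity constraint violation
        // Duplicate IP, i.e. "already voted" - stay quiet and show the results page.
    } else {
        throw $e;
    }
}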

If you use IP addresses then you'll be limiting most companies to only one vote, because they route all outbound internet traffic through a firewall or proxy server. We did this a couple of years ago and found that all AOL traffic came from only 5 IP addresses.

Generally, yes: what you would do is have a database table for the votes and simply store the choice plus the IP address; then, before inserting, do a DB query to see if an entry already exists with the given IP.
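A rough sketch of that check-before-insert, using the same illustrative votes table as above; note it has a small race window between the SELECT and the INSERT, which the unique-key variant avoids:
$stmt = $pdo->prepare('SELECT COUNT(*) FROM votes WHERE ip = ?');
$stmt->execute([$_SERVER['REMOTE_ADDR']]);
if ($stmt->fetchColumn() == 0) {
    // No vote recorded for this IP yet, so store this one.
    $pdo->prepare('INSERT INTO votes (ip, choice) VALUES (?, ?)')
        ->execute([$_SERVER['REMOTE_ADDR'], $choice]);
}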
The ideal solution would be to tie votes to user accounts which are in turn linked to more concrete presence (such as a credit card, cell phone, or other less-easily-multiplied identity source).
What exactly is the question you're asking?

The way I have always done it is to concat the user agent and IP address into an MD5 hash (in some cases this will allow people from the same IP to vote, as long as they are using different browsers), store that as a "fingerprint" for the vote in the database, and add a unique key to it. As IPX Ares said, from there you can catch the duplicate-key exception, and you should be good.
If you wanted to allow people to vote once a day, you could also append the date (Ymd) to that "fingerprint", or other variations to allow x amount an hour or x amount per day.
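A short sketch of that fingerprint (how you store it and handle uniqueness is up to you; see the duplicate-key catch above):
// One vote per browser+IP per day; drop date('Ymd') to make it one vote ever.
$fingerprint = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR'] . date('Ymd'));
// Store $fingerprint in a column with a unique key and catch the duplicate-key exception.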

Yes, use a database. Don't rely on cookies; they can be easily deleted.
IMO, so far, IP-based vote limiting is the best option.

IP addresses have their limitations, as noted above, but there are many other characteristics of a browser which can deter mischievous voters. The browser ID, for example, is different for just about every browser. You could use a combination of browser ID and IP address to create a unique ID.

Another way to 'help' avoid cheating is to put a one-time-use hash into the form, then check that it is valid before you count the vote.
For example:
When you create the voting form, you generate a random hash, store it in the database, and put it in the form as a hidden field.
(You might want to add a date field to the hash table so you can clean up unused hashes.)
Then, when you get a vote POST request, you check whether the supplied hash is in the database, and remove it from the database so it can't be used again.
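A rough sketch of that flow, assuming a vote_tokens table with a unique token column (names are illustrative):
// When rendering the form:
$token = bin2hex(random_bytes(16));
$pdo->prepare('INSERT INTO vote_tokens (token, created_at) VALUES (?, NOW())')->execute([$token]);
echo '<input type="hidden" name="token" value="' . $token . '">';

// When handling the vote POST - delete and check in one step, so a token can't be spent twice:
$stmt = $pdo->prepare('DELETE FROM vote_tokens WHERE token = ?');
$stmt->execute([$_POST['token'] ?? '']);
if ($stmt->rowCount() === 1) {
    // Token existed and is now consumed: count the vote.
} else {
    // Unknown or already-used token: ignore the vote.
}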
CONS:
Might load the database with high IO if the voting page has high traffic.
Can't cache the page as plain HTML, so it puts more stress on the web app.

Related

Website - blocking views from non-specified country locations

I am looking for as reliable, accurate, and quick a means as possible to add some htaccess code to block visits to a website from countries/IPs which are not in my whitelist of allowed countries. I have looked at https://www.ip2location.com/free/visitor-blocker which seems to offer a solution - for the 4 allowed countries I want to allow access, it has created a 4.1MB htaccess file! Will this mean slow access when someone attempts to view the site? I guess using a free service like this means the data is likely nowhere near comprehensive?
Does anyone have any suggestions on a good way to allow just visitors from a few countries access to a website?
It sounds like the service you used basically tried to brute force the blacklist. If you look into the htaccess file, I'm sure you will see a long list of hard-coded IP blocks.
In my opinion this is a terrible way to handle a geographic blacklist. To your original question - there is no "most reliable, most accurate, and quickest" method. Those are separate criteria and you will need to prioritize one over the others.
For performance you could consider blocking at the routing level / DNS server / proxy, though that obviously isn't the quickest to set up. There are Apache modules that let you compare the incoming IP address against a local database of known IP blocks for the blacklisted countries. One of the main issues with this is that you need to constantly update your database to take in new IP blocks.
In my opinion the "best" method is a simple redirect at the application layer using server-side code. There are several geolocation APIs where you can send in the IP or hostname and get back a country of origin. An example:
// Look up the visitor's country (substitute the actual IP or hostname for the placeholder).
$xml = new SimpleXMLElement(file_get_contents('http://www.freegeoip.net/xml/{IP_or_hostname}'));
if ($xml->CountryCode != "US") {
    // Visitor is not from a whitelisted country: send them elsewhere.
    header('Location: http://www.google.com');
    exit;
}
There are two ways to block a visitor at the web server. One is using a firewall (.htaccess etc.) and the other is server-side scripting (PHP etc.).
If you are concerned about the performance of the firewall option, then you can download the IP2Location LITE database from http://lite.ip2location.com and host the database on your local server. For every connection, you query the visitor's IP address and find their country, then redirect or block them using PHP code. Please find the complete steps in https://www.ip2location.com/tutorials/redirect-web-visitors-by-country-using-php-and-mysql-database
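A rough sketch of that local lookup, assuming the LITE data has been imported into a MySQL table with ip_from, ip_to, and country_code columns (check the linked tutorial for the exact schema; the allowed-country list is illustrative):
// Convert the dotted IPv4 address to an unsigned integer for the range lookup.
$ipNum = sprintf('%u', ip2long($_SERVER['REMOTE_ADDR']));
$stmt = $pdo->prepare('SELECT country_code FROM ip2location_db1 WHERE ip_from <= ? AND ip_to >= ? LIMIT 1');
$stmt->execute([$ipNum, $ipNum]);
if (!in_array($stmt->fetchColumn(), ['US', 'CA', 'GB', 'AU'])) {
    header('Location: http://www.example.com/not-available'); // or output a block page
    exit;
}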
There is also the option of using a remote geolocation API. However, we do not suggest this method because of network latency; the API queries will slow down the experience for every user.

Whitelisting Problems?

I have a huge issue that has to do with whitelisting. I have been doing C++ for about 6 months now and I can't seem to figure out how to pinpoint my targets to limit who can open and use my application with a whitelist.
For example, if the user is not on the whitelist, the program would tell them so when it loads. I would like to see this done with IDs: if a specific ID matches the whitelist, then that person can use my program.
I have tried approaches such as checking IPs, but that is vulnerable if the IP is changed. Also, multiple copies of the program could be opened under different IDs on the same IP, which I don't want.
Sorry if this is very confusing; I have just been STRUGGLING with this whitelist. I have less hair than I did before I started making it.
Thanks if you can help, tried to explain the best I could! :)
The general strategy is pretty simple.
First, specify what criteria a user should meet to be on the whitelist.
Second, specify how data about users on the whitelist will be stored.
Third, when the program starts, gather information about the user that can be compared against the criteria on the whitelist.
Fourth, when comparing data about the user with stored whitelist data, start by assuming the user is NOT on the whitelist and only permit access if a match is found. If there are multiple criteria, you need to decide how to combine them to find a match (e.g. restrict a user to a specific IP, allow a user only if using an IP in a range - which will prevent a user starting the program from home, etc etc)
Fifth, take steps to ensure your program can access the stored whitelist data, but users cannot modify it.
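The comparison logic itself is language-agnostic; here is a minimal default-deny sketch (in PHP for consistency with the other examples in this thread - the question is C++, and the criteria, field names, and storage are all illustrative):
// Grant access only when an explicit whitelist entry matches; deny by default.
function isWhitelisted(string $userId, string $ip, array $whitelist): bool {
    foreach ($whitelist as $entry) {
        if ($entry['user_id'] === $userId && $entry['ip'] === $ip) {
            return true;
        }
    }
    return false; // no match found: NOT on the whitelist
}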
There are many ways to target specific users, but first I need some extra information. How can you identify a single user? Does your program connect to a server? In that case, does the user provide an ID and a password, or is it an anonymous connection?

prevent brute force attack by blocking IP address

I'm building a site where users enter promo codes. There is no user authentication, but I want to prevent someone entering promo codes by brute force. I'm not allowed to use a captcha, so I was thinking of using an IP address blocking process. The site would block a user's IP address for X amount of time if they had X failed attempts at entering the promo code.
Are there any glaring issues in implementing something like this?
Blocking IP addresses is a bad idea because that IP address might be the address of a corporate http proxy server.
Most companies/institutions connect to the internet using a gateway. In such a case, the IP address you see is that of the gateway, and N users might be behind it. If you block this IP address because of nuisance caused by one user on that network, IP-based blocking will also make your site unavailable to the other N users. This is true wherever a bunch of computers are NATed behind a single router.
Scenario 2: What if, say, X users on that same network inadvertently provide an incorrect code within your limit of Y minutes? All users on that network again get blocked from entering any more codes.
You can use a cookie-based system, where you store the number of attempts in the past Y minutes in a cookie (or in a session variable on the server side) and validate it each time. However, this isn't foolproof either, as a user who knows your implementation can circumvent it.
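A minimal sketch of the session-variable variant (thresholds are illustrative, and as noted it only slows down attackers who keep their session cookie):
session_start();
$now = time();
// Keep only the attempts from the past Y minutes (10 minutes here).
$_SESSION['attempts'] = array_filter($_SESSION['attempts'] ?? [], fn($t) => $now - $t < 600);
if (count($_SESSION['attempts']) >= 5) { // X = 5 failed attempts
    http_response_code(429); // Too Many Requests
    exit('Too many attempts - please try again later.');
}
$_SESSION['attempts'][] = $now; // record this attempt, then validate the promo code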
If you're on IIS 7 there's actually an extension that helps you do precisely what you're talking about:
http://www.iis.net/download/DynamicIPRestrictions
This could save you from trying to implement this through code. As for any "glaring issues", this sort of thing is done all the time to prevent brute force attacks on web applications. I can't think of any reason why a real user would need to enter codes in the same manner as a computer issuing a brute force attack. Testing any and all possible user experiences would hopefully get you past any issues that might pop up.
- If it's not linked to a shop, DO NOT CONSIDER THIS -
Have you thought about placing a hidden tag on your orders? It's not 100% foolproof, but it will discourage some brute-forcers.
All you have to check is whether the hidden tag shows up with tons of promo codes; if so, you block the order.
I would still recommend you set up some kind of login.
I don't think there is one solution that will solve all your problems, but if you want to slow down a brute force attack, just adding a delay of a few hundred milliseconds to the page load will do a lot!
You could also force them to first visit the page where the code is entered: there you add a hidden field with a value and store the same value in the session, and when the user submits the code you compare the hidden field to the session value.
This way you force the attacker to make two requests instead of just one. You could also measure the time between those two requests, and if it's below a set threshold you can more or less guarantee it's a bot.
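A rough sketch of that two-request check (field and session names are illustrative):
// Request 1 - rendering the form:
session_start();
$_SESSION['form_token'] = bin2hex(random_bytes(16));
$_SESSION['form_time'] = time();
echo '<input type="hidden" name="form_token" value="' . $_SESSION['form_token'] . '">';

// Request 2 - validating the code:
session_start();
$tokenOk = hash_equals($_SESSION['form_token'] ?? '', $_POST['form_token'] ?? '');
$tooFast = (time() - ($_SESSION['form_time'] ?? 0)) < 2; // under ~2 seconds smells like a bot
if (!$tokenOk || $tooFast) {
    exit('Invalid request.');
}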

Multiple requests to server question

I have a DB with user accounts information.
I've scheduled a CRON job which updates the DB with new user data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? Should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance that wherever you are getting this data from blocks your access.
Edit
Okay, so if you have not got permission to do scraping, obviously you are going to want to do it responsibly (no matter the site). Try to gather as much data as you can from as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to be low load. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site.
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
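For instance, a polite fetch in PHP with cURL covering the gzip and identification points above (the URL and user-agent string are made up for illustration):
$ch = curl_init('https://example.com/data.json'); // prefer a JSON/RSS feed over scraping HTML
curl_setopt($ch, CURLOPT_ENCODING, '');           // empty string = accept any encoding cURL supports (gzip, deflate)
curl_setopt($ch, CURLOPT_USERAGENT, 'MySyncBot/1.0 (+https://example.com/about-this-bot)');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);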

How can I implement an IRC Server with 'owned' nicknames?

Recently, I've been reading up on the IRC protocol (RFCs 1459, 2810-2813), and I was thinking of implementing my own server.
I'm not necessarily looking into adhering religiously to the IRC protocol (I'm doing this for fun, after all), but one of the things I do like about it is that a network can consist of multiple servers transparently.
There are a number of things I don't like about the protocol or the IRC specification. The first is that nicknames aren't owned. While services like NickServ exist, they're not part of the official protocol. On the other hand, implementing something like NickServ properly kind of defeats the purpose of distribution (i.e. there'd be one place where NickServ is running, and one data store for it).
I was hoping there'd be a way to manage nicknames on a per-server basis. The problem with this is that if you have two servers that have some registered nicknames, and they then link up, you can have collisions.
Is there a way to avoid this, without using one central data store? That is: is it possible to keep the servers loosely connected (such that they each exist as an independent entity, but can also connect to one another) and maintain uniqueness amongst nicknames?
I realize this question is vague, but I can't think of a better way of wording it. I'm looking more for suggestions than I am for actual yes/no answers. So if anyone has any ideas as to how to accomplish nickname uniqueness in a network while still maintaining server independence, I'd be interested in hearing it. Note that adhering strictly to the IRC protocol isn't at all necessary; I've got no problem changing things to suit my purposes. :)
There's a simple solution if you don't care about strictly implementing an IRC server, but rather implementing a distributed message system that's like IRC, but not exactly IRC.
The simple solution is to use nicknames in the form "nick#host", much like email. So instead of merely being "mipadi", my nickname could be "mipadi#free-memorys-server.net". I register with just your server, but when your server links up with others to form a big ole' chat network, you can easily union all the usernames together. There might be a "mipadi" on otherserver.net, but then our nicknames become "mipadi#free-memorys-server.net" and "mipadi#otherserver.net", and everything is cool.
Of course, this deviates a good deal from IRC. :)
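A toy illustration of the scheme (the function name is made up):
// Qualify a locally registered nick with its home server, email-style.
function qualifyNick(string $nick, string $server): string {
    return $nick . '#' . $server;
}
qualifyNick('mipadi', 'free-memorys-server.net'); // "mipadi#free-memorys-server.net"
qualifyNick('mipadi', 'otherserver.net');         // "mipadi#otherserver.net" - no collision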
The servers have to be aware of each other; if not, you cannot prevent the sharing of nicknames. If they are, you simply need to transfer updates on the back-end. To prevent simultaneous registrations, you need a transaction system that blocks, requests permission from all the other servers, and responds.
To prevent simultaneous registrations during outages, you have no choice but to timestamp the registration and remove all but the last registered copy of the nick (or a random one, for truly simultaneous registrations).
It's not very pretty considering these servers aren't initially merged in the first place.
You could still implement nick ownership without a central instance, if your server instances trust each other.
When a user registers a nick, it is registered with the current server he's connected with
When a server receives a registration that it didn't know of, it forwards that information to all other servers that don't know it yet (might need a smart algorithm to avoid spamming the network)
When a server re-connects to another server then it tries to synchronize the list of registered nicks and which server handles which nick
If there is a collision during that sync, then the older registration is used, and the newer one is marked as invalid
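A rough sketch of that sync-time rule, assuming each server keeps a map of nick => ['server' => ..., 'registered_at' => timestamp] (all names illustrative):
// Merge a peer's registrations into ours; on collision the older registration wins.
function mergeNickMaps(array $local, array $remote): array {
    foreach ($remote as $nick => $reg) {
        if (!isset($local[$nick]) || $reg['registered_at'] < $local[$nick]['registered_at']) {
            $local[$nick] = $reg; // unknown nick, or the remote registration is older: take it
        }
        // otherwise keep ours; the remote copy is the newer one and gets marked invalid
    }
    return $local;
}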
If you can't trust your servers, then it'll get a lot harder, as a server could easily claim every username and even claim the oldest registration for each one.
Since you are trying to come up with something new, the idea that springs to mind is simply to include something unique about the server as part of the nickname when communicating outside of the server. So if you want to message a user on a different server, you might have something like user#server.
If you don't need them to be completely separate, you might want to consider creating some kind of multiple-master replicated database of accounts, where each server stores a complete copy of the account database, and each server can create new accounts which are replicated to the other servers when possible. You'll probably still have to deal with collisions on occasion, though.
While services like NickServ exist, they're not part of the official protocol.
Services are not part of the official protocol because they've nothing to do with the protocol. They're bots with permissions. There's no reason why you couldn't have one running on each server but it does make them harder to maintain.
If you were to go down that path, I would probably suggest the commonly used "multiple master" database replication technique. If one receives a write (in your case, a new user is created or updated, etc) it sends the data to all the other nodes. You'll have to be careful though. If one node is offline when the others get an update, it will need to know to resync on reconnection.
Another technique would be as above, but in reverse: data is only exchanged between nodes when it's needed. E.g. if a user tries to log in on a node that has no data for them, it queries the others and issues a move order to get all that user's data onto that one node. This is potentially less painful than the replication version, but there could be severe problems during netsplits if somebody signs up for a duplicate nick on a node disconnected from the pack.
One technique to nullify the problems of netsplits would be to make chat nodes and their bots netsplit-aware. When they're split, they probably shouldn't allow any write actions... but this could impact your network if you split a lot.
You've also got to ask how secure this might or might not be. IRC network nodes are distributed for performance, but they're not "secure". Because of this, service bots are usually run centrally to keep ultimate control over their running. If you distributed the bots and a remote node got hacked, the attacker would potentially have access to the whole user database (depending on the model).
