As an additional measure for securing a web application, I'm considering implementing client IP whitelisting.
The preferred way seems to be: do this at the router. However, this is a significant administrative burden in my scenario.
I would like to do this in software, on the web server. Is there a reason why this is less secure?
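For concreteness, here is a minimal sketch of the kind of in-software check I mean, in ASP.NET (the allowed addresses are documentation-range placeholders, and this is only an illustration, not a finished implementation):

    using System;
    using System.Collections.Generic;
    using System.Web;

    public class Global : HttpApplication
    {
        // Placeholder whitelist -- documentation-range addresses, not real ones.
        private static readonly HashSet<string> AllowedIps = new HashSet<string>
        {
            "203.0.113.10",
            "203.0.113.11"
        };

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string clientIp = Request.UserHostAddress;
            if (clientIp == null || !AllowedIps.Contains(clientIp))
            {
                Response.StatusCode = 403;  // Forbidden
                CompleteRequest();          // skip the rest of the pipeline
            }
        }
    }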
The HttpRequest.UserHostAddress is taken from the source IP address of the HTTP request that the end user sends to your server. An HTTP request is carried in one or more IP packets, and the source IP address is a field in each packet. Anyone on the net may craft IP packets with any address in the source field and send them to you.
However, the usefulness of this is somewhat limited. When you respond to an HTTP request, the response is sent to the source IP address in the request. The attacker will not receive the response unless he is able to intercept it on its way to that address. As an example: if the attacker sends you a login request with a username and password, you probably respond with a cookie. But since the cookie is sent to the fake source IP, the attacker will never see it.
IP spoofing is not technically difficult, but since the attacker will not receive the response, it is mostly used for attacks that can be carried out with a single request.
Routers and firewalls may also protect you against malicious IP packets with forged source addresses. Most firewalls will, for example, block packets arriving from the external net that claim a source IP from the internal net.
Client IP whitelisting at the router level is done to keep traffic out of a network because you don't want hosts communicating with your servers if they aren't from the right neighborhood. This protects the servers from any number of OS-level attacks that only require access to the network stack.
What you are effectively talking about is using IP whitelisting as another factor for authentication on your server. It will not help you against an attacker with the right exploits, because unauthorized clients can still reach your server at the network-stack level.
Both methods are susceptible to IP spoofing, and guarding against that is the responsibility of your network team. On a properly secured network you don't need to worry too much about malicious spoofing.
So, from a security standpoint it doesn't really hurt to use client IP whitelisting, but in the end, you're probably wasting your time maintaining the ACL. If you want to control which hosts can connect to your application to limit your security profile, a firewall or at least a router access list is the way to go.
Edit: An OS-level firewall is also a choice you should consider if routing & switching changes are too cumbersome for your situation.
In response to comment: A firewall whitelist would be more secure because it would be able to protect from OS-level attacks.
If you are not concerned with OS-level attacks then that part is not significant, and the end answer is yes, spoofing attacks are possible, and no, there is nothing you can do about it from within IIS or ASP.NET.
So I would still conclude that it is probably a waste of your time and the time of users who need to wait to be added to the whitelist. But from a security standpoint it probably won't hurt you and might keep some attackers from profiling your server as a soft target.
Related
I am wondering if my network is vulnerable to (D)DoS attacks even though I have no ports open. Currently, I am thinking that you should be able to throw random malformed packets at my public IP, forcing the firewall to sort through all of them. If I sent enough packets, would this be a decent way to take down a whole network?
Is it safe to restrict access to a site by IP?
I know there is something called "IP spoofing" - does this mean that (under some conditions) IP restriction is not accurate?
If a client forges its source IP address, it will be very difficult to establish a TCP connection because, as @cdhowie noted in a comment below, the client would need to ACK the server's SYN+ACK, which it will never receive.
Spoofed IP addresses are mostly dangerous for denial of service attacks, as the attacker would not care about receiving responses to the attack packets, and they would be much more difficult to filter since each spoofed packet appears to come from a different address.
Not really. First, you would need to restrict all proxies, too, to be effective. More importantly, you may block legitimate users like this. It can be a quick-fix for some chronic issues, but in general it's not as effective as it seems.
IP spoofing is mostly feasible on a LAN. In my opinion, restricting access to a site per IP is not a reliable control. I would rather consider applying certificates or other auth methods.
Here is an example. Read some theory here
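For illustration only (this is a sketch, not the linked example), checking for a client certificate in ASP.NET might look like this, assuming IIS is already configured to negotiate client certificates:

    using System.Web;

    public static class ClientCertCheck
    {
        // Sketch: relies on IIS being set up to request/require client certificates.
        public static bool HasValidClientCertificate(HttpRequest request)
        {
            HttpClientCertificate cert = request.ClientCertificate;
            return cert != null && cert.IsPresent && cert.IsValid;
        }
    }

Combined with per-client certificates, this identifies the caller far more reliably than a source IP does.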
A broad question.
We have a client application that currently talks to a web service to exchange data between two clients. The first client stores data on the service and other clients poll the service to collect it at some later time.
We are looking to change this infrastructure a little in that clients will Connect() to the service supplying the IP and Port that they will 'speak' on. When client A wishes to send data to client B it will look up client B's IP and port on the service and then send the data directly to that IP and port.
In the situation where several clients exist behind a common firewall (and NAT), my gut instinct is that I would need to configure port forwarding for each client so that inbound messages at the public IP (i.e. the public side of the firewall) can be routed to the appropriate client. As our application is designed to shield 'techy' details, we'd like to avoid this if at all possible. One caveat is that we are quite happy for the client to have to open a port on the firewall, but we want to avoid the extra step of setting up port forwarding.
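To make the intended flow concrete, here is a rough sketch; RegistryClient and its Register/Lookup methods are hypothetical stand-ins for our existing web service calls:

    using System.Net;
    using System.Net.Sockets;

    public class RegistryClient
    {
        // Hypothetical wrappers around the existing web service.
        public void Register(string clientId, IPEndPoint listenEndpoint) { /* call the service */ }
        public IPEndPoint Lookup(string clientId) { /* query the service */ return null; }
    }

    public class PeerSender
    {
        private readonly RegistryClient registry = new RegistryClient();

        // Client A sends data directly to client B's registered endpoint.
        // This only works if B's endpoint is actually reachable, i.e. the
        // NAT/firewall forwards the port -- which is exactly the issue above.
        public void SendTo(string peerId, byte[] data)
        {
            IPEndPoint peer = registry.Lookup(peerId);
            using (var client = new TcpClient())
            {
                client.Connect(peer);
                client.GetStream().Write(data, 0, data.Length);
            }
        }
    }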
Hope this makes sense, and please feel free to ask for any clarification.
-- Edit --
We are aware of UPnP, but it is a non-starter for us because it is not available on some routers and some corporate environments don't allow it.
Thanks,
Simon
Most home routers provide a UPnP interface to allow applications to set up port forwarding without requiring the user to do anything. Depending on the router model, the user may need to enable UPnP on the router first, usually via a checkbox in some buried config screen.
Scenario: 2 network devices, each on separate private LANs. The LANs are connected by public Internet.
Device on network A is listening on a socket; network A's firewall has a NAT port forward set up for external access from network B's port range.
Device on network B makes outgoing connection to network A to connect to the listen socket.
Is there any difference in vulnerability between a short-term connection made for a data transfer and dropped when complete (e.g. a few seconds), and a persistent connection which employs a keep-alive mechanism and reconnects when dropped (hours, days, ...)?
The security of actually making the connection is not part of my question.
the client will maintain a persistent connection to server
No such thing exists.
Each connection -- no matter how long it's supposed to last -- will eventually get disconnected. It may be seconds before the disconnect or centuries, but it will eventually get disconnected. Nothing is "persistent" in the sense of perpetually on.
There is no such thing as a "keep-alive mechanism". It will get disconnected.
"Assume the server authenticates the client upon connection". Can't assume that. That's the vulnerability. Unless you have a secure socket layer (SSL) to assure that the TCP/IP traffic itself is secure. If you're going to use SSL, why mess around with "keep-alive"?
When it gets disconnected, how does it get connected again? And how do you trust the connection?
Scenario One: Denial of Service.
Bad Guys are probing your socket waiting for it to accept a connection.
Your "persistent" connection goes down. (Either client crashed or you crashed or network routing infrastructure crashed. Doesn't matter. Socket dead. Must reconnect.)
Bad Guys get your listening socket first. They spoof their IP address and you think they're the client. They're in -- masquerading as the client.
The client host attempts their connection and you reject it saying they're already connected.
Indeed, this is the exact reason why folks invented and use SSL.
Based on this, you can dream up a DNS-enabled scenario that will allow Bad Guys to (a) get connected and then (b) adjust a DNS entry to make them receive connections intended for you. Now they're in the middle. Ideally, DNS security foils this, but it depends on the client's configuration. They could be open to DNS hacks, who knows?
The point is this.
Don't Count On A Persistent Connection
It doesn't exist. Everything gets disconnected and reconnected. That's why we have SSL.
The client can simply reconnect, the server must respond to the user request with the appropriate error.
False. The client cannot "simply" reconnect. Anyone can connect. Indeed, you have to assume "everyone" is trying to connect and will beat the approved client.
To be sure it's the approved client you have to exchange credentials. Essentially implementing SSL. Don't implement your own SSL. Use existing SSL.
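A minimal sketch of what "use existing SSL" looks like from a .NET client on each reconnect (the host name and port are placeholders):

    using System.Net.Security;
    using System.Net.Sockets;

    public static class SecureChannel
    {
        // Placeholder endpoint; in practice this comes from configuration.
        public static SslStream Connect(string host = "server.example.com", int port = 443)
        {
            var tcp = new TcpClient(host, port);

            // SslStream validates the server's certificate chain by default,
            // so a reconnect is only trusted if the server proves its identity.
            var ssl = new SslStream(tcp.GetStream(), leaveInnerStreamOpen: false);
            ssl.AuthenticateAsClient(host);
            return ssl;
        }
    }

For the "approved client" half of the problem, AuthenticateAsClient has overloads that present a client certificate, so the server can verify who is reconnecting instead of guessing from the socket.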
would they have to break into a switch site?
Only in the movies. In the real world, we use packet sniffers.
Am I able to depend on a requestor's IP coming through on all web requests?
I have an asp.net application and I'd like to use the IP to identify unauthenticated visitors. I don't really care if the IP is unique as long as there is something there so that I don't get an empty value.
If not I guess I would have to handle the case where the value is empty.
Or is there a better identifier than IP?
You can get this from Request.ServerVariables["REMOTE_ADDR"].
It doesn't hurt to be defensive. If you're worried about some horrible error condition where this isn't set, check for that case and deal with it accordingly.
There could be many reasons for this value not to be useful. You may only get the address of the last hop, like a load balancer or SSL decoder on the local network. It might be an ISP proxy, or some company NAT firewall.
On that note, some proxies may provide the IP for which they're forwarding traffic in an additional HTTP header, accessible via
Request.ServerVariables["HTTP_X_FORWARDED_FOR"]. You might want to check this first, then fall back to Request.ServerVariables["REMOTE_ADDR"] or Request.UserHostAddress.
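A minimal sketch of that fallback; bear in mind that X-Forwarded-For is set by whatever sits in front of you (or by the client itself), can be a comma-separated list, and should not be trusted for security decisions:

    using System.Web;

    public static class ClientAddress
    {
        // Prefer the proxy-supplied header when present, otherwise REMOTE_ADDR.
        public static string GetBestGuess(HttpRequest request)
        {
            string forwarded = request.ServerVariables["HTTP_X_FORWARDED_FOR"];
            if (!string.IsNullOrEmpty(forwarded))
            {
                // May contain "client, proxy1, proxy2"; take the first entry.
                return forwarded.Split(',')[0].Trim();
            }
            return request.ServerVariables["REMOTE_ADDR"]; // same as request.UserHostAddress
        }
    }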
It's certainly not a bad idea to log these things for reference/auditing.
I believe that this value is set by your web server, and there is really no way to fake it, as the response to their request wouldn't be able to get back to them if they set their IP to something else.
The only thing that you should worry about is proxies. Everyone coming through the same proxy will appear to have the same IP.
You'll always get an IP address, unless your web server is listening on some sort of network that is not an IP network. But the IP address won't necessarily be unique per user.
Well, a web request is an HTTP connection, which is a TCP connection, and all TCP connections have two endpoints. So an IP address always exists. But that's about as much as you know about it: it's neither unique nor reliably accurate (with all the proxies and such).
Yes, every request must have an IP address, but as stated above, some ISPs use proxies, NAT, or gateways, which may not give you the address of the individual's computer.
You can easily get this IP in C# with:

    string ip = Context.Request.ServerVariables["REMOTE_ADDR"]; // same value as Request.UserHostAddress

or in classic ASP/VBScript with:

    IP = Request.ServerVariables("REMOTE_ADDR")
An IP address is not much use for identifying users. As mentioned already, corporate proxies and other private networks can appear as a single IP address.
How are you authenticating users? Typically you would have them log in and then store that state in their session in your app.
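For example, a sketch of storing that logged-in state in session after a successful login (ValidateCredentials is a hypothetical stand-in for whatever check your app already performs):

    using System.Web.SessionState;

    public static class LoginState
    {
        public static bool SignIn(HttpSessionState session, string user, string password)
        {
            if (!ValidateCredentials(user, password))
                return false;

            session["UserName"] = user;   // identifies the user on subsequent requests
            return true;
        }

        public static bool IsSignedIn(HttpSessionState session)
        {
            return session["UserName"] != null;
        }

        // Hypothetical placeholder; substitute your real credential check.
        private static bool ValidateCredentials(string user, string password)
        {
            return false;
        }
    }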