I'm working on an application that will be used from different locations, so it has to be on a network, and since the workstations that will use the application are quite far apart, it will be on the internet. It will definitely be on a dedicated Windows Server.
I have security concerns because it is the kind of application that black-hat hackers and crackers would like to abuse for their own ends.
So I'm thinking, since I am the I.T. head of the company, I can procure a static IP address for each workstation that will use the application and then compile a whitelist of IP addresses. If a request does not come from an IP address on the whitelist, the request will be denied. Does this make sense?
I could also use more security tips on securing the server and the application.
It's an ASP.NET MVC application.
Does this make sense?
At a network level? Somewhat. At an application level? Probably not.
IP filtering is something that makes sense at the network level: setting firewall rules to dictate which IPs are allowed to access certain ports on a server is both sensible and common.
Trying to do the same thing at the application layer is error-prone and problematic. For instance, if your application is behind a load balancer, the IP address your application sees may well be the one belonging to the load balancer, not the client that originated the request.
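If you do decide to enforce the whitelist in the application anyway, a minimal sketch of what that might look like in ASP.NET MVC is below. The addresses, the filter name, and the X-Forwarded-For handling are illustrative assumptions, not your actual setup, and the header itself can be forged unless a trusted proxy overwrites it:

```csharp
using System.Linq;
using System.Web;
using System.Web.Mvc;

// Illustrative whitelist filter. The immediate peer address behind a load balancer
// is the balancer itself, and X-Forwarded-For can be forged by the client unless a
// trusted proxy overwrites it -- which is exactly why this is fragile at this layer.
public class IpWhitelistAttribute : AuthorizeAttribute
{
    // Example addresses only; in practice these would come from configuration.
    private static readonly string[] Whitelist = { "203.0.113.10", "203.0.113.11" };

    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Address of the immediate peer (may be the load balancer, not the real client).
        string clientIp = httpContext.Request.UserHostAddress;

        // If a trusted reverse proxy appends the original client, it is the first entry.
        string forwardedFor = httpContext.Request.Headers["X-Forwarded-For"];
        if (!string.IsNullOrEmpty(forwardedFor))
            clientIp = forwardedFor.Split(',')[0].Trim();

        return Whitelist.Contains(clientIp);
    }
}
```

Controllers or actions could then be decorated with [IpWhitelist], but the firewall rule should remain the primary control; treat a filter like this as defence in depth at best.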
As an additional note, just because a request comes from a trusted IP doesn't mean you don't have to be careful. Your "trusted" client systems could be compromised, or an attacker could be using a CSRF attack.
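For the CSRF concern specifically, ASP.NET MVC ships with anti-forgery tokens; a minimal sketch (the controller and action here are hypothetical):

```csharp
using System.Web.Mvc;

public class AccountController : Controller
{
    // The corresponding form in the view renders @Html.AntiForgeryToken(), and
    // [ValidateAntiForgeryToken] rejects any POST that doesn't carry a matching
    // token -- so a forged request coming from a "trusted" IP still fails.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult ChangeEmail(string newEmail) // hypothetical action
    {
        // ... apply the change for the current user ...
        return RedirectToAction("Index");
    }
}
```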
I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server now serves HTTPS -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of serving sites over HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified, as in a man-in-the-middle attack, or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run just fine over plain HTTP on private networks. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080; can the site still be accessed directly at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. Even within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if the service on localhost:8080 requires authentication. All a rogue service would need to do is monitor the port and wait for a user to log in; then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or to use different hardware for things that are not related. This at least keeps things separate, so that the people responsible for application A don't assume that an unfamiliar service X must be something the team running application B is using. Anything out of place will be more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow good security practices on the server. For instance, don't run as root: use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the users to them, and then modify file ownership and permissions using chown and chmod to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to its logs; in theory it should only need write access to them. If this service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development purposes, not production. I've yet to find a good solution for localhost in production; I don't think one exists, and in my experience I've never seen anyone worry about it.
I've heard from more than one IT manager that they don't allow users to use RDP to connect to their internal network from the outside, because it's not safe. They claim that if they allowed their users to do so, then anyone from the outside would have access to their network as well.
I'm not getting it. In order to use RDP, you need a username and password, and you can't get in without them. The same is true for Gmail, online banking, and any other web service.
So what do they use instead? LogMeIn, or a VPN connection and then RDP internally. A VPN also requires a username and password.
If they're afraid of a brute-force attack, then someone can brute-force the VPN server or LogMeIn just the same. And if these other technologies have lockouts (after x failed attempts), then why can't the same be set up for RDP?
Similarly, people always say that a VPN is very secure because it uses a "tunnel". I don't fully understand what that means, but regardless, why can't the username and password be cracked the same way as on any website or web service that uses a username and password?
With proper configuration, RDP supports 128-bit RC4 encryption and can run over virtually any port or set of ports, and it has proven to be relatively bug-free, with only extremely minor flaws ever discovered.
On the other hand, the secure tunnel created by a VPN is far more secure than Remote Desktop. All your data is encrypted for safe transfer from one remote location to another.
Moreover, a VPN only allows shared content to be accessed remotely, which tightens security. If your device falls into the wrong hands, they won't be able to access or manipulate unshared data and resources.
The bottom line is that both RDP and VPN have their own advantages; however, with higher security, better performance, and better manageability, a VPN seems to be the clear winner in the Remote Desktop vs. VPN comparison.
My college has different proxies for accessing the Internet, like 192.168.0.2/3/4, each with a specific port number. What is the advantage of using this? I would also like to know what exactly happens there. I also heard that my institution has several ISP connections shared over the same network. What is the role of the proxy there?
It will be easy to understand once you know what proxies do and why they are generally used, which can be found on a magical website called www.google.com. By using a proxy, you get more control over the network because all requests go through it. Your school may want to do things like traffic shaping, content filtering, etc. Using the proxy server makes sure all requests to the internet are routed there first.
Proxies are good for a few things:
Filtering. By using a proxy, your college can filter out viruses, porn, Facebook or torrent downloads.
Logging. By requiring a username and password, the college can track what you do with your internet time, and can tell you off if you go somewhere you shouldn't, or help you by allowing them to do traffic shaping or other network maintenance.
Line Bonding. For example, if you have two ADSL lines of 5Mb, you can bond those to get a 10Mb line (normally this is done at the gateway stage, and not the proxy, but it is possible to do it at this stage of the network)
Failover. Again, this would normally be done at the gateway/router stage. This detects which lines are active and routes your traffic to those lines.
Network Connectivity. If your college is in turn part of a bigger academic network, this could allow crossing those network boundaries to get internet access.
Although those are valid possibilities, it's probably just for Filtering...
In the wider internet, proxies are in use for allowing access to blocked content - like giving China access to Google...
Am I correct that when using State Server, the traffic between my web site and the state server isn't encrypted? If it isn't, how can I secure it (SSL)?
The ASP.NET Session State server uses clear-text HTTP requests in a REST-like manner for communication. The actual protocol specification is publicly available as [MS-ASP]: ASP.NET State Server Protocol Specification.
I've never heard of anyone encrypting the state traffic, can't find any references to it, and nothing states that it's even possible.
It's impossible for any of us to say whether the traffic between your web site and state server is encrypted or not.
At a high level, state server uses clear text for transferring the data. But this doesn't necessarily mean it's not encrypted.
However, depending on how your network is set up, it might be encrypted at a lower layer by the operating system. Namely, if the machines are part of a domain, the network administrator might have turned on the proper settings to force Kerberos encryption between the machines.
Further, if you encrypt the data prior to putting it in "session" then it would obviously be encrypted.
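For example, here is a sketch of that approach, assuming ASP.NET 4.5+ and (for a web farm) a machineKey shared across the web servers so that ciphertext written by one server can be read by another; the helper class and purpose string are illustrative:

```csharp
using System.Text;
using System.Web;
using System.Web.Security;

public static class ProtectedSession
{
    // A purpose string ties the ciphertext to this use so it can't be replayed elsewhere.
    private const string Purpose = "SessionState";

    // Encrypt on the web server; the state server only ever stores ciphertext.
    public static void Set(HttpSessionStateBase session, string key, string value)
    {
        session[key] = MachineKey.Protect(Encoding.UTF8.GetBytes(value), Purpose);
    }

    public static string Get(HttpSessionStateBase session, string key)
    {
        var protectedBytes = session[key] as byte[];
        return protectedBytes == null
            ? null
            : Encoding.UTF8.GetString(MachineKey.Unprotect(protectedBytes, Purpose));
    }
}
```

The keys then never leave the web tier, so sniffing the state-server traffic only yields ciphertext.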
If you are worried about internal threats, then your network should be configured to encrypt all traffic between machines. (If you want to know how, go to serverfault.com.)
The state server should be behind the firewall and not public, so there should be no reason to encrypt the traffic. You would only want to make sure, via network layering, that traffic can only flow between the web tier and the state server.
I have a web application that should behave differently for internal users than external ones. The web application is available over the Internet, and therefore obviously to the internal users as well.
All the users are anonymous, not authenticated, but the page should render differently for internal users than for external ones. What I'm doing in my code is using Request.UserHostName and then Dns.GetHostEntry. The result is then compared to a setting in my web.config (that holds something like *.mydomain.local). If the comparison gives a positive result, I render the HTML that the internal user should see; otherwise I render the HTML the external user should see.
However, my problem is that I don't always get the expected value from Request.UserHostName. On the development site I get the IP address (?) of the machine running the browser, but on the customer site I don't get the IP address of the user's machine; I get some other IP address. The browsers don't have any proxies set or anything like that.
Should I be using something other than Request.UserHostName?
I recommend using IP addresses as well. I'm dealing with this exact same situation right now while setting up an authentication system, and the conditions described by Epso and Robin M are exactly what is happening. External users coming to the site give me their actual IP address, while all internal users present the IP of the gateway machine (router) on the private subnet the web servers sit on.
To deal with it, I just check for that one IP. If I get the IP of the gateway, I provide internal access. If I get anything else, they get the external version, which in my case requires additional authentication; in yours, it would just mean a different interface.
Try Request.UserHostAddress, which returns the client's IP address. Assuming your internal network uses IP addresses reserved for LANs, it should be relatively simple to check if an IP is internal or external.
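A sketch of that check (the gateway address and the choice of ranges are assumptions you would adjust to your own network):

```csharp
using System.Net;
using System.Web;

public static class InternalClientCheck
{
    // Hypothetical gateway address internal users appear to come from; adjust to your network.
    private static readonly IPAddress InternalGateway = IPAddress.Parse("203.0.113.1");

    public static bool IsInternal(HttpRequestBase request)
    {
        IPAddress ip;
        if (!IPAddress.TryParse(request.UserHostAddress, out ip))
            return false;

        if (IPAddress.IsLoopback(ip) || ip.Equals(InternalGateway))
            return true;

        // RFC 1918 private IPv4 ranges.
        byte[] b = ip.GetAddressBytes();
        if (b.Length != 4)
            return false; // treat other IPv6 addresses as external in this sketch

        return b[0] == 10
            || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)
            || (b[0] == 192 && b[1] == 168);
    }
}
```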
There might be a firewall doing some sort of NAT to enable inside clients to use the external DNS name to reach the server.
Is the IP address you get on the customer site the same as the customer's external server IP? In that case you can hard-code that one IP address. All internal computers behind that firewall will appear to have the same IP address, and you can classify them as "internal".
It looks like you're being returned a public-facing IP address. Get the user to go to http://www.myipaddress.com. If this is the same as the IP address returned to your software, then this is definitely the case.
The only solution I can see to get around this is to either get them to connect to the machine holding the asp.net application via a VPN, or to use some other kind of authentication. The latter is probably the best option.
It does sound like there is a proxy between users and the server on the customer site (it doesn't need to be configured in the browser). It may be an internal or external proxy depending on your network configuration.
I would avoid using the UserHostName for what is effectively authentication, as it is presented by the browser during the request and would be easy to spoof. The IP address would be much more effective, as it's difficult to spoof an IP address in a TCP/IP connection (and maintain the connection). It's still weak authentication, but may be sufficient in this scenario.
Even if you are using IP address, if there's a NAT proxy between client and server, you may have to accept that anything coming through that proxy is trusted (I'm assuming that external/untrusted clients don't come through that proxy).
If that isn't acceptable, you're back to other methods of authentication. Rather than requiring a logon or VPN connection, you might consider a permanent cookie or client certificates and only give those to internal clients, but you would need some way of delivering them. You could certainly deliver a permanent cookie based on a one-time logon. Cookies can be spoofed in a similar way to the UserHostName; however, you have a better opportunity to create a cookie value that is less guessable than a domain name.
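A sketch of the permanent-cookie approach (the cookie name, token, and issuing action are all hypothetical; the token should be a long random value, ideally recorded server-side rather than hard-coded):

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class InternalAccessController : Controller
{
    private const string CookieName = "internal-access";                      // hypothetical name
    private const string ExpectedToken = "replace-with-a-long-random-secret"; // issued only to internal clients

    // One-time issuing endpoint, e.g. reachable only from the internal network
    // or after a one-time logon.
    public ActionResult Grant()
    {
        var cookie = new HttpCookie(CookieName, ExpectedToken)
        {
            Expires = DateTime.UtcNow.AddYears(1), // "permanent" cookie
            HttpOnly = true,
            Secure = true                          // only send over HTTPS
        };
        Response.Cookies.Add(cookie);
        return Content("Internal access cookie set.");
    }

    // Called by the rendering code to decide which HTML variant to serve.
    public static bool IsInternal(HttpRequestBase request)
    {
        var cookie = request.Cookies[CookieName];
        return cookie != null && cookie.Value == ExpectedToken;
    }
}
```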