We have a web server in the DMZ that is open to the Internet (of course) and has RDP connectivity to our internal network through an internal firewall.
Our web developers (who use many different tools, including Visual Studio) need to be able to 'publish' content changes and new projects to particular folders on the web server. Publishing requires mapping a drive to the server being published to.
The problem is that our network team refuses to open up NTFS access to the server from the internal network. I somewhat agree with them: as far as I know, there is no way to limit NTFS access by port number; it simply doesn't exist as an option.
So our question becomes: other companies must have the same need to secure traffic between the web server in the DMZ and the internal network. How does one allow mapped drives to a web server in a DMZ without opening up the web server completely?
Thanks
CIFS normally runs on one of several ports: TCP 445, 137, 139; UDP 137, 138. Your firewall team ought to be able to poke holes through for these specific ports, limited to the specific internal hosts that should have the privilege of updating the live web server.
If it is a "real" DMZ with separate firewalls on both sides of the hosts, it should be easy to modify both firewalls and the web server's host firewall to allow this access. If the DMZ is "faked" a bit with a single firewall, it is still possible to allow the access from only internal hosts on both the firewall and the server's host firewall, but I could understand the reluctance to trust so much to a single firewall.
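As a hedged sketch, assuming the web server runs Windows and that 10.0.1.50 stands in for an internal publishing workstation, the host-firewall side of that might look like the following (a matching rule is also needed on the internal firewall, and for modern SMB, TCP 445 alone is often enough):

```
:: Sketch only: allow SMB/CIFS (TCP 445) inbound on the DMZ web server,
:: restricted to a single internal publishing host (placeholder address).
netsh advfirewall firewall add rule name="SMB from publishing workstation" ^
    dir=in action=allow protocol=TCP localport=445 remoteip=10.0.1.50
```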
I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx configuration, so my Nginx server now serves HTTPS -- but the original servers behind it still use HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of serving sites over HTTPS is to prevent data transmitted between two endpoints from being intercepted by outside parties, whether to modify it in transit (a man-in-the-middle attack) or to steal it for malicious purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great, and many services run over plain HTTP on private networks just fine. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080 -- can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any port you don't intend to accept traffic on. You can always unblock a port later if you add a service that requires it.
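For example, a minimal sketch assuming a Linux host managed with ufw and a backend listening on 8080 (adjust the ports to your own services):

```
# Default-deny inbound, then open only what should be reachable from outside.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # SSH for administration
sudo ufw allow 80/tcp     # optional: keep 80 open only to redirect to HTTPS
sudo ufw allow 443/tcp    # NGINX reverse proxy
sudo ufw enable
# Port 8080 is never opened, so http://example.com:8080 is unreachable from
# outside, while NGINX can still proxy to localhost:8080 internally.
```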
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. Even though it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; then it can capture everything that passes between the two endpoints.
One strategy to mitigate the dangers created by spyware is either to use virtualisation to separate a single server into logical servers, or to use different hardware for things that are not related. This at least keeps things separate, so that the people responsible for application A don't assume an unfamiliar service X is just something the team running application B happens to use. Anything out of place is more likely to stand out.
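To illustrate how little effort that takes, here is a sketch assuming a Linux server and a backend on port 8080: any sufficiently privileged local process can read the plaintext exchange.

```
# Print the plaintext HTTP traffic between NGINX and the backend on the loopback interface.
sudo tcpdump -i lo -A 'tcp port 8080'
```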
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we keep a server's setup and configuration by limiting that server's job, the more easily we can keep tabs on what's happening on it and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't run as root: use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With a specific user for each service, we can create groups, assign the users to them, and then adjust file ownership and permissions with chown and chmod so that each service has access only to what it needs and nothing more. As an example, I've often wondered why NGINX needs read access to its logs; in theory it should only need write access to them. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, while an attacker would find their hands tied when it comes to retrieving sensitive information from them.
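As a rough sketch of that idea (the user, group, and paths below are invented for illustration and will differ per distribution and service):

```
# Create a dedicated, non-login system user for one service and lock its files down.
sudo useradd --system --no-create-home --shell /usr/sbin/nologin appsvc
sudo chown -R appsvc:appsvc /srv/app
sudo chmod -R o-rwx /srv/app    # "other" users get no access at all
sudo chmod -R g-w  /srv/app     # group members may read but not modify
```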
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is a self-signed certificate. The other uses a tool called mkcert, which lets you act as your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is recommended only for development, not production. I've yet to find a good solution for localhost in production; I don't think one exists, and in my experience I've never seen anyone worry about it.
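For development use, a typical mkcert session looks roughly like this (output file names may vary slightly by version):

```
# Create a local CA and add it to the system/browser trust stores (development machines only).
mkcert -install
# Issue a certificate for localhost; this produces e.g. localhost+2.pem and localhost+2-key.pem,
# which can then be referenced from the local server's TLS configuration.
mkcert localhost 127.0.0.1 ::1
```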
Assuming a Windows Server 2012 VPS:
It seems that many tutorials include setting up the DNS Server role (creating forward lookup zones and A records) as part of the basic steps to deploy and run an ASP.NET web application on IIS.
I'm slightly confused, because within IIS Manager you can set the bindings (IP address, URL, SSL, port) of a web application. Wouldn't this alone suffice to correctly route incoming requests to the correct web application?
What would be the advantage to running DNS Server?
IIS Manager can only manage IIS-related Windows settings, but making a site work requires more than that.
DNS settings are critical for directing web browsers to your site. Nobody uses IP addresses to access a site, so a typical URL uses a domain name. That requires DNS to translate the domain name to an IP address so that browsers can send HTTP packets to the proper location.
IIS Manager cannot manage that for you, as which DNS product to use and how to configure it is usually vendor-specific and outside IIS's scope.
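For example, on Windows Server 2012 with the DNS Server role installed, the record that performs that translation can be created from PowerShell. The zone name and address below are placeholders, and in practice the zone often lives with your public DNS provider rather than on the web server itself:

```
# Create an A record so that www.example.com resolves to the server's public IP.
Add-DnsServerResourceRecordA -ZoneName "example.com" -Name "www" -IPv4Address "203.0.113.10"
```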
I'm working on an application that will be used from several locations, so it has to be on a network, and since the workstations that will use it are far apart, it will be on the Internet, definitely on a dedicated Windows Server.
I have security concerns because it is the kind of application that black-hat hackers and crackers would like to abuse for their own ends.
So I'm thinking that (since I am the IT head of the company) I can procure a static IP address for each workstation that will use the application and then compile a whitelist of IP addresses. If a request does not come from an IP address on the whitelist, it will be denied. Does this make sense?
I could also use more tips on securing the server and the application.
It's an ASP.NET MVC application.
Does this make sense?
At a network level? Somewhat. At an application level? Probably not.
IP filtering makes sense at the network level: setting firewall rules that dictate which IPs are allowed to access certain ports on a server is both sensible and common.
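As a hedged sketch of that network-level approach on a dedicated Windows Server (the addresses below are placeholders for the workstations' static IPs):

```
:: With inbound traffic blocked by default, a single allow rule limits the web port
:: to the whitelisted workstation addresses.
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound
netsh advfirewall firewall add rule name="App access from branch offices" ^
    dir=in action=allow protocol=TCP localport=443 remoteip=198.51.100.10,198.51.100.11
```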
Trying to do the same thing at the application layer is error-prone and problematic. For instance, if your application sits behind a load balancer, the IP address your application sees may well belong to the load balancer rather than the client that originated the request.
As an additional note, just because a request comes from a trusted IP doesn't mean you can stop being careful. Your "trusted" client systems could be compromised, or an attacker could mount a CSRF attack.
I'm running a web application on a GlassFish 3 server. The application should not be accessible to just anyone; instead I want to limit access to a handful of static IP addresses. Blocking all communication via a firewall is not an option, since the server hosts other web services too.
Given this background, my question would be:
How can I tell GlassFish to respond only to requests from a given set of IP addresses?
Your help is highly appreciated!
IP-based security is not very robust or, frankly, secure (think network topology changes, IP spoofing), but it should be possible to:
create a virtual server
configure the application to be available on that virtual server only
define allowRemoteHost/denyRemoteHost properties at the virtual server level
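A minimal sketch of those steps with asadmin follows; the virtual-server name, host name, application name, addresses, and exact property names are placeholders to verify against the GlassFish 3 documentation:

```
# Create a dedicated virtual server and deploy the restricted application to it only.
asadmin create-virtual-server --hosts restricted.example.com restricted-vs
asadmin deploy --virtualservers restricted-vs myapp.war
# Whitelist permitted clients at the virtual-server level. Depending on the GlassFish
# version, the IP-oriented variants (allowRemoteAddress/denyRemoteAddress) may fit raw
# IP addresses better than the host-name-based properties.
asadmin set server.http-service.virtual-server.restricted-vs.property.allowRemoteAddress="192\.0\.2\.10|192\.0\.2\.11"
```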
A better alternative would be to move to certificates.
You can always write a servlet filter that returns 404 (or whatever you prefer) for requests from non-whitelisted IPs. Note that IPs can be spoofed.
I have 50 machines on a LAN, each with Internet access. Can a program be developed in VC++ that will report which websites are being opened by users on each machine?
You can accomplish this by writing an application that captures packets outbound on port 80 (along with the associated DNS information). The problem is that this application must run on every client computer you want to trace. The easier method, as others have stated, is to take advantage of your network architecture and route all traffic through a central proxy, which can record the same information.
There are many enterprise tools suited to exactly this task in the latter case.
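As a rough illustration of what the capture approach records, here is the equivalent with tcpdump on a Linux capture point or a mirrored switch port; a VC++ program would gather the same data with a packet-capture library such as WinPcap:

```
# Observe outbound HTTP requests and DNS lookups leaving a machine (interface name assumed).
sudo tcpdump -i eth0 -n 'tcp dst port 80 or udp dst port 53'
```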
Route your Internet traffic through a centralized proxy and monitor the traffic from the proxy, using Fiddler or something similar. If proxying is not possible, use Fiddler to log data at a known location on each machine and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.