Multiple certificates for HTTPS on a software NLB'd IIS7 cluster

We're currently trying to set up HTTPS with multiple certificates. We've had some limited success, but we're getting some results I can't make any sense of...
Basically we have two servers in our NLB (10.0.51.51 and 10.0.51.52) and two IPs assigned to the NLB (10.0.51.2 and 10.0.51.4), and we have IIS listening on both of these IPs with different wildcard certificates (to avoid giving out public IPs, let's say A:443 routes to 10.0.51.2:443 and B:443 routes to 10.0.51.4:443). We also have a Cisco router using port address translation to route port 443 from two external IPs to these internal NLB IPs.
The weird thing is, this works if we request A:443 or B:443, but if you go internally to 10.0.51.51:443, 10.0.51.52:443, 10.0.51.2:443 or 10.0.51.4:443 you ALWAYS get the same SSL cert. This cert was in the past assigned to *:443, but we've made sure there are no * bindings defined in IIS anymore.
When I run "netsh http show sslcert" and trim out all the irrelevant entries, I get:
IP:port : 0.0.0.0:443
Certificate Hash : <Removed: Cert 1>
IP:port : 10.0.51.2:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.3:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.4:443
Certificate Hash : <Removed: Cert 2>
Which tells me that the * binding is still in there, which is a bit weird, but I can't see why that would prevent the others from working (or, even more strangely, why the request through the router would work).
It's got me wondering whether it's actually treating the requests as arriving on the machine's IP rather than the NLB IP, but unfortunately our dev environment is only a single server, which rather limits the amount of trial and error I can apply (since all I can test on is the live environment) without convincing management to buy more servers for the test environment - which is something I'm trying to do.
Does anyone have any idea:
Why there's a difference between internal and through the router?
Why the internal request is getting the wrong cert?
How I can remedy this so that we get the same behavior on both sides?

I ended up tracking the problem down. Leaving this as a hint for anyone else who falls into the same trap...
The problem was caused by us using a shared configuration model on our IIS servers. When you set up an HTTPS binding through the IIS UI, it appears to only actually bind the certificate on the box you're managing it from (leaving the other server completely unbound). Since our * binding still existed, it was catching the requests on the server we hadn't configured through the UI and had just let pick up the shared config.
Crazy bad luck with single-affinity NLB then sent us down the garden path of blaming the router, because it made our internal requests go to one server and our external requests to another.
We ended up finding this by running "netsh http show sslcert > certs.txt" on both servers and diff'ing the outputs.
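If it helps anyone doing the same comparison: copy the two outputs onto one machine and a plain file compare will show the mismatch. The file names below are only illustrative.
rem on each server in the NLB:
netsh http show sslcert > %COMPUTERNAME%-certs.txt
rem copy both files to one box, then:
fc server1-certs.txt server2-certs.txt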
Going forwards, our plan is to no longer use the IIS UI for SSL configuration, instead following the steps below:
Install the certificates on each server.
Run a command-line binding of the SSL port: "netsh http add sslcert ipport=?:? certhash=? appid=?" (ip:port is easy to work out, certhash can be copied from the "Certificate Hash" column of the server certificates page, and appid can be copied from an existing IIS binding in the "netsh http show sslcert" output). A worked example follows below the steps.
Edit the IIS ApplicationHost.config file directly to add the bindings without the UI being involved.
Our understanding is this will prevent a repeat of this error.
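For anyone wanting a concrete starting point, here is a minimal sketch of steps 2 and 3 for one of the NLB IPs. The thumbprint, appid GUID and site name are placeholders, not values from our setup:
rem bind the certificate to the IP:port in HTTP.sys, then verify it
netsh http add sslcert ipport=10.0.51.2:443 certhash=aabbccddeeff00112233445566778899aabbccdd appid={00112233-4455-6677-8899-aabbccddeeff}
netsh http show sslcert ipport=10.0.51.2:443
rem add the matching IIS binding; appcmd writes it into ApplicationHost.config,
rem which has the same effect as editing the file by hand
%windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='https',bindingInformation='10.0.51.2:443:']
The netsh binding lives in HTTP.sys on each box, so run it on every server in the NLB; the ApplicationHost.config binding only needs adding once if shared configuration is propagating it.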

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified, as in a man-in-the-middle attack, or to be stolen and misused. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run on plain HTTP on private networks just fine. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080, and the NGINX reverse proxy forwards certain traffic to localhost:8080 - can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
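As a hedged illustration of that firewall point (assuming a Linux host running the NGINX proxy and the example backend port 8080 above; adapt to whatever firewall you actually use):
# allow only what should be public; allow SSH before enabling or you'll lock yourself out
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 8080/tcp
sudo ufw enable
An even simpler safeguard is to have the backend listen only on 127.0.0.1, so it is unreachable from outside regardless of the firewall rules.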
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authorisation is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate so that the people responsible for application A don't think that service X might be something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Use good security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. For any long-lived services, don't run them as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the different users to them, and then modify file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs; in theory it should only need write access to them. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, and an attacker would find their hands tied when it came to retrieving sensitive information from them.
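A rough sketch of that idea, assuming a Linux box with a hypothetical service account and data directory (names are illustrative, not a drop-in hardening recipe):
# dedicated system account for one backend service
sudo useradd --system --no-create-home --shell /usr/sbin/nologin appsvc
# the service owns its own data and nobody else can read it
sudo chown -R appsvc:appsvc /srv/app
sudo chmod -R o-rwx /srv/app
Repeat with a different account per service, and a compromised service can only reach its own files.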
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development purposes, not production. I've yet to find a good solution for localhost in production; I don't think one exists, and in my experience I've never seen anyone worry about it.
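For completeness, the mkcert flow mentioned above is only a couple of commands (development machines only; the output file names are what mkcert typically emits):
# install the local CA into the system/browser trust stores
mkcert -install
# issue a certificate for localhost and the loopback addresses
mkcert localhost 127.0.0.1 ::1
# produces e.g. ./localhost+2.pem and ./localhost+2-key.pem to reference from the server config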

Proxy + HTTPS = Page doesn't load

I've developed a web app which uses HTTPS and which works fine when I access it (live). Yet some customers, who use proxy servers, can't access the site. I already tried using a real certificate (a cheap one and only a trial, but still valid), but that didn't help.
Every time one of these users tries to access the site, the browser tries to load it until a timeout occurs. One user was even shown an authentication prompt (but I'm not 100% sure whether this was due to a proxy; still waiting for a response from the customer).
For which reasons can this happen and what can I do about it?
I'm using IIS, ASP.NET (C#) and JS. Side info: the URL contains a port; the internal structure of the network the IIS is running in (not mine) doesn't allow it otherwise.
443 is the dedicated port for HTTPS connectivity. Add a binding of type 'https' with the default port 443 in the site bindings of the hosted site. Then check whether SSL is enabled in the browser: IE -> Tools -> Internet Options -> Advanced -> Security.
If the HTTPS port in your web app's URL isn't port 443, you'll have a problem with corporate proxies that don't like non-standard HTTPS ports.
i.e. I hope your URL looks something like this: https://example.com:443/...

Cannot access the website via SSL

We have deployed our website to the live web server: Windows Server, IIS 7.5. The website is ASP.NET, .NET 4.5.
I have configured the website bindings to allow https requests for this website.
Asked the hosting provider to open up port 443.
I can access the website over internet with port 80, no issues at all. (http://mysite.com)
But I can not access via https, (https://mysite.com).
But I can access the site via SSL from the server itself, which means the SSL configuration is fine (https on localhost).
I can also telnet (telnet mysite.com 443), and it responds to a GET request via telnet.
I have rechecked the certificate and changed it to a self-signed certificate, issue is still there.
These requests are not being tracked in the IIS logs either; it seems the requests are not reaching IIS. Presumably something goes wrong before they reach the server.
But, when I access the website as http://mysite.com:443, it works.
I'm a bit confused by this behaviour. Obviously port 443 has been opened by the hosting company, but something is wrong with requests over HTTPS, which are supposed to go to port 443. Please help.
Because your site works when you access http://mysite.com:443, I am almost sure that you created the wrong binding in IIS. Instead of selecting https from the combo box you selected the default http.
There is a tutorial on how to do this on YouTube: Changing IIS 7.5 Bindings by David Johnson.
You've established that the port is open and the hostname binding is there, otherwise http://mysite.com:443 would not work. It's the SSL part that's not working, hence you can connect directly by port and telnet (port 443, but not SSL) but not with a browser via https. Only a browser connecting to an https URL will expect SSL. I'm pretty sure I've had the same issue, but cannot recall the exact cause; it was definitely related to an invalid SSL configuration or SSL binding. The behaviour was as if there were no connection at all, nothing, which is unusual - it's the bad config that causes the browser to abort the connection. If I remember what it was, I'll update or comment below.
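One extra check worth adding here (a suggestion on top of the answer above, not part of it): openssl will poke at the SSL layer itself rather than just the open port. Replace mysite.com with the real host:
openssl s_client -connect mysite.com:443 -servername mysite.com
If the port accepts the connection but no certificate comes back, or the handshake fails immediately, the problem is the SSL binding/termination rather than the firewall.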
So you can access the site using https://localhost? Your question is not quite clear on this point... what is the exact URL you are using? If it's https://localhost, that is actually an indication that your certificate is configured incorrectly. You seem to be interpreting this as an indicator that it's working OK and that is not the case. The domain name is tied to the certificate and SSL will work only when accessing the site using that domain name. So if it works for "localhost", something is wrong.
Finally I found the solution. The issue was a setting in the load balancer of the hosting provider. I asked them about it and they figured out the issue. Anyway, it was a good learning curve for me, and hopefully this knowledge will help others.
The firewall was already allowing both HTTP and HTTPS, which is why we could telnet through, run a GET /, and still pull down content from the 404 page of the IP address.
It appears a certain profile was applied to the HTTPS configuration in the load balancer which would only work for HTTP, so they have disabled that.
When they set this up for HTTP and HTTPS, they were not able to test HTTPS, because doing so would require an SSL certificate in IIS - which it appears we had already provided.
Thanks everyone for your help on this!

Simulate SSL termination with IIS Express

In our production environment a website runs under HTTPS with SSL terminating on a load balancer and passing traffic to the IIS servers as HTTP.
There are various in-house and 3rd-party components and controls within the site, and some of them use mechanisms similar to the .NET System.Web.HttpRequest.IsSecureConnection property, which simply queries the HTTPS server variable to return its result. As the connection into the web server from the load balancer is HTTP, these methods return the incorrect value and cause some components to fail. For example, a component might direct the user to an HTTP URL instead of HTTPS for a JavaScript file, causing the browser not to load the mixed content.
In order to debug these components and to develop a workaround, I need to recreate this scenario on my development machine. My question is: is there an easy way to simulate an externally terminated SSL connection for the Visual Studio / IIS Express development environment?
I've found a way using Port Forwarding Wizard.
Create a single TCP mapping with the Listen Port set to a spare port (e.g. 443) and the destination set to localhost on the web server's port (e.g. 80). Leave everything else as default, but go into SSL Encryption and generate a Root Key and Certificate in CA Center. Once done, select Enable SSL Encryption and select Server. Generate a Private Key file, a Cert Req file and a Certificate, and then Bob's your uncle: you get terminated SSL forwarding to your local IIS Express server. Simply start your port mapping and then connect to https://localhost with your web browser (specifying the port if it's not 443).

How to set up SSL in a load balanced environment?

Here is our current infrastructure:
2 web servers behind a shared load balancer
dns is pointing to the load balancer
web app is done in asp.net, with wcf services
My question is how to set up the SSL certificate to support https connections.
Here are 2 ideas that I have:
SSL certificate terminates at the load balancer; secure/unsecure communication behind the load balancer is forwarded to 2 different ports.
pro: only need 1 certificate as I scale horizontally
cons: I have to check secure or not by checking which port the request is coming from, which doesn't quite feel right to me; and WCF by design will not work when IIS is bound to 2 different ports (according to this)
SSL certificate terminates on each of the servers?
cons: need to add more certificates to scale horizontally
thanks
Definitely terminate SSL at the load balancer!!! Anything behind that should NOT be visible outside. Why wouldn't two ports for secure/insecure work just fine?
You don't actually need more certificates at all. Because the externally seen FQDN is the same, you use the same certificate on each machine.
This means that WCF (if you're using it) will work. WCF with the SSL terminating on the external load balancer is painful if you're signing/encrypting at a message level rather than a transport level.
You don't need two ports, most likely. Just have the SSL virtual server on the load balancer add an HTTP header to the request and check for that. It's what we do with our Zeus ZXTM 5.1.
You don't have to get a cert for every site; there are such things as wildcard certs, but a wildcard cert would still have to be installed on every server. (This assumes you are using subdomains; if not, you can reuse the same cert across machines.)
But I would probably put the cert on the load balancer, if only for the sake of easy configuration.
