Is SSL set on a per machine or per connection basis - asp.net

Is it possible to have an FTP server using SSL on an application server that does not use SSL?
How would you set up an ASP.NET 2.0 application to consume an SSL certificate?
This certainly sounds possible, but is it advisable? Is it good practice?

The choice to use SSL is made on a per-connection level, usually determined by the TCP port being used (i.e. it is set up between client and server before any application code is involved).
The same service/content could be set up on multiple ports each with a different choice for SSL.
The certificate is per host name, but servers can generally support appearing under different names.
Using SSL with ASP.NET takes nothing special; it just works once the IIS web site is configured to support SSL (or to require it, in which case connections to port 80 for HTTP are redirected to the SSL port). This choice can be made on a per-folder basis.
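As an illustration (not part of the original answer), a minimal sketch of enforcing HTTPS from ASP.NET code itself, assuming the site also answers on the standard HTTPS port 443, could look like this:

    // Global.asax.cs - hedged sketch: redirect any plain-HTTP request to HTTPS.
    // Assumes the site is reachable on the default HTTPS port (443).
    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpRequest request = HttpContext.Current.Request;

            if (!request.IsSecureConnection)
            {
                // Rebuild the current URL with the https scheme and redirect to it.
                UriBuilder secureUrl = new UriBuilder(request.Url);
                secureUrl.Scheme = Uri.UriSchemeHttps;
                secureUrl.Port = 443;
                HttpContext.Current.Response.Redirect(secureUrl.Uri.AbsoluteUri, true);
            }
        }
    }

This is only a code-level fallback; the IIS configuration described above is usually all you need.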

FTP is at the application layer, and SSL is lower, at the presentation layer. SSL sessions are on a per-connection basis. Take a look at the Wikipedia page. The SSL connection is established before anything happens with your application. Your FTP server probably isn't running inside your .NET application server, is it? You should be able to set up an SSH server listening for SCP connections separately. If it really does run from inside your app server, you should be able to listen on a separate port for the SSL connection.
Short of any of that, here's a good link for configuring SSL in IIS. You don't have to make the certificates mandatory; that way you can allow both unsecured and secure traffic if that fits in with your application model.
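On the FTP side of the question, note that a .NET client can request an SSL-protected FTP session regardless of how the web application itself is configured; a minimal sketch (the host name and credentials below are placeholders, not from the question):

    // Hedged sketch: connect to an FTPS server from .NET, independent of the web app's own SSL setup.
    // "ftp.example.com" and the credentials are placeholders.
    using System;
    using System.IO;
    using System.Net;

    class FtpsListing
    {
        static void Main()
        {
            FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/");
            request.Method = WebRequestMethods.Ftp.ListDirectory;
            request.EnableSsl = true;               // negotiate SSL/TLS on the FTP control channel
            request.Credentials = new NetworkCredential("user", "password");

            using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }

The EnableSsl flag affects only this FTP connection; it has no bearing on whether the ASP.NET site itself runs over HTTPS.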

While protecting an application with SSL is always a good idea, it is technically not trivial.
Having a web application protected with SSL requires the web server to be reachable on a new port (443/https instead of 80/http). This has to be configured "system-wide". Also, without SNI there can be only one certificate per IP address, which is often a problem when hosting multiple domains on the same server.

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server now serves HTTPS -- but the original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent data transmitted between two endpoints from being intercepted by outside parties and either modified, as in a man-in-the-middle attack, or stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080: can the site still be accessed directly at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock a port later if you add a service that needs it.
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. Even within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; it could then capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate so that the people responsible for application A don't think that service X might be something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't run as root. Use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even by the author of mkcert, is that this is only recommended for development purposes, not production. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
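If you do go the self-signed route in development, .NET clients calling https://localhost will normally reject the certificate; a development-only sketch of relaxing validation for loopback requests (my own assumption, not something the answer above prescribes) might look like:

    // Development-only sketch: accept a self-signed certificate, but only for localhost requests.
    // Do not use this in production; it deliberately weakens certificate validation.
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;

    static class DevCertTrust
    {
        public static void AllowSelfSignedLocalhost()
        {
            ServicePointManager.ServerCertificateValidationCallback =
                delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
                {
                    if (errors == SslPolicyErrors.None)
                        return true; // normal, valid certificate

                    // Only tolerate validation errors for requests to a loopback address.
                    HttpWebRequest request = sender as HttpWebRequest;
                    return request != null && request.RequestUri.IsLoopback;
                };
        }
    }

Scoping the exception to loopback requests keeps ordinary certificate checks intact for everything else.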

Require client certificate only for a folder

Currently my application is behind a load balancer (NetScaler) that does SSL offload, so my application runs over HTTP but is exposed externally over HTTPS. In IIS only http:80 is bound. The load balancer uses a certificate called *.mycert.com.
Now I have to require a client certificate for a specific folder of my application, /Services, but that certificate is myPeskyCert, which is different from *.mycert.com. This is necessary because I have to respect how the client will call me.
Currently I'm following these answers:
Can IIS require SSL client certificates without mapping them to a windows user?
What is the difference between requiring an SSL cert and accepting an SSL cert?
, but this way:
I have to do SSL bridging, so I have to bind 443 on the web app
the whole application is then presented as myPeskyCert
How do I configure IIS so that my application is presented as *.mycert.com, but asks for myPeskyCert when the /Services folder is requested?
It's not possible; a certificate must apply to the entire site binding.
The solution is the following:
bind the application to two different URL bindings
on the balancer, set one certificate or the other (with SSL offload) for the two different URLs
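Whichever binding ends up serving /Services, the code behind that folder can still double-check that a client certificate was actually presented; a hedged sketch for an ASP.NET page or handler (the expected subject value is a placeholder):

    // Hedged sketch: inside code under /Services, check that a client certificate was presented.
    // The expected subject value is a placeholder for whatever identifies myPeskyCert clients.
    using System.Web;

    public static class ClientCertCheck
    {
        public static bool CallerPresentedExpectedCert(HttpRequest request)
        {
            HttpClientCertificate cert = request.ClientCertificate;

            if (!cert.IsPresent || !cert.IsValid)
                return false;

            // Optionally pin on the certificate subject (placeholder value).
            return cert.Subject.Contains("CN=expected-client");
        }
    }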

Simulate SSL termination with IIS Express

In our production environment a website runs under HTTPS with SSL terminating on a load balancer and passing traffic to the IIS servers as HTTP.
There are various in-house and 3rd party components and controls within the site, and some of them use mechanisms similar to the .NET System.Web.HttpRequest.IsSecureConnection property, which simply queries the HTTPS server variable to return its result. As the connection into the web server from the load balancer is HTTP, these methods return the incorrect value and cause some components to fail. For example, a component might direct the user to an HTTP URL instead of HTTPS for a JavaScript file, causing the browser not to load the mixed content.
In order to debug these components and to develop a workaround, I need to recreate this scenario on my development machine. My question is: is there an easy way to simulate an externally terminated SSL connection in the Visual Studio / IIS Express development environment?
I've found a way using Port Forwarding Wizard.
Create a single TCP mapping with Listen Port set to a spare port (e.g. 443) and the destination set to localhost on your web server's port (e.g. 80).
Leave everything else as default, but go into SSL Encryption and generate a Root Key and Certificate in CA Center.
Once done, select Enable SSL Encryption and select Server.
Generate a Private Key file, a Cert Req file and a Certificate, and bob's your uncle: you get terminated SSL forwarding to your local IIS Express server.
Simply start your port mapping and then connect to https://localhost with your web browser (specifying the port if it's not 443).

Unable to obtain tokenresult oauthClient.ExchangeCodeForAccessToken; unreachable network 69.171.229.24:443

I developed an FBConnect web application using C# .NET Framework 4.0 recently. Tested on my UAT server, everything works fine: I'm able to log in with my Facebook account and perform all operations.
Unfortunately, when I deploy the same code to my client's production environment, FBConnect returns "unreachable network 69.171.229.24:443". After some investigation, I noticed that port 443 is blocked, and due to corporate policy this port is not allowed.
Is there an alternative way to tweak my Facebook app settings NOT to authenticate via port 443, rather than rewriting my code?
Please advise.
No, there is no alternative.
Port 443 is for secure HTTPS connections using TLS and SSL. Facebook, quite correctly, restricts access to their authentication mechanism to this port: (as far as I know) there are no alternative mechanisms that use a different port or an insecure login on port 80.
Check with your client to see if there's a proxy server that can be used for HTTPS connections.
Otherwise, request that your client opens that port.
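If the client does have a proxy that is permitted to make outbound HTTPS connections, the application's outgoing requests can be routed through it; a minimal sketch (the proxy address is a placeholder, and whether your OAuth client library honours the default proxy is an assumption to verify):

    // Hedged sketch: route outbound calls (including the OAuth token exchange) through a corporate proxy.
    // "http://corp-proxy:8080" is a placeholder address.
    using System.Net;

    static class ProxySetup
    {
        public static void UseCorporateProxy()
        {
            IWebProxy proxy = new WebProxy("http://corp-proxy:8080");

            // Applies to WebRequest/HttpWebRequest-based calls made by the application,
            // which is what many OAuth client libraries use under the hood.
            WebRequest.DefaultWebProxy = proxy;
        }
    }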

How to set up SSL in a load balanced environment?

Here is our current infrastructure:
2 web servers behind a shared load balancer
dns is pointing to the load balancer
web app is done in asp.net, with wcf services
My question is how to set up the SSL certificate to support https connection.
Here are 2 ideas that I have:
SSL certificate terminates at the load balancer. Secure/insecure communication behind the load balancer will be forwarded to 2 different ports.
pro: only need 1 certificate as I scale horizontally
cons: I have to check whether a request is secure by checking which port it came in on, which doesn't quite feel right to me
WCF by design will not work when IIS is bound to 2 different ports
(according to this)
SSL certificate terminates on each of the servers?
cons: need to add more certificates to scale horizontally
thanks
Definitely terminate SSL at the load balancer!!! Anything behind that should NOT be visible outside. Why wouldn't two ports for secure/insecure work just fine?
You don't actually need more certificates at all. Because the externally seen FQDN is the same, you use the same certificate on each machine.
This means that WCF (if you're using it) will work. WCF with the SSL terminating on the external load balancer is painful if you're signing/encrypting at a message level rather than a transport level.
You don't need two ports, most likely. Just have the SSL virtual server on the load balancer add an HTTP header to the request and check for that. It's what we do with our Zeus ZXTM 5.1.
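For example, if the load balancer is configured to add a header such as X-Forwarded-Proto (the header name here is an assumption; use whatever your balancer actually sets), the application can treat a request as secure when either the connection itself is SSL or that header says https:

    // Hedged sketch: treat a request as secure if the connection is SSL,
    // or if the load balancer added a forwarding header ("X-Forwarded-Proto" is an assumed name).
    using System;
    using System.Web;

    public static class RequestSecurity
    {
        public static bool IsEffectivelySecure(HttpRequest request)
        {
            if (request.IsSecureConnection)
                return true;

            string forwardedProto = request.Headers["X-Forwarded-Proto"];
            return string.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase);
        }
    }

Components that call IsSecureConnection directly would still need to be pointed at a helper like this.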
You don't have to get a cert for every site; there are such things as wildcard certs. But it would have to be installed on every server (assuming you are using subdomains; if not, then you can reuse the same cert across machines).
But I would probably put the cert on the load balancer anyway, if only for the sake of easy configuration.
