How to host an HTTPS API on a LAN

I'm planning an API that will be used by a client on their internal office networks in multiple separate locations. Each location will have a separate instance installed.
They want it to be secure and running on HTTPS.
What I can't seem to understand is how an HTTPS certificate can work when there is no externally facing fully qualified domain name, e.g. MyApiServer.mycompany.com.
Instead they will likely just be running it on a server/computer with just a hostname, i.e. MyApiServer.
The data being transferred is not necessarily sensitive but it places records in a sales system.
If HTTPS is not possible in this scenario, what's an alternative method to secure the communication?

The server name does not have to be fully qualified. For the connection to be trusted it is enough that the host name in the URL matches the name in the certificate's subject (or Subject Alternative Name).
So your clients would call https://MyApiServer/endpoint on your LAN, and your service would present a server certificate whose subject is MyApiServer.
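A minimal sketch of what the server side can look like, assuming Nginx fronts the API and a certificate has already been issued (for example by an internal/company CA) with MyApiServer as its subject or Subject Alternative Name; the file paths and the backend port are placeholders:

    server {
        listen 443 ssl;
        server_name MyApiServer;    # must match the host name clients put in the URL

        # placeholder paths; the certificate's subject/SAN must be MyApiServer
        ssl_certificate     /etc/ssl/certs/MyApiServer.crt;
        ssl_certificate_key /etc/ssl/private/MyApiServer.key;

        location / {
            proxy_pass http://127.0.0.1:5000;    # hypothetical port the API listens on locally
        }
    }

The remaining work is getting the clients to trust whoever issued that certificate (e.g. by distributing an internal root CA certificate to each machine); otherwise they will still see certificate warnings even though the names match.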

Related

What will happen if a SSL-configured Nginx reverse proxy pass to an web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original backend servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified (as in a man-in-the-middle attack) or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080, and the NGINX reverse proxy forwards certain traffic to localhost:8080, can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later, if you add a service that requires that port be opened.
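A sketch of the Nginx side of this, assuming the backend listens on 127.0.0.1:8080 only; names and paths are illustrative:

    # serve nothing over plain HTTP; just redirect to HTTPS
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/example.com.crt;     # placeholder paths
        ssl_certificate_key /etc/ssl/private/example.com.key;

        location / {
            # the backend should bind to 127.0.0.1:8080 only, so that
            # http://example.com:8080 is not reachable from the outside
            proxy_pass http://127.0.0.1:8080;
        }
    }

The firewall rules themselves (e.g. allowing only ports 80 and 443 inbound) live outside Nginx, in iptables/ufw or your provider's firewall.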
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A can't assume that an unfamiliar service X is just something the team running application B is using. Anything out of place is more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
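One additional option, not mentioned above, is to put the proxy-to-backend hop on a Unix domain socket with restrictive file permissions instead of a TCP port, so only processes running as the right user can connect to it at all. A sketch, assuming the backend application can listen on a socket (the path is a placeholder):

    # the backend listens on /run/myapp/app.sock (mode 0660, owned by the app user and
    # the www-data group) instead of localhost:8080, so other local services cannot
    # connect to it the way they could to an open TCP port
    upstream app_backend {
        server unix:/run/myapp/app.sock;
    }

    # inside the existing HTTPS server block:
    location / {
        proxy_pass http://app_backend;
    }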
Use good security practices
Use good security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. Don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
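For reference, the worker user is set at the top of nginx.conf; the master process still starts as root so it can bind to ports 80/443, but the worker processes drop to this account (www-data is the conventional choice on Debian/Ubuntu):

    # /etc/nginx/nginx.conf
    user www-data;            # worker processes run as this unprivileged user
    worker_processes auto;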
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development purposes, not production. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
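As an illustration (development only, as discussed above), an Nginx server block for https://localhost could look like the following, assuming the certificate and key were generated beforehand, e.g. with mkcert ("mkcert -install", then "mkcert localhost 127.0.0.1") or as a self-signed pair with openssl; the file names and backend port are placeholders:

    server {
        listen 443 ssl;
        server_name localhost;

        # whatever files mkcert (or openssl) actually produced
        ssl_certificate     /etc/nginx/certs/localhost.pem;
        ssl_certificate_key /etc/nginx/certs/localhost-key.pem;

        location / {
            proxy_pass http://127.0.0.1:8080;    # the local development backend
        }
    }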

Restrict OpenVPN server to be accessible only through the domain name and not the server IP in the browser

I have deployed an OpenVPN server from the GCP Marketplace and have attached a domain name to it along with an SSL certificate. Currently, I am able to access the server through both:
https://domain-name.com
https://x.x.x.x (the server's static IP)
I want the server to be accessible only through the hostname and not its IP address, as the latter URL gives an SSL security error because the SSL certificate is issued for the domain name and not for the server IP.
Can anyone help me restrict this, or give some advice on how to solve it?
You could try to do it (prevent access by IP), but I advise you not to.
Theoretically, your HTTP server could reset the SSL connection when the browser sends the "wrong" SNI (Server Name Indication) in the handshake.
That way you could prevent the browser from displaying security alerts.
Instead, the browser would show a network error message.
I doubt you would like to trade one type of error for another.
I suggest doing nothing about this "error", because legitimate visitors will come to your site via the domain name and will never see the security warning.
Also, there is a real possibility that a legitimate visitor (with a paranoid mindset) will use a browser with SNI disabled, so your server will not be able to tell good requests from bad ones.
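For completeness: if you still decide to accept that trade-off, and the HTTPS front end is Nginx (or you can put Nginx in front of it), recent versions (1.19.4 and later) can do exactly this with the ssl_reject_handshake directive, which aborts the TLS handshake in a default server block whenever the SNI name doesn't match a configured site. A sketch (certificate paths are placeholders):

    # default server: abort the TLS handshake when the SNI name is not one we serve;
    # clients hitting https://x.x.x.x get a connection error instead of a certificate warning
    server {
        listen 443 ssl default_server;
        ssl_reject_handshake on;      # requires nginx >= 1.19.4
    }

    # the real site, selected only when SNI is domain-name.com
    server {
        listen 443 ssl;
        server_name domain-name.com;
        ssl_certificate     /etc/ssl/certs/domain-name.com.crt;
        ssl_certificate_key /etc/ssl/private/domain-name.com.key;
        # ... rest of the site configuration ...
    }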

Is it possible to setup multiple SSL on one Jelastic app?

I want to ask whether it is possible to have multiple SSL certificates on one IP in Jelastic with the Nginx load balancer.
The use case is a proxy server that will receive requests from multiple custom domains.
For example:
example-proxy.com points to a Public IP address assigned to a Jelastic Jetty Application.
Now the custom domains point to the Jetty application:
custom-domain-example.com CNAME www points to example-proxy.com etc.
custom-domain-example-N.org CNAME www points to example-proxy.com etc.
Is it possible to have this kind of configuration with Jelastic?
Is this possible to do using the existing Jelastic API? Right now what I see in the API docs is BindSSL, but it seems it can only bind one certificate; is this correct?
Yes it's possible, but you need to configure it manually (just in nginx configs) instead of using the Jelastic dashboard/API SSL feature.
The other point to remember is that because there's 1 IP per container, multiple SSL certificates can only be served via SNI. That may have implications for you depending on what browsers your users use: in most cases it's ok now (old mobile OS and Windows XP are the primary exceptions)
The BindSSL API method allows you to automatically configure one SSL certificate on the externally facing node of your environment (Nginx Load Balancer in your case). If you attempt to BindSSL multiple times you just replace the existing certificate (not add multiple certificates).
Basically this functionality was built before SNI was widely supported, so it was assumed to be 1 SSL cert. per 1 environment. You can read more about SNI to make an informed decision about whether it will suit your needs here: http://blog.layershift.com/sni-ssl-production-ready/
An alternative for your needs would be to purchase a multi-domain SSL certificate (SAN cert). This lets you contain multiple hostnames within 1 certificate. Since you mentioned that you're our customer, you can contact our SSL team for details/pricing for this option.
If you still want to use multiple SSL certs + serve them via SNI, you will probably need to use the Read and Write API methods to save the SSL certificate parts and config. file(s) on your Nginx node.
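The config you end up placing on the Nginx node is then just ordinary SNI-based virtual hosts, roughly like the sketch below; the certificate paths and the upstream port for the Jetty application are placeholders:

    # one server block per custom domain, each with its own certificate;
    # nginx picks the right block based on the SNI name the browser sends
    server {
        listen 443 ssl;
        server_name custom-domain-example.com www.custom-domain-example.com;
        ssl_certificate     /etc/nginx/ssl/custom-domain-example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/custom-domain-example.com.key;
        location / {
            proxy_pass http://127.0.0.1:8080;    # the Jetty application behind the balancer
        }
    }

    server {
        listen 443 ssl;
        server_name custom-domain-example-N.org www.custom-domain-example-N.org;
        ssl_certificate     /etc/nginx/ssl/custom-domain-example-N.org.crt;
        ssl_certificate_key /etc/nginx/ssl/custom-domain-example-N.org.key;
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }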
Don't forget to restart the nginx service (you can use RestartNodeById for that) after any config. changes.
EDIT: As you mentioned that your end users will have control over this process, you will probably prefer to use reload instead of restart (see http://nginx.org/en/docs/beginners_guide.html#control).
You can invoke that via Jelastic API using ExecCmdById, with commandList=[{"command": "sudo service nginx reload"}]
But take care if you're allowing end users to upload their own certificates via your application - you need to ensure that what they upload is really a certificate and nothing malicious...

What exactly does "every SSL certificate requires a dedicated IP" mean?

I've read a bit about SSL certificates, and in particular I've read that an SSL certificate "requires a dedicated IP address". Now, I'm unsure of the meaning of this; does it mean that the certificate requires a dedicated IP address separate from the IP address used for normal HTTP communication, or just that it can't share the IP address with other SSL certificates?
To clarify, I have a VPS with a dedicated IP address. The VPS is hosting quite a few different sites, including several subdomains of the main site, but only the main site and the subdomains requires SSL. Can I simply purchase an SSL certificate for *.example.com using my current IP address, or do I need to get one that is separate from the other sites on the VPS? Or even worse, do I need to get one that is separate from all HTTP traffic on the server? Keep in mind that none of the other sites needs SSL.
Thanks for any clarification on the topic.
Edit: Some sources for my worries:
http://symbiosis.bytemark.co.uk/docs/symbiosis.html#ch-ssl-hosting
Is it necessary to have dedicated IP Address to install SSL certificate?
There's no such thing as "SSL certificate". The term is misleading. X.509 certificates can be issued for different purposes (as defined by their Key Usage and Extended Key Usage "properties"), in particular for securing SSL/TLS sessions.
Certificates don't require anything in regards to sockets, addresses and ports as certificates are pure data.
When securing some connection with TLS, you usually use the certificate to authenticate the server (and sometimes the client). There's one server per IP/Port, so usually there's no problem for the server to choose what certificate to use.
HTTPS is the exception: several different domain names can refer to one IP, and the client (usually a browser) connects to the same server for different domain names. The domain name is passed to the server in the request, which is sent only after the TLS handshake.
Here's where the problem arises - the web server doesn't know which certificate to present. To address this a new extension has been added to TLS, named SNI (Server Name Indication). However, not all clients support it. So in general it's a good idea to have a dedicated server per IP/Port per domain. In other words, each domain, to which the client can connect using HTTPS, should have its own IP address (or different port, but that's not usual).
SSL certificates do not require a dedicated IP address. SSL certificates store a so-called common name. Browsers interpret this common name as the DNS name of the server they are talking to. If the common name does not match the DNS name of the server the browser is talking to, the browser will issue a warning.
You can get a so-called wildcard certificate, which is valid for all hosts within a certain domain.
...following up on #Eugene's answer with more info about the compatibility issue...
According to this page from namecheap.com SNI does not work on:
Windows XP + any version Internet Explorer (6,7,8,9)
Internet Explorer 6 or earlier
Safari on Windows XP
BlackBerry Browser
Windows Mobile up to 6.5
Nokia Browser for Symbian at least on Series60
Opera Mobile for Symbian at least on Series60
Web site will still be available via HTTPS, but a certificate mismatch error will appear.
Thus, as we enter 2016 I would venture to stick my neck out there and say, "If you're building a modern website anyway (not supporting old browsers), and if the project is so small that it cannot afford a dedicated IP address, you'll probably be fine relying on SNI." Of course, there are thousands of experts who would disagree with this, but we're talking about being practical, not perfect.
The SSL certificate common name has to match the domain name. There is no requirement on the IP address, unless it's a limitation imposed by the certificate provider or the HTTP server software.
Edit: looking around the web, it seems the rumour spread because Apache's SSL module didn't have (at least it didn't in 2002) any mechanism to choose a different certificate based on the hostname. In that scenario you would have to run two different Apache web servers on two different IP addresses.
Anyway, in your configuration you shouldn't have any problem using only one IP, because you don't need two different certificates (you plan to use a wildcard certificate).
In any case, I would try configuring the web server with a self-signed certificate before spending money on a second IP or certificate.
Edit 2: see the Apache documentation:
http://httpd.apache.org/docs/2.2/vhosts/name-based.html
It seems that this is now supported (Apache >= 2.2.12).

Can you host multiple tenants on a single ASP.NET application instance over SSL?

I have an ASP.NET application that will host multiple tenants (Software-as-a-Service style). Each tenant will have their own domain name (www.mydomain.com, www.yourdomain.com) and their own SSL certificate.
Is there a way to host the application such that all of the tenants are on the same application instance?
I know you can have multiple IIS web sites pointing to the same shared location, but that won't work - it's not the same instance. That's different instances of the same application.
I also know you can use SSL host header mapping with wildcard certificates, but that won't work because all of the tenants would need to be subdomains of the same primary domain - yourdomain.commondomain.com, mydomain.commondomain.com. For the solution to be valid, everyone needs to have their own domain name, not be subdomains. (Ideally each tenant could opt to use an EV cert, too, and you can't have wildcard EV certs.)
The problem is that classic SSL requires the certificate to be presented before the web browser has indicated which host it wants to use. You can therefore only configure one certificate per IP/port combination.
There is an extension to TLS called Server Name Indication which allows the browser to indicate which logical server it wants to talk to. This feature is supported as of IIS 8.0 (Windows Server 2012).
Wildcards work because the certificate itself says that it is valid for all servers under that domain.
Are you constrained to IIS only, or could software/hardware proxies or content-switching hardware also be an option?
The idea is that you could terminate the SSL at a proxy or content switch, then transform the request into your own internal URL.
E.g. foo.com/x and bar.com/y get translated into myapp/x and myapp/y respectively under the hood, passing the original hostname in the request headers.
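A sketch of that idea with Nginx doing the SSL termination (any proxy or content switch that can terminate SSL works the same way); the internal name myapp.internal and the header choices are illustrative:

    # terminate SSL for foo.com here, then hand the request to the single
    # ASP.NET application instance over plain HTTP, keeping the original host
    server {
        listen 443 ssl;
        server_name foo.com;
        ssl_certificate     /etc/ssl/certs/foo.com.crt;
        ssl_certificate_key /etc/ssl/private/foo.com.key;

        location / {
            proxy_pass http://myapp.internal;            # hypothetical internal host running the app
            proxy_set_header Host $host;                 # original hostname, so the app can resolve the tenant
            proxy_set_header X-Forwarded-Proto https;    # lets the app know the outside connection was HTTPS
        }
    }

    # a second, otherwise identical server block carries bar.com and its certificate;
    # SNI lets both share the same IP and port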
