I have a pair of public/private certificates which are very sensitive and cannot be stored on my computer, only on secured premises.
I need to use such certificates to send requests to multiple providers (in my case those are banks).
What I want to do is set up a proxy server which would hold such certificates.
Then I could send requests through that proxy, and it would establish the mTLS connection with the banks using those certificates.
So my questions are:
Is it possible?
Do I understand correctly that such an approach could be called a "reverse termination proxy"?
Ideally, I would like to do that using nginx. Is that possible?
I've found some info in the nginx documentation, but I'm not sure whether it is what I'm looking for. The kind of config I have in mind is sketched below.
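An untested sketch, based on the proxy_ssl_* directives from the nginx docs (the paths and the bank host are placeholders):

    location /bank-api/ {
        proxy_pass https://bank.example.com/;

        # Client certificate/key presented to the bank for mTLS.
        proxy_ssl_certificate     /etc/nginx/certs/client.crt;
        proxy_ssl_certificate_key /etc/nginx/certs/client.key;

        # Verify the bank's server certificate against its CA.
        proxy_ssl_trusted_certificate /etc/nginx/certs/bank-ca.crt;
        proxy_ssl_verify on;
    }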
I'm going to be using gRPC for a device-to-device connection over a network (my device will be running Linux and collecting patient data from various monitors; gRPC will be used by a Windows client system to grab and display that data).
I obviously want to encrypt the data on the wire, but dealing with certificates is going to be a problem for various reasons. I can easily have the server not ask for the client cert, but so far I've been unable to find a way around the client validating the server's cert.
I've got several reasons I don't want to bother with a server cert:
The data collection device (the gRPC server) is going to be assigned an IP and name via DHCP in most cases. That means that when the name changes (at install time, or when they move the device to a different part of the hospital), I have to automatically fix up the certs. Other than shipping a self-signed CA cert and key with the device, I don't know how to do that.
There are situations where we're going to want to point client to server via IP, not name. Given that gRPC can't do a cert for an IP (https://github.com/grpc/grpc/issues/2691), this becomes a configuration that we can't support without doing something to give a name to a thing we only have an IP for (hosts file on the Windows client?). Given the realities of operating in a hospital IT environment, NOT supporting use of IPs instead of names is NOT an option.
Is there some simple way to accommodate this situation? I'm far from an expert on any of this, so it's entirely possible I've missed something very basic.
Is there some simple way to set the name that the client uses to check the server to be different than the name it uses to connect to the server? That way I could just set a fixed name, use that all the time and be fine.
Is there some way to get a gRPC client to not check the server certificate? (I already have the server setup to ignore the client cert).
Is there some other way to get gRPC to encrypt the connection?
I could conceivably set things up to have the client open an ssh tunnel to the server and then run an insecure gRPC connection across that tunnel, but obviously adding another layer to opening the connection is a pain in the neck, and I'm not at all sure how comfortable the client team is going to be with that.
Thanks for raising this question! Please see my inline replies below:
I obviously want to encrypt the data on the wire, but dealing with
certificates is going to be a problem for various reasons. I can
easily have the server not ask for the client cert, but so far I've
been unable to find a way around the client validating the server's
cert.
There are actually two kinds of checks happening on the client side: the certificate check and the hostname verification check. The former checks the server certificate to make sure it is trusted by the client; the latter checks the target name against the server's identity in the peer certificate. It seems you are struggling with the latter; just be aware that you need to get both of these checks right on the client side in order to establish a good connection.
The data collection device (the gRPC server) is going to be assigned
an IP and name via DHCP in most cases. Which means that when that name
changes (at install time, or when they move the device to a different
part of the hospital), I have to automatically fixup the certs. Other
than shipping a self-signed CA cert and key with the device, I don't
know how to do that.
There are situations where we're going to want to point client to
server via IP, not name. Given that gRPC can't do a cert for an IP
(https://github.com/grpc/grpc/issues/2691), this becomes a
configuration that we can't support without doing something to give a
name to a thing we only have an IP for (hosts file on the Windows
client?). Given the realities of operating in a hospital IT
environment, NOT supporting use of IPs instead of names is NOT an
option.
gRPC supports IP addresses (this is also mentioned in the last comment of the issue you brought up). You will have to put the IP address in the SAN field of the server's certificate instead of the CN field. It's true that this becomes a problem if your IP changes dynamically; that's why one normally uses a DNS domain name and sets up a PKI infrastructure. If that's too heavy an amount of work for your team, see below :)
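For illustration, a certificate with an IP SAN can be issued like this (assuming OpenSSL 1.1.1+ for -addext; the address and name are placeholders):

    # Self-signed certificate whose SAN is an IP address.
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout server.key -out server.crt -days 365 \
      -subj "/CN=data-collector" \
      -addext "subjectAltName = IP:10.0.0.5"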
Is there some simple way to accommodate this situation? I'm far from
an expert on any of this, so it's entirely possible I've missed
something very basic.
Is there some simple way to set the name that the client uses to check
the server to be different than the name it uses to connect to the
server? That way I could just set a fixed name, use that all the time
and be fine.
You can connect directly via the IP address and override the target name in the channel args. Note that the overridden name should match the certificate sent by the server. Depending on which credential type you use, the details differ slightly. I suggest you read this question.
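For example, with the Python gRPC API it could look roughly like this (the address, port, and override name are placeholders; the override must match a name in the server's certificate):

    import grpc

    # Trust the CA that signed the server's certificate.
    with open("ca.crt", "rb") as f:
        creds = grpc.ssl_channel_credentials(root_certificates=f.read())

    # Connect by IP, but verify the certificate against a fixed name.
    channel = grpc.secure_channel(
        "10.0.0.5:50051",
        creds,
        options=[("grpc.ssl_target_name_override", "data-collector")],
    )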
Is there some way to get a gRPC client to not check the server
certificate? (I already have the server setup to ignore the client
cert).
Is there some other way to get gRPC to encrypt the connection?
Note that even if you don't verify any certificate, as long as the correct credential type (SSL/TLS) is used, the data on the wire is encrypted. Certificates help you make sure the endpoint to which you are connecting is the one you expect; skipping certificate verification leaves your application open to man-in-the-middle attacks. Hope this helps you better understand the goals and make the right judgement for your team.
I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through Nginx's reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites be HTTPS is to prevent the transmitted data between two endpoints from being intercepted by outside parties to either be modified, as in a man-in-the-middle attack, or for the data to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080; can the site still be accessed directly at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on, as sketched below. You can always unblock them later if you add a service that requires that port to be open.
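On a Debian/Ubuntu host, for instance, this could be done with ufw (the port numbers are examples; adjust to your services):

    # Block everything inbound by default, then open only what's needed.
    sudo ufw default deny incoming
    sudo ufw allow 22/tcp    # SSH, so you don't lock yourself out
    sudo ufw allow 443/tcp   # HTTPS via the NGINX reverse proxy
    sudo ufw enable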
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware is to either use virtualisation to separate a single server into logical servers, or use different hardware for things that are not related. This at least keeps things separate so that the people responsible for application A don't think that service X might be something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow good security practices on the server. For instance, don't run as root: use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
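A rough sketch of that idea, with a dedicated user and tightened permissions (the names and paths are examples):

    # Run the service as its own unprivileged system user.
    sudo useradd --system --no-create-home svc-api

    # Give it ownership of only its own data directory, nothing else.
    sudo chown -R svc-api:svc-api /srv/api-data
    sudo chmod -R 750 /srv/api-data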
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self signed certificate. The other uses a tool called mkcert which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even by the author of mkcert, is that this is only recommended for development purposes, not production purposes. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
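For development, mkcert usage comes down to a couple of commands (the names are examples):

    # Install the local CA into the system/browser trust stores.
    mkcert -install

    # Issue a certificate valid for localhost and the loopback addresses.
    mkcert localhost 127.0.0.1 ::1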
I'm planning an API that will be used by a client on their internal office networks in multiple separate locations. Each location will have a separate instance installed.
They want it to be secure and running on HTTPS.
What I can't seem to understand is how an HTTPS certificate can work when there is no externally facing fully qualified name, e.g. MyApiServer.mycompany.com.
Instead they will likely just be running it on a server/computer with only a hostname, i.e. MyApiServer.
The data being transferred is not necessarily sensitive but it places records in a sales system.
If HTTPS is not possible in this scenario, what's an alternative method to secure the communication?
The server name does not have to be fully qualified. For securing the call it is enough that the name specified in the URL matches the name specified in the certificate.
So your clients would call https://MyApiServer/endpoint on your LAN, which should cause your service to present a server certificate whose subject is MyApiServer.
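As a sketch, a self-signed certificate for a bare hostname can be issued and tested like this (assuming OpenSSL 1.1.1+; the client has to be told to trust the certificate explicitly):

    # Certificate whose SAN is the bare host name "MyApiServer".
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout api.key -out api.crt -days 365 \
      -subj "/CN=MyApiServer" \
      -addext "subjectAltName = DNS:MyApiServer"

    # Client call, trusting that specific certificate.
    curl --cacert api.crt https://MyApiServer/endpoint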
I'm trying to solve an architecture design puzzle: designing an infrastructure that keeps data and servers as secure and hidden as possible. Here are the requirements:
I want to hide the internal design of my infra (several data servers with public and private hosts)
I want to access each service using the same IP address, with the query forwarded to the right server based on something (cookie, URI, port or whatever)
access to the data services must be enforced with SSL/TLS encryption
After studying these requirements carefully, I was thinking about using a reverse proxy and granting access to all data services only through the reverse proxy server. Another pro of a reverse proxy is that access authentication and SSL/TLS encryption are enforced in one place, with no need to configure each endpoint separately. The shape I have in mind is sketched below.
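Something like this nginx-style sketch is what I mean (the hostnames, paths, and internal addresses are placeholders, not a working config):

    server {
        listen 443 ssl;
        server_name gateway.example.com;

        ssl_certificate     /etc/nginx/certs/gateway.crt;
        ssl_certificate_key /etc/nginx/certs/gateway.key;

        # Route by URI to the hidden internal servers.
        location /data1/ { proxy_pass http://10.0.1.10:8080/; }
        location /data2/ { proxy_pass http://10.0.1.11:8080/; }
    }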
My real issue is that I didn't find any reverse proxy that supports plain TCP connections, and likewise the static load-balancing algorithms I've seen seem to be supported only for HTTP requests (in haproxy, for instance).
Any idea how to solve this issue?
Thanks to all
I've a quick question:
I have 2 websites, 1 has some links to file downloads. Those files are hosted on another server.
I need to encrypt the request data between the two servers. Can I do it just using an SSL certificate?
Any other/better idea?
Those files are private docs, so I don't want server 2 or any other party to be able to track the file requests between the servers.
Thanks
Yes, use SSL (or rather TLS) if you want to achieve transport-level security. If these are two servers that you control, you can configure your own self-signed certificates. If you want to make sure that only the two servers can communicate with each other, then require client authentication, where both the server and the client use a certificate/private key pair.
Most of the time the trick is to implement a sensible key management procedure. Setting up a web server to handle TLS using certificates should not be too hard.
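A minimal nginx-style sketch of requiring client authentication on the serving side (the paths are placeholders):

    server {
        listen 443 ssl;

        ssl_certificate        /etc/nginx/certs/server.crt;
        ssl_certificate_key    /etc/nginx/certs/server.key;

        # Require clients to present a certificate signed by this CA.
        ssl_client_certificate /etc/nginx/certs/ca.crt;
        ssl_verify_client      on;
    }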
An SSL certificate will work fine for ensuring the transfer is encrypted. Even a self-signed certificate will be fine for this purpose (provided you can tell the client you're going to use to accept the self-signed cert).
Alternatively, if they're two Linux machines, then scp (secure copy) is a great tool: it connects via ssh and grabs the files. (There probably is a Windows scp tool, but I don't know it.)
Rsync also supports going via ssh
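For example (the hosts and paths are placeholders):

    # Copy a single file over SSH.
    scp user@server2:/srv/docs/report.pdf /srv/local/

    # Mirror a directory over SSH, transferring only what changed.
    rsync -avz -e ssh user@server2:/srv/docs/ /srv/local/docs/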
As for tracking the request: there's nothing you can do to prevent a device between your computer and the destination logging the fact that a connection was made, but the encryption should prevent anyone from getting at the actual data you're sending.