How to provide secure communication between client & server? - http

I'm creating a web server using Jetty (v9) and I need any traffic between browsers and the server to be encrypted. I'll be uploading files to the server, plus the client/server will maintain a session carrying sensitive access tokens.
I don't have much experience with web servers, but it seems like the solution is to have the web server serve on port 443 so that communication will use the HTTPS protocol.
I was going to start running through this tutorial for configuring Jetty with SSL, but before I start messing around with certificates, signing, etc., I just wanted to ask if this is the right approach or if there is something else more suitable that I don't know about.

In answer to your question, using HTTPS is indeed the right approach.
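For concreteness, here is a minimal sketch of what that kind of tutorial ends up configuring in embedded Jetty 9: an SslContextFactory pointed at a keystore (created when you generate or import your certificate), attached to a connector. The keystore path and password below are placeholders, not values from your setup.

    import org.eclipse.jetty.server.HttpConfiguration;
    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.SecureRequestCustomizer;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.server.SslConnectionFactory;
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class HttpsServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server();

            // Keystore holding the server's certificate and private key
            // (path and password are placeholders).
            SslContextFactory sslContextFactory = new SslContextFactory();
            sslContextFactory.setKeyStorePath("/path/to/keystore.jks");
            sslContextFactory.setKeyStorePassword("changeit");

            // Mark requests arriving on this connector as secure.
            HttpConfiguration httpsConfig = new HttpConfiguration();
            httpsConfig.addCustomizer(new SecureRequestCustomizer());

            // Connector that speaks TLS on port 443.
            ServerConnector httpsConnector = new ServerConnector(
                    server,
                    new SslConnectionFactory(sslContextFactory, "http/1.1"),
                    new HttpConnectionFactory(httpsConfig));
            httpsConnector.setPort(443);

            server.addConnector(httpsConnector);
            // server.setHandler(...); // your servlets/handlers go here
            server.start();
            server.join();
        }
    }

Note that binding directly to port 443 usually requires elevated privileges, so a common pattern is to listen on a higher port such as 8443 and forward 443 to it at the firewall or load balancer.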

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but all of them are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites use HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified (as in a man-in-the-middle attack) or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080; can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
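As a complementary measure (assuming the backend is something you control, for example an embedded Jetty server in Java), you can also bind the backend to the loopback interface only, so it is unreachable from other machines even if a firewall rule is missed. A rough sketch:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    public class LoopbackOnlyBackend {
        public static void main(String[] args) throws Exception {
            Server server = new Server();

            // Plain-HTTP connector for the backend that sits behind the reverse proxy.
            ServerConnector connector = new ServerConnector(server);
            connector.setHost("127.0.0.1"); // only reachable from this machine
            connector.setPort(8080);        // the port NGINX proxies to

            server.addConnector(connector);
            // server.setHandler(...); // application handlers as usual
            server.start();
            server.join();
        }
    }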
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. Even though it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by a rogue service is either to use virtualisation to separate a single server into logical servers, or to use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A don't assume that an unfamiliar service X must be something the team running application B is using. Anything out of place is more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow good security practices on the server. For instance, don't run as root: use a non-root user for administrative tasks, and don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the users to them, and then adjust file ownership and permissions with chown and chmod to ensure that each service only has access to what it needs and nothing more. As an example, I've often wondered why NGINX needs read access to its logs; in theory it should only need write access to them. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, and an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development, not production. I've yet to find a good solution for localhost in production; I don't think it exists, and in my experience I've never seen anyone worry about it.
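If a non-browser client written in Java needs to call such a localhost HTTPS endpoint, one option is to trust the self-signed (or mkcert-issued) certificate explicitly by loading it into a truststore rather than disabling certificate verification. A sketch, assuming the certificate has been exported to a file named cert.pem and lists localhost in its subject alternative names:

    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import java.security.cert.Certificate;
    import java.security.cert.CertificateFactory;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class TrustLocalCert {
        public static void main(String[] args) throws Exception {
            // Load the self-signed (or locally issued) certificate from disk.
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            Certificate cert;
            try (FileInputStream in = new FileInputStream("cert.pem")) {
                cert = cf.generateCertificate(in);
            }

            // Put it into an in-memory truststore.
            KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            trustStore.load(null, null);
            trustStore.setCertificateEntry("localhost", cert);

            // Build an SSLContext that trusts that certificate.
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);
            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(null, tmf.getTrustManagers(), null);

            // Use it for an HTTPS call to the local endpoint (port is a placeholder).
            HttpsURLConnection conn =
                    (HttpsURLConnection) new URL("https://localhost:8443/").openConnection();
            conn.setSSLSocketFactory(sslContext.getSocketFactory());
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }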

Is gRPC not suited for small projects?

For the past 2 weeks I have been struggling to set up a simple backend that utilises gRPC and communicates with a mobile client.
Reading about this technology online, it feels like the proper, easy-going solution for my needs:
Bridging client/server communication written in multiple languages Java/Kotlin/Swift/Go.
Backwards compatibility checks for the API realized with buf
Efficient communication by transferring binary data and utilising HTTP2
Support for both RPC and REST thanks to grpc-gateway
However, when I decided to go down the gRPC path I faced a ton of issues (highlights of the issues, not actual questions):
How to share protobuf message definitions across clients and server?
How to manage third party protobuf message dependencies?
How to manage stub generation for projects using different build tools?
How to secure the communication using SSL certificates? Also keep in mind that here I am talking about mobile client <--> server communication and not server <--> server communication. (A TLS sketch follows at the end of this post.)
How to buy a domain, because SSL certificates are issued against public domains in order to be trusted by certificate authorities?
How to deploy a gRPC server, as it turns out that there aren't any easy-to-use PaaS offerings that support gRPC and HTTP/2? Instead you either need to configure the infrastructure yourself (load balancers, the machines hosting the server, installing the appropriate certificates) or just host everything on your own bare metal.
How to manage all of the above in a cost effective way?
This is more of a frustration question.
Am I doing something wrong and misunderstanding how to use gRPC, or is it simply too hard to set up for a small project that should run in production mode?
I feel like I wasted a ton of time without having made any progress.
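For the record, the TLS item above is comparatively little code once a certificate actually exists; most of the pain is in obtaining and distributing it. A rough sketch with grpc-java's Netty transport, where the certificate, key, CA file names, and the commented-out service class are placeholders:

    import java.io.File;

    import io.grpc.ManagedChannel;
    import io.grpc.Server;
    import io.grpc.netty.GrpcSslContexts;
    import io.grpc.netty.NettyChannelBuilder;
    import io.grpc.netty.NettyServerBuilder;

    public class GrpcTlsSketch {
        public static void main(String[] args) throws Exception {
            // Server side: serve gRPC over TLS using a certificate chain and private key
            // (file names are placeholders for whatever your CA issued).
            Server server = NettyServerBuilder.forPort(8443)
                    .sslContext(GrpcSslContexts.forServer(
                            new File("server-cert.pem"), new File("server-key.pem")).build())
                    // .addService(new YourServiceImpl()) // generated service implementation
                    .build()
                    .start();

            // Client side: trust the server's certificate. With a certificate from a
            // public CA, ManagedChannelBuilder's useTransportSecurity() is enough; the
            // explicit trustManager(...) is only needed for a private/internal CA.
            ManagedChannel channel = NettyChannelBuilder.forAddress("localhost", 8443)
                    .sslContext(GrpcSslContexts.forClient()
                            .trustManager(new File("ca-cert.pem"))
                            .build())
                    .build();

            channel.shutdownNow();
            server.shutdownNow();
        }
    }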

No need for HTTPS at all, but there seems to be no other way from the web browser

I have a web application running over an HTTPS connection. It also has to communicate with a Java application on the local area network.
This server is literally in the same room as the PC on which the web app is running. A simple HTTP connection would be completely fine between the two, but since the web app is served over HTTPS, the browser forces HTTPS for this connection as well.
It already feels like overkill that I must employ an HTTPS server in the Java application just because of that, and it still doesn't work, because now the browser complains that the certificate is self-signed.
I mean, do I really need to purchase an SSL certificate just so two of my computers in the same room can communicate? Even if I wanted to, I couldn't; there isn't even a fixed domain.
I'm confused. Is there a way around this?
UPDATE:
The web application is served from the Internet; that's why the HTTPS connection. It should receive data from a Java application running locally: hundreds of megabytes every couple of minutes (confidential medical images), so sending all of that through a proxy is not really an option.
I also wanted to avoid the need for any manual configuration on the user's side to make the communication work (like importing a certificate into the web browser), but maybe I have no other option.
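For reference, the HTTPS server side in the Java application is not much code once a certificate exists; the sticking point, as described above, is getting the browser to trust that certificate. A minimal sketch using the JDK's built-in com.sun.net.httpserver, where the keystore path, password, port, and allowed origin are placeholders:

    import java.io.FileInputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import com.sun.net.httpserver.HttpsConfigurator;
    import com.sun.net.httpserver.HttpsServer;

    public class LocalHttpsServer {
        public static void main(String[] args) throws Exception {
            // Keystore containing the certificate/key the browser must trust
            // (self-signed, mkcert-issued, or CA-issued); path/password are placeholders.
            char[] password = "changeit".toCharArray();
            KeyStore keyStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("local-keystore.jks")) {
                keyStore.load(in, password);
            }

            KeyManagerFactory kmf =
                    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, password);

            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(kmf.getKeyManagers(), null, null);

            HttpsServer server = HttpsServer.create(new InetSocketAddress(8443), 0);
            server.setHttpsConfigurator(new HttpsConfigurator(sslContext));
            server.createContext("/data", exchange -> {
                // The HTTPS web app calls this cross-origin, so CORS headers are
                // also needed (origin below is a placeholder).
                exchange.getResponseHeaders()
                        .add("Access-Control-Allow-Origin", "https://your-web-app.example");
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }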

Data encryption between 2 servers on file request

I have a quick question:
I have 2 websites; one has some links to file downloads. Those files are hosted on another server.
I need to encrypt the request data between the 2 servers. Can I do it just using an SSL certificate?
Any other/better idea?
Those files are private docs, so I don't want any third party to be able to track the file requests between the two servers.
Thanks
Yes, use SSL (or actually TLS) if you want to achieve transport-level security. If these are two servers that you control, you can configure your own self-signed certificates. If you want to make sure that only the two servers can communicate with each other, then require client authentication, where both the server and the client use a certificate/private key pair.
Most of the time the trick is to implement a sensible key management procedure. Setting up a web server to handle TLS using certificates should not be too hard.
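To make the client-authentication suggestion concrete in Java terms: each side presents its own certificate, so the SSLContext is built from both a keystore (its own identity) and a truststore (the peer's certificate). A sketch of the requesting side, with placeholder file names, password, and URL:

    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class MutualTlsClient {
        static KeyStore load(String path, char[] password) throws Exception {
            KeyStore ks = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream(path)) {
                ks.load(in, password);
            }
            return ks;
        }

        public static void main(String[] args) throws Exception {
            char[] password = "changeit".toCharArray();

            // This server's own certificate + private key (its identity).
            KeyManagerFactory kmf =
                    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(load("client-identity.p12", password), password);

            // The other server's certificate (the only peer we are willing to trust).
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(load("server-trust.p12", password));

            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

            // Fetch a file from the other server over mutually authenticated TLS.
            HttpsURLConnection conn = (HttpsURLConnection)
                    new URL("https://files.example.com/private/doc.pdf").openConnection();
            conn.setSSLSocketFactory(sslContext.getSocketFactory());
            System.out.println("Response: " + conn.getResponseCode());
        }
    }

The serving side then has to be configured to require client certificates as well, for example setNeedClientAuth(true) on Jetty's SslContextFactory, or ssl_verify_client on in nginx.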
An SSL certificate will work fine for ensuring the transfer is encrypted. Even a self-signed certificate will be fine for this purpose (provided you can tell the client you're going to use to accept the self-signed cert).
Alternatively, if they're two Linux machines, then scp (secure copy) is a great tool: it connects via SSH and grabs the files. (There is probably a Windows scp tool, but I don't know it.)
rsync also supports going over SSH.
As for tracking the request... there's nothing you can do to prevent a device between your computer and the destination computer from logging the fact that a connection was made, but the encryption should prevent anyone from getting at the actual data you're sending.

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably being said incorrectly), and while I understand how node.js works on its own, I have seen discussions about proxying HTTP requests through another server technology (such as Nginx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question, it seems you are looking for an answer about the benefits of implementing a reverse proxy in front of your node.js web server. In summary, a reverse proxy (depending on implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All of these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. Implementing them within the proxy allows you to focus on developing the code for your application and leaves the web server to do what it's good at: serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is because, in a production environment, there are a number of concerns you don't want your application to have to handle on its own: caching, domain (or sub-domain) mapping, security, SSL, load balancing, and serving static files, to name a few.
Web servers are already built to do all of those things for you, so they can handle them and pass on only the requests that actually need to be handled by your app. They're also optimized for those tasks and will probably do them as well as or better than the average developer could.
Hope that helps.
Another point that hasn't been mentioned here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.

Resources