My application currently supports both HTTP and HTTPS, and I would like to force the use of the latter whenever someone tries to access the former (which also happens to be the default). However, I'm unsure how to set this up given how I've deployed things.
To give a higher-level perspective, I have 3 nodes running on Heroku corresponding to:
A Next.js frontend app
An Express backend server
An nginx reverse proxy that acts as the entrypoint of the system and routes requests to either the frontend or the backend.
How would one go about forcing the use of HTTPS? Is that configured at the proxy level? At the frontend level? Or maybe at the DNS level?
I think that's usually done at the proxy level, but I'm not sure, and the fact that I'm using the SSL certificate that Heroku provides out of the box makes things even more confusing.
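For context, my rough guess is a rule like this at the nginx entrypoint (an untested sketch; I'm assuming Heroku's router terminates TLS in front of my proxy and sets the X-Forwarded-Proto header, and the upstream names and ports are made up):

    # Hypothetical upstreams for the two apps.
    upstream frontend { server 127.0.0.1:3000; }   # Next.js
    upstream backend  { server 127.0.0.1:5000; }   # Express

    # Heroku terminates TLS before traffic reaches this proxy, so the
    # scheme the client actually used arrives in X-Forwarded-Proto.
    server {
        listen 80;
        server_name _;

        # Bounce anything that did not originally arrive over HTTPS.
        if ($http_x_forwarded_proto != "https") {
            return 301 https://$host$request_uri;
        }

        location /api/ {
            proxy_pass http://backend;
        }

        location / {
            proxy_pass http://frontend;
        }
    }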
Any suggestions?
Related
I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through Nginx's reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server now speaks HTTPS -- but the original upstream servers are still using plain HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified in transit, as in a man-in-the-middle attack, or to be stolen and used for malicious purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services do run on just HTTP on private networks just fine. However, there are a couple points to take into consideration:
Make sure unused ports are blocked
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080 -- can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires an open port.
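For example, with ufw and a deny-by-default policy (assuming SSH on port 22 and nginx as the only public entrypoint; adjust to your setup):

    # Block all inbound traffic by default, then open only the ports
    # nginx (and your own SSH session) actually need.
    sudo ufw default deny incoming
    sudo ufw allow 22/tcp    # SSH
    sudo ufw allow 80/tcp    # HTTP (or leave closed and serve HTTPS only)
    sudo ufw allow 443/tcp   # HTTPS
    sudo ufw enable

With that in place, http://example.com:8080 is unreachable from outside, even though the web server still listens on localhost:8080.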
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can be sniffed, even if authentication is required on localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; then it can capture everything that passes between the two endpoints.
One strategy to mitigate the dangers created by spyware is to use virtualisation to separate a single server into logical servers, or to use different hardware for unrelated things. This at least keeps things separate, so that the people responsible for application A don't assume that service X is something the team running application B is using. Anything out of place will more likely stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. Don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups, assign the users to them, and then adjust file ownership and permissions with chown and chmod to ensure that each service only has access to what it needs and nothing more. As an example, I've often wondered why NGINX needs read access to logs; in theory, it should only need write access. If the service were somehow compromised, the worst it could do is write a bunch of garbage to the logs, while an attacker would find their hands tied when it comes to retrieving sensitive information from them.
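As a minimal sketch of that idea (the user, group, and paths are illustrative, and distributions differ in their defaults):

    # /etc/nginx/nginx.conf -- run the worker processes as an
    # unprivileged user instead of root.
    user www-data;

    # Then lock the log directory down to that user and its group, e.g.:
    #   chown -R www-data:adm /var/log/nginx
    #   chmod 750 /var/log/nginx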
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is recommended only for development, not production. I've yet to find a good solution for localhost in production; I don't think one exists, and in my experience, I've never seen anyone worry about it.
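For completeness, the development-only nginx side of that is just an ordinary TLS server block pointing at the mkcert-generated files (paths are illustrative):

    # Dev-only HTTPS for localhost, using certificates generated with
    # mkcert (e.g. `mkcert localhost`), which the local browser trusts.
    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate     /etc/nginx/certs/localhost.pem;
        ssl_certificate_key /etc/nginx/certs/localhost-key.pem;

        location / {
            proxy_pass http://127.0.0.1:8080;   # the local app
        }
    }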
I have different versions of a backend service, and I would like nginx to act like a "traffic cop", sending users ONLY to the backend service that is currently live. Is there a simple way to do this without changing the nginx config each time I want to point users at a different backend service?
In this example, I want to shut down the live backend service and direct users to the test backend service. Then, vice-versa. I'm calling it a logical "traffic cop" which knows which backend service to direct users to.
I don't think adding all the backend services to proxy_pass via upstream load balancing will work; load balancing would not give me what I'm looking for.
I also do not want user root to update the /etc/hosts file on the machine, because of security and collision concerns with multiple programs editing /etc/hosts simultaneously.
I'm thinking of doing proxy_pass http://live-backend.localhost in nginx and using a local DNS server to manage the internal IP for live-backend.localhost, which I can re-point to another backend IP at any time. However, would nginx actually query the DNS server on every request, or does it resolve the name once and then cache the IP forever?
Am I over-thinking this? Is there an easy way to do this within nginx?
You can use the backup parameter to the server directive so that the test server will only be used when the live one is down.
NGINX queries DNS on startup and caches it, so you'd still have to reload it to update.
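To illustrate both points (the IPs, names, and TTL are placeholders): the backup approach needs no reloads at all, while the startup-time DNS caching can be avoided by combining a resolver directive with a variable in proxy_pass -- nginx then re-resolves the name whenever the valid= period expires.

    # Option 1: hot standby -- the test server only receives traffic
    # when the live server is down.
    upstream backend {
        server 10.0.0.1:8080;          # live
        server 10.0.0.2:8080 backup;   # test
    }

    # Option 2: runtime DNS resolution instead of resolve-once-at-startup.
    server {
        listen 80;

        resolver 127.0.0.1 valid=10s;  # your local DNS server, 10-second cache

        location / {
            # Because proxy_pass uses a variable, nginx resolves the name
            # through the resolver above at request time, not at startup.
            set $backend "live-backend.localhost";
            proxy_pass http://$backend;
        }
    }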
I want to ask whether it's possible to configure multiple SSL certificates on one IP in Jelastic with the Nginx load balancer.
The use case is a proxy server that will receive requests from multiple custom domains.
For example:
example-proxy.com points to a public IP address assigned to a Jelastic Jetty application.
Now the custom domains point to the Jetty application:
custom-domain-example.com CNAME www points to example-proxy.com etc.
custom-domain-example-N.org CNAME www points to example-proxy.com etc.
Is it possible to have this kind of configuration with Jelastic?
Is this possible via the existing Jelastic API? Right now what I see in the API docs is BindSSL, but it seems it can only bind one certificate. Is this correct?
Yes it's possible, but you need to configure it manually (just in nginx configs) instead of using the Jelastic dashboard/API SSL feature.
The other point to remember is that because there's one IP per container, multiple SSL certificates can only be served via SNI. That may have implications for you depending on which browsers your users use; in most cases it's OK now (old mobile OSes and Windows XP are the primary exceptions).
The BindSSL API method allows you to automatically configure one SSL certificate on the externally facing node of your environment (Nginx Load Balancer in your case). If you attempt to BindSSL multiple times you just replace the existing certificate (not add multiple certificates).
Basically this functionality was built before SNI was widely supported, so it was assumed to be one SSL cert per environment. You can read more about SNI to make an informed decision about whether it will suit your needs here: http://blog.layershift.com/sni-ssl-production-ready/
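For reference, serving multiple certificates via SNI is just one nginx server block per hostname, each with its own certificate pair (paths are illustrative):

    # nginx picks the certificate based on the hostname the client
    # sends in the TLS handshake (SNI).
    server {
        listen 443 ssl;
        server_name custom-domain-example.com;
        ssl_certificate     /var/lib/nginx/ssl/custom-domain-example.com.crt;
        ssl_certificate_key /var/lib/nginx/ssl/custom-domain-example.com.key;
    }

    server {
        listen 443 ssl;
        server_name custom-domain-example-N.org;
        ssl_certificate     /var/lib/nginx/ssl/custom-domain-example-N.org.crt;
        ssl_certificate_key /var/lib/nginx/ssl/custom-domain-example-N.org.key;
    }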
An alternative for your needs would be to purchase a multi-domain SSL certificate (SAN cert). This lets you contain multiple hostnames within 1 certificate. Since you mentioned that you're our customer, you can contact our SSL team for details/pricing for this option.
If you still want to use multiple SSL certs + serve them via SNI, you will probably need to use the Read and Write API methods to save the SSL certificate parts and config. file(s) on your Nginx node.
Don't forget to restart the nginx service (you can use RestartNodeById for that) after any config. changes.
EDIT: As you mentioned that your end users will have control over this process, you probably prefer to use reload instead of restart (see http://nginx.org/en/docs/beginners_guide.html#control).
You can invoke that via Jelastic API using ExecCmdById, with commandList=[{"command": "sudo service nginx reload"}]
But take care if you're allowing end users to upload their own certificates via your application - you need to ensure that what they upload is really a certificate and nothing malicious...
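As a first sanity check on an uploaded file, openssl can confirm it at least parses as a certificate (this alone doesn't make the rest of the upload path safe):

    # Parses the file as a PEM certificate and prints its contents;
    # a non-zero exit status means it is not a valid certificate.
    openssl x509 -in uploaded-cert.pem -noout -text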
Idea
Gradually move to a few small-scale dedicated servers in combination with an expensive cloud platform, where, under light traffic, the dedicated servers should be filled up first before the cloud kicks in -- hedging against occasional traffic spikes.
nginx
Is there an easy way (without nginx plus) to achieve a "waterfall-like" setup, where the small servers are used first, up to a maximum number of concurrent connections (or better, a bandwidth limit), before the cloud platform sees any traffic?
Nginx Config, Libraries, Tools?
Thanks
You can use the nginx upstream module.
If you want total control, set your cloud servers with the backup parameter so that they won't be used until your primary servers fail. Then use custom monitoring scripts to determine when those cloud servers should kick in, change the nginx config, and remove the backup keyword from them. Also monitor for the conditions under which you want to stop using the cloud servers, and alter the nginx config again.
A simpler solution (but without fine tuning, like avoiding spikes) is to use the max_conns=number parameter. Nginx should start to use the backup server once all the other servers have reached their maximum number of connections (I didn't test it).
NOTE: the max_conns parameter was only available in paid nginx between v1.5.9 and v1.11.5, so with those versions the only solution is your own monitoring plus reloading the nginx config whenever the upstream servers need to change. Thanks to Mickaël Le Baillif's comment for pointing out that this parameter is now available to all.
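A sketch of that max_conns approach (untested, as noted above; the hostnames and limits are placeholders):

    upstream waterfall {
        # The dedicated servers take traffic first, each capped at 100
        # concurrent connections.
        server dedicated1.example.com:8080 max_conns=100;
        server dedicated2.example.com:8080 max_conns=100;

        # The cloud server should only see traffic once the dedicated
        # servers are saturated or down.
        server cloud1.example.com:8080 backup;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://waterfall;
        }
    }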
I'm writing a web app in Go. Currently I have a layout that looks like this:
[CloudFlare] --> [Nginx] --> [Program]
Nginx does the following:
Performs some redirects (e.g. www.domain.tld --> domain.tld)
Adds headers such as X-Frame-Options.
Handles static images.
Writes access.log.
In the past I used Nginx because it performed SSL termination and some other tasks. Since that's now handled by CloudFlare, all it does, essentially, is serve static images. Given that Go has a built-in HTTP FileServer and CloudFlare could take over serving static images for me, I started to wonder why Nginx is in front in the first place.
Is it considered a bad idea to put nothing in front?
In your case, you can possibly get away with not running nginx, but I wouldn't recommend it.
However, as I touched on in this answer, there's still a lot it can do that you'd otherwise need to "reinvent" in Go (a sketch of a few of these follows the list):
Content-Security headers
SSL (is the connection between CloudFlare and you insecure if they are terminating SSL?)
SSL session caching & HSTS
Client body limits and header buffers
5xx error pages and maintenance pages when you're restarting your Go application
"Free" logging (unless you want to write all that in your Go app)
gzip (again, unless you want to implement that in your Go app)
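For a sense of scale, here's roughly what several of those look like in nginx, each a single directive (values are illustrative):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;  # SSL termination
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        add_header X-Frame-Options SAMEORIGIN;      # security headers
        client_max_body_size 8m;                    # client body limit
        gzip on;                                    # compression
        error_page 502 503 504 /maintenance.html;   # shown while the Go app restarts
        access_log /var/log/nginx/access.log;       # "free" logging

        location = /maintenance.html {
            root /var/www/static;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;       # the Go program
        }
    }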
Running Go standalone makes sense if you are running an internal web service or something lightweight, or genuinely don't need the extra features of nginx. If you're building web applications then nginx is going to help abstract "web server" tasks from the application itself.
I wouldn't use nginx at all, to be honest. Someone benchmarked Go behind nginx via FastCGI against the standalone Go HTTP library, and the results were quite interesting: the standalone server handled requests considerably better than when running behind nginx. The final recommendation was that if you don't need specific features of nginx, don't use it. (full article)
You could run it standalone, and if you're using partial or full SSL on your site, you could use another Go HTTP server to redirect to the safe HTTPS routes.
Don't use nginx if you don't need it.
Go does SSL in fewer lines than you'd have to write in an nginx config file.
The only remaining reason is the free logging, but I wonder how many lines of code logging takes in Go.
There is a nice article in Russian about writing a reverse proxy in Go in 200 lines of code.
If Go covers everything you'd use nginx for, then nginx is not required.
You do need nginx if you want several Go processes, or Go and PHP, on the same site.
Or if adding nginx fixes some problem you're having with Go alone.