SSL with GoLang using reverse proxy Nginx

I am currently writing a GoLang website and would like to add SSL soon. My question is: what are the advantages/disadvantages of using the built-in Go SSL packages, or should/can I just do SSL in nginx when I use it as the reverse proxy?

It is ultimately up to you, but nginx's SSL support is highly configurable, battle-tested and performant.
nginx can provide an SSL session cache to boost performance - ssl_session_cache
Good cipher compatibility
I believe that nginx's SSL implementation is faster (more req/s and less CPU) than Go's, but I have not tested this myself. This would not be surprising given the maturity of the nginx project.
Other benefits like response caching for both proxied and static content.
The downside, of course, is that it's another moving part that requires configuration. If you are already planning to use nginx as a reverse proxy however I would use it for SSL as well.
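For illustration, a minimal nginx server block that terminates SSL and proxies to a Go backend might look like the sketch below; the domain, certificate paths and backend port are all hypothetical placeholders:

```nginx
# Terminate TLS in nginx and forward plain HTTP to the Go app.
server {
    listen 443 ssl;
    server_name example.com;                 # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_session_cache   shared:SSL:10m;      # session cache for performance
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://127.0.0.1:8080;    # Go app listening locally
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this setup the Go program itself only speaks plain HTTP on a loopback port, and all certificate handling stays in one place.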

Related

In a reverse proxy server + Python HTTPS Server, who should handle SSL Certificates for HTTPS connections?

Suppose I want to use a combination of Nginx (or perhaps another proxy, since it doesn't proxy HTTP/2 requests) and Hypercorn. As both can handle SSL certificate files, I wonder which is best suited to do this for an HTTPS request. It is important to me that Hypercorn can listen on port 443, and I'm not sure it can do that without specifying the certfile and keyfile parameters.
Well, that depends on what you want to do.
The simplest solution is to configure both to use SSL.
Nginx will receive the request, decrypt it, process it, and send it to Hypercorn on port 443 as an HTTPS client. Hypercorn will see the request like any normal HTTPS request.
If your goal is security: go with both.
If your goal is just to not have Hypercorn exposed directly, you can configure it not to use SSL.
Nginx supports proxying requests to an HTTPS upstream by default, so I think that is the best solution. However, you might need to set HTTP headers so that Hypercorn correctly understands who the client is, by playing with X-Forwarded-For, X-Forwarded-Host and any other headers Hypercorn might need.
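To sketch the second option (nginx terminates TLS, Hypercorn runs plain HTTP and is not exposed directly), the proxy block might look like this; the domain, certificate paths, upstream port and header set are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                 # hypothetical domain
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;

    location / {
        # Use https://... here instead if Hypercorn is also configured with TLS.
        proxy_pass http://127.0.0.1:8000;
        # Pass the original client details on to Hypercorn.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```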

nginx server use HTTP/2 protocol version by default?

I have various nginx servers, and recently I noticed that by default these servers respond using the HTTP/2 version of the protocol.
I haven't configured the http2 parameter in nginx.conf.
Is this the right behavior?
No, that is not the default.
If you observe HTTP/2 despite not configuring it in nginx, you likely have a CDN in front of it, e.g. Cloudflare.

Enable http2 in nginx only for some clients

I have two sets of users, using okhttp/2.7.0 and okhttp/3.12.0. I want to enable HTTP/2 in nginx only for those users who are using okhttp/3.12.0. The clients make sure to send their identifier. Is there a way to use this information and enable HTTP/2 only for those users?
Note: Multiple ports is not an option for me.
My nginx and OS version
nginx version: nginx/1.14.2
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.4)
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
My nginx conf goes like this
server {
listen 443 ssl http2;
...
This is not really possible. The client identifier is only sent as part of an HTTP message, which is only sent after the version of HTTP to use has been decided. The initial messages that create the connection and set up the SSL/TLS parameters won't carry the client identifier, and that handshake is where the HTTP version is usually decided, using the ALPN extension to TLS.
There are however other ways this might be possible, including:
Depending on the capabilities of the client. I'm not familiar with okhttp, but from a quick Google search it seems ALPN support was only added in v3. You could disable the older NPN on your server; if that is correct, then in theory the older client will not be able to negotiate HTTP/2 and will fall back to HTTP/1.1. Unfortunately there appears to be no nginx config option for that, so you'd need to build a special version of OpenSSL without NPN support and then compile nginx against it. Probably more hassle than it's worth.
Use Apache instead of nginx, as it never supported NPN.
Use multiple IPs and somehow direct each client version to a separate IP. Though I suspect that, since you cannot use separate ports, you probably cannot do this either.
All in all it's a bit of a hack, to be honest, and not something I would suggest you pursue. What you have not explained, however, is why you want HTTP/2 for one set of clients but not the other. Maybe there's a better way to achieve what you want if you explain that.

Go web server with nginx server in web application [duplicate]

This question already has answers here:
What are the benefits of using Nginx in front of a webserver for Go?
(4 answers)
Closed 7 years ago.
Sorry, I cannot find this answer from a Google search, and nobody seems to explain clearly the difference between a pure Go web server and an nginx reverse proxy. Everybody seems to put nginx in front for web applications.
My question is: while Go has all the HTTP serving functions, what is the benefit of using nginx over a pure Go web server?
In most cases, we set up the Go web server for all routes and put the nginx configuration in front.
Something like:
limit_req_zone $binary_remote_addr zone=limit:10m rate=2r/s;
log_format lf '[$time_local] $remote_addr';
server {
    listen 80;
    access_log /var/log/nginx/access.log lf;
    error_log /var/log/nginx/error.log;
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    server_name 1.2.3.4 mywebsite.com;
}
When we have this Go:
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
Is the traffic to nginx and to the Go web server different?
If not, why do we have two layers of web server?
Please help me understand this.
Thanks,
There's nothing stopping you from serving requests from Go directly.
On the other hand, there are some features that nginx provides out-of-the box that may be useful, for example:
handle many virtual servers (e.g. have Go respond on app.example.com and a different app on www.example.com)
http basic auth in some paths, say www.example.com/secure
access logs
etc
All of this can be done in Go, but it would require programming, while in nginx it's just a matter of editing a .conf file and reloading the configuration. nginx doesn't even need a restart for these changes to take effect.
(From a "process" point of view, nginx could be managed by an ops employee, with root permissions, running in a well known port, while developers deploy their apps on higher ones.)
The general idea of using nginx in this scenario is to serve up static resources via nginx and allow Go to handle everything else.
Search for "try_files" in nginx. It checks the disk for the existence of a file and serves it directly, instead of making the Go app handle static assets.
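A minimal sketch of that pattern, assuming (hypothetically) that static files live in /var/www/static and the Go app listens on port 8080:

```nginx
server {
    listen 80;
    root /var/www/static;   # hypothetical static asset directory

    location / {
        # Serve the file from disk if it exists; otherwise hand the
        # request to the Go application.
        try_files $uri @goapp;
    }

    location @goapp {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```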
This has been asked a few times before[1] but for posterity:
It depends.
Out of the box, putting nginx in front as a reverse proxy is going to
give you:
Access logs
Error logs
Easy SSL termination
SPDY support
gzip support
Easy ways to set HTTP headers for certain routes in a couple of lines
Very fast static asset serving (if you're serving off S3/etc. though, this isn't that relevant)
The Go HTTP server is very good, but you will need to reinvent the
wheel to do some of these things (which is fine: it's not meant to be
everything to everyone).
I've always found it easier to put nginx in front—which is what it is
good at—and let it do the "web server" stuff. My Go application does
the application stuff, and only the bare minimum of headers/etc. that
it needs to. Don't look at putting nginx in front as a "bad" thing.
Further, to extend on my answer there, there's also the question of crash resilience: your Go application isn't restricted by a configuration language and can do a lot of things.
Some of these things may crash your program. Having nginx (or HAProxy, or Varnish, etc.) as a reverse proxy can give you some request buffering (to allow your program to restart) and/or serve stale content from its local cache (i.e. your static home page), which may be better than having the browser time out and show a "cannot connect to server" error.
On the other hand, if you're building small internal services, 'naked' Go web servers with your own logging library can be easier to manage (in terms of ops).
If you do want to keep everything in your Go program, look at gorilla/handlers for gzip, logging and proxy-header middleware, and lumberjack for log rotation (otherwise you can use your system's logging tools).

How can I use Lift asynchronously with Nginx?

I want to use Nginx as a frontend proxying requests to a Lift application.
In this post
http://scala-programming-language.1934581.n4.nabble.com/Simple-deployment-of-Lift-apps-on-Jetty-Nginx-td1980295.html
David Polak recommends using nginx as a reverse proxy. But in the book "Nginx HTTP Server" by C. Nedelcu, I read this: "...the reverse proxy mechanism that we are going to describe in this chapter is not the optimal solution. It should be employed in problematic cases...", and FastCGI is described as the best choice.
The next option I see is to use Lift with Netty, as here: https://github.com/jrwest/lift-and-netty-examples but it seems it's just an experiment for now.
Maybe I am missing something?
I am a big fan of Nginx (you can verify that by looking at my SO/SF profiles) and my opinion is that Nginx is a perfect fit for many, many uses.
Nginx can be used as a frontend to a Lift application via HTTP transport (i.e. the proxy_pass directive in Nginx), just like Nginx is used to proxy to Apache, Jetty, Tomcat or any other backend server talking HTTP. fastcgi_pass is designed for proxying to FastCGI backends. I have not seen any benchmarks on which transport implementation is more efficient, but I guess the difference will be smaller than the differences implied by the programming language/app server technologies.
One more note: I have no idea how the FastCGI transport could be used to implement Comet applications. At the same time, Lift's Comet applications work perfectly via Nginx.
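For long-lived Comet connections specifically, the proxy configuration usually wants buffering relaxed and the read timeout raised so long-polling requests can stay open; a sketch (the backend address and timeout values are assumptions):

```nginx
location /comet {
    proxy_pass http://127.0.0.1:8080;   # Lift app behind Jetty, hypothetical port
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;                # push data to the client as it arrives
    proxy_read_timeout 120s;            # allow long-polling requests to stay open
}
```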
