Go websocket, nginx proxy: is this correct?

I have a RESTful server in Go, and it's behind nginx. Everything is fine and we are happy with this setup (nginx and Go), but now we have a websocket route for this application. (It currently works OK with nginx on our staging server, but with no real load yet.)
The questions:
1) Is it good for my websocket route to be behind nginx too? Is there any good reason for or against this?
2) Is there any way to bypass the nginx proxy for this route and serve it directly with Go, without using another subdomain or another binary?
Thanks!

I am no nginx expert, but given that nobody else has answered, I will present some of my research.
1) Yes, nginx is definitely a good choice for that. You can find some benchmarks here. Possible caveats are mentioned in this (older) post. The most important point to consider is the timeout aspect: nginx closes idle proxied connections after a timeout, which matters for long-lived websocket connections. These two answers give helpful information in that regard.
2) I'm not exactly sure what you want to achieve by that, but you could simply use a different port, since websockets are not subject to the same-origin policy, or use the TCP forwarding module described in one of the answers above.
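For reference, a minimal sketch of an nginx location block that proxies a websocket route might look like the following; the upstream address, the /ws path, and the timeout values are all assumptions, not something taken from the question:

```nginx
# Hypothetical upstream for the Go application.
upstream go_app {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # Websocket route; the /ws path is assumed.
    location /ws {
        proxy_pass http://go_app;

        # The websocket handshake is an HTTP/1.1 Upgrade request,
        # so these three directives are required.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # nginx closes idle proxied connections after proxy_read_timeout
        # (60s by default); raise it for long-lived websockets.
        proxy_read_timeout 1h;
        proxy_send_timeout 1h;
    }
}
```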

Related

DNS points to www.example.com but not to http://www.example.com?

So I'm trying to deploy a Ghost blog onto a Google Cloud VM instance and I can't get it to work. Part of the problem, I think, is that I haven't set up the DNS correctly. I bought farodefe.org via Google Domains and tried to configure it following this tutorial, and it worked... partially. I used dig on Ubuntu to try to verify my DNS configuration. Here are the results:
[screenshot of dig output]
As seen in the image above, when I do:
dig farodefe.org
and/or
dig www.farodefe.org
I do receive an answer to my query.
But then I do dig http://www.farodefe.org and I receive nothing.
Why is this happening and how can I fix it?
Thanks in advance!
But then I do dig http://www.farodefe.org
But this does not mean anything, or at least certainly not what you think. The DNS has no concept of URLs, only names.
So here you are querying for the literal name http://www.farodefe.org (such a name is possible in the DNS, just not with an A record, which is the default record type dig queries for), which is certainly not what you had in mind.
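To see the same distinction outside of dig, here is a small Go sketch (the domain is taken from the question) showing that a hostname resolves while a URL string does not:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A bare hostname is a valid DNS name and resolves normally.
	addrs, err := net.LookupHost("www.farodefe.org")
	fmt.Println("hostname:", addrs, err)

	// A URL string is not a valid DNS hostname, so this lookup
	// fails with an error.
	addrs, err = net.LookupHost("http://www.farodefe.org")
	fmt.Println("url:", addrs, err)
}
```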
Part of the problem, I think, is that I haven't set up the DNS correctly.
Don't think, test. If you are not familiar with DNS, use good online troubleshooting tools, like DNSViz. If you see any red things in the output, your DNS configuration needs to be fixed. Alternatively, your DNS provider should be able to help you.
DNS-wise, you first need to understand the difference between authoritative and recursive nameservers and service. When testing, first send your queries to the authoritative nameservers (which is what DNSViz does); only when those are fine and you still have problems should you query recursive nameservers as needed.
If you want to understand more, also learn about the OSI/Internet layers: how HTTP is layered on top of TCP and IP, and how the DNS (itself a service running over TCP and UDP) is used, in a web setting, to map a given hostname (website) to one or more IPv4 or IPv6 addresses, so that an HTTP client (like a browser) can initiate its TCP/IP connection.

I am slowly going crazy. This must have an easy answer. How do localhost RPC servers tell clients which port they're running on?

I have a problem, I'm sure this is a common problem, but I've been Googling for hours and I've found nothing.
I have written an RPC server for an application that serves requests over HTTP.
It only needs to be accessible from localhost.
I run my server on port 0, because I want to avoid port clashes. I receive a dynamic port. The server knows this port.
My client connects to... Which port? How does it know???
What does the server need to do to publicise the port?
There must be a simple, standard solution to this. I don't want to hard-code a port, since I don't want the user to ever have to touch port settings. Every question I find tells me to hard-code the port. This seems completely insane for a localhost server. Surely this is a common problem? Surely there's a better way?
What on earth is the standard way of doing this?
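For context, the port-0 mechanism the question describes looks like this in Go. How the client then learns the port is an out-of-band convention (a port file, stdout, or an environment variable); there is no in-band standard, so the port file below is purely an assumption:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"os"
)

func main() {
	// Port 0 asks the OS for any free ephemeral port, avoiding clashes.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	// The listener knows which port the OS actually assigned.
	port := ln.Addr().(*net.TCPAddr).Port

	// One common convention: write the port somewhere clients can
	// find it. The file name here is hypothetical.
	os.WriteFile("server.port", []byte(fmt.Sprint(port)), 0o644)

	http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from port", port)
	}))
}
```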

NGINX Reverse Proxy A Port Between 2 Servers

Apologies if this is an obvious question; I've been reading up on NGINX and am hoping this is something I can use with my Icecast server.
Essentially I have the following setup:
ipAddress:8080 - Icecast Server (mount point is /stream)
domain.tld - Server running NGINX & hosting a PHP site.
What I'd like to do is take any request to, for example, domain.tld:8000/stream and have it return what is actually at ipAddress:8080/stream.
Is this something NGINX can handle? Forgive me if I am missing something obvious; presently all I can find are guides on redirecting files to ports, etc.
Thanks!
It is generally not advisable to reverse-proxy Icecast. It breaks an array of things and if not configured properly can bring down your web server.
If you want to run Icecast on port 80, then I've explained this for Debian (and derivatives like Ubuntu) here: http://lists.xiph.org/pipermail/icecast/2015-February/013198.html
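That said, if you do want to try the proxy the question describes despite those caveats, a minimal sketch might look like this (ipAddress is the placeholder from the question):

```nginx
server {
    # The port the question wants to expose on domain.tld.
    listen 8000;
    server_name domain.tld;

    location /stream {
        # Forward to the Icecast mount on the other server.
        proxy_pass http://ipAddress:8080/stream;

        # Icecast responses are endless streams; without these,
        # buffering and the default read timeout will break them.
        proxy_buffering off;
        proxy_read_timeout 1h;
    }
}
```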

Should I always use a reverse proxy for a web app?

I'm writing a web app in Go. Currently I have a layout that looks like this:
[CloudFlare] --> [Nginx] --> [Program]
Nginx does the following:
Performs some redirects (i.e. www.domain.tld --> domain.tld)
Adds headers such as X-Frame-Options.
Handles static images.
Writes access.log.
In the past I used Nginx because it performed SSL termination and some other tasks. Since that's now handled by CloudFlare, all it does, essentially, is serve static images. Given that Go has a built-in HTTP FileServer and CloudFlare could take over serving static images for me, I started to wonder why Nginx is in front in the first place.
Is it considered a bad idea to put nothing in front?
In your case, you can possibly get away with not running nginx, but I wouldn't recommend it.
However, as I touched on in this answer, there's still a lot nginx can do that you'll need to "reinvent" in Go:
Content-Security headers
SSL (is the connection between CloudFlare and you insecure if they are terminating SSL?)
SSL session caching & HSTS
Client body limits and header buffers
5xx error pages and maintenance pages when you're restarting your Go application
"Free" logging (unless you want to write all that in your Go app)
gzip (again, unless you want to implement that in your Go app)
Running Go standalone makes sense if you are running an internal web service or something lightweight, or genuinely don't need the extra features of nginx. If you're building web applications then nginx is going to help abstract "web server" tasks from the application itself.
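To make the "reinvent" point concrete, here is a rough sketch of what just three of those tasks (the X-Frame-Options header, access logging, and static files) look like when done in Go; the handler names and paths are illustrative:

```go
package main

import (
	"log"
	"net/http"
)

// withHeaders reimplements the header-setting nginx currently does.
func withHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Frame-Options", "DENY")
		next.ServeHTTP(w, r)
	})
}

// withLogging is a crude stand-in for nginx's access.log.
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()

	// Go's built-in FileServer takes over nginx's static-image duty.
	mux.Handle("/static/", http.StripPrefix("/static/",
		http.FileServer(http.Dir("static"))))

	log.Fatal(http.ListenAndServe(":8080", withLogging(withHeaders(mux))))
}
```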
To be honest, I wouldn't use nginx at all. Some nice dude benchmarked Go behind nginx via FastCGI against the standalone Go HTTP server, and the results he came up with were quite interesting: standalone hosting handled requests much better than running behind nginx, and his final recommendation was that if you don't need specific features of nginx, don't use it. (full article)
You could run it standalone, and if you're using partial/full SSL on your site, you could use another Go HTTP server to redirect to the safe HTTPS routes.
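A sketch of that redirect idea, assuming the certificate lives in cert.pem/key.pem (both file names are assumptions):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Plain-HTTP listener whose only job is to send clients to HTTPS.
	go func() {
		log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(
			func(w http.ResponseWriter, r *http.Request) {
				http.Redirect(w, r, "https://"+r.Host+r.RequestURI,
					http.StatusMovedPermanently)
			})))
	}()

	// The application itself is served over TLS.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over https")
	})
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```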
Don't use nginx if you do not need it.
Go does SSL in fewer lines than you'd have to write in an nginx config file.
The only real reason left is the "free" logging, but I wonder how many lines of code logging takes in Go.
There is a nice article in Russian about writing a reverse proxy in Go in 200 lines of code.
If Go can do everything you were using nginx for, then nginx is not required.
You do need nginx if you want several Go processes, or Go and PHP, on the same site.
Or if adding nginx happens to fix some problem you're seeing with Go alone.
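For a sense of scale: the Go standard library already ships a reverse proxy, so a trivial single-backend version fits in a dozen lines (the upstream address is a placeholder):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder backend; substitute your own upstream.
	target, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// httputil.NewSingleHostReverseProxy handles the forwarding logic.
	proxy := httputil.NewSingleHostReverseProxy(target)
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```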

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably being said incorrectly), and while I understand how node.js works on its own I have seen discussions about proxying HTTP requests through another server technology (a la NGinx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question it seems you are looking for an answer on the benefits of implementing a reverse proxy in front of your node.js webserver. In summary, a reverse proxy (depending on implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. By implementing these features within the proxy it allows you to focus on developing the code for your application and leaves the web server to do what it's good at, serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
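As a concrete illustration of a couple of those features, a minimal nginx block in front of a node process might look like this (the port and paths are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    # Static content is served and gzipped directly, never reaching node.
    location /static/ {
        root /var/www;
        gzip on;
    }

    # Everything else is proxied to the node.js application.
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```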
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is because in a production environment there are a number of concerns you don't want your application to have to handle on its own: caching, domain (or sub-domain) mapping, perhaps security, SSL, load balancing, and serving static files, to name a few.
The web servers are already built to do all those things for you, and so they can handle them and then pass only the requests on to your app that actually need to be handled by your app. They're also optimized for doing those things and will probably do them as well or better than the average developer can.
Hope that helps.
Another issue that people haven't mentioned here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.
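One common way to get that behaviour is nginx's error_page directive; a minimal sketch, with assumed ports and paths:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # While the app is down or restarting, the proxy returns 502/503/504;
    # map those to a friendly static maintenance page instead.
    error_page 502 503 504 /maintenance.html;

    location = /maintenance.html {
        root /var/www/errors;
        internal;
    }
}
```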
