How to configure G-WAN as a reverse proxy? - wordpress

I saw some performance figures for G-WAN and am interested in testing it as a reverse proxy for static content in front of Apache with APC for PHP opcode caching, to run a WordPress multisite. I can get G-WAN up and running, but I have no idea how to configure it as a reverse proxy, as there seems to be almost no information on it. Has anyone used G-WAN as a reverse proxy?

It's still an undocumented feature... maybe in the next release? #gil ?

Right now there's no easy way for you to do that. That will change with the next release.
We first hardcoded the reverse-proxy feature in G-WAN along with the load balancer. Then, as we needed to customize reverse-proxying, we implemented it as a protocol handler script.
Protocol handler scripts allow users to implement any protocol (SMTP, LDAP, etc.) without having to deal with multi-threading or socket events.
But in the end, to reduce complexity for users, we may revert to the hard-coded implementation, with connection handler scripts to let people customize the reverse proxy.
It's maturing under different use cases, hence the delay in publicly releasing this feature and a few others.
Rushing to implement features and interfaces is not always optimal, if the goal is to stay flexible and easy to use.

Related

AMQP/RabbitMQ consumer on NGINX

Is it possible to have a RabbitMQ consumer listening to a queue for messages via the AMQP protocol? I am aware that nginx only supports HTTP(S). I was wondering whether this could be achieved using a TCP module extension.
I am using nginx as an API gateway and want to do a protocol translation from AMQP to HTTP, since all the backend services are exposed over HTTP.
It would definitely be possible by writing your own C extension. nginx is suitable for TCP proxying, so I don't see any reason why you couldn't send your own TCP packets to RabbitMQ from nginx and consequently use nginx as a RabbitMQ consumer. It's probably a lot of work to make it run, and even more work to make it stable and reliable, but it's doable. Do me a favor though: don't do this. There will always be a better, more elegant, and simpler solution.
HTTP is definitely not suitable for consuming from a queue (in the AMQP sense) because you have to keep the socket open while you consume. However, you could write a C extension to publish/retrieve messages to/from RabbitMQ (and apparently, somebody has already done this). If you're not that much into C, or don't want to maintain your own nginx package, you could also write a Lua extension for lua-nginx-module (once again, somebody seems to have worked in this direction). These are PoCs for talking to the MQ from nginx, but they are not consumers. Both extensions seem to act in the HTTP context, so you need to answer (and close the socket) pretty fast.
However, as far as I know, there isn't any community-driven, well-maintained project that serves this purpose directly or indirectly; you'd have to build and maintain your own extension/client. Moreover, nginx is your current API gateway. Do take the risk into account: things could go really wrong. Only you can tell whether it is worth the hassle, but it's most likely not.
Since you didn't give much information on what exactly you're looking for, I've only answered the nginx/AMQP part. But you might just be looking for an HTTP interface to RabbitMQ. In that case, the Management Plugin might be the way to go. It has a pretty cool HTTP API. Once again, you'd lose all the stateful features (like basic consuming, ack/nack/reject), but that's inherent to the way HTTP is designed.
Ultimately, if you really need a RabbitMQ "basic-"consumer, I would recommend writing a proper consumer as a separate application and forgetting about doing this in nginx. That's definitely the best and most supported solution.
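For a feel of how little code such a standalone consumer takes, here is a minimal sketch in Go, assuming the github.com/rabbitmq/amqp091-go client; the broker URL and the queue name "tasks" are made up for illustration:

package main

import (
    "log"

    amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
    // connect to the broker directly; no nginx involved
    conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    ch, err := conn.Channel()
    if err != nil {
        log.Fatal(err)
    }
    defer ch.Close()

    // a long-lived consumer keeps the socket open, which HTTP cannot do
    msgs, err := ch.Consume("tasks", "", false, false, false, false, nil)
    if err != nil {
        log.Fatal(err)
    }
    for d := range msgs {
        log.Printf("received: %s", d.Body)
        d.Ack(false) // explicit ack: one of the stateful features HTTP loses
    }
}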

Serving two websites written in Google Go within a single VM

I have a VM from Digital Ocean. It currently has two domains linked to the VM.
I do not use any web server other than Go's built-in net/http package. Performance-wise I like it, and I feel like I have full control over it.
Currently I am using a single Go program that has multiple websites built in.
http.HandleFunc("test.com/", serveTest)
http.HandleFunc("123.com/", serve123)
http.HandleFunc("/", serve123)
As these are websites, the Go program listens on port 80.
The problem is that when I want to update just one website, I have to recompile and restart the whole thing, since both are built into the same binary.
1) Is there a way to make it hot-swappable with Go alone (without Nginx or Apache)?
2) What would be a standard best practice?
Thank you so much!
Well, you can do hot-swapping in Go, but I really wouldn't want to do that unless it's really necessary, as the complexity it adds isn't negligible (and I'm not only talking about code).
You can get something close with a kind of proxy that sits in front of the program and does a graceful swap whenever your binary changes: the principle is to have the binary on one port and the proxy on another. When a new binary is ready, you run it on yet another port, make the proxy redirect to the new port, then gracefully shut down the old one.
There was a tool for that in Go that I can't remember the name of…
EDIT: not the one I had in mind, but a close match: https://github.com/rcrowley/goagain
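For illustration, here is a toy version of that graceful-swap proxy in Go. The ports and the /swap admin endpoint are invented for the sketch, and a real setup would authenticate the endpoint rather than expose it publicly:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "sync/atomic"
)

var target atomic.Value // holds the current backend *url.URL

func main() {
    cur, err := url.Parse("http://127.0.0.1:8081") // the currently running binary
    if err != nil {
        log.Fatal(err)
    }
    target.Store(cur)

    // the Director reads the target on every request, so a swap takes
    // effect immediately without restarting the proxy
    proxy := &httputil.ReverseProxy{Director: func(r *http.Request) {
        u := target.Load().(*url.URL)
        r.URL.Scheme = u.Scheme
        r.URL.Host = u.Host
    }}

    // toy admin endpoint: after starting the new binary on another
    // port, hit /swap?to=127.0.0.1:8082 to move traffic over
    http.HandleFunc("/swap", func(w http.ResponseWriter, r *http.Request) {
        next, err := url.Parse("http://" + r.URL.Query().Get("to"))
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        target.Store(next)
    })
    http.Handle("/", proxy)
    log.Fatal(http.ListenAndServe(":80", nil))
}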
Personal advice: use a reverse proxy for that; it's much simpler to do. My personal setup is to use h2o to terminate SSL, HTTP/2, etc., and send the requests to the various websites running in the background. Not only Go ones, but also PHP ones, a GitLab instance, etc. It's much more flexible, and the performance penalty of the proxy is small…
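For the Go-only flavor of that setup, a minimal host-routing front proxy is only a few lines; the backend ports here are invented, and unlike h2o this sketch does no SSL termination:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // each site is its own process on its own local port, so one can be
    // rebuilt and restarted without touching the other
    sites := map[string]string{
        "test.com": "http://127.0.0.1:8081",
        "123.com":  "http://127.0.0.1:8082",
    }
    proxies := map[string]*httputil.ReverseProxy{}
    for host, addr := range sites {
        u, err := url.Parse(addr)
        if err != nil {
            log.Fatal(err)
        }
        proxies[host] = httputil.NewSingleHostReverseProxy(u)
    }
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if p, ok := proxies[r.Host]; ok {
            p.ServeHTTP(w, r)
            return
        }
        http.NotFound(w, r)
    })
    log.Fatal(http.ListenAndServe(":80", nil))
}

Note that r.Host may include a port; a real setup would normalize it before the lookup.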

Varnish to be used for https

Here's the situation. I have clients on a secured network (HTTPS) that talk to multiple backends. Now I want to set up a reverse proxy, mainly for load balancing (based on header data or cookies) and a little caching, so I thought Varnish could be of use.
But Varnish does not support SSL connections. As I've read in many places, quoting: "Varnish does not support SSL termination natively". However, I want every connection, i.e. client-to-Varnish and Varnish-to-backend, to be over HTTPS. I cannot have plaintext data anywhere on the network (there are restrictions), so nothing else can be used as an SSL terminator (or can it?).
So, here are the questions:
Firstly, what does it mean (if someone can explain in simple terms) that "Varnish does not support SSL termination natively"?
Secondly, is this scenario a good one to implement using Varnish?
And finally, if Varnish is not a good contender, should I switch to some other reverse proxy? If yes, which would suit the scenario? (HAProxy, Nginx, etc.)
what does this mean (if someone can explain in simple terms) that "Varnish does not support SSL termination natively"
It means Varnish has no built-in support for SSL. It can't operate in a path with SSL unless the SSL is handled by separate software.
This is an architectural decision by the author of Varnish, who discussed his contemplation of integrating SSL into Varnish back in 2011.
He based this on a number of factors, not the least of which was wanting to do it right if at all, while observing that the de facto standard library for SSL is openssl, which is a labyrinthine collection of over 300,000 lines of code, and he was neither confident in that code base, nor in the likelihood of a favorable cost/benefit ratio.
His conclusion at the time was, in a word, "no."
That is not one of the things I dreamt about doing as a kid and if I dream about it now I call it a nightmare.
https://www.varnish-cache.org/docs/trunk/phk/ssl.html
He revisited the concept in 2015.
His conclusion, again, was "no."
Code is hard, crypto code is double-plus-hard, if not double-squared-hard, and the world really don't need another piece of code that does an half-assed job at cryptography.
...
When I look at something like Willy Tarreau's HAProxy I have a hard time to see any significant opportunity for improvement.
No, Varnish still won't add SSL/TLS support.
Instead, in Varnish 4.1 we have added support for Willy's PROXY protocol, which makes it possible to communicate the extra details from an SSL-terminating proxy, such as HAProxy, to Varnish.
https://www.varnish-cache.org/docs/trunk/phk/ssl_again.html
This enhancement could simplify integrating varnish into an environment with encryption requirements, because it provides another mechanism for preserving the original browser's identity in an offloaded SSL setup.
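To make the PROXY protocol concrete, here is a toy Go snippet showing the single text line (version 1 of the protocol) that a terminating proxy sends ahead of the real traffic; the addresses, ports, and listener setup are hypothetical:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Varnish 4.1+ can accept PROXY on a listener, e.g. started
    // with -a :6086,PROXY
    conn, err := net.Dial("tcp", "127.0.0.1:6086")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    // PROXY <family> <client ip> <proxy ip> <client port> <proxy port>
    // so Varnish still sees the browser's address, not the terminator's
    fmt.Fprintf(conn, "PROXY TCP4 203.0.113.7 192.0.2.10 51234 443\r\n")
    // ...followed by the request exactly as usual
    fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
}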
is this scenario good to implement using varnish?
If you need Varnish, use it, being aware that SSL must be handled separately. Note, though, that this does not necessarily mean that unencrypted traffic has to traverse your network... though it does make for a more complicated and CPU-hungry setup.
nothing else can be used as SSL-Terminator (or can be?)
The SSL can be offloaded on the front side of Varnish and re-established on the back side, all on the same machine running Varnish but by separate processes, using HAProxy, stunnel, nginx, or other solutions in front of and behind Varnish. Any traffic in the clear operates within the confines of a single host, so it is arguably not a point of vulnerability if the host itself is secure, since it never leaves the machine.
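As a sketch of the front half of that arrangement, here is a tiny TLS terminator in Go that forwards to a cleartext Varnish on the loopback interface. The port and certificate paths are hypothetical, and in practice you'd use HAProxy, stunnel, or nginx as the answer says:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Varnish listens in cleartext on the loopback only, so
    // unencrypted traffic never leaves the machine
    varnish, err := url.Parse("http://127.0.0.1:6081")
    if err != nil {
        log.Fatal(err)
    }
    proxy := httputil.NewSingleHostReverseProxy(varnish)
    // TLS is terminated here, in a separate process from Varnish
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}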
if varnish is not a good contender, should I switch to some other reverse proxy
This is entirely dependent on what you want and need in your stack, its cost/benefit to you, your level of expertise, the availability of resources, and other factors. Each option has its own set of capabilities and limitations, and it's certainly not unheard-of to use more than one in the same stack.

Proxys for WebDAV

I'd like to set up a reverse proxy for my WebDAV server. The main reason is so that I can better control which files are uploaded to the WebDAV server. I cannot do this at the WebDAV server itself; it's a service provided by Alfresco, and I have no idea whether it's possible to configure the WebDAV service at all.
In particular, I'd like to prevent my Mac from doing the AppleDouble thing on the WebDAV server, i.e. stop it from uploading ._* files for every real file I upload. As far as I know, there is no way to stop my Mac from attempting this.
Does the proxy server need to do more than merely relay HTTP requests back and forth? Does it also need to know something about WebDAV for this to work?
Which proxy servers could you recommend for this?
Günther
Unless I'm missing something, a reverse proxy will have to rewrite header fields (such as Destination: and If:) to work properly, and potentially even request/response bodies, and is thus unlikely to work well.
A "proper" proxy shouldn't get in the way, though.
You could do this with SabreDAV. It has a TemporaryFileFilter plugin that does exactly what you need. Not only does it intercept these resource forks, it also places them in a temporary 'quarantine'. This is important, because OS X will check whether the file was successfully written and fail horribly otherwise.
There are two things you'll still need to do to make this work, though:
Automatic cleanup of these files (a script suitable for cron is also supplied).
The actual proxy bit. This means you'll have to implement a Collection and a File class that perform the HTTP requests.
Disclaimer: I authored SabreDAV

Why use Mongrel2?

I'm confused about what purpose Mongrel2 serves/provides that nginx doesn't already cover.
(Yes, I've read the manual, but I must be too much of a noob to understand how it's fundamentally different from nginx.)
My current web application stack is:
- nginx: webserver
- Lua: programming language
- FastCGI + LuaJIT: to connect nginx to Lua
- Postgres: database
If I could only name one thing, it would be that Mongrel2 is built around ZeroMQ, which means that scaling your web server has never been easier.
If a request comes in, Mongrel2 receives it (nothing unusual here; same as Nginx or any other httpd). The next thing that happens is that Mongrel2 distributes the task of compiling a response to n (ZeroMQ-enabled) backends, waits for them to do the work, receives the results, compiles the response, and sends it off to the client.
Now, the magic is that n can be any number, and each of the n can be written in any language supported by ZeroMQ (20 or so). Plus, it all goes across the network, so each n can be a dedicated box, possibly in another datacenter.
In other words: with Nginx and all the rest, you have to handle scalability in your logic tier; Mongrel2 lets you start this (from a request/response-cycle point of view) right where the request hits your infrastructure, at the httpd, rather than letting complexity penetrate down to your logic tier, which blows complexity upwards by at least an order of magnitude, IMO.
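To get a feel for the pattern, here is a toy Go handler using the github.com/pebbe/zmq4 binding. It deliberately skips Mongrel2's actual wire format (sender UUID, connection id, path, netstring-encoded headers and body) and only shows the socket topology: Mongrel2 PUSHes requests to any number of handlers, which PUBlish their responses back. The socket addresses are hypothetical:

package main

import (
    "log"

    zmq "github.com/pebbe/zmq4"
)

func main() {
    pull, err := zmq.NewSocket(zmq.PULL) // receives requests from Mongrel2's PUSH socket
    if err != nil {
        log.Fatal(err)
    }
    defer pull.Close()
    pull.Connect("tcp://127.0.0.1:9997") // hypothetical send_spec address

    pub, err := zmq.NewSocket(zmq.PUB) // publishes responses to Mongrel2's SUB socket
    if err != nil {
        log.Fatal(err)
    }
    defer pub.Close()
    pub.Connect("tcp://127.0.0.1:9996") // hypothetical recv_spec address

    for {
        msg, err := pull.Recv(0)
        if err != nil {
            log.Fatal(err)
        }
        // any number of handlers, in any ZeroMQ-supported language and on
        // any box, can connect to the same pair of sockets
        pub.Send("echo: "+msg, 0)
    }
}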
You should look at the strengths of each and decide to use either or both, depending on your use cases.
While it seems on the surface that nginx does everything Mongrel2 provides, you'll find there are major differences in focus between the two.
Nginx shines as a front-end webserver, that can proxy requests to your backend webservers/appservers and also serve static content.
Mongrel2 is a slight change in the stack. As mentioned, its power comes from its use of ZeroMQ as the transport layer between it and the backend app servers. It can serve dynamic request URLs (app requests) and farm the compute portion of the task out to different backends using ZeroMQ.
Mongrel2 allows you to serve not just HTTP, WebSockets, etc., but other protocols too (if you're so inclined), all from the same server. The user would never know that portions of the app are being served from different backends.
If the functional requirements of your webapp keep changing, or you want to add things like streaming or the ability to code the back end in different languages, then I would definitely look at Mongrel2. Or even go hybrid, where you use nginx/haproxy/varnish for static files and caching, and everything else is directed to Mongrel2.
