How can I use Lift asynchronously with Nginx?

I want to use Nginx as a frontend that forwards requests to a Lift application.
In this post
http://scala-programming-language.1934581.n4.nabble.com/Simple-deployment-of-Lift-apps-on-Jetty-Nginx-td1980295.html
David Pollak recommends using nginx as a reverse proxy. But in the book "Nginx HTTP Server" by C. Nedelcu, I read this: "...the reverse proxy mechanism that we are going to describe in this chapter is not the optimal solution. It should be employed in problematic cases...", and FastCGI is described as the best choice.
The next option I see is to use Lift with Netty, as here: https://github.com/jrwest/lift-and-netty-examples but it seems to be just an experiment for now.
Maybe I am missing something?

I am a big fan of Nginx (my SO/SF profiles can attest to that), and my opinion is that Nginx is a perfect fit for a great many uses.
Nginx can be used as a frontend to a Lift application via HTTP transport (i.e. the proxy_pass directive in Nginx), just as Nginx is used to proxy to Apache, Jetty, Tomcat or any other backend server that talks HTTP. fastcgi_pass is designed for proxying to FastCGI backends. I have not seen any benchmarks on which transport is more efficient, but I would guess the difference is smaller than the differences implied by the programming language and app server technologies.
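In config terms, that HTTP proxying is a short server block; here is a minimal sketch, assuming the Lift app runs in Jetty on a local port (hostname and port are placeholders):

```nginx
server {
    listen 80;
    server_name lift.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;           # Jetty/Lift backend over plain HTTP
        proxy_set_header Host $host;                 # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;     # pass the client address to the app
    }
}
```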
One more note: I have no idea how the FastCGI transport could be used to implement Comet applications. At the same time, Liftweb's Comet applications work perfectly through Nginx.

Proxy bandwidth to / from backend server

I am using NGINX with the OpenResty framework.
I have a server acting as a reverse proxy between users and multiple sites. I am interested in logging 4 different types of bandwidth use.
Bytes from users to proxy
Bytes from proxy to users
Bytes from proxy to site server
Bytes from site server to proxy
Currently I have access to the NGINX variables $bytes_sent and $bytes_received, which are documented on the http://nginx.org/en/docs/stream/ngx_stream_core_module.html page. I believe these only tell half the story I am interested in.
What is not made clear is how these variables interact when measured in conjunction with proxy_pass http://$site_ip:$site_port$request_uri;
Is there some way I can calculate the 4 cases of bandwidth I am interested in without modifying NGINX? I can use a computed solution if such a thing can be written via the Lua hooks provided by OpenResty.
Thanks!
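For reference, in the http context the four directions above map onto built-in nginx variables, so a plain log_format may be enough without any Lua. This is a sketch under the assumption of a reasonably recent nginx ($upstream_bytes_received needs 1.11.4+, $upstream_bytes_sent needs 1.15.7+):

```nginx
# $request_length          - bytes from user to proxy (request line, headers, body)
# $bytes_sent              - bytes from proxy to user
# $upstream_bytes_sent     - bytes from proxy to site server
# $upstream_bytes_received - bytes from site server to proxy
log_format bandwidth '$remote_addr "$request" '
                     'in=$request_length out=$bytes_sent '
                     'up_out=$upstream_bytes_sent up_in=$upstream_bytes_received';

server {
    listen 80;
    access_log /var/log/nginx/bandwidth.log bandwidth;

    location / {
        proxy_pass http://$site_ip:$site_port$request_uri;
    }
}
```

The same variables are also readable from OpenResty's log-by-Lua phase via ngx.var, if you want to aggregate them rather than log them.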

Why use gunicorn with a reverse-proxy?

From Gunicorn's documentation:
Deploying Gunicorn
We strongly recommend using Gunicorn behind a proxy server.
Nginx Configuration
Although there are many HTTP proxies available, we strongly advise that
you use Nginx. If you choose another proxy server you need to make sure
that it buffers slow clients when you use default Gunicorn workers.
Without this buffering Gunicorn will be easily susceptible to
denial-of-service attacks. You can use slowloris to check if your proxy
is behaving properly.
Why is it strongly recommended to use a proxy server, and how would the buffering prevent DOS attacks?
According to the Nginx documentation, a reverse proxy can be used to provide load balancing, provide web acceleration through caching or compressing inbound and outbound data, and provide an extra layer of security by intercepting requests headed for back-end servers.
Gunicorn is designed to be an application server that sits behind a reverse proxy server that handles load balancing, caching, and preventing direct access to internal resources.
If Gunicorn's synchronous workers are exposed directly to the internet, a DoS attack can be performed by creating a load that trickles data to the servers, as Slowloris does.
The reason is that there are many slow clients that need time to consume server responses, while Gunicorn is designed to respond fast. There is an explanation of this situation for Unicorn, a similar web server for Ruby.
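The buffering the Gunicorn docs refer to is what nginx does by default with proxy_buffering: nginx reads the whole response from the worker quickly, then feeds it to the slow client itself, so the worker is freed immediately. A minimal sketch of that setup (address and port are placeholders):

```nginx
upstream app {
    server 127.0.0.1:8000;   # Gunicorn bound to a local port
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_buffering on;     # nginx absorbs slow clients instead of Gunicorn
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```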

SSL with GoLang using reverse proxy Nginx

I am currently writing a Go website and would like to add SSL soon. My question is: what are the advantages/disadvantages of using Go's built-in TLS packages, or should/can I just terminate SSL in nginx, since I will be using it as the reverse proxy anyway?
It is ultimately up to you, but nginx's SSL configuration is extremely configurable, battle-tested and performant.
nginx can provide an SSL session cache to boost performance - ssl_session_cache
Good cipher compatibility
I believe that nginx's SSL implementation is faster (more req/s and less CPU) than Go's, but I have not tested this myself. That would not be surprising given the maturity of the nginx project.
Other benefits like response caching for both proxied and static content.
The downside, of course, is that it's another moving part that requires configuration. If you are already planning to use nginx as a reverse proxy however I would use it for SSL as well.
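Terminating TLS in nginx and proxying plain HTTP to the Go app looks roughly like this; a sketch with placeholder paths, hostname and port:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_session_cache   shared:SSL:10m;    # the session cache mentioned above

    location / {
        proxy_pass http://127.0.0.1:8080;          # Go app listens on plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;  # let the app know TLS was used
    }
}
```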

What is the benefit of using NginX for Node.js?

From what I understand, Node.js doesn't need NginX to work as an HTTP server (or a websockets server, or any server for that matter), but I keep reading about how to use NginX in front of Node.js's internal server and can't find a good reason to go that way.
Here http://developer.yahoo.com/yui/theater/video.php?v=dahl-node the Node.js author says that Node.js is still in development, so there may be security issues that NginX simply hides.
On the other hand, in case of a heavy traffic NginX will be able to split the job between many Node.js running servers.
In addition to the previous answers, there’s another practical reason to use nginx in front of Node.js, and that’s simply because you might want to run more than one Node app on your server.
If a Node app is listening on port 80, you are limited to that one app. If nginx is listening on port 80 it can proxy the requests to multiple Node apps running on other ports.
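That multi-app routing is just name-based virtual hosting; an illustrative sketch with placeholder hostnames and ports:

```nginx
# One nginx on port 80 routes by Host header to two Node apps
# listening on different local ports.
server {
    listen 80;
    server_name app1.example.com;
    location / { proxy_pass http://127.0.0.1:3000; }
}

server {
    listen 80;
    server_name app2.example.com;
    location / { proxy_pass http://127.0.0.1:3001; }
}
```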
It’s also convenient to delegate TLS/SSL/HTTPS to Nginx. Doing TLS directly in Node is possible, but it’s extra work and error-prone. With Nginx (or another proxy) in front of your app, you don’t have to worry about it and there are tools to help you securely configure it.
But be prepared: nginx doesn't speak HTTP/1.1 to the backend, so features like keep-alive or websockets won't work if you put Node behind nginx.
UPD: see nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections for more up-to-date info.
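For the record, nginx gained WebSocket proxying in 1.3.13: you opt in to HTTP/1.1 toward the backend and forward the Upgrade handshake explicitly. A minimal sketch (path and port are placeholders):

```nginx
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;                      # HTTP/1.1 to the backend
    proxy_set_header Upgrade $http_upgrade;      # pass the Upgrade header through
    proxy_set_header Connection "upgrade";       # hop-by-hop, must be set explicitly
}
```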

Has anybody used YAWS server as an HTTP Proxy?

I am planning to set up a YAWS web server as an HTTP proxy server.
I am basically trying to build a high-throughput HTTP proxy server that can take web-scale load.
The requirement is to be able to redirect certain URI's to our company's enterprise portal.
Has anybody used this setup in production ?
Does anybody know of any issues with the slated requirements?
Thanks in advance!
Yaws's reverse proxy support is pretty experimental; I wouldn't use Yaws if that's all you need from it. Rather, I'd look at Varnish or even Squid.
YAWS is a fine application server but not much more; for serving static files or for proxying it is far from ideal. We use haproxy and lighttpd in front of Yaws for better performance.
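For the URI-redirection requirement in the question, any of the suggested frontends can handle it; in nginx terms it is one location block per path prefix. A sketch with a placeholder portal URL:

```nginx
# Redirect selected URIs to the enterprise portal, pass everything else through.
location /portal/ {
    return 301 https://portal.example.com$request_uri;
}
```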
