Nginx: Is there an encryption/decryption module?

We are working on scaling our object-store layer while making sure we have all the security primitives in place. The proposed design will have HTTP access via Nginx, and if we could encrypt/decrypt at the Nginx layer, the system would scale much better. But we couldn't find any modules for Nginx. Thoughts?

Related

How to access redis server via domain name?

I'm trying to make my Redis server accessible via a domain name, so that instead of writing "ip:port" I can simply use "redis.example.org" in my (remote) applications.
I tried to achieve this using nginx and the Redis2 module, but I could not make it work. Is nginx even the right technology for this?
The thing is, I don't even know what terms to search for. What is this kind of "proxy" to a Redis server called? (In the future I want to use a domain to access my Postgres server as well, so a general solution would be great.)
Thanks.

Sticky session load balancer with nginx open source

What is the main difference between the sticky sessions available in NGINX Plus and cookie hashing in the open-source version?
According to the docs, nginx open source allows session persistence based on hashing different variables available within nginx, including $cookie_
With the following configuration:
upstream myserver {
    hash $cookie_sessionID;
    server localhost:8092;
    server localhost:8093;
    server localhost:8094 weight=3;
}
location / {
    proxy_pass http://myserver;
}
Assuming there is a centralized mechanism across the backends for generating a unique sessionID cookie for all new requests, what are the main disadvantages of this method compared to the NGINX Plus sticky-session approach?
IP Hash load balancing can work as "sticky sessions",
but keep in mind that this method performs relatively poorly on its own, because in today's world many users and devices share the same external IP address.
We experimented with a fairly heavily loaded application (thousands of parallel users) and observed an imbalance of tens of percent between servers when using IP Hash.
Theoretically the situation should improve with increasing load and number of servers, but, for example, we saw no significant difference between using 3 and 5 servers.
So I would strongly advise against using IP Hash in a production environment.
As an open-source sticky-session solution, HAProxy is worth considering, because HAProxy supports it out of the box.
Or an HAProxy + Nginx bundle, where HAProxy is responsible for the "sticky sessions".
(I know of one extremely loaded system that successfully uses such a bundle for this very purpose, so this is a working idea.)
Your approach will work. According to the official NGINX documentation (Configuring Basic Session Persistence):
"If your application requires basic session persistence (also known as sticky sessions), you can implement it in NGINX Open Source with the IP Hash load‑balancing algorithm."
NGINX Plus, meanwhile, "offers a more sophisticated form of session persistence". For example, the "Least Time" method, where for each request the server with the lowest average latency and the fewest active connections is chosen.
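For reference, the IP Hash method the documentation mentions is enabled with a single directive in the upstream block. A minimal sketch, using the same placeholder servers as the configuration above:

```nginx
upstream myserver {
    ip_hash;                     # pin each client IP to one backend
    server localhost:8092;
    server localhost:8093;
    server localhost:8094 weight=3;
}
```

This pins clients to backends by the hash of their IP address rather than a cookie, which is exactly why it suffers from the shared-external-IP imbalance described above.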

How NGINX Reverse Proxy handle image/video file upload or download

First off, I'll explain my situation. I'm building a server for storing and retrieving data for my phone application. I'm new to NGINX. I understand that the point of load balancing/reverse proxying is to improve performance and reliability by distributing the workload across multiple servers. But I don't understand how that works with image/video files. Say the following is my NGINX config file:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://backend;
    }
}
I have a few questions here.
First, when I upload an image/video file, do I need to upload it to all of those backend servers, or is there another way?
Second, if I just save it to a separate server that only stores images, and proxy download requests to that specific server, then what is the point of load balancing image/video files, given that a reverse proxy is supposed to improve performance and reliability by distributing the workload across multiple servers?
Third, is Amazon S3 really better for storing files? Is it cheaper?
I'm looking for a solution I can run on my own servers, rather than using third parties.
Thanks for any help!
You can either use shared storage (e.g. NFS), upload to both servers, or incorporate a strategy to distribute files between servers, storing each file on a single server.
The first two options are logically the same and provide failover, hence improving reliability.
The third option, as you note, does not improve reliability (well, maybe somewhat: if one server fails, the second may still serve some files). It can improve performance, though, if you have many concurrent requests for different files and distribute them evenly between servers. This is not achieved through nginx load balancing but rather by redirecting to different servers based on the request (e.g. file name or key).
For the shared-storage solution, you can use, for example, NFS. There are many resources going into deeper detail, for example https://unix.stackexchange.com/questions/114699/nfs-automatic-fail-over-or-load-balanced-or-clustering
For the duplicate-upload solution, you can either send the file twice from the client or do it server-side with some code. The server-side approach has the benefit of a single file transfer from the client, with the copy to the second server happening only over the fast internal network. In a simple case this can be achieved, for example, by receiving the file in a servlet, storing the incoming data to disk and simultaneously uploading it to another servlet on the second server over HTTP or another protocol.
Note that setting up any of these options correctly can involve quite significant effort, testing and maintenance.
Here comes S3: ready-to-use distributed/shared storage, with a simple API, integrations, clients and a reasonable price. For a simple setup it is usually not cheaper in terms of $ per unit of storage, but it is much cheaper in terms of R&D. It also serves files over HTTP (balanced, reliable and distributed), so you can either have clients download files directly from S3 hosts or issue permanent or temporary redirects there from your own HTTP servers.

Should I always use a reverse proxy for a web app?

I'm writing a web app in Go. Currently I have a layout that looks like this:
[CloudFlare] --> [Nginx] --> [Program]
Nginx does the following:
Performs some redirects (i.e. www.domain.tld --> domain.tld)
Adds headers such as X-Frame-Options.
Handles static images.
Writes access.log.
In the past I would use Nginx because it performed SSL termination and some other tasks. Since those are now handled by CloudFlare, all it does, essentially, is serve static images. Given that Go has a built-in HTTP FileServer and CloudFlare could take over serving static images for me, I started to wonder why Nginx is in front in the first place.
Is it considered a bad idea to put nothing in-front?
In your case, you can possibly get away with not running nginx, but I wouldn't recommend it.
However, as I touched on in this answer there's still a lot it can do that you'll need to "reinvent" in Go.
Content-Security headers
SSL (is the connection between CloudFlare and you insecure if they are terminating SSL?)
SSL session caching & HSTS
Client body limits and header buffers
5xx error pages and maintenance pages when you're restarting your Go application
"Free" logging (unless you want to write all that in your Go app)
gzip (again, unless you want to implement that in your Go app)
Running Go standalone makes sense if you are running an internal web service or something lightweight, or genuinely don't need the extra features of nginx. If you're building web applications then nginx is going to help abstract "web server" tasks from the application itself.
To be honest, I wouldn't use nginx at all. Someone benchmarked Go behind nginx via FastCGI against the standalone Go HTTP server. The results were quite interesting: the standalone server handled requests much better than when running behind nginx, and the final recommendation was that if you don't need specific features of nginx, don't use it (full article).
You could run it standalone, and if you're using partial/full SSL on your site, you could use another Go HTTP server to redirect to the safe HTTPS routes.
Don't use nginx if you do not need it.
Go does SSL in fewer lines than you would have to write in an nginx configuration file.
The only remaining reason is free logging, but I wonder how many lines of code logging really takes in Go.
There is a nice article in Russian about writing a reverse proxy in Go in 200 lines of code.
If Go can do everything you would use nginx for, then nginx is not required.
You do need nginx if you want to run several Go processes, or Go and PHP, on the same site.
Or if adding nginx fixes a problem you are having with your Go setup.

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably being said incorrectly), and while I understand how node.js works on its own I have seen discussions about proxying HTTP requests through another server technology (a la NGinx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question it seems you are looking for an answer on the benefits of implementing a reverse proxy in front of your node.js webserver. In summary, a reverse proxy (depending on implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. By implementing these features within the proxy it allows you to focus on developing the code for your application and leaves the web server to do what it's good at, serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is that in a production environment there are a number of concerns you don't want your application to have to handle on its own: caching, domain (or subdomain) mapping, perhaps security, SSL, load balancing, and serving static files, to name a few.
The web servers are already built to do all those things for you, and so they can handle them and then pass only the requests on to your app that actually need to be handled by your app. They're also optimized for doing those things and will probably do them as well or better than the average developer can.
Hope that helps.
Another point that people haven't mentioned here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve a friendly "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.
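A minimal nginx sketch of the points above, proxying to a node.js app and falling back to a static maintenance page when the app is down (the port, paths and file names are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # the node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # If the app is unreachable (restart/maintenance), show a static page.
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        root /var/www/static;
        internal;   # only reachable via the error_page redirect
    }
}
```

The same server block is also the natural place to hang the caching, compression and static-file directives listed earlier, keeping all of them out of the node.js code.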