About the use of Nginx [closed] - nginx

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I have been learning nginx and found out that it is a load balancer that helps handle large numbers of requests to a server. I also read that nginx is best used when one server gets overloaded and we need to add another server. So, is it true that nginx is best used ONLY when one server cannot handle the number of requests?

It may look as though Nginx should be added only when you need to load balance between multiple servers, and IMO that instinct is sound: sometimes it is good to avoid increasing the entropy if you can't manage it.
But apart from being a load balancer, Nginx is also widely used for:
Reverse proxy for multiple services [virtual hosts] (load balancing isn't mandatory)
Content caching (to avoid requests hitting upstream servers every time)
SSL termination
API gateway (security, rate limiting and routing)
Sometimes, also as a plain web server
So even if you aren't load balancing, you can benefit from the facilities Nginx provides, such as content caching, SSL termination and rate limiting (a configuration sketch follows below).
Later, when the need arises, you can easily add more machines to the upstream to start load balancing.
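As a minimal sketch of what that looks like in practice, here is a single-backend Nginx configuration that terminates SSL, caches responses and rate-limits an API without any load balancing; the hostname, certificate paths and upstream port are hypothetical.

    # Hypothetical single-upstream example (http context, e.g. /etc/nginx/conf.d/app.conf).
    # One on-disk cache zone, plus a simple per-client-IP rate limit of 10 requests/second.
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 443 ssl;                       # SSL termination happens here
        server_name app.example.com;          # placeholder hostname

        ssl_certificate     /etc/nginx/ssl/app.example.com.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/app.example.com.key;

        location /api/ {
            limit_req zone=api_limit burst=20;    # basic API rate limiting
            proxy_pass http://127.0.0.1:3000;     # single local upstream, no load balancing yet
        }

        location / {
            proxy_cache app_cache;                # content caching for everything else
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:3000;
        }
    }

When that single backend does become a bottleneck, the only change needed is to swap 127.0.0.1:3000 for an upstream block listing several machines, and the same configuration starts load balancing.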

Related

WordPress website on AWS taking too long to load [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have a website running on WordPress (Bitnami). The server is on AWS behind an Elastic Load Balancer. However, when I hit the WordPress website, it takes too long to respond.
There is another Node.js API running on the same AWS server (on port 4000) that returns a response pretty fast, so this should not be a DNS resolution issue.
Any idea how I can debug the reason why the WordPress website is taking so long to load?
This is likely a security group issue, judging by the behaviour you're experiencing.
Ensure the following:
The load balancer's security group allows inbound access (port 80 for HTTP, port 443 for HTTPS).
The instance's security group allows inbound access from the load balancer (on the port the application is served from).
Check the health of the host in the load balancer interface within the console.
If the database is external to the instance host (i.e. another server or RDS), ensure it allows inbound access from the instance (port 3306 for MySQL).
If the database is running on the same server (the default for Bitnami), ensure WordPress connects to it as localhost.

Why https and www are in same URL? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
Some URLs have both https and www.
What is the reason behind this?
For example, https://www.facebook.com starts with https://www. Is this redundant?
https is the protocol. It stands for Hypertext Transfer Protocol Secure, i.e. HTTP over TLS.
It means that your traffic to the website is encrypted. By default the protocol is http (no encryption), but this is often redirected to https.
www is the server (more precisely, a hostname).
It can be anything, but in most cases the web server's hostname is www. The domain also redirects you to the web server (if it is configured that way) when you don't type it explicitly.
Lastly, facebook.com is the domain.
Facebook registered the domain facebook.com (.com domains are normally commercial websites). With that, they can deploy servers at addresses ending in .facebook.com in a way that lets them be found.
E.g. https://www.facebook.com means that you want to talk, using the https protocol (secure web transfer), to the www server of the domain facebook.com. So the two parts are not redundant; they name different things.
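As an illustration of the "if it is configured that way" part, here is a minimal sketch of how a site operator might set up those redirects in Nginx; example.com, the certificate paths and the document root are placeholders, not Facebook's actual configuration.

    # Redirect plain http to https for both hostnames.
    server {
        listen 80;
        server_name example.com www.example.com;     # placeholder domain
        return 301 https://www.example.com$request_uri;
    }

    # Redirect the bare domain to the www hostname over https.
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        return 301 https://www.example.com$request_uri;
    }

    # The actual site only answers on https://www.example.com.
    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        root /var/www/example;
    }

Which hostname ends up canonical (www or the bare domain) is purely the operator's choice; the protocol part (https) and the hostname part (www) are configured independently of each other.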

Unicorn multiple machines setup [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have good experience configuring Unicorn in conjunction with Nginx; it works really well after optimization and tuning. But now I have a question: what is the best way to spread the load across multiple machines running Unicorn?
Say you have 3 machines (an Nginx load balancer and 2 app servers running Unicorn): how do you manage load balancing across the Unicorns while also serving static assets?
Do you know of any drawbacks to connecting to Unicorn over TCP (timeouts, lost connections)? Is there any other way to reach a socket upstream over the network (maybe port forwarding over SSH)? Unicorn is designed to be stateless, but how do you manage the edge cases?
I don't want to serve static assets from the balancer node, so would it be OK to set up Nginx on each app server and put a dumb Nginx balancer in front of them?
P.S. My current configuration is well tested and can be found on GitHub, but the Nginx+Unicorn-on-the-same-machine setup has already become a bottleneck.
UPDATE: Development depends heavily on the specific server configuration. Bottlenecks arise not just from developers' decisions but also from the environment the code runs in. Stack Overflow is full of highly voted Q&A about hard-to-know details of specific configurations. Alex, who answered below, works at GitHub; I really appreciate getting a reply from such a qualified person!
Don't access the Unicorns over TCP/the network.
Your setup seems just fine: you can simply add a load balancer in front of the app servers, but I would suggest Keepalived (LVS FTW) as the load balancer instead of Nginx.
You can have it balance connections to the app servers, each running Nginx + Unicorn over Unix sockets.
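As a sketch of the per-machine half of that setup (the paths and hostnames below are assumptions, not taken from the questioner's GitHub configuration): each app server runs its own Nginx that serves static assets from disk and proxies everything else to the local Unicorn over a Unix socket.

    # On each app server: Nginx talks to the local Unicorn over a Unix socket.
    upstream unicorn_app {
        # Placeholder socket path; must match the listen line in unicorn.rb.
        server unix:/var/run/unicorn/unicorn.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name app.example.com;           # placeholder hostname
        root /srv/app/current/public;          # placeholder Rails public directory

        # Serve static assets directly from disk, bypassing Unicorn.
        location ^~ /assets/ {
            expires max;
            add_header Cache-Control public;
        }

        # Everything else goes to the local Unicorn workers.
        location / {
            try_files $uri @unicorn;
        }

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://unicorn_app;
        }
    }

The balancer in front (Keepalived/LVS as suggested, or a dumb Nginx that simply proxies to the two app servers) then only distributes whole HTTP connections between machines and never talks to Unicorn directly.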

Mail server for multiple domains? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a server bought from Linode and decided to set up a mail server on it. I have about 20 domains that will be pointing to it.
I have a couple of questions:
This is the combination I will use: Postfix + Dovecot + SquirrelMail. Are there better alternatives? I am completely open to recommendations because this is the first time I will set up a mail server.
Is it possible to use multiple domains with one mail server?
If it is possible to work with multiple domains, does it require a complicated and painful configuration?
Note: I can't use Google Apps because 40 EUR per mail address is very expensive when you have a hundred mail addresses.
You have to have at least a basic understanding of how DNS works. It can be kind of a pain, but if you use one of the Postfix plugins for management it should be fine. But yes, multiple domains on the same server is fine: the mail server just has to know that it is handling mail for those hostnames, and the DNS for each of your domains needs an MX record pointing at your server's IP.

SPDY module for IIS7 [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
My goal is to implement the SPDY protocol (a new experimental protocol by Google) on IIS servers.
SPDY is a TCP-based application-level protocol, and as such I am guessing that I have to work at the TCP level (socket programming), as the built-in extensions are for HTTP.
My problem is: once I write the socket programming code, where do I plug it into IIS7? WAS looks like a good candidate; if so, how do I go about doing it?
IIS has little or nothing to do with SPDY. IIS is just an application server that responds to HTTP requests handed off by the http.sys kernel mode driver. All HTTP requests in Windows are handled by this driver.
This is the level at which SPDY would need to be implemented.
If you were to implement SPDY, you'd need to add it as a shim driver between the TCP stack and http.sys, or maybe even write your own replacement for http.sys.
Alternatively, you could write your own SPDY/HTTP stack, but if you wanted to use that with IIS you'd be in for a lot of work.
