Error 504 bad gateway on Back Office of Prestashop store - nginx

I have a problem with my PrestaShop store: sometimes I can reach the Front Office without problems, but at other times I get a 504 error, and worse, I cannot get into the Back Office at all because of the same 504.
This has been happening for four or five days and I don't know the root cause. I checked the server logs, but they only show a negotiation error and an idle timeout (120s) error. I cannot change the php.ini or nginx conf files because my hosting plan does not allow it (they say I need to switch to a VPS to get root access, and the web cloud plan I am on does not grant it). I really need some guidance because I don't want to lose potential customers.

Based on this, the server you are using seems to be underpowered.
The best solution is to switch to hosting that properly supports PrestaShop (a dedicated PrestaShop package); it does not necessarily need to be a VPS.
Check this:
https://digital.com/best-web-hosting/prestashop/

Alternatively, just increase the FastCGI timeout directives.
On NGINX:
server {
    # ...
    fastcgi_read_timeout 60s;
    fastcgi_send_timeout 60s;
    # ...
}

Is it wise to serve a stale page when upstream server is busy or down?

I work for a small nonprofit that, during certain times of the year, sees a sudden uptick in traffic coalescing around virtual events or particular emails.
During these busy times, our server will sometimes get overwhelmed and NGINX will occasionally respond with a 502 error.
I know we need to address what's going on with our caching, or purchase more server capacity, but for now I'm thinking some of the performance issues can be mitigated by having NGINX return stale content when the upstream is busy.
Our content -- particularly on the pages where we see an uptick at certain times of year -- is more or less static. My thinking is: rather than return a 502 to the client, why not just send the user a slightly older version of the page?
Our fastcgi_cache_use_stale config currently looks like this:
fastcgi_cache_use_stale updating error timeout invalid_header http_500;
According to the PHP-FPM error logs, we are simply maxing out our child processes, and I'm assuming that's what is causing the 502 errors. (Again, I know this needs to be addressed.)
Most example configs look like the above as well and do not include http_503 or http_502.
Is there a reason why you would not want to include http_503 and/or http_502 so they are at least getting something? Or is there a reason why most use-stale configs don't include those codes?
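For what it's worth, nginx does accept http_502 and http_503 in this directive. A variant of the config above that also covers those upstream responses might look like the following sketch (it assumes a fastcgi_cache zone is already configured elsewhere; the lock/background-update lines are optional additions, not part of the original config):

```nginx
# Serve stale cached responses while revalidating, and also when the
# upstream returns 502/503 (e.g. PHP-FPM out of child processes).
fastcgi_cache_use_stale updating error timeout invalid_header
                        http_500 http_502 http_503;

# Optional: let only one request per cache key refresh an expired
# entry while everyone else gets the stale copy in the meantime.
fastcgi_cache_lock on;
fastcgi_cache_background_update on;
```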

Basic NGINX proxy_pass example is serving 404's

I have a very basic out-of-the-box NGINX server running on a Windows machine at http://10.0.15.19:80. I'm attempting to use it as a proxy to get around a CORS restriction. I have the following nginx.conf, which from every example I've seen should proxy any traffic from http://10.0.15.19 to http://10.0.1.2:3000.
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name 10.0.15.19;

        location / {
            proxy_pass http://10.0.1.2:3000/;
        }
    }
}
Instead of serving the content from http://10.0.1.2:3000 I get the default index.html page inside the included html folder. Similarly, if I were to go to http://10.0.15.19/any/path I get a 404 even though http://10.0.1.2:3000/any/path works correctly.
EDIT: I've noticed that even after commenting out the entire server block of my configuration, it's still serving content from the included html folder. This makes me think there is another server configuration running that I'm not aware of, but I'm not sure where. I downloaded NGINX from here, and I assume all configuration files exist within this folder.
It turns out this was because simply closing the window that pops up when you open nginx.exe doesn't actually kill the process. And in Windows you can apparently have multiple services bound to the same port, so I had many servers serving on port 80. Killing all of these processes and relaunching with the originally posted config fixed my problem.

Nginx request not timing out

I am very new to Nginx. I have set up Nginx in my live environment.
Problem Statement
I have set 4 servers as upstream servers in my Nginx configuration. I can see a few requests that take more than 180 seconds overall, which makes my system very slow. I can also see some requests going to the first server in the upstream and then being retried on the 2nd server. So I guess the problem could be that the first server is timing out and sending back its response only after some timeout period. The only timeout set in my configuration is
proxy_read_timeout 180;
Is this the main culprit? Will the request time out sooner on that server if I change this value to a lower one?
I should only change the value in Live after some expert advice.
Could someone please shed some light on this?
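For reference, the retry behaviour described in the question is governed by a handful of proxy_* directives; the sketch below shows the relevant knobs (the upstream names and timeout values are illustrative, not a recommendation for a live system):

```nginx
upstream backend {
    server app1.example.com;   # hypothetical upstream names
    server app2.example.com;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 5s;  # give up on unreachable servers quickly
        proxy_read_timeout 180s;   # max wait between reads of the response
        # which failures make nginx retry the request on the next upstream
        proxy_next_upstream error timeout http_502 http_504;
    }
}
```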

Using Proxy server to switch between Golang Applications

I have a CentOS server on which I will run at least 4 Golang applications. Each one is a different site that I should be able to access in the browser with domains/subdomains as follows:
dev00.mysite.com
dev01.mysite.com
dev02.mysite.com
dev03.mysite.com
So, I need to configure some kind of software that routes requests to the correct Golang process. Every site will be running on a different port, so for example if someone requests dev00.mysite.com I should be able to send that request to the process of the dev00 site (this is for development purposes, not production). So I'm starting to believe that I need Nginx or Caddy, as I've read, but I have no experience with either of them.
Can someone confirm that this is the way to solve the problem, and point me to some example configuration for either of those servers proxying to Golang applications?
And, in the future, if I have a lot (really a lot) of domains running on the same server, which of those servers is better? Which handles high load better?
Yes, Nginx can solve your problem:
Start each web server using the Go standard library (or Caddy).
Route requests to the correct Go application using Nginx.
Example Nginx configuration:
server {
    listen 80;
    server_name dev00.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8000;
        ...
    }
}

server {
    listen 80;
    server_name dev01.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8001;
        ...
    }
}

Go web server with nginx server in web application [duplicate]

This question already has answers here:
What are the benefits of using Nginx in front of a webserver for Go?
(4 answers)
Closed 7 years ago.
Sorry, I cannot find this answer through a Google search, and nobody seems to explain clearly the difference between a pure Go web server and an nginx reverse proxy. Everybody seems to use nginx in front of their web applications.
My question is: since Go has all the HTTP-serving functions, what is the benefit of using nginx over a pure Go web server?
In most cases, we set up the Go web server for all routes and put the nginx configuration in front.
Something like:
limit_req_zone $binary_remote_addr zone=limit:10m rate=2r/s;

server {
    listen 80;
    log_format lf '[$time_local] $remote_addr';
    access_log /var/log/nginx/access.log lf;
    error_log /var/log/nginx/error.log;

    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    server_name 1.2.3.4 mywebsite.com;
}
When we have this Go:
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
Is the traffic to nginx and to the Go web server different?
If not, why do we have two layers of web servers?
Please help me understand this.
Thanks,
There's nothing stopping you from serving requests from Go directly.
On the other hand, there are some features that nginx provides out-of-the box that may be useful, for example:
handle many virtual servers (e.g. have Go respond on app.example.com and a different app on www.example.com)
HTTP basic auth on some paths, say www.example.com/secure
access logs
etc.
All of this can be done in Go, but it would require programming, while in nginx it's just a matter of editing a .conf file and reloading the configuration. Nginx doesn't even need a restart for these changes to take effect.
(From a "process" point of view, nginx could be managed by an ops employee, with root permissions, running in a well known port, while developers deploy their apps on higher ones.)
The general idea of using nginx in this scenario is to serve static resources via nginx and let Go handle everything else.
Search for "try_files" in nginx. It lets nginx check the disk for the existence of a file and serve it directly, instead of having to handle static assets in the Go app.
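As an illustration of that pattern (the root path and the backend port are hypothetical), a location block using try_files to serve static files from disk and fall back to the Go app might look like:

```nginx
root /var/www/myapp/public;   # hypothetical static asset directory

location / {
    # Serve the file from disk if it exists; otherwise hand the
    # request to the Go application via the named location below.
    try_files $uri @goapp;
}

location @goapp {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```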
This has been asked a few times before[1] but for posterity:
It depends.
Out of the box, putting nginx in front as a reverse proxy is going to
give you:
Access logs
Error logs
Easy SSL termination
SPDY support
gzip support
Easy ways to set HTTP headers for certain routes in a couple of lines
Very fast static asset serving (if you're serving off S3/etc. though, this isn't that relevant)
The Go HTTP server is very good, but you will need to reinvent the
wheel to do some of these things (which is fine: it's not meant to be
everything to everyone).
I've always found it easier to put nginx in front—which is what it is
good at—and let it do the "web server" stuff. My Go application does
the application stuff, and only the bare minimum of headers/etc. that
it needs to. Don't look at putting nginx in front as a "bad" thing.
Further, to extend on my answer there, there's also the question of crash resilience: your Go application isn't restricted by a configuration language and can do a lot of things.
Some of these things may crash your program. Having nginx (or HAProxy, or Varnish, etc.) as a reverse proxy can give you some request buffering (to allow your program to restart) and/or serve stale content from its local cache (e.g. your static home page), which may be better than having the browser time out and show a "cannot connect to server" error.
On the other hand, if you're building small internal services, 'naked' Go web servers with your own logging library can be easier to manage (in terms of ops).
If you do want to keep everything in your Go program, look at gorilla/handlers for gzip, logging and proxy header middleware, and lumberjack for log rotation (else you can use your system's logging tools).
