I know there are lots of questions on this topic, but I don't think any of them helped me resolve my issue.
So, here is my /etc/nginx/sites-enabled/app file
server {
    listen 80;
    server_name {LINODE_IP_ADDRESS};

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And here is my /etc/supervisor/conf.d/app.conf file
[program:app]
directory=/var/www/app
command=gunicorn --workers=3 wsgi:app --limit-request-line 10000
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/app/app.err.log
stdout_logfile=/var/log/app/app.out.log
I always get 502 Bad Gateway when I hit a Python handler that calls redirect(). I can visit the pages that are generated with the render() function, but any time redirect() is executed in Flask, I get a 502.
This is what I get from /var/log/nginx/error.log
2022/08/29 16:44:23 [notice] 43608#43608: signal process started
2022/08/29 16:47:27 [error] 43609#43609: *2 upstream sent too big header while reading response header from upstream, client: 178.222.210.17, server: {LINODE_IP}, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "{LINODE_IP}"
2022/08/29 16:50:29 [notice] 44141#44141: signal process started
2022/08/29 16:50:45 [error] 44142#44142: *68 connect() failed (111: Connection refused) while connecting to upstream, client: 178.222.210.17, server: {LINODE_IP}, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "{LINODE_IP}"
2022/08/29 16:50:45 [error] 44142#44142: *68 connect() failed (111: Connection refused) while connecting to upstream, client: 178.222.210.17, server: {LINODE_IP}, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:5000/favicon.ico", host: "{LINODE_IP}", referrer: "http://{LINODE_IP}/"
2022/08/29 16:50:46 [error] 44142#44142: *68 connect() failed (111: Connection refused) while connecting to upstream, client: 178.222.210.17, server: {LINODE_IP}, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "{LINODE_IP}"
2022/08/29 16:50:46 [error] 44142#44142: *68 connect() failed (111: Connection refused) while connecting to upstream, client: 178.222.210.17, server: {LINODE_IP}, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:5000/favicon.ico", host: "{LINODE_IP}", referrer: "http://{LINODE_IP}/"
Can someone please help me understand and fix this? Thanks!
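A note on the log above: the first error line, "upstream sent too big header", is the one that matches the redirect() symptom. It means the response headers coming back from gunicorn (for Flask, typically a large session cookie set during the redirect) exceed nginx's default buffer for reading the upstream response header; gunicorn's --limit-request-line flag cannot help here, since it limits the incoming request line, not response headers. A hedged sketch of the buffer directives commonly raised for this, with illustrative (not tuned) sizes:

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # The defaults (4k or 8k, one memory page) can be too small for
    # responses carrying large Set-Cookie headers.
    proxy_buffer_size 16k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;
}

Reload with nginx -t && sudo systemctl reload nginx afterwards. The later "connection refused" entries are a separate issue: nginx was pointed at 127.0.0.1:5000 while gunicorn, per the supervisor config above, serves on its default of 127.0.0.1:8000.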
Related
I have a very simple config:
server {
    listen 80;
    server_name example.fr;

    location / {
        proxy_pass http://localhost:8000;
    }
}
However, when I restart my backend (for example, Node.js restarted on change by nodemon), even if the backend comes back up in about 2 seconds, nginx returns 502s for about 10 seconds, and this shows up in the logs:
2022/05/10 16:10:48 [error] 2013398#2013398: *2241 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://127.0.0.1:8000/", host: "example.fr"
2022/05/10 16:10:48 [error] 2013398#2013398: *2241 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://[::1]:8000/", host: "example.fr"
2022/05/10 16:10:49 [error] 2013398#2013398: *2244 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:50 [error] 2013398#2013398: *2245 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:51 [error] 2013398#2013398: *2246 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:52 [error] 2013398#2013398: *2247 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:53 [error] 2013398#2013398: *2248 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:54 [error] 2013398#2013398: *2249 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:55 [error] 2013398#2013398: *2250 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:57 [error] 2013398#2013398: *2251 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
2022/05/10 16:10:58 [error] 2013398#2013398: *2252 no live upstreams while connecting to upstream, client: 127.0.0.1, server: example.fr, request: "HEAD / HTTP/2.0", upstream: "http://localhost/", host: "example.fr"
I've seen that the upstream directive offers some controls (max_fails and fail_timeout, see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails), but these only apply to upstream blocks, and I cannot find equivalent options for a plain proxy_pass.
Any idea?
Short answer: Replace localhost with 127.0.0.1.
Quoting user rogerdpack for the long answer:
The really tricky part is that if you specify proxy_pass to "localhost" and your box happens to also have IPv6 and IPv4 "versions of localhost" on it at the same time (most boxes do by default), it will count as if you had a "list" of multiple servers in your server group, which means you can get into the situation above of having it return "502 for 10s" even though you list only one server.

See here: "If a domain name resolves to several addresses, all of them will be used in a round-robin fashion." One workaround is to declare it as proxy_pass http://127.0.0.1:8000; (its IPv4 address) to avoid it being both IPv6 and IPv4. Then it counts as "only a single server" behavior.
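A minimal sketch of both workarounds against the config from the question (the upstream name "backend" is a placeholder):

# Option 1: pin the IPv4 address so nginx sees exactly one server.
server {
    listen 80;
    server_name example.fr;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

# Option 2: declare an explicit upstream block, where max_fails=0
# disables the failure accounting entirely, and point proxy_pass
# at http://backend; instead.
upstream backend {
    server 127.0.0.1:8000 max_fails=0;
}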
(I'm answering my own question because the answers explaining the problem were buried, and the question wasn't really explicit about the particular symptoms I had.)
So I am trying to set up a Jenkins deploy pipeline. Almost everything seems to be working fine, except for the last part of the job, which is to run wrangler publish (to publish the site to Cloudflare Workers).
I have tried running it twice now, and both times it fails during the job, after which I receive a 502 error when I try to access my Jenkins server. The only thing that works is a full reboot of the server.
I have tried checking logs, but nothing much shows up. In jenkins.error.log I see this:
2020/09/23 21:12:00 [error] 1098#1098: *498 connect() failed (111: Connection refused) while connecting to upstream, client: 162.158.94.165, server: jenkins.mydomain.com, request: "POST /job/my-project-staging/15/logText/progressiveHtml HTTP/1.1", upstream: "http://127.0.0.1:8080/job/my-project-staging/15/logText/progressiveHtml", host: "jenkins.mydomain.com", referrer: "https://jenkins.mydomain.com/job/my-project-staging/15/console"
2020/09/23 21:12:02 [error] 1098#1098: *500 connect() failed (111: Connection refused) while connecting to upstream, client: 162.158.93.72, server: jenkins.mydomain.com, request: "HEAD /job/my-project-staging/15/statusIcon HTTP/1.1", upstream: "http://127.0.0.1:8080/job/my-project-staging/15/statusIcon", host: "jenkins.mydomain.com"
2020/09/23 21:12:03 [error] 1098#1098: *502 connect() failed (111: Connection refused) while connecting to upstream, client: 172.69.34.207, server: jenkins.mydomain.com, request: "GET /job/my-project-staging/15/statusIcon HTTP/1.1", upstream: "http://127.0.0.1:8080/job/my-project-staging/15/statusIcon", host: "jenkins.mydomain.com"
2020/09/23 21:12:03 [error] 1098#1098: *504 connect() failed (111: Connection refused) while connecting to upstream, client: 162.158.91.152, server: jenkins.mydomain.com, request: "HEAD /job/my-project-staging/15/console HTTP/1.1", upstream: "http://127.0.0.1:8080/job/my-project-staging/15/console", host: "jenkins.mydomain.com"
2020/09/23 21:12:06 [error] 1098#1098: *506 connect() failed (111: Connection refused) while connecting to upstream, client: 162.158.91.146, server: jenkins.mydomain.com, request: "GET /job/my-project-staging/15/console HTTP/1.1", upstream: "http://127.0.0.1:8080/job/my-project-staging/15/console", host: "jenkins.mydomain.com"
2020/09/23 22:15:09 [error] 1098#1098: *1773 connect() failed (111: Connection refused) while connecting to upstream, client: 74.120.14.35, server: jenkins.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "123.123.123.123:443"
2020/09/23 22:15:10 [error] 1098#1098: *1775 connect() failed (111: Connection refused) while connecting to upstream, client: 74.120.14.35, server: jenkins.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "123.123.123.123"
2020/09/23 23:08:18 [error] 1098#1098: *2378 connect() failed (111: Connection refused) while connecting to upstream, client: 193.118.53.210, server: jenkins.mydomain.com, request: "GET /solr/ HTTP/1.1", upstream: "http://127.0.0.1:8080/solr/", host: "123.123.123.123"
2020/09/24 01:36:08 [error] 1098#1098: *7943 connect() failed (111: Connection refused) while connecting to upstream, client: 51.158.24.203, server: jenkins.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "123.123.123.123"
Any ideas?
I have tried restarting nginx, but that doesn't prevent the 502 error. However, when I run sudo /etc/init.d/jenkins restart, the Jenkins server reboots and is online again.
When I go into the failed job, I see this at the very end, before the server crashed:
memory allocation of 240904 bytes failed
Creating placeholder flownodes because failed loading originals.
java.io.IOException: Tried to load head FlowNodes for execution Owner[jwr-nuxt-staging/15:jwr-nuxt-staging #15] but FlowNode was not found in storage for head id:FlowNodeId 1:26
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.initializeStorage(CpsFlowExecution.java:689)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.onLoad(CpsFlowExecution.java:726)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.getExecution(WorkflowRun.java:691)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.onLoad(WorkflowRun.java:550)
at hudson.model.RunMap.retrieve(RunMap.java:225)
at hudson.model.RunMap.retrieve(RunMap.java:57)
at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:501)
at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:483)
at jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:381)
at hudson.model.RunMap.getById(RunMap.java:205)
at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.run(WorkflowRun.java:940)
at org.jenkinsci.plugins.workflow.job.WorkflowRun$Owner.get(WorkflowRun.java:951)
at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:65)
at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$1.computeNext(FlowExecutionList.java:57)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.jenkinsci.plugins.workflow.flow.FlowExecutionList$ItemListenerImpl.onLoaded(FlowExecutionList.java:178)
at jenkins.model.Jenkins.<init>(Jenkins.java:1017)
at hudson.model.Hudson.<init>(Hudson.java:85)
at hudson.model.Hudson.<init>(Hudson.java:81)
at hudson.WebAppMain$3.run(WebAppMain.java:282)
Finished: FAILURE
Thinking this could be a memory-related issue, I went into /etc/default/jenkins and updated JAVA_ARGS to JAVA_ARGS="-Xmx4g -XX:MaxPermSize=512m -Djava.awt.headless=true" - but that didn't change anything. I have verified that I can run wrangler publish from a terminal on the server without issues.
So apparently this was indeed a memory issue. My machine had 8 GB, but it turns out Jenkins uses quite a lot, especially together with wrangler publish. So I added 4 GB of swap and that seemed to resolve the issue.
You can follow this article on how to add swap space on Ubuntu: https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-18-04
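For convenience, a condensed sketch of the steps from that article (4G and /swapfile are the article's defaults; adjust to your needs):

# Create and enable a 4 GB swap file.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persist across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify.
free -h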
I currently have a problem with my WordPress website where it will work for about a day after a server restart, but then hits this set of errors:
2015/12/15 22:06:42 [crit] 12650#0: *28 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HTTP/1.0", $
2015/12/15 22:08:19 [error] 3216#0: *18 FastCGI sent in stderr: "PHP message: PHP Warning: trim() expects parameter 1 to be string, array given in /var/www/html/wp-includes/option.php on line 30
PHP message: PHP Warning: trim() expects parameter 1 to be string, array given in /var/www/html/wp-includes/option.php on line 30" while reading response header from upstream, client: 104.33.64.70, server: example.com, request: "P$
2015/12/15 22:40:08 [error] 3216#0: *197 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 180.76.15.142, server: example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/$
2015/12/15 22:40:20 [error] 3216#0: *199 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 180.76.15.19, server: example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/v$
2015/12/15 23:08:27 [error] 3216#0: *201 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 180.76.15.143, server: example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/$
2015/12/15 23:08:39 [error] 3216#0: *203 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 180.76.15.12, server: example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/v$
2015/12/15 23:20:20 [error] 3216#0: *205 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 104.33.64.70, server: example.com, request: "GET /wp-admin/upgrade.php?step=1&backto=%2Fwp$
2015/12/15 23:22:20 [error] 3216#0: *205 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 104.33.64.70, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcg$
2015/12/15 23:57:09 [error] 3216#0: *367 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HT$
2015/12/15 23:57:39 [error] 3216#0: *369 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HT$
2015/12/15 23:57:41 [error] 3216#0: *371 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HT$
2015/12/15 23:57:56 [error] 3216#0: *373 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HT$
Afterwards, the error log just repeats that (11: Resource temporarily unavailable) error over and over. At this point, trying to access the website itself just gives an nginx "an error has occurred" page telling me to check the error logs.
I don't know what exactly is causing the initial holdup, and it looks like after a few timeouts the server just locks up entirely. Any advice? Thanks!
Someone at 46.166.139.20 is trying to guess your password through xmlrpc.php. If you don't use WordPress's XML-RPC, you should disable it.
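A minimal sketch of one way to do that at the nginx level, so the POSTs never reach PHP-FPM at all (assuming nothing you use, e.g. the Jetpack plugin or the WordPress mobile apps, depends on XML-RPC):

# Reject XML-RPC requests before they are handed to PHP-FPM.
location = /xmlrpc.php {
    deny all;
}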
I get this error when I try to open my website:
2015/01/27 07:04:38 [error] 1727#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.217.151.69, server: localhost, request: "GET / HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.alapar.az"
How can I resolve it?
Ensure that php-fpm, or whatever you use as your FastCGI backend, is running and accepting connections at 127.0.0.1:9000.
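A few commands that can confirm this, assuming a systemd-based system (the service name and config paths vary by distro, e.g. php-fpm vs. php5-fpm):

# Is the service running?
sudo systemctl status php-fpm

# Is anything actually listening on 127.0.0.1:9000?
sudo ss -ltnp | grep ':9000'

# Does the pool config listen on TCP 9000 rather than a unix socket?
grep -rn '^listen' /etc/php*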
Hi, I am getting the following error and have been searching and searching for an answer to my situation. I realize googling this issue turns up many solutions, but I've tried them all and can't figure out why mine isn't working. Below are my nginx.conf and a log of the error; please let me know if more information is needed.
Log
2014/08/18 20:03:36 [error] 27960#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://192.168.0.13:8081", host: "mysite.com"
2014/08/18 20:05:01 [error] 27960#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://192.168.0.13:8081", host: "mysite.com"
2014/08/18 20:08:19 [error] 28371#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "mysite.com"
2014/08/18 20:08:21 [error] 28371#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "mysite.com"
nginx.conf
upstream django {
    server 127.0.0.1:8001;
}

server {
    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass django;
    }
}
So it seems that whatever port I set in my uwsgi.ini file ("socket = :") has to match the port I put as the server port in

upstream django {
    server 127.0.0.1:8001;
}
Don't know if this is a true solution or what's actually going on behind the scenes, but it seemed to fix the issue.
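In other words, the address uWSGI binds to has to agree with the server entry in nginx's upstream block. A minimal uwsgi.ini sketch (the module name is a placeholder; the socket line is the part that matters):

[uwsgi]
# Must match "server 127.0.0.1:8001;" in the upstream django block.
socket = 127.0.0.1:8001
module = mysite.wsgi:application
master = true
processes = 4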