I've been struggling with this issue for more than a week and still can't resolve it.
I'm trying to provision our environment with Ansible, including a staging server that mirrors production. I have set up the Redis server, and it's running and listening on 6379. Nginx is up and serving requests, but when the Lua code gets to the part where it connects to Redis, it throws a "connection refused" error.
Here is the Nginx debug log: Link
Redis is listening on 6379:
$ sudo lsof -i -P -n | grep LISTEN | grep 6379
redis-ser 1978 redis 4u IPv6 138447828 0t0 TCP *:6379 (LISTEN)
redis-ser 1978 redis 5u IPv4 138447829 0t0 TCP *:6379 (LISTEN)
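A quick extra check from the shell (a sketch; it assumes redis-cli is installed on the same host as Nginx):
$ redis-cli -h 127.0.0.1 -p 6379 ping    # should reply PONG if the listener is reachable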
Connecting to Redis through Python works:
Python 2.7.12 (default, Oct 8 2019, 14:14:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import redis
>>> r = redis.Redis(host='127.0.0.1', port=6379, db=0)
>>> r.set("Test", 'value')
True
>>> r.get("Test")
'value'
Lua Code:
local redis = require "resty.redis"  -- load the lua-resty-redis client

local red = redis:new()
red:set_timeout(500)  -- 500 ms connect/read timeout

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("Redis failed to connect: ", err)
    return
end
Nginx conf:
server {
    listen 8080;
    server_name xxx.com;

    access_log /var/log/nginx/xxxx_access.log;
    error_log /var/log/nginx/xxxx_error.log debug;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header REMOTE_ADDR $http_cf_connecting_ip;
    proxy_set_header X-Real-IP $http_cf_connecting_ip;
    proxy_set_header X-URI $uri;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Forwarded-Protocol $scheme;

    location / {
        rewrite_by_lua_file '/var/www/xxxx/nginx/add_header_web.lua';
        proxy_pass http://xxxxx/;
    }
}
Environment
Redis 3.2.0
Nginx: openresty/1.7.7.2
configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-I/opt/ngx_openresty-1.7.7.2/build/luajit-root/usr/local/openresty/luajit/include/luajit-2.1 -DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2 -O2' --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.57 --add-module=../xss-nginx-module-0.04 --add-module=../ngx_coolkit-0.2rc2 --add-module=../set-misc-nginx-module-0.28 --add-module=../form-input-nginx-module-0.10 --add-module=../encrypted-session-nginx-module-0.03 --add-module=../srcache-nginx-module-0.28 --add-module=../ngx_lua-0.9.14 --add-module=../ngx_lua_upstream-0.02 --add-module=../headers-more-nginx-module-0.25 --add-module=../array-var-nginx-module-0.03 --add-module=../memc-nginx-module-0.15 --add-module=../redis2-nginx-module-0.11 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.13 --add-module=../rds-csv-nginx-module-0.05 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/opt/ngx_openresty-1.7.7.2/build/luajit-root/usr/local/openresty/luajit/lib -Wl,-rpath,/usr/local/lib' --conf-path=/etc/nginx/nginx.conf --with-http_realip_module --with-http_stub_status_module --with-http_geoip_module --with-http_ssl_module --with-http_sub_module --add-module=/opt/nginxmodules/nginx-upload-progress-module --add-module=/opt/nginxmodules/nginx-push-stream-module
Update:
I just updated my OpenResty to the latest version and things went back to working.
I was having this same issue but figured it out after a bit of help from a Chinese thread on a Google group. Essentially, you have to pass an options table with a pool name when connecting. I don't know why, but it worked for me.
Here is my code:
local options_table = {}
options_table["pool"] = "docker_server"
local ok, err = red:connect("10.211.55.8", 6379, options_table)
Related
JRuby 9.3.6 (hence Ruby 2.6.8), Rails 6.1.6.1, in production using SSL (wss) with Devise, Puma and Nginx.
Locally, ActionCable runs without problems. On the external server, ActionCable establishes a WebSocket connection, leading to nginx: GET /cable HTTP/1.1" 101 210, the user gets verified correctly from the expanded session_id in the cookie, and after "Successfully upgraded to WebSocket" the browser receives a {"type":"welcome"} and pings.
The ActionCable JavaScript then sends a request to subscribe to a channel, but that doesn't succeed.
I tried a lot with the Nginx configuration, e.g. switching to Passenger for the /cable location in nginx, and after five days I even switched from calling the server side of ActionCable with a plain "new WebSocket" in JavaScript to the client-side implementation of ActionCable as it is designed to be used (with import maps and actioncable.esm.js), but it didn't solve the main problem.
The logfiles:
production.log:
[ActionCable connect in app/channels/application_cable/connection.rb:] WebSocket error occurred: Broken pipe -
Once domain.com is called, this error occurs every 0.2 seconds, and it continues even after the page is closed. 0.2 seconds is also the frequency with which the browser sends the requests to subscribe to the channel. For a long time I assumed it had to do with SSL, but I cannot pin the problem down. So now I assume the "broken pipe" is a problem between the JRuby app and ActionCable on the server side, but I am not sure, and I don't actually know how to troubleshoot there.
Additionally, there is a warning in puma.stderr.log:
warning: thread "Ruby-0-Thread-27:
/home/my_app/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75" terminated with exception (report_on_exception is true):
ArgumentError: mode not supported for this object: r
Starting redis-cli and running 'monitor':
1660711192.522723 [1 127.0.0.1:33630] "select" "1"
1660711192.523545 [1 127.0.0.1:33630] "client" "setname" "ActionCable-PID-199512"
publishing to MessagesChannel_1 works:
1660711192.523831 [1 127.0.0.1:33630] "publish" "messages_1" "{\"message\":\"message-text\"}"
In comparison, this looks different in the local development configuration:
1660712957.712189 [1 127.0.0.1:46954] "select" "1"
1660712957.712871 [1 127.0.0.1:46954] "client" "setname" "ActionCable-PID-18600"
1660712957.713495 [1 127.0.0.1:46954] "subscribe" "_action_cable_internal"
1660712957.716100 [1 127.0.0.1:46954] "subscribe" "messages_1"
1660712957.974486 [1 127.0.0.1:46952] "publish" "messages_3" "{\"message\":\"message-text\"}"
So what is "_action_cable_internal", and why doesn't it show up in production?
I found the code for the actioncable gem, added 'p #pubsub' at the end of the def pubsub method in gems/actioncable-6.1.6.1/lib/action_cable/server/base.rb, and compared that output with the local configuration.
Locally the output includes:
#thread=#<Thread:0x5326bff6#/home/me_the_user/.rbenv/versions/jruby-9.2.16.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 sleep>
which corresponds to the info at the server:
#thread=#<Thread:0x5f4462e1#/home/me_the_user/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 dead>
So it looks like the "warning" was actually an error.
I am also not sure whether the output of wscat / curl is normal or reports an error:
root#server:~# wscat -c wss://domain.tld
error: Unexpected server response: 302
Which could be normal due to missing '/cable'.
But:
root#server:~# wscat -c wss://domain.tld/cable
error: Unexpected server response: 404
root#server:~# curl -I https://domain.tld/cable
HTTP/1.1 404 Not Found
The configurations:
nginx.conf:
http {
    upstream app {
        # Path to Puma SOCK file, as defined previously
        server unix:///var/www/my_app/shared/sockets/puma.sock fail_timeout=0;
    }

    server {
        listen 443 ssl default_server;
        server_name domain.com www.domain.com;

        include snippets/ssl-my_app.com.conf;
        include snippets/ssl-params.conf;

        root /var/www/my_app/public;
        try_files $uri /index.html /index.htm;

        location /cable {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Ssl on;
            proxy_set_header X-Forwarded-Proto https;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            add_header X-Frame-Options SAMEORIGIN always;
            proxy_pass http://app;
        }

        rails_env production;
    }
}
cable.yml:
production:
  adapter: redis
  url: redis://127.0.0.1:6379/1
  ssl_params:
    verify_mode: <%= OpenSSL::SSL::VERIFY_NONE %>
initializer/redis.rb:
$redis = Redis.new(:host => 'domain.com/cable', :port => 6379)
routes.rb:
mount ActionCable.server => '/cable'
config/environments/production.rb:
config.force_ssl = true
config.action_cable.allowed_request_origins = [/http:\/\/*/, /https:\/\/*/]
config.action_cable.allow_same_origin_as_host = true
config.action_cable.url = "wss://domain.com/cable"
I assume it has to do with the SSL, but I am not an expert in these configurations, so thank you very much for any help.
The problem was caused by hardware: I am using a so-called 'airbox' in France, which is a mobile internet access point offering WLAN.
So the WebSocket connection was always closed because the ActionCable WebSocket certificate was not "known" and mobile internet is stricter.
Hence I created a pseudo-WebSocket: if the WebSocket connection is closed immediately, the user's browser polls every 2.5 seconds for anything that would have been sent via the WebSocket.
I've set up a new Ubuntu 18.04 server and deployed my Phoenix app, but am getting a 502 error when trying to access it.
I don't yet have a domain name because I will be transferring one from another server, so I'm just trying to connect with the IP address.
The Phoenix app is deployed and running, and I can ping it with edeliver.
Prod conf:
config :app, AppWeb.Endpoint,
  load_from_system_env: false,
  url: [host: "127.0.0.1", port: 4013],
  cache_static_manifest: "priv/static/cache_manifest.json",
  check_origin: true,
  root: ".",
  version: Mix.Project.config[:version]

config :logger, level: :info

config :phoenix, :serve_endpoints, true

import_config "prod.secret.exs"
Nginx conf:
server {
    listen 80;
    server_name _;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:4013;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Nginx Error log:
2020/05/14 22:28:23 [error] 22908#22908: *24 connect() failed (111: Connection refused) while connecting to upstream, client: ipaddress, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4013/", host: "ipaddress"
Edit:
Last two entries of OTP logs confirming app is alive
===== ALIVE Fri May 15 07:33:19 UTC 2020
===== ALIVE Fri May 15 07:48:19 UTC 2020
Edit 2:
I have posted a Gist detailing all the steps I have taken going from a clean Ubuntu box to where I am now here: https://gist.github.com/phollyer/cb3428e6c23b11fadc5105cea1379a7c
Thanks
You have to add server: true to your configuration, like:
config :wtmitu, WtmituWeb.Endpoint,
  server: true, # <-- this line
  load_from_system_env: false,
  ...
You don't have to add it to the dev environment because mix phx.server is doing it for you.
The Doc
This has been resolved as follows:
There were two problems that required resolving.
Adding config :app, AppWeb.Endpoint, server: true to either prod.secret.exs or prod.exs was required.
I had a running process left over from mistakenly deploying staging to the same server initially. I originally logged in to the server and stopped staging with ./bin/app stop; maybe this left a process running, or maybe I somehow started the process by mistake later on. Either way, I used ps ux to list the running processes and found that one of them listed staging in its path, so I killed all running processes related to the deployment, both staging and production, with kill -9 processId, re-deployed to production, and all is now fine.
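For reference, a rough sketch of that cleanup (the grep pattern and PID below are placeholders, not the exact values from my server):
$ ps ux | grep staging      # look for leftover release processes
$ kill -9 12345             # replace 12345 with the PID(s) found above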
I have created a web application with Django Channels which I am facing problems with while trying to set it up under Supervisor.
To start with, the application works well locally.
Remotely (on an AWS EC2 instance with Ubuntu Server 18.04 LTS), when run with the command daphne -b 0.0.0.0 -p 8000 mysite.asgi:application, it also works well.
However, I cannot make it work with Supervisor. I followed the instructions from the official Django Channels docs (https://channels.readthedocs.io/en/latest/deploying.html) and therefore have:
nginx config file:
upstream channels-backend {
    server localhost:8000;
}

server {
    server_name www.example.com;
    keepalive_timeout 5;
    client_max_body_size 1m;

    access_log /home/ubuntu/django_app/logs/nginx-access.log;
    error_log /home/ubuntu/django_app/logs/nginx-error.log;

    location /static/ {
        alias /home/ubuntu/django_app/mysite/staticfiles/;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    listen 80;
    server_name www.example.com;

    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    return 404; # managed by Certbot
}
Supervisor config file:
[fcgi-program:asgi]
socket=tcp://localhost:8000
directory=/home/ubuntu/django_app/mysite
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/django_app/logs/supervisor_log.log
redirect_stderr=true
When set this way, the webpage does not work (504 Gateway Time-out). In the Supervisor log file I see:
2018-11-14 14:48:21,511 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-14 14:48:21,516 INFO HTTP/2 support enabled
2018-11-14 14:48:21,517 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,015 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,025 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-14 14:48:22,026 CRITICAL Listen failure: [Errno 2] No such file or directory: '1416' -> b'/run/daphne/daphne0.sock.lock'
2018-11-14 14:48:22,091 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,096 INFO HTTP/2 support enabled
2018-11-14 14:48:22,097 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,135 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,152 INFO HTTP/2 support enabled
2018-11-14 14:48:22,153 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,237 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,241 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,242 INFO Configuring endpoint unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,242 CRITICAL Listen failure: [Errno 2] No such file or directory: '1419' -> b'/run/daphne/daphne3.sock.lock'
2018-11-14 14:48:22,252 INFO Configuring endpoint unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,252 CRITICAL Listen failure: [Errno 2] No such file or directory: '1420' -> b'/run/daphne/daphne2.sock.lock'
etc.
Please note that in the Supervisor command the Daphne process is invoked differently (with a different set of parameters) than I ran it before: instead of parameters for an address and port, there are parameters for a socket and a file descriptor (about which I do not know much at all). I suspect this is the reason for the error.
Any help or suggestions will be highly appreciated.
The relevant package versions:
channels==2.1.2
channels-redis==2.2.1
daphne==2.2.1
Django==2.1.2
EDIT:
When I create empty files for the socket files referenced in the Daphne command in the Supervisor config file, i.e. /run/daphne/daphne0.sock, /run/daphne/daphne1.sock, etc., the log file states the following:
2018-11-15 10:24:38,289 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,290 INFO HTTP/2 support enabled
2018-11-15 10:24:38,280 INFO Configuring endpoint fd:fileno=0
2018-11-15 10:24:38,458 INFO Listening on TCP address 127.0.0.1:8000
2018-11-15 10:24:38,475 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,476 CRITICAL Listen failure: Couldn't listen on any:b'/run/daphne/daphne0.sock': [Errno 98] Address already in use.
Question: should these files not be empty? What should they include?
In the supervisor ASGI config file, in the following line
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
replace --fd 0 with --endpoint fd:fileno=0.
Issue: https://github.com/django/daphne/issues/234
Fabio's answer, replacing the file descriptor parameter with the endpoint parameter, presents a quick workaround for this problem (which appeared to be a bug in the Daphne code).
However, the fix in the Daphne repository was committed quickly, so the original instructions now work as written.
As a side note (for people still getting the critical listen failures I wrote about in the original question), please make sure the physical location for the socket files (/run/daphne/ in my case) exists and is accessible. I spent too much time before discovering that simply creating the daphne directory in /run does the job (even though I run everything with sudo). As a precaution, one may consider redirecting the socket files to another directory, e.g. /tmp, which allows creating a directory without sudo permissions.
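For example, a minimal sketch of preparing that directory, assuming Daphne runs as the ubuntu user (as in the paths above); note that /run is a tmpfs, so this has to be repeated or automated after a reboot:
$ sudo mkdir -p /run/daphne
$ sudo chown ubuntu:ubuntu /run/daphne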
In order to monitor my Docker containers, I've decided to expose the Docker remote API through nginx with the following rule:
server {
    listen 1234;
    server_name xxx.xxx.xxx.xxx;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unix:/var/run/docker.sock;
    }
}
But in the nginx error log, I get the following error:
connect() to unix:/var/run/docker.sock failed (13: Permission denied)
The reason is that docker.sock is owned by the docker group, while nginx runs as the www-data group.
What is the best way to solve this problem?
The reason is that docker.sock is owned by the docker group,
while nginx runs as the www-data group.
What is the best way to solve this problem?
For this issue, you can add the user under which nginx is running to the docker group:
usermod -a -G www-data,docker user
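For example, if the nginx workers run as www-data (the Debian/Ubuntu default; check the user directive in nginx.conf), a minimal sketch would be:
$ sudo usermod -aG docker www-data    # add the nginx user to the docker group
$ sudo systemctl restart nginx        # restart so the workers pick up the new group membership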
You can open the HTTP socket in the daemon by adding the following to the ExecStart line in the daemon's service file:
-H <ip address>:2375
You can find the location of the configuration file in the output of the command:
systemctl status docker
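A sketch of one way to do that with a systemd drop-in (the ExecStart shown is the stock Ubuntu default; adjust it to whatever your current unit file contains):
$ sudo systemctl edit docker
# in the override file that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd -H fd:// -H tcp://<ip address>:2375
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Keep in mind that an unauthenticated TCP socket on 2375 gives full control of the Docker daemon to anyone who can reach it, so restrict it to a trusted network or put TLS in front of it.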
You can set permissions on the Docker socket:
sudo chmod a+rw /var/run/docker.sock
I've installed Redmine on an Ubuntu 13.04 server.
This installation worked fine and I confirmed Redmine was working through the WEBrick server (as per redmine documentation).
To make things more stable I want to run Redmine behind Nginx & Thin.
With this part I run into problems as Nginx reports getting timeouts:
2013/07/19 07:47:32 [error] 1051#0: *10 upstream timed out (110: Connection timed out) while connecting to upstream, .......
Thin Configuration:
---
chdir: /home/redmine/app/redmine
environment: production
address: 127.0.0.1
port: 3000
timeout: 5
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 128
max_persistent_conns: 64
require: []
wait: 10
servers: 1
daemonize: true
I can see Thin is running, the pid file is created and a logfile is started.
I see no further additions to the logfile when doing requests.
Nginx configuration:
upstream redmine {
    server 127.0.0.1:3000;
}

server {
    server_name redmine.my.domain;
    listen 443;

    ssl on;
    ssl_certificate /home/redmine/sites/redmine/certificates/server.crt;
    ssl_certificate_key /home/redmine/sites/redmine/certificates/server.key;

    access_log /home/redmine/sites/redmine/logs/server.access.nginx.log;
    error_log /home/redmine/sites/redmine/logs/server.error.nginx.log;

    root /home/redmine/app/redmine;

    location / {
        try_files $uri @ruby;
    }

    location @ruby {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 5;
        proxy_pass http://redmine;
    }
}
I can see additions to the Nginx log.
Can anyone give me a hint on where to find the problem in this?
Current result of iptables -L
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:3000
ACCEPT tcp -- anywhere anywhere tcp dpt:https
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
The error is because your firewall (iptables) blocked the port.
Roll back your iptables config, then issue the following command:
iptables -I INPUT -i lo -p tcp --dport 3123 -j ACCEPT
Remember to save the setting by:
service iptables save
More information about iptables: https://help.ubuntu.com/community/IptablesHowTo
p.s. sudo may be needed for the above commands.
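Note that service iptables save is a Red Hat convention; on Ubuntu a common way to persist the rules is the iptables-persistent package (a sketch, assuming an apt-based install):
$ sudo apt-get install iptables-persistent
$ sudo netfilter-persistent save    # on older releases: sudo service iptables-persistent save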