I have been setting up CKAN 2.9 in a local Debian Buster VM, following the instructions for installing from source and deploying.
I got CKAN running with NGINX, uWSGI and Supervisor; however, I ran into trouble when I tried to change the URL path under which CKAN runs.
CKAN runs fine at http://192.168.60.11/, but I want it to run at http://192.168.60.11/ckan
To do so, I changed ckan.site_url in ckan.ini to ckan.site_url = http://192.168.60.11/ckan
and the NGINX default site config to:
location /ckan/ {
proxy_pass http://127.0.0.1:8080/;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_cache cache;
proxy_cache_bypass $cookie_auth_tkt;
proxy_no_cache $cookie_auth_tkt;
proxy_cache_valid 30m;
proxy_cache_key $host$scheme$proxy_host$request_uri;
# In emergency comment out line to force caching
# proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
}
Then I reloaded nginx.service and restarted supervisor.service.
http://192.168.60.11/ckan/ brings me to the CKAN landing page, but none of the CSS/JS/images are loaded. In the browser I see loading errors such as: Loading failed for the <script> with source “http://192.168.60.11/webassets/vendor/d8ae4bed_jquery.js”.
And if I click the link to datasets I am directed to http://192.168.60.11/dataset/, not http://192.168.60.11/ckan/dataset/.
And in /etc/ckan/default/uwsgi.ERR:
2020-08-30 19:19:41,554 INFO [ckan.config.middleware.flask_app] / render time 0.114 seconds
[pid: 7699|app: 0|req: 53/53] 127.0.0.1 () {42 vars in 767 bytes} [Sun Aug 30 19:19:41 2020] GET / => generated 13765 bytes in 122 msecs (HTTP/1.0 200) 3 headers in 106 bytes (1 switches on core 0)
So it seems that CKAN is missing some configuration parameter to make it aware of the URL path prefix. Any ideas how to set this up? Thanks.
# ckan.ini
ckan.site_url = "http://public.domain" # no path suffix here
ckan.root_path = "/prefix"
# nginx config
location /prefix/ {
proxy_pass http://127.0.0.1:8080; # no path suffix here
}
The answer was in the CKAN documentation: set ckan.root_path (as in the snippet above), and the nginx location just has to change from / to /custom/path.
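Applied to the setup in the question, that would look roughly like this (a sketch only; following the answer above, the path suffix is removed from ckan.site_url and moved into ckan.root_path, and proxy_pass keeps no trailing slash so the /ckan prefix is passed through to uWSGI):
# ckan.ini
ckan.site_url = http://192.168.60.11
ckan.root_path = /ckan
# NGINX default site
location /ckan/ {
proxy_pass http://127.0.0.1:8080;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
After that, reload nginx and restart the Supervisor-managed uWSGI process again.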
There seems to be quite a bit floating around the internet about problems such as this, but nothing I have found and tried quite pertains to my particular issue.
I had a functional Plumber API working behind Nginx on DigitalOcean. Alas, when I installed PHP 8.0.2 and upgraded Ubuntu to 22.04 (and overwrote my conf files, then reconfigured them!), it ceased to work. I can see that my R process is listening on port 3000, and myApi-plumber.service is also pointed at port 3000, yet when I test http://127.0.0.1:3000 as root in the console it returns a 404 error instead of the expected 405 (I have provided all the info below).
I have not altered any settings in my plumber.R file since it was functional, and it still works perfectly on my local machine, which leads me to suspect a server configuration issue.
I would think I've likely done something incorrectly since then, but I've spent days on this and cannot work out what it might be. I have kept trailing slashes consistent, restarted nginx and rebooted the droplet. I have since spun up new droplets and reinstalled everything from scratch, but nothing seems to work. My firewall is also configured as it should be.
Here are my settings:
Nginx version 1.18.0
/etc/nginx/sites-available/default
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.php index.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location /myApi/ {
proxy_pass http://127.0.0.1:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
}
}
curl -i http://127.0.0.1:3000
(Here I would expect a '405 - Method not allowed' for a POST API with no JSON content being passed to it)
HTTP/1.1 404 Not Found
Date: Fri, 27 Jan 2023 13:11:52 GMT
Access-Control-Allow-Origin: *
Content-Type: application/json
Content-Length: 36
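(For what it's worth, a plain curl -i like the one above sends a GET request; to separate a routing problem from a method problem, one could POST directly to the route declared in plumber.R. The /myApi path below is only an assumption for the sketch — substitute whatever the #* @post annotation actually defines.)
curl -i -X POST http://127.0.0.1:3000/myApi \
  -H "Content-Type: application/json" \
  -d '{"example": "payload"}'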
Webpage form POST request
(Here I would expect a JSON object to be passed to the Plumber API and run through an R script, which compiles a file from server-side files that then becomes available as a link on the page.)
function foo(myCallback) {
$.ajax({
type: "POST",
url: "http://XX.XX.XX.XX/myAPI/",
data: myAPIString,
dataType: "json",
success: myCallback,
});
}
This returns a 404 XHR error in the network tab of the browser console.
sudo lsof -i -P -n | grep LISTEN
R 717 root 15u IPv4 XXXXX 0t0 TCP 127.0.0.1:3000
plumber-API.service - Plumber API
Loaded: loaded (/etc/systemd/system/plumber-API.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-01-27 10:40:03 UTC; 2h 45min ago
Main PID: 717 (R)
Tasks: 4 (limit: 1131)
Memory: 210.8M
CGroup: /system.slice/plumber-myTree.service
└─717 /usr/lib/R/bin/exec/R --no-echo --no-restore -e pr <- plumber::pr('/var/plumber/myApi/plumber.R'); pr$setDocs(FALSE); pr$run(port=3000)
Jan 27 10:40:03 ShakenPolygamy systemd[1]: Started Plumber API.
Jan 27 10:40:10 ShakenPolygamy Rscript[717]: Loading required package: maps
Jan 27 10:40:13 ShakenPolygamy Rscript[717]: Running plumber API at http://127.0.0.1:3000
Very frustrating indeed. Any help is so much appreciated!
JRuby 9.3.6 (hence Ruby 2.6.8), Rails 6.1.6.1, in production using SSL (wss) with Devise, Puma and nginx.
Locally, ActionCable runs without problems. On the external server, ActionCable establishes a WebSocket connection (nginx logs "GET /cable HTTP/1.1" 101 210), the user is verified correctly from the expanded session_id in the cookie, and after "Successfully upgraded to WebSocket" the browser receives a {"type":"welcome"} message and pings.
The ActionCable JavaScript then sends a request to subscribe to a channel, but that has no success.
I tried a lot regarding the nginx configuration, e.g. switching to Passenger for the ActionCable location /cable, and after 5 days I even changed from calling the server side of ActionCable with a plain "new WebSocket" in JavaScript to the client-side implementation of ActionCable as it is designed to be used (with import maps and actioncable.esm.js), but it didn't solve the main problem.
The logfiles:
production.log:
[ActionCable connect in app/channels/application_cable/connection.rb:] WebSocket error occurred: Broken pipe -
If domain.com is called once, this error occurs every 0.2 seconds, and it continues even after the page is closed. 0.2 seconds is the same frequency at which the browser sends the requests to subscribe to the channel. For a long time I assumed it had to do with SSL, but I cannot pin the problem down. So now I assume that the "broken pipe" is a problem between the JRuby app and ActionCable on the server side, but I am not sure, and I don't actually know how to troubleshoot there.
Additionally, there is a warning in puma.stderr.log:
warning: thread "Ruby-0-Thread-27:
/home/my_app/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75" terminated with exception (report_on_exception is true):
ArgumentError: mode not supported for this object: r
Starting redis-cli and running 'monitor':
1660711192.522723 [1 127.0.0.1:33630] "select" "1"
1660711192.523545 [1 127.0.0.1:33630] "client" "setname" "ActionCable-PID-199512"
publishing to MessagesChannel_1 works:
1660711192.523831 [1 127.0.0.1:33630] "publish" "messages_1" "{\"message\":\"message-text\"}"
In comparison in the local development configuration, this looks different:
1660712957.712189 [1 127.0.0.1:46954] "select" "1"
1660712957.712871 [1 127.0.0.1:46954] "client" "setname" "ActionCable-PID-18600"
1660712957.713495 [1 127.0.0.1:46954] "subscribe" "_action_cable_internal"
1660712957.716100 [1 127.0.0.1:46954] "subscribe" "messages_1"
1660712957.974486 [1 127.0.0.1:46952] "publish" "messages_3" "{\"message\":\"message-text\"}"
So what is "_action_cable_internal", and why does it not appear in production?
I found the code for the actioncable gem, added p @pubsub at the end of the def pubsub method in gems/actioncable-6.1.6.1/lib/action_cable/server/base.rb, and compared that output with the local configuration.
Locally the output includes:
@thread=#<Thread:0x5326bff6@/home/me_the_user/.rbenv/versions/jruby-9.2.16.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 sleep>
which corresponds to this output on the server:
@thread=#<Thread:0x5f4462e1@/home/me_the_user/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 dead>
So it looks like the "warning" was actually an error.
Also, I am not sure whether the output of wscat / curl is normal or reports an error:
root@server:~# wscat -c wss://domain.tld
error: Unexpected server response: 302
which could be normal due to the missing '/cable'.
But:
root@server:~# wscat -c wss://domain.tld/cable
error: Unexpected server response: 404
root@server:~# curl -I https://domain.tld/cable
HTTP/1.1 404 Not Found
The configurations:
nginx.conf:
http {
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:///var/www/my_app/shared/sockets/puma.sock fail_timeout=0;
}
server {
listen 443 ssl default_server;
server_name domain.com www.domain.com;
include snippets/ssl-my_app.com.conf;
include snippets/ssl-params.conf;
root /var/www/my_app/public;
try_files $uri /index.html /index.htm;
location /cable {
proxy_pass http://app;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-Proto https;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
add_header X-Frame-Options SAMEORIGIN always;
proxy_pass http://app;
}
rails_env production;
}
}
cable.yml:
production:
adapter: redis
url: redis://127.0.0.1:6379/1
ssl_params:
verify_mode: <%= OpenSSL::SSL::VERIFY_NONE %>
initializer/redis.rb:
$redis = Redis.new(:host => 'domain.com/cable', :port => 6379)
routes.rb:
mount ActionCable.server => '/cable'
config/environments/production.rb:
config.force_ssl = true
config.action_cable.allowed_request_origins = [/http:\/\/*/, /https:\/\/*/]
config.action_cable.allow_same_origin_as_host = true
config.action_cable.url = "wss://domain.com/cable"
I assume it has to do with the SSL, but I am not an expert on configuration, so thank you very much for any help.
The problem was caused by hardware: I am using a so-called 'airbox' in France, which is a mobile internet access point offering WLAN.
The WebSocket connection was always closed because the ActionCable WebSocket certificate was not "known" and the mobile connection is stricter about that.
Hence I created a pseudo-WebSocket: if the WebSocket connection is closed immediately, the user's browser polls every 2.5 seconds to ask whether there is anything that would have been sent via WebSocket.
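A rough sketch of such a polling fallback, written against the @rails/actioncable client (this is an illustration of the idea, not the exact code from my app; the channel name MessagesChannel and the /messages/pending endpoint are assumptions):
import { createConsumer } from "@rails/actioncable";
const consumer = createConsumer();
let fallbackTimer = null;
consumer.subscriptions.create("MessagesChannel", {
  connected() {
    // the real WebSocket is working again: stop polling
    if (fallbackTimer) { clearInterval(fallbackTimer); fallbackTimer = null; }
  },
  disconnected() {
    // WebSocket closed (e.g. by the mobile network): poll every 2.5 seconds instead
    if (fallbackTimer) return;
    fallbackTimer = setInterval(async () => {
      const response = await fetch("/messages/pending"); // hypothetical JSON endpoint listing undelivered messages
      const messages = await response.json();
      messages.forEach((msg) => this.received(msg));
    }, 2500);
  },
  received(data) {
    // handle the message exactly as if it had arrived over the WebSocket
    console.log(data);
  }
});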
I've set up a new Ubuntu 18.04 server and deployed my Phoenix app, but I am getting a 502 error when trying to access it.
I don't yet have a domain name because I will be transferring one from another server, so just trying to connect with the IP address.
The Phoenix app is deployed and running, and I can ping it with edeliver.
Prod conf:
config :app, AppWeb.Endpoint,
load_from_system_env: false,
url: [host: "127.0.0.1", port: 4013],
cache_static_manifest: "priv/static/cache_manifest.json",
check_origin: true,
root: ".",
version: Mix.Project.config[:version]
config :logger, level: :info
config :phoenix, :serve_endpoints, true
import_config "prod.secret.exs"
Nginx conf:
server {
listen 80;
server_name _;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://127.0.0.1:4013;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Nginx Error log:
2020/05/14 22:28:23 [error] 22908#22908: *24 connect() failed (111: Connection refused) while connecting to upstream, client: ipaddress, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4013/", host: "ipaddress"
Edit:
Last two entries of OTP logs confirming app is alive
===== ALIVE Fri May 15 07:33:19 UTC 2020
===== ALIVE Fri May 15 07:48:19 UTC 2020
Edit 2:
I have posted a Gist detailing all the steps I have taken going from a clean Ubuntu box to where I am now here: https://gist.github.com/phollyer/cb3428e6c23b11fadc5105cea1379a7c
Thanks
You have to add server: true to your configuration, like:
config :wtmitu, WtmituWeb.Endpoint,
server: true, # <-- this line
load_from_system_env: false,
...
You don't have to add it to the dev environment because mix phx.server is doing it for you.
The Doc
This has been resolved as follows:
There were two problems that required resolving.
Adding config :app, AppWeb.Endpoint, server: true to either prod.secret.exs or prod.exs was required.
I had a running process left over from initially deploying staging to the same server by mistake. I had originally logged in to the server and stopped staging with ./bin/app stop; maybe this left a process running, or maybe I somehow started the process by mistake later on. In any case, I used ps ux to list the running processes and found that one of them listed staging in its path, so I killed all running processes related to the deployment, both staging and production, with kill -9 processId, re-deployed to production, and all is now fine (a rough sketch of the cleanup follows).
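For reference, the cleanup looked roughly like this (the release name app and the grep pattern are assumptions; adapt them to your own deployment):
# find leftover release processes (look for 'staging' in the command path)
ps ux | grep -i app
# stop the stale release cleanly if it still responds, otherwise kill it by PID
./bin/app stop
kill -9 <pid>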
I have created a web application with Django Channels which I am having problems setting up with Supervisor.
To start with, the application works well locally.
Remotely (on an AWS EC2 instance with Ubuntu Server 18.04 LTS), when run with the command daphne -b 0.0.0.0 -p 8000 mysite.asgi:application, it also works well.
However, I cannot make it work with Supervisor. I followed the instructions from the official Django Channels docs (https://channels.readthedocs.io/en/latest/deploying.html) and therefore have:
nginx config file:
upstream channels-backend {
server localhost:8000;
}
server {
server_name www.example.com;
keepalive_timeout 5;
client_max_body_size 1m;
access_log /home/ubuntu/django_app/logs/nginx-access.log;
error_log /home/ubuntu/django_app/logs/nginx-error.log;
location /static/ {
alias /home/ubuntu/django_app/mysite/staticfiles/;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
listen 80;
server_name www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
return 404; # managed by Certbot
}
Supervisor config file:
[fcgi-program:asgi]
socket=tcp://localhost:8000
directory=/home/ubuntu/django_app/mysite
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/django_app/logs/supervisor_log.log
redirect_stderr=true
When set this way, the webpage does not work (504 Gateway Time-out). In the Supervisor log file I see:
2018-11-14 14:48:21,511 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-14 14:48:21,516 INFO HTTP/2 support enabled
2018-11-14 14:48:21,517 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,015 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,025 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-14 14:48:22,026 CRITICAL Listen failure: [Errno 2] No such file or directory: '1416' -> b'/run/daphne/daphne0.sock.lock'
2018-11-14 14:48:22,091 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,096 INFO HTTP/2 support enabled
2018-11-14 14:48:22,097 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,135 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,152 INFO HTTP/2 support enabled
2018-11-14 14:48:22,153 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,237 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,241 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,242 INFO Configuring endpoint unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,242 CRITICAL Listen failure: [Errno 2] No such file or directory: '1419' -> b'/run/daphne/daphne3.sock.lock'
2018-11-14 14:48:22,252 INFO Configuring endpoint unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,252 CRITICAL Listen failure: [Errno 2] No such file or directory: '1420' -> b'/run/daphne/daphne2.sock.lock'
etc.
Please note that in the Supervisor command the Daphne process is invoked differently (with a different set of parameters) than I ran it before: instead of parameters for the address and port, there are parameters for a socket and a file descriptor (about which I do not know much at all). I suspect this is the reason for the error.
Any help or suggestions will be highly appreciated.
The relevant packages versions:
channels==2.1.2
channels-redis==2.2.1
daphne==2.2.1
Django==2.1.2
EDIT:
When I create empty files for the socket files referenced in the Daphne command in the Supervisor config file, i.e. /run/daphne/daphne0.sock, /run/daphne/daphne1.sock, etc., the log file states the following:
2018-11-15 10:24:38,289 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,290 INFO HTTP/2 support enabled
2018-11-15 10:24:38,280 INFO Configuring endpoint fd:fileno=0
2018-11-15 10:24:38,458 INFO Listening on TCP address 127.0.0.1:8000
2018-11-15 10:24:38,475 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,476 CRITICAL Listen failure: Couldn't listen on any:b'/run/daphne/daphne0.sock': [Errno 98] Address already in use.
Question: should these files not be empty? What should they include?
In the supervisor ASGI config file, in the following line
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
replace --fd 0 with --endpoint fd:fileno=0.
Issue: https://github.com/django/daphne/issues/234
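With that change, the command line in the [fcgi-program:asgi] section of the Supervisor config above would read as follows (everything else stays the same; the paths are the ones from the question):
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers mysite.asgi:application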
Fabio's answer, replacing the file descriptor parameter with the endpoint parameter, is a quick workaround for this problem (which turned out to be a bug in the Daphne code).
However, a fix was quickly committed to the Daphne repository, so the original instructions now work as written.
As a side note (for people still getting the critical listen failures I described in the original question), make sure the location for the socket files (/run/daphne/ in my case) actually exists and is accessible - I spent far too long before discovering that simply creating the daphne folder inside /run does the job (even though I run everything with sudo). As a precaution, you may consider placing the socket files in another folder, e.g. /tmp, which lets you create a directory without sudo permissions.
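A minimal sketch of that directory setup (the ubuntu owner is an assumption based on the paths above; use whichever user Supervisor runs Daphne as):
sudo mkdir -p /run/daphne
sudo chown ubuntu:ubuntu /run/daphne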
My first ActiveStorage project was working fine in development (Puma only), but in production (nginx/Puma) I have an issue where downloads of big files arrive truncated.
For instance, an uploaded file sized 24.1 MB gives a 5 MB (truncated) download.
I mostly upload PDF files; the uploaded files are complete (checked on the server) and the preview works fine.
All environments use config.active_storage.service = :local.
config/storage.yml
local:
service: Disk
root: <%= Rails.root.join("storage") %>
The download URL is generated using rails_blob_path(document.doc, disposition: :inline).
I suspect an option or parameter in Puma or nginx is making the full download fail. So far I cannot see any errors in the logs.
/etc/nginx/sites-available/default.conf
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/deploy/rails/shared/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
root /home/deploy/rails/public;
try_files $uri/index.html $uri @app;
# Make site accessible from http://localhost/
server_name localhost;
location @app {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
error_page 500 502 503 504 /500.html;
keepalive_timeout 10;
client_max_body_size 4G;
}
Rails config/puma.rb
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
# Change to match your CPU core count
if rails_env == 'production'
workers 2
else
workers 1
end
# Min and Max threads per worker
threads 1, 6
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
if rails_env == 'production'
# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
require "active_record"
ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
end
Use proxy_http_version 1.1;
By default, nginx uses HTTP/1.0 when proxying to the upstream, which does not support chunked transfer encoding.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
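Applied to the configuration in the question, that means adding the directive inside the @app location (a sketch; the other directives are unchanged):
location @app {
proxy_pass http://app;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
Then check and reload nginx, e.g. with sudo nginx -t && sudo systemctl reload nginx.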