I have created a web application with Django Channels, and I am having problems setting it up under Supervisor.
To start with, the application works well locally.
Remotely (I use an AWS EC2 instance with Ubuntu Server 18.04 LTS), it also works well when run with the command daphne -b 0.0.0.0 -p 8000 mysite.asgi:application.
However, I cannot make it work with Supervisor. I followed the instructions from the official Django Channels docs (https://channels.readthedocs.io/en/latest/deploying.html) and therefore I have:
nginx config file:
upstream channels-backend {
server localhost:8000;
}
server {
server_name www.example.com;
keepalive_timeout 5;
client_max_body_size 1m;
access_log /home/ubuntu/django_app/logs/nginx-access.log;
error_log /home/ubuntu/django_app/logs/nginx-error.log;
location /static/ {
alias /home/ubuntu/django_app/mysite/staticfiles/;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
listen 80;
server_name www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
return 404; # managed by Certbot
}
Supervisor config file:
[fcgi-program:asgi]
socket=tcp://localhost:8000
directory=/home/ubuntu/django_app/mysite
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
numprocs=4
process_name=asgi%(process_num)d
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/django_app/logs/supervisor_log.log
redirect_stderr=true
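For completeness: after placing this file in Supervisor's include directory (I assume something like /etc/supervisor/conf.d/asgi.conf; the exact path is not shown above), I load it with:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status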
When configured this way, the webpage does not work (504 Gateway Time-out). In the Supervisor log file I see:
2018-11-14 14:48:21,511 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-14 14:48:21,516 INFO HTTP/2 support enabled
2018-11-14 14:48:21,517 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,015 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,025 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-14 14:48:22,026 CRITICAL Listen failure: [Errno 2] No such file or directory: '1416' -> b'/run/daphne/daphne0.sock.lock'
2018-11-14 14:48:22,091 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,096 INFO HTTP/2 support enabled
2018-11-14 14:48:22,097 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,135 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,152 INFO HTTP/2 support enabled
2018-11-14 14:48:22,153 INFO Configuring endpoint fd:fileno=0
2018-11-14 14:48:22,237 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,241 INFO Listening on TCP address 127.0.0.1:8000
2018-11-14 14:48:22,242 INFO Configuring endpoint unix:/run/daphne/daphne3.sock
2018-11-14 14:48:22,242 CRITICAL Listen failure: [Errno 2] No such file or directory: '1419' -> b'/run/daphne/daphne3.sock.lock'
2018-11-14 14:48:22,252 INFO Configuring endpoint unix:/run/daphne/daphne2.sock
2018-11-14 14:48:22,252 CRITICAL Listen failure: [Errno 2] No such file or directory: '1420' -> b'/run/daphne/daphne2.sock.lock'
etc.
Please note that in the Supervisor command Daphne is invoked differently (with a different set of parameters) than the way I ran it before: instead of parameters for address and port, there are parameters for a socket and a file descriptor (about which I do not know much). I suspect this is the reason for the error.
Any help or suggestions will be highly appreciated.
The relevant packages versions:
channels==2.1.2
channels-redis==2.2.1
daphne==2.2.1
Django==2.1.2
EDIT:
When I create empty files for the socket files that appear in the Daphne command in the Supervisor config file, i.e. /run/daphne/daphne0.sock, /run/daphne/daphne1.sock, etc., the log file states the following:
2018-11-15 10:24:38,289 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,290 INFO HTTP/2 support enabled
2018-11-15 10:24:38,280 INFO Configuring endpoint fd:fileno=0
2018-11-15 10:24:38,458 INFO Listening on TCP address 127.0.0.1:8000
2018-11-15 10:24:38,475 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
2018-11-15 10:24:38,476 CRITICAL Listen failure: Couldn't listen on any:b'/run/daphne/daphne0.sock': [Errno 98] Address already in use.
Question: should these files not be empty? What should they include?
In the supervisor ASGI config file, in the following line
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
replace --fd 0 with --endpoint fd:fileno=0.
Issue: https://github.com/django/daphne/issues/234
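For clarity, the resulting command line in the Supervisor config then becomes (everything else stays the same):
command=/home/ubuntu/django_app/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers mysite.asgi:application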
Fabio's answer, replacing the file descriptor parameter with the endpoint parameter, is a quick workaround for this problem (which turned out to be a bug in the Daphne code).
However, a fix was quickly committed to the Daphne repository, so the original instructions now work as written.
As a side note (for people still getting the critical listen failures described in the original question): make sure the directory for the socket files (/run/daphne/ in my case) actually exists and is accessible. I spent far too much time before discovering that simply creating the daphne folder in the /run directory does the job (even though I was running everything with sudo). As a precaution, you may consider placing the socket files in another directory, e.g. /tmp, which allows creating a directory without sudo permissions.
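A minimal sketch of creating and persisting /run/daphne, assuming the Daphne processes run as the ubuntu user (adjust user/group to your setup):
sudo mkdir -p /run/daphne
sudo chown ubuntu:ubuntu /run/daphne
# /run is a tmpfs, so the directory vanishes on reboot; a tmpfiles.d entry recreates it at boot
echo 'd /run/daphne 0755 ubuntu ubuntu -' | sudo tee /etc/tmpfiles.d/daphne.conf
sudo systemd-tmpfiles --create /etc/tmpfiles.d/daphne.conf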
jruby 9.3.6 (hence ruby 2.6.8), Rails 6.1.6.1 in production, using SSL (wss) with Devise, Puma and nginx.
Locally actioncable runs without problems. On the external server, actioncable establishes a WebSocket connection, leading to nginx: GET /cable HTTP/1.1" 101 210; the user gets verified correctly from the expanded session_id in the cookie, and after "Successfully upgraded to WebSocket" the browser receives a {"type":"welcome"} and pings.
The actioncable JavaScript then sends a request to subscribe to a channel, but that has no success.
I tried a lot regarding the nginx configuration, e.g. switching to Passenger for the actioncable location /cable in nginx, and after 5 days I even changed from calling the server side of actioncable with a plain "new WebSocket" in JavaScript to the client-side implementation of actioncable as it is designed to be used (using import maps and actioncable.esm.js), but it didn't solve the main problem.
The logfiles:
production.log:
[ActionCable connect in app/channels/application_cable/connection.rb:] WebSocket error occurred: Broken pipe -
If domain.com is called once, this error occurs every 0.2 seconds. If the page is closed, the error continues. The 0.2 seconds is the same frequency with which the browser sends the requests to subscribe to the channel. For a long time I assumed it had to do with SSL, but I can't pin down the problem. So now I assume that the "broken pipe" is a problem between the jruby app and actioncable on the server side. But I am not sure, and I actually don't know how to troubleshoot there.
Additionally, there is a warning in puma.stderr.log:
warning: thread "Ruby-0-Thread-27:
/home/my_app/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75" terminated with exception (report_on_exception is true):
ArgumentError: mode not supported for this object: r
Starting redis-cli and running 'monitor':
1660711192.522723 [1 127.0.0.1:33630] "select" "1"
1660711192.523545 [1 127.0.0.1:33630] "client" "setname" "ActionCable-PID-199512"
publishing to MessagesChannel_1 works:
1660711192.523831 [1 127.0.0.1:33630] "publish" "messages_1" "{\"message\":\"message-text\"}"
In comparison, in the local development configuration this looks different:
1660712957.712189 [1 127.0.0.1:46954] "select" "1"
1660712957.712871 [1 127.0.0.1:46954] "client" "setname" "ActionCable-PID-18600"
1660712957.713495 [1 127.0.0.1:46954] "subscribe" "_action_cable_internal"
1660712957.716100 [1 127.0.0.1:46954] "subscribe" "messages_1"
1660712957.974486 [1 127.0.0.1:46952] "publish" "messages_3" "{\"message\":\"message-text\"}"
So what is "_action_cable_internal", and why doesn't it take place in production?
I found the code for the actioncable gem and added 'p #pubsub' in gems/actioncable-6.1.6.1/lib/action_cable/server/base.rb at the end of the def pubsub -function and compared that information with the local configuration.
locally there is an info:
#thread=#<Thread:0x5326bff6#/home/me_the_user/.rbenv/versions/jruby-9.2.16.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 sleep>
which corresponds to the info at the server:
#thread=#<Thread:0x5f4462e1#/home/me_the_user/.rbenv/versions/jruby-9.3.6.0/lib/ruby/gems/shared/gems/actioncable-6.1.6.1/lib/action_cable/connection/stream_event_loop.rb:75 dead>
So it looks like the "warning" was an 'error'.
Also I am not sure, if the output of wscat / curl is normal or reports an error:
root@server:~# wscat -c wss://domain.tld
error: Unexpected server response: 302
Which could be normal due to the missing '/cable'.
But:
root@server:~# wscat -c wss://domain.tld/cable
error: Unexpected server response: 404
root@server:~# curl -I https://domain.tld/cable
HTTP/1.1 404 Not Found
The configurations:
nginx.conf:
http {
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:///var/www/my_app/shared/sockets/puma.sock fail_timeout=0;
}
server {
listen 443 ssl default_server;
server_name domain.com www.domain.com;
include snippets/ssl-my_app.com.conf;
include snippets/ssl-params.conf;
root /var/www/my_app/public;
try_files $uri /index.html /index.htm;
location /cable {
proxy_pass http://app;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-Proto https;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
add_header X-Frame-Options SAMEORIGIN always;
proxy_pass http://app;
}
rails_env production;
} }
cable.yml:
production:
adapter: redis
url: redis://127.0.0.1:6379/1
ssl_params:
verify_mode: <%= OpenSSL::SSL::VERIFY_NONE %>
initializer/redis.rb:
$redis = Redis.new(:host => 'domain.com/cable', :port => 6379)
routes.rb:
mount ActionCable.server => '/cable'
config/environments/production.rb:
config.force_ssl = true
config.action_cable.allowed_request_origins = [/http:\/\/*/, /https:\/\/*/]
config.action_cable.allow_same_origin_as_host = true
config.action_cable.url = "wss://domain.com/cable"
I assume it has to do with the SSL, but I am not an expert on configurations, so thank you very much for any help.
The problem was caused by hardware: I am using a so-called 'airbox' in France, a mobile internet access point offering WLAN.
So the WebSocket connection was always closed, because the actioncable WebSocket certificate was not "known" and mobile internet is stricter.
Hence I created a pseudo-WebSocket: if the WebSocket connection is closed immediately, the user's browser asks every 2.5 seconds whether there is something that would have been sent via the WebSocket.
I am currently working on an FPV robotics project that has two servers, flask/werkzeug and streamserver, serving HTTP traffic and streaming video to an external web server located on a different machine.
The way it is currently configured is like this:
http://1.2.3.4:5000 is the "web" traffic (command and control) served by flask/werkzeug
http://1.2.3.4:5001 is the streaming video channel served by streamserver.
I want to place them behind a https reverse proxy so that I can connect to this via https://example.com where "example.com" is set to 1.2.3.4 in my external system's hosts file.
I would like to:
Pass traffic to the internal connection at 1.2.3.4:5000 through as a secure connection. (certain services, like the gamepad, won't work unless it's a secure connection.)
Pass traffic to 1.2.3.4:5001 as a plain-text connection on the inside as "streamserver" does not support HTTPS connections.
. . . so that the "external" connections (to ports 5000 and 5001) are both secure connections as far as the outside world is concerned, such that:
[external system]-https://example.com:5000/5001----nginx----https://example.com:5000
\---http://example.com:5001
http://example.com:5000 or 5001 redirects to https.
All of the literature I have seen so far talks about:
Routing/load-balancing to different physical servers.
Doing everything within a Kubernates and/or Docker container.
My application is just an everyday, plain-vanilla server configuration, and the only reason I am even messing with HTTPS is because of the really annoying problems with things not working except in a secure context, which prevents me from completing my project.
I am sure this is possible, but the literature is either hideously confusing or appears to talk to a different use case.
A reference to a simple how-to would be the most useful.
Clear and unambiguous steps would also be appreciated.
Thanks in advance for any help you can provide.
This minimal config should provide public endpoints:
http://example.com/* => https://example.com/*
https://example.com/stream => http://1.2.3.4:5001/
https://example.com/* => https://1.2.3.4:5000/
# redirect to HTTPS
server {
listen 80;
listen [::]:80;
server_name example.com
www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com
www.example.com;
ssl_certificate /etc/nginx/ssl/server.cer;
ssl_certificate_key /etc/nginx/ssl/server.key;
location /stream {
proxy_pass http://1.2.3.4:5001/; # HTTP
}
# fallback location
location / {
proxy_pass https://1.2.3.4:5000/; # HTTPS
}
}
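Once this is in place, a quick sanity check from the external machine could look like this (use -k only if the certificate is not trusted by curl):
curl -I  http://example.com/          # should answer with the 301 redirect to HTTPS
curl -kI https://example.com/         # should return headers from the Flask app on 1.2.3.4:5000
curl -kI https://example.com/stream   # should return headers from streamserver on 1.2.3.4:5001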
First, credit where credit is due: @AnthumChris's answer is essentially correct. However, if you've never done this before, the following additional information may be useful:
There is actually too much information online, most of which is contradictory, possibly wrong, and unnecessarily complicated.
It is not necessary to edit the nginx.conf file. In fact, that's probably a bad idea.
The current open-source version of nginx can be used as a reverse proxy, despite the comments on the nginx website saying you need the Pro version. As of this writing, the current version for the Raspberry Pi is 1.14.
After sorting through the reams of information, I discovered that setting up a reverse proxy to multiple backend devices/server instances is remarkably simple. Much simpler than the on-line documentation would lead you to believe.
Installing nginx:
When you install nginx for the first time, it will report that the installation has failed. This is a bogus warning. You get it because the installation process tries to start the nginx service(s) and there isn't a valid configuration yet, so the startup of the services fails; the installation itself, however, is (likely) correct and proper.
Configuring the systems using nginx and connecting to it:
Note: This is a special case unique to my use-case as this is running on a stand-alone robot for development purposes and my domain is not a "live" domain on a web-facing server. It is a "real" domain with a "real" and trusted certificate to avoid browser warnings while development progresses.
It was necessary for me to make entries in the robot's and remote system's HOSTS file to automagically redirect references to my domain to the correct device, (the robot's fixed IP address), instead of directnic's servers where the domain is parked.
Configuring nginx:
The correct place to put your configuration file (on the Raspberry Pi) is /etc/nginx/sites-available, with a symlink to that file in /etc/nginx/sites-enabled.
It does not matter what you name it, as nginx.conf blindly imports whatever is in that directory. The flip side is that if there is anything already in that directory, you should remove it or rename it with a leading dot.
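For example (the file name myrobot.conf here is just a placeholder, pick whatever you like):
sudo nano /etc/nginx/sites-available/myrobot.conf
sudo ln -s /etc/nginx/sites-available/myrobot.conf /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default   # remove the stock default site if it is still enabled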
nginx -T is your friend! You can use this to "test" your configuration for problems before you try to start it.
sudo systemctl restart nginx will attempt to restart nginx, (which as you begin configuration, will likely fail.)
sudo systemctl status nginx.service > ./[path]/log.txt 2>&1 is also your friend. This allows you to collect error messages at runtime that will prevent the service from starting. In my case, the majority of the problems were caused by other services using ports I had selected, or silly mis-configurations.
Once you have nginx started, and the status returns no problems, try sudo netstat -tulpn | grep nginx to make sure it's listening on the correct ports.
Troubleshooting nginx after you have it running:
Most browsers, (Firefox and Chrome at least) support a "developer mode" that you enter by pressing F-12. The console messages can be very helpful.
SSL certificates:
Unlike some other SSL servers, nginx requires the site certificate to be combined with the intermediate certificate bundle received from the certificate authority; create the combined file with cat mycert.crt bundle.file > combined.crt.
Ultimately I ended up with the following configuration file:
Note that I commented out the HTTP redirect as there was a service using port 80 on my device. Under normal conditions, you will want to automatically redirect port 80 to the secure connection.
Also note that I did not use hard-coded IP addresses in the config file. This allows you to reconfigure the target IP address if necessary.
A corollary to that: if you're proxying to an internal secure device configured with the same certificates, you have to pass the request through using the domain instead of the IP address, otherwise the secure connection will fail.
#server {
# listen example.com:80;
# server_name example.com;
# return 301 https://example.com$request_uri;
# }
# This is the "web" server (command and control), running Flask/Werkzeug
# that must be passed through as a secure connection so that the
# joystick/gamepad works.
#
# Note that the internal Flask server must be configured to use a
# secure connection too. (Actually, that may not be true, but that's
# how I set it up. . .)
#
server {
listen example.com:443 ssl;
server_name example.com;
ssl_certificate /usr/local/share/ca-certificates/extra/combined.crt;
ssl_certificate_key /usr/local/share/ca-certificates/extra/example.com.key;
ssl_prefer_server_ciphers on;
location / {
proxy_pass https://example.com:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# This is the video streaming port/server running streamserver
# which is not, and cannot be, secured. However, since most
# modern browsers will not mix insecure and secure content on
# the same page, the outward facing connection must be secure.
#
server {
listen example.com:5001 ssl;
server_name example.com;
ssl_certificate /usr/local/share/ca-certificates/extra/combined.crt;
ssl_certificate_key /usr/local/share/ca-certificates/extra/www.example.com.key;
ssl_prefer_server_ciphers on;
# After securing the outward facing connection, pass it through
# as an insecure connection so streamserver doesn't barf.
location / {
proxy_pass http://example.com:5002;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Hopefully this will help the next person who encounters this problem.
This might be a simple error, but I can't seem to use certbot to verify my domain. I am using nginx connected to an Express application. I have commented out the configuration from the default nginx file, and it only includes the configuration for my site from /etc/nginx/conf.d/mysite.info. In my configuration, the first location entry points to the root /.well-known/acme-challenge directory. Here are the settings from my nginx conf file:
server {
listen 80;
server_name <MYDOMAIN>.info www.<MYDOMAIN>.info;
location '/.well-known/acme-challenge' {
root /srv/www/<MY_ROOT_DIRECTORY>;
}
location / {
proxy_pass http://localhost:4200;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /secure {
auth_pam "Secure zone";
auth_pam_service_name "nginx";
}
}
To verify, I used the following certbot command:
certbot certonly --agree-tos --email <My_EMAIL>@gmail.com --webroot -w /srv/www/<ROOT_FOLDER>/ -d <DOMAIN>.info
The errors from certbot are as follows:
Performing the following challenges:
http-01 challenge for <MYDOMAIN>.info
Using the webroot path /srv/www/<ROOT_FOLDER> for all unmatched domains.
Waiting for verification...
Challenge failed for domain <MYDOMAIN>.info
http-01 challenge for <MYDOMAIN>.info
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: <MYDOMAIN>.info
Type: unauthorized
Detail: Invalid response from
http://<MYDOMAIN>.info/.well-known/acme-challenge/Yb3c1WtCn5G43YatrhVorTbT_nn3WKTLwKjr0c9dW8E
[74.208.<...>.<...>]: "<!DOCTYPE html>\n<html
lang=\"en\">\n<head>\n<meta
charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot
GET /.well-known/"
I am literally clueless at this point. All the directories and files have read permission for all users and groups. Any suggestions will be highly appreciated.
EDIT
Since nginx was failing to deliver the challenge files, I modified my Express server to send them. The Express app is accessible, and it was easy to serve the challenge files to get certbot to work. Although not the desired solution, it worked. However, I will keep the post open for a better answer.
About:
Challenge failed for domain
This error can happen if you do not have port 443 open in your firewall.
I had the same problem trying to make certbot work on AWS. After some attempts, I just needed to open port 443 in the Security Group associated with the EC2 instance.
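If you prefer the command line over the AWS console, the same thing can be done with the AWS CLI roughly like this (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0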
I was facing this issue, but my problem was a little bit different. After doing some research, I found out that the domain on which I was trying certbot is protected by Cloudflare, and there is a WAF rule for country restriction which was blocking all the traffic from the origin server, so turning off the country restriction for a while did the job.
I think all 3 problems are related to the same issue, so I'm going to put all of them here.
GitLab itself is working; I even managed to update it from 8.2.2 to 8.2.3.
I can create projects, push my code, pull it, reclone it when I have the proper ssh key, etc.
BUT:
I can't download the code as a zip file; I get JSON instead:
{"RepoPath":"/var/opt/gitlab/git-data/repositories/me/myrepo.git",
"ArchivePrefix": "...
People can't clone my public repo (empty repository error).
CI can't build my tests:
warning: You have cloned an empty repository. Checking out 12345 as
develop... fatal: reference is not a tree :
123456789mycommithash987654321
ERROR: Build failed with: exit status 1
NB: I translated the error messages from French.
I suppose the problem is in my nginx configuration, but there is so much documentation that I'm not sure which is the right one: the ones with the workhorse, the ones where I have to change gitlab.rb's gitlab_git_http_server, etc.
My configuration is following:
Gitlab 8.2.3
Ubuntu Trusty (14.04)
Nginx 1.8
My GitLab is hosted on a subdomain using SSL, so I added an nginx proxy
/etc/gitlab/gitlab.rb:
external_url 'https://gitlab.mydomain.com'
nginx['listen_addresses'] = ['127.0.0.1', "[::1]"]
nginx['listen_port'] = 8080
nginx['listen_https'] = false
/etc/nginx/sites-enabled/gitlab:
server {
listen *:80 default_server;
listen [::]:80 ipv6only=on default_server;
server_name gitlab.mydomain.com;
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;
}
server{
# listen 443 ssl;
listen 0.0.0.0:443 ssl default_server;
listen [::]:443 ipv6only=on ssl default_server;
server_name gitlab.mydomain.com;
server_tokens off;
location /{
proxy_pass http://localhost:8080;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/(assets)/ {
root /opt/gitlab/embedded/service/gitlab-rails/public;
gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}
client_max_body_size 250m;
# ...
# A lot of SSL stuff (HSTS, OCSP, dhparam, etc.)
# ...
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;
error_page 502 /502.html;
UPDATE:
Just upgraded GitLab to 8.3.0.
Got a 502 now.
Applying: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/8.2-to-8.3.md.
We'll see.
UPDATE 2:
Did not finish the instructions after all; after stopping and restarting everything twice (GitLab and nginx) I finally managed to get the thing working.
Still the same problems with CI/zip/public cloning though.
UPDATE 3:
Just updated to 8.2.3:
apt-get update
apt-get install gitlab-ce
502.
restart nginx
gitlab-ctl restart
gitlab-rake gitlab:app:check
Checking GitLab ...
Git configured with autocrlf=input? ... yes
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory setup correctly? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
projects have namespace: ...
Redis version >= 2.8.0? ... yes
Ruby version >= 2.1.0 ? ... yes (2.1.7)
Your git bin path is "/opt/gitlab/embedded/bin/git"
Git version >= 1.7.10 ? ... yes (2.6.1)
Active users: 2
Checking GitLab ... Finished
If someone can lead me to the proper documentation or changes to be made that would be awesome.
It looks as though downloading of ZIP files is now handled by gitlab-workhorse.
For that there's some extra stuff in the nginx config file. You might want to have a look at https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/support/nginx/gitlab where there is a section
upstream gitlab-workhorse {
server unix:/home/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}
and a
proxy_pass http://gitlab-workhorse;
at the end of the configuration.
I'm currently digging into the same issue and will report back when I've solved it.
Take a look at https://gist.github.com/sameersbn/becd1c976c3dc4866ef8: it seems there is a 'gzip' option that can be turned off:
gzip off;
at line 53.
The update documentation is missing an item: it renames gitlab-git-http-server to gitlab-workhorse in the nginx configuration, but it partially misses /etc/default/gitlab. Replace all occurrences of gitlab-git-http-server with gitlab-workhorse there as well, especially the socket in gitlab_workhorse_options.
Something like
sed -i -e 's/gitlab-git-http-server/gitlab-workhorse/g' /etc/default/gitlab
A beginning, but not all of it:
I mistakenly made GitLab's nginx listen on port 8080, when that is already the port used by GitLab's Unicorn.
Changing it to 8081 made the CI respond better. I still have to solve the git user rights (or better, use Docker), but that's not directly related to the issue here...
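For reference, the 8080 to 8081 change boils down to editing the setting quoted in the question and reconfiguring; the proxy_pass in the external nginx config has to point to the new port as well:
# /etc/gitlab/gitlab.rb
nginx['listen_port'] = 8081
# then apply it:
sudo gitlab-ctl reconfigure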
UPDATE: Complete Solution - ACLs
It seems the git and gitlab-runner users that are created during the install process do not have enough rights.
First: create a real home for each: /home/gitlab-runner and /home/git, with proper ssh authorized_keys and rbenv + Ruby installs.
Then: vim /etc/passwd and change their home directories to the new homes, where they have full rights.
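For reference, the same change can be made without editing /etc/passwd by hand; stop GitLab first, since usermod refuses to touch a user with running processes (user names as created by the omnibus package):
sudo gitlab-ctl stop
sudo usermod -d /home/git git
sudo usermod -d /home/gitlab-runner gitlab-runner
sudo gitlab-ctl start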
Now my builds are green!
I've got nginx running, handling all the SSL stuff and already proxying / to a Redmine instance and /ci to a Jenkins instance.
Now I want to serve an IPython instance on /ipython through that very same nginx.
In nginx.conf I've added:
http {
...
upstream ipython_server {
server 127.0.0.1:5001;
}
server {
listen 443 ssl default_server;
... # all SSL related stuff and the other proxy configs (Redmine+Jenkins)
location /ipython {
proxy_pass http://ipython_server;
}
}
}
In my .ipython/profile_nbserver/ipython_notebook_config.py I've got:
c.NotebookApp.base_project_url = '/ipython/'
c.NotebookApp.base_kernel_url = '/ipython/'
c.NotebookApp.port = 5001
c.NotebookApp.trust_xheaders = True
c.NotebookApp.webapp_settings = {'static_url_prefix': '/ipython/static/'}
Pointing my browser to https://myserver/ipython gives me the usual index page of all notebooks in the directory where I launched IPython.
However, when I try to open one of the existing notebooks or create a new one, I'm getting the error:
WebSocket connection failed: A WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration.
I've tried the same setup with the current stable (1.2.1, via pypi) and development (Git checkout of master) version of IPython.
I also tried adjusting the nginx config according to nginx reverse proxy websockets, to no avail.
Due to an enforced policy, I'm not able to allow connections to the server on ports other than 443.
Does anybody have IPython running behind an nginx?
I had the same problem. I updated nginx to the current version (1.6.0). It seems to be working now.
Server config:
location /ipython {
proxy_pass http://ipython_server;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Origin "";
}
See: http://nginx.org/en/docs/http/websocket.html
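As a footnote, the Upgrade/Connection handling above requires nginx 1.3.13 or newer, which is presumably why updating to 1.6.0 fixed it. After editing the config, test and reload nginx with:
sudo nginx -t
sudo nginx -s reload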