SonarQube not reachable on 127.0.0.1:9000 under Ubuntu 18.04 - nginx

I want to run SonarQube on my Ubuntu 18.04 server along with nginx (a Droplet at DigitalOcean).
Mostly I've followed these instructions. I've used Postgres instead of MySQL.
Nginx should accept the request and pass it to the localhost address used by SonarQube (127.0.0.1:9000).
Nginx is running and working. SSL is active and working. Here is my codequality.example.com.conf:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name codequality.example.com www.codequality.example.com;

    root /var/www/html;
    index index.html index.htm;

    access_log /var/log/nginx/codequality.example.com.access.log;
    error_log /var/log/nginx/codequality.example.com.error.log;

    ssl_certificate /etc/letsencrypt/live/codequality.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/codequality.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
The command systemctl status sonarqube gives me the following response:
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-03-17 22:16:50 UTC; 5s ago
Process: 21796 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 21855 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 21918 (wrapper)
Tasks: 43 (limit: 2361)
CGroup: /system.slice/sonarqube.service
├─21918 /opt/sonarqube/bin/linux-x86-64/./wrapper /opt/sonarqube/bin/linux-x86-64/../../conf/wrapper.co
├─21922 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -Xmx32m -Djava.library.path=./lib -cl
└─21954 /usr/lib/jvm/java-11-openjdk-amd64/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyF
So I can assume that the SonarQube server is running correctly. Trying to access SonarQube via https://codequality.example.com results in a 502 error. The log file says:
2020/03/17 22:19:44 [error] 19598#19598: *233 connect() failed (111: Connection refused) while connecting to upstream, client: 79.254.63.100, server: codequality.example.com, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:9000/", host: "codequality.example.com"
Trying to access localhost (127.0.0.1:9000) during an SSH session via curl http://127.0.0.1:9000, I get the error:
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
This is the log from SonarQube:
2020.03.17 22:38:53 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.03.17 22:38:53 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.03.17 22:38:54 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.03.17 22:38:54 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] no modules loaded
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
What am I doing wrong?

The problem is not the nginx proxy. The logfile you shared gives some insight:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
Your SonarQube isn't running correctly because of a misconfiguration of Elasticsearch.
Check these links to find out how to adjust the limits:
general information
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
how to change the settings
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
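For example, on a systemd-managed install like the one above, something along these lines raises both limits (a minimal sketch; the unit path is an assumption based on your systemctl output, adjust for your setup):
# Raise the mmap limit now and persist it across reboots
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# Raise the open-file limit for the service: add the line
#   LimitNOFILE=65536
# under [Service] in /etc/systemd/system/sonarqube.service, then:
sudo systemctl daemon-reload
sudo systemctl restart sonarqube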
Make sure SonarQube is running correctly on localhost:9000 before moving on to the proxy configuration.

Related

Nginx-1.19.6 + Openssl 1.1.1i - Can't do SSL handshake

I'm trying to run a server using nginx with SSLv3 and the ciphers RC4-SHA:RC4-MD5 (I need exactly these ciphers).
I was able to do this on Ubuntu 16.04 using the OpenSSL 1.0.2u source plus the latest nginx source (nginx-1.19.6). I built nginx using this command:
./configure --with-http_ssl_module --with-openssl=/path/to/openssl-1.0.2u --with-openssl-opt=enable-ssl3 --with-openssl-opt=enable-ssl3-method --with-openssl-opt=enable-weak-ssl-ciphers
The nginx config I used is:
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_protocols SSLv3;
    ssl_ciphers RC4-SHA:RC4-MD5:@SECLEVEL=0;
    ssl_certificate /path/to/server-chain.crt;
    ssl_certificate_key /path/to/server.key;

    server_name server.name.net;

    underscores_in_headers on;
    proxy_pass_request_headers on;

    location / {
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9000;
    }
}
After setting up the nginx config file everything worked perfectly. I was able to obtain the SSL certificate using this command from an Ubuntu 14.04 machine:
openssl s_client -connect MyIP:443 -ssl3 -cipher RC4-SHA:RC4-MD5.
I tried to do the same thing, building nginx with the OpenSSL 1.1.1i source and the same configuration options, but after setting up the nginx conf file, when I try to run the openssl s_client -connect... command, I get this error:
CONNECTED(00000003)
140420793624224:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:339:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1612540521
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
In Nginx error.log file i got this:
SSL_do_handshake() failed (SSL: error:141FC044:SSL routines:tls_setup_handshake:internal error) while SSL handshaking, client: 192.168.1.10, server: 0.0.0.0:443
Did something change with OpenSSL 1.1.1? Am I missing any configuration options to enable SSLv3 + RC4-SHA:RC4-MD5?
Thanks for any tips!
In the end I was able to fix this!
I downloaded the latest OpenSSL source (1.1.1i) and the latest nginx source (1.19.6).
I compiled and installed openssl with the following commands:
./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers
make
sudo make install
I edited the openssl.cnf file (/usr/local/ssl/openssl.cnf), adding
openssl_conf = default_conf
at the beginning of the file and adding
[default_conf]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
CipherString = ALL:@SECLEVEL=0
at the bottom of the file. This enables old ciphers (I needed RC4-SHA and RC4-MD5).
Then I compiled and installed nginx with the following commands:
./configure --with-http_ssl_module --with-ld-opt="-L/usr/local"
make
sudo make install
After configuring nginx for the SSL certificates I was able to retrieve them using the openssl s_client... command!
SSL ciphers in nginx need to be supported by your OpenSSL version. From the OpenSSL changelog for 1.0.2h and 1.1.0:
RC4 based libssl ciphersuites are now classed as "weak" ciphers and are disabled by default. They can be re-enabled using the enable-weak-ssl-ciphers option to Configure.
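You can check whether a given OpenSSL build actually exposes those suites before wiring it into nginx (a quick sanity check, assuming the openssl binary on your PATH is the build in question):
openssl ciphers -v 'RC4-SHA:RC4-MD5'
If the build lacks the weak ciphers, this prints an error instead of listing the two suites.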

Call timed out (error 1006) - bigbluebutton

I have installed BigBlueButton on my server and it was working properly, but suddenly the microphone cannot connect anymore: after being stuck in the "echo test" for a while I get error 1006. I have tested:
FreeSWITCH fails to bind to IPv4
In rare occasions after shutdown/restart, the FreeSWITCH database can get corrupted. This will cause FreeSWITCH to have problems binding to IPV4 address (you may see error 1006 when users try to connect).
To check, look in /opt/freeswitch/var/log/freeswitch/freeswitch.log for errors related to loading the database.
2018-10-25 11:05:11.444727 [ERR] switch_core_db.c:108 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444737 [ERR] switch_core_db.c:223 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444759 [NOTICE] sofia.c:5949 Started Profile internal-ipv6 [sofia_reg_internal-ipv6]
2018-10-25 11:05:11.444767 [CRIT] switch_core_sqldb.c:508 Failure to connect to CORE_DB sofia_reg_external!
2018-10-25 11:05:11.444772 [CRIT] sofia.c:3049 Cannot Open SQL Database [external]!
If you see these errors, clear the FreeSWITCH database (BigBlueButton doesn’t use the database and FreeSWITCH will recreate it on startup).
$ sudo systemctl stop freeswitch
$ sudo rm -rf /opt/freeswitch/var/lib/freeswitch/db/*
$ sudo systemctl start freeswitch
but it doesn't solve the problem.
This is the output of bbb-conf --check:
BigBlueButton Server 2.2.20 (2037)
Kernel version: 4.4.0-185-generic
Distribution: Ubuntu 16.04.6 LTS (64-bit)
Memory: 4045 MB
CPU cores: 4
/usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties (bbb-web)
bigbluebutton.web.serverURL: https://online.vikipoyan.ir
defaultGuestPolicy: ALWAYS_ACCEPT
svgImagesRequired: true
/etc/nginx/sites-available/bigbluebutton (nginx)
server name: 49.12.60.238
port: 80, [::]:80
port: 443 ssl
bbb-client dir: /var/www/bigbluebutton
/var/www/bigbluebutton/client/conf/config.xml (bbb-client)
Port test (tunnel): rtmp://online.vikipoyan.ir
red5: online.vikipoyan.ir
useWebrtcIfAvailable: true
/opt/freeswitch/etc/freeswitch/vars.xml (FreeSWITCH)
local_ip_v4: 49.12.60.238
external_rtp_ip: stun:stun.freeswitch.org
external_sip_ip: stun:stun.freeswitch.org
/opt/freeswitch/etc/freeswitch/sip_profiles/external.xml (FreeSWITCH)
ext-rtp-ip: $${local_ip_v4}
ext-sip-ip: $${local_ip_v4}
ws-binding: :5066
wss-binding: :7443
/usr/local/bigbluebutton/core/scripts/bigbluebutton.yml (record and playback)
playback_host: online.vikipoyan.ir
playback_protocol: https
ffmpeg: 4.2.2-1bbb1~ubuntu16.04
/etc/bigbluebutton/nginx/sip.nginx (sip.nginx)
proxy_pass: 49.12.60.238
/usr/local/bigbluebutton/bbb-webrtc-sfu/config/default.yml (Kurento SFU)
kurento.ip: 49.12.60.238
kurento.url: ws://127.0.0.1:8888/kurento
kurento.sip_ip: 49.12.60.238
localIpAddress: 49.12.60.238
recordScreenSharing: true
recordWebcams: true
codec_video_main: VP8
codec_video_content: VP8
/usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml (HTML5 client)
build: 968
kurentoUrl: wss://online.vikipoyan.ir/bbb-webrtc-sfu
enableListenOnly: true
# Potential problems described below
# Warning: API URL IPs do not match host:
#
# IP from ifconfig: 49.12.60.238
# /var/lib/tomcat7/demo/bbb_api_conf.jsp: online.vikipoyan.ir
# Warning: The API demos are installed and accessible from:
#
# https://online.vikipoyan.ir
#
# and
#
# https://online.vikipoyan.ir/demo/demo1.jsp
#
# These API demos allow anyone to access your server without authentication
# to create/manage meetings and recordings. They are for testing purposes only.
# If you are running a production system, remove them by running:
#
# apt-get purge bbb-demo
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of https in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of port 7443 in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
I tested everything I found and the problem still exists. Is there any way to fix this?
Thanks in advance.
Run the command:
sudo nano /etc/bigbluebutton/nginx/sip.nginx
Something like this will open:
location /ws {
    proxy_pass http://49.12.60.238:4066;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 6h;
    proxy_send_timeout 6h;
    client_body_timeout 6h;
    send_timeout 6h;
}
Change the proxy_pass http://... line to the following:
proxy_pass https://49.12.60.238:7443;
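Then reload nginx so the new proxy_pass takes effect (assuming the standard service name):
sudo systemctl reload nginx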
Please note that I followed the posts at https://github.com/bigbluebutton/bigbluebutton/issues/2628 but didn't get the issue resolved until I ran the following command:
$ wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | bash -s -- -v xenial-22 -s myserver.com -c turn.myserver.com:1004a200000000004cb3f2d6f000d85d -e johndoe@myserver.com -d -g | tee bbb-install.log

Gitlab timeouts / slow on initial page loads

I am running GitLab on Debian using the package from the repository. Most of the time GitLab is running very fast, but after longer idle times GitLab is very slow or even times out (error 502). One time I also had a timeout on a remote git access (could not reproduce the issue - timeout on the internal API).
In my setup the Debian machine is behind another nginx proxy which also serves some other services just fine. I ran the gitlab-ctl checks and everything seems fine.
In the error log of my reverse proxy I only see connection timeouts:
[error] 8643#0: *4139 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.1.1.10, server: gitlab.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://{SERVER-IP}:80/", host: "gitlab.mydomain.tld"
I can see some errors in my unicorn_stderr.log
E, [2016-03-30T19:40:20.183991 #783] ERROR -- : worker=1 PID:16798 timeout (61s > 60s), killing
E, [2016-03-30T19:40:20.194969 #783] ERROR -- : reaped #<Process::Status: pid 16798 SIGKILL (signal 9)> worker=1
I, [2016-03-30T19:40:20.197554 #16871] INFO -- : worker=1 spawned pid=16871
I, [2016-03-30T19:40:20.197909 #16871] INFO -- : worker=1 ready
E, [2016-03-30T20:08:42.911429 #783] ERROR -- : worker=0 PID:16866 timeout (61s > 60s), killing
E, [2016-03-30T20:08:43.191151 #783] ERROR -- : reaped #<Process::Status: pid 16866 SIGKILL (signal 9)> worker=0
I, [2016-03-30T20:08:43.758363 #18728] INFO -- : worker=0 spawned pid=18728
I, [2016-03-30T20:08:44.108244 #18728] INFO -- : worker=0 ready
What I am a bit curious about is the fact that there are no errors in the log of the nginx instance delivered with GitLab.
Some more system information:
# sudo gitlab-rake gitlab:env:info
System information
System: Debian 8.3
Current User: git
Using RVM: no
Ruby Version: 2.1.8p440
Gem Version: 2.5.1
Bundler Version:1.10.6
Rake Version: 10.5.0
Sidekiq Version:4.0.1
GitLab information
Version: 8.5.0
Revision: a513e09
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://gitlab.mydomain.tld
HTTP Clone URL: http://gitlab.mydomain.tld/some-group/some-project.git
SSH Clone URL: git@gitlab.mydomain.tld:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 2.6.10
Repositories: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git: /opt/gitlab/embedded/bin/git
Edit:
My nginx config on the "external" reverse proxy looks like this:
server {
    listen 443;
    ssl on;
    server_name gitlab.mydomain.tld;

    access_log /var/log/nginx/gitlab.mydomain.tld.access.log;
    error_log /var/log/nginx/gitlab.mydomain.tld.error.log;

    ssl_certificate /etc/nginx/ssl/gitlab.mydomain.tld_unified.crt;
    ssl_certificate_key /etc/nginx/ssl/mydomain.tld.key;

    location / {
        proxy_pass http://gitlab:80;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X_FORWARDED_PROTO "https";
        satisfy any;
    }
}
Edit2:
I took the suggested answer into account and also considered this source: https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/requirements.md
I assigned 2GB RAM to the VM now, and also added one additional unicorn worker.
Edit3:
The problem seems to be solved by adding more memory and using 3 unicorn workers.
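For reference, on an Omnibus install both knobs live in /etc/gitlab/gitlab.rb (a sketch with assumed values matching the fix above; apply it with sudo gitlab-ctl reconfigure):
unicorn['worker_processes'] = 3   # one worker per core plus one is a common rule of thumb
unicorn['worker_timeout'] = 60    # seconds before a hung worker is killed and respawned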
Jan,
I have a similar setup, although our box is dedicated to GitLab. Without knowing the specs of your server (GitLab likes memory) and the load on that box, I would suggest the following diagnostics:
Does your upstream nginx use identical parameters to the GitLab nginx configuration? They have tweaked a number of things, including timeouts (see the sketch after this list).
What kind of requests result in timeouts? Some operations (like generating diffs) can take some time to render.
If you run the requests via SSH, do you also experience timeouts?
Have you checked the global logs in /var/log?
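On the first point, mirroring the bundled nginx's longer proxy timeouts on the outer proxy is a reasonable start (the values below are assumptions for illustration, not copied from the bundled config):
location / {
    proxy_pass http://gitlab:80;
    proxy_read_timeout 300;   # allow slow Rails responses instead of failing at the 60s default
    proxy_send_timeout 300;
    proxy_set_header Host $http_host;
}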
FYI: I had to enlarge my small GitLab installation to 4 GB RAM to stop it from throwing OOM errors.
Now I think I'd better go with Gogs or another alternative.

Multiple upstream faye websocket servers with nginx

I would like to cluster my faye websocket server. A single server works well but I want to be prepared to scale.
My first attempt is to start a few thin servers on different sockets and then add them to the upstream for my server in nginx.
The Bayeux messages are distributed amongst the cluster members, but Chrome dev tools shows about 16 WebSocket 101 connections with connection-close frames in the Frames tab of the Network panel:
Connection Close Frame (Opcode 8)
Connection Close Frame (Opcode 8, mask)
And a whole bunch of /meta/connect and /meta/handshake on the server side for all of the faye instances.
An excerpt:
D, [2013-11-15T15:34:50.215631 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"q7odwfbovudiw87dg0jke3xbrg51tui", "connectionType"=>"callback-polling", "id"=>"p"}
D, [2013-11-15T15:34:50.245012 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"ckowb5vz9pnbh7jwomc8h0qsk8t0nus", "connectionType"=>"callback-polling", "id"=>"r"}
D, [2013-11-15T15:34:50.285460 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"u"}
D, [2013-11-15T15:34:50.312919 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"w"}
D, [2013-11-15T15:34:50.356219 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"y"}
D, [2013-11-15T15:34:50.394820 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"10"}
Starting the thin servers (each in its own terminal):
thin start -C config/thin/development.yml -S /tmp/faye.1.sock
thin start -C config/thin/development.yml -S /tmp/faye.2.sock
thin start -C config/thin/development.yml -S /tmp/faye.3.sock
My thin config:
---
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
# socket: /tmp/faye.sock
daemonize: false
rackup: config.ru
My NGINX config:
upstream thin_cluster {
    server unix:/tmp/faye.1.sock fail_timeout=0;
    server unix:/tmp/faye.2.sock fail_timeout=0;
    server unix:/tmp/faye.3.sock fail_timeout=0;
}

server {
    # listen 443;
    server_name ~^push\.mysite\.dev(\..*\.xip\.io)?$;

    charset UTF-8;
    tcp_nodelay on;

    # ssl on;
    # ssl_certificate /var/www/heypresto/certificates/cert.pem;
    # ssl_certificate_key /var/www/heypresto/certificates/key.pem;
    # ssl_protocols TLSv1 SSLv3;

    location / {
        proxy_pass http://thin_cluster;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        break;
    }
}
I thought it was too good to be true that adding more upstreams for websockets would just work, but it seems so close...
EDIT: I've realised that this is probably not going to work, and I should probably create N servers (push1.mysite.dev, push2.mysite.dev, etc.) and have the backend tell the frontend which one to connect to.
Still, if there are any thoughts out there...
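For anyone going down the same road: before splitting into N hostnames it may be worth trying nginx's ip_hash so each client sticks to one Faye instance (a sketch over the same upstream as above; I have not verified it against unix sockets):
upstream thin_cluster {
    ip_hash;   # hash the client address so each client always reaches the same backend
    server unix:/tmp/faye.1.sock fail_timeout=0;
    server unix:/tmp/faye.2.sock fail_timeout=0;
    server unix:/tmp/faye.3.sock fail_timeout=0;
}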

Flask on nginx + uWSGI returns a 404 error unless the linux directory exists

This might be kind of a strange problem, but I'm not too experienced with these things and I don't know how to search for this kind of error.
I have a server configured with nginx and uWSGI. Everything runs fine, no errors in the logs that I can see. However, when I'm executing the below code:
from flask import Flask
app = Flask(__name__)

@app.route('/test/')
def page1():
    return 'Hello World'

@app.route('/')
def index():
    return 'Index Page'
I cannot view http://ezte.ch/test/ UNLESS the /test/ directory exists on the filesystem. Once I create that directory, everything loads fine; otherwise I get a 404 error passed to the uWSGI process (it does show that it's receiving the request in the terminal).
Here is my config.ini for uWSGI:
[uwsgi]
project = eztech
uid = www-data
gid = www-data
plugins = http,python
socket = /usr/share/nginx/www/eztech/uwsgi.sock
chmod-socket = 775
chown-socket = www-data:www-data
wsgi-file = hello.py
callable = app
processes = 4
threads = 2
Here is my nginx configuration:
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    autoindex on;
    root /usr/share/nginx/www/eztech/public_html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name ezte.ch;

    location / {
        uwsgi_pass unix:/usr/share/nginx/www/eztech/uwsgi.sock;
        include uwsgi_params;
        uwsgi_param UWSGI_CHDIR /usr/share/nginx/www/eztech/public_html;
        uwsgi_param UWSGI_MODULE hello;
        uwsgi_param UWSGI_CALLABLE app;

        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.html;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
Below is what I get when running uWSGI with my config file:
[uWSGI] getting INI configuration from config.ini
open("./http_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./http_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
*** Starting uWSGI 1.9.8 (64bit) on [Sat Apr 27 06:29:18 2013] ***
compiled with version: 4.6.3 on 27 April 2013 00:06:22
os: Linux-3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013
nodename: ip-10-245-51-230
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /usr/share/nginx/www/eztech
detected binary path: /usr/local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 4595
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uwsgi socket 0 bound to UNIX address /usr/share/nginx/www/eztech/uwsgi.sock fd 3
setgid() to 33
setuid() to 33
Python version: 2.7.3 (default, Aug 1 2012, 05:25:23) [GCC 4.6.3]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x2505520
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72688 bytes (70 KB) for 1 cores
*** Operational MODE: single process ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 12800, cores: 1)
Thank you for any assistance you can offer!
As Blender already says, there should be no try_files in the location where your upstream is called.
The following nginx config is enough to host a Flask application:
server {
    listen 80;
    server_name ezte.ch;

    location / {
        uwsgi_pass unix:/usr/share/nginx/www/eztech/uwsgi.sock;
        include uwsgi_params;
    }
}
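If you still want nginx to serve real files first and only fall back to Flask, the usual pattern is a named location instead of the index.html fallback (a sketch, assuming the socket path from the question):
server {
    listen 80;
    server_name ezte.ch;
    root /usr/share/nginx/www/eztech/public_html;

    location / {
        # serve an existing file if there is one, otherwise hand the request to uWSGI
        try_files $uri @flask;
    }

    location @flask {
        include uwsgi_params;
        uwsgi_pass unix:/usr/share/nginx/www/eztech/uwsgi.sock;
    }
}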
My uWSGI config:
<uwsgi>
    <autostart>true</autostart>
    <master/>
    <pythonpath>/var/www/apps/someapp/</pythonpath>
    <plugin>python</plugin>
    <module>someapp:app</module>
    <processes>4</processes>
</uwsgi>
So the path is /var/www/apps/someapp/ and the Flask file is someapp.py.
I had the same issue. Just remove this line from the nginx configuration:
root /usr/share/nginx/www/eztech/public_html;
