I'm working on getting a skeleton ASP.NET application running on Ubuntu. I've installed all the requisite packages and am just trying to serve index.html or index.aspx to start. However, requests through Nginx never get through the proxy; I always get a 404 from Nginx itself.
I'm starting the Mono server as:
fastcgi-mono-server4 /loglevels=All /printlog /verbose /applications=/:/opt/esoa /socket=tcp:127.0.0.1:9000
Nginx configuration:
server {
    listen 80;
    server_name _;

    location / {
        root /opt/esoa;
        index index.html;
        access_log /var/log/nginx/esoa.log;

        fastcgi_param SERVER_NAME $host;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO "";
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
Output from Mono server when it starts:
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.980188] Debug : fastcgi-mono-server4
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.987638] Debug : Uid 1000, euid 1000, gid 1000, egid 1000
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.988138] Debug : Root directory: /home/ubuntu/test
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.989834] Notice : Adding applications '/:/opt/esoa'...
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.990535] Notice : Registering application:
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.990600] Notice : Host: any
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.990651] Notice : Port: any
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.990701] Notice : Virtual path: /
6611: 1 fastcgi-mono-server [2023-01-27 01:34:35.990765] Notice : Physical path: /opt/esoa/
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.005144] Debug : Parsed tcp:127.0.0.1:9000 as URI tcp:127.0.0.1:9000
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.027285] Debug : Listening on port: 9000
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.027847] Debug : Listening on address: 127.0.0.1
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.031714] Debug : Max connections: 1024
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.031817] Debug : Max requests: 1024
6611: 1 fastcgi-mono-server [2023-01-27 01:34:36.031973] Debug : Multiplex connections: False
6611: 3 fastcgi-mono-server [2023-01-27 01:34:36.036418] Debug : Server started [callback: Mono.WebServer.FastCgi.ServerProxy]
A simple wget never shows traffic reaching the Mono server:
ubuntu@ip-XXX-31-XXX-XXX:~$ wget http://127.0.0.1/index.html
--2023-01-27 01:51:58-- http://127.0.0.1/index.html
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2023-01-27 01:51:58 ERROR 404: Not Found.
Duh, there was another server entry in the nginx.conf that seemed to cancel mine out.
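For anyone hitting the same thing: dumping the configuration nginx actually loads makes conflicting server blocks easy to spot, for example:
sudo nginx -T | grep -nE 'listen|server_name'
A server block earlier in the config with the same listen/server_name match (or one marked default_server) wins, and your location is never consulted.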
I have installed BigBlueButton on my server and it was working properly, but suddenly the microphone cannot connect anymore; after being stuck for a while in the "echo test" I get a 1006 error. I have already tried the following:
FreeSWITCH fails to bind to IPv4
On rare occasions after a shutdown/restart, the FreeSWITCH database can get corrupted. This will cause FreeSWITCH to have problems binding to an IPv4 address (you may see error 1006 when users try to connect).
To check, look in /opt/freeswitch/var/log/freeswitch/freeswitch.log for errors related to loading the database.
2018-10-25 11:05:11.444727 [ERR] switch_core_db.c:108 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444737 [ERR] switch_core_db.c:223 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444759 [NOTICE] sofia.c:5949 Started Profile internal-ipv6 [sofia_reg_internal-ipv6]
2018-10-25 11:05:11.444767 [CRIT] switch_core_sqldb.c:508 Failure to connect to CORE_DB sofia_reg_external!
2018-10-25 11:05:11.444772 [CRIT] sofia.c:3049 Cannot Open SQL Database [external]!
If you see these errors, clear the FreeSWITCH database (BigBlueButton doesn’t use the database and FreeSWITCH will recreate it on startup).
$ sudo systemctl stop freeswitch
$ rm -rf /opt/freeswitch/var/lib/freeswitch/db/*
$ sudo systemctl start freeswitch
but it doesn't solve the problem.
This is the output of bbb-conf --check:
BigBlueButton Server 2.2.20 (2037)
Kernel version: 4.4.0-185-generic
Distribution: Ubuntu 16.04.6 LTS (64-bit)
Memory: 4045 MB
CPU cores: 4
/usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties (bbb-web)
bigbluebutton.web.serverURL: https://online.vikipoyan.ir
defaultGuestPolicy: ALWAYS_ACCEPT
svgImagesRequired: true
/etc/nginx/sites-available/bigbluebutton (nginx)
server name: 49.12.60.238
port: 80, [::]:80
port: 443 ssl
bbb-client dir: /var/www/bigbluebutton
/var/www/bigbluebutton/client/conf/config.xml (bbb-client)
Port test (tunnel): rtmp://online.vikipoyan.ir
red5: online.vikipoyan.ir
useWebrtcIfAvailable: true
/opt/freeswitch/etc/freeswitch/vars.xml (FreeSWITCH)
local_ip_v4: 49.12.60.238
external_rtp_ip: stun:stun.freeswitch.org
external_sip_ip: stun:stun.freeswitch.org
/opt/freeswitch/etc/freeswitch/sip_profiles/external.xml (FreeSWITCH)
ext-rtp-ip: $${local_ip_v4}
ext-sip-ip: $${local_ip_v4}
ws-binding: :5066
wss-binding: :7443
/usr/local/bigbluebutton/core/scripts/bigbluebutton.yml (record and playback)
playback_host: online.vikipoyan.ir
playback_protocol: https
ffmpeg: 4.2.2-1bbb1~ubuntu16.04
/etc/bigbluebutton/nginx/sip.nginx (sip.nginx)
proxy_pass: 49.12.60.238
/usr/local/bigbluebutton/bbb-webrtc-sfu/config/default.yml (Kurento SFU)
kurento.ip: 49.12.60.238
kurento.url: ws://127.0.0.1:8888/kurento
kurento.sip_ip: 49.12.60.238
localIpAddress: 49.12.60.238
recordScreenSharing: true
recordWebcams: true
codec_video_main: VP8
codec_video_content: VP8
/usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml (HTML5 client)
build: 968
kurentoUrl: wss://online.vikipoyan.ir/bbb-webrtc-sfu
enableListenOnly: true
# Potential problems described below
# Warning: API URL IPs do not match host:
#
# IP from ifconfig: 49.12.60.238
# /var/lib/tomcat7/demo/bbb_api_conf.jsp: online.vikipoyan.ir
# Warning: The API demos are installed and accessible from:
#
# https://online.vikipoyan.ir
#
# and
#
# https://online.vikipoyan.ir/demo/demo1.jsp
#
# These API demos allow anyone to access your server without authentication
# to create/manage meetings and recordings. They are for testing purposes only.
# If you are running a production system, remove them by running:
#
# apt-get purge bbb-demo
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of https in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of port 7443 in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
I have tested everything I found and the problem still exists. Is there any way to fix this?
Thanks in advance.
Run the command:
sudo nano /etc/bigbluebutton/nginx/sip.nginx
Something like this will open:
location /ws {
    proxy_pass http://49.12.60.238:4066;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 6h;
    proxy_send_timeout 6h;
    client_body_timeout 6h;
    send_timeout 6h;
}
Change the proxy_pass http://... line to the following:
proxy_pass https://49.12.60.238:7443;
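After saving the change, restart BigBlueButton so nginx picks it up, then re-run the check (standard bbb-conf usage):
sudo bbb-conf --restart
sudo bbb-conf --check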
Please note that I followed the posts at https://github.com/bigbluebutton/bigbluebutton/issues/2628 but didn't get the issue resolved until I ran the following command:
$ wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | bash -s -- -v xenial-22 -s myserver.com -c turn.myserver.com:1004a200000000004cb3f2d6f000d85d -e johndoe@myserver.com -d -g | tee bbb-install.log
I have nginx between puppet agents and two puppet masters, say puppetmaster1 and puppetmaster2.
The expected connection flow is:
puppetagent1 ->> Nginx ->> puppetmaster1 &
puppetagent2 ->> Nginx ->> puppetmaster2
puppetmaster1 and puppetagent1 run puppet version 2.7.26.
puppetmaster2 and puppetagent2 run puppet version 3.6.2.
Here is my nginx stream.conf, which proxies to the upstreams:
map $ssl_preread_server_name $name {
    puppetmaster1 upstream_puppetmaster1;
    puppetmaster2 upstream_puppetmaster2;
}

upstream upstream_puppetmaster1 {
    server core-services_puppetmaster1:8140;
}

upstream upstream_puppetmaster2 {
    server core-services_puppetmaster2:8140;
}

server {
    listen 8140;
    ssl_preread on;
    proxy_pass $name;
}
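Note that this block only works inside the top-level stream {} context and requires ngx_stream_ssl_preread_module (nginx 1.11.5 or later). A minimal sketch of the layout, assuming the file above is /etc/nginx/stream.conf:
# in /etc/nginx/nginx.conf, at the top level (not inside http {}):
stream {
    include /etc/nginx/stream.conf;
}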
Issue:
When puppet agent -tov is run on puppetagent2, Nginx routes it to upstream_puppetmaster2 and everything works fine.
But when puppet agent -tov is run on puppetagent1, Nginx errors out with the log below:
2018/09/11 17:38:11 [info] 25#25: *3 client 10.255.0.2:51290 connected to 0.0.0.0:8140
2018/09/11 17:38:11 [error] 25#25: *3 no host in upstream "", client: 10.255.0.2, server: 0.0.0.0:8140, bytes from/to client:0/0, bytes from/to upstream:0/0
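A note on the log: the empty string in no host in upstream "" suggests $ssl_preread_server_name was empty for that connection, i.e. the client sent no SNI (old TLS stacks, such as those shipped with puppet 2.7-era agents, may not send SNI at all). A default entry in the map makes that case explicit and routable; a sketch:
map $ssl_preread_server_name $name {
    default upstream_puppetmaster1; # assumed fallback for clients that send no SNI
    puppetmaster1 upstream_puppetmaster1;
    puppetmaster2 upstream_puppetmaster2;
}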
The only difference between puppetmaster1 and puppetmaster2 is the version.
Can anyone help me figure out why nginx is not able to take requests from puppetagent1 and pass them to puppetmaster1? How can I achieve this?
Thanks in advance.
I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}

upstream remote {
    server api.example.com:443;
    keepalive 16;
}

server {
    listen 80;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }

    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
If I deliberately misconfigure the memcached upstream, I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it; just listen on the memcached address with netcat instead of memcached:
nc -k -l 172.17.0.6 11211
then curl your resource. curl will hang for a while; press Ctrl+C and you'll see this message in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll see this constantly in your error logs (I see it every time with error_log ... info).
Since you see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting the source address with memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind), and use the -b option with telnet to make sure you're testing memcached availability from the same address your nginx uses.
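A sketch of what that could look like (172.17.0.1 is a hypothetical docker bridge address here; adjust to your network):
location / {
    set $memcached_key "$uri?$args";
    memcached_bind 172.17.0.1; # pin the outgoing source address
    memcached_pass http_memcached;
    error_page 404 502 504 = @remote;
}
and then test availability from that same source address:
telnet -b 172.17.0.1 172.17.0.6 11211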
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
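For illustration, a minimal sketch of the write side, assuming a Python backend using the pymemcache client (the key below is hypothetical, but it must match nginx's "$uri?$args" format):
# the backend, not nginx, populates the cache
from pymemcache.client.base import Client

client = Client(("172.17.0.6", 11211))
# the key must match what nginx computes as $memcached_key, i.e. "$uri?$args"
client.set("/some/uri?", b"response body to serve from cache", expire=900)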
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means each request queries just one of your memcached servers.
You can change that by setting memcached_next_upstream not_found, so a missing key is treated as an error and all of your servers are polled. That's probably OK for a farm of 2 servers, but unlikely to be what you want for 20.
The same ordinarily goes for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so a given key ends up on only one server in the pool.
5. what to do
I've managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging I'd get rid of the docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening.
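For example:
# two verbose memcached instances on different ports (UDP disabled)
memcached -p 11211 -U 0 -vv
memcached -p 11212 -U 0 -vv
# in other terminals: watch the logs and poke the endpoint
tail -f /var/log/nginx/server.error.log
curl -v http://server.lan/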
6. working solution
nginx config:
(https and http/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}

upstream remote {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name server.lan;
    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;

    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }

    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py:
this is my dummy server (python):
from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
checking:
note that we get a result every time although we stored data only in the first server
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
The whole picture (screenshot): the 2 windows on the right are
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv
As mentioned in the title, I'm experiencing a permission denied error with my nginx + uwsgi setup on CentOS 6.4. I'm already running uwsgi as root. Below are my configuration files. Note that I've already symlinked (ln -s) mysite_nginx.conf into /etc/nginx/sites-enabled/, and I've already changed the owner of /home/user1/location/mysite to the nginx user.
mysite_uwsgi.ini
#mysite_uwsgi.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/user1/location/mysite/myapp
# Django wsgi file
module = mysite.wsgi
# the virtualenv (full path)
home = /home/user1/location/mysite
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 4
# the socket (use the full path to be safe)
socket = /home/user1/location/mysite/myapp/myapp.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
chown-socket = nginx:nginx
# clear environment on exit
vacuum = true
# other config options
uid = nginx
gid = nginx
processes = 4
mysite_nginx.conf
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream mysite {
    server unix:///home/user1/location/mysite/myapp/myapp.sock; # for a file socket (use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8082;
    # the domain name it will serve for
    server_name 192.168.X.X;
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/user1/location/mysite/media;
    }

    location /static {
        alias /home/user1/location/mysite/static;
    }
I have already followed the answers related to this same issue here on Stack Overflow, but none of them helped. What am I lacking or doing wrong?
Thanks in advance!
I had the same problem. It is caused by SELinux policies. You can find the solution at the following link; follow the "Option 2: Extend the httpd_t Domain Permissions" instructions:
http://nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/
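A sketch of that option using the standard SELinux toolchain (the module name nginx below is arbitrary):
# confirm SELinux is actually denying nginx
sudo grep nginx /var/log/audit/audit.log | grep denied
# build and load a local policy module extending httpd_t
sudo grep nginx /var/log/audit/audit.log | audit2allow -M nginx
sudo semodule -i nginx.pp
A quick way to test the theory first is sudo setenforce 0: if the permission denied error disappears in permissive mode, SELinux is the cause (remember to run setenforce 1 afterwards).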
This might be kind of a strange problem, but I'm not too experienced with these things and I don't know how to search for this kind of error.
I have a server configured with nginx and uWSGI. Everything runs fine, with no errors in the logs that I can see. However, when I run the code below:
from flask import Flask

app = Flask(__name__)

@app.route('/test/')
def page1():
    return 'Hello World'

@app.route('/')
def index():
    return 'Index Page'
I cannot view http://ezte.ch/test/ unless the /test/ directory actually exists on the filesystem; once I create that directory, everything loads fine. Otherwise I get a 404 error (the uWSGI process does show in the terminal that it's receiving the request).
Here is my config.ini for uWSGI:
[uwsgi]
project = eztech
uid = www-data
gid = www-data
plugins = http,python
socket = /usr/share/nginx/www/eztech/uwsgi.sock
chmod-socket = 775
chown-socket = www-data:www-data
wsgi-file = hello.py
callable = app
processes = 4
threads = 2
Here is my nginx configuration:
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    autoindex on;
    root /usr/share/nginx/www/eztech/public_html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name ezte.ch;

    location / {
        uwsgi_pass unix:/usr/share/nginx/www/eztech/uwsgi.sock;
        include uwsgi_params;
        uwsgi_param UWSGI_CHDIR /usr/share/nginx/www/eztech/public_html;
        uwsgi_param UWSGI_MODULE hello;
        uwsgi_param UWSGI_CALLABLE app;

        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.html;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
Below is what I get when running uWSGI with my config file:
[uWSGI] getting INI configuration from config.ini
open("./http_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./http_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python_plugin.so"): No such file or directory [core/utils.c line 3347]
!!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
*** Starting uWSGI 1.9.8 (64bit) on [Sat Apr 27 06:29:18 2013] ***
compiled with version: 4.6.3 on 27 April 2013 00:06:22
os: Linux-3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013
nodename: ip-10-245-51-230
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /usr/share/nginx/www/eztech
detected binary path: /usr/local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 4595
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uwsgi socket 0 bound to UNIX address /usr/share/nginx/www/eztech/uwsgi.sock fd 3
setgid() to 33
setuid() to 33
Python version: 2.7.3 (default, Aug 1 2012, 05:25:23) [GCC 4.6.3]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x2505520
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72688 bytes (70 KB) for 1 cores
*** Operational MODE: single process ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 12800, cores: 1)
Thank you for any assistance you can offer!
As Blender already said, there should be no try_files in the location where your upstream is called.
The following nginx config is enough to host a flask application:
server {
    listen 80;
    server_name ezte.ch;

    location / {
        uwsgi_pass unix:/usr/share/nginx/www/eztech/uwsgi.sock;
        include uwsgi_params;
    }
}
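If you also want nginx to serve static files directly, give them their own location rather than mixing try_files into the uwsgi location (the path below is an assumption based on the question):
location /static {
    alias /usr/share/nginx/www/eztech/public_html/static;
}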
my flask config:
<uwsgi>
    <autostart>true</autostart>
    <master/>
    <pythonpath>/var/www/apps/someapp/</pythonpath>
    <plugin>python</plugin>
    <module>someapp:app</module>
    <processes>4</processes>
</uwsgi>
So the pythonpath is /var/www/apps/someapp/ and the flask file is someapp.py.
I had the same issue. Just remove this line from the nginx configuration:
root /usr/share/nginx/www/eztech/public_html;