nginx: [emerg] host not found in upstream - nginx

I'm trying to set up a new site on my server: I've updated the nginx settings for the new site, created the directories, set the correct permissions, and created the DNS entries for my domain.
However, when I restart nginx I get the following error:
nginx: [emerg] host not found in upstream "petproject" in
/usr/local/nginx/conf/nginx-vhosts.conf:277 nginx: configuration file
/usr/local/nginx/conf/nginx.conf test failed
Could someone tell me how to fix this? I have identical settings with a different domain and directory and they work fine, so I can't quite pinpoint what's wrong here.
I have included my nginx.conf below.
server {
listen 217.23.14.107:80;
server_name petproject.com;
access_log /var/log/nginx/petproject.com_access.log;
error_log /var/log/nginx/petproject_error.log;
root /home/petproject/laravel/public;
location ~* \.(jpg|jpeg|gif|css|js|ico|rar|gz|zip|pdf|tar|bmp|xls|doc|swf|mp3|avi|png|htc|txt|flv)$ {
access_log off;
expires 7d;
}
location / {
index index.php index.html index.htm;
proxy_pass http://petproject:8080;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
# deny access to apache .htaccess files
location ~ /\.ht
{
deny all;
}
}
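The error means nginx could not resolve the hostname petproject used in proxy_pass when it parsed the configuration. A minimal sketch of the usual fixes, assuming the backend actually listens locally on port 8080 (the address 127.0.0.1:8080 is an assumption, not from the original post):

```nginx
# Option 1: declare an upstream whose server address resolves,
# and keep proxy_pass http://petproject:8080 pointing at it by name
upstream petproject {
    server 127.0.0.1:8080;  # assumption: backend runs locally on 8080
}

# Option 2: skip the hostname entirely inside the location block
location / {
    proxy_pass http://127.0.0.1:8080;
}
```

Alternatively, add a petproject entry to /etc/hosts (or your DNS) so the name resolves on the server itself; the DNS entries you created for the domain don't help nginx resolve the upstream name.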

Related

Artifactory CE Edition Configure HTTPS

I have installed a free Artifactory server (Community Edition, license 7.29.8 rev 72908900), but I can't configure an HTTP or HTTPS URL for it.
When I open the Artifactory web UI over HTTP, the HTTP Settings (Administration ==> General ==> HTTP Settings) are unavailable.
I have installed an NGINX server, but I can't get Artifactory running over HTTPS. NGINX and Artifactory are on the same VM.
I have found this documentation: https://www.jfrog.com/confluence/display/JFROG/HTTP+Settings and https://www.jfrog.com/confluence/display/JFROG/Configuring+NGINX
My nginx server configuration:
## add ssl entries when https has been set in config
##ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /etc/ssl/certs/domain.crt;
ssl_certificate_key /etc/ssl/private/domain.key;
ssl_session_cache shared:SSL:1m;
##ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
listen 8080;
server_name <Server_Name>;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/<Server_Name>-access.log timing;
## error_log /var/log/nginx/<Server_Name>-error.log;
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
proxy_pass https://<Artifactory_IP>:8082;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^/artifactory/ {
proxy_pass https://<Artifactory_IP>:8081;
}
}
}
None of it works.
Can you help me?
I just want to reach Artifactory at, for example, https://x.x.x.x:8082.
HTTP Settings is not supported in Artifactory Community Edition. That said, you may want to check out the free-tier option for testing this configuration and additional features at: https://jfrog.com/start-free
similar query: HTTPS Settings is disabled in freshly started artifactory-cpp-ce - how do I enable it?

nginx invalid URL prefix with rewrite

I'm using docker and running nginx alongside varnish.
Because I'm running docker, I've set the resolver manually at the top of the nginx configuration (resolver 127.0.0.11 ipv6=off valid=10s;) so that changes to container IPs will be picked up without needing to restart nginx.
This is the relevant part of the config that's giving me trouble:
location ~^/([a-zA-Z0-9/]+)$ {
set $args ''; #clear out the entire query string
set $card_name $1;
set $card_name $card_name_lowercase;
rewrite ^ /cards?card=$card_name break;
proxy_set_header x-cache-key card-type-$card_name;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header REQUEST_URI $request_uri;
proxy_http_version 1.1;
set $backend "http://varnish:80";
proxy_pass $backend;
proxy_intercept_errors on;
proxy_connect_timeout 60s;
proxy_send_timeout 86400s;
proxy_read_timeout 86400s;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
error_page 503 /maintenance.html;
}
When I visit a URL for this, e.g. https://example.com/Test, I get 500 internal server error.
In the nginx error log, I see the following:
2022/04/27 23:59:45 [error] 53#53: *1 invalid URL prefix in "", client: 10.211.55.2, server: example.com, request: "GET /Test HTTP/2.0", host: "example.com"
I'm not sure what's causing this issue: http:// is included in the backend variable, so it does have a proper prefix.
If I just use proxy_pass http://varnish:80; directly, it works fine, but the backend needs to be a variable in order to force nginx to use the resolver.
I've stumbled across a similar issue. I'm not sure why, but defining the
set $backend "http://varnish:80";
outside of the location block (at server level) fixed it for me.
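A hedged sketch of that workaround: the variable is declared at server scope, so it is populated before the location's rewrite runs (varnish:80 is the same hypothetical backend as in the question):

```nginx
server {
    resolver 127.0.0.11 ipv6=off valid=10s;  # Docker's embedded DNS
    set $backend "http://varnish:80";        # declared at server level

    location ~ ^/([a-zA-Z0-9/]+)$ {
        rewrite ^ /cards?card=$1 break;
        proxy_pass $backend;                 # variable still forces runtime resolution
    }
}
```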

How to set up two Nginx containers as a reverse proxy in an active-passive set up with failover?

I have set up a Nginx container on a Linux-EC2 server. My Nginx config file is as follows:
server {
listen 80;
server_name client-dev.com;
location / {
proxy_pass http://dev-client.1234.io:5001/;
proxy_redirect off;
##proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
server {
listen 80;
server_name client-test.com;
location / {
proxy_pass http://test-client-1234.io:5005/;
proxy_redirect off;
##proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
It proxies incoming requests on port 80 to backends on different ports. Now I need to make nginx redundant in an active-passive mode, in case the nginx container goes down or stops.
To do so, would I need to set up another nginx container on the same server? If so, how should it be set up to fail over automatically?
I have looked at the "upstream" option, but as far as I can tell it would not work for this case: the proxy_pass targets I have are external and dynamic, and I fetch them with a script from docker-cloud.
There is another tool named "docker-gen"; however, I'm not sure how useful it would be, and I'd prefer another way if there is one.
Any help would be appreciated.
I can think of following options:
Kubernetes: You can create a Deployment for your nginx setup and use liveness probes. Kubernetes will probe the nginx container with the HTTP request/interval you provide; if the pod is not healthy, it will be killed and recreated. Using multiple nodes in your Kubernetes cluster, you can even survive a node failure.
Docker Swarm: Using swarm mode with multiple nodes, you can also survive a node failure, but nginx health has to be checked by an external custom script, which can be done with bash and curl.
Standalone hosts with keepalived: This is the traditional nginx active/passive cluster using keepalived. You can also use it with Docker, but it would be messy, because all of the containers on one host will be passive.
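As a minimal sketch of the keepalived option (the interface name eth0, the floating IP 10.0.0.200, and the liveness check are assumptions to adapt, not part of the original setup), each host would run something like:

```conf
# /etc/keepalived/keepalived.conf on the ACTIVE host
vrrp_script chk_nginx {
    script "pidof nginx"   # assumption: a simple liveness check
    interval 2
}
vrrp_instance VI_1 {
    state MASTER           # use BACKUP and a lower priority on the passive host
    interface eth0         # assumption: adjust to your interface
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.0.0.200         # hypothetical floating VIP that clients/DNS point at
    }
    track_script {
        chk_nginx
    }
}
```

Both hosts run identical nginx containers; clients use the floating VIP, and keepalived moves it to the passive host when the health check on the active one fails.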

nginx 403 Forbidden error

I'm trying to set up graphite to work with grafana in docker based on this project : https://github.com/kamon-io/docker-grafana-graphite
and when I run my Dockerfile I get a 403 Forbidden error from nginx.
My nginx configuration is almost the same as the project's. I run my Dockerfiles on a server and test them from my Windows machine, so the configurations are not exactly the same; for example I have:
server {
listen 80 default_server;
server_name _;
location / {
root /src/grafana/dist;
index index.html;
}
location /graphite/ {
proxy_pass http://myserver:8000/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Host $host;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "origin, authorization, accept";
}
But I still keep getting 403 Forbidden. The nginx error log says:
directory index of "/src/grafana/dist/" is forbidden
Stopping and running it again gives the same message.
I'm very new to nginx, so I was wondering if there's something in the configuration that I'm misunderstanding.
Thanks in advance.
That's because you are hitting the first location block and the index file is not found.
A request to '/' will look for 'index.html' in '/src/grafana/dist'.
Confirm that:
1. 'index.html' exists.
2. You have the right permissions: nginx needs read access to the entire directory tree leading up to 'index.html'. That is, it must be able to read the directories 'src', 'src/grafana' and 'src/grafana/dist' as well as 'index.html' itself.
A hacky quick fix would be 'sudo chmod -R 755 /src', but I don't recommend it.
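A quick way to check both conditions at once is namei, which prints the permissions of every component along a path (the path below is the one from the error log; adjust it if your root differs):

```shell
# Path taken from the nginx error log
TARGET=/src/grafana/dist/index.html

# namei shows the mode of each directory on the way to the file,
# so one command reveals which component (if any) blocks nginx
namei -l "$TARGET" || echo "path does not exist: $TARGET"

# If a component is unreadable, grant traversal selectively, e.g.:
#   chmod o+x /src /src/grafana
#   chmod o+rx /src/grafana/dist
# (less blunt than chmod -R 755 on the whole tree)
```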

Nginx load balancer websocket issue

I'm new to NGINX and WebSocket systems, but per my project requirements I need to work through some complex things.
I'm trying to build an example using NGINX that handles my WebSocket traffic (port 1234) and HTTP requests (port 80) through the same URL (the load-balancer URL).
I'm using three NGINX servers: one as a load balancer (10.0.0.163) and the other two as application servers hosting my real APIs, 10.0.0.152 and 10.0.0.154 respectively. Right now, I have configured WebSocket on the application servers.
With the above configuration, all requests pass through 10.0.0.163 (the load balancer), whose proxy settings forward each request (HTTP/WebSocket) to an application server (10.0.0.152/154).
Note: each application server runs its own nginx, php and websocket processes.
Here is the default file (location: /etc/nginx/sites-available/) for the 10.0.0.154 server, which handles WebSocket and HTTP requests on the same domain:
server{
listen 80;
charset UTF-8;
root /var/www;
index index.html index.htm index.php;
server_name localhost 10.0.0.154 ;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
}
location / {
try_files $uri $uri/ @proxy;
autoindex on;
}
location @proxy {
proxy_pass http://wb1;
}
location =/ {
proxy_pass http://wb;
proxy_http_version 1.1;
proxy_buffers 8 16k;
proxy_buffer_size 32k;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Referer $http_referer;
proxy_redirect off;
}
}
Following is default file (location : /etc/nginx/sites-available/) for load balancer at 10.0.0.163.
upstream wb{
server 10.0.0.154;
server 10.0.0.152;
}
server{
listen 80;
charset UTF-8;
root /var/www;
index index.html index.htm index.php;
server_name 10.0.0.163 ;
location / {
try_files $uri $uri/ @proxy;
autoindex on;
}
location @proxy {
proxy_pass http://wb;
}
location =/ {
proxy_pass http://wb;
proxy_http_version 1.1;
proxy_buffers 8 16k;
proxy_buffer_size 32k;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Referer $http_referer;
proxy_redirect off;
}
}
I found that the load balancer works properly for HTTP requests, but it's unable to pass my WebSocket requests through to the application servers.
I don't know what I'm missing here. If you guys can help me out, it would be greatly appreciated.
Your configuration looks proper to me. I think you should check your load balancer and application server configurations or versions; it may be an incompatibility problem.
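One detail worth checking: in both files the Upgrade/Connection headers are only set inside location =/, which matches nothing but the exact root URI, so WebSocket handshakes to any other path fall through to the proxy location without them. A common sketch of an upgrade-aware catch-all (the map block and this layout are an assumption, not part of the original post) looks like:

```nginx
# In the http{} context: forward "Connection: upgrade" only when the
# client actually asked for a protocol upgrade, "close" otherwise
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    location / {
        proxy_pass http://wb;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this, the same location handles plain HTTP and WebSocket requests on every path, on both the load balancer and the application servers.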