I want to get the content of my page via cURL, so...
1. The first time I execute the script, I get the content.
2. I refresh the page.
3. I get 'cURL error 7: Failed to connect to <my.site> port 443: Connection refused'.
4. I wait 5 minutes.
5. I refresh the page.
6. And... now I get the content, but the next refresh sends me back to point 3...
What's funny is that when I make the request via Postman, I get the content of my page every single time...??
Map of the connection
The domain points at the router; the router says 'go to the nginx server', and the nginx server passes the connection on to page_vm via proxy_pass.
nginx config
server {
    listen 5443 ssl;
    server_name <my.site>;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://192.168.42.10:82;
        proxy_http_version 1.1;
    }

    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_certificate fullchain.pem;
    ssl_certificate_key privkey.pem;
}
Can someone help me? I've spent 4 hours on this today...
There can be a lot of different causes in a case like this. The first one that comes to mind is that you may have something that rate limits how many requests you can make to your website, a tool like fail2ban or something similar.
Did you try copying the curl command from your browser and running it like that? https://everything.curl.dev/usingcurl/copyas If that also fails, it's probably some rate-limiting thing…
OK. I've now run my cURL command with the -vv parameter and...
The first time it tries to connect to my public IP and I get the content of the page, but on the second try it connects to my router's IP and I don't get the content...
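One way to confirm whether that DNS flip-flop, rather than curl itself, is the culprit is to pin the hostname to each address in turn with curl's --resolve option. A sketch, with placeholder addresses standing in for the real public IP and the router IP:

# Pin the site to the public IP (203.0.113.10 is a placeholder) and test repeatedly:
curl -v --resolve my.site:443:203.0.113.10 https://my.site/

# Then pin it to the router's address (also a placeholder) and compare:
curl -v --resolve my.site:443:192.168.1.1 https://my.site/

If the first form succeeds on every run and the second fails, the problem is name resolution returning two different addresses, not the nginx proxy.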
Related
I have a website hosted on Google Cloud Platform behind an NGINX server. A few hours ago it was working well, but suddenly a 502 Bad Gateway error occurred.
The NGINX server is hosted on one instance and the main project on another, and the following is my server configuration:
server {
    listen 443 ssl;
    server_name www.domain.com;

    ssl_certificate /path-to/fullchain.pem;
    ssl_certificate_key /path-to/privkey.pem;

    # REST API Redirect
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:3000;
    }

    # Server-side CMS Redirect
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:4400;
    }
}
When I restarted the nginx server instance, the website loaded successfully, but after three or four refreshes it started giving Bad Gateway, and after that it gives the Bad Gateway error every time I open it.
Sometimes it recovers on its own for a while, but then goes down again.
I checked the nginx server's error log; sometimes this well-known issue is logged:
and sometimes this:
Regarding the first issue, I tried some of the recommended fixes, such as increasing the proxy send and read timeouts to higher values in the server configuration, as suggested here.
Also, the backend code itself is working fine, because I can reach the deployed backend services locally during development; it is only through the hosted website that no backend service can be reached.
But nothing worked, and sadly my website is down.
Please suggest a solution.
By default, nginx has 1024 worker connections; you can change this with:
events {
    worker_connections 4096;
}
You can also try increasing the number of workers, since workers * worker_connections gives the total number of connections you can handle. All of this applies if your site is receiving traffic and you are simply running out of connections.
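For scale, the two knobs multiply; a sketch of the top of nginx.conf with illustrative values (not a recommendation tuned to your load):

# often set to the number of CPU cores; 'auto' also works on modern nginx
worker_processes 4;

events {
    worker_connections 4096;    # per-worker limit
}

# capacity here is roughly 4 * 4096 = 16384 concurrent connections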
I have set up a reverse proxy between Nginx and Artifactory, following the instructions here: https://www.jfrog.com/confluence/display/RTF/nginx
I've also enabled HTTP SSO in Artifactory so that a user authenticated by the web server is logged in to Artifactory automatically. Instructions followed from here: https://www.jfrog.com/confluence/display/RTF/Single+Sign-on
Everything is working, except that Artifactory is really slow. When I go to the website (e.g. artifactory.myorg.com/webapp/#/home), a progress wheel comes up and keeps spinning on every page.
If I turn off Nginx and access Artifactory through its embedded Tomcat engine, everything works fine.
Is there anything I can do to fix this?
Update
Browsing is fine as soon as I turn off the following setting:
proxy_set_header REMOTE_USER $remote_user;
I am guessing that Artifactory processes this user setting on every request, and maybe I need to change something on the Tomcat side or in the Artifactory settings to resolve that.
Here's what my nginx/Artifactory config looks like (it was generated by the Reverse Proxy setup page in Artifactory 4.4):
ssl_certificate /etc/ssl/certs/dummy.crt;
ssl_certificate_key /etc/ssl/keys/dummy.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

server {
    listen 443 ssl;
    server_name dummy.net;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    access_log /var/log/nginx/dummy-access.log;
    error_log /var/log/nginx/dummy-error.log;

    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory$ /artifactory/webapp/ redirect;

    location /artifactory/ {
        auth_pam "Secure Zone";
        auth_pam_service_name "sevice";
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://127.0.0.1:8081/artifactory/;
        proxy_set_header DUMMY_USER $remote_user;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Yes. Using Nginx as a reverse proxy should not add noticeable overhead, and it could even speed things up if you use it to serve the static assets.
Your testing so far has implicated Nginx, so posting your relevant Nginx configuration would be helpful.
But I'll go out on a limb and make a guess without seeing it. You are likely using proxy_pass in Nginx to forward requests to Artifactory. If Artifactory is on the same host as Nginx, the proxy_pass address should be a port on 127.0.0.1. If you are instead using a domain name there, your traffic might be doing something like being routed from Nginx back through a load balancer, through CloudFlare, or along some other inefficient route.
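A sketch of the on-host case, assuming Artifactory's embedded Tomcat is listening on its default port 8081:

location /artifactory/ {
    # loopback keeps the hop on the same machine instead of looping
    # back out through a load balancer or CDN
    proxy_pass http://127.0.0.1:8081/artifactory/;
}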
After trying to reproduce your scenario a few times, I would recommend trying one more thing to isolate the problem.
Try setting a fixed username in the REMOTE_USER value instead of a variable:
proxy_set_header REMOTE_USER username;
BTW, from the snippet it appears the header name is DUMMY_USER, while in the example you specified REMOTE_USER. Make sure the header name is the same as the one configured in Artifactory under Admin > Security | HTTP-SSO.
If the issue still reproduces, please contact support@jfrog.com.
I've installed GitLab 8.0.2 on a VM, and I have an nginx reverse proxy set up to direct HTTP traffic to the VM. I can view the main GitLab login page, but when I try to log in using the Google OAuth2 method, the callback fails to log me in after I enter my correct credentials; I am simply redirected back to the GitLab login page.
Where might the problem be? The reverse proxy settings? The GitLab settings (i.e. the Google OAuth config)?
Below is my nginx conf:
upstream gitlab {
    server 192.168.122.134:80;
}

server {
    listen 80;
    server_name myserver.com;

    access_log /var/log/nginx/gitlab.access.log;
    error_log /var/log/nginx/gitlab.error.log;

    root /dev/null;

    ## send request back to gitlab ##
    location / {
        proxy_pass http://gitlab;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Interestingly, my old setup used iptables to redirect port 81 on the host machine to port 80 on the GitLab VM, and in that case the Google OAuth callback worked. I'd prefer that people simply use the standard port 80 to access my GitLab instance, though, so I want this reverse proxy method to work.
GitLab 8.x has quite a few new things. Although I don't see anything specifically wrong with your nginx.conf file, it is pretty short compared to the example in the GitLab repository. Look through https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/support/nginx/gitlab-ssl to get an idea of the configuration you should consider adding.
Once your nginx.conf file is updated, read through GitLab OmniAuth documentation and the Google OAuth2 integration documentation under 'Providers' on that OmniAuth page. Make sure you provide the correct callback URL to Google when registering.
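For reference, the OmniAuth callback URL for the Google provider follows a fixed path under your GitLab host; for the server name in the config above it would presumably be:

http://myserver.com/users/auth/google_oauth2/callback

If the URL registered with Google points at the VM's internal address or a non-standard port instead, the round trip through the proxy can break the flow.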
I have elasticsearch with the head plugin installed, running on a different server. I have also set up an nginx reverse proxy for my ES instance. The configuration looks like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name es.mydomain.net;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:9200;
        }
    }
}
Hitting http://es.mydomain.net/ works fine, and I get a 200 response. However, if I hit http://es.mydomain.net/_plugin/head/, I get a seemingly blank page. Note that the page loads fine if I access the head plugin directly, without the reverse proxy, via http://SERVERIP:PORT/_plugin/head/.
EDIT:
After doing some more debugging, I saw a net::ERR_CONTENT_LENGTH_MISMATCH error in the console for the page. Looking at nginx's error log to see what the problem was, I came upon the true culprit, which is this error:
2015/05/27 16:26:48 [crit] 29765#0: *655 open() "/home/web/nginx/proxy_temp/6/00/0000000006" failed (13: Permission denied) while reading upstream, client: 10.183.6.63, server: es.mydomain.com, request: "GET /_plugin/head/dist/app.js HTTP/1.1", upstream: "http://127.0.0.1:9200/_plugin/head/dist/app.js", host: "es.mydomain.com", referrer: "http://es.mydomain.com/_plugin/head/"
I googled this one in particular, and it seems it can happen because the worker process runs as nobody, which may not have the right permissions on the folder it is trying to read/write. Still looking into this, and I will update with an answer once I find one.
EDIT 2: Removed unnecessary information to make issue more direct.
I was able to work out two solutions to get around the permission problem, so I'll present them both.
One thing to know about my nginx setup is that I did not use sudo to install it. I unarchived the tar file, configured it, and ran make install, so it resides in /home/USERNAME/nginx/.
The issue was that starting nginx created a worker process running as 'nobody', which then tried to read/write /home/USERNAME/nginx/proxy_temp/, which it had no permission to do. Solutions on the web said to just chown the temp folders to nobody, but that wasn't really appropriate in my case, since they live inside USERNAME's home directory.
Solution 1:
Add user USERNAME; to the top of nginx.conf, so that the worker process runs as the specified user. This removed the permission issue, as USERNAME has permission to read/write the desired temp folders.
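A minimal sketch of what that looks like (USERNAME stands for whichever account owns the nginx tree):

# first line of nginx.conf, outside any block:
# worker processes will now run as this user instead of 'nobody'
user USERNAME;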
Solution 2:
Add proxy_temp_path to the server config. With this, you can point the nobody process at a folder where it has permission to create, read, and write files. Note that you might still run into permission issues if your nginx server uses the other *_temp folders.
server {
    listen 80;
    server_name es.mydomain.net;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:9200;
        proxy_temp_path /foo/bar/proxy_temp;
    }
}
I personally preferred solution 1, as it applies to all the server blocks, and I don't have to worry about the other *_temp folders once the conf file gets more complex.
You have to install the head plugin on all ES nodes.
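For the ES versions current at the time (1.x), that means running the plugin script on every node; a sketch, assuming a tar-style install directory:

# run on each Elasticsearch node (ES 1.x syntax; ES 2.x drops the leading dash)
./bin/plugin -install mobz/elasticsearch-head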
To give you a bit of background on my problem:
I have an app that installs software and serves it publicly. When it installs, it:
Installs base files to root path (i.e. /var/www/{site})
Creates a new nginx configuration file and places it in /etc/nginx/sites-available/{site}
Creates a symlink to that file from /etc/nginx/sites-enabled/{site}
Reloads the nginx configuration: service nginx reload
Sends an API call to CloudFlare to direct all requests for {site}.mydomain.com to the server's IP address
After that, {site}.mydomain.com should work, except... it doesn't!
... Until you wait ~5 minutes, and then it magically starts working. Is there a delay before proxying takes effect with nginx?
If I delete {site} and re-add it (same process as above), even if it was working before, it stops working for a while before it starts working again.
I'm at a loss to explain what's happening!
nginx config (where {site} is foobar)
upstream mydomain_foobar {
    server 127.0.0.1:4567;
}

server {
    listen 80;
    server_name foobar.mydomain.com;

    root /var/www/mydomain/foobar/;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://mydomain_foobar/;
        proxy_redirect off;

        # Socket.io Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
After looking at this issue on and off over the past month, it turns out it was not related to nginx at all.
When a rec_new API call is sent to CloudFlare, it takes roughly five minutes (300 s TTL) for their records to update. The same goes for any other DNS-related API call to CloudFlare.
This explains the five-minute gap.
From Cloudflare:
Hi,
DNS updates should take place after about five minutes (the ttl is 300 seconds). It may take a little longer to propagate out everywhere on the web (recursive DNS caching by your ISP, for example).
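If you'd rather verify than wait, you can watch the record and its remaining TTL with dig; a sketch using the hostname from the example above:

# the second column of the answer is the remaining TTL in seconds (CloudFlare's here is 300)
dig +noall +answer foobar.mydomain.com A

Once the answer shows the new IP address, the nginx vhost responds immediately; the 'delay' lives entirely in DNS.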