Elasticsearch head plugin not working through nginx reverse proxy - nginx

I have Elasticsearch with the head plugin installed, running on a different server. I also set up an nginx reverse proxy for my ES instance. The configuration looks like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name es.mydomain.net;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:9200;
        }
    }
}
Hitting the link http://es.mydomain.net/ works fine and I get a status 200 response. However, if I try to hit the link http://es.mydomain.net/_plugin/head/, I seemingly get a blank page. Note: the page loads fine if I access the head plugin directly, without the reverse proxy, via http://SERVERIP:PORT/_plugin/head/.
EDIT:
After doing some more debugging, I saw a net::ERR_CONTENT_LENGTH_MISMATCH error in the browser console for the page. Looking at nginx's error log to see what went wrong, I came upon the true culprit, which is this error:
2015/05/27 16:26:48 [crit] 29765#0: *655 open() "/home/web/nginx/proxy_temp/6/00/0000000006" failed (13: Permission denied) while reading upstream, client: 10.183.6.63, server: es.mydomain.com, request: "GET /_plugin/head/dist/app.js HTTP/1.1", upstream: "http://127.0.0.1:9200/_plugin/head/dist/app.js", host: "es.mydomain.com", referrer: "http://es.mydomain.com/_plugin/head/"
I googled this error specifically, and it seems it can happen because the worker process runs as nobody, and the folder it is trying to read/write may not have the right permissions. Still looking into this, but I will update with an answer when found.
EDIT 2: Removed unnecessary information to make issue more direct.

I was able to work out two solutions to get around the permission issue, so I'll present them both.
One thing to know about my nginx setup is that I did not use sudo to install it. I extracted the tar file, ran configure, and ran make install, so nginx resided in /home/USERNAME/nginx/.
The issue was that starting nginx created a worker process under "nobody", which then tried to read/write in /home/USERNAME/nginx/proxy_temp/, which it did not have permission to do. Solutions on the web said to just chown the temp folders to nobody, but this wasn't really appropriate in my particular case, since we were inside USERNAME's home.
Solution 1:
Add user USERNAME; to the top of nginx.conf so that nginx runs the worker process as the specified user. This no longer led to a permission issue, as USERNAME had permission to read/write in the desired temp folders.
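For illustration, a minimal sketch of the top of nginx.conf with the user directive (USERNAME is a placeholder for the account that owns the nginx install):

user USERNAME;
worker_processes 1;

events {
    worker_connections 1024;
}

Note that the user directive must sit in the main (top-level) context, outside any http or server block.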
Solution 2:
Add proxy_temp_path to the server config. With this, you can specify a folder for the nobody process to create, where it would have read/write permission. Note that you might still run into permission issues if the other *_temp folders are used by your nginx server.
server {
    listen 80;
    server_name es.mydomain.net;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:9200;
        proxy_temp_path /foo/bar/proxy_temp;
    }
}
I personally preferred Solution 1, as it applies to all the server blocks, and I would not have to worry about the other *_temp folders once the conf file got more complex.

You have to install the head plugin on all ES nodes.

Related

nginx multiple virtual hosts - error - too many files?

We use nginx to host multiple sites.
We have a mix: some sites are only available over http:// and others over https://.
We create a new config file for every virtual host (domain) whenever a new customer arrives with a new homepage.
Everything worked correctly.
Today we created 2 new config files on the nginx server, copied them to sites-enabled, and reloaded nginx.
Now none of the sites work anymore.
In the browser we get an error that the site is not available.
In the nginx error.log we get the message
*2948... no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 178...., server 0.0.0.0:443
The virtual host config file we create looks like this:
server {
    listen 80;
    server_name example.de;
    return 301 http://www.$http_host$request_uri;
}

server {
    listen 80;
    server_name *.example.de;

    location / {
        access_log off;
        proxy_pass http://example.test.de;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
We get the error only if we create a new virtual host file in sites-enabled. If we copy the code into an existing virtual host file, it works correctly and all the other sites work again.
Any ideas why it doesn't work if we create a new file?
We deleted the new file and created it again, but we always get the same effect, with the error message in the error.log file.
I don't know if it's important, but we have 196 files in the sites-enabled directory. If we create a new one, the error comes back; if we delete the file and write the code into an existing file, it works correctly?!
We don't think it is an SSL error; we think the number of files is the problem.
We want to always create a new virtual host config file for each customer and not add the config to an existing file.
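For reference, that log line is nginx reporting that the server block which accepted the SSL handshake on port 443 has no certificate configured. A minimal sketch of a block that satisfies the check (the certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name example.de;

    # Every server block that can accept connections on an SSL port
    # needs a certificate, otherwise nginx logs the 'no "ssl_certificate"
    # is defined' error seen above.
    ssl_certificate /etc/ssl/certs/example.de.crt;
    ssl_certificate_key /etc/ssl/private/example.de.key;

    location / {
        proxy_pass http://example.test.de;
    }
}

Adding a file to sites-enabled can also change which server block nginx picks as the default for port 443 (files are included in alphabetical order), which would explain why the error appears and disappears as files are created and removed.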

Nginx seems to be reverting to a directory of files?

I'm trying to run Ghost on my own VPS mostly for the learning experience (and here we are).
When I SSH in and start/restart nginx, my blog URL shows the blog I'm trying to host, but when I exit and leave it alone for a while, it starts showing an index of files:
Index of /
HEAD
branches/
cgi-bin/
config
description
hooks/
info/
objects/
refs/
I'm not exactly sure where that directory is coming from or what's going on, despite hours of digging into the documentation.
EDIT: Here is the [url].conf file located in /etc/nginx/conf.d
server {
    listen 80;
    server_name [url].com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }
}
There's nothing in /etc/nginx/sites-available or /etc/nginx/sites-enabled.

Artifactory Browsing With Nginx & HTTP SSO Too Slow

I have set up a reverse proxy between Nginx and Artifactory, following the instructions here: https://www.jfrog.com/confluence/display/RTF/nginx
I've also enabled HTTP SSO in Artifactory so that a user authenticated by Nginx is able to log in to Artifactory automatically. Instructions followed from here: https://www.jfrog.com/confluence/display/RTF/Single+Sign-on
Everything is working, except that Artifactory is really slow. When I go to the website (e.g. artifactory.myorg.com/webapp/#/home), a progress wheel comes up and keeps spinning on every page.
If I turn off Nginx and access Artifactory using its embedded Tomcat engine, everything works fine.
Is there anything I can do to fix this?
Update
The browsing is fine as soon as I turn off the following setting:
proxy_set_header REMOTE_USER $remote_user;
I am guessing that Artifactory is processing this user setting for every request, and maybe I need to do something on the Tomcat side or in the Artifactory settings to resolve it.
Here's how my nginx/artifactory config looks (it was generated by the Reverse Proxy setup page in Artifactory 4.4):
ssl_certificate /etc/ssl/certs/dummy.crt;
ssl_certificate_key /etc/ssl/keys/dummy.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

server {
    listen 443 ssl;
    server_name dummy.net;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    access_log /var/log/nginx/dummy-access.log;
    error_log /var/log/nginx/dummy-error.log;

    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory$ /artifactory/webapp/ redirect;

    location /artifactory/ {
        auth_pam "Secure Zone";
        auth_pam_service_name "sevice";
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://127.0.0.1:8081/artifactory/;
        proxy_set_header DUMMY_USER $remote_user;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Yes. Using Nginx as a reverse proxy should not add noticeable overhead, and it could even speed up the experience if you use it to serve the static assets.
Your testing so far has implicated Nginx, so posting your related Nginx configuration would be helpful.
But I'll go out on a limb and make a guess without seeing it. You are likely using proxy_pass in Nginx to send requests on to Artifactory. If Artifactory is on the same host as Nginx, the proxy_pass address should be a port on 127.0.0.1. If you are instead using a domain name there, your traffic might be taking an inefficient route, such as being sent from Nginx back out through a load balancer or through CloudFlare.
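As a rough sketch of that suggestion (assuming Artifactory's default port 8081 on the same host, as in the config above):

location /artifactory/ {
    # Proxy straight to the local Tomcat; pointing proxy_pass at a public
    # hostname here can send traffic back out through a load balancer or CDN.
    proxy_pass http://127.0.0.1:8081/artifactory/;
}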
After trying to reproduce your scenario a few times, I would recommend trying one more thing to isolate the problem.
Try setting a fixed username in the REMOTE_USER value instead of a variable:
proxy_set_header REMOTE_USER username;
BTW, in the snippet the header name is DUMMY_USER, while in your example you specified REMOTE_USER. Make sure the header name is the same as the one configured in Artifactory under Admin > Security | HTTP-SSO.
If the issue still reproduces, please contact support@jfrog.com.

Vaadin, Nginx. unsaved data

See the image below (Vaadin 7, nginx). What could be wrong?
web.xml
sample config:
server {
    listen 80;
    server_name crm.komrus.com;
    root /home/deploy/apache-tomcat-7.0.57/webapps/komruscrm;
    proxy_cache one;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/komruscrm/;
    }
}
As it seems (you don't provide enough info about your problem), you are using nginx as a reverse proxy for tomcat/jboss/jetty and deploying a Vaadin application in it.
Just when you enter the application, a session expired message appears.
I had this problem 3 months ago. In my scenario Nginx was 1.0 and Vaadin was 7.0+. The issue comes from the cookies: nginx must set or rewrite something in the cookies, and you must configure that manually in the nginx.conf file, or you will get that error.
Sadly, in my nginx version I wasn't able to pass the cookies the right way, so I wasn't able to deploy my application under that scenario.
After some attempts, I decided to use Apache's reverse proxy instead and never saw the issue again. Hope you can write a rule that passes the cookies the right way.
EDIT: I remembered this post: How to rewrite the domain part of Set-Cookie in a nginx reverse proxy? This is the case!
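For what it's worth, newer nginx releases can rewrite cookie attributes directly in the proxy config. A hedged sketch adapted to the sample config above (the cookie domain and path values are assumptions based on that config):

location / {
    proxy_pass http://127.0.0.1:8080/komruscrm/;
    # Rewrite the path of Set-Cookie headers so the Tomcat session cookie
    # issued for /komruscrm/ is valid at / (proxy_cookie_path: nginx 1.1.15+).
    proxy_cookie_path /komruscrm/ /;
    # Rewrite the domain part of Set-Cookie if the backend sets one
    # (proxy_cookie_domain: nginx 1.1.15+); "localhost" here is an assumption.
    proxy_cookie_domain localhost crm.komrus.com;
}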

github oauth and nginx proxy

I feel like I have searched the entire internet and tried nearly everything to solve my problem. Now I have decided to ask you, hoping that somebody out there is able to help me.
I have a node application running on sub2.domain.tld:3000. Now I want to proxy this application to port 80 with nginx, so that I can reach the app at sub.domain.tld. That much is not the problem; I am able to reach the first page.
The problem follows with an authentication routine using the OAuth API to verify the user for the application.
When surfing to sub2.domain.tld:3000, the process works fine. But when I change the URL in the configs and surf to sub.domain.tld, the authentication process runs into an error (error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL.....).
So I guess I am making a mistake in how nginx redirects the URL.
I am using nginx 1.4.7 and node 0.10.26
My nginx configuration file looks like this:
server {
    listen 80;
    access_log /var/log/nginx/access_log_sub;
    server_name sub.domain.tld;

    location / {
        include proxy_params;
        proxy_pass http://IP:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
    }
}
But I believe OAuth is verifying sub2.domain.tld:3000, and that conflicts with sub.domain.tld.
I hope you are able to help me solve this issue.
The error isn't coming from nginx, it's coming from your OAuth provider:
The redirect_uri parameter is optional. If left out, GitHub will redirect users to the callback URL configured in the OAuth Application settings. If provided, the redirect URL's host and port must exactly match the callback URL. The redirect URL's path must reference a subdirectory of the callback URL.
-- https://developer.github.com/v3/oauth/#redirect-urls
This is an old question, but...
Try changing your Host header to:
proxy_set_header Host $host:$server_port;
This may or may not work depending on your application.
As an aside, X-Forwarded-For should include a comma-separated list of the originating client and any proxies it passes through.
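For illustration, the standard way to build that list in nginx is the $proxy_add_x_forwarded_for variable, rather than the bare $remote_addr used in the config above:

location / {
    proxy_pass http://IP:3000;
    # $proxy_add_x_forwarded_for appends $remote_addr to any incoming
    # X-Forwarded-For header, preserving the chain of proxies.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}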
