SonarQube - HAProxy HTTPS Invalid Header - http

I have all of my development systems stood up behind haproxy (everything dockerized), and I can access my Jenkins, GitLab, and Sonar but not Nexus. Looking at the docker logs for the nexus container, I can see it is receiving the requests, but it says the forwarded header is invalid. My goal is to have haproxy terminate HTTPS while the apps behind it use only HTTP. That way the apps get HTTPS via the proxy without needing the TLS configuration themselves.
Here is the log message:
nexus_1 | 2018-03-23 17:35:08,874-0500 WARN [qtp1790585161-43] *SYSTEM
org.sonatype.nexus.internal.web.HeaderPatternFilter - rejecting request
from 98.192.146.97 due to invalid header 'X-Forwarded-Proto: \http'
Here is my haproxy config for nexus:
frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    reqadd X-Forwarded-Proto:\http
    acl jenkins hdr_beg(host) -i jenkins.
    acl nexus hdr_beg(host) -i nexus.
    acl git hdr_beg(host) -i git.
    acl sonar hdr_beg(host) -i sonar.
    use_backend jenkins if jenkins
    use_backend nexus if nexus
    use_backend git if git
    use_backend sonar if sonar
backend nexus
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    server nexus1 nexus:8081 check
I originally had this line in, which exists in all my other apps:
http-request add-header X-Forwarded-Proto https if { ssl_fc }
But when that is enabled, Nexus throws this error:
nexus_1 | 2018-03-23 23:54:38,132-0500 WARN [qtp1790585161-43]
*SYSTEM org.sonatype.nexus.internal.web.HeaderPatternFilter - rejecting
request from 98.192.146.97 due to invalid header 'X-Forwarded-Proto:
\http,https'
Is there something special nexus requires to work with haproxy?
EDIT: I have confirmed that if I run "docker-compose exec haproxy sh" I can curl "nexus:8081" and it returns the index.html. So I know the haproxy container can communicate correctly with the nexus container.

http-request add-header X-Forwarded-Proto https if { ssl_fc }
This is essentially wrong in most cases. Wherever you are using it, you may need to change it, because you don't want to add this header -- you want to set it. You need your version to overwrite anything that may be in the incoming request.
http-request set-header X-Forwarded-Proto https if { ssl_fc }
Adding a header preserves previous values, including the one you are adding incorrectly with this line:
reqadd X-Forwarded-Proto:\http
You either need a space after that \ or you need to remove the \ entirely... but really, this would optimally be done with http-request set-header since this is operationally preferred over req* actions.
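For reference, the escaped legacy form that line was reaching for would be:

reqadd X-Forwarded-Proto:\ http

but, as noted, http-request set-header is the better choice.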
But it really doesn't make obvious sense to have that line, anyway, because your frontend uses bind *:443 ssl, so ssl_fc is always true in this frontend. Hence, your frontend could simply set the correct header.
http-request set-header X-Forwarded-Proto https
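Putting it together, the frontend from the question could simply be (a sketch based on the config above, abbreviated to two of the ACLs):

frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    http-request set-header X-Forwarded-Proto https
    acl jenkins hdr_beg(host) -i jenkins.
    acl nexus hdr_beg(host) -i nexus.
    use_backend jenkins if jenkins
    use_backend nexus if nexus

With set-header, any client-supplied X-Forwarded-Proto is overwritten, so Nexus sees exactly one well-formed value.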

Related

Nginx resolver for Kubernetes with skydns

I can't find a way to make an nginx pod resolve another kubernetes service's URL.
I am NOT using kube-dns; we are using kube2sky solely, and we are not going to implement kube-dns yet, so I need a fix for this scenario.
For example, I want nginx to resolve the service URL app.mynamespace.svc.skydns.local. If I run a ping to that URL it resolves successfully, yet nginx cannot resolve it.
My nginx config part is:
location /api/v1/namespaces/mynamespace/services/app/proxy/ {
    resolver 127.0.0.1;
    set $endpoint "http://app.mynamespace.svc.skydns.local/";
    proxy_pass $endpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "upgrade";
}
I need to specify the target upstream in a variable because I want nginx to start even if the target is not available; if I don't use a variable, nginx crashes on startup because the upstream must be available and resolvable.
The problem, I think, is the resolver value. I've tried 127.0.0.1, 127.0.0.11, and the skydns IP specified in the configuration, 172.40.0.2:53:
etcdctl get /skydns/config
{"dns_addr":"0.0.0.0:53","ttl":4294967290,"nameservers":["172.40.0.2:53"]}
But nginx still cannot resolve the URL.
What IP should I specify in the resolver field in nginx config for kubernetes and skydns config?
Remember that we don't have kube-dns.
Thank you.
I don't think resolving app.mynamespace.svc.skydns.local has anything to do with configuring the upstream DNS servers. Generally, for that, you configure a well-known DNS server like 8.8.8.8 or your cloud infrastructure's DNS server, which would perhaps be 172.40.0.2. For example, as described in the docs:
$ curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/config \
-d value='{"dns_addr":"127.0.0.1:5354","ttl":3600, "nameservers": ["8.8.8.8:53","8.8.4.4:53"]}'
You might want to check the logs of your kube2sky pod for guidance, and verify that all the config options are specified, like --kube-master-url and --etcd-server. Maybe it can't talk to the Kubernetes master and receive updates of running pods, so the SRV entries never get updated.
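Separately, if the records do exist in skydns, nginx's resolver directive needs to point at the address where skydns itself listens rather than at 127.0.0.1. A sketch under that assumption (substitute the real skydns address; 172.40.0.2 is only a guess based on the config above):

location /api/v1/namespaces/mynamespace/services/app/proxy/ {
    resolver 172.40.0.2 valid=10s;   # assumed skydns address; adjust to your cluster
    set $endpoint "http://app.mynamespace.svc.skydns.local/";
    proxy_pass $endpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "upgrade";
}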

Nginx doesn't route according to my custom rules

Problem:
Nginx doesn't route traffic based on the rule I have defined in a separate config file, and just displays the default 404 response.
Context:
I have a small middleware application written in Go that provides a simple response to GET requests. The application is deployed on port 8080:
$ curl localhost:8080
ok
I wish to write an Nginx configuration that allows me to route calls from /api to localhost:8080, which would allow me to do the following
$ curl localhost/api
ok
To achieve this, I have written the following config:
/etc/nginx/sites-available/custom-nginx-rules
server {
    listen 80;

    location /api {
        proxy_pass http://localhost:8080;
    }
}
I have also created a symlink in /etc/nginx/sites-enabled/ for the above file:
$ ls -l /etc/nginx/sites-enabled
total 0
lrwxrwxrwx 1 root root 34 Jan 19 16:42 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 32 Feb 20 14:56 custom-nginx-rules -> /etc/nginx/sites-available/custom-nginx-rules
The rest of the setup is vanilla Nginx, nothing is changed.
Despite this simple setup, I get a 404 when making the following call:
$ curl localhost/api
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.10.3</center>
</body>
</html>
Other info: the following nginx packages are installed on my system (running on a Raspberry Pi):
$ dpkg -l | grep nginx
ii libnginx-mod-http-auth-pam 1.10.3-1+deb9u1 armhf PAM authentication module for Nginx
ii libnginx-mod-http-dav-ext 1.10.3-1+deb9u1 armhf WebDAV missing commands support for Nginx
ii libnginx-mod-http-echo 1.10.3-1+deb9u1 armhf Bring echo and more shell style goodies to Nginx
ii libnginx-mod-http-geoip 1.10.3-1+deb9u1 armhf GeoIP HTTP module for Nginx
ii libnginx-mod-http-image-filter 1.10.3-1+deb9u1 armhf HTTP image filter module for Nginx
ii libnginx-mod-http-subs-filter 1.10.3-1+deb9u1 armhf Substitution filter module for Nginx
ii libnginx-mod-http-upstream-fair 1.10.3-1+deb9u1 armhf Nginx Upstream Fair Proxy Load Balancer
ii libnginx-mod-http-xslt-filter 1.10.3-1+deb9u1 armhf XSLT Transformation module for Nginx
ii libnginx-mod-mail 1.10.3-1+deb9u1 armhf Mail module for Nginx
ii libnginx-mod-stream 1.10.3-1+deb9u1 armhf Stream module for Nginx
ii nginx 1.10.3-1+deb9u1 all small, powerful, scalable web/proxy server
ii nginx-common 1.10.3-1+deb9u1 all small, powerful, scalable web/proxy server - common files
ii nginx-full 1.10.3-1+deb9u1 armhf nginx web/proxy server (standard version)
I also require that this setup is independent of any host or server names. It should do the routing regardless of host.
Without seeing your full configuration, it seems like it could be the case that the default nginx server block is accepting the request, rather than yours. You can try to fix this by changing the listen line to be:
listen 80 default_server;
You can also confirm that this is the case by adding a server_name and curling using that:
server_name api.example.com;
Then:
curl -H "Host: api.example.com" http://localhost/api
If that works, the issue is definitely the default_server handling.
From the NGINX docs on server selection:
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive:
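Applied to the config in the question, the server block would become (a sketch):

server {
    listen 80 default_server;

    location /api {
        proxy_pass http://localhost:8080;
    }
}

Since only one server block per listen port may be the default_server, the parameter would also have to be removed from the stock default site.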

How to push docker images through reverse proxy to artifactory

I have an issue pushing my docker image to Artifactory [Artifactory Pro Power Pack 3.5.2.1 (rev. 30160)], which is used as a docker registry.
I have docker version:
$ sudo docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.3.3
Git commit (client): a8a31ef/1.5.0
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.3.3
Git commit (server): a8a31ef/1.5.0
I've followed this link, http://www.jfrog.com/confluence/display/RTF/Docker+Repositories, and this one: artifactory as docker registry.
I created a docker registry in artifactory called docker-local and enabled docker support for it.
My artifactory doesn't have an option to choose docker v1 or v2 like in this document, so I'm assuming it uses docker v1.
Artifactory generated these for me:
<distributionManagement>
    <repository>
        <id>sdpvvrwm812</id>
        <name>sdpvvrwm812-releases</name>
        <url>http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/docker-local</url>
    </repository>
    <snapshotRepository>
        <id>sdpvvrwm812</id>
        <name>sdpvvrwm812-snapshots</name>
        <url>http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/docker-local</url>
    </snapshotRepository>
</distributionManagement>
Though something's not working with these settings.
I installed the reverse proxy nginx and copied these settings into its /etc/nginx/nginx.conf:
http {
    ##
    # Basic Settings
    ##
    [...]

    server {
        listen 443;
        server_name sdpvvrwm812.ib.tor.company.com;

        ssl on;
        ssl_certificate /etc/ssl/certs/sdpvvrwm812.ib.tor.company.com.crt;
        ssl_certificate_key /etc/ssl/private/sdpvvrwm812.ib.tor.company.com.key;

        access_log /var/log/nginx/sdpvvrwm812.ib.tor.company.com.access.log;
        error_log /var/log/nginx/sdpvvrwm812.ib.tor.company.com.error.log;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_read_timeout 900;

        client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

        # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
        chunked_transfer_encoding on;

        location /v1 {
            proxy_pass http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-local/v1;
        }
    }
}
I generated my ssl key as shown at http://www.akadia.com/services/ssh_test_certificate.html and placed the files in these two locations:
/etc/ssl/certs/sdpvvrwm812.ib.tor.company.com.crt
/etc/ssl/private/sdpvvrwm812.ib.tor.company.com.key
I'm not sure how to ping the new docker registry, but doing
sudo docker login -u adrianus -p AT65UTJpXEFBHaXrzrdUdCS -e adrian@company.com http://sdpvvrwm812.ib.tor.company.com
gives this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with
error: Get https://sdpvvrwm812.ib.tor.company.com/v1/_ping: dial tcp
172.25.10.44:443: connection refused. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add
--insecure-registry sdpvvrwm812.ib.tor.company.com to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/sdpvvrwm812.ib.tor.company.com/ca.crt
BUT the certificate /etc/docker/certs.d/sdpvvrwm812.ib.tor.company.com/ca.crt exists, so what's going on?
sudo curl -k -uadrianus:AP2pKojAeMSpXEFBHaXrzrdUdCS "https://sdpvvrwm812.ib.tor.company.com"
gives this error:
curl: (35) SSL connect error
I do start the docker daemon with:
sudo docker -d --insecure-registry https://sdpvvrwm812.ib.tor.company.com
Could it be that my docker registry is http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/docker-local while docker and nginx are looking for http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/docker-local/v1?
Any clues how to get docker to push images to artifactory?
The <distributionManagement/> part is for Maven. It's a bit of a facepalm that Artifactory 3 shows a Maven snippet for Docker repos (fixed in Artifactory 4; you're welcome to upgrade), so please disregard it.
Generally, with Docker you can't use /artifactory/repoName. It's a Docker limitation: your registry must be hostname:port, without any additional path.
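For example, once the proxy answers on the bare hostname, images are tagged and pushed without any /artifactory path (the image name here is hypothetical):

sudo docker tag myimage sdpvvrwm812.ib.tor.company.com/myimage
sudo docker push sdpvvrwm812.ib.tor.company.com/myimage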
That's exactly why you have to configure the reverse proxy. What your nginx config does is forward all requests to sdpvvrwm812.ib.tor.company.com:443/v1 on to http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-local/v1, which is the correct thing to do.
Please note that the location for certificates should be /etc/docker/certs.d/sdpvvrwm812.ib.tor.company.com/, not /etc/ssl/certs/.
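If the certificate is not already in place, a sketch of the fix using the paths from the question and the daemon's error message:

sudo mkdir -p /etc/docker/certs.d/sdpvvrwm812.ib.tor.company.com
sudo cp /etc/ssl/certs/sdpvvrwm812.ib.tor.company.com.crt \
    /etc/docker/certs.d/sdpvvrwm812.ib.tor.company.com/ca.crt
sudo service docker restart    # assuming a sysvinit/upstart setup, as in Docker 1.5 days

With the CA certificate in that location, the daemon no longer needs the --insecure-registry flag.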

How to set authentication in kibana

Is it possible to enable authentication in Kibana in order to restrict access to a dashboard to only be accessible to particular users?
Kibana itself doesn't support authentication or restricting access to dashboards.
You can restrict access to Kibana 4 using nginx as a proxy in front of Kibana, as described here: https://serverfault.com/a/345244. Just set proxy_pass to port 5601 and block that port on the firewall for everyone else. This will completely enable or disable Kibana.
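That answer puts nginx HTTP basic auth in front of Kibana; a minimal sketch of the idea (the server_name and htpasswd path are hypothetical, and the password file is created with the htpasswd utility):

server {
    listen 80;
    server_name kibana.example.com;                   # hypothetical
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;  # hypothetical; create with htpasswd
    location / {
        proxy_pass http://127.0.0.1:5601;             # Kibana's default port
        proxy_set_header Host $host;
    }
}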
Elastic also has a tool called Shield, which enables you to manage the security of elasticsearch. With Shield you can, for example, allow someone to analyze data in specific indexes with read-only permissions. https://www.elastic.co/products/shield
Edit: Elastic has an issue on GitHub where they recommend using Shield:
1. Install Shield (plugin for elasticsearch) following these instructions
2. Configure roles for Kibana users
3. Configure Kibana to work with Shield
Remember Shield provides only index-level access control. That means user A will be able to see all dashboards but some of them will be empty (because he doesn't have access to all indices).
Check out the plugin named elasticsearch-readonlyrest.
It allows easy access control via authentication, IP/network, or the X-Forwarded-For header, and lets you set up read-write or read-only access in Kibana and limit index access per user. It is simple to set up and should give enough control for most people.
If more control is needed, you can use Search Guard, a free alternative to Shield.
Kibana4 doesn't currently support this.
I have achieved authentication by installing haproxy.
1. Restrict Kibana to localhost:

$ sudo nano /etc/kibana/kibana.yml
server.host: "localhost"

2. Install haproxy on the same machine where Kibana is installed:

$ sudo apt update && sudo apt install haproxy
$ sudo nano /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

userlist UsersFor_Kibana
    user kibana insecure-password myPASSWORD

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    acl AuthOkay_Kibana http_auth(UsersFor_Kibana)
    http-request auth realm Kibana if !AuthOkay_Kibana
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server server1 127.0.0.1:5601 check
username: "kibana"
password: "myPASSWORD"
When you browse to http://IP:80, a pop-up will prompt for authentication.
Old question, but I wanted to add that there is an open-source distribution of the ELK stack from AWS (Open Distro for Elasticsearch). You might be able to use its security plugin with the version from elastic.co. https://github.com/opendistro-for-elasticsearch/security

RabbitMQ connection through Nginx

I am trying to set up rabbitmq so it can be accessed externally (from non-localhost) through nginx.
nginx-rabbitmq.conf:
server {
    listen 5672;
    server_name x.x.x.x;

    location / {
        proxy_pass http://localhost:55672/;
    }
}
rabbitmq.conf:
[
  {rabbit,
    [
      {tcp_listeners, [{"127.0.0.1", 55672}]}
    ]
  }
].
By default guest user can only interact from localhost, so we need to create another user with required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to rabbitmq through pika, I get a ConnectionClosed exception:
import pika

credentials = pika.credentials.PlainCredentials('my_username', 'my_password')
pika.BlockingConnection(
    pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change the host to localhost (connecting to rabbitmq's listener directly), then I connect OK:
pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)
I have opened port 5672 in the GCE web console, and communication through nginx is happening; the nginx access.log shows:
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
which is a 400 (bad request) response.
So by the looks of it, the request fails when going through nginx but works when we hit rabbitmq directly.
Has anyone else had similar problems or got rabbitmq working for external users through nginx? Is there a rabbitmq log file where I can see each request, to help with further troubleshooting?
Since nginx 1.9 there is a stream module for TCP and UDP (not compiled in by default).
I configured my nginx (1.13.3) with an ssl stream:
stream {
    upstream rabbitmq_backend {
        server rabbitmq.server:5672;
    }

    server {
        listen 5671 ssl;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_handshake_timeout 30s;
        ssl_certificate /path/to.crt;
        ssl_certificate_key /path/to.key;

        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
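With that stream block in place, an AMQP client connects to port 5671 over TLS. A minimal sketch with pika 1.x, assuming the proxy host is reachable as nginx.example.com (hypothetical) and the user from the question exists:

import ssl
import pika

# Default TLS context; for a self-signed certificate, load your CA instead
context = ssl.create_default_context()

params = pika.ConnectionParameters(
    host='nginx.example.com',   # hypothetical: the host running the nginx stream proxy
    port=5671,                  # the ssl listener from the stream block above
    credentials=pika.PlainCredentials('my_user', 'my_password'),
    ssl_options=pika.SSLOptions(context, server_hostname='nginx.example.com'),
)
connection = pika.BlockingConnection(params)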
You have configured nginx as an HTTP reverse proxy; however, rabbitmq is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).
In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP - for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
You can try proxying at the TCP level by installing a tcp-proxy module for nginx so it can work with AMQP:
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally only an HTTP server. I also suggest looking into the tcp proxy module referred to above, but if you would like a proven load balancer that is a general TCP reverse proxy (not just HTTP; it can handle any protocol), you might consider using HAProxy.
Since AMQP operates at the TCP level, you need to configure nginx for TCP/UDP connections:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
I might be late to the party, but I am sure this article will help a lot of people.
In the article I explain how to install a Letsencrypt certificate for the RabbitMQ Management GUI, with NGINX as a reverse proxy on port 15672, which runs over plain HTTP.
I also use the same SSL certificates to secure the RabbitMQ server itself, which runs the AMQP protocol.
Kindly go through the following article for detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't put the RabbitMQ server running on port 5672 behind an HTTP reverse proxy. If you must proxy it, use NGINX streams. But I highly recommend sticking with adding the certificate paths to the rabbitmq.conf file instead, as RabbitMQ works at the TCP level.
