How to set up authentication in Kibana

Is it possible to enable authentication in Kibana so that a dashboard is only accessible to particular users?

Kibana itself doesn't support authentication or restricting access to dashboards.
You can restrict access to Kibana 4 by running nginx as a proxy in front of Kibana, as described here: https://serverfault.com/a/345244. Just point proxy_pass at port 5601 and block that port on the firewall for everyone else. Note that this only enables or disables Kibana as a whole.
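For reference, a minimal sketch of such an nginx front end with basic auth (server_name and the htpasswd path are placeholders; create the password file with the htpasswd utility):

server {
    listen 80;
    server_name kibana.example.com;    # placeholder

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;    # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}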
Elastic also has a tool called Shield which lets you manage Elasticsearch security. With Shield you can, for example, allow someone to analyze data in specific indices with read-only permissions. https://www.elastic.co/products/shield
Edit: Elastic has an issue on GitHub where they recommend using Shield:
1. Install Shield (a plugin for Elasticsearch) following these instructions
2. Configure roles for Kibana users (a rough sketch follows below)
3. Configure Kibana to work with Shield
Remember that Shield provides only index-level access control. That means user A will be able to see all dashboards, but some of them will be empty (because they don't have access to all the indices).
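For the roles step, an entry in Shield's roles.yml looks roughly like the sketch below (the role name and index patterns are made up, and the exact format plus the extra privileges a Kibana user needs on the .kibana index depend on the Shield version, so follow the linked docs):

# roles.yml (Shield) -- illustrative only
dashboard_reader:
  indices:
    - names: [ 'logstash-*', '.kibana' ]
      privileges: [ 'read' ]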

Check out the plugin named elasticsearch-readonlyrest.
It allows easy access control via authentication, IP/network, or the X-Forwarded-For header, lets you set up read-write or read-only access in Kibana, and can limit index access per user. It is simple to set up and should give enough control for most people.
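As an illustration, a block like the following in elasticsearch.yml is roughly what the plugin expects (the user, password, and index patterns are placeholders; option names depend on the plugin version, so check its README):

readonlyrest:
    enable: true
    access_control_rules:
    - name: "Read-only Kibana user"
      auth_key: reader:s3cret             # placeholder credentials
      kibana_access: ro
      indices: ["logstash-*", ".kibana"]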
If more control is needed, you can use Search Guard, a free alternative to Shield.

Kibana4 doesn't currently support this.

I achieved authentication by installing HAProxy.
1. Restrict Kibana to localhost:
$ sudo nano /etc/kibana/kibana.yml
server.host: "localhost"
2. Install HAProxy on the same machine where Kibana is installed:
$ sudo apt update && sudo apt install haproxy
$ sudo nano /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 10m
    timeout client  10m
    timeout server  10m
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

userlist UsersFor_Kibana
    user kibana insecure-password myPASSWORD

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    acl AuthOkay_Kibana http_auth(UsersFor_Kibana)
    http-request auth realm Kibana if !AuthOkay_Kibana
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server server1 127.0.0.1:5601 check
Username: kibana
Password: myPASSWORD
When you browse to http://IP:80, a pop-up will prompt for these credentials.
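To verify from the command line, something like this should return Kibana's landing page through the proxy (YOUR_SERVER_IP is a placeholder; the credentials come from the userlist above):

$ curl -u kibana:myPASSWORD http://YOUR_SERVER_IP/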

Old question, but I wanted to add that there is an open-source version of ELK from AWS. You might be able to use its security plugin with the version from elastic.co: https://github.com/opendistro-for-elasticsearch/security

Related

Send nginx logs to google cloud logging from OVH VPS

I have an OVH VPS with an nginx server set up on it. I'm looking for a way to send nginx access and error logs to the Google Cloud Logging service, but all the info I could find was about sending logs from Google Cloud VMs. Is it even possible at this moment? I've also tried to find anything about sending syslog to GCP as a workaround, but no luck either. Since my dotnet services successfully send logs to GCP, I suppose it should be possible. Any suggestions?
In GCP there is an integration with NGINX to collect connection metrics and access logs. There are some prerequisites you need to fulfil before you start collecting logs from NGINX.
You must install the Ops Agent on your instance. The Ops Agent collects logs and metrics on Compute Engine instances, sending your logs to Cloud Logging and your metrics to Cloud Monitoring. If you are using a single Linux VM, you can install the agent with the following commands:
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
You can consult the details of the Ops Agent installation at this link.
You will also need to configure your NGINX instance by enabling the stub_status module in the nginx configuration, so that it exposes a locally reachable status URL, for example:
http://www.example.com/status
If you don't have the stub_status module enabled, you can run the following commands to enable it:
sudo tee /etc/nginx/conf.d/status.conf > /dev/null << EOF
server {
    listen 80;
    server_name 127.0.0.1;
    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
    location / {
        root /dev/null;
    }
}
EOF
sudo service nginx reload
curl http://127.0.0.1:80/status
Please note that 127.0.0.1 can be replaced with the real server name, for example server_name mynginx.domain.com.
All these steps are detailed in the following link; it is a guide to setting up all the prerequisites before you start collecting logs from your NGINX deployment, and it also includes an example configuration.
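For completeness, a sketch of what /etc/google-cloud-ops-agent/config.yaml can look like for the NGINX integration (receiver names are illustrative and the stub_status URL matches the example above; the linked guide has the authoritative version, and the agent must be restarted after editing):

logging:
  receivers:
    nginx_access:
      type: nginx_access
    nginx_error:
      type: nginx_error
  service:
    pipelines:
      nginx:
        receivers: [nginx_access, nginx_error]
metrics:
  receivers:
    nginx:
      type: nginx
      stub_status_url: http://127.0.0.1:80/status
  service:
    pipelines:
      nginx:
        receivers: [nginx]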

Kubernetes Dashboard : Dashboard keeps cancelling connection to pod, resulting in bad gateway to user

I am using kubernetes-dashboard to view all pods, check status, log in, pass commands, etc. It works well, but there are a lot of connectivity issues related to it. I am currently running it on port 8443, and forwarding the connection from 443 to 8443 via nginx's proxy_pass. But I keep getting bad gateway, and the connection keeps dropping. It's not an nginx issue, since the errors come from Kubernetes. I am using a Let's Encrypt certificate in nginx. What am I doing wrong?
Error log:
E0831 05:31:45.839693 11324 portforward.go:385] error copying from local connection to remote stream: read tcp4 127.0.0.1:8443->127.0.0.1:33380: read: connection reset by peer
E0831 05:33:22.971448 11324 portforward.go:340] error creating error stream for port 8443 -> 8443: Timeout occured
These are the two errors I constantly get. I am running this command as a nohup process:
nohup kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443 --address 0.0.0.0 &
And finally my nginx config (in the default site file):
location / {
    proxy_intercept_errors off;
    proxy_pass https://localhost:8443/;
}
Thank you. :-)
Unfortunately this is an ongoing issue with Kubernetes port forwarding; you may find it not particularly reliable for long-running connections. If possible, try to set up a direct connection instead. More extended discussion can be found here and here.
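One way to get a direct connection (a sketch, with the obvious caveat that exposing the dashboard this way still needs to be properly secured) is to switch the dashboard service to a NodePort and point nginx's proxy_pass at that node port instead of at kubectl port-forward:

# expose the dashboard service on a NodePort (illustrative)
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'

# find the assigned port, then use it as the proxy_pass target
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard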

SonarQube - HAProxy HTTPs Invalid Header

I have all of my development systems stood up behind HAProxy (everything dockerized), and I can access my Jenkins / GitLab / Sonar but not Nexus. After looking at the docker logs for the Nexus container, I can see it is getting the requests, but it says the forwarded header is invalid. My goal is to have HAProxy serve HTTPS while the apps behind it use plain HTTP, so the apps get HTTPS from the proxy without needing the configuration themselves.
Here is the log message:
nexus_1 | 2018-03-23 17:35:08,874-0500 WARN [qtp1790585161-43] *SYSTEM
org.sonatype.nexus.internal.web.HeaderPatternFilter - rejecting request
from 98.192.146.97 due to invalid header 'X-Forwarded-Proto: \http'
Here is my haproxy config for nexus:
frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    reqadd X-Forwarded-Proto:\http
    acl jenkins hdr_beg(host) -i jenkins.
    acl nexus hdr_beg(host) -i nexus.
    acl git hdr_beg(host) -i git.
    acl sonar hdr_beg(host) -i sonar.
    use_backend jenkins if jenkins
    use_backend nexus if nexus
    use_backend git if git
    use_backend sonar if sonar

backend nexus
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    server nexus1 nexus:8081 check
I originally had this line in, which exists in all my other apps:
http-request add-header X-Forwarded-Proto https if { ssl_fc }
But when that is enabled, Nexus throws this error:
nexus_1 | 2018-03-23 23:54:38,132-0500 WARN [qtp1790585161-43]
*SYSTEM org.sonatype.nexus.internal.web.HeaderPatternFilter - rejecting
request from 98.192.146.97 due to invalid header 'X-Forwarded-Proto:
\http,https'
Is there something special nexus requires to work with haproxy?
EDIT: I have confirmed that if I run "docker-compose exec haproxy sh" I can curl "nexus:8081" and it gives me the index.html. So I know the haproxy container can correctly communicate with the nexus container.
http-request add-header X-Forwarded-Proto https if { ssl_fc }
This is essentially wrong in most cases. Wherever you are using it, you may need to change it, because you don't want to add this header -- you want to set it. You need your version to overwrite anything that may be in the incoming request.
http-request set-header X-Forwarded-Proto https if { ssl_fc }
Adding a header preserves previous values, including the one you are adding incorrectly with this line:
reqadd X-Forwarded-Proto:\http
You either need a space after that \ or you need to remove the \ entirely... but really, this would optimally be done with http-request set-header since this is operationally preferred over req* actions.
But it really doesn't make obvious sense to have that line, anyway, because your frontend uses bind *:443 ssl, so ssl_fc is always true in this frontend. Hence, your frontend could simply set the correct header.
http-request set-header X-Forwarded-Proto https
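Putting it together, the relevant part of the frontend from the question would look roughly like this (reqadd dropped, header set unconditionally):

frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/server.pem
    http-request set-header X-Forwarded-Proto https
    acl nexus hdr_beg(host) -i nexus.
    use_backend nexus if nexus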

RabbitMQ connection through Nginx

I am trying to set up RabbitMQ so it can be accessed externally (from non-localhost) through nginx.
nginx-rabbitmq.conf:
server {
    listen 5672;
    server_name x.x.x.x;
    location / {
        proxy_pass http://localhost:55672/;
    }
}
rabbitmq.conf:
[
{rabbit,
[
{tcp_listeners, [{"127.0.0.1", 55672}]}
]
}
]
By default guest user can only interact from localhost, so we need to create another user with required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to RabbitMQ through pika I get a ConnectionClosed exception:
import pika
credentials = pika.credentials.PlainCredentials('my_user', 'my_password')
pika.BlockingConnection(
pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change host to localhost and port to 5672 then I connect ok:
pika.ConnectionParameters(host='localhost', port=5672, credentials=credentials)
I have opened port 5672 on the GCE web console, and communication through nginx is happening: the nginx access.log file shows
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
Which shows a 400 status code response (bad request).
So by the looks of it, the request fails when going through nginx, but works when we hit RabbitMQ directly.
Has anyone else had similar problems or got RabbitMQ working for external users through nginx? Is there a RabbitMQ log file where I can see each request to help with further troubleshooting?
Since nginx 1.9 there is a stream module for TCP and UDP (not compiled in by default).
I configured my nginx (1.13.3) with an SSL stream:
stream {
    upstream rabbitmq_backend {
        server rabbitmq.server:5672;
    }

    server {
        listen 5671 ssl;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_handshake_timeout 30s;
        ssl_certificate /path/to.crt;
        ssl_certificate_key /path/to.key;
        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
You have configured nginx as an HTTP reverse proxy; however, RabbitMQ is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).
In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP - for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
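If you go the Web STOMP route, enabling the plugin is a one-liner (a sketch; by default the plugin then listens on port 15674, which nginx can proxy as ordinary HTTP/WebSocket traffic):

sudo rabbitmq-plugins enable rabbitmq_web_stomp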
You can try proxying at the TCP level by installing a TCP proxy module for nginx so it can carry AMQP.
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally only an HTTP server. I also suggest looking into the TCP proxy module referred to above, but if you would like a proven load balancer that is a general TCP reverse proxy (not just HTTP, it can handle any protocol), you might consider using HAProxy.
Since AMQP runs at the TCP level, you need to configure nginx for TCP/UDP (stream) proxying:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
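For the setup in the question (nginx listening on 5672, RabbitMQ bound to 127.0.0.1:55672), a plain TCP pass-through would look roughly like this; note that a stream block sits at the top level of nginx.conf, outside the http block, and requires nginx built with the stream module:

stream {
    server {
        listen 5672;
        proxy_pass 127.0.0.1:55672;
    }
}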
I might be late to the party, but I am quite sure this article will help a lot of people.
In the article I explain how to install a Let's Encrypt certificate for the RabbitMQ management GUI, with NGINX as a reverse proxy in front of port 15672, which runs over HTTP.
I also used the same SSL certificates to secure the RabbitMQ server itself, which runs the AMQP protocol.
Kindly go through the following article for a detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't configure the RabbitMQ server running on port 5672 as an HTTP reverse proxy. If you do put it behind NGINX, use NGINX streams. But I highly recommend sticking with adding the certificate paths to the rabbitmq.conf file, as RabbitMQ works at the TCP level.

https://localhost:8080 is not working but http://localhost:8080 is working well

I am using Ubuntu 12.04 LTS 64-bit. JBoss is my local server, and I have a project that uses MySQL as the database and the Struts framework. I can easily access my project using
http://localhost:8080
but when I want to access my project using
https://localhost:8080
it shows an error:
The connection was interrupted
The connection to 127.0.0.1:8080 was interrupted while the page was loading.
I have also checked with
$ sudo netstat -plntu | grep 8080
which outputs
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 5444/java
If I kill this process, my project dies too. I should also mention that port 80 is free.
Can you tell me what the problem is that prevents me from accessing my project on my local PC using https?
Thanks in advance for helping.
SSL has to be on a different port. Here is the breakdown:
http:// is served on one port, typically 80
https:// is served on a different port, typically 443
You need to run SSL on a different port. For example, with Apache:
Listen 8081

SSL VirtualHost
<VirtualHost *:8081>
    # SSL Cert info here
    ....
</VirtualHost>

> service httpd restart
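Since the question is actually about JBoss rather than Apache, the same idea there is to add an HTTPS connector on its own port. A rough sketch for JBoss AS 7's standalone.xml web subsystem (the keystore path, alias, and password are placeholders; the exact XML differs between JBoss versions):

<connector name="https" protocol="HTTP/1.1" scheme="https"
           socket-binding="https" secure="true">
    <!-- placeholder keystore details -->
    <ssl name="ssl" key-alias="server" password="changeit"
         certificate-key-file="${jboss.server.config.dir}/server.keystore"/>
</connector>

With that in place, and assuming the default socket bindings, you would browse to https://localhost:8443 rather than reusing port 8080 for HTTPS.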
