Kibana interface with ELK on virtual host - nginx

I used this tutorial to install ELK (Elasticsearch, Logstash, Kibana) to analyse logs.
I installed ELK on a virtual VMware machine running Ubuntu Server 14.04.
My host computer runs Windows 7.
So far I have done this:
Install Java 7
Install Elasticsearch
Install Kibana
Install nginx
But now the tutorial says:
Kibana is now accessible via your FQDN or the public IP address of your Logstash Server i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page which will allow you to view dashboards, but there will be no logs to view because Logstash has not been set up yet. Let's do that now.
But I cannot connect from the browser on my Windows host machine to the Kibana interface on the Linux VM.
I have looked a lot into how to configure Ubuntu Server, but I'm probably lost.
Thanks to everyone who stops here to read my bad English!

On the VM itself, Kibana is served by nginx on port 80 (http://localhost/; port 9200 is Elasticsearch, not Kibana), but from your Windows host you need to work out the IP address of your VM. You can run ifconfig on the Ubuntu box to check its IP address, which you can then hit in your browser. As per:
https://superuser.com/questions/245156/how-can-i-connect-to-a-web-server-running-in-a-vm-when-the-vm-is-in-nat-mode
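In practice, the check looks something like this for the DigitalOcean-style setup from the tutorial (nginx proxying Kibana on port 80; interface and firewall details are assumptions, adjust to what your VM reports):

# On the Ubuntu 14.04 VM: find the VM's IP address
ifconfig        # or: ip addr show

# Confirm the services are listening (nginx on 80, Elasticsearch on 9200)
sudo netstat -tlnp | grep -E ':80|:9200'

# If ufw is active, allow HTTP in
sudo ufw allow 80/tcp

# From the Windows 7 host, browse to http://<vm_ip>/
# (with VMware NAT you also need a port-forwarding rule; bridged networking
# exposes the VM's IP on your LAN directly)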

Related

Installing Wazuh Server in Windows Server

We have one server [Windows Server 2016] and I want to monitor that server by installing the Wazuh tool.
I have read the documentation, but I am still confused. Do I need to install
Wazuh Server
Wazuh Agent
Kibana
on that server? I don't see any article about installing Wazuh Server on a Windows machine.
Following the Wazuh documentation, I was able to get to a certain point:
Installed VirtualBox on the Windows Server.
Downloaded the Wazuh OVA file and imported it into VirtualBox.
Now I can connect to the Wazuh server using the default credentials.
Now I am stuck at one point. I need to get the IP. I tried the 'ip addr' command, but it only shows 127.0.0.1/8.
As far as I can tell, it assigns dynamic IPs. Is there a way to set up a static IP, so that I can access the Wazuh web console through that IP?
Some of my findings:
It seems that the eth0 network interface for the VM does not have an IPv4 address assigned to it.
In the video in the documentation, running 'ip addr' shows a dynamic IPv4 address as well as an IPv6 address, so I suspect this is the reason you cannot access the web console. It could be caused by the type of network interface you created for the VM in VirtualBox.
-------- Edited----------
As per your guidance, I did the following:
Wazuh Server:
Virtual Box -> Adapter 1 -> Bridged Adapter
Virtual Box -> Adapter 2 -> Host-only Adapter
Started the VM and ran 'ip addr'. Got the following IPs: eth0 [192.168..] and eth1 [10.0..].
In the browser, I tried https://192.168.. and I can log in to Kibana.
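For reference, the same two-adapter layout can also be set from the command line with VBoxManage while the VM is powered off; the VM name and adapter names below are assumptions, take the real ones from the list commands:

VBoxManage list bridgedifs      # host NICs available for bridging
VBoxManage list hostonlyifs     # host-only networks on this machine
VBoxManage modifyvm "wazuh" --nic1 bridged  --bridgeadapter1 "Ethernet"
VBoxManage modifyvm "wazuh" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage startvm "wazuh" --type headless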
Wazuh Agent:
On the server I am going to monitor, I installed the Wazuh agent. In the Wazuh config file, I need to specify the manager's address.
Here I am a bit confused. Should I give the actual server IP [where the Wazuh server is], or the IPs I am getting from the 'ip addr' command?
I have tried all of the IPs. When I check the logs, it shows:
start_agent.c:100 at connect_server(): ERROR: (1216): Unable to connect to 'xx.xx.xx.xxx': 'Bad file descriptor'.
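For reference, the relevant part of the agent's ossec.conf is the <client><server> block, which must point at a manager address that is reachable from the monitored machine (the bridged IP in this setup); the IPs below are placeholders and the service name varies by version:

# Windows agent config: C:\Program Files (x86)\ossec-agent\ossec.conf
#   <client>
#     <server>
#       <address>192.168.x.x</address>   <!-- the manager VM's bridged IP -->
#       <port>1514</port>
#       <protocol>tcp</protocol>
#     </server>
#   </client>
# Register the agent against the manager (port 1515 must be reachable),
# then restart the agent service (WazuhSvc, or OssecSvc on older versions):
"C:\Program Files (x86)\ossec-agent\agent-auth.exe" -m 192.168.x.x
net stop WazuhSvc & net start WazuhSvc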
I recommend reading the Architecture guide for a better understanding of how Wazuh works. Its architecture is based on agents, which means you need to install the Wazuh agent on the endpoints you want to monitor (for example, your Windows server), and then connect these agents to a Wazuh manager (which needs to be installed on a Linux machine, so you will need another server).
Kibana/Splunk are optional but useful tools to index the data generated by the manager for better visualization. I recommend using Kibana and the Elastic Stack.
For the Linux Wazuh manager server I recommend trying the all-in-one deployment, or, if you will have few agents connected and don't want to deploy an instance from scratch, you could try the pre-built virtual machine appliance (OVA).
I hope this helps. The best place to start using Wazuh is the Getting started guide; I recommend you read that first of all.
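For reference, the all-in-one deployment mentioned above currently boils down to the quickstart script from the documentation; the version number in the URL changes between releases, so take the exact command from the current Quickstart page (shown here only as a sketch):

# On a supported Linux host (not the Windows server itself); 4.7 is an example version
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
sudo bash ./wazuh-install.sh -a     # -a = install all central components on one host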
------------------------ edit --------------------
Hello,
I'm sorry if I wasn't clear enough. Wazuh has two main components: the Manager (called "server" in the documentation) and the Agent.
The manager is also called a server because it serves the Wazuh service itself, that is, the part of Wazuh that analyzes security events and generates alerts.
But the Wazuh agent (despite its name) is also installed on the servers that you want to monitor, and it is used to send security events to the Wazuh manager (server) so they can be analyzed.
That said, if you want to monitor a Windows server correctly, you need to install the Wazuh Windows agent on it, because it is designed to monitor Windows servers, and you need to connect this agent to a Wazuh server. Here you have different options:
You could install the Wazuh manager on another (Linux) server.
You could install Docker and docker-compose on your Windows server and use the wazuh-docker GitHub repository to deploy a Wazuh manager stack (with Wazuh, Elasticsearch and Kibana) to connect your agent to (a rough sketch follows this list).
You could install the Wazuh OVA (VM appliance) on VirtualBox or similar software (this virtual machine comes with the Wazuh manager, Elasticsearch and Kibana installed by default).
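If you go with the Docker option, the general shape is roughly the following; compose file names and locations differ between wazuh-docker releases, so treat this as a sketch and follow the README of the version you clone:

git clone https://github.com/wazuh/wazuh-docker.git
cd wazuh-docker
# bring up the manager + Elasticsearch + Kibana stack defined by the repository's compose file
docker-compose up -d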
I see that you're going with the last option, deploying the Wazuh OVA on VirtualBox. Nevertheless, remember that you still have to install the Windows agent as well and connect it to the Wazuh manager.
Regarding the IP question: my advice here is to open the VirtualBox configuration for the machine and set up two network interfaces (adapters), one host-only adapter (which will have a static IP that you can use to connect from your local browser) and one bridged adapter (to connect to the internet). Then I recommend using nmtui (a console user interface for NetworkManager) to set up your static IP as in the attached capture. That should be enough.
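If you prefer a non-interactive alternative to nmtui, nmcli can set the same static address; the connection name (eth1) and the 192.168.56.0/24 host-only range below are assumptions, check 'nmcli con show' and 'ip addr' for the real ones:

nmcli con show                    # find the connection that belongs to the host-only adapter
nmcli con mod eth1 ipv4.method manual ipv4.addresses 192.168.56.10/24
nmcli con up eth1                 # re-activate the connection with the static address
ip addr show eth1                 # verify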

LetsEncrypt Install on a Raspberry Pi web server

I've created a web server on my Raspberry Pi 4 and I'm using it for a web project I'm currently working on, as well as future website projects. Currently I'm running the Pop!_OS Linux distro on my main laptop and SSH into the Raspberry Pi running as a web server. I'd like to install a Let's Encrypt SSL certificate on the web server. I've found some tutorials through a Google search but have had no luck with the installation of certbot. I'm currently running the Nginx web server on the Raspberry Pi.
I have changed some of the settings in /etc/nginx/sites-available & sites-enabled and still have no luck getting SSL running on the web server. Are there any other suggestions or tips that anyone can throw my way to get this web server running with SSL/TLS encryption? I'm currently running the web server on my Raspberry Pi's IP address. Maybe I need to change it to an actual domain name beforehand and see if that works?
When I run:
sudo systemctl status nginx
It returns as active. Which is good. Any suggestions?
You should follow these steps, in order:
Register a domain name with an official registrar, e.g. Namecheap, Google Domains, GoDaddy.
Install certbot following the instructions in the Let's Encrypt/Certbot tutorials; the SSL/TLS certificates will be installed automatically (assuming you're not requesting a wildcard certificate, which I also recommend against, since getting a wildcard certificate is a hassle). A minimal command sketch follows these steps.
Make sure all ports are correctly forwarded to the Raspberry Pi, that no firewall is interfering with ports 443 and 80, and that your ISP is not blocking them, since Let's Encrypt needs to verify that your domain name and website exist and are accessible.
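As a minimal sketch of step 2 on a Debian-based Raspberry Pi image with Nginx (example.com is a placeholder for whatever domain you register in step 1):

sudo apt update
sudo apt install certbot python3-certbot-nginx
# obtain a certificate and let certbot edit the matching Nginx server block
sudo certbot --nginx -d example.com -d www.example.com
# confirm automatic renewal will work
sudo certbot renew --dry-run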

Not able to access nginx from outside world

Not able to access nginx from outside the server
I have used an Ansible role written by me to install nginx on Linux machines, but I'm not able to access the nginx service from outside the server (the one on which it is installed).
https://github.com/kishanagarwal/ansible_poc/tree/master/roles/nginx
You can access the code from the above URL.
I am able to get the nginx welcome page on the CentOS machines, but I can't get anything when I try to access the IP address of the Ubuntu 14.04 machine that has nginx installed on it.
This simply means the nginx port is not open.
Steps to follow:
Log in to the machine that is trying to access nginx.
If it is Windows, open a command prompt; if it is Linux, open a terminal.
Run the following command:
telnet <nginx_server_ip> 80
Based on the output, if the port is not open, you can refer to the following guide to open it:
https://www.cyberciti.biz/faq/howto-rhel-linux-open-port-using-iptables/
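As a concrete example of that check-and-open sequence, assuming nginx on the default port 80 and the Ubuntu 14.04 target from the question:

# From the client machine: is port 80 reachable at all?
telnet <nginx_server_ip> 80

# On the Ubuntu 14.04 server: is nginx actually listening on all interfaces?
sudo netstat -tlnp | grep nginx

# Open port 80, either directly in iptables or via ufw if that manages the firewall:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo ufw allow 80/tcp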

Connect to a remote Jupyter runtime over HTTPS with Google Colab

I'm trying to use Google's Colab feature to connect to a remote runtime that is configured with HTTPS. However, the UI only has an option to specify the port, not the protocol.
I've checked the Network panel and the website starts a WebSocket connection with http://localhost:8888/http_over_websocket?min_version=0.0.1a3, HTTP-style.
Full details of my setup:
I have a public Jupyter server at https://123.123.123.123:8888 with a self-signed certificate and password authentication
I've followed the jupyter_http_over_ws setup on the remote machine
I started the remote process with jupyter notebook --no-browser --keyfile key.pem --certfile crt.pem --ip 0.0.0.0 --notebook-dir notebook --NotebookApp.allow_origin='https://colab.research.google.com'
I've created a local port forwarding with ssh -L 8888:localhost:8888 dev@123.123.123.123
I've turned on network.websocket.allowInsecureFromHTTPS in Firefox
I've gone to https://localhost:8888 and logged in
Naturally, when the UI calls http://localhost:8888/http_over_websocket?min_version=0.0.1a3 it fails. If I manually access https://localhost:8888/http_over_websocket?min_version=0.0.1a3 (note the extra s) it gets through.
I see three options to solve it:
Tell the UI to use secure WS connection
Run a proxy on my local machine to transform the HTTPS into plain HTTP
Turn off HTTPS on my remote
I think the last two will work, but I'd rather not go that way.
How to do #1?
Thanks a lot!
Your option 1 isn't possible in Colab today.
Why do you want to use HTTPS over an SSH tunnel that already encrypts forwarded traffic?
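In other words, option 3 combined with the tunnel you already have is usually enough; a sketch of that setup (user, host and port taken from the question, everything else an assumption):

# On the remote machine: run Jupyter without TLS, bound to localhost only,
# with the origin Colab requires (jupyter_http_over_ws already installed and enabled)
jupyter notebook --no-browser --ip 127.0.0.1 --port 8888 \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --NotebookApp.port_retries=0

# On the local machine: the SSH tunnel supplies the encryption
ssh -N -L 8888:localhost:8888 dev@123.123.123.123

# Then use Colab's "Connect to local runtime" with http://localhost:8888/?token=<token>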

Set Up Realm Mobile Platform on Ubuntu (Digital Ocean)

I just set up a new Ubuntu 16.04 Droplet on Digital Ocean and I'm following this tutorial to get a Realm Object Server running: https://realm.io/docs/realm-mobile-platform/install-realm-object-server/
I have installed the service, but now I'm trying to figure out how to adjust my configuration.yml. Right now I have an IP address that points to /var/www/html on my server. I have changed all instances of listen_address in the configuration.yml file to my Droplet's IP address, but when I visit the IP address, it still shows the default landing page: http://d.pr/i/KznK
Is there more I am supposed to do either with Apache or with the Realm config to get it to point to the Realm Server's admin screens?
It looks like you're not actually hitting the Realm Object Server. This page looks like a default Apache page on DigitalOcean. You need to specify the port of the server in order to reach it: http://<ip>:9080 by default.
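A quick way to confirm that from the Droplet itself (9080 is the default port; adjust it if you changed it in configuration.yml):

sudo netstat -tlnp | grep 9080    # is the Realm Object Server listening? (or: ss -tlnp)
curl -i http://localhost:9080/    # does it answer locally?
# if both look fine, browse to http://<droplet_ip>:9080 from outside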
Solution found here: https://github.com/realm/realm-mobile-platform/issues/7
Installing an Ubuntu Droplet without the LAMP stack worked without any changes to configuration.yml.
