Cloudera RImpala connection not working

I am trying to use R on AWS to connect to our cluster running Cloudera Hadoop, following the steps described here - http://blog.cloudera.com/blog/2013/12/how-to-do-statistical-analysis-with-impala-and-r/
So far I have been able to initialize the JDBC driver, but I am not able to connect to Impala.
From some investigation, I can see that the Impala daemon is running on all our worker nodes, and the ports are configured as shown. I also logged in to one of the worker nodes and checked which ports are listening; port 21050 is listening.
In rimpala.connect I am using the public IP of the worker node, but I still cannot connect. I can use that same public IP with port 25000 to see the Impala web UI, but I cannot connect to the port that listens for JDBC requests. Can anyone help me with this?

In case anyone is looking for help, here is the answer I got from Cloudera support.
"The problem is not with the Impala or Cloudera distro. The problem
is with the driver being used by “Rimpala”. RImapla is using HIVE JDBC
driver. If you check the source code at
https://github.com/Mu-Sigma/RImpala/blob/master/java/src/main/java/com/musigma/ird/bigdata/RImpala.java
you will see that the calls being used as the drive is
“org.apache.hive.jdbc.HiveDriver” . So ideally RImpala package is
outdated and it is not updated to work."
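Until the package is updated, one workaround (not part of the Cloudera reply, just a sketch) is to query Impala directly over the same HiveServer2-compatible port 21050 with a different client and keep R for the analysis step. A minimal example using the Python impyla package; the host is a placeholder for a worker node's IP:

from impala.dbapi import connect  # pip install impyla

# 21050 is the HiveServer2-compatible port that impalad exposes for JDBC/ODBC clients
conn = connect(host="203.0.113.10", port=21050)  # placeholder worker-node IP
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print(cur.fetchall())
cur.close()
conn.close()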

Unable to access Kafka Broker from separate LAN machine

EDIT: OBE - figured it out. Provided in answer for anyone else who has this issue.
I am working in an offline environment and am unable to connect to a Kafka broker on machine 1 from a separate machine (machine 2) over a LAN connection through a single switch.
Machine 1 (where Kafka and ZK are running):
server.properties
listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
advertised.listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
zookeeper.connect=localhost:2181
I am starting Kafka/ZK from the config files located in kafka_2.12-2.8.0/config and then running the appropriate .bat from kafka_2.12-2.8.0/bin/windows.
On machine 2 I am able to ping <ethernet_IPv4_m1> and get results; however, I fail to get a TCP connection if I run Test-NetConnection <ethernet_IPv4_m1> -p 9092 while Kafka is running. In Python 3.8.11, using KafkaConsumer from kafka-python, I receive the NoBrokersAvailable error when using <ethernet_IPv4_m1>:9092 as the bootstrap_server. Additionally, if I run a python:3.8.12-buster Docker container with a '/bin/bash' entrypoint and follow along with the kafka-listener walkthrough, I am unable to connect to the broker. I'm in the exact situation as Scenario 1 in that link, but the walkthrough assumes you can connect to the broker.
I have also tried opening port 9092 in Windows Defender for inbound/outbound traffic (on both machines) and still have no luck. Neither Kafka nor networking are my strong suits, and every tutorial/answer I find refers to changing the listeners and advertised.listeners in the Kafka server.properties file. I think I did this correctly, but am unsure. This is everything I have tried so far; any recommendations would be greatly appreciated. Thank you.
For M1, the private network was the active network.
Go to Control Panel -> Firewall & network protection -> Advanced settings (must be admin) -> set up inbound/outbound rules for port 9092 for the active network.
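With the firewall rules in place, the connection can be re-checked from machine 2 using the same kafka-python package mentioned in the question. A rough sketch; the broker address is a placeholder for <ethernet_IPv4_m1>:9092:

import socket
from kafka import KafkaConsumer  # pip install kafka-python
from kafka.errors import NoBrokersAvailable

BROKER = "192.168.1.10:9092"  # placeholder for <ethernet_IPv4_m1>:9092
host, port = BROKER.rsplit(":", 1)

# Raw TCP check first (same idea as Test-NetConnection ... -Port 9092)
try:
    socket.create_connection((host, int(port)), timeout=5).close()
    print("TCP connection to", BROKER, "succeeded")
except OSError as exc:
    raise SystemExit(f"TCP connection failed: {exc} - still a firewall/listener problem")

# Then a real Kafka handshake: the consumer constructor bootstraps against the broker
try:
    consumer = KafkaConsumer(bootstrap_servers=[BROKER])
    print("Kafka bootstrap OK, topics:", consumer.topics())
    consumer.close()
except NoBrokersAvailable:
    print("Port is open but the Kafka bootstrap failed - re-check advertised.listeners")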

Installing Wazuh Server in Windows Server

We have one server [Windows Server 2016] and I want to monitor that server by installing the Wazuh tool.
I saw the documentation, but I am still confused. Do I need to install
Wazuh Server
Wazuh Agent
Kibana
on the server? I don't see any article related to installing the Wazuh server on a Windows machine.
After following the Wazuh documentation, I was able to get to a certain point:
Installed VirtualBox on the Windows server.
Downloaded the Wazuh OVA file and imported it into VirtualBox.
Now I can connect to the Wazuh server using the default credentials.
Now I am stuck at one place: I need to get the IP. I tried the 'ip addr' command, but it is still showing only 127.0.0.1/8.
As far as I checked, it is creating some dynamic IPs. Is there a way to set up a static IP, so that I can access the Wazuh web console
through that IP?
Some of my findings:
It seems that the eth0 network interface for the VM does not have an IPv4 address assigned to it.
In the video in the documentation, running 'ip addr' shows a dynamic IPv4 address as well as the IPv6 address, so I suspect this is the reason you cannot access the web console. This could be caused by the type of network interface you created for the VM in VirtualBox.
-------- Edited----------
As per your guidance, I did the following things.
Wazuh Server:
VirtualBox -> Adapter 1 -> Bridged Adapter
VirtualBox -> Adapter 2 -> Host-only Adapter
Started the VM and ran 'ip addr'. Got the following IPs: eth0 [192.168..] and eth1 [10.0..]
In the browser, I tried https://192.168.. and I was able to log in to Kibana.
Wazuh Agent:
On the server that I am going to monitor, I installed the Wazuh agent. In the Wazuh config file, I need to specify the manager's address.
Here I am a bit confused: should I give the actual server IP [where the Wazuh server is], or should I specify one of the IPs that I get from the 'ip addr' command?
I have tried all the IPs. When I check the logs, it shows:
start_agent.c:100 at connect_server(): ERROR: (1216): Unable to connect to 'xx.xx.xx.xxx': 'Bad file descriptor'.
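One way to narrow down which address belongs in the agent config is to check, from the monitored Windows server, which of the VM's addresses actually accepts connections on the agent communication port (1514 by default; adjust if your deployment uses another). A small Python sketch, with placeholder addresses standing in for the eth0/eth1 IPs reported by 'ip addr':

import socket

CANDIDATE_IPS = ["192.168.1.50", "10.0.2.15"]  # placeholders for the VM's eth0/eth1 addresses
AGENT_PORT = 1514  # default Wazuh agent-to-manager port (1515 is used for enrollment)

for ip in CANDIDATE_IPS:
    try:
        socket.create_connection((ip, AGENT_PORT), timeout=5).close()
        print(ip, "accepts connections on port", AGENT_PORT, "- use this one in the agent config")
    except OSError as exc:
        print(ip, "is not reachable on port", AGENT_PORT, "-", exc)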
I recommend reading the Architecture guide for a better understanding of how Wazuh works. Its architecture is based on agents, which means you need to install the Wazuh agent on the endpoints you want to monitor (for example, your Windows server), and then connect these agents to a Wazuh manager (which needs to be installed on a Linux machine, so you will need another server).
Kibana/Splunk are optional and useful tools to index the data generated by the manager for better visualization. I recommend using Kibana and the Elastic Stack.
For the Linux Wazuh manager server I recommend trying the all-in-one deployment, or, if you will have few agents connected and don't want to deploy any instance from scratch, you could try the pre-built virtual machine appliance (OVA).
I hope this helps you. The best place to start using Wazuh is the Getting started guide; I recommend reading that first of all.
------------------------ edit --------------------
Hello,
I'm sorry if I wasn't clear enough. Wazuh has two main components: the manager (called "server" in the documentation) and the agent.
The manager is also called a server because it runs the Wazuh service itself, that is, the part of Wazuh that analyzes security events and generates alerts.
The Wazuh agent (despite its name) is installed on the servers that you want to monitor, and it is used to send security events to the Wazuh manager (server) so they can be analyzed.
That said, if you want to monitor a Windows server correctly, you need to install the Wazuh Windows agent on it, because it is designed to monitor Windows servers, and you need to connect this agent to a Wazuh server. Here you have different options:
You could install the Wazuh manager on another (Linux) server.
You could install Docker and docker-compose on your Windows server and use the wazuh-docker GitHub repository to deploy a Wazuh manager stack (with Wazuh, Elasticsearch and Kibana) to connect your agent to.
You could install the Wazuh OVA (VM appliance) on VirtualBox or similar software (this virtual machine has the Wazuh manager, Elasticsearch and Kibana installed by default).
I see that you're trying the last option, deploying the Wazuh OVA on VirtualBox. Nevertheless, remember that you still have to install the Windows agent as well and connect it to the Wazuh manager.
Regarding the IP question: my advice is to open the VirtualBox configuration for the machine and set up two network interfaces (adapters): one host-only adapter (which will have a static IP that you can use to connect from your local browser) and one bridged adapter (to connect to the internet). Then I recommend using nmtui (a console user interface for NetworkManager) to set up your static IP as in the attached capture. That should be enough.
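As a quick check that the host-only adapter's address is the one to use for the console, you can probe it from the Windows host before opening a browser. A sketch; the IP is a placeholder for the host-only address, and certificate verification is disabled on the assumption that the OVA ships a self-signed certificate:

import ssl
import urllib.error
import urllib.request

HOST_ONLY_IP = "192.168.56.10"  # placeholder: the VM's host-only adapter address

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # assumption: the OVA's web console uses a self-signed certificate

try:
    with urllib.request.urlopen(f"https://{HOST_ONLY_IP}/", context=ctx, timeout=10) as resp:
        print("Web console reachable, HTTP status", resp.status)
except urllib.error.HTTPError as err:
    # Any HTTP response (even 401/403) still proves the console is reachable on this IP
    print("Web console reachable, HTTP status", err.code)
except OSError as err:
    print("Web console not reachable on this IP:", err)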

ORA-12541: TNS: no listener in SSIS

We have Oracle OLE DB connections in SSIS packages that are working well on Windows Server 2008.
We moved them to Windows Server 2012 and installed the needed software. We installed the Oracle client (OraOLEDB driver), moved tnsnames.ora, ldap.ora and sqlnet.ora to the %ORACLE_HOME%\Network\admin path, and added %ORACLE_HOME% and %ORACLE_HOME%\bin to the PATH variable.
But on Server 2012 the Oracle connections are giving this error: ORA-12541: TNS: no listener. Whereas on Server 2008 the same Oracle connections work fine.
I searched the internet a lot and found these suggested solutions:
Check tnsnames.ora
Check that the listener is running
Check that the PATH variable contains ORACLE_HOME and ORACLE_HOME\bin
I don't see a problem with tnsnames.ora because the same file is present on both Windows servers. The correct path variables are also set. The listener is also running (since SSIS on Server 2008 is connecting). And I am able to ping the Oracle DB server from both machines.
Can anyone suggest anything that we may try?
To put a formal answer here.
Basic troubleshooting steps with SSIS:
Use the database's native tools to check connectivity
In this case, for Oracle, that is SQLPLUS.EXE
If you have an issue with the native tools then it isn't an SSIS issue
Check that you can resolve the host by using PING <hostname>
If that doesn't work, try PING <ip address>
If ping works, check the port with TELNET <host> <port>
If that doesn't work, either the service is not listening or you need to get your network team to open the port
This goes for any network service,
e.g.
SQL Server (default port 1433)
a web server (default port 80 for unencrypted comms)
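For anyone who wants to script the checks above (useful on Windows Server 2012, where the telnet client is often not installed by default), here is a rough Python sketch. The alias, host and port are placeholders taken from a tnsnames.ora entry, and tnsping is only tried if it shipped with your Oracle client install:

import shutil
import socket
import subprocess

TNS_ALIAS = "MYDB"                 # placeholder: alias from tnsnames.ora
ORACLE_HOST = "oradb.example.com"  # placeholder: HOST from that alias entry
ORACLE_PORT = 1521                 # placeholder: PORT from that alias entry

if shutil.which("tnsping"):
    # Let the Oracle client probe the listener itself (reports TNS-12541 and friends)
    subprocess.run(["tnsping", TNS_ALIAS], check=False)
else:
    # Fallback: raw TCP check of the listener port (the TELNET <host> <port> step)
    try:
        socket.create_connection((ORACLE_HOST, ORACLE_PORT), timeout=5).close()
        print(f"{ORACLE_HOST}:{ORACLE_PORT} is reachable - the listener port is open")
    except OSError as exc:
        print(f"{ORACLE_HOST}:{ORACLE_PORT} is not reachable: {exc}")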

Connectivity error while connecting from "Aginity workbench for Redshift" tool to AWS Redshift cluster

I am trying to connect to a Redshift cluster using the Aginity tool but I see the error below.
[error message screenshot]
I am able to connect to another cluster within the same AWS account. The cluster I am able to connect to is in the "us-east-1" region; the cluster I am not able to connect to is in the "us-west-2" region. That is the only difference; all other parameters/configurations are the same.
I verified the inbound rules in the security group, the ssl-mode in the Redshift cluster parameter group, and whether the Redshift role was attached to the cluster. They are all fine.
I tried googling the error message but it didn't help. I have been stuck on this for a day. Any help is highly appreciated. Thanks in advance.
Typically, 08S01 is a network communications error. You've confirmed that the AWS side is properly configured, but do you have an on-prem firewall that could be causing an issue? One way to test network connectivity is to telnet to the instance port to confirm that it's reachable.
Have you tried Aginity Pro, which uses the Redshift JDBC driver? One nice thing is that you can copy the JDBC connection details from the connection screen and test them from the CLI, isolating whether the issue is with the application.
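Another way to take the client tool out of the equation is to hit the same endpoint from a tiny script, since Redshift speaks the PostgreSQL wire protocol. A sketch using psycopg2; every connection value below is a placeholder for the us-west-2 cluster's endpoint details:

import psycopg2  # pip install psycopg2-binary

# All values are placeholders - copy the real ones from the cluster endpoint in the console
conn = psycopg2.connect(
    host="examplecluster.abc123xyz.us-west-2.redshift.amazonaws.com",
    port=5439,              # default Redshift port
    dbname="dev",
    user="awsuser",
    password="your_password",
    sslmode="require",
    connect_timeout=10,
)
with conn.cursor() as cur:
    cur.execute("SELECT current_database()")
    print(cur.fetchone())
conn.close()

If this times out from the same machine, the problem is almost certainly at the network level (security group, routing, or an on-prem firewall) rather than in the tool.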

How can I troubleshoot Meteor just hanging?

Hi, I am trying out Meteor for the first time today.
My symptoms: Meteor just hangs when trying to connect on port 3000 (it is listening; I checked with lsof and by looking at ps). A Mongo instance is started on port 3002, but I cannot connect to it with mongo (so perhaps neither can Node?).
Background: I already have Mongo 2.0.3 installed and running (could it be a conflict?).
What can I do to troubleshoot and get Meteor started?
The site was bugging me to accept an answer or start a bounty, so here is an explanation of my comment:
localhost on my machine resolves to an IPv6 address first, and Meteor binds only to 127.0.0.1.
So, to answer the specific question of "how to troubleshoot":
I used lsof -i to verify that the Meteor mongo instance was actually listening. This showed me that it was listening on 127.0.0.1, which eliminated the idea that mongo was not listening. Next I ran host against my machine's name and noticed the IPv6 address came back first. This sparked a hunch and led me to force Meteor to connect to 127.0.0.1 instead of localhost, and it worked.
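For anyone checking the same thing, the name-resolution half of this diagnosis can be confirmed with a couple of lines of Python; it just prints the order in which localhost resolves on your machine:

import socket

# If an IPv6 address (::1) is listed before 127.0.0.1, clients that try addresses in
# order may hang or fail against a server that is bound only to 127.0.0.1.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", 3000):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])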
Well, check that port 3000 is open with netstat -a
Try a telnet localhost 3000
Use the Firefox extension TamperData or any other flow-analysis tool to see what's going on at the HTTP level: http://tamperdata.mozdev.org/
Have you tried running against the bundled Node and MongoDB?
