I'm trying to set up Apache Kafka to communicate between two virtual machines running CentOS on the same network. I originally set up a Kafka producer and consumer on one machine and everything was running smoothly. I then set up Kafka on the other machine and in the process of trying to get them to connect, I get the error "bootstrap-server is not a recognized option" (I'm running the most recent version of Kafka, 2.2).
This is what I used to attempt a producer connection:
bin/kafka-console-producer.sh --bootstrap-server 10.0.0.11:9092 --topic test
And on the consumer side:
bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.11:9092 --topic test
The 10.0.0.11 machine is the one running the Kafka broker itself.
According to the Apache Kafka documentation (https://kafka.apache.org/documentation/#quickstart_send), you should use the --broker-list option to pass the broker address.
The command will be:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
EDIT
From Apache Kafka 2.5 onwards, both options are supported: --broker-list and --bootstrap-server. The suggested one is --bootstrap-server.
The following commands should work:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
Simply running these commands worked. However, in an upcoming major release, using the --bootstrap-server option will be mandatory.
When I use the above command it gives a warning: "Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper]."
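The new-consumer equivalent avoids that warning by pointing at the broker instead of Zookeeper (assuming a broker listening on localhost:9092):
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning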
Can anyone guide me if I am doing anything wrong?
Objective: I want to set up scheduler HA.
Versions: backend DB - Postgres 12.6, Airflow 2.1.1
Challenges: When the scheduler is started on the first machine, it works as expected and I was able to trigger the example_bash_operator DAG, but when the scheduler is started on another host with the same backend connection, my first scheduler fails and gives me the error below when I try to click on the example_bash_operator DAG in the web UI:
ValueError: unsupported pickle protocol: 5
ValueError: unsupported pickle protocol: 5 generally occurs when you have different versions of Python running on the two machines.
Verify that you have the same version of Python on both machines.
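For example, a quick check to run on each machine (pickle protocol 5 was introduced in Python 3.8, so an older interpreter on one host would explain the error):
python3 --version
python3 -c 'import pickle; print(pickle.HIGHEST_PROTOCOL)'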
Automatic upgrade in mid-May.
1.14.10-gke.27 → 1.14.10-gke.36
https://cloud.google.com/kubernetes-engine/docs/release-notes#may_13_2020
After that, I got a Memorystore (Redis) connection error and a cURL error 6.
cURL error:
cURL error 6: Could not resolve host: www.googleapis.com (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
This problem happens occasionally, not always.
It worked fine before the upgrade.
Role-based access control: not used
Workload Identity: not used
Please advise.
I'll close this for now.
It is probably not a GKE version issue; the likely cause is that the node was a preemptible node.
When I ran a reboot test of the preemptible node, I was able to reproduce the error.
gcloud compute instances simulate-maintenance-event <instance name> --zone <zone>
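If preemptible nodes are confirmed as the cause, one mitigation is moving the affected workloads to a non-preemptible node pool; a sketch, with cluster and pool names as placeholders:
gcloud container node-pools create stable-pool --cluster <cluster name> --zone <zone> --num-nodes 3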
I am working on Kafka. I have created a Kafka producer on my server. I want to get data from the Kafka producer to my local system in R.
I have tried the following code in R:
library(rkafka)
consumer1<-rkafka.createConsumer("ipaddress:9092","mytest")
consumer11 <- rkafka.read(consumer1)
It throws the following error:
[1] "Java-Object{com.musigma.consumer.MuConsumer#3349e9bb}"
Unable to connect to zookeeper server
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within
timeout: 100000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.consumer.ZookeeperConsumerConnector.connectZk(ZookeeperConsumerConnector.scala:156)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:114)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:65)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:67)
at kafka.consumer.Consumer$.createJavaConsumerConnector(ConsumerConnector.scala:100)
at kafka.consumer.Consumer.createJavaConsumerConnector(ConsumerConnector.scala)
at com.musigma.consumer.MuConsumer.CreateConsumer(MuConsumer.java:99)
java.lang.NullPointerException
at com.musigma.consumer.MuConsumer.startConsumer(MuConsumer.java:133)
My Zookeeper is running successfully at that IP address.
The first parameter is the Zookeeper address, which runs on port 2181 by default; you've given it the Kafka broker port instead.
Source - https://github.com/cran/rkafkajars/blob/master/java/com/musigma/consumer/MuConsumer.java#L87
Note: it looks like that library isn't maintained, and using Zookeeper to connect a consumer is practically deprecated, so maybe try finding another library.
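For completeness, a corrected call would look like the following (a sketch assuming Zookeeper listens on its default port 2181 on the same host):
library(rkafka)
# connect to Zookeeper (port 2181), not the Kafka broker (9092)
consumer1 <- rkafka.createConsumer("ipaddress:2181", "mytest")
consumer11 <- rkafka.read(consumer1)
rkafka.closeConsumer(consumer1)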
I'm working on a project which requires me to connect to an existing Kafka cluster using dotnet and the Confluent library. The Kafka cluster uses Kerberos/keytab authentication. Looking at some of the documentation, it looks like you can pass the keytab file using the JAAS configuration, but when I look at the properties for the ProducerConfig in Confluent I don't see anything about authentication. So how do I specify the keytab file so that I can authenticate against the Kafka cluster?
I think this section of the Confluent docs mentions how to configure clients:
In your client.properties file you'd need the following configuration:
sasl.mechanism=GSSAPI
# Configure SASL_SSL if SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/etc/security/keytabs/kafka_client.keytab" \
principal="kafkaclient1#EXAMPLE.COM";
# Optionally, for kafka-console-consumer or kafka-console-producer, kinit can be used along with useTicketCache=true:
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useTicketCache=true;
In order to pass client.properties to e.g. kafka-console-consumer, you need to provide the --consumer.config parameter as well:
For Linux:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config client.properties --from-beginning
For Windows:
bin/kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --consumer.config client.properties --from-beginning
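As for the .NET side of the question: the Confluent .NET client is built on librdkafka, which exposes Kerberos settings as plain configuration properties rather than a JAAS string. A sketch of the librdkafka-style equivalents (property names are librdkafka's; the keytab path and principal are placeholders):
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.kerberos.keytab=/etc/security/keytabs/kafka_client.keytab
sasl.kerberos.principal=kafkaclient1@EXAMPLE.COM
In Confluent's ProducerConfig these surface as the SecurityProtocol, SaslMechanism, SaslKerberosServiceName, SaslKerberosKeytab, and SaslKerberosPrincipal properties.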
What is the proper way to run Kibana 4.5 as a service on CentOS 7?
When I run it as ./kibana, I can connect to it from another machine without any problem. When I run it with systemctl start kibana and check with ps -ef | grep '.*node/bin/node.*src/cli', it looks like it's running but refuses connections, and then it goes down. What can be the problem? Thanks in advance.
Here is the content of the kibana.service file:
[Unit]
Description=no description given
[Service]
Type=simple
User=kibana
Group=root
Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml
ExecStart=/opt/kibana/bin/kibana
Restart=always
[Install]
WantedBy=multi-user.target
I am not that much of a Linux expert, but I recently installed Kibana using yum (https://www.elastic.co/guide/en/kibana/4.5/setup.html#kibana-yum) on a minimal installation of CentOS 7 and did not face any issues whatsoever.
In order to get some debug logs and find out what is wrong in your case, edit the Kibana configuration file
/opt/kibana/config/kibana.yml
and set a filename for the logging.dest property.
logging.dest: /var/log/kibana.log
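After setting this, you may need to create the log file and make it writable by the user the service runs as, then restart (a sketch assuming the unit file above):
sudo touch /var/log/kibana.log
sudo chown kibana /var/log/kibana.log
sudo systemctl restart kibana
Checking journalctl -u kibana may also show why the process dies right after systemd starts it.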
Good luck
Igor,
I noticed a few questions you posted on Kafka, so it sounds like you need to set up a cluster that can ingest data and pass it to Elasticsearch. Kibana would be just the user interface.
In my experience, components like ELK, Kafka, Zookeeper, etc. should be managed by a watchdog process. I highly recommend looking at something like supervisord: http://supervisord.org/
You should run it as a service and have the rest managed by the supervisor. It will guarantee starting components at boot but, what's more important, restarting them in case of failure and collecting logs. In the case of Kibana, it is a NodeJS app that writes to stdout/stderr, so to know what fails, you need to collect them; see the sketch below.
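A minimal supervisord program section for Kibana might look like this (a sketch; the command path matches the /opt/kibana layout above, and the log locations are placeholders):
[program:kibana]
command=/opt/kibana/bin/kibana
user=kibana
autostart=true
autorestart=true
stdout_logfile=/var/log/kibana/stdout.log
stderr_logfile=/var/log/kibana/stderr.log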