I am struggling to find out the reason why the consumer is getting stopped.
The consumer stops after a certain time (around 4 minutes 52 seconds), although until then it is able to consume and process messages.
As per my understanding, the reason the consumer stops is that it cannot commit the offset within max.poll.interval.ms, i.e. the processing time exceeds max.poll.interval.ms.
Are there any other reasons?
Here are my basic consumer properties (sketched in code after the list):
max.poll.records = 2
auto.offset.reset = latest
max.poll.interval.ms = 300000
idle.poll.interval = 60000 (between two polls)
no.of.consumers =1
consumer.group.id = test2
listener.auto.start = true
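For reference, idle.poll.interval, no.of.consumers and listener.auto.start look like application-level settings rather than Kafka client properties, so only the standard client properties appear in this rough sketch; the broker address and the String deserializers are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");          // placeholder address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test2");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2);
// If processing the records returned by one poll() takes longer than this,
// the consumer is considered failed and the group rebalances.
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());   // assumed String payloads
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);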
I see these statements in the log:
Received user wakeup,
Raising WakeupException in response to user wakeup,
Executing onLeavePrepare with generation Generation
Can someone help with this?
Note: we are consuming from Event Hub, and it is on this connection that we see the issue; when we connect to Kafka we do not see any issues.
@Gary, can you please help with this?
On the Kafka configuration side everything looks fine. The issue we found is with the Docker pods, where the health of the pod is not being reported, because the livenessProbe port is different from the port the application is configured on (by mistake I had hard-coded a different port). Anyway, thanks for this forum!
There is an issue in a topology of 3 hosts.
The primary has a scheduled OS task (every hour) that uses RMAN to delete archive logs older than 3 hours. The archivelog deletion policy is configured to "Applied on all standby".
There are 2 remote log_archive_dest entries - Physical Standby and Downstream. Every day an "RMAN-08120: WARNING: archived log not deleted, not yet applied by standby" appears in the task's logs, and then it resolves itself within 2-3 hours.
I've checked V$ARCHIVE_LOG during the issue and figured out that the redo logs are not applied on the Downstream server. I have not caught the issue on the Downstream server itself yet, but during the "good" periods all the apply processes are enabled, while the dba_apply_progress view tells me that the apply_time of the messages is 01-JAN-1970.
The dba_capture view shows the capture processes' status as ENABLED, and status_change_time is approximately the time the RMAN issue resolved.
I'm new to the GoldenGate, Streams and downstream capture technologies. I've read the Oracle reference docs, but couldn't find anything about a schedule for capture or apply processes.
Could someone please help me figure out what else to check or what to read?
Grateful for every response.
Thanks.
I am trying to run a simple WordCount streams application, but I get the error "Could not create internal topics - Stream-thread exception".
I have seen a similar thread but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and the issue still occurs.
Can someone let me know how to fix this?
Clean up your temporary Kafka Streams internal topics.
Run the --list command on Kafka to see all the topics that start with your application name and end with -changelog or -repartition, and manually delete them.
This one worked for me.
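If you would rather do this from code than from the command line, roughly the same cleanup is possible with the Kafka AdminClient; the broker address and the "my-app" application prefix below are placeholders, so this is only a sketch to adapt:

import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class CleanupInternalTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // Streams internal topics are named <application.id>-...-changelog / -repartition.
            Set<String> internal = admin.listTopics().names().get().stream()
                    .filter(t -> t.startsWith("my-app") // placeholder application.id
                            && (t.endsWith("-changelog") || t.endsWith("-repartition")))
                    .collect(Collectors.toSet());
            admin.deleteTopics(internal).all().get(); // needs delete.topic.enable=true on the broker
        }
    }
}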
Also, make sure delete.topic.enable=true is set on the broker so that the deletion actually happens; it was not the default until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
I connected to Kafka using Kafka Tool and deleted them manually.
I am using RabbitMQ for messaging in my service. Let's suppose there are 2 microservices, A and B.
There are 3 exchanges, each with its respective queue, in between.
A is the publisher and B is the consumer here. When A sends a message, it successfully reaches the queue (I can see in the console that the queue depth increases), but the consumer is not able to receive messages. Previously it was working.
For the other exchanges and queues, the consumer works fine.
I tried purging the queue and restarting the application, but that didn't help: there were always 4 unacked messages in the queue while the rest were in the ready state. Finally I deleted the queue, the exchange and the respective routing key, recreated them, and then everything worked fine.
Can anyone help me understand what happened here and why it didn't work?
Sometimes, when message processing fails and we throw an error, the message gets stuck and goes into an infinite loop (queue -> processing -> queue -> ...) if the method keeps throwing the error.
For the other messages in the queue, we can still process them by increasing the listener concurrency, but the unacked message stays there and only goes away if someone stops the consumer.
Now I have one question: can I set a retry limit for processing such messages? If someone knows how, please help.
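If the consumer happens to be a Spring AMQP listener (the question does not say, so this is only an assumption), a bounded retry is typically configured with a stateless retry interceptor that rejects the message without requeueing once the attempts are exhausted. A minimal sketch:

import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitRetryConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Try each failed delivery at most 3 times, then reject it without
        // requeueing so it does not loop back into the queue forever.
        Advice retry = RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(1000, 2.0, 10000) // initial interval, multiplier, max interval (ms)
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build();
        factory.setAdviceChain(retry);
        return factory;
    }
}

After the final attempt the message is dropped, or routed to a dead-letter exchange if the queue has one configured.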
Even with developer mode enabled for Remote Config, I still get the following error:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device: the throttling is applied per device, so e.g. move from your simulator to a physical device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
The quick and easy hack to get your app running again is to delete the application and reinstall it. Firebase identifies your device as a new device after reinstalling.
Hope it helps and saves you time.
I have written a very simple Flink streaming job which takes data from Kafka using a FlinkKafkaConsumer082.
protected DataStream<String> getKafkaStream(StreamExecutionEnvironment env, String topic) {
    Properties result = new Properties();
    result.put("bootstrap.servers", getBrokerUrl());
    result.put("zookeeper.connect", getZookeeperUrl());
    result.put("group.id", getGroup());
    return env.addSource(
            new FlinkKafkaConsumer082<>(
                    topic,
                    new SimpleStringSchema(),
                    result));
}
This works very well, and whenever I put something into the topic on Kafka, it is received by my Flink job and processed. Now I tried to see what happens if my Flink job isn't online for some reason. So I shut down the Flink job and kept sending messages to Kafka. Then I started my Flink job again and was expecting that it would process the messages that were sent in the meantime.
However, I got this message:
No prior offsets found for some partitions in topic collector.Customer. Fetched the following start offsets [FetchPartition {partition=0, offset=25}]
So it basically ignored all messages that came in since the last shutdown of the Flink job and just started to read at the end of the queue. From the documentation of FlinkKafkaConsumer082 I gathered that it automatically takes care of synchronizing the processed offsets with the Kafka broker. However, that doesn't seem to be the case.
I am using a single-node Kafka installation (the one that comes with the Kafka distribution) with a single-node ZooKeeper installation (also the one that is bundled with the Kafka distribution).
I suspect it is some kind of misconfiguration or something like that, but I really don't know where to start looking. Has anyone else had this issue and maybe solved it?
I found the reason. You need to explicitly enable checkpointing in the StreamExecutionEnvironment to make the Kafka connector write the processed offsets to Zookeeper. If you don't enable it, the Kafka connector will not write the last read offset and it will therefore not be able to resume from there when the collecting Job is restarted. So be sure to write:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(); // <-- this is the important part
Anatoly's suggestion for changing the initial offset is probably still a good idea, in case checkpointing fails for some reason.
https://kafka.apache.org/08/configuration.html
Set auto.offset.reset to smallest (by default it's largest).
auto.offset.reset:
What to do when there is no initial offset in Zookeeper or if an offset is out of range:
smallest: automatically reset the offset to the smallest offset
largest: automatically reset the offset to the largest offset
anything else: throw exception to the consumer.
If this is set to largest, the consumer may lose some messages when the number of partitions, for the topics it subscribes to, changes on the broker. To prevent data loss during partition addition, set auto.offset.reset to smallest.
Also make sure getGroup() returns the same group id after a restart, otherwise the previously stored offsets will not be found.
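Applied to the getKafkaStream() method from the question, the two points together would look roughly like this (a sketch against the old 0.8-style consumer used by FlinkKafkaConsumer082, not a tested drop-in):

Properties result = new Properties();
result.put("bootstrap.servers", getBrokerUrl());
result.put("zookeeper.connect", getZookeeperUrl());
// Keep the group id stable across restarts so previously committed offsets are picked up.
result.put("group.id", getGroup());
// Only used when no committed offset exists for the group:
// start from the earliest available message instead of the latest one.
result.put("auto.offset.reset", "smallest");

return env.addSource(
        new FlinkKafkaConsumer082<>(topic, new SimpleStringSchema(), result));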