I'm using ReplyingKafkaTemplate to make a synchronous call with a reply. From what I've found so far, every time I'm going to use the template I call start(), and after receiving the response, the stop() method. However, I came across a problem with the message commit: the offset of my consumer was not increasing. I assumed this is because the consumer did not have time to make a commit, since the default commit interval (the "auto.commit.interval.ms" property) is set to 5 seconds in the ConsumerConfig class and I'm stopping it immediately after receiving a message. So I changed this time to 1 ms, to commit immediately after receiving a message. This way it's working, but I would like to understand it better.
My question is: how should the start() and stop() methods be used properly? Is there a purpose to starting the container before every call and stopping it after? And what is the right way to make sure the commit was made?
Btw. I would be honored if Gary answered the question
You should not start and stop the container each time; just leave the reply container running all the time.
In any case, you should leave enable.auto.commit alone; while its default is true, Spring will set it to false unless you explicitly set it to true.
The container will commit the offset in a more deterministic manner than the built-in auto commit mechanism.
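For illustration, here is a minimal sketch of that setup (assuming Spring Boot with auto-configured factories; the topic names, group id, and String types are placeholders). The template and its reply container are created once as beans and left running for the life of the application:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

@Configuration
public class ReplyingTemplateConfig {

    // Created once and kept running for the whole application;
    // no start()/stop() around individual calls.
    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyingTemplate(
            ProducerFactory<String, String> pf,
            ConcurrentMessageListenerContainer<String, String> repliesContainer) {
        return new ReplyingKafkaTemplate<>(pf, repliesContainer);
    }

    @Bean
    public ConcurrentMessageListenerContainer<String, String> repliesContainer(
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        // "replies" is a placeholder reply topic. With enable.auto.commit left
        // unset, Spring forces it to false and the container commits the
        // offsets itself, by default after each batch of polled records.
        ConcurrentMessageListenerContainer<String, String> container =
                factory.createContainer("replies");
        container.getContainerProperties().setGroupId("replies-group");
        container.setAutoStartup(false); // the template starts/stops it
        return container;
    }
}

Each request/reply call then needs no lifecycle management at all, e.g. replyingTemplate.sendAndReceive(new ProducerRecord<>("requests", "hi")).get(10, TimeUnit.SECONDS).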
I have this Kafka consumer:
new ReactiveKafkaConsumerTemplate<>(createReceivingOptions())
It happily processes messages. I set
max-poll-records=1
so that things don't happen too fast for me. I can verify, via a logging breakpoint in the poll method on
final Map<TopicPartition, List<ConsumerRecord<K, V>>> records = pollForFetches(timer);
how many records poll returned, and yes, it's one. Then I asked it to pause all assigned partitions. In the log I can see that it worked!
o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=Testing, groupId=Testing] Pausing partitions [TestTopic-0]
and from that point on I can see that poll gets only 0 records, and also this log:
Skipping fetching records for assigned partition TestTopic-0 because it is paused
OK great, it works! But wait, why is my whole topic getting processed then?
Then I found out that at a certain point there is also this log:
Consumer clientId=Testing, groupId=Testing] Resuming partitions [TestTopic-0]
What? Who is calling that? And then I also found out that there are multiple requests for pausing all over the place, not just the one I actually invoked.
Is pausing somehow used by the reactive client, so that it cannot be used manually? Or does someone have an explanation for why …clients.consumer.KafkaConsumer pauses/resumes the topic on its own all the time, so that a manual pause gets undone?
After reviewing the ConsumerEventLoop code, I can see the reactive client uses pause/resume internally to handle back pressure: when the downstream can't receive any more data, it pauses all assigned partitions, and it unconditionally resumes them when the back pressure is relieved.
It seems to me that it needs to keep track of whether the pause was done because of back-pressure and only resume in that case.
It looks like it used to do that before this commit.
Perhaps you could use back pressure instead to force the pause?
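For example, a sketch of what that could look like (not your actual pipeline: createReceivingOptions() is your own method, while handle() and the one-second delay are made-up stand-ins for slow downstream processing):

import java.time.Duration;
import reactor.core.publisher.Mono;

ReactiveKafkaConsumerTemplate<String, String> template =
        new ReactiveKafkaConsumerTemplate<>(createReceivingOptions());

template.receiveAutoAck()
        // concatMap with a prefetch of 1 requests records one at a time;
        // while handle() is busy there is no downstream demand, so the
        // reactive consumer pauses the partitions and resumes them itself.
        .concatMap(record -> Mono.fromRunnable(() -> handle(record))
                .delaySubscription(Duration.ofSeconds(1)), 1)
        .subscribe();

With no demand from the subscriber, the ConsumerEventLoop does the pausing for you, and its later resume is then the intended behavior rather than something undoing your manual call.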
I'm working with TcpStream. The basic structure of my code is:
loop {
if /* new data in the stream */ { /* handle it */ }
/* do a lot of other stuff */
}
So set_timeout() appears to be what I need, but I'm a little puzzled about how it works. The documentation says:
This function will set a timeout for all blocking operations (including reads and writes) on this stream. The timeout specified is a relative time, in milliseconds, into the future after which point operations will time out. This means that the timeout must be reset periodically to keep it from expiring.
So I would expect to have to reset the timeout each time before checking whether new data is available; otherwise I would only get Err(TimeOut) after some time.
But it appears not to be the case: if I set a very low timeout (like 10 ms) once and for all, the loop does exactly what I want. It returns new data if there is some, and returns Err(TimeOut) if there is none.
Am I misunderstanding the documentation? Is it safe for me to use this behavior?
I would have expected it to work like a socket timeout: the kind you have as a property of sockets in most operating systems, available from within programming languages as SO_TIMEOUT or similar. With such a socket timeout, the timer is started whenever you start a blocking operation on the socket, like read, write, or connect. Either the operation succeeds within the time frame, or the timer is triggered and the operation fails with a timeout. The timeout is a property of the socket and not of the operation, so there is no need to set it again before each operation.
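For comparison, a minimal Java sketch of that per-operation behavior (the endpoint is a placeholder): the timeout is set once as a property of the socket and then bounds each blocking read individually.

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) { // placeholder endpoint
            socket.setSoTimeout(10); // set once: each blocking read waits at most ~10 ms
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[1024];
            while (true) {
                try {
                    int n = in.read(buf); // succeeds, or times out after ~10 ms
                    if (n == -1) break;   // peer closed the connection
                    // handle the n bytes read
                } catch (SocketTimeoutException e) {
                    // no data within 10 ms; do the loop's other work, then retry
                }
            }
        }
    }
}

The socket stays valid after a SocketTimeoutException, so the loop can simply poll again on the next iteration, which is the behavior the question describes.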
But according to the documentation, Rust implemented a completely different thing. If I interpret the documentation correctly, they don't set a timeout per operation, but instead set a deadline for all operations of this type on the socket. That is, when the timer is set to 10 seconds you can have multiple reads within this time, but if there is still a read active after 10 seconds it will be stopped.
When one is used to working with socket timeouts in other languages, this behavior is not the expected one, and it looks like the Rust developers had similar objections to this (experimental) API. In https://github.com/rust-lang/rust/issues/15802 they suggest renaming these kinds of functions from set..timeout to set..deadline to make the name reflect the behavior.
When a workflow has a receive activity that occurs after another receive activity, and the second receive activity is called first, the workflow holds the caller, blocking for one minute before timing out.
I want the workflow to return immediately when there are no matching workflow instances.
I do not want to change the timeout on the client as some calls may take a while.
This is a known issue in WF4; at least, I am not aware of it having been fixed yet.
I have a workflow that contains a Pick activity. Each PickBranch is triggered by a WCF request. The triggered branch then sends a response to the request and performs an Action activity. But the behaviour I'm seeing indicates the response is not being sent until the Action activity is complete, which is causing the original request to time out, depending on how long the Action activity takes.
In the PickBranch above, I'm adding work orders to a mobile database. Each work order takes up to 16 seconds to be added to the database, so as the number of work orders increases, so does the likelihood that the original request will time out. What am I doing wrong?
OK, I think I have a resolution for this. As per Maurice's answer here, I added a Delay activity following the SendReplyToReceive, and the workflow then started behaving as expected.
Just tested this and it works fine. If I have a Pick with a send and receive inside a trigger and a delay inside the action, the reply is received immediately.
Are you sure the Request property on your SendReply activity is set correctly?
Patrick is still right that you should implement your database activity as an AsyncCodeActivity, but this would not be the reason for your reply being delayed.
In my experience, setting PersistBeforeSend on SendReplyToReceive to True fixes this problem. Putting a Persist block after SendReplyToReceive also helps.
This is working as intended. If the operations take such a long time, would you be better served by calling them asynchronously? Check out AsyncCodeActivity here:
http://msdn.microsoft.com/en-us/library/system.activities.asynccodeactivity.aspx
and by unresponsive I mean that after the first three successful connections, the fourth connection is initiated and nothing happens: no crashes, no delegate functions called, no data sent out (according to Wireshark)... it just sits there?!
I've been beating my head against this for a day and a half...
iOS 4.3.3
latest Xcode; it happens the same way on a real device as in the simulator.
I've read all the NSURLConnection posts in the Developer Forums... I'm at a loss.
From my application delegate, I kick off an async NSURLConnection according to Apple docs, using the App Delegate as the delegate for the NSURLConnection.
From my applicationDidFinishLaunching... I trigger the initial two queries, which successfully return XML that I then pass off to an NSOperationQueue to be parsed.
I can even loop, repeating these queries with no issues; I repeated them 10 times and they worked just fine.
The next series of five queries is triggered via user input. The first query runs successfully and returns the correct response; then the next query is created, and when it's used to create an NSURLConnection (just like all the others), it just sits there?!
The normal delegate calls I see on all the other queries never happen.
Nothing goes over the wire according to Wireshark?
I've reordered the queries, and regardless of the query, after the first one the next one fails (fails as in it does nothing: no errors or aborts, it just sits there).
It's obviously in my code, but I am blind to it.
So what other tools can I use to debug the async NSURLConnection... how can I tell what it's doing, if anything?
Any suggestions for debugging an NSURLConnection, or other ways to accomplish the same thing an NSURLConnection does?
Thanks for any help you can offer...
OK tracked it down...
I was watching the stack dump in each thread as I was about to kick off each NSURLConnection. The first three were all on the main thread as expected... the fourth one ended up on a new thread?! On one of my NSOperation threads?!?!
As it turns out, I inadvertently added logic(?) that started one of my NSURLConnections in the last NSOperation's call to didFinishParsing:, so the NSURLConnection was started asynchronously and then the NSOperation terminated... >.<
So I'll move the NSURLConnection out of didFinishParsing: so it's started from the main run loop, and I should be good! (An async NSURLConnection delivers its delegate callbacks on the run loop of the thread that started it; once the NSOperation's thread went away, there was no run loop left to fire them.)