Spring Data Redis stream receiver completes prematurely - spring-data-redis

I am using Spring Data Redis to consume from a Redis stream. Using the reactive StreamReceiver to listen over a consumer group works, but I have observed that the Flux sometimes completes prematurely: it stops listening for new messages and the stream terminates.
Code
StreamReceiverOptions<String, MapRecord<String, String, String>> options =
        StreamReceiverOptions.builder().build();

StreamReceiver.create(reactiveConnFactory, options)
        .receiveAutoAck(Consumer.from("CONSUMER_GRP", "CONSUMER_ID_1"),
                StreamOffset.create("CONSUMER_STREAM", ReadOffset.lastConsumed()))
        .doOnNext(msg -> LOG.info("Got [{}] message from stream", msg))
        .flatMap(msg -> Mono.fromRunnable(() -> process("reactive", msg))
                .subscribeOn(streamConsumerExecutor))
        .onErrorResume(t -> Flux.empty())
        .doOnCancel(() -> LOG.info("Consumer Stream was cancelled"))
        .doOnComplete(() -> LOG.info("Consumer Stream Completed"))
        .doOnTerminate(() -> LOG.info("Consumer Stream terminated"))
        .subscribe();
After reading messages from the stream for some time, I get the log "Consumer Stream terminated" and no further messages are consumed.
Version: 2.2.0.RELEASE
Is this a bug, or am I missing something? Could anyone help?
UPDATE
It looks like the Redis commands are timing out, as I get a RedisCommandTimeoutException. Is there a way to retry the streaming process on such errors rather than cancelling it? I also figured out that it happens on the XREADGROUP operation, although issuing the same command through the Node.js redis-cli worked fine.
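One observation, hedged: onErrorResume(t -> Flux.empty()) replaces any error signal (including a RedisCommandTimeoutException) with an empty Flux, so downstream operators only ever see a normal completion, which is consistent with the "Consumer Stream Completed" / "terminated" logs. Below is a minimal sketch (not from the original post) of re-subscribing on error instead; it assumes reactor-core 3.3.4+ for the Retry spec (a plain .retry() works on older versions) and sets pollTimeout explicitly, since each poll issues a blocking XREADGROUP that needs to finish within the Lettuce command timeout:

// Sketch only, not from the original question. Needs:
//   import java.time.Duration;
//   import reactor.util.retry.Retry; // reactor-core 3.3.4+
StreamReceiverOptions<String, MapRecord<String, String, String>> retryOptions =
        StreamReceiverOptions.builder()
                .pollTimeout(Duration.ofSeconds(1)) // block duration of each XREADGROUP poll
                .build();

Flux.defer(() -> StreamReceiver.create(reactiveConnFactory, retryOptions)
                .receiveAutoAck(Consumer.from("CONSUMER_GRP", "CONSUMER_ID_1"),
                        StreamOffset.create("CONSUMER_STREAM", ReadOffset.lastConsumed())))
        .doOnNext(msg -> LOG.info("Got [{}] message from stream", msg))
        .flatMap(msg -> Mono.fromRunnable(() -> process("reactive", msg))
                .subscribeOn(streamConsumerExecutor))
        .doOnError(t -> LOG.warn("Stream errored, re-subscribing", t))
        // re-subscribe with backoff instead of turning the error into a completion
        .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(1)))
        .subscribe();

Flux.defer makes each retry build a fresh receive pipeline, and reading from ReadOffset.lastConsumed() means the consumer group resumes from its last consumed position after each re-subscription.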

Related

In Firebase and Kotlin, in case of no network connection is there an easy way to handle endless network request looping?

In the case of network connectivity loss, the following code just loops endlessly and keeps making API calls. Is there a way to cancel with a timeout (for example, 5000 ms) using the Firebase API? Or would I have to write my own coroutine to handle this?
fun updateUserFieldInDB(
    collectionPath: String,
    strArr: ArrayList<String>,
    onSuccess: (() -> Unit),
    onFail: (() -> Unit)
) {
    val fbUser = Firebase.auth.currentUser
    if (fbUser == null) {
        Log.i(TAG, "user is null....")
        return
    }
    val db = Firebase.firestore
    when (strArr.size) {
        2 -> {
            db.collection(collectionPath).document(fbUser.uid).update(strArr[0], strArr[1])
                .addOnSuccessListener {
                    onSuccess()
                }
                .addOnFailureListener {
                    onFail()
                }
        }
    }
}
The onSuccess and onFail completion handlers for Firestore only fire once the write operation has been committed or rejected on the server. You should only use them if you're interested in detecting that situation, in which case the looping is to be expected.
If you only care whether the write operation was recorded by the Firestore client (in its local cache), the best way to detect that is when the update(strArr[0], strArr[1]) call completes.
So pretty much: when the next line of code executes, the write has been recorded locally; when the completion listeners fire, the write has been handled on the server.
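Purely as an illustration of that distinction (not code from the question or answer; it uses the Java flavour of the same Android Firestore API, and the collection and field names are hypothetical):

// Hypothetical sketch: "users", "status" and "online" are made-up names.
FirebaseFirestore db = FirebaseFirestore.getInstance();
FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();

Task<Void> write = db.collection("users").document(user.getUid())
        .update("status", "online");
// Reaching this line means the write has been recorded in the local cache;
// code that only cares about the local write can carry on from here.

write.addOnSuccessListener(v -> Log.i(TAG, "write committed on the server"))
     .addOnFailureListener(e -> Log.w(TAG, "write rejected by the server", e));
// These listeners fire only once the server accepts or rejects the write,
// which is why they appear to loop/hang while the device is offline.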

How to make a gRPC firestore listen request in Rust?

Using gRPC bindings from https://github.com/gkkachi/firestore-grpc I was able to puzzle together something that is seemingly working but does not receive any content:
Creating the request:
let req = ListenRequest {
    database: format!("projects/{}/databases/(default)", project_id),
    labels: HashMap::new(),
    target_change: Some(TargetChange::AddTarget(Target {
        // "Rust" in hex: https://github.com/googleapis/python-firestore/issues/51
        target_id: 0x52757374,
        once: false,
        target_type: Some(TargetType::Documents(DocumentsTarget {
            documents: vec![users_collection],
        })),
        resume_type: None,
    })),
};
Sending it:
let mut req = Request::new(stream::iter(vec![req]));
let metadata = req.metadata_mut();
metadata.insert(
    "google-cloud-resource-prefix",
    MetadataValue::from_str(&db).unwrap(),
);
println!("sending request");

let res = get_client(&token).await?.listen(req).await?;
let mut res = res.into_inner();
while let Some(msg) = res.next().await {
    println!("getting response");
    dbg!(msg);
}
(full code in this repo).
The request can be made but the stream does not contain any actual content. The only hint I get from the debug logs is
[2021-10-27T14:54:39Z DEBUG h2::codec::framed_write] send frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
[2021-10-27T14:54:39Z DEBUG h2::proto::connection] Connection::poll; connection error error=GoAway(b"", NO_ERROR, Library)
Any idea what is missing?
The crucial thing I was missing, as pointed out in the Rust users forum, was that the request stream was ending immediately, which caused the connection to close. The send frame=GoAway was actually sent by the client (facepalm).
To keep the connection open and receive responses we can keep the input stream pending: Request::new(stream::iter(vec![req]).chain(stream::pending())). There is probably a better way to set things up and keep control over subsequent input requests, but this is enough to fix the example.

Ocaml / Async socket issue

I am quite new to OCaml and I am working on a small TCP client utility, using Async/Core.
The connection is opened using
Tcp.with_connection (Tcp.Where_to_connect.of_host_and_port { host = "localhost"; port = myPort })
I need to be able to accept keyboard input, as well as read input from the socket. I use the Deferred.any for this purpose.
Calling Reader.read reader buf on the socket results in `Eof, which is OK, but when the method (containing the Deferred.any code) is called recursively, I get an exception:
"unhandled exception in Async scheduler"
("unhandled exception"
((monitor.ml.Error
("can not read from reader" (reason "in use")
.....
Reader.is_closed on the reader returns false.
How can I “monitor” the socket recursively without this exception?
Michael

Redis long-polling Pub/Sub frequent message blocking

I'm trying to wrap my head around the Redis Pub/Sub API and set up a long-polling server.
This lua script subscribes to a 'test' channel and returns new messages received:
nginx.conf:
location /poll {
    lua_need_request_body on;
    default_type 'text/plain';
    content_by_lua_file '/usr/local/nginx/html/poll.lua';
}
poll.lua:
local redis = require "redis";
local red = redis:new();
local cjson = require "cjson";

red:set_timeout(30000) -- 30 sec

local resCon, err = red:connect("127.0.0.1", 6379)
if not resCon then
    ngx.print("error")
    return
end

local resSub, err = red:subscribe('r:' .. ngx.var["arg_r"]:gsub('%W',''))
if not resSub then
    ngx.print("error")
    return
end
if resSub == ngx.null then
    ngx.print("error")
    return
end

local resMsg, err = red:read_reply()
if not resMsg then
    ngx.say("0")
    return
end
ngx.say(cjson.encode(resMsg))
client.js:
var tmpR = 'test';

function poll() {
    $.get('/poll', {'r': tmpR}, function (data) {
        if (data !== "error") {
            console.log(data);
            window.setTimeout(function () {
                poll();
            }, 1000);
        } else {
            console.log('poll fail');
        }
    });
}
Now, if I send publish r:test hello from redis-cli, I receive the message on the client and the server responds to redis-cli with 1. But, if I send two messages quickly, the second message doesn't broadcast and the server responds with 0.
Are my channels only capable of receiving a message per second, or, is this a throttle on the frequency of messages a user can broadcast to a channel?
Is this the right way to approach this polling server on nginx assuming many users may be connected at one time? Would it be more efficient to use GET requests on a timer?
Given two consecutive messages, only one of them will have a subscriber listening when it is published. No subscriber is listening when the second message is sent; the only subscriber is still processing the previous result and returning it to the user.
Redis does not maintain a message queue or anything similar to ensure that previously listening clients receive the missed messages when they re-subscribe.
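To make that behaviour concrete (this is not part of the original answer, and Java with the Lettuce client is used only for illustration): a subscriber that stays connected receives every published message, while anything published when no subscriber is connected is simply dropped.

import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PersistentSubscriber {
    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://127.0.0.1:6379");
        StatefulRedisPubSubConnection<String, String> conn = client.connectPubSub();

        // Every PUBLISH to "r:test" is delivered while this subscription is open;
        // messages published while nobody is subscribed are not queued by Redis.
        conn.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.println(channel + ": " + message);
            }
        });
        conn.sync().subscribe("r:test");

        Thread.currentThread().join(); // keep the subscriber alive
    }
}

In the long-polling setup above, each HTTP request subscribes, waits for a single reply, and disconnects, which is exactly the gap the answer describes.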

akka io tcp server

I am using the new Akka IO and followed this tutorial (which is a simple server-client application). My server actor system code looks like this:
// create the server system
ActorSystem tcpServerSystem = ActorSystem.create("tcp-server-system");
// create the tcp manager actor
final ActorRef tcpServer = Tcp.get(tcpServerSystem).manager();
// create the server actor
ActorRef serverActor = tcpServerSystem.actorOf(
        new Props(ServerActor.class).withRouter(new RoundRobinRouter(5)), "server");
// tell the tcp manager to bind, using the server actor as the handler for incoming connections
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
options.add(TcpSO.reuseAddress(true));
tcpServer.tell(TcpMessage.bind(serverActor, new InetSocketAddress("127.0.0.1", 12345), 10, options),
        serverActor);
The ServerActor class is just a plain actor whose onReceive does the following:
logger.info("Received: " + o);
if (o instanceof Tcp.Connected){
connectionActor = getSender();
connectionActor.tell(TcpMessage.register(getSelf()), getSelf());
ByteStringBuilder byteStringBuilder = new ByteStringBuilder();
byteStringBuilder.putBytes("Hello Worlds".getBytes());
connectionActor.tell(TcpMessage.write(byteStringBuilder.result()), getSelf());
}
I am testing the server actor using netcat and see this "strange" behaviour: only the first client that connects to the server receives the message sent from the server. Subsequent clients can connect to the server but do not receive the message. Also, in debug mode the server actor does not get the Tcp.Connected message (except for the first connected client), so a registration message cannot be sent to the connection, although the next clients can still connect.
This is a known issue in the 2.2-M1 milestone: the TcpListener didn't register an AcceptInterest on the selector unless it reached the configured BatchAcceptLimit, so it was not notified of new accepts when only a few connections were pending.
It has been fixed and will be part of the next milestone release.
