Redis long-polling Pub/Sub frequent message blocking - nginx

I'm trying to wrap my head around the Redis Pub/Sub API and set up a long-polling server.
This Lua script subscribes to a 'test' channel and returns new messages as they are received:
nginx.conf:
location /poll {
    lua_need_request_body on;
    default_type 'text/plain';
    content_by_lua_file '/usr/local/nginx/html/poll.lua';
}
poll.lua:
local redis = require "redis"
local red = redis:new()
local cjson = require "cjson"

red:set_timeout(30000) -- 30 sec

local resCon, err = red:connect("127.0.0.1", 6379)
if not resCon then
    ngx.print("error")
    return
end

local resSub, err = red:subscribe('r:' .. ngx.var["arg_r"]:gsub('%W',''))
if not resSub then
    ngx.print("error")
    return
end
if resSub == ngx.null then
    ngx.print("error")
    return
end

local resMsg, err = red:read_reply()
if not resMsg then
    ngx.say("0")
    return
end

ngx.say(cjson.encode(resMsg))
client.js:
var tmpR = 'test';

function poll() {
    $.get('/poll', {'r': tmpR}, function(data) {
        if (data !== "error") {
            console.log(data);
            window.setTimeout(function() {
                poll();
            }, 1000);
        } else {
            console.log('poll fail');
        }
    });
}
Now, if I send publish r:test hello from redis-cli, I receive the message on the client and Redis replies to redis-cli with 1 (one subscriber received it). But if I send two messages quickly, the second message is not broadcast and Redis replies with 0.
Are my channels only able to receive one message per second, or is this a throttle on how frequently a user can publish to a channel?
Is this the right way to approach a polling server on nginx, assuming many users may be connected at one time? Would it be more efficient to use GET requests on a timer?

Given two messages in quick succession, only the first has a subscriber listening for it. By the time the second is published, no subscriber is listening: your only subscriber is still busy delivering the previous result to the HTTP client and has not re-subscribed yet, which is why PUBLISH returns 0.
Redis Pub/Sub does not maintain a message queue or anything similar: messages published while no subscriber is listening are simply dropped, and a client that reconnects will not receive the messages it missed.
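If messages must not be lost between polls, a common alternative is a Redis list used as a queue: the producer LPUSHes and each poll BRPOPs, so anything published while no poller is connected simply waits in the list. A minimal sketch in the shape of poll.lua above (assuming the lua-resty-redis client; the key q:test is illustrative):
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(30000) -- socket timeout: 30 sec

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.print("error")
    return
end

-- Block up to 25 seconds for the next queued message (kept below the
-- 30-second socket timeout so the read does not error out first)
local res, err = red:brpop("q:test", 25)
if not res or res == ngx.null then
    ngx.say("0") -- nothing queued; the client polls again
    return
end

ngx.say(res[2]) -- BRPOP returns { key, value }
The producer side then uses lpush q:test hello instead of publish r:test hello, and two quick messages both sit in the list until separate polls consume them.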

Related

Vue front end to Go server (HTTP) and clients connected to Go server (TCP) error

I'm currently creating a Go TCP server that handles file sharing between multiple Go clients; that part works fine. However, I'm also building a front end using Vue.js that shows some server stats, like the number of users, bytes sent, etc.
The problem occurs when I include the http.ListenAndServe(":3000", nil) call that handles the requests from the front end. Is it impossible to have a TCP and an HTTP server in the same Go file?
If so, how can I link the three (front end, Go server, clients)?
Here is the code of server.go:
func main() {
	// Create TCP server
	serverConnection, error := net.Listen("tcp", ":8085")
	// Check if an error occurred
	// Note: because 'go' forces you to use each variable you declare, error
	// checking is not optional, and maybe that's good
	if error != nil {
		fmt.Println(error)
		return
	}
	// Create server Hub
	serverHb := newServerHub()
	// Close the server just before the program ends
	defer serverConnection.Close()
	// Handle Front End requests
	http.HandleFunc("/api/thumbnail", requestHandler)
	fs := http.FileServer(http.Dir("../../tcp-server-frontend/dist"))
	http.Handle("/", fs)
	fmt.Println("Server listening on port 3000")
	http.ListenAndServe(":3000", nil)
	// Each client sends data; that data is received in the server by a client struct.
	// The client struct then sends the data, which is a request, to a 'go' channel, which is similar to a queue.
	// Somehow this for loop runs only when a new connection is detected
	for {
		// Accept a new connection if a request is made
		// serverConnection.Accept() blocks the for loop
		// until a connection is accepted, then it blocks the for loop again!
		connection, connectionError := serverConnection.Accept()
		// Check if an error occurred
		if connectionError != nil {
			fmt.Println("1: Woah, there's a mistake here :/")
			fmt.Println(connectionError)
			fmt.Println("1: Woah, there's a mistake here :/")
			// return
		}
		// Create new user
		var client *Client = newClient(connection, "Unregistered_User", serverHb)
		fmt.Println(client)
		// Add client to serverHub
		serverHb.addClient(client)
		serverHb.listClients()
		// go client.receiveFile()
		go client.handleClientRequest()
	}
}
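For what it's worth, http.ListenAndServe blocks the goroutine that calls it, so as written the accept loop below it is never reached; it is not that a TCP and an HTTP server cannot live in the same Go file. A minimal sketch of one fix, assuming the rest of main stays as above, is to start the HTTP server on its own goroutine and leave the accept loop on main:
// Sketch: serve HTTP concurrently so the TCP accept loop can run.
go func() {
	fmt.Println("HTTP server listening on port 3000")
	if err := http.ListenAndServe(":3000", nil); err != nil {
		fmt.Println(err)
	}
}()
// ... the existing for { ... serverConnection.Accept() ... } loop
// then runs on the main goroutine as intended.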

Async server does not process requests while a request is stuck

I am new to gRPC, so please let me know if I am doing something wrong here. I am looking at the greeter_async_server.cc example code. It seems to work fine for normal requests, but I wanted to simulate a request getting stuck on the server, so I added a sleep in the processing loop, right before Finish is called on the responder, so that it sits in the actual processing logic of the request. While the server thread is sleeping, the server will not accept any new requests until the thread is free. I attempted to create another client request while the original request on the server was sleeping, but the gRPC server would not process it; the client seemed to be stuck until the server came out of the sleep.
I also broke into the process with a debugger, but the only request I saw was the one that was sleeping. The other threads were waiting on the completion queue.
If I am doing this wrong, please let me know what I need to do to handle requests while another request is stuck.
void Proceed() {
    if (status_ == CREATE) {
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;
        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
    } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        new CallData(service_, cq_);
        // The actual processing.
        std::string prefix("Hello ");
        reply_.set_message(prefix + request_.name());
        Sleep((DWORD)-1); // simulate a stuck request
        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        responder_.Finish(reply_, Status::OK, this);
    } else {
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
    }
}
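For context, the stock example drains its completion queue from a single thread (HandleRpcs() on main), so sleeping inside PROCESS stalls the only thread that could dispatch other tags; no gRPC throttling is involved. gRPC allows multiple threads to call Next() on the same completion queue, so one sketch of a fix (not the example's actual code; the thread count of 4 is arbitrary) is to poll from several worker threads:
// Sketch only: drain the same completion queue from several threads so one
// blocked handler does not stall every other call. Mirrors HandleRpcs() from
// greeter_async_server.cc (needs <thread> and <vector>).
void HandleRpcs() {
    new CallData(&service_, cq_.get());  // as in the original example
    std::vector<std::thread> pollers;
    for (int i = 0; i < 4; ++i) {
        pollers.emplace_back([this] {
            void* tag;
            bool ok;
            while (cq_->Next(&tag, &ok)) {
                GPR_ASSERT(ok);
                static_cast<CallData*>(tag)->Proceed();
            }
        });
    }
    for (auto& t : pollers) {
        t.join();
    }
}
Even then, a handler that sleeps still pins one poller, so truly long-running work is better handed off to a separate worker pool that calls Finish when done.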

Spring Integration TCP. Get connection ID of the connected clients

I have a problem with the dynamic TCP connection approach (the Spring-IP Dynamic FTP sample). When a message is received, I want to get the TCP connection details for that message, so my application can keep track of the sender. But in the service activator I am not able to get this detail.
I also need the connection details when my TCP client connects to the server, so that if the server wants to initiate the communication, it has them.
For info, my application has more than one TCP client and server.
I got an answer in another post from Gary Russell.
Answer
For normal request/reply processing, using an inbound gateway, the framework takes care of routing the service activator reply to the correct socket. It does this by using the connection id header.
If you need to send arbitrary replies (e.g. more than one reply per message), you have to use inbound and outbound channel adapters, and your application is responsible for setting up the connection id header.
There are two ways to access the required header in a POJO invoked by a service activator:
public void foo(byte[] payload, @Header(IpHeaders.CONNECTION_ID) String connectionId) {
    ...
}

public void foo(Message<byte[]> message) {
    String connectionId = message.getHeaders().get(...);
}
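For the second variant, the elided lookup presumably targets the same header constant as the annotated version; a sketch:
public void foo(Message<byte[]> message) {
    // Presumed lookup; IpHeaders.CONNECTION_ID is the same constant used
    // in the @Header variant above.
    String connectionId = (String) message.getHeaders().get(IpHeaders.CONNECTION_ID);
}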
Then, when you send your replies, you need to set that header somehow.
EDIT
Below is my implementation.
To get all the connected clients, simply get the server connection factory from the context and call getOpenConnectionIds(). It returns a list with the connectionId of each connected client.
AbstractServerConnectionFactory connFactory = (AbstractServerConnectionFactory) appContext.getBean("server");
List<String> openConns = connFactory.getOpenConnectionIds();
As mentioned above in Gary's response, use this connectionId and set it in the connection header when sending a message to a client. Sample code follows.
MessageChannel serverOutAdapter = null;
try {
    serverOutAdapter = (MessageChannel) appContext.getBean("toObAdapter");
} catch (Exception ex) {
    LOGGER.error(ex.getMessage());
    throw ex;
}
if (null == serverOutAdapter) {
    throw new Exception("output channel not available");
}
AbstractServerConnectionFactory connFactory = (AbstractServerConnectionFactory) appContext.getBean("serverConnFactoryBeanId");
List<String> openConns = connFactory.getOpenConnectionIds();
if (null == openConns || openConns.size() == 0) {
    throw new Exception("No Client connection registered");
}
for (String connId : openConns) {
    MessageBuilder<String> mb = MessageBuilder.withPayload(message).setHeader(IpHeaders.CONNECTION_ID, connId);
    serverOutAdapter.send(mb.build());
}
Note 1: If you want to send messages from the server, be careful to configure the server and client connection factories so that they do not time out, i.e. set so-keep-alive="true" on the client connection factory.
Note 2: If the server has to initiate communication with the client, make sure the client connects to the server as soon as the context is loaded, because the Spring-IP client connection factory otherwise connects only when the first message is sent out. To connect the client right after the context loads, put client-mode="true" on the tcp-outbound-channel-adapter in the TCP client context.
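For illustration, a client-side configuration along the lines of these notes might look like this (a sketch; the bean ids, host, and port are made up, and retry-interval controls how often client-mode retries the connection):
<!-- Sketch: client connection factory that connects at startup and stays alive. -->
<int-ip:tcp-connection-factory id="clientConnFactory"
    type="client"
    host="localhost"
    port="5678"
    so-keep-alive="true"
    single-use="false"/>

<int-ip:tcp-outbound-channel-adapter id="toObAdapter"
    channel="toServerChannel"
    connection-factory="clientConnFactory"
    client-mode="true"
    retry-interval="10000"/>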

AsyncSocket: always listen to incoming TCP messages

I would like to have a service which connects via TCP to a server and then continuously listens to incoming data. I'm using CocoaAsyncSocket which I'm using in the following way:
self.socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];

NSError *err = nil;
if (![self.socket connectToHost:@"..." onPort:... error:&err]) {
    return;
}
[self.socket readDataWithTimeout:-1 tag:1];
and then in the reading delegate method:
- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    NSLog(@"%@", data);
    [self.socket readDataWithTimeout:-1 tag:1];
}
Is it correct that I'm immediately calling readDataWithTimeout:tag: again? Or is there a (better) way to always listen for incoming messages?
For what you are doing, this is fine. You need to call readDataWithTimeout:tag: again in didReadData:, because otherwise you would only receive the first message from the server. GCDAsyncSocket is designed this way because there are a few other ways you can receive incoming data.
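Those other ways are mostly the framed-read variants. For example, if the server happens to delimit messages with newlines (an assumption, not something the question states), each callback can deliver exactly one message:
// Sketch: read up to the next LF so didReadData: fires once per framed message.
[self.socket readDataToData:[GCDAsyncSocket LFData] withTimeout:-1 tag:1];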

SignalR connect error

I use SignalR 2.0.0 on Windows Server 2012 / IIS 8, with two environments on two different IPs.
One environment's service is up and the second is down (on purpose). I use the websocket protocol.
I have the following scenario:
I connect to the first environment and then want to connect to the second. I disconnect from the first environment, try to connect to the second environment, and get an error (which is the correct behavior). Then I try to reconnect to the first environment, but I still get the same error: "Error during negotiation request."
After refreshing the browser I can successfully connect to the first environment again.
What am I doing wrong?
This is part of my code:
function connect(host)
{
    var hubConnection = $.hubConnection('');
    hubConnection.url = host;
    hubConnection.start()
        .done(open)
        .fail(error);
}

function open()
{
    console.log('login success');
}

function disconnect()
{
    var self = this,
        hubConnection = $.hubConnection("");
    console.log('disconnect');
    hubConnection.stop(true, true);
}

function error(error)
{
    var self = this,
        hubConnection = $.hubConnection("");
    console.log('connection error');
    if (error && hubConnection.state !== $.connection.connectionState.connected)
    {
        .....
        .....
        // logic to determine which environment ip was previous
        connect(environment ip)
    }
}

// occurs when the disconnect button is clicked
function disconnectFromFirstEnvironmentAndConnectToSecond()
{
    disconnect();
    connect(second environment ip);
}
.....
.....
connect(first environment ip);
You're not retaining your first connection reference: you create a HubConnection but never capture it in a scope that can be used later, so when you disconnect later, stop is not called on the HubConnection that was originally started (your disconnect() creates a brand-new connection and stops that one instead).
This can ultimately leave you with too many concurrently open requests, which will prevent you from negotiating with a server, hence your error.
I'd recommend fixing how you stop/start connections. If the issue still occurs after that, inspect the network traffic to ensure that valid requests are being made.
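A minimal sketch of that fix, holding a single connection reference in shared scope instead of creating a fresh HubConnection inside each function (names are illustrative):
// Sketch: one shared reference, so stop() acts on the connection
// that was actually started.
var hubConnection = null;

function connect(host) {
    hubConnection = $.hubConnection(host);
    hubConnection.start().done(open).fail(error);
}

function disconnect() {
    if (hubConnection) {
        hubConnection.stop(true, true);
        hubConnection = null;
    }
}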
