Moar process memory ballooning while running an asynchronous Perl 6 socket server

I have a socket server using IO::Socket::Async and Redis::Async for message publishing. Whenever a message is received by the server, the script translates it and generates an acknowledgement message to send back to the sender, so that the sender will send subsequent messages. Since translating the message is quite expensive, the script runs that portion inside a 'start' block.
However, I noticed the Moar process eating my RAM as the script runs. Any thoughts on where I should look to solve this issue? Thanks!
https://pastebin.com/ySsQsMFH
use v6;
use Data::Dump;
use experimental :pack;
use JSON::Tiny;
use Redis::Async;

constant $SOCKET_PORT = 7000;
constant $SOCKET_ADDR = '0.0.0.0';
constant $REDIS_PORT = 6379;
constant $REDIS_ADDR = '127.0.0.1';
constant $REDIS_AUTH = 'xxxxxxxx';
constant $IDLING_PERIOD_MIN = 180 - 2; # 3 minutes - 2 secs
constant $CACHE_EXPIRE_IN = 86400;     # 24 hours

# create socket
my $socket = IO::Socket::Async.listen($SOCKET_ADDR, $SOCKET_PORT);

# connect to Redis ...
my $redis;
try {
    my $error-code = "110";
    $redis = Redis::Async.new("$REDIS_ADDR:$REDIS_PORT");
    $redis.auth($REDIS_AUTH);
    CATCH {
        default {
            say "Error $error-code ", .^name, ': Failed to initiate connection to Redis';
            exit;
        }
    }
}

# react whenever there is a connection
react {
    whenever $socket -> $conn {
        # do something when the connection wants to talk
        whenever $conn.Supply(:bin) {
            my $data = $_.decode('utf-8');
            # only process if data length is either 108 or 116
            if $data.chars == 108 or $data.chars == 116 {
                say "Received --> " ~ $data;
                my $ack = generateAck($data); # generate ack based on received data
                if $ack {
                    $conn.print: $ack;
                }
                else {
                    say "No ack. Received data may be corrupted. Closing connection";
                    $conn.close;
                }
            }
        }
    }
    CATCH {
        default {
            say .^name, ': ', .Str;
            say "handled in $?LINE";
        }
    }
}
### other subroutines down here ###

The issue was with Redis::Async. Jonathan Stowe has since fixed the Redis module, so I'm now using the Redis module with no issues.

Related

Dialing TCP error: timeout or i/o timeout after a while of high-concurrency requests

I recently ran into a problem while developing a high-concurrency HTTP client with valyala/fasthttp: the client works fine for the first ~15K requests, but after that more and more "dial tcp4 127.0.0.1:80: i/o timeout" and "dialing to the given TCP address timed out" errors occur.
Sample Code
var Finished = 0
var Failed = 0
var Success = 0

func main() {
    for i := 0; i < 1000; i++ {
        go get()
    }
    start := time.Now().Unix()
    for {
        fmt.Printf("Rate: %.2f/s Success: %d, Failed: %d\n", float64(Success)/float64(time.Now().Unix()-start), Success, Failed)
        time.Sleep(100 * time.Millisecond)
    }
}

func get() {
    ticker := time.NewTicker(time.Duration(100+rand.Intn(2900)) * time.Millisecond)
    defer ticker.Stop()

    client := &fasthttp.Client{
        MaxConnsPerHost: 10000,
    }
    for {
        req := &fasthttp.Request{}
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        req.Header.SetConnectionClose()

        res := &fasthttp.Response{}
        err := client.DoTimeout(req, res, 5*time.Second)
        if err != nil {
            fmt.Println(err.Error())
            Failed++
        } else {
            Success++
        }
        Finished++

        client.CloseIdleConnections()
        <-ticker.C
    }
}
Detail
The server is built on labstack/echo/v4. When the client gets the timeout errors, the server doesn't report any error, and performing the request manually via Postman or a browser like Chrome works fine.
The client runs pretty well for the first ~15K requests, but after that more and more timeout errors occur and the output Rate decreases. I searched Google and GitHub and found this issue, which seemed the most relevant one, but it didn't contain a solution.
Another tiny problem...
As you may notice, when the client starts it first generates some "the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection" errors, then works fine until around the 15K mark, after which it starts generating more and more timeout errors. Why does it generate the connection-closed errors at the beginning?
Machine Info
MacBook Pro 14" 2021 (Apple M1 Pro) with 16 GB RAM, running macOS Monterey 12.4
So basically, if you open a connection and then close it as soon as possible, it is not as simple as "connection #1 uses a port and then immediately gives it back"; there is a lot of processing to be done behind the scenes. So if you want to send many requests at the same time, I think it's better to reuse connections as much as you can.
For example, in fasthttp:
req := fasthttp.AcquireRequest()
res := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(res)
// Then do the request below
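Building on that, here is a minimal sketch of the reuse pattern applied to the code above, assuming one fasthttp.Client shared by all goroutines and no Connection: close header (the function name and pool size are illustrative, not taken from the question):
// One shared client: its connection pool is reused by every goroutine.
var client = &fasthttp.Client{
    MaxConnsPerHost: 512,
}

func get() {
    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(res)

    req.SetRequestURI("http://127.0.0.1:80/require?number=10")
    req.Header.SetMethod(fasthttp.MethodGet)
    // No SetConnectionClose(): the connection goes back to the pool
    // instead of being torn down after every request.
    if err := client.DoTimeout(req, res, 5*time.Second); err != nil {
        fmt.Println(err.Error())
    }
}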

gRPC server request_iterator does not finish the loop (C# client, Python server)

We are trying to apply gRPC bidirectional streaming. Proto:
message Request {
    oneof requestTypes {
        ConfigRequest configRequest = 1;
        DataRequest dataRequest = 2;
    }
}

message ConfigRequest {
    int32 request_quantity = 1; // field type assumed; omitted in the original
    ...
}

message DataRequest {
    string id = 1;
    bytes data = 2;
    ...
}

service Service {
    rpc FuncService(stream Request) returns (stream Response);
}
The client side is written in C# and uses asynchronous calls.
If necessary, I can provide the client code for clarification.
The server side is in Python:
def FuncServe(self, request_iterator, context):
    i = 0
    for request in request_iterator:
        ...
        if i == request_quantity:
            break
        i += 1
The problem is that the server hangs in the loop over requests, which is why I introduced the if/break. So far everything works, but we would also like to handle the case where not all of the requests announced in the config arrive. In that case we cannot avoid the loop blocking, and we cannot catch it. Also, I was unable to move the loop into a separate thread to control the execution time; it ends because of an empty iterator.
I would be very grateful for help with this loop problem.

Preventing NGINX from buffering packets

I need to modify the topic of MQTT PUBLISH packets using an NGINX proxy. I created an njs function which is called by js_filter.
function updateTopic(s) {
    s.log("buf: " + s.buffer.toString('hex'));
    if (!s.fromUpstream) {
        if (s.buffer.indexOf("topic") != -1) {
            s.buffer = s.buffer.replace("topic", "mopic").toBytes();
            s.log("new buffer: " + s.buffer.toString('hex'));
        }
    }
    return s.OK;
}
returns:
buf: 300f0005746f7069636d65737361676533
new buffer: 300f00056d6f7069636d65737361676533
This function updates s.buffer correctly, but the packets are not transmitted until another type of packet is received. When a SUBSCRIBE, DISCONNECT, or PING message is received, all the buffered messages are transmitted at once.
If the function does not modify a packet, that packet is transmitted instantly.
Should I do something special after changing s.buffer?
It turned out to be due to a bug, which will be fixed in later versions.
https://github.com/nginx/njs/issues/45

Titanium TCP/IP socket read buffered data

I am reading a TCP/IP socket response (JSON) from the server. The issue is that sometimes, if the data received from the server is very large, it arrives in batches of 2 or 3 with a random break in the JSON. Is there a way to detect how many batches are being sent from the server, or is there a mechanism to tackle this at the client end? Below is the Titanium code for the TCP/IP client:
var socket = Ti.Network.Socket.createTCP({
    host: '127.0.0.1',
    port: 5000,
    connected: function (e) {
        Ti.API.info('Socket opened!');
        Ti.Stream.pump(socket, readCallback, 2048, true);
    },
    error: function (e) {
        Ti.API.info('Error (' + e.errorCode + '): ' + JSON.stringify(e));
    },
});

socket.connect();

function writeCallback(e) {
    Ti.API.info('Successfully wrote to socket. ' + JSON.stringify(e));
}

function readCallback(e) {
    Ti.API.info('e ' + JSON.stringify(e));
    if (e.bytesProcessed == -1) {
        // Error / EOF on socket. Do any cleanup here.
        Ti.API.info('DONE');
    }
    try {
        if (e.buffer) {
            var received = e.buffer.toString();
            Ti.API.info('Received: ' + received);
        } else {
            Ti.API.error('Error: read callback called with no buffer!');
            socket.close();
        }
    } catch (ex) {
        Ti.API.error('Catch ' + ex);
    }
}
TCP/IP is a streaming interface, and there is no guarantee on the client side of how much data will be received when a read is attempted on the socket.
You would have to implement some sort of framing protocol between the server and the client.
TCP/IP is not designed like that: you cannot know in advance how much data you will receive.
There are two ways to solve this problem, as far as I know.
The first is an end-of-message delimiter: you append a unique string such as [%~Done~%] to the end of each packet, receive everything, and search for that string; once you find it, you process the message. I see this as a waste of time and memory, and very primitive.
The second is a length-prefix packet header, which is the best option in my opinion; an example in .NET can be found here (thanks to Stephen Cleary).
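To make the length-prefix idea concrete, here is a minimal Go sketch (Go only for illustration, since the question's client is Titanium; none of these names come from the libraries above): each message is written as a 4-byte big-endian length followed by the payload, and the reader uses io.ReadFull to pull back exactly one complete message regardless of how TCP split it.
package framing

import (
    "encoding/binary"
    "io"
)

// WriteFrame sends one message as: 4-byte big-endian length + payload.
func WriteFrame(w io.Writer, payload []byte) error {
    var header [4]byte
    binary.BigEndian.PutUint32(header[:], uint32(len(payload)))
    if _, err := w.Write(header[:]); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

// ReadFrame reads exactly one message, however many TCP reads it takes.
func ReadFrame(r io.Reader) ([]byte, error) {
    var header [4]byte
    if _, err := io.ReadFull(r, header[:]); err != nil {
        return nil, err
    }
    payload := make([]byte, binary.BigEndian.Uint32(header[:]))
    if _, err := io.ReadFull(r, payload); err != nil {
        return nil, err
    }
    return payload, nil
}
The same framing can be reproduced in the Titanium client by accumulating incoming bytes in a buffer until the announced length has arrived.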

Redis long-polling Pub/Sub frequent message blocking

I'm trying to wrap my head around the Redis Pub/Sub API and set up a long-polling server.
This Lua script subscribes to a 'test' channel and returns any new messages received:
nginx.conf:
location /poll {
    lua_need_request_body on;
    default_type 'text/plain';
    content_by_lua_file '/usr/local/nginx/html/poll.lua';
}
poll.lua:
local redis = require "redis";
local red = redis:new();
local cjson = require "cjson";

red:set_timeout(30000) -- 30 sec

local resCon, err = red:connect("127.0.0.1", 6379)
if not resCon then
    ngx.print("error")
    return
end

local resSub, err = red:subscribe('r:' .. ngx.var["arg_r"]:gsub('%W',''))
if not resSub then
    ngx.print("error")
    return
end
if resSub == ngx.null then
    ngx.print("error")
    return
end

local resMsg, err = red:read_reply()
if not resMsg then
    ngx.say("0")
    return
end

ngx.say(cjson.encode(resMsg))
client.js:
var tmpR = 'test';

function poll() {
    $.get('/poll', {'r': tmpR}, function (data) {
        if (data !== "error") {
            console.log(data);
            window.setTimeout(function () {
                poll();
            }, 1000);
        } else {
            console.log('poll fail');
        }
    })
}
Now, if I send PUBLISH r:test hello from redis-cli, I receive the message on the client and redis-cli gets 1 back (one subscriber received it). But if I send two messages quickly, the second message doesn't broadcast and redis-cli gets 0 back.
Are my channels only capable of receiving one message per second, or is this a throttle on how frequently a user can broadcast to a channel?
Is this the right way to approach a long-polling server on nginx, assuming many users may be connected at the same time? Would it be more efficient to poll with plain GET requests on a timer?
Given two consecutive messages, only one of them will have a subscriber listening for its result: no subscriber is listening when the second message is sent, because the only subscriber is still processing the previous result and returning it to the user.
Redis Pub/Sub does not maintain a message queue or anything similar, so a client that was previously listening will not receive the missed messages when it reconnects.
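If every message needs to survive the gap between two polls, a common alternative is to push messages onto a Redis list and have the poller block on BRPOP instead of SUBSCRIBE. A minimal sketch, in Go with the go-redis client purely to illustrate the idea (the key name queue:test and the 30-second timeout are illustrative, not part of the setup above):
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})

    // Producer side: LPUSH instead of PUBLISH, so the message is stored
    // even if no poller is currently connected.
    if err := rdb.LPush(ctx, "queue:test", "hello").Err(); err != nil {
        panic(err)
    }

    // Long-poll side: BRPOP blocks up to 30s waiting for the next message.
    // Anything pushed while nobody was polling is delivered immediately.
    vals, err := rdb.BRPop(ctx, 30*time.Second, "queue:test").Result()
    if err == redis.Nil {
        fmt.Println("0") // timed out, nothing queued
        return
    } else if err != nil {
        panic(err)
    }
    fmt.Println(vals[1]) // vals[0] is the key, vals[1] is the message
}
The same LPUSH/BRPOP pattern should be reproducible from the Lua script above in place of subscribe/read_reply.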
