Titanium TCP/IP socket read buffered data

I am reading a TCP/IP socket response (JSON) from the server. The issue is that when the data received from the server is very large, it arrives in two or three batches, with the JSON broken at a random point. Is there a way to detect how many batches the server is sending, or is there a mechanism to handle this on the client end? Below is the Titanium code for the TCP/IP socket:
var socket = Ti.Network.Socket.createTCP({
    host: '127.0.0.1',
    port: 5000,
    connected: function (e) {
        Ti.API.info('Socket opened!');
        Ti.Stream.pump(socket, readCallback, 2048, true);
    },
    error: function (e) {
        Ti.API.info('Error (' + e.errorCode + '): ' + JSON.stringify(e));
    }
});
socket.connect();

function writeCallback(e) {
    Ti.API.info('Successfully wrote to socket: ' + JSON.stringify(e));
}

function readCallback(e) {
    Ti.API.info('e ' + JSON.stringify(e));
    if (e.bytesProcessed === -1) {
        // Error / EOF on socket. Do any cleanup here.
        Ti.API.info('DONE');
    }
    try {
        if (e.buffer) {
            var received = e.buffer.toString();
            Ti.API.info('Received: ' + received);
        } else {
            Ti.API.error('Error: read callback called with no buffer!');
            socket.close();
        }
    } catch (ex) {
        Ti.API.error('Catch ' + ex);
    }
}

TCP is a streaming protocol: there is no guarantee on the client side of how much data will arrive when a read is attempted on the socket.
You would have to implement some sort of framing protocol between the server and the client, as sketched below.
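For example, here is a minimal delimiter-based sketch of the readCallback from the question, assuming (hypothetically) that the server terminates every JSON message with a newline; adjust the delimiter to whatever your server actually sends:

var pending = '';

function readCallback(e) {
    if (e.bytesProcessed === -1) {
        // Error / EOF on socket.
        Ti.API.info('DONE');
        return;
    }
    if (!e.buffer) {
        Ti.API.error('Error: read callback called with no buffer!');
        socket.close();
        return;
    }
    // Accumulate chunks; a single read may hold a partial message,
    // or the tail of one message plus the head of the next.
    pending += e.buffer.toString();
    var index;
    while ((index = pending.indexOf('\n')) !== -1) {
        var message = pending.substring(0, index);
        pending = pending.substring(index + 1);
        // Only complete messages reach this point, so JSON.parse is safe
        // as long as the server sends valid JSON per line.
        Ti.API.info('Received complete JSON: ' + JSON.stringify(JSON.parse(message)));
    }
}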

The TCP protocol is simply not designed like that: it delivers a byte stream, and you cannot know in advance how much data you will receive in a single read.
There are two ways to solve this problem, as far as I know.
The first one is a delimiter (an end-of-message marker): you append a unique string, such as [%~Done~%], to the end of each packet. The client accumulates everything it receives and searches for that string; once it is found, the message is complete and you can go on to processing.
But I see this as a waste of time and memory, and very primitive.
The second one is a length-prefixed packet header, and it's the best approach in my opinion; an example in .NET can be found here, thanks to Stephen Cleary.
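To illustrate the length-prefix idea in JavaScript rather than .NET, here is a minimal sketch assuming a hypothetical framing of a 4-byte big-endian length followed by that many bytes of JSON (Node-style Buffer API is used for brevity; in Titanium the equivalent number decoding would go through Ti.Codec). handleMessage is a hypothetical callback:

var pending = Buffer.alloc(0);

function onData(chunk) {
    pending = Buffer.concat([pending, chunk]);
    // Keep extracting frames: one read can hold several messages,
    // or only part of one.
    while (pending.length >= 4) {
        var bodyLength = pending.readUInt32BE(0);
        if (pending.length < 4 + bodyLength) break; // frame still incomplete
        var body = pending.slice(4, 4 + bodyLength);
        pending = pending.slice(4 + bodyLength);
        handleMessage(JSON.parse(body.toString('utf8')));
    }
}

Because the header has a fixed size, the client always knows exactly how many bytes to wait for, which avoids scanning the whole buffer for a marker.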

Related

How to make a gRPC firestore listen request in Rust?

Using the gRPC bindings from https://github.com/gkkachi/firestore-grpc, I was able to puzzle together something that seems to work but does not receive any content:
Creating the request:
let req = ListenRequest {
    database: format!("projects/{}/databases/(default)", project_id),
    labels: HashMap::new(),
    target_change: Some(TargetChange::AddTarget(Target {
        // "Rust" in hex: https://github.com/googleapis/python-firestore/issues/51
        target_id: 0x52757374,
        once: false,
        target_type: Some(TargetType::Documents(DocumentsTarget {
            documents: vec![users_collection],
        })),
        resume_type: None,
    })),
};
Sending it:
let mut req = Request::new(stream::iter(vec![req]));
let metadata = req.metadata_mut();
metadata.insert(
    "google-cloud-resource-prefix",
    MetadataValue::from_str(&db).unwrap(),
);
println!("sending request");
let res = get_client(&token).await?.listen(req).await?;
let mut res = res.into_inner();
while let Some(msg) = res.next().await {
    println!("getting response");
    dbg!(msg);
}
(full code in this repo).
The request can be made but the stream does not contain any actual content. The only hint I get from the debug logs is
[2021-10-27T14:54:39Z DEBUG h2::codec::framed_write] send frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
[2021-10-27T14:54:39Z DEBUG h2::proto::connection] Connection::poll; connection error error=GoAway(b"", NO_ERROR, Library)
Any idea what is missing?
The crucial thing I was missing, as pointed out in the Rust users forum, was that the request stream ended immediately, which caused the connection to close. The send frame=GoAway was actually sent by the client (facepalm).
To keep the connection open and receive responses, we can keep the input stream pending: Request::new(stream::iter(vec![req]).chain(stream::pending())). There is likely a better way to set things up and keep control over subsequent input requests, but this is enough to fix the example.

MOAR process ballooning while running Perl6 socket server

I have a socket server using IO::Socket::Async, with Redis::Async for message publishing. Whenever a message is received by the server, the script translates it and generates an acknowledge message to send back to the sender, so that the sender will send its subsequent messages. Since translating a message is quite expensive, the script runs that portion under a 'start' block.
However, I noticed that the Moar process eats up my RAM while the script is running. Any thoughts on where I should look to solve this issue? Thanks!
https://pastebin.com/ySsQsMFH
use v6;
use Data::Dump;
use experimental :pack;
use JSON::Tiny;
use Redis::Async;

constant $SOCKET_PORT = 7000;
constant $SOCKET_ADDR = '0.0.0.0';
constant $REDIS_PORT = 6379;
constant $REDIS_ADDR = '127.0.0.1';
constant $REDIS_AUTH = 'xxxxxxxx';
constant $IDLING_PERIOD_MIN = 180 - 2; # 3 minutes - 2 secs
constant $CACHE_EXPIRE_IN = 86400; # 24 hours

# create socket
my $socket = IO::Socket::Async.listen($SOCKET_ADDR, $SOCKET_PORT);

# connect to Redis ...
my $redis;
try {
    my $error-code = "110";
    $redis = Redis::Async.new("$REDIS_ADDR:$REDIS_PORT");
    $redis.auth($REDIS_AUTH);
    CATCH {
        default {
            say "Error $error-code ", .^name, ': Failed to initiate connection to Redis';
            exit;
        }
    }
}
# react whenever there is a connection
react {
    whenever $socket -> $conn {
        # do something when the connection wants to talk
        whenever $conn.Supply(:bin) {
            my $data = $_.decode('utf-8');
            # only process if data length is either 108 or 116
            if $data.chars == 108 or $data.chars == 116 {
                say "Received --> " ~ $data;
                my $ack = generateAck($data); # generate ack based on received data
                if $ack {
                    $conn.print: $ack;
                } else {
                    say "No ack. Received data may be corrupted. Closing connection";
                    $conn.close;
                }
            }
        }
    }
    CATCH {
        default {
            say .^name, ': ', .Str;
            say "handled in $?LINE";
        }
    }
}
### other subroutines down here ###
The issue was with using Redis::Async. Jonathan Stowe has since fixed the Redis module, so I'm now using the Redis module with no issues.

Preventing NGINX from buffering packets

I need to modify the topic of MQTT PUBLISH packets passing through an NGINX proxy. I created an njs function which is called by js_filter:
function updateTopic(s) {
    s.log("buf: " + s.buffer.toString('hex'));
    if (!s.fromUpstream) {
        if (s.buffer.indexOf("topic") != -1) {
            s.buffer = s.buffer.replace("topic", "mopic").toBytes();
            s.log("new buffer: " + s.buffer.toString('hex'));
        }
    }
    return s.OK;
}
returns:
buf: 300f0005746f7069636d65737361676533
new buffer: 300f00056d6f7069636d65737361676533
This function updates s.buffer correctly, but the packets are not transmitted until another type of packet is received. When a subscribe, disconnect, or ping message is received, all the buffered messages are transmitted at once.
If this function does not replace a packet, it is transmitted instantaneously.
Should I do something special after changing s.buffer?
It turned out to be due to a bug in njs, which will be fixed in later versions:
https://github.com/nginx/njs/issues/45

What does UDPConn Close really do?

If UDP is a connectionless protocol, then why does UDPConn have a Close method? The documentation says "Close closes the connection", but UDP is connectionless. Is it good practice to call Close on a UDPConn object? Is there any benefit?
http://golang.org/pkg/net/#UDPConn.Close
Good question. Let's look at the code of UDPConn.Close:
http://golang.org/src/pkg/net/net.go?s=3725:3753#L124
func (c *conn) Close() error {
    if !c.ok() {
        return syscall.EINVAL
    }
    return c.fd.Close()
}
It closes c.fd, but what is c.fd?
type conn struct {
    fd *netFD
}
fd is a *netFD, a network file descriptor. Let's look at its Close method:
func (fd *netFD) Close() error {
    fd.pd.Lock() // needed for both fd.incref(true) and pollDesc.Evict
    if !fd.fdmu.IncrefAndClose() {
        fd.pd.Unlock()
        return errClosing
    }
    // Unblock any I/O. Once it all unblocks and returns,
    // so that it cannot be referring to fd.sysfd anymore,
    // the final decref will close fd.sysfd. This should happen
    // fairly quickly, since all the I/O is non-blocking, and any
    // attempts to block in the pollDesc will return errClosing.
    doWakeup := fd.pd.Evict()
    fd.pd.Unlock()
    fd.decref()
    if doWakeup {
        fd.pd.Wakeup()
    }
    return nil
}
Notice all the decref calls.
So, to answer your question: yes, it is good practice; otherwise you will leave network file descriptors hanging around in memory.

When I close an http server, why do I get a socket hang up?

I implemented a graceful stop for our node.js server. Basically something like this:
var shutDown = function () {
    server.on('close', function () {
        console.log('Server ' + process.pid + ' closed.');
        process.exit();
    });
    console.log('Shutting down ' + process.pid + '...');
    server.close();
}
However, when I close the server like this, I get an Error: socket hang up in my continuous requests.
I thought that server.close() would make the server stop listening and accepting new requests, but keep processing all pending/open requests. In that case new requests should fail with Error: connect ECONNREFUSED instead.
What am I doing wrong?
Additional info: The server consists of a master and three forked children/workers. However, the master is not listening or binding to a port, only the children are, and they are shut down as stated above.
Looking at the docs, it sounds like server.close() only stops new connections from coming in, so I'm pretty sure the error comes from the already-open connections.
Maybe your shutDown() can check server.connections and wait until there are none left?
var shutDown = function () {
    if (server.connections) return setTimeout(shutDown, 1000);
    // Your shutdown code here
}
Slightly uglier (and much less scalable), but without the wait: you can keep track of connections as they occur and close them yourself, as outlined below.
server.on('connection', function (socket) {
    // Keep track of socket in a list. You'll want to
    // remove it from the list if it closes on its own.
});

var shutDown = function () {
    // Close all remaining connections in the list,
    // then run your shutdown code here.
}
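Putting the two pieces together, here is a minimal sketch of the tracking approach, assuming a plain http or net server in the variable server (names like sockets are illustrative):

var sockets = [];

server.on('connection', function (socket) {
    sockets.push(socket);
    socket.on('close', function () {
        // Remove the socket once it closes on its own.
        sockets.splice(sockets.indexOf(socket), 1);
    });
});

var shutDown = function () {
    server.on('close', function () {
        console.log('Server ' + process.pid + ' closed.');
        process.exit();
    });
    server.close();          // stop accepting new connections
    sockets.forEach(function (socket) {
        socket.destroy();    // forcibly end the ones still open
    });
};

Destroying the open sockets lets the server's 'close' event fire promptly instead of waiting for keep-alive connections to time out.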
