LuaSocket TCP client can receive messages from server, but not vice versa

My client receives every message from the server just fine after connecting, but the server's output is always this:
Client #1 connected.
first packet error: timeout
Looping client #1
message error: timeout
Looping client #1
message error: timeout
Looping client #1
message error: timeout
...etc...
I've messed around with some of the values on each side. Usually it does nothing, but sometimes changing the server's timeout values makes the server hang forever on client:send() or client:receive(). I can never get both of them to exchange messages with each other. Why?
My goal is to have a client and server that each spend minimal time (<2-3 ms) per loop, so that both the client and the server can spend most of their time doing other things (this was part of a bigger program, but I isolated it for easier testing of the issue). I can't afford long, blocking calls, so I set the timeout really low.
Server source:
socket = require("socket");
server = assert(socket.bind("*", 534));
ip, port = server:getsockname();
server:settimeout(0.03);
nClients = 0;
clients = {};
print("Server started.");
while true do
    local client, err = server:accept();
    if (not err) then
        nClients = nClients + 1;
        print("Client #"..nClients.." connected.");
        clients[nClients] = client;
        clients[nClients]:settimeout(0.03);
        clients[nClients]:send("test");
        local msg, err = clients[nClients]:receive();
        if (not err) then
            print("we got it");
        else
            print("first packet error: "..err);
        end
    end
    for i=1,nClients do
        if (clients[i] ~= nil) then
            print("Looping client #"..i);
            local msg, err = clients[i]:receive();
            if (not err) then
                print("message received: "..msg);
            else
                print("message error: "..err);
            end
            clients[nClients]:send("another message");
        end
    end
    socket.sleep(0.01);
end
Client source:
socket = require("socket");
tcp = assert(socket.tcp());
ip, port = "127.0.0.1", 534;
print("Client started.");
tcp:settimeout(0.03);
tcp:connect(ip, port);
while true do
    local a, b, msg = tcp:receive();
    if (msg and #msg > 1) then
        print("message received: "..msg);
    end
    tcp:send("message");
    socket.sleep(0.01);
end

Related

Vue front end to Go server (HTTP) and clients connected to Go server (TCP) error

I'm currently creating a Go TCP server that handles file sharing between multiple Go clients, and that works fine. However, I'm also building a front end using Vue.js that shows some server stats like the number of users, bytes sent, etc.
The problem occurs when I include the 'http.ListenAndServe(":3000", nil)' call that handles the requests from the front end of the server. Is it impossible to have a TCP server and an HTTP server in the same Go file?
If so, how can I link the three (front end, Go server, clients)?
Here is the code of 'server.go':
func main() {
    // Create TCP server
    serverConnection, error := net.Listen("tcp", ":8085")
    // Check if an error occurred
    // Note: because 'go' forces you to use each variable you declare, error
    // checking is not optional, and maybe that's good
    if error != nil {
        fmt.Println(error)
        return
    }
    // Create server Hub
    serverHb := newServerHub()
    // Close the server just before the program ends
    defer serverConnection.Close()
    // Handle Front End requests
    http.HandleFunc("/api/thumbnail", requestHandler)
    fs := http.FileServer(http.Dir("../../tcp-server-frontend/dist"))
    http.Handle("/", fs)
    fmt.Println("Server listening on port 3000")
    http.ListenAndServe(":3000", nil)
    // Each client sends data, that data is received in the server by a client struct
    // the client struct then sends the data, which is a request to a 'go' channel, which is similar to a queue
    // Somehow this for loop runs only when a new connection is detected
    for {
        // Accept a new connection if a request is made
        // serverConnection.Accept() blocks the for loop
        // until a connection is accepted, then it blocks the for loop again!
        connection, connectionError := serverConnection.Accept()
        // Check if an error occurred
        if connectionError != nil {
            fmt.Println("1: Woah, there's a mistake here :/")
            fmt.Println(connectionError)
            fmt.Println("1: Woah, there's a mistake here :/")
            // return
        }
        // Create new user
        var client *Client = newClient(connection, "Unregistered_User", serverHb)
        fmt.Println(client)
        // Add client to serverHub
        serverHb.addClient(client)
        serverHb.listClients()
        // go client.receiveFile()
        go client.handleClientRequest()
    }
}
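A note on the blocking question: http.ListenAndServe blocks the goroutine that calls it, so the accept loop after it is only reached if ListenAndServe returns an error. A minimal, self-contained sketch of one way to run both servers in the same program (my own illustration only; it leaves out the hub, requestHandler, and client handling from the original server.go, and the ports are just examples):
package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
)

func main() {
    ln, err := net.Listen("tcp", ":8085")
    if err != nil {
        log.Fatal(err)
    }
    defer ln.Close()

    // HTTP server for the front end, started concurrently so it doesn't
    // block the TCP accept loop below.
    go func() {
        fmt.Println("HTTP server listening on port 3000")
        log.Fatal(http.ListenAndServe(":3000", nil))
    }()

    // TCP accept loop for the file-sharing clients.
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Println("accept error:", err)
            continue
        }
        go func(c net.Conn) {
            defer c.Close()
            fmt.Println("client connected:", c.RemoteAddr())
            // handle the client here
        }(conn)
    }
}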

Dialing TCP Error: Timeout or i/o Timeout after a while of high-concurrency requests

I recently ran into a problem when developing a high-concurrency HTTP client via valyala/fasthttp: the client works fine for the first ~15K requests, but after that more and more dial tcp4 127.0.0.1:80: i/o timeout and dialing to the given TCP address timed out errors occur.
Sample Code
var Finished = 0
var Failed = 0
var Success = 0
func main() {
    for i := 0; i < 1000; i++ {
        go get()
    }
    start := time.Now().Unix()
    for {
        fmt.Printf("Rate: %.2f/s Success: %d, Failed: %d\n", float64(Success)/float64(time.Now().Unix()-start), Success, Failed)
        time.Sleep(100 * time.Millisecond)
    }
}

func get() {
    ticker := time.NewTicker(time.Duration(100+rand.Intn(2900)) * time.Millisecond)
    defer ticker.Stop()
    client := &fasthttp.Client{
        MaxConnsPerHost: 10000,
    }
    for {
        req := &fasthttp.Request{}
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        req.Header.SetConnectionClose()
        res := &fasthttp.Response{}
        err := client.DoTimeout(req, res, 5*time.Second)
        if err != nil {
            fmt.Println(err.Error())
            Failed++
        } else {
            Success++
        }
        Finished++
        client.CloseIdleConnections()
        <-ticker.C
    }
}
Detail
The server is built on labstack/echo/v4. When the client gets a timeout error, the server doesn't report any error, and performing the request manually via Postman or a browser like Chrome works fine.
The client runs pretty well for the first ~15K requests, but after that more and more timeout errors occur and the output Rate decreases. I searched Google and GitHub and found this issue, which may be the closest match, but I didn't find a solution.
Another tiny problem...
As you can notice, when the client starts, it first generates a few "the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection" errors, then works fine until around the 15K mark, and then starts generating more and more timeout errors. Why does it generate the connection-closed error at the beginning?
Machine Info
Macbook Pro 14 2021 (Apple M1 Pro) with 16GB Ram and running macOS Monterey 12.4
So basically, if you try to open a connection and then close it as soon as possible, it's not as simple as "connection #1 uses a port and then immediately gives it back"; there's a lot of processing that needs to be done. So if you want to send many requests at the same time, I think it's better to reuse connections as much as you can.
For example, in fasthttp:
req := fasthttp.AcquireRequest()
res := fasthttp.AcquireResponse()
defer fasthttp.ReleaseRequest(req)
defer fasthttp.ReleaseResponse(res)
// Then do the request below
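Continuing that idea, here is a minimal sketch of a reuse-friendly request loop (the URL, timeout, and MaxConnsPerHost mirror the question; the rest is my own illustration, not the answerer's code):
package main

import (
    "fmt"
    "time"

    "github.com/valyala/fasthttp"
)

func main() {
    client := &fasthttp.Client{MaxConnsPerHost: 10000}
    for i := 0; i < 10; i++ {
        req := fasthttp.AcquireRequest()
        res := fasthttp.AcquireResponse()
        req.SetRequestURI("http://127.0.0.1:80/require?number=10")
        req.Header.SetMethod(fasthttp.MethodGet)
        // No SetConnectionClose() here, so the underlying connection
        // stays in the client's pool and can be reused for the next request.
        if err := client.DoTimeout(req, res, 5*time.Second); err != nil {
            fmt.Println(err.Error())
        } else {
            fmt.Println("status:", res.StatusCode())
        }
        fasthttp.ReleaseRequest(req)
        fasthttp.ReleaseResponse(res)
    }
}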

Keeping a TCP connection alive in a goroutine and checking if it times out when the connection is lost

I have a TCP server up and running, listening on a port, and a goroutine for handling the connections. I wonder if it's possible to have a goroutine running for every connection, keeping them alive with net.SetKeepAlive(true). Also, can I add error handling so that if the connection times out it will execute cleanup functions, like removing the connection from a list?
Handle routine:
func handleConnection(conn net.Conn, rec chan string) {
    var item QueItem
    buf := make([]byte, bufSize)
    l, err := conn.Read(buf)
    if err != nil || l < 0 {
        fmt.Println("Error reading from conn: ", conn)
        fmt.Println("Error reading: ", err)
    }
    err = json.Unmarshal(buf[:l], &item)
    if err != nil {
        fmt.Println("Error converting to JSON", err)
    }
    fmt.Printf("Received : %+v\n", item)
    fmt.Println("received from:", conn.RemoteAddr())
    rec <- item.IP
}
TCPserver:
for {
    conn, err := ln.Accept()
    if err != nil {
        fmt.Println("No accept", err)
        log.Println("Unable to accept connection", err)
    }
    go handleConnection(conn, recived)
}
To keep a check on established TCP connections, keep-alive can be done in two ways.
At the application level:
In idle conditions, both client and server agree on a protocol to send pre-determined packets to each other. The lack of a message from the peer within a certain time can signal a problem with the connection.
At the TCP layer:
Enable the TCP stack to check the connection status. The TCP layer on which this mechanism is enabled sends keep-alive messages at a regular, pre-determined interval and expects the peer's TCP stack to answer with keep-alive ACKs. The absence of an ACK after the necessary re-transmissions signals a connection problem, and the application is duly notified.
I think net.SetKeepAlive(true) is the Go way of telling the TCP stack to do keep-alive. You don't need any special constructs in the goroutine.
TCP keep-alive works well; the application need not be burdened with checking the connection status.
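For the cleanup part of the question, a minimal per-connection sketch might look like the following. This is my own illustration, not the asker's code: removeConn is a hypothetical stand-in for "remove the connection from a list", and the port and keep-alive period are arbitrary examples.
package main

import (
    "fmt"
    "net"
    "time"
)

// removeConn is a hypothetical cleanup hook standing in for
// "remove the connection from a list" in the question.
func removeConn(conn net.Conn) {
    fmt.Println("cleaning up", conn.RemoteAddr())
}

func handle(conn net.Conn) {
    defer func() {
        conn.Close()
        removeConn(conn)
    }()

    // Ask the TCP stack to probe the peer while the connection is idle.
    if tcp, ok := conn.(*net.TCPConn); ok {
        tcp.SetKeepAlive(true)
        tcp.SetKeepAlivePeriod(30 * time.Second)
    }

    buf := make([]byte, 4096)
    for {
        // If the peer disappears, the keep-alive probes eventually fail and
        // Read returns an error, which triggers the deferred cleanup above.
        n, err := conn.Read(buf)
        if err != nil {
            fmt.Println("connection gone:", err)
            return
        }
        fmt.Printf("received %d bytes\n", n)
    }
}

func main() {
    ln, err := net.Listen("tcp", ":9090") // arbitrary example port
    if err != nil {
        panic(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go handle(conn)
    }
}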

net.TCPConn allowing Write after FIN packet

I'm trying to write unit tests for some server-side code, but I'm having trouble making my shutdown test cases deterministic. It seems a loopback TCP connection isn't correctly handling a clean shutdown. I've reproduced this in a sample app which does the following in lockstep:
1. Create a client & server connection.
2. Verify connectivity by sending a message successfully from client to server.
3. Use channels to tell the server to call conn.Close() and wait until that call has completed.
4. (Try to) verify the connection is cleanly broken by calling Write on the client connection again.
Step 4 succeeds without error. I've tried using a json.Encoder and a bare call to TCPConn.Write. I checked the traffic with Wireshark: the server sent a FIN packet, but the client never does (even with a 1 s sleep). The server even sent an RST packet in response to (4), and the client's conn.Write still returned nil for its error.
This seems totally bonkers. Am I missing something here? Currently running Go v1.2.1/Darwin
Edit: Obligatory repro
package main
import (
    "bufio"
    "fmt"
    "net"
)

var (
    loopback    = make(chan string)
    shouldClose = make(chan struct{})
    didClose    = make(chan struct{})
)

func serve(listener *net.TCPListener) {
    conn, err := listener.Accept()
    if err != nil {
        panic(err)
    }
    s := bufio.NewScanner(conn)
    if !s.Scan() {
        panic(fmt.Sprint("Failed to scan for line: ", s.Err()))
    }
    loopback <- s.Text() + "\n"
    <-shouldClose
    conn.Close()
    close(didClose)
    if s.Scan() {
        panic("Expected error reading from a socket closed on this side")
    }
}

func main() {
    listener, err := net.ListenTCP("tcp", &net.TCPAddr{})
    if err != nil {
        panic(err)
    }
    go serve(listener)
    conn, err := net.Dial("tcp", listener.Addr().String())
    if err != nil {
        panic(fmt.Sprint("Dialer got error ", err))
    }
    oracle := "Mic check\n"
    if _, err = conn.Write([]byte(oracle)); err != nil {
        panic(fmt.Sprint("Dialer failed to write oracle: ", err))
    }
    test := <-loopback
    if test != oracle {
        panic("Server did not receive the value sent by the client")
    }
    close(shouldClose)
    <-didClose
    // For giggles, I can also add a <-time.After(500 * time.Millisecond)
    if _, err = conn.Write([]byte("This should fail after active disconnect")); err == nil {
        panic("Sender 'successfully' wrote to a closed socket")
    }
}
This is how an active close of a TCP connection works. When the client detects that the server has closed, it is then expected to close its half of the connection.
In your case, instead of closing the client you're sending more data. This causes the server to send an RST packet to force the connection closed since the message received isn't valid.
If you're still unsure, here's an equivalent Python client and server that display the same behavior. (I find using Python helpful, since it closely follows the underlying BSD socket API, without using C.)
Server:
import socket, time
server = socket.socket()
server.bind(("127.0.0.1", 9999))
server.listen(1)
sock, addr = server.accept()
msg = sock.recv(1024)
print msg
print "closing"
sock.close()
time.sleep(3)
print "done"
Client:
import socket, time
sock = socket.socket()
sock.connect(("127.0.0.1", 9999))
sock.send("test\n")
time.sleep(1)
print "sending again!"
sock.send("no error here")
time.sleep(1)
print "sending one last time"
sock.send("broken pipe this time")
To properly detect a remote close on the connection, you should call Read() and look for an io.EOF error in return.
// we technically need to try and read at least one byte,
// or we will get an EOF even if the connection isn't closed.
buff := make([]byte, 1)
if _, err := conn.Read(buff); err != io.EOF {
    panic("connection not closed")
}

Redis long-polling Pub/Sub frequent message blocking

I'm trying to wrap my head around the Redis Pub/Sub API and set up a long-polling server.
This Lua script subscribes to a 'test' channel and returns new messages received:
nginx.conf:
location /poll {
    lua_need_request_body on;
    default_type 'text/plain';
    content_by_lua_file '/usr/local/nginx/html/poll.lua';
}
poll.lua:
local redis = require "redis";
local red = redis:new();
local cjson = require "cjson";
red:set_timeout(30000) -- 30 sec
local resCon, err = red:connect("127.0.0.1", 6379)
if not resCon then
    ngx.print("error")
    return
end

local resSub, err = red:subscribe('r:' .. ngx.var["arg_r"]:gsub('%W',''))
if not resSub then
    ngx.print("error")
    return
end
if resSub == ngx.null then
    ngx.print("error")
    return
end

local resMsg, err = red:read_reply()
if not resMsg then
    ngx.say("0")
    return
end
ngx.say(cjson.encode(resMsg))
client.js:
var tmpR = 'test';
function poll() {
    $.get('/poll', {'r':tmpR}, function(data){
        if (data !== "error") {
            console.log(data);
            window.setTimeout(function(){
                poll();
            }, 1000);
        } else {
            console.log('poll fail');
        }
    });
}
Now, if I send publish r:test hello from redis-cli, I receive the message on the client and the server responds to redis-cli with 1. But if I send two messages quickly, the second message doesn't broadcast and the server responds with 0.
Are my channels only capable of receiving one message per second, or is this a throttle on the frequency of messages a user can broadcast to a channel?
Is this the right way to approach this polling server on nginx, assuming many users may be connected at one time? Would it be more efficient to use GET requests on a timer?
Given two consecutive messages, only one of them will have a subscriber listening for the result. No subscriber is listening when the second message is sent; the only subscriber is busy processing the previous result and returning it to the user.
Redis does not maintain a message queue or anything similar to make sure that previously listening clients receive the missed messages upon reconnect.
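To make the fire-and-forget behavior concrete, here is a minimal sketch in Go using the go-redis client (an assumption purely for illustration; the original setup uses OpenResty/Lua). A message published while nobody is subscribed is simply dropped, while one published to a live subscription is delivered.
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})

    // Published before anyone subscribes: no queue, so it is lost.
    n, _ := rdb.Publish(ctx, "r:test", "lost").Result()
    fmt.Println("receivers for first message:", n) // 0

    sub := rdb.Subscribe(ctx, "r:test")
    defer sub.Close()
    // Wait for the subscription confirmation before publishing again.
    if _, err := sub.Receive(ctx); err != nil {
        panic(err)
    }

    // Published while a subscriber is listening: delivered.
    rdb.Publish(ctx, "r:test", "delivered")

    msgCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
    defer cancel()
    msg, err := sub.ReceiveMessage(msgCtx)
    if err != nil {
        panic(err)
    }
    fmt.Println("got:", msg.Payload) // only "delivered" ever arrives
}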
