fileevent in Tcl is very slow - asynchronous

I am trying to write Tcl code that acts as a proxy on a Cisco router. The router is a Cisco 3700 running IOS 12.4, and the Tcl version is 8.3. I work with GNS (Graphical Network Simulator), so all components are virtual (including the Cisco router).
In my code I open two sockets to two different computers: sock1 and sock2.
I configured these sockets in the following way:
fconfigure $sock1 -translation binary -buffering none -blocking 0
fconfigure $sock2 -translation binary -buffering none -blocking 0
Now I tried to transfer information between them (proxy).
From what I have read, the language is single-threaded, so I need to use events. I therefore created two file event handlers that call a proxy function:
fileevent $sock1 readable [list proxy $sock1 $sock2]
fileevent $sock2 readable [list proxy $sock2 $sock1]
The proxy function reads data from the first socket and sends it to the other socket.
The code works: I transferred RDP and SMB over this proxy. The problem is that it is really slow: it takes something like 1000-1100 ms, so I can't use Remote Desktop through the proxy and even smbclient is very slow. The proxy function itself is fast (I checked it, and I also printed at the start and at the end of the function). Therefore I assume that the notifications from the OS are very slow (or Tcl executes the script slowly). In addition, I ran Wireshark on both sides of the Cisco, and about a second passes between the incoming message and the outgoing message.
Some more information:
I want several clients to be able to communicate at the same time, so my Tcl code creates a server socket:
set server_socket [socket -server handle_conn $port]
vwait is_finish
and the function "handle_conn" opens socket to the second side and create file event handlers:
proc handle_conn{sock1 addr port} {
CREATE THE SECOND SOCKET (sock2)
fileevent $sock1 readable [list proxy $sock1 $sock2]
fileevent $sock2 readable [list proxy $sock2 $sock1]
}
Therefore I need asynchronous code. (I tried to write a synchronous version: it works fast, but then I can't create more than one connection at the same time, so for example the proxy doesn't work with a program that needs two ports, or with two different programs at the same time.)
I can't tell whether the problem is with fconfigure, with events in Tcl, with GNS, or something else.
I hope you can help!
Edit:
proc proxy {s1 s2} {
    if {([eof $s1] || [eof $s2]) || ([catch {read $s1} data] || [catch {puts -nonewline $s2 $data}])} {
        catch {close $s1}
        catch {close $s2}
    }
}

I find it curious that the code is slow for you; Tcl's fast enough to be used to implement full web servers handling complex content. It makes me suspect that something else is going on. For example, the proxy command sounds like it is mainly just copying bytes from one channel to another, but there are slow ways to do this and there are fast ways. One of the best methods is to put both channels in binary mode (fconfigure $chan -translation binary) and then use fcopy in asynchronous mode to move the bytes over; it has been internally optimised to use efficient buffer sizes and limit the amount of copying between memory buffers. Here's how a proxy command might look:
proc proxy {sourceChannel destinationChannel} {
    fconfigure $sourceChannel -translation binary
    fconfigure $destinationChannel -translation binary
    fcopy $sourceChannel $destinationChannel -command [list \
        copydone $sourceChannel $destinationChannel]
}
The copydone procedure gets called when everything is moved. Here's a basic example, but you might need to be a bit more careful since you've got copies going in both directions:
proc copydone {src dst numBytes {errorMsg ""}} {
    # $numBytes bytes of data were moved
    close $src
    close $dst
    if {$errorMsg != ""} {
        puts stderr "error in fcopy: $errorMsg"
    }
}
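For the two-directions case, here is a rough sketch of one way to handle it (my own illustration, not tested on Tcl 8.3; the proxyPair and ::copies names are made up for this example): count the outstanding copies per connection pair and only close the sockets once both background copies have finished.
proc proxyPair {sock1 sock2} {
    fconfigure $sock1 -translation binary
    fconfigure $sock2 -translation binary
    # Two background copies are outstanding for this pair, one per direction.
    set ::copies($sock1,$sock2) 2
    fcopy $sock1 $sock2 -command [list bothdone $sock1 $sock2]
    fcopy $sock2 $sock1 -command [list bothdone $sock1 $sock2]
}

proc bothdone {sock1 sock2 numBytes {errorMsg ""}} {
    if {$errorMsg != ""} {
        puts stderr "error in fcopy after $numBytes bytes: $errorMsg"
    }
    # Only tear the pair down when both directions have completed.
    if {[incr ::copies($sock1,$sock2) -1] == 0} {
        unset ::copies($sock1,$sock2)
        catch {close $sock1}
        catch {close $sock2}
    }
}
A real proxy would probably also want to abort the copy in the other direction as soon as one side reports an error, but this shows the general shape.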
If it wasn't for the fact that you are running on a Cisco device, I'd also suggest upgrading the version of Tcl in use; formally, 8.3 hasn't been supported for a very long time.

Related

golang errors with bind address already in use even though nothing is running on the port

I have a setup in Go which basically gets a free port from the OS and then starts an HTTP server on it. It started to give random errors about port binding failures. I simplified it into the following program, which seems to error after grabbing a few free ports. It happens very randomly, and there is no real process running on the port it errors on. It doesn't make sense to me why this should fail. Any help would be appreciated.
Output of the program:
..
..
58479
..
..
58867
58868
58869
..
bound well! 58867
bound well! 58868
bound well! 58869
..
..
..
2015/04/28 09:05:09 Error while binding port: listen tcp :58479: bind: address already in use
I made sure to check that the free port that came out never repeated.
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	for {
		l, _ := net.Listen("tcp", ":0")
		var port = l.Addr().String()[5:]
		l.Close()
		fmt.Println(port)
		go func() {
			l1, err := net.Listen("tcp", ":"+port)
			if err != nil {
				log.Fatal("Error while binding port: ", err.Error())
			} else {
				fmt.Println("bound well! ", port)
			}
			http.Serve(l1, nil)
		}()
	}
}
What you are doing is checking whether the port is free at one point in time and then trying to use it based on the fact that it was free in the past. This is not going to work.
What happens is this: with every iteration of the for loop, you generate a port number and make sure it's free. Then you spawn a goroutine with the intention of using this port (which has already been released back to the pool of free ports). You don't really know when the goroutine kicks in. It might be activated after the main goroutine (the for loop) has just generated another free port (maybe the same one again?), or maybe another process has taken this port in the meantime. Essentially you can have a race condition on a single port.
After some more research:
There's a small caveat though. Two different sockets can be bound to the same ip+port as long as the local+remote pair is unique. I once wrote a response about it. So when I was creating a listener with :0, I was able to get a "collision", as shown by netstat -an:
10.0.1.11.65245 *.* LISTEN
10.0.1.11.65245 17.143.164.101.5223 ESTABLISHED
Now, the thing is that if you want to explicitly bind a socket to a port that is being used this way, it is not possible, probably because you can only specify the local address, and the remote address isn't known until the call to listen or connect (we're talking about syscalls now, not the Go interface). In other words, when you leave the port unspecified, the OS has a wider choice. So if you happen to get a local address that is also being used by another socket, you're unable to bind to it manually.
How to solve it:
As I've mentioned in the comments, your server process should be using the :0 notation so that the OS can choose an available port. Once it's listening, the address should be announced to interested processes. You can do that, for example, through a file or on standard output.
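A minimal sketch of that approach (my own illustration, not the poster's code): keep the listener obtained from ":0" instead of closing it and re-binding, and announce the address it actually received.
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	// Let the OS pick a free port and keep this listener; never close and re-bind.
	l, err := net.Listen("tcp", ":0")
	if err != nil {
		log.Fatal(err)
	}
	// Announce the chosen address (on standard output, in a file, etc.)
	// so interested processes can find the server.
	fmt.Println("listening on", l.Addr().String())
	log.Fatal(http.Serve(l, nil))
}
Because the listener is never closed, there is no window in which another process (or another iteration of the loop) can grab the port.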
Firstly I check the port:
$ lsof -i :8080
The results are:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
WeChat 1527 wonz 241u IPv4 0xbfe135d7b32e86f1 0t0 TCP wonzdembp:63849->116.128.133.101:http-alt (ESTABLISHED)
__debug_b 41009 wonz 4u IPv6 0xbfe135e149b1e659 0t0 TCP *:http-alt (LISTEN)
So I kill that PID:
$ kill 41009
Then it works.
It is possible you were previously running or debugging an application on this port, and it did not shut down cleanly. The process might still be hanging out in your system's memory. Kill that process completely, and any other network daemons that might be lurking in the shadows, and try to run your application again.
If you haven't checked for this already, you can use (if using Linux) top, htop, or any GUI system monitor like Windows' Task Manager, Gnome3's System Monitor or KDE's KSysGuard to search for the offending process.
For example, I have observed that Visual Studio Code's debugger/runner utility (F5/Ctrl+F5) does not always clean up the process, especially if you hit F5 too quickly and the old debugger did not shut down.
Use reuseport to effectively reuse the port for listening:
"github.com/libp2p/go-reuseport"
l, err := reuseport.Listen("tcp", ":"+strconv.Itoa(tcpPort))
instead of
l1, err := net.Listen("tcp", ":"+port)

Concurrent TCP Server in Tcl

I have to write a TCP server for my legacy (standalone) application, to create a client-server interface for it.
I am planning to write a pre-forked concurrent server (I can't use threads because of thread-safety issues). I need the following two things.
Q. A simple example program (maybe an echo server) explaining the concerns and ingredients of a pre-forked concurrent server.
Q. The server will exchange data in JSON format. How should I configure the client socket so that the server knows whether the client has completely written the JSON to the channel or is still in the process of writing?
Why use threads or forks? Just use Tcl's event-driven model.
proc accept {sock host port} {
    fconfigure $sock -blocking 0 -buffering line
    fileevent $sock readable [list readsock $sock]
}

proc readsock {sock} {
    global sockbuf
    if {[gets $sock data] < 0} {
        if {[eof $sock]} {
            # Socket was closed by remote
            unset sockbuf($sock)
            close $sock
            return
        }
        # No complete line available yet; wait for the next readable event
        return
    }
    append sockbuf($sock) $data\n
    # Check if you got all the necessary data. For Tcl lists, use [info complete]
    if {[info complete $sockbuf($sock)]} {
        set data $sockbuf($sock)
        unset sockbuf($sock)
        # process data
        puts $sock $data
    }
}

socket -server accept 12345
vwait forever; # This will enter the event loop.
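For the JSON part of the question, the socket configuration itself can't tell you when a message is complete; the framing has to. One simple convention (an assumption on my part; acceptJson and readjson are hypothetical names) is to have the client terminate each JSON message with a newline, so a complete message is simply a complete line:
proc acceptJson {sock host port} {
    fconfigure $sock -blocking 0 -buffering line
    fileevent $sock readable [list readjson $sock]
}

proc readjson {sock} {
    # gets returns -1 while the line is incomplete or at end of file
    if {[gets $sock line] < 0} {
        if {[eof $sock]} { close $sock }
        return
    }
    # $line now holds one complete JSON message; hand it to your JSON parser here
    puts $sock $line
}

socket -server acceptJson 12346
Another common option is length-prefixed framing, where the client first sends the byte count of the JSON that follows; either way it is the agreed framing, not fconfigure, that tells the server the client has finished writing.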
If you really need (or want) to use threads, thread safety is not a problem in Tcl either.
But using forks usually ends up in reimplementing more or less the thread API.
If you really want to fork, then here are some of the problems that you will encounter:
You have to communicate with the children. I suggest using pipes.
You have to pass the socket from the master to the other children. This might be possible with Unix domain sockets at the C level, but I don't know of any Tcl extension that can do that.
You have to write the pool stuff yourself (or use a C library for that).
IMHO forks are not worth all the effort needed.

Erlang socket doesn't receive until the second setopts {active,once}

First I would like to apologize: I'm giving a lot of information to make the problem as clear as possible. Please let me know if there's still anything that needs clarifying.
(Running Erlang R13B04, kernel 2.6.18-194, CentOS 5.5)
I have a very strange problem. I have the following code to listen and process sockets:
%Opts used to make listen socket
-define(TCP_OPTS, [binary, {packet, raw}, {nodelay, true}, {reuseaddr, true}, {active, false}, {keepalive, true}]).

%Acceptor loop which spawns off sock processors when connections
%come in
accept_loop(Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Socket} ->
            Pid = spawn(fun() -> ?MODULE:process_sock(Socket) end),
            gen_tcp:controlling_process(Socket, Pid);
        {error, _} -> do_nothing
    end,
    ?MODULE:accept_loop(Listen).

%Probably not relevant
process_sock(Sock) ->
    case inet:peername(Sock) of
        {ok, {Ip, _Port}} ->
            case Ip of
                {172,16,_,_} -> Auth = true;
                _ -> Auth = lists:member(Ip, ?PUB_IPS)
            end,
            ?MODULE:process_sock_loop(Sock, Auth);
        _ -> gen_tcp:close(Sock)
    end.

process_sock_loop(Sock, Auth) ->
    try inet:setopts(Sock, [{active, once}]) of
        ok ->
            receive
                {tcp_closed, _} ->
                    ?MODULE:prepare_for_death(Sock, []);
                {tcp_error, _, etimedout} ->
                    ?MODULE:prepare_for_death(Sock, []);
                %Not getting here
                {tcp, Sock, Data} ->
                    ?MODULE:do_stuff(Sock, Data);
                _ ->
                    ?MODULE:process_sock_loop(Sock, Auth)
            after 60000 ->
                ?MODULE:process_sock_loop(Sock, Auth)
            end;
        {error, _} ->
            ?MODULE:prepare_for_death(Sock, [])
    catch _:_ ->
        ?MODULE:prepare_for_death(Sock, [])
    end.
This whole setup normally works wonderfully, and has been working for the past few months. The server operates as a message-passing server with long-held TCP connections, and it holds on average about 100k connections. However, now we're trying to use the server more heavily. We're making two long-held connections (in the future probably more) to the Erlang server and sending a few hundred commands every second on each of those connections. Each of those commands, in the common case, spawns off a new thread which will probably make some kind of read from mnesia and send some messages based on that.
The strangeness comes when we try to test those two command connections. When we turn on the stream of commands, any new connection has about a 50% chance of hanging. For instance, using netcat, if I connect and send the string "blahblahblah", the server should immediately return an error. In doing this it won't make any calls outside the thread (since all it's doing is trying to parse the command, which will fail because blahblahblah isn't a command). But about 50% of the time (when the two command connections are running), typing in blahblahblah results in the server just sitting there for 60 seconds before returning that error.
In trying to debug this I pulled up Wireshark. The TCP handshake always happens immediately, and when the first packet from the client (netcat) is sent, it is acked immediately, telling me that the kernel's TCP stack isn't the bottleneck. My only guess is that the problem lies in the process_sock_loop function. It has a receive which will go back to the top of the function after 60 seconds and try again to get more from the socket. My best guess is that the following is happening:
1. Connection is made, thread moves on to process_sock_loop
2. {active,once} is set
3. Thread receives, but doesn't get data even though it's there
4. After 60 seconds thread goes back to the top of process_sock_loop
5. {active,once} is set again
6. This time the data comes through, things proceed as normal
Why this would be, I have no idea; when we turn those two command connections off, everything goes back to normal and the problem goes away.
Any ideas?
It's likely that your first call to set {active,once} is failing due to a race condition between your call to spawn and your call to controlling_process.
It will be intermittent, likely based on host load.
When doing this, I'd normally spawn a function that blocks on something like:
{take,Sock}
and then call your loop on the sock, setting {active,once}.
So you'd change the acceptor to spawn, set controlling_process, and then do Pid ! {take,Sock}, or something to that effect.
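A rough sketch of that rearrangement (my own illustration; wait_for_sock is a hypothetical helper, not part of the original code):
accept_loop(Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Socket} ->
            Pid = spawn(fun() -> wait_for_sock() end),
            gen_tcp:controlling_process(Socket, Pid),
            %% The new process only learns about the socket now, so its call
            %% to setopts happens after controlling_process has completed.
            Pid ! {take, Socket};
        {error, _} -> do_nothing
    end,
    ?MODULE:accept_loop(Listen).

wait_for_sock() ->
    receive
        {take, Sock} -> ?MODULE:process_sock(Sock)
    end.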
Note: I don't know whether the {active,once} call actually throws when you aren't the controlling process; if it doesn't, then what I just said makes sense.

Prevent FIFO from closing / reuse closed FIFO

Consider the following scenario:
A FIFO named test is created. In one terminal window (A) I run cat <test and in another (B) cat >test. It is now possible to write in window B and get the output in window A. It is also possible to terminate process A and relaunch it and still be able to use this setup as expected. However, if you terminate the process in window B, B will (as far as I know) send an EOF through the FIFO to process A and terminate that as well.
In fact, if you run a process that does not terminate on EOF, you still won't be able to use the FIFO you redirected to that process, which I think is because the FIFO is considered closed.
Is there any way to work around this problem?
The reason why I ran into this problem is that I'd like to send commands to my Minecraft server running in a screen session, for example: echo "command" >FIFO_to_server. This is probably possible with screen by itself, but I'm not very comfortable with screen; I think a solution using only pipes would be simpler and cleaner.
A is reading from a file. When it reaches the end of the file, it stops reading. This is normal behavior, even if the file happens to be a FIFO. You now have four approaches.
1. Change the code of the reader to make it keep reading after the end of the file. That amounts to saying the input file is infinite, and reaching the end of the file is just an illusion. Not practical for you, because you'd have to change the Minecraft server code.
2. Apply Unix philosophy. You have a writer and a reader who don't agree on protocol, so you interpose a tool that connects them. As it happens, there is such a tool in the Unix toolbox: tail -f. tail -f keeps reading from its input file even after it sees the end of the file. Make all your clients talk to the pipe, and connect tail -f to the Minecraft server:
tail -n +1 -f client_pipe | minecraft_server &
3. As mentioned by jilles, use a trick: pipes support multiple writers, and only become closed when the last writer goes away. So make sure there's a client that never goes away.
while true; do sleep 999999999; done >client_pipe &
4. The problem is that the server is fundamentally designed to handle a single client. To handle multiple clients, you should change to using a socket. Think of sockets as "meta-pipes": connecting to a socket creates a pipe, and once the client disconnects, that particular pipe is closed, but the server can accept more connections. This is the clean approach, because it also ensures that you won't have mixed-up data if two clients happen to connect at the same time (using pipes, their commands could be interspersed). However, it requires changing the Minecraft server.
Start a process that keeps the FIFO open for writing and keeps running indefinitely. This will prevent readers from seeing an end-of-file condition.
From this answer:
On some systems like Linux, <> on a named pipe (FIFO) opens the named pipe without blocking (without waiting for some other process to open the other end), and ensures the pipe structure is left alive.
So you could do:
cat <>up_stream >down_stream
# the cat pipeline keeps running
echo 1 > up_stream
echo 2 > up_stream
echo 3 > up_stream
However, I can't find documentation about this behavior, so it could be an implementation detail specific to some systems. I tried the above on macOS and it works.
You can feed multiple inputs into the pipe you created with mkfifo yourpipe by putting the commands you need in parentheses, separated by semicolons:
(cat file1; cat file2; ls -l;) > yourpipe

Is there a way to close a Unix socket for only reading or writing?

Is there a way to only close "one end" of a TCP socket to cleanly indicate one side of a connection is done writing to the connection? (Just like you do with a pipe in every Unix pipe tutorial ever.) Or should I use some in-band solution like a sentinel value or some such?
You can shut down a socket for reading or writing using the second parameter to the shutdown call:
shutdown(sock, SHUT_RD)
shutdown(sock, SHUT_WR)
If the server is doing the writing, and does a shutdown() for write, the client should get an end of file when it tries to read (rather than blocking and waiting for data to arrive). It will however still be able to write to the socket.
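As a small illustrative sketch (not from the answer above) of the writer's side in C: half-close the socket, then keep reading until the peer has finished.
#include <sys/socket.h>
#include <unistd.h>

void finish_sending(int sock)
{
    char buf[4096];

    shutdown(sock, SHUT_WR);            /* tell the peer we are done writing */
    while (read(sock, buf, sizeof buf) > 0) {
        /* we can still receive whatever the peer has left to send */
    }
    close(sock);                        /* read() returned 0: peer closed, release the fd */
}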
