I have a UDP server that reflects every ping message it receives (this works well, I think). On the client side I would then like to do two things:
make sure that I fired off N (e.g. 10000) messages, and
count the number of correctly received responses.
It seems that, either because of the nature of UDP or because of the forkIO, my client code below ends prematurely and does not do any counting at all.
Also I am very surprised to see that the function tryOnePing returns the Int 4 250 times. Why could this be?
main = withSocketsDo $ do
    s <- socket AF_INET Datagram defaultProtocol
    hostAddr <- inet_addr host
    thread <- forkIO $ receiveMessages s
    -- is there any better way to eg to run that in parallel and make sure
    -- that sending/receiving are asynchronous?

    -- forM_ [0 .. 10000] $ \i -> do
    --     sendTo s "ping" (SockAddrInet port hostAddr)
    -- actually this would be preferred since I can discard the Int 4 that
    -- it returns but forM or forM_ are out of scope here?

    let tryOnePing i = sendTo s "ping" (SockAddrInet port hostAddr)
    pings <- mapM tryOnePing [0 .. 1000]
    let c = length $ filter (\x -> x == 4) pings

    -- killThread thread
    -- took that out to make sure the function receiveMessages does not
    -- end prematurely. still seems that it does

    sClose s
    print c
    -- return ()

receiveMessages :: Socket -> IO ()
receiveMessages socket = forever $ do
    -- also tried here forM etc. instead of forever but no joy
    let recOnePing i = recv socket 1024
    msg <- mapM recOnePing [0 .. 1000]
    let r = length $ filter (\x -> x == "PING") msg
    print r
    print "END"
The main problem here is that when your main thread finishes, all other threads get killed automatically. You have to get the main thread to wait for the receiveMessages thread, or it will in all likelihood simply finish before any responses have been received. One simple way of doing this is to use an MVar.
An MVar is a synchronized cell that can either be empty or hold exactly one value. The current thread will block if it tries to take from an empty MVar or insert into a full one.
In this case, we don't care about the value itself, so we'll just store a () in it.
We'll start with the MVar empty. Then the main thread will fork off the receiver thread, send all the packets, and try to take the value from the MVar.
import Control.Concurrent.MVar
main = withSocketsDo $ do
    -- prepare socket, same as before
    done <- newEmptyMVar
    -- we need to pass the MVar to the receiver thread so that
    -- it can use it to signal us when it's done
    forkIO $ receiveMessages sock done
    -- send pings, same as before
    takeMVar done -- blocks until receiver thread is done
In the receiver thread, we will receive all the messages and then put a () in the MVar to signal that we're done receiving.
receiveMessages socket done = do
    -- receive messages, same as before
    putMVar done () -- allows the main thread to be unblocked
This solves the main issue, and the program runs fine on my Ubuntu laptop, but there are a couple more things you want to take care of.
sendTo does not guarantee that the whole string will be sent. You'll have to check the return value to see how much was sent, and retry if not all of it was sent. This can happen even for a short message like "ping" if the send buffer is full.
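For example, a minimal sketch of such a retry loop, assuming the String-based sendTo from the old Network.Socket API used above (it returns the number of bytes actually sent); the name sendAllTo is made up here:

-- needs: import Control.Monad (when)
sendAllTo :: Socket -> String -> SockAddr -> IO ()
sendAllTo sock msg addr = do
    sent <- sendTo sock msg addr
    -- resend whatever part of the message has not been sent yet
    when (sent < length msg) $
        sendAllTo sock (drop sent msg) addr

You would then call sendAllTo in tryOnePing instead of sendTo.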
recv requires a connected socket. You'll want to use recvFrom instead. (Although it still works on my PC for some unknown reason).
Printing to standard output is not synchronized, so you might want to alter this so that the MVar will be used to communicate the number of received packets instead of just (). That way, you can do all the output from the main thread. Alternatively, use another MVar as a mutex to control access to standard output.
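Putting the last two points together, the receiver could look something like this (a sketch only, assuming the same 1001 pings as in the question, the String-based recvFrom, and a server that echoes back the string "ping"):

-- needs: import Control.Monad (forM)
receiveMessages :: Socket -> MVar Int -> IO ()
receiveMessages sock done = do
    msgs <- forM [0 .. 1000 :: Int] $ \_ -> do
        (msg, _len, _from) <- recvFrom sock 1024
        return msg
    -- hand the count of correct replies back to the main thread
    putMVar done (length (filter (== "ping") msgs))

The main thread would then end with received <- takeMVar done followed by print received, so all output happens in one thread.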
Finally, I recommend reading the documentation of Network.Socket, Control.Concurrent and Control.Concurrent.MVar carefully. Most of my answer is stitched together from information found there.
The current wrk configuration only allows sending continuous requests for a given duration (the duration parameter). Is there a way to use wrk to send a fixed number of requests and then exit?
My use case: I want to create a large number of threads and connections (e.g. 1000 threads with 100 connections per thread) and send instantaneous bursts towards the server.
You can do it with a Lua script:
local counter = 1

function response()
    if counter == 100 then
        wrk.thread:stop()
    end
    counter = counter + 1
end
Pass this script with the -s command line parameter.
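For example, an invocation along these lines (the script name and URL are placeholders; note that -c is the total connection count across all threads, so 1000 threads with 100 connections each means -c 100000):

wrk -t 1000 -c 100000 -d 60s -s stop_after_n.lua http://your-server:8080/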
I made changes to wrk to introduce new knobs. Let me know if anyone is interested in the patch and I can post it.
I added a -r option to send an exact number of requests and then bail out.
Artem,
I have this code-change in my fork:
https://github.com/bhakta0007/wrk
I'm a fresh Tcl user and I've recently been working on a script that uses a serial port. My greatest problem so far is that I can't effectively read my serial port.
I'm using fileevent to read, but it catches what was sent previously. However, if I don't send anything beforehand, it waits for external data and catches it. My code:
global forever

proc read_serial {serial} {
    set msg [read $serial]
    set listOfLetters [split $msg {}]
    set serialIPinHex ""
    foreach iChar $listOfLetters {      ;# conversion to hex
        scan $iChar "%c" decimalValue
        set hexValue [format "%02X" $decimalValue]
        set hexValue "0x$hexValue"
        set serialIPinHex "$serialIPinHex $hexValue"
    }
    puts "MCU RESPONSE: $serialIPinHex"
    global forever
    set forever 1                       ;# end event loop
}

set serial [open com1 r+]
fconfigure $serial -mode "19200,n,8,1"
fconfigure $serial -blocking 0 -buffering none
fconfigure $serial -translation binary -encoding binary
fileevent $serial readable [list read_serial $serial]

global forever

puts -nonewline $serial \x31\xce
flush $serial
delay 10
vwait forever

flush $serial
close $serial
The effect is "MCU RESPONSE: x31 xce", while it should wait for something to arrive over the serial port (in my understanding). I'm sure there is no echo function on the other end. Thanks in advance for your help. I hope my bug is not embarrassing, I spent the last few hours looking for it...
The main thing to note about serial ports is that they are slow. Really slow. A modern computer can run round the block three times before a serial port has passed one character over (well, metaphorically). That means you're going to get characters coming one at a time and you've got to deal with that.
You're advised to use read $serial 1 and to append that to a buffer you are keeping, or write your code to handle a single byte at a time. If you were using line-oriented messaging, you could take advantage of gets's good behaviour in non-blocking mode (the fblocked command is designed to support this), but read isn't quite so friendly (since it knows nothing about record separators).
Don't worry about missing a byte if you use read $serial 1; if there's one left, your callback will get called again immediately to deal with it.
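A minimal sketch of that buffering approach, reusing the hex-conversion idiom from the question and assuming (purely for illustration) that the reply you are waiting for is two bytes long:

set buffer ""

proc read_serial {serial} {
    global buffer forever
    # read exactly one byte and add it to the buffer
    append buffer [read $serial 1]
    # once the (assumed) two-byte reply has arrived, print it and end the event loop
    if {[string length $buffer] >= 2} {
        set hex ""
        foreach ch [split $buffer {}] {
            scan $ch "%c" dec
            append hex " 0x[format %02X $dec]"
        }
        puts "MCU RESPONSE:$hex"
        set forever 1
    }
}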
I have an FTDI USB serial device which I use via the termios serial API. I set up the port so that it will time-out on read() calls in half a second (by using the VTIME parameter), and this works on Linux as well as on FreeBSD. On OpenBSD 5.1, however, the read() call simply blocks forever when no data is available (see below.) I would expect read() to return 0 after 500ms.
Can anyone think of a reason that the termios API would behave differently under OpenBSD, at least with respect to the timeout feature?
EDIT: The no-timeout problem is caused by linking against pthread. Regardless of whether I'm actually using any pthreads, mutexes, etc., simply linking against that library causes read() to block forever instead of timing out based on the VTIME setting. Again, this problem only manifests on OpenBSD -- Linux and FreeBSD work as expected.
if ((sd = open(devPath, O_RDWR | O_NOCTTY)) >= 0)
{
    struct termios newtio;
    char input;

    memset(&newtio, 0, sizeof(newtio));

    // set options, including non-canonical mode
    newtio.c_cflag = (CREAD | CS8 | CLOCAL);
    newtio.c_lflag = 0;

    // when waiting for responses, wait until we haven't received
    // any characters for 0.5 seconds before timing out
    newtio.c_cc[VTIME] = 5;
    newtio.c_cc[VMIN] = 0;

    // set the input and output baud rates to 7812
    cfsetispeed(&newtio, 7812);
    cfsetospeed(&newtio, 7812);

    if ((tcflush(sd, TCIFLUSH) == 0) &&
        (tcsetattr(sd, TCSANOW, &newtio) == 0))
    {
        read(sd, &input, 1); // even though VTIME is set on the device,
                             // this read() will block forever when no
                             // character is available in the Rx buffer
    }
}
from the termios manpage:
Another dependency is whether the O_NONBLOCK flag is set by open() or
fcntl(). If the O_NONBLOCK flag is clear, then the read request is
blocked until data is available or a signal has been received. If the
O_NONBLOCK flag is set, then the read request is completed, without
blocking, in one of three ways:
1. If there is enough data available to satisfy the entire
request, and the read completes successfully the number of
bytes read is returned.
2. If there is not enough data available to satisfy the entire
request, and the read completes successfully, having read as
much data as possible, the number of bytes read is returned.
3. If there is no data available, the read returns -1, with errno
set to EAGAIN.
Can you check if this is the case?
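For example, something along these lines could report whether the flag is set on your descriptor sd (illustrative only, error handling omitted):

#include <fcntl.h>
#include <stdio.h>

/* report whether O_NONBLOCK is set on the descriptor, and clear it if so */
static void check_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags & O_NONBLOCK) {
        printf("O_NONBLOCK is set; clearing it\n");
        fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
    } else {
        printf("O_NONBLOCK is not set\n");
    }
}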
cheers.
Edit: OP traced the problem back to linking with pthreads, which caused the read function to block. Upgrading to OpenBSD > 5.2 resolved the issue, thanks to the change to the new rthreads implementation as the default threading library on OpenBSD. More info in guenther@'s EuroBSDCon 2012 slides.
I'm coding a simple web crawler and have generated a bunch of static files that I try to crawl with the code at the bottom. I have two issues/questions I can't figure out:
1.) Looping over the sequence 1..200 throws me an error exactly after 100 pages have been crawled:
** exception error: no match of right hand side value {error,socket_closed_remotely}
in function erlang_test_01:fetch_page/1 (erlang_test_01.erl, line 11)
in call from lists:foreach/2 (lists.erl, line 1262)
2.) How can I parallelize the requests, e.g. run 20 concurrent requests?
-module(erlang_test_01).
-export([start/0]).

-define(BASE_URL, "http://46.4.117.69/").

to_url(Id) ->
    ?BASE_URL ++ io_lib:format("~p", [Id]).

fetch_page(Id) ->
    Uri = to_url(Id),
    {ok, {{_, Status, _}, _, Data}} = httpc:request(get, {Uri, []}, [], [{body_format, binary}]),
    Status,
    Data.

start() ->
    inets:start(),
    lists:foreach(fun(I) -> fetch_page(I) end, lists:seq(1, 200)).
1. Error message
socket_closed_remotely indicates that the server closed the connection, maybe because you made too many requests in a short timespan.
2. Parallelization
Create 20 worker processes and one process holding the URL queue. Let each process ask the queue for a URL (by sending it a message). This way you can control the number of workers.
An even more "Erlangy" way is to spawn one process for each URL! The upside to this is that your code will be very straightforward. The downside is that you cannot control your bandwidth usage or number of connections to the same remote server in a simple way.
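For illustration, a minimal sketch of the one-process-per-URL variant, reusing fetch_page/1 from your module (the catch ensures a failed request still sends a message back, so the collector never hangs; remember to export start_parallel/0 if you add it):

start_parallel() ->
    inets:start(),
    Parent = self(),
    Ids = lists:seq(1, 200),
    %% spawn one worker per id; each sends its result (or the caught error) back
    _Pids = [spawn(fun() ->
                       Result = (catch fetch_page(Id)),
                       Parent ! {Id, Result}
                   end) || Id <- Ids],
    %% collect one reply per id (the selective receive keeps them in order)
    [receive {Id, Result} -> Result end || Id <- Ids].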
I'm trying to download the 3000+ photos referenced from the xml backup of my blog. The problem I came across is that if just one of those photos is no longer available, the whole async gets blocked because AsyncGetResponse doesn't do timeouts.
ildjarn helped me to put together a version of AsyncGetResponse which does fail on timeout, but using that gives a lot more timeouts, as though requests that are still only queued time out. It seems like all the WebRequests are launched 'immediately'; the only way to make it work is to set the timeout to the time required to download all of them combined, which isn't great because it means I have to adjust the timeout depending on the number of images.
Have I reached the limits of vanilla async? Should I be looking at reactive extensions instead?
This is a bit embarrassing, because I've already asked two questions here on this particular bit of code, and I still haven't got it working the way I want!
I think there must be a better way to find out that a file is not available than using a timeout. I'm not exactly sure, but is there some way to make it throw an exception if a file cannot be found? Then you could just wrap your async code inside try .. with and you should avoid most of the problems.
Anyway, if you want to write your own "concurrency manager" that runs a certain number of requests in parallel and queues the remaining pending requests, then the easiest option in F# is to use agents (the MailboxProcessor type). The following object encapsulates the behavior:
type ThrottlingAgentMessage =
    | Completed
    | Work of Async<unit>

/// Represents an agent that runs operations concurrently. When the number
/// of concurrent operations exceeds 'limit', they are queued and processed later
type ThrottlingAgent(limit) =
    let agent = MailboxProcessor.Start(fun agent ->
        /// Represents a state when the agent is blocked
        let rec waiting () =
            // Use 'Scan' to wait for completion of some work
            agent.Scan(function
                | Completed -> Some(working (limit - 1))
                | _ -> None)

        /// Represents a state when the agent is working
        and working count = async {
            while true do
                // Receive any message
                let! msg = agent.Receive()
                match msg with
                | Completed ->
                    // Decrement the counter of work items
                    return! working (count - 1)
                | Work work ->
                    // Start the work item & continue in blocked/working state
                    async { try do! work
                            finally agent.Post(Completed) }
                    |> Async.Start
                    if count < limit then return! working (count + 1)
                    else return! waiting () }

        working 0)

    /// Queue the specified asynchronous workflow for processing
    member x.DoWork(work) = agent.Post(Work work)
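Hypothetical usage might look something like this (downloadAndSave and photoUrls are placeholders for your own download logic and inputs, not part of the agent):

// allow at most 20 downloads to run at any one time
let agent = ThrottlingAgent(20)
for url in photoUrls do
    agent.DoWork(async {
        // downloadAndSave : string -> Async<unit> is assumed here
        do! downloadAndSave url })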
Nothing is ever easy. :)
I think the issues you're hitting are intrinsic to the problem domain (as opposed to merely being issues with the async programming model, though they do interact somewhat).
Say you want to download 3000 pictures. First, in your .NET process, there is something like System.Net.ServicePointManager.DefaultConnectionLimit that will throttle the number of HTTP connections your .NET process can run simultaneously (and the default is just 2, I think). So you could find that control and set it to a higher number, and it would help.
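For example (in F#, set before starting the downloads):

open System.Net

// raise the simultaneous-connection limit from the default of 2
ServicePointManager.DefaultConnectionLimit <- 20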
But then next, your machine and internet connection have finite bandwidth. So even if you could try to concurrently start 3000 HTTP connections, each individual connection would get slower based on the bandwidth pipe limitations. So this would also interact with timeouts. (And this doesn't even consider what kinds of throttles/limits are on the server. Maybe if you send 3000 requests it will think you are DoS attacking and blacklist your IP.)
So this is really a problem domain where a good solution requires some intelligent throttling and flow-control in order to manage how the underlying system resources are used.
As in the other answer, F# agents (MailboxProcessors) are a good programming model for authoring such throttling/flow-control logic.
(Even with all that, if most picture files are like 1MB but then there is a 1GB file mixed in there, that single file might trip a timeout.)
Anyway, this is not so much an answer to the question, as just pointing out how much intrinsic complexity there is in the problem domain itself. (Perhaps it's also suggestive of why UI 'download managers' are so popular.)
Here's a variation on Tomas's answer, because I needed an agent which could return results.
type ThrottleMessage<'a> =
    | AddJob of (Async<'a>*AsyncReplyChannel<'a>)
    | DoneJob of ('a*AsyncReplyChannel<'a>)
    | Stop

/// This agent accumulates 'jobs' but limits the number which run concurrently.
type ThrottleAgent<'a>(limit) =
    let agent = MailboxProcessor<ThrottleMessage<'a>>.Start(fun inbox ->
        let rec loop(jobs, count) = async {
            let! msg = inbox.Receive()  //get next message
            match msg with
            | AddJob(job) ->
                if count < limit then  //if not at limit, we work, else loop
                    return! work(job::jobs, count)
                else
                    return! loop(job::jobs, count)
            | DoneJob(result, reply) ->
                reply.Reply(result)            //send back result to caller
                return! work(jobs, count - 1)  //no need to check limit here
            | Stop -> return () }
        and work(jobs, count) = async {
            match jobs with
            | [] -> return! loop(jobs, count)  //if no jobs left, wait for more
            | (job, reply)::jobs ->            //run job, post Done when finished
                async { let! result = job
                        inbox.Post(DoneJob(result, reply)) }
                |> Async.Start
                return! loop(jobs, count + 1)  //job started, go back to waiting
        }
        loop([], 0)
    )

    member m.AddJob(job) = agent.PostAndAsyncReply(fun rep -> AddJob(job, rep))
    member m.Stop() = agent.Post(Stop)
In my particular case, I just need to use it as a 'one shot' 'map', so I added a static function:
static member RunJobs limit jobs =
    let agent = ThrottleAgent<'a>(limit)
    let res = jobs |> Seq.map (fun job -> agent.AddJob(job))
                   |> Async.Parallel
                   |> Async.RunSynchronously
    agent.Stop()
    res
It seems to work ok...
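For what it's worth, a hypothetical call site could look like this (fetchAsync and urls stand in for your own download workflow and inputs):

// build the jobs lazily, then let the agent run at most 5 at a time
let jobs = urls |> Seq.map (fun url -> fetchAsync url)   // seq<Async<string>>
let results = ThrottleAgent<string>.RunJobs 5 jobs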
Here's an out of the box solution:
FSharpx.Control offers an Async.ParallelWithThrottle function. I'm not sure if it is the best implementation, as it uses SemaphoreSlim, but it is very easy to use, and since my application doesn't need top performance it works well enough for me. And since it is a library, if someone knows how to make it better, it is always a nice thing to have libraries be top performers out of the box, so the rest of us can just use the code that works and get our work done!
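For reference, a sketch of how it might be called, assuming the signature Async.ParallelWithThrottle : int -> seq<Async<'T>> -> Async<'T[]> (check the library for the exact form; fetchAsync and urls are placeholders):

open FSharpx.Control

// run at most 20 downloads concurrently
let downloads = urls |> Seq.map (fun url -> fetchAsync url)
let results = Async.ParallelWithThrottle 20 downloads |> Async.RunSynchronously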