How to implement sampling in ns3? - networking

I wonder how to implement sampling in ns-3. What I want to do is create a simple network of switches and hosts using p2p links, then set a probability (let's say 0.1) for a specific switch, so that every packet passing through that switch is captured with the probability I defined (much like the sampling in sFlow or NetFlow).
I browsed nsnam.org, and the only tool I found related to my question is Flow Monitor, which I don't think is helpful for my purpose.

There isn't a direct way to implement the behavior you want, but there is a solution.
Set up a trace hook to see all packets going through one of the switches. Refer to the tutorial to learn how to use the tracing system.
Then, use a RandomVariable at the beginning of your trace function to decide whether to ignore that packet or not. The RandomVariable will need to be in global scope or passed in as a parameter to the function.

Related

Reliable QTcpSocket::write without waitForBytesWritten

I am confused by a number of aspects of QTcpSocket::write.
The documentation suggests that it can write fewer bytes than the length of the buffer being sent. This implies that multiple calls are potentially needed. What is the recommended way to deal with this (bearing in mind following points)?
My initial attempt at calling write did not actually appear to send any data. I found that calling waitForBytesWritten solved this. If I need multiple write calls as per the previous point, how do I use waitForBytesWritten in conjunction with them? Do I associate a waitForBytesWritten with each write, or do I loop over write and then call waitForBytesWritten once?
The documentation suggests that waitForBytesWritten can fail randomly on Windows, so ideally I do not want to rely on it at all. It suggests using the bytesWritten signal instead, but I have found very little information on how one is supposed to use it properly. In particular, if I have to deal with my concern from the first point, do I not get into the recursive-call situation warned about in the documentation of bytesWritten?

MPI inter communication: What is the peer communicator?

I'm trying to understand how to use MPI_Intercomm_create to create a communication handle from one group to another. These two groups are also written in their own C files, so there is no way for one group to access the other's communication handle directly unless I use a global variable or the like. How do I get the "peer_comm" (3rd argument of the call) for the other group? Or am I just not understanding something?
MPI_Intercomm_create() operates on communicators (MPI_Comm) and not on groups (MPI_Group), so let's use the right semantics here.
If you launch several binaries with the same mpirun command line, then they are all in MPI_COMM_WORLD, and this is likely what you want to use for peer_comm.
If you use MPI_Comm_spawn() to launch "the other binaries", then it returns your inter-communicator, so you likely do not even need MPI_Intercomm_create().
I strongly encourage you to write a Minimal, Complete, and Verifiable example. Not only will it help you clear up some confusion, you will also be more likely to get a precise answer once the issue is clearly stated.
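Assuming all processes were started by one mpirun (so MPI_COMM_WORLD is the peer communicator), a minimal sketch might look like the following; the split into two halves, the tag value, and the leader choices are arbitrary. Run with an even process count, e.g. mpirun -n 4:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Split the world into two disjoint intra-communicators.
    int color = world_rank < world_size / 2 ? 0 : 1;
    MPI_Comm local;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local);

    // peer_comm is MPI_COMM_WORLD: the remote leader is identified by
    // its rank in the *peer* communicator, not in the remote group.
    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Comm inter;
    MPI_Intercomm_create(local, /*local_leader=*/0, MPI_COMM_WORLD,
                         remote_leader, /*tag=*/99, &inter);

    int remote_size;
    MPI_Comm_remote_size(inter, &remote_size);
    printf("rank %d sees %d processes in the other group\n",
           world_rank, remote_size);

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
}
```

The fact that the two groups live in separate source files does not matter: what they must share is the peer communicator and an agreed-upon pair of leader ranks and tag, which mpirun's common MPI_COMM_WORLD provides.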

OpenMDAO: finite difference flag for Component.solve_nonlinear

For some of our components it would be useful to know whether it's being executed as part of a finite difference calculation or not. One example could be a meshing component where we'd want to maintain the same node count and distribution function during FD and allow for remeshing during major iteration steps. In the old OpenMDAO we could detect this from a component's itername. Would it be possible to reintroduce this or is that info already available to the Component class?
I can't think of any current way to figure out if you are inside an FD when solve_nonlinear is being called, but it's a good idea for the reasons that you mention.
We don't currently have that capability, but others have also asked to be informed when solve_nonlinear is being run for complex step as well.
One way to do this would be to introduce an optional argument to solve_nonlinear, such as call_mode="fd", call_mode="cs", or call_mode="solve". The only problem with this approach is that it's very backwards-incompatible.
Another approach would be to add a regular Python attribute to the component that you could check, like self.call_mode == "solve", etc. This would be a pretty easy change, and I think it would serve the purpose.
One last possible way would be to put a flag into the unknowns/params vector, so you would check params.call_mode to see which mode you are in. This is somewhat sensible, since it's the param values that change when you're going to complex-step.
I think I like the last option the best. Both solve_nonlinear and apply_nonlinear need to know about this information. But none of the other methods do. So making it a component attribute seems a little out of place.

MPI: Local and non-local calls

I want to know what exactly are the two, and how they are different. What are the advantage or disadvantage of the two types of calls? Really appreciate using some small example code.
These are detailed, e.g., in here: http://www.netlib.org/utk/papers/mpi-book/node14.html
It's not really proper to talk about advantages of either; they serve different purposes. An example of a local call is MPI_Comm_rank, since it doesn't need input from other processes, and an example of a non-local call is MPI_Send, since it has to communicate with some other process that will receive the message.
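A small sketch contrasting the two kinds of calls (run with mpirun -n 2; the tag and the value sent are arbitrary):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    // Local: completes using only state on the calling process; it can
    // never block waiting on another rank.
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;
    if (rank == 0) {
        // Non-local: its completion may depend on a matching MPI_Recv
        // being posted on rank 1 (depending on buffering).
        MPI_Send(&value, 1, MPI_INT, 1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
}
```

The practical consequence: local calls are always safe to make in any order, while non-local calls impose an ordering contract between processes, which is where deadlocks come from.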

Graph based instead of stack based

I wonder about the idea of representing and executing programs using graphs: some kind of stackless model where each node in the graph represents a function and the edges represent arguments to the functions. In this way a function doesn't return its result to its caller, but passes the result as an argument to another function node. Total nonsense? Or maybe it is just a state machine in disguise? Any actual implementations of this anywhere?
This sounds a lot like a state machine.
I think Dybvig's dissertation Three Implementation Models for Scheme does this with Scheme.
I'm pretty sure the first model is graph-based in the way you mean. I don't remember whether the third model is or not. I don't think I got all the way through the dissertation.
For JavaScript you might want to check out Node-RED (visual) or jsonflow (JSON).
