Discrete Event Simulation without global Queue? - functional-programming

I am thinking about modelling a material flow network. There are processes which operate at a certain speed, buffers which can overflow or underflow and connections between these.
I don't see any problems modelling this in a classic Discrete Event Simulation (DES) fashion using a global event queue. I tried modelling the system without a queue but failed at an early stage. Still, I do not understand the underlying reason why a queue is needed, at least not for events which originate "inside" the network.
The idea of a queue-less DES is to treat the whole network as a function which takes a stream of events from the outside world and returns a stream of state changes. Every node in the network should only be affected by nodes which are directly connected to it. I have set some hopes on Haskell's arrows and Functional Reactive Programming (FRP) in general, but I am still learning.
An event queue looks too "global" to me. If my network falls apart into two subnets with no connections between them and I only ask questions about the state changes of one subnet, the other subnet should not do any computations at all. I could use two event queues in that case. However, as soon as I connect the two subnets I would have to put all events into a single queue. I don't like the idea that I need to know the topology of the network in order to set up my queue(s).
So:
is anybody aware of DES algorithms which do not need a global queue?
is there a reason why this is difficult or even impossible?
is FRP useful in the context of DES?

To answer the first point, no, I'm not aware of any discrete-event simulation (DES) algorithms that do not need a global event queue. It is possible to have a hierarchy of event queues, in which each event queue is represented in its parent event queue as an event (corresponding to the time of its next event). If a new event is added to an event queue such that it becomes the queue's next event, then the event queue needs to be rescheduled in its parent to preserve the order of event execution. However, this ultimately still boils down to a single, global event queue that is the parent of all of the others in the hierarchy, and which dispatches each event.
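For concreteness, here is a minimal Python sketch of the single dispatch loop everything boils down to (the names, such as EventQueue, are illustrative, not any particular library). A hierarchy would wrap child EventQueues the same way, rescheduling a child in its parent whenever the child gains an earlier next event:

```python
import heapq

class EventQueue:
    """A time-ordered queue of (time, seq, action) entries."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events dispatch FIFO

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, self._seq, action))
        self._seq += 1

    def next_time(self):
        return self._heap[0][0] if self._heap else None

    def pop(self):
        time, _, action = heapq.heappop(self._heap)
        return time, action

def run(root, until):
    # True DES: jump straight from one event time to the next.
    while root.next_time() is not None and root.next_time() <= until:
        now, action = root.pop()
        action(now)

q = EventQueue()
q.schedule(2.0, lambda t: print(f"t={t}: buffer B1 overflows"))
q.schedule(1.0, lambda t: print(f"t={t}: process P1 finishes a batch"))
run(q, until=10.0)  # dispatches the t=1.0 event, then the t=2.0 event
```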
Alternatively, you could dispense with DES and use something more akin to a programmable logic controller (PLC), which reevaluates the state of the entire network at every small increment of time. Typically, however, that would be a lot slower (it might not even run as fast as real time), because most of the time it would have nothing to do. And if you pick too big a time increment, the simulation may lose accuracy.
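For contrast, here is a hedged sketch of that time-stepped style, with a hypothetical Node class standing in for a process or buffer. Note that every node pays the update cost on every tick, whether or not anything is happening:

```python
DT = 0.01  # tick size: too big loses accuracy, too small wastes cycles

class Node:
    """Hypothetical stand-in for a process or buffer in the network."""
    def __init__(self, name, rate):
        self.name, self.rate, self.level = name, rate, 0.0

    def update(self, t, dt):
        self.level += self.rate * dt  # re-evaluated every tick, busy or not

def run_time_stepped(network, t_end):
    t = 0.0
    while t < t_end:
        for node in network:
            node.update(t, DT)
        t += DT

buffers = [Node("B1", rate=2.0), Node("B2", rate=0.5)]
run_time_stepped(buffers, t_end=1.0)
print({n.name: round(n.level, 2) for n in buffers})
```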
The simplest answer to the second point is that, ultimately, to the best of my knowledge, it is impossible to do without a global event queue. Each simulation event needs to execute at the correct time, and - since time cannot run backwards - the order in which events are dispatched matters. The current simulation time is defined by the time that the current event executes. If you have separate event queues, you also have separate clocks, which would make things very confusing, to say the least.
In your case, if your subnetworks are completely independent, you could simulate each subnetwork individually. However, if the state of one subnetwork affects the state of the total network, and the state of the total network affects the state of each subnetwork, then - since an event is influenced by the events that preceded it, can only influence the events that follow, but cannot influence what preceded it - you have to simulate the whole network with a global event queue.
If it's any consolation, a true DES performs no processing in between events (other than determining what the next event is), so there should be no wasted processing in one subnetwork if all the action is taking place in another.
Finally, functional reactive programming (FRP) is absolutely useful in the context of a DES. Indeed, I now write a lot of my DES simulations in Scala using this approach.
I hope this helps!
UPDATE: Since writing the above, I've used Sodium (an excellent FRP library, which was referenced by the OP in the comments below), and can add some further explanation: Sodium provides a means for subscribing to events, and for performing actions when those events occur. However, here I'm using the term event in a general sense, such as a button being clicked by a user in a GUI, or a network packet arriving, etc. In other words, the events are not necessarily simulation events.
You can still use Sodium—or any other FRP library—as part of a simulation, to subscribe to simulation events and perform actions when they occur; however, these tools typically have no built-in support for simulation, and so you must incorporate a simulation engine as the source of simulation events, in the same way that a GUI is incorporated as the source of user interaction events. It is within this engine that the global event queue must reside.
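To make that division of labour concrete, here is a toy sketch; the EventStream class is a stand-in written for this answer, only roughly analogous to Sodium's StreamSink, and simulate plays the role of the engine that owns the global queue and drives the stream, just as a GUI toolkit drives user-interaction events:

```python
import heapq

class EventStream:
    """Toy FRP-style stream: subscribers react when values are pushed."""
    def __init__(self):
        self._listeners = []
    def listen(self, handler):
        self._listeners.append(handler)
    def send(self, value):
        for handler in self._listeners:
            handler(value)

def simulate(scheduled, out):
    """The simulation engine: owns the event queue, emits state changes."""
    heap = list(scheduled)
    heapq.heapify(heap)
    while heap:
        time, name = heapq.heappop(heap)
        out.send((time, name))  # the FRP side reacts here

state_changes = EventStream()
state_changes.listen(lambda e: print("state change:", e))
simulate([(3.0, "B1 underflow"), (1.0, "P1 start")], state_changes)
```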
Incidentally, if you are trying to perform parallel or distributed simulation model execution, things get considerably more complicated. You have multiple event queues in these situations, but they must be synchronized (giving the appearance of a single queue). The two basic approaches are conservative synchronization and optimistic synchronization.

Related

Async tasks for commanding external hardware controllers

Let's say I have a stage controller and I want to write a method to move the stage. I want the method either to return after the stage has physically completed the move, or to return once the move has started. For any kind of external control of hardware, I typically write async methods with a Task return. This way, users can await the completion of the task, e.g. await the stage finishing its move, or just call the move method and await the returned task at a later point if necessary.
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine when an operation has completed? People I talk to seem to have an issue with using async methods, mostly because they feel it is too indeterminate for hardware control.
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine when an operation has completed?
I hesitate to use async for any kind of system that is driven by external forces. One pattern I've seen a lot is people trying to use tasks to represent "the user pressed this button". And your example reminds me of that, but with external hardware in place of a person.
The problem with these kinds of approaches is twofold. First, it restricts you to a very linear logic flow. Second, it doesn't easily provide results other than success/fail. What if the hardware does something other than what was instructed? How easy is it to do logic that tries to do A but then times out waiting for state A' to be reached so it tries to do B?
Bear in mind that tasks must be completed. While it's possible to handle this using something like task cancellation (or hardware-specific exceptions), that can considerably complicate the logic code. Particularly when you consider timeouts, retries, and fallback logic.
So, I generally avoid using tasks for modeling that kind of domain. Something like an observable may be a better fit, or even just a Channel of state updates. Both of those permit the hardware to "push" its state and allows the logic code to respond appropriately, usually with a state machine of its own.
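As an illustration of that last suggestion, here is a minimal sketch using Python's asyncio (the stage_listener coroutine is a hypothetical stand-in for the real hardware): the controller pushes state updates onto a queue, and the logic code consumes them with its own timeout and fallback handling:

```python
import asyncio

async def stage_listener(updates: asyncio.Queue):
    """Pretend hardware: report that a move started, then completed."""
    await updates.put("MOVE_STARTED")
    await asyncio.sleep(0.1)  # stand-in for the physical motion
    await updates.put("MOVE_COMPLETED")

async def control_logic(updates: asyncio.Queue):
    """A tiny state machine driven by the pushed updates."""
    while True:
        try:
            state = await asyncio.wait_for(updates.get(), timeout=1.0)
        except asyncio.TimeoutError:
            print("no update in time: retry or fall back here")
            return
        print("hardware reported:", state)
        if state == "MOVE_COMPLETED":
            return

async def main():
    updates = asyncio.Queue()
    await asyncio.gather(stage_listener(updates), control_logic(updates))

asyncio.run(main())
```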

time-based simulation with actors model

We have a single-threaded application that simulates the interaction of hundreds of thousands of objects over time, using a shared-memory model.
Obviously, it suffers from an inability to scale across multi-CPU hardware.
After reading a little about agent-based modeling and functional programming/the actor model, I am considering a rewrite using the message-passing paradigm.
The idea is very simple: each object will be an actor and their interactions will be messages, so that the simulation can happen in parallel. Given a configuration of objects at a certain time, its future consequences can be easily computed.
The question is how to model time:
For example, let's assume the behavior of object X depends on A and B. Since the order of actor computations and message deliveries is not guaranteed, it could be that when X is to be computed, A has already sent its message to X but B hasn't.
How do I make sure the computation happens correctly?
I hope the question is clear.
Thanks in advance.
Your approach of using message passing to parallelize a (discrete-event?) simulation is well-known and does not require a functional style per se (although, of course, this does not prevent you from implementing it that way).
The basic problem you describe w.r.t. the timing of events is also known as the local causality constraint (see, for example, this textbook). Basically, you need to use a synchronization protocol to ensure that each object (or agent) processes its messages in the right order. In the domain of parallel discrete-event simulation, such objects are called logical processes, and they communicate via events (i.e. time-stamped messages).
Correctly implementing a synchronization protocol for these events is challenging and the right choice of protocol is highly application-specific. For example, one important factor is the average amount of computation required per event: if there is little computation required, the communication costs dominate the overall execution time and it will be hard to scale the simulation.
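To make the local causality constraint concrete for the X/A/B example from the question, here is a simplified Python sketch of the conservative idea (names are illustrative): X only processes a message once it holds a pending message from every input, so that no earlier message can still arrive:

```python
import heapq

class LogicalProcess:
    def __init__(self, inputs):
        self.pending = {src: [] for src in inputs}  # per-source min-heaps

    def receive(self, src, timestamp, payload):
        heapq.heappush(self.pending[src], (timestamp, payload))

    def safe_to_process(self):
        # Conservative rule: every input must have something pending.
        return all(self.pending[src] for src in self.pending)

    def process_next(self):
        # Handle the globally earliest pending message.
        src = min(self.pending, key=lambda s: self.pending[s][0][0])
        timestamp, payload = heapq.heappop(self.pending[src])
        print(f"t={timestamp}: X handles {payload!r} from {src}")

x = LogicalProcess(inputs=["A", "B"])
x.receive("A", 5, "update-a")
assert not x.safe_to_process()  # B might still send something earlier
x.receive("B", 3, "update-b")
while x.safe_to_process():
    x.process_next()  # B's t=3 message is correctly handled first
```

Real protocols add null messages or lookahead so that a quiet input does not stall everyone; that is exactly the complexity the existing libraries handle for you.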
I would therefore recommend looking for existing solutions/libraries on top of the actor framework you intend to use before starting from scratch.

What are the differences between event and signal in Qt

It is hard for me to understand the difference between signals and events in Qt. Could someone explain?
An event is a message encapsulated in a class (QEvent) which is processed in an event loop and dispatched to a recipient that can either accept the message or pass it along to others to process. They are usually created in response to external system events like mouse clicks.
Signals and Slots are a convenient way for QObjects to communicate with one another and are more similar to callback functions. In most circumstances, when a "signal" is emitted, any slot function connected to it is called directly. The exception is when signals and slots cross thread boundaries. In this case, the signal will essentially be converted into an event.
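A minimal sketch of the contrast in Python (assuming PyQt5 is installed; the same shape applies in C++): the event arrives by overriding a handler method on the receiving object, while the signal is connected to an arbitrary slot:

```python
import sys
from PyQt5.QtWidgets import QApplication, QPushButton

class Button(QPushButton):
    def mousePressEvent(self, event):
        # An *event*: dispatched to this object's handler method.
        print("QEvent: mouse press at", event.pos())
        super().mousePressEvent(event)  # pass it along so clicked still fires

app = QApplication(sys.argv)
button = Button("Save")
# A *signal*: any slot can connect; called when the button emits clicked.
button.clicked.connect(lambda: print("signal: clicked slot called"))
button.show()
sys.exit(app.exec_())
```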
Events are something that happened to or within an object. In general, you would treat them within the object's own class code.
Signals are emitted by an object. The object is basically notifying other objects that something happened. Other objects might do something as a result or not, but dealing with it is not the emitter's job.
My impression of the difference is as follows:
Say you have a server device, running an infinite loop, listening to some external client Events and reacting to them by executing some code.
(It could be a CPU listening to interrupts from devices, client-side JavaScript browser code listening for user clicks, or server-side website code listening for users requesting web pages or data.)
Or it can be your Qt application, running its main loop.
I'll be explaining with the assumption that you're running Qt on Linux with an X-server used for drawing.
I can distinguish 2 main differences, although the second one is somewhat disputable:
Events represent your hardware and are a small finite set. Signals represent your Widgets-layer logic and can be arbitrarily complex and numerous.
Events are low-level messages, coming to you from the client. The set of Events is a strictly limited set (~20 different Event types), determined by hardware (e.g. mouse click/doubleclick/press/release, mouse move, keyboard key pressed/released/held etc.), and specified in the protocol of interaction (e.g. X protocol) between application and user.
E.g. at the time the X protocol was created there were no multitouch gestures; there were only mouse and keyboard, so the X protocol won't understand your gestures and send them to the application, it will just interpret them as mouse clicks. Thus, extensions to the X protocol are introduced over time.
X events know nothing about widgets; widgets exist only in Qt. X events know only about X windows, which are very basic rectangles that your widgets consist of. Your Qt events are just a thin wrapper around X events/Windows events/Mac events, providing a compatibility layer between the different operating systems' native events for the convenience of widget-level logic layer authors.
Widget-level logic deals with Signals, because they carry the widget-level meaning of your actions. Moreover, one Signal can be fired due to different events, e.g. either a mouse click on a "Save" menu button or a keyboard shortcut such as Ctrl-S.
Abstractly speaking (this is not exactly about Qt!), Events are asynchronous by nature, while Signals (or hooks, in other terminology) are synchronous.
Say you have a function foo() that can fire a Signal OR emit an Event.
If it fires a signal, the signal is executed on the same thread as the function that caused it, right after that function.
On the other hand, if it emits an Event, the Event is sent to the main loop, and it is up to the main loop when it delivers that event to the receiving side and what happens next.
Thus 2 consecutive events may even get delivered in reverse order, while 2 consecutively fired signals remain consecutive.
Though the terminology is not strict: "signals" in Unix, as a means of interprocess communication, would better be called Events, because they are asynchronous: you send a signal in one process and never know when the event loop is going to switch to the receiving process and execute the signal handler.
P.S. Please forgive me if some of my examples are not absolutely correct to the letter. They are still good in spirit.
An event is passed directly to an event handler method of a class. They are available for you to overload in your subclasses and choose how to handle the event differently. Events also pass up the chain from child to parent until someone handles it or it falls off the end.
Signals on the other hand are openly emitted and any other entity can opt to connect and listen to them. They pass through the event loops and are processed in a queue (they can also be handled directly if they are in the same thread).

How to write integration test for systems that interact asynchronously

Assume that I have a function called PlaceOrder which, when called, inserts the order details into a local DB and puts a message (the order details) into a TIBCO EMS queue.
Once the message is received, a TIBCO BW process will then invoke some other system (say, ExternalSystem) to pass on the order details.
Now, the way I wrote my integration test is:
Call PlaceOrder.
Sleep, and check the details exist in the local DB.
Sleep, and check the details exist in ExternalSystem.
Is the above approach correct? The above test gives me confidence that the end-to-end integration is working, but is there any better way to test this scenario?
The problem you describe is quite common, and your approach is a very typical solution.
The problem with this solution is that if the delay is too short, your tests may sometimes pass and sometimes fail, but if the delay is very long, then you're just wasting time waiting, and with many tests, that can add a lot of delay. But unless you can get some signal to tell you the order arrived in the database, you just have to wait.
You can reduce the delay by doing lots of checks at short intervals. If your order is not there after a timeout, then you fail the test.
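As a sketch of that polling pattern in Python (the probe callables are hypothetical hooks into your local DB and ExternalSystem):

```python
import time

def wait_until(probe, timeout=10.0, interval=0.2):
    """Return as soon as probe() is truthy; fail the test on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return
        time.sleep(interval)
    raise AssertionError(f"condition not met within {timeout:.1f}s")

# In the test, after calling PlaceOrder:
# wait_until(lambda: order_exists_in_local_db("order-42"))
# wait_until(lambda: order_exists_in_external_system("order-42"))
```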
In "Growing Object-Oriented Software, Guided by Tests"*, there is a chapter on this very subject, so you might want to get a copy if you will be doing a lot of this sort of testing.
"There are two ways a test can observe the system: by sampling its observable
state or by listening for events that it sends out. Of these, sampling is
often the only option because many systems don’t send any monitoring
events. It’s quite common for a test to include both techniques to interact
with different “ends” of its system"
(*) http://my.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321574442

Why are Asynchronous processes not called Synchronous?

So I'm a little confused by this terminology.
Everyone refers to "Asynchronous" computing as running different processes on separate threads, which gives the illusion that these processes are running at the same time.
This is not the definition of the word asynchronous.
a⋅syn⋅chro⋅nous
–adjective
1. not occurring at the same time.
2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.
What am I not understanding here?
It means that the two threads are not running in sync, that is, they are not both running on the same timeline.
I think it's a case of computer scientists being too clever about their use of words.
Synchronisation, in this context, would suggest that both threads start and end at the same time. Asynchrony, in this sense, means both threads are free to start, execute and end as they require.
The word "synchronous" implies that a function call will be synchronized with some other event.
Asynchronous implies that no such synchronization occurs.
It seems like the definition that you have there should really be the definition for "concurrent," or something. That definition looks wrong.
PS:
Here is the wiktionary definition:
asynchronous
Not synchronous; occurring at different times.
(computing, of a request or a message) allowing the client to continue during processing.
Which just so happens to be the exact opposite of what you posted.
I believe that the term was first used for synchronous vs. asynchronous communication. There, synchronous means that the two communicating parties have a common clock signal that they run by, so they run in parallel. Asynchronous communication instead has a ready signal, so one party asks for data and gets a signal back when it's available.
The term was then adapted to processes, but as there are obvious differences, some aspects of the term work differently. For a single-threaded process, the natural way to request that something be done is to make a synchronous call that transfers control to the subprocess; control is returned when it's done, and the process continues.
An asynchronous call works just like asynchronous communication in the sense that you send a request for something to be done, and the process doing it returns a signal when it's done. The difference in the usage of the terms is that for processes it's the asynchronous processing that runs in parallel, while for communication it is the synchronous communication that runs in parallel.
So "computer or other electrical machine" is really too wide a scope for making a correct definition of the term, as it's used in slightly different ways for different technologies.
I would guess it's because they are not synchronized ;)
In other words... if one process gets stopped, killed, or is waiting for something, the other will carry on
I think there's a slant that is slightly different to most of the answers here.
Asynchronous means "not happening at the same time".
In the specific case of threading:
Synchronous means "execute this code now".
Asynchronous means "enqueue this work on a different thread that will be executed at some indeterminate time in the future"
This usually allows you to "do two things at once" because of reasons like:
one thread is just waiting (e.g. for data to arrive on a serial port) so is asleep
You have multiple processors, so the two threads can run concurrently.
However, even with 128 processor cores, the case is the same: the work will be executed "at some time in the future" (if perhaps the very near future) rather than "now".
Your second definition is more helpful here:
2. [...] having each operation started only after the preceding operation is completed.
When you make an asynchronous call, that call might not be completed before the next operation is started. When the call is synchronous, it will be.
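A small Python sketch of that distinction: the synchronous call completes before the next statement runs, while the asynchronous call merely starts the work and must be explicitly waited on:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_operation():
    time.sleep(0.5)
    return "done"

print(slow_operation())  # synchronous: nothing else runs until this returns

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_operation)  # asynchronous: returns immediately
    print("next operation started before the call completed")
    print(future.result())  # synchronize again only when the result is needed
```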
It really means that an asynchronous event happens independently of other events, whereas a synchronous event happens dependent on other events.
It's like: Flammable, Inflammable (which mean the same thing).
Seriously -- it's just one of those quirks of the English language. It doesn't really make sense. You can try to explain it, but it would be just as easy to justify the reverse meanings.
Many of the answers here are not correct. IN-dependently has a prefix that says NOT dependently, just like A-synchronous, but the meanings of dependent and synchronous are not the same! :D
So three dependent persons would wait for an order, because they are dependent on the order; but they wait, so they are not synchronous.
In English and any other language with the common roots a, syn and chrono (Italian: asincrono; Spanish: asincrónico; French: asynchrone; Greek: a = not, syn = together, chronos = time) it means exactly the opposite.
The terminology is UTTERLY counter-intuitive. Async functions ARE synchronous; they happen at the same time, and that's their power. They DO NOT wait, they DO NOT depend, they DO NOT hold the user waiting, but all those NOTs refer to anything but synchronicity :)
The only answer possibly right is the CLOCK one, although it is still confusing. My personal interpretation is this story:
"A professor has an office, and he makes SYNCHRONOUS CALLS for students to come. He says out loud in the main university hall: 'Hey guys who wants to talk to me should come at 10 in the morning tomorrow.', or simply puts a sign saying the same stuff.
RESULT: at 10 in the morning you see a long queue. People had the same time so they came in in the same moment and they got "piled up in the process".
So the professor thinks it would be nice for students not to waste time in the queue (and do synchronous operations, that is, do parallel stuff in their lives at the same time, and that's where the confusion comes).
He decides students can substitute him in making ASYNCHRONOUS CALLS, that is, every time a student ends talking with him, the students may, e.g., call another student saying the professor is free to talk, in a room where students may do whatever they like in the meantime. So every student does not have a single SYNCHRONOUS CALL (10 in the morning, the same time for all) but they have 10, 10.10, 10.18, 10.27.. etc. according to the needed time for each discussion in the professor office."
Is that the meaning of having the same clock, @Guffa?
