Concurrency in Rhapsody statecharts: state actions vs transitions

In IBM Rhapsody's statecharts, are there situations where a transition between state A and state B could happen before the actions in state A finished their execution?

In UML, and in Rhapsody, actions cannot be interrupted - only behaviours can be (actions are atomic). So even if you have an interruptible region in an activity diagram, you cannot stop an action in the middle; you can only interrupt the activity and make the control flow "jump" out of that region. In practice this means the transition from state A will not be taken until state A's actions have run to completion.
What you could do is create a behavioural classifier on the entry action, or call an operation realised by an activity diagram, and then send an event to interrupt that behaviour.


In SCORM 2004 (4th ed), why does a choice navigation to a cluster activity change the Current Activity?

The SCORM 2004 4th Edition pseudocode handles the case for a choice request (SB.2.9, steps 12 onward) like so:
If the target activity is a leaf activity Then
    Exit Choice Sequencing Request Process (Delivery Request: the target activity; Exception: n/a)
End If
Apply the Flow Subprocess to the target activity in the Forward direction with consider children equal to True
// The identified activity is a cluster. Enter the cluster and attempt to find a descendent leaf to deliver.
If the Flow Subprocess returns False Then
    // Nothing to deliver, but we succeeded in reaching the target activity - move the current activity.
    Apply the Terminate Descendent Attempts Process to the common ancestor
    Apply the End Attempt Process to the common ancestor
    Set the Current Activity to the target activity
    Exit Choice Sequencing Request Process (Delivery Request: n/a; Exception: SB.2.9-9)
    // Nothing to deliver.
Else
    Exit Choice Sequencing Request Process (Delivery Request: for the activity identified by the Flow Subprocess; Exception: n/a)
End If
It looks like this means that if the target activity resolves to a cluster activity but the Flow Subprocess cannot find any available descendent leaf activity, the Current Activity is still modified and the sequencing request "succeeds" despite returning an exception.
What is the expected behavior for the LMS in this scenario? A cluster activity can't be delivered, yet this path still terminates the previous activity. Should the LMS simply deliver a blank page instead of an activity and hope the learner will be able to navigate away to another activity using the navigation controls?
The definition of the Overall Sequencing Process helpfully doesn't specify how an exception is supposed to be handled, but considering that this behavior sets the Current Activity, and all subsequent requests will reference that activity instead of the previously active one, clearly something needs to happen or the LMS will be stuck in an inconsistent state.
Your reading of the pseudocode is correct. Choice is a bit special compared to the other flow events, but the steps of terminating and showing the user a "please select an activity from the activity tree" screen could happen in several cases. The only somewhat unique part is the setting of the Current Activity, which means that subsequent flow navigation events the user may trigger start from their last intentional choice, and not from whatever was previously loaded. It's not unusual for the Current Activity to be on a cluster, as stated in SN-4-18: "During termination behavior, Sequencing Exit Action rules are evaluated on all of the ancestors of the current activity – this is done in the Sequencing Exit Action Rule Subprocess. The result of this subprocess will be that either the “just terminated” leaf activity remains the Current Activity, or an ancestor of the leaf activity becomes the Current Activity".
You're also correct that OP.1 ("Overall Sequencing Process") is silent on the topic, going so far as to say "Behavior not specified." for a Not Valid sequencing request. I believe the most common choice is the aforementioned "please select an activity from the activity tree" style screen in place of where the visible SCO would have been.
The spec tries very hard to separate LMS display choices from how sequencing itself operates. SN-5-3 states: "SCORM imposes no requirements on the type or style of the user interface presented to a learner at run-time, including any user interface devices for navigation. The nature of the user interface and the mechanisms for capturing interactions between the learner and the LMS are intentionally unspecified. Issues such as look and feel, presentation style and placement of user interface devices or controls are outside the scope of SCORM."
But the specification otherwise says some instructive things. Page SN-3-6 states that "As depicted in Figure 3.2.1c, the target of the Choice navigation request (Activity B) has a Sequencing Control Flow defined to be False. In this case, no activity can be identified for delivery (clusters cannot be delivered). Because Activity B has Sequencing Control Choice defined to be True, an LMS shall provide some mechanism for the learner to select (trigger a navigation request for) one of Activity B’s children directly, but not Activity B."
While this doesn't explicitly state that there should be instructive text displayed to the learner in that SCO area or that the choice shouldn't be allowed, it does state that something should be done that should guide the learner into intentionally performing another step to launch something else. Again, this isn't exactly the same use case, but it's probably the closest it gets relative to choice.
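As a rough illustration of that approach, an LMS delivery layer might branch on the exception code and put a picker screen where the SCO would have gone. This is only a sketch; none of the class or method names below come from SCORM, they are invented for illustration:
    // Hypothetical wrapper around what the Choice Sequencing Request Process returned.
    final class SequencingOutcome {
        final String deliveryActivityId;   // null when the sequencer identified nothing to deliver
        final String exceptionCode;        // e.g. "SB.2.9-9", or null on success

        SequencingOutcome(String deliveryActivityId, String exceptionCode) {
            this.deliveryActivityId = deliveryActivityId;
            this.exceptionCode = exceptionCode;
        }
    }

    final class ChoiceDelivery {
        // Decide what goes into the content area after a choice navigation request.
        static String contentFor(SequencingOutcome outcome) {
            if (outcome.deliveryActivityId != null) {
                return "launch:" + outcome.deliveryActivityId;      // deliver the identified leaf
            }
            if ("SB.2.9-9".equals(outcome.exceptionCode)) {
                // The chosen cluster became the Current Activity but nothing is deliverable:
                // show a "please select an activity from the activity tree" style screen.
                return "placeholder:select-an-activity";
            }
            return "error:" + outcome.exceptionCode;                // any other exception
        }
    }
The important point is simply that SB.2.9-9 is treated as "nothing to launch, but the Current Activity moved", not as a hard error.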

Pause SAMD21 TCC counter

The Atmel SAMD21 TCC peripheral provides a STOP command, which pauses the counter. The counter can be resumed with a RETRIGGER command.
When STOP is issued, the TCC enters a fault state, in which the outputs are either tristated, or driven to states specified in a config register. Presumably this mechanism is designed to support a fixed failsafe output state.
In my case I want the output pins to freeze in the state they're in at the time of the STOP command. The only way I can see to do this is to update the configured fault output state register every time the outputs are updated. That requires interrupt processing, which rather defeats the purpose of much of the TCC's output waveform extension architecture, as well as being a processing load I'd prefer to avoid. There are other complications too, such as accounting for the dead-time mechanism, and hardware/software races.
So I've been looking at ways to achieve this that don't involve the STOP command - but I can't see any other way of stopping the counter. There's no way to gate the peripheral clock input, and disabling it in GCLK is ruled out as it also runs TCC1. (And who knows what other effects this would have.) Clearing the ENABLE bit, besides being overkill, unsurprisingly also tristates the outputs. Modifying the configuration in various other ways usually requires writing to enable-protected registers, which means disabling the peripheral first.
(One idea I haven't investigated yet is to drive the counter from the event system, and control the event generation/gating instead.)
So: is there any way of pausing the peripheral in its current state, while maintaining the state of the output pins?
All that I can think of to try is the async 'COUNT' event, which sounds like it acts as a gate on the clock to the counter.
(page numbers from the 03/2016 manual)
31.6.4.3. Events, p.712:
Count during active state of an asynchronous event (increment or decrement, depending on counter direction). In this case, the counter will be incremented or decremented on each cycle of the prescaled clock, as long as the event is active.
31.8.9. Event Control, p.734:
EVCTRL register,
Bits 2:0 – EVACT0[2:0]: Timer/Counter Event Input 0 Action
0x5 COUNT (async) Count on active state of asynchronous event
The downside is that software events have to be synchronous.

Is using Timeline for scheduling correct

I've found a lot of suggestions to use a Timeline with KeyFrame.onFinished to schedule tasks in JavaFX. But the AnimationTimer doc says that its handle method is called 60 times per second.
It is unclear from the documentation, but it seems that Timeline uses AnimationTimer internally. Does that mean that the timeline solution for scheduling imposes a CPU-intensive polling mode? If that is actually how JavaFX works, what other methods for scheduling are recommended?
Advised Usage Rules for Scheduling in JavaFX
Given how Timeline works (see below), some basic rules can be derived:
Use a Timeline when you want to schedule quick operations that modify attributes of the scene graph.
Do not use a Timeline when you want to do any of the following:
Perform operations that are themselves time-consuming (e.g. the work for a given scheduled activity takes longer than 1/30th of a second).
Perform work which really doesn't have anything to do with the scene graph.
Require real-time scheduling at a finer resolution than a pulse (1/60th of a second).
Control your scheduling using traditional java.util.concurrent mechanisms or a third party library.
Run your scheduled tasks on their own threads.
Also, for a one-off execution that updates the scene graph after a fixed delay, I like to use a PauseTransition.
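For example, a minimal sketch (statusLabel is assumed to be a Label already attached to the scene graph):
    import javafx.animation.PauseTransition;
    import javafx.util.Duration;

    // Runs once, two seconds after play() is called, on the JavaFX application thread.
    PauseTransition delay = new PauseTransition(Duration.seconds(2));
    delay.setOnFinished(event -> statusLabel.setText("Two seconds have passed"));
    delay.play();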
Answers to specific questions and concerns
It is unclear from documentation but it seems that Timeline uses AnimationTimer internally.
No, Timeline does not use AnimationTimer internally. However, Timeline is still pulse based: the code that updates the current state of the timeline is invoked on each pulse (usually 60 times a second). To understand what a pulse is, read the background section below.
Does that means that the timeline solution for scheduling imposes CPU-intensive polling mode?
Yes, there is some overhead, but it is likely minimal enough that it can be disregarded. The JavaFX system is going to be doing some work on each pulse anyway, regardless of whether you create any timelines yourself. Unless you are heavily animating objects or doing a lot of work on each pulse, the time to process each pulse will be trivial, as there is simply very little work to be done on each one.
If that is actually how JavaFX works, what other methods for scheduling is recommended?
There are numerous alternatives to Timeline:
Use a javafx.concurrent facility such as a Task or Service, and specifically a ScheduledService.
This is good if you want to provide feedback (such as progress, message updates and return values) to the JavaFX UI; see the sketch after this list of alternatives.
Use a java.util.concurrent facility.
This is good if you don't need the additional features of javafx.concurrent for progress and messaging or if you want the additional control provided by java.util.concurrent facilities.
If you use this, you can still get intermittent or final feedback to your JavaFX UI by invoking Platform.runLater().
Use a java.util.TimerTask.
Simpler API than java.util.concurrent, no direct interface for JavaFX feedback, though again you can use Platform.runLater() for that.
Use a third party scheduling system (such as Quartz):
More complicated than other solutions, but also more flexible, though it adds a dependent library and requires Platform.runLater if you want to modify scene graph elements.
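As a rough sketch of the javafx.concurrent option above, a ScheduledService that polls in the background and feeds its result back to the UI might look like this (fetchLatestStatus() and statusLabel are placeholders for your own code, not library APIs):
    import javafx.concurrent.ScheduledService;
    import javafx.concurrent.Task;
    import javafx.util.Duration;

    // Re-runs its task every 5 seconds on a background thread; the onSucceeded
    // handler runs back on the JavaFX application thread, so it may touch the scene graph.
    ScheduledService<String> poller = new ScheduledService<String>() {
        @Override
        protected Task<String> createTask() {
            return new Task<String>() {
                @Override
                protected String call() {
                    return fetchLatestStatus(); // placeholder for a slow or blocking operation
                }
            };
        }
    };
    poller.setPeriod(Duration.seconds(5));
    poller.setOnSucceeded(event -> statusLabel.setText(poller.getLastValue()));
    poller.start();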
If you do use something other than a timeline, you need to take care that:
You invoke Platform.runLater if you want to modify anything on the scene graph from your non-JavaFX application threads.
You don't invoke Platform.runLater so often that you overload the JavaFX application thread (e.g. coalesce update calls, as sketched below).
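For the second point, one common pattern is to funnel updates through a single pending runLater call. A minimal sketch, with invented names (UiUpdateCoalescer, applyToSceneGraph):
    import java.util.concurrent.atomic.AtomicBoolean;
    import javafx.application.Platform;

    // Many background submissions collapse into at most one queued runLater at a time.
    final class UiUpdateCoalescer {
        private final AtomicBoolean pending = new AtomicBoolean(false);
        private volatile String latest;            // most recent value from background threads

        void submit(String value) {                // safe to call from any thread, at any rate
            latest = value;
            if (pending.compareAndSet(false, true)) {
                Platform.runLater(() -> {
                    pending.set(false);
                    applyToSceneGraph(latest);     // e.g. someLabel.setText(latest)
                });
            }
        }

        private void applyToSceneGraph(String value) {
            // placeholder: update controls here, on the JavaFX application thread
        }
    }
Each burst of background updates then results in at most one queued Runnable on the JavaFX application thread, carrying only the most recent value.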
Background Info
Timeline Operation
For example, let's say you have a Timeline which has a KeyFrame with a duration of 1 second. The internal Timeline will still be updated each pulse. This is required because a Timeline is more than just a scheduler; it has other properties and tasks. For instance, the Timeline has a currentTime property. The currentTime will be updated each pulse, so that it always reflects the current time accurately. Similarly, internal logic within the Timeline will check to see if there are any KeyValues associated with the Timeline, and update their values on each pulse in accordance with their associated Interpolator. The internal logic will also check on each pulse to see if the Timeline is finished, in order to call the onFinished handler, and it will evaluate the current time against the duration of each key frame and, if there is a match, fire the appropriate action event for the key frame.
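A concrete sketch of that example, where the key frame's action event fires once per one-second cycle (the handler body is just a placeholder):
    import javafx.animation.KeyFrame;
    import javafx.animation.Timeline;
    import javafx.util.Duration;

    // The key frame's action event fires once per one-second cycle; the timeline's
    // internal state is still updated on every pulse in between.
    Timeline ticker = new Timeline(
            new KeyFrame(Duration.seconds(1), event -> System.out.println("tick"))
    );
    ticker.setCycleCount(Timeline.INDEFINITE);
    ticker.play();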
So the Timeline is acting like a scheduler in that it can execute key frames at specific times, but it is more than a scheduler: it also maintains its current time, its current state and the ability to continuously vary associated key values. Additionally, you can vary the rate, direction and number of cycles of a timeline, as well as pause it in position and then resume it, which also differs from a traditional scheduler.
Another aspect of the Timeline is that, because it is based upon callbacks from pulses in the JavaFX system, everything runs on the JavaFX application thread. Therefore, when you use timelines (even 1000 of them), no additional threads are spawned, so from that perspective it is lightweight, and all logic invoked by the timeline occurs on the JavaFX application thread. Because everything runs on the JavaFX application thread, the Timeline is convenient for manipulating elements and attributes of the active JavaFX scene graph (such manipulations must be performed on the JavaFX application thread). However, a timeline would be a bad choice if you wanted to do a lot of CPU-intensive work or blocking I/O in the associated timeline key frames, because the JavaFX application thread would be blocked while that work occurs, freezing your UI.
Pulses
The JavaFX runtime system is based upon pulses.
A pulse is an event that indicates to the JavaFX scene graph that it is time to synchronize the state of the elements on the scene graph with Prism. A pulse is throttled at 60 frames per second (fps) maximum and is fired whenever animations are running on the scene graph. Even when animation is not running, a pulse is scheduled when something in the scene graph is changed. For example, if a position of a button is changed, a pulse is scheduled.
When a pulse is fired, the state of the elements on the scene graph is synchronized down to the rendering layer. A pulse gives application developers a way to handle events asynchronously; this important feature allows the system to batch and execute events on the pulse.
The JavaFX animation system stores a list of all of the animations and animation timers which have been created and not garbage collected. On each pulse it will iterate through this list. Each animation timer will have its handle callback invoked on each pulse. Each timeline or transition will similarly have its internal state updated, and will update any associated key values or fire key frame action events as appropriate.
You can use ScheduledExecutorService
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final Runnable runnable = new Runnable() {
    public void run() {
        // Code goes here; wrap any scene graph updates in Platform.runLater(...)
    }
};
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
executor.scheduleAtFixedRate(runnable, 0, 1, TimeUnit.SECONDS);

State duplication in FSM

I'm trying to implement a FSM which handles a button in the following way:
When in Standby mode, it just waits for the button to be pressed.
When it is pressed, it moves to the intButtonPress state, where a 2-second timer is started. If the timer expires, it means that the button was held for 2 seconds and the next state must be Action. If the button is released before the timeout, the state returns to Standby, as the button wasn't held long enough.
When in Action mode, some action is performed, but it can be interrupted by a button press. The problem is that I can't reuse the intButtonPress state, since its timeout transition would lead back to the Action state. An obvious solution is to use an identical state whose only difference is that it leads to the Standby state, but that's ugly.
Are there better ways to handle this?
FSM is here: http://i.imgur.com/m57yaMw.png (can't embed pictures)
Answering my own question - use hierarchical state machines: http://i.imgur.com/DzBApeY.png
Super-Standby state is not strictly needed.
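For reference, the same de-duplication can also be expressed directly in code by writing the shared button-held state once and parameterising it with its timeout and release targets. This is a rough sketch (all names invented, no particular state machine framework), assuming an early release returns to whichever mode was active before the press:
    import java.util.concurrent.TimeUnit;

    // The 2-second "button held?" check is written once; each caller tells it where to go
    // on timeout and where to fall back to on an early release.
    enum Mode { STANDBY, BUTTON_HELD_CHECK, ACTION }

    final class ButtonFsm {
        private Mode mode = Mode.STANDBY;
        private Mode onTimeoutTarget;   // state entered if the button stays held for 2 s
        private Mode onReleaseTarget;   // state entered if the button is released early
        private long pressedAtNanos;

        void onButtonPressed(long nowNanos) {
            if (mode == Mode.STANDBY) {
                enterHeldCheck(nowNanos, Mode.ACTION, Mode.STANDBY);   // hold to start the action
            } else if (mode == Mode.ACTION) {
                enterHeldCheck(nowNanos, Mode.STANDBY, Mode.ACTION);   // hold to interrupt it
            }
        }

        void onButtonReleased() {
            if (mode == Mode.BUTTON_HELD_CHECK) {
                mode = onReleaseTarget;            // released too early: fall back
            }
        }

        void onTick(long nowNanos) {
            if (mode == Mode.BUTTON_HELD_CHECK
                    && nowNanos - pressedAtNanos >= TimeUnit.SECONDS.toNanos(2)) {
                mode = onTimeoutTarget;            // held for 2 s: take the timeout transition
            }
        }

        private void enterHeldCheck(long nowNanos, Mode onTimeout, Mode onRelease) {
            mode = Mode.BUTTON_HELD_CHECK;
            pressedAtNanos = nowNanos;
            onTimeoutTarget = onTimeout;
            onReleaseTarget = onRelease;
        }
    }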

Discrete Event Simulation without global Queue?

I am thinking about modelling a material flow network. There are processes which operate at a certain speed, buffers which can overflow or underflow and connections between these.
I don't see any problems modelling this in a classic Discrete Event Simulation (DES) fashion using a global event queue. I tried modelling the system without a queue but failed in early stages. Still I do not understand the underlying reason why a queue is needed, at least not for events which originate "inside" the network.
The idea of a queue-less DES is to treat the whole network as a function which takes a stream of events from the outside world and returns a stream of state changes. Every node in the network should only be affected by nodes which are directly connected to it. I have set some hopes on Haskell's arrows and Functional Reactive Programming (FRP) in general, but I am still learning.
An event queue looks too "global" to me. If my network falls apart into two subnets with no connections between them and I only ask questions about the state changes of one subnet, the other subnet should not do any computations at all. I could use two event queues in that case. However, as soon as I connect the two subnets I would have to put all events into a single queue. I don't like the idea, that I need to know the topology of the network in order to set up my queue(s).
So
is anybody aware of DES algorithms which do not need a global queue?
is there a reason why this is difficult or even impossible?
is FRP useful in the context of DES?
To answer the first point, no I'm not aware of any discrete-event simulation (DES) algorithms that do not need a global event queue. It is possible to have a hierarchy of event queues, in which each event queue is represented in its parent event queue as an event (corresponding to the time of its next event). If a new event is added to an event queue such that it becomes the queue's next event, then the event queue needs to be rescheduled in its parent to preserve the order of event execution. However, you will ultimately still boil down to a single, global event queue that is the parent of all of the others in hierarchy, and which dispatches each event.
Alternatively, you could dispense with DES and do something more akin to a programmable logic controller (PLC), which re-evaluates the state of the entire network every small increment of time. However, that would typically be a lot slower (it may not even run as fast as real time), because for most of those increments there would be nothing to do, yet the whole network is still re-evaluated. And if you pick too big a time increment, the simulation may lose accuracy.
The simplest answer to the second point is that, ultimately, to the best of my knowledge, it is impossible to do without a global event queue. Each simulation event needs to execute at the correct time, and - since time cannot run backwards - the order in which events are dispatched matters. The current simulation time is defined by the time that the current event executes. If you have separate event queues, you also have separate clocks, which would make things very confusing, to say the least.
In your case, if your subnetworks are completely independent, you could simulate each subnetwork individually. However, if the state of one subnetwork affects the state of the total network, and the state of the total network affects the state of each subnetwork, then you have to simulate the whole network with a global event queue, since an event is influenced by the events that preceded it, can only influence the events that follow it, and cannot influence what preceded it.
If it's any consolation, a true DES simulation does not perform any processing in between events (other than determining what the next event is), so there should be no wasted processing in one subnetwork if all the action is taking place in another.
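For reference, the core of such a single-global-queue simulation is small. A minimal sketch in Java (the names are illustrative; the same structure applies in Scala, Haskell or anything else):
    import java.util.Comparator;
    import java.util.PriorityQueue;
    import java.util.function.Consumer;

    // A minimal global-event-queue DES loop: the clock jumps from one event time to the
    // next, and nothing at all is processed in between events.
    final class Des {
        record Event(double time, Consumer<Des> action) {}

        private final PriorityQueue<Event> queue =
                new PriorityQueue<>(Comparator.comparingDouble(Event::time));
        double now;

        void schedule(double delay, Consumer<Des> action) {
            queue.add(new Event(now + delay, action));
        }

        void run(double endTime) {
            while (!queue.isEmpty() && queue.peek().time() <= endTime) {
                Event e = queue.poll();
                now = e.time();            // simulation time is defined by the current event
                e.action().accept(this);   // executing an event may schedule further events
            }
        }
    }
A node only appears in the queue when it has an event scheduled, so an idle subnetwork costs nothing between its events.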
Finally, functional reactive programming (FRP) is absolutely useful in the context of a DES. Indeed, I now write a lot of my DES simulations in Scala using this approach.
I hope this helps!
UPDATE: Since writing the above, I've used Sodium (an excellent FRP library, which was referenced by the OP in the comments), and can add some further explanation: Sodium provides a means for subscribing to events, and for performing actions when those events occur. However, here I'm using the term event in a general sense, such as a button being clicked by a user in a GUI, or a network packet arriving, etc. In other words, these events are not necessarily simulation events.
You can still use Sodium—or any other FRP library—as part of a simulation, to subscribe to simulation events and perform actions when they occur; however, these tools typically have no built-in support for simulation, and so you must incorporate a simulation engine as the source of simulation events, in the same way that a GUI is incorporated as the source of user interaction events. It is within this engine that the global event queue must reside.
Incidentally, if you are trying to perform parallel or distributed simulation model execution, things get considerably more complicated. You have multiple event queues in these situations, but they must be synchronized (giving the appearance of a single queue). The two basic approaches are conservative synchronization and optimistic synchronization.
