In AnyLogic, is there a way to free my agents from a Wait block after a specified amount of time?

One of the processes in my production line is a clamp station. Pieces of wood are glued together and can't be moved until their drying time is complete. What would you suggest using to model this in AnyLogic? I was thinking of a Wait block, but I am not sure how to free an agent after a given amount of time.

There's a timeout option in the Wait block that you can use to set a defined timeout. You can find it in the Advanced section of the block's properties: a checkbox called "Enable exit on timeout".
Note that the exit port for the timeout is on the top right of the block.

Related

Can a parent process determine if its child has received a SIGINT?

Suppose I launch a parent process, which launches a subprocess, but then the parent receives a SIGINT. I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
If I can determine that the child also received a SIGINT, then it is probably cleaning up on its own. In that case, I'd prefer to briefly wait while it finishes and exits on its own. But if it did not receive a SIGINT, then I will send it a SIGTERM (or SIGKILL) immediately and let the parent proceed with its own cleanup.
How can I figure out if the child received the SIGINT? (Leaving aside the fact that it might not even respond to SIGINT...) Do I just have to guess, based on whether or not the parent is running in the foreground process group? What if the SIGINT was sent programmatically, not via Ctrl+C?
How can I figure out if the child received the SIGINT?
Perhaps you can't. What should matter to you is whether the child handled the SIGINT (it could have ignored it). See my answer to your other question.
However, in many cases the signal triggered by Ctrl+C is sent to the whole foreground process group, so the child may have received it too.
In pathological cases, your entire system is thrashing and the child process has not even been scheduled yet to process the signal.
I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
Maybe you want to use daemon(3) somewhere?
BTW, I don't fully understand your question, because I have to guess at its (unstated) motivation. Do you care about job control, or are you implementing a shell? In what concrete cases do you really care that the child got the SIGINT, and what does that mean to you?
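
For what it's worth, here is a minimal C sketch of the fallback strategy the question itself describes (the grace period, polling interval, and the placeholder child command are illustrative assumptions): the parent traps SIGINT, briefly waits for the child to exit on its own, then escalates to SIGTERM and finally SIGKILL, reaping the child so no zombie is left.

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig) { (void)sig; got_sigint = 1; }

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);

    pid_t child = fork();
    if (child == 0) {
        signal(SIGINT, SIG_DFL);       /* child handles SIGINT its own way */
        execlp("sleep", "sleep", "60", (char *)NULL);  /* placeholder work */
        _exit(127);
    }

    while (!got_sigint)
        pause();  /* a robust version would use sigsuspend() to avoid a race */

    /* Grace period: give the child ~2 seconds to clean up by itself.
       If Ctrl+C hit the whole foreground group, it may already be gone. */
    for (int i = 0; i < 20; i++) {
        if (waitpid(child, NULL, WNOHANG) == child)
            return 0;                  /* child exited on its own */
        usleep(100000);
    }

    kill(child, SIGTERM);              /* escalate */
    sleep(1);
    if (waitpid(child, NULL, WNOHANG) != child) {
        kill(child, SIGKILL);          /* make sure it dies */
        waitpid(child, NULL, 0);       /* reap it: no zombie left behind */
    }
    return 0;
}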

Erlang supervisor with one critical child

We are in the process of reorganizing our application's supervision tree to handle failures and restarts more robustly. However, we have a scenario with one parent supervisor that starts four child supervisors. The problem is that the first child supervisor starts several child gen_servers that must be started and initialized before the second child supervisor starts, or it will fail.
So, I need a startup like the following:
test_app.erl -> super_supervisor -> [config_supervisor, auth_supervisor, rest_supervisor]
The trick I'm having trouble with is that config_supervisor must complete all of its initialization before auth_supervisor or rest_supervisor is started. With the rest_for_one strategy I get essentially this behavior, but only by letting auth_supervisor fail because the needed config is not there yet. I would prefer to simply require that config_supervisor complete its initialization (which includes starting several gen_servers) before moving on to auth_supervisor.
This seems like a common scenario that has been solved before, but I am having a hard time googling a solution. Does anybody have advice or sample code for this scenario?
Supervisors start their children synchronously, each one in turn, in the order they occur in the child spec list. So your super_supervisor will start its children in the right order simply by listing them in that order: first config_supervisor, then auth_supervisor, and finally rest_supervisor. A supervisor must (successfully) start all of its children before it is itself considered started. So if config_supervisor has as its children all the processes that must be started during initialization, super_supervisor will not start the other supervisors until config_supervisor is done.
In this case you would not need rest_for_one to ensure the right startup order, as long as the children are in the right order in the child spec list.
A worker process (gen_server/gen_fsm/gen_event) is considered started when its init callback returns.
Have I understood your description and question correctly?
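
If it helps, here is a sketch of super_supervisor along those lines (the child modules are the ones from the question; the restart intensities are illustrative). Children are started synchronously, in list order, so config_supervisor finishes completely, including all of its gen_servers, before auth_supervisor is started. rest_for_one additionally restarts the later supervisors if an earlier one dies; plain one_for_one would do if only start order matters.

%% Minimal sketch: start order is the child spec list order.
-module(super_supervisor).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    {ok, {{rest_for_one, 5, 10},
          [{config_supervisor,
            {config_supervisor, start_link, []},
            permanent, infinity, supervisor, [config_supervisor]},
           {auth_supervisor,
            {auth_supervisor, start_link, []},
            permanent, infinity, supervisor, [auth_supervisor]},
           {rest_supervisor,
            {rest_supervisor, start_link, []},
            permanent, infinity, supervisor, [rest_supervisor]}]}}.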
You could also move config_supervisor into its own application and list that application as a dependency of the main one. The config application will then be started first, and only afterwards will the main supervisor start auth_supervisor and the rest.
Did you look at the rest_for_one restart strategy? It seems that it should be convenient in this case: the middle supervisor starts the gen_servers in a defined order, and last comes the leaf supervisor, which in turn starts the critical process.

How to automatically update the time in CICS

I have two questions; the first is the main one.
1. I was able to display the date in a CICS map, but I need it to tick, i.e., the display should be updated every second.
2. I have a COBOL-DB2 program which automatically writes data from the DB2 database to a file. I want this program to be invoked on a schedule, i.e., every hour, every two hours, or every day.
Thank you
You can do this, but you will need to modify the traditional pseudo-conversational approach. Instead of returning and waiting for a user event, you can START your transaction after some number of seconds with your current COMMAREA and quit. If a user event occurs in that time, you can cancel your START request; if it doesn't, you can refresh the screen timestamp and repeat.
It is kind of a pain just to get a timestamp refreshed, and it doesn't make much sense to bother with unless you have a really good reason.
The DB2 stuff is plain easy. Start your tran using interval control, the same START AFTER() described above, and you can have it run hourly, or bihourly, or whatever.
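A sketch of that interval-control approach (the transaction id 'DBEX' and the one-hour interval are illustrative assumptions): the extract transaction reschedules itself each time it runs, so every run sets up the next one an hour later.

EXEC CICS START TRANSID('DBEX')
     INTERVAL(010000)
END-EXEC.

CANCEL the outstanding request (via its REQID) when the cycle should stop.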
I don't think you need to modify your pseudo-conversational approach to achieve this. Just issue an EXEC CICS START command with a one-second delay (do this once) for a small program that simply issues a SEND MAP (or terminal control write) to the terminal facility. Ideally, reserve a common area on the screen so all transactions can use a common program. At some point, when the updates are no longer required, CANCEL the START request. The way I see it, the timer update transaction will mix in nicely with your user-initiated transaction flow. If a user transaction is active when the START timer pops, the timer update program will just be delayed a little.
While this should work, you need to bear in mind that you might be driving 3,600 transactions per hour for each user. Is this feature really worth all that?
This is not possible in standard CICS using maps. The 3270 protocol does not lend itself to continually updating screens. Most automatically updating screens, such as consoles and monitoring displays, use native VTAM methods, building their own data streams.
It might be possible to do this using unformatted data, but I would not recommend it in CICS. Pseudo-conversational CICS leaves no program in control while the screen is displayed, and conversational programming is highly discouraged.
You can't really do this in CICS, which was designed for pseudo-interactive responses at best. It was designed for mainframes where the terminal was sent a whole page or screen at a time; the user updated some fields (if you didn't change a field, the terminal did not send its data back), and the CICS transaction, having received just the parts of the screen containing changes, sent the response back and quit.

This makes for very efficient data entry and inquiry programs. But realize that when the program has finished processing the screen, it has quit; it is gone, not even in memory any more, and all of its resources have been reclaimed. That is how a company could run a mainframe with 300 terminals on maybe 10 megabytes of real memory: while a program is waiting for you to respond, it is using no resources at all. If 200 people are running a data entry program, they are all running the same copy of the same re-entrant program, sharing a single module of perhaps 20K, with maybe 1K of writable storage per user for the part that reads a screen or a file record and does some calculations.

Think about that for a moment: the first user of that data entry program costs 20K of memory for the application plus 1K of writable data; each additional user costs only another 1K. And while a user is just sitting there looking at the terminal, all they may be using is 4 bytes in a table telling the system a terminal is connected.

Updating a screen on a regular basis means something has to keep running, which is not something CICS does very well. CICS is not intended for interactive processing the way a PC works, where your program actually runs live on the machine.
EXEC CICS ASKTIME END-EXEC to update the timestamp.
EXEC CICS SEND MAP DATAONLY END-EXEC to update the screen.
However, using the suggested
EXEC CICS START TRANSID ('name' | namefld)
DELAY (time)
END-EXEC.
is actually the better way.

Receiving a specific message from POSIX Message Queue

I am supposed to write a C application on Unix in which N child processes are forked from the parent process; the parent will send messages to these children, and the children are supposed to send messages to each other.
However, the problem is that I need to send messages to a specific target child process, i.e., the parent sends to child 1, child 1 sends to child 2, ..., and child N sends to child 1 (circularly).
The problem is that if I create only one message queue, any of the N children may dequeue the message (since any of them may be scheduled to run after the parent), so the message could be dequeued by the wrong process!
In my application there will be at most one message in the queue at a time. The only solution that comes to mind is to create N different message queues and post each message to the appropriate queue so that only the intended process receives it. But I think there must be a more legitimate solution.
Any ideas?
Constraints: pipes between processes are not allowed. I know that mq is inefficient here; I'll implement both, since both are required. P.S. This is kind of homework (yes, I am the creator of http://canyoudomyhomework.com), but it's not just homework; it's a challenging question IMHO.
Depending on the performance requirements, a brokered (router) solution feels most appropriate.
The parent could act as the router, or could spawn a specific process to do this job.
Define a simple message structure whose first element is the intended target; we can designate the parent process as target zero.
Each process has only one queue, between itself and the broker. All messages are processed and routed in one place, thereby avoiding the NxN fan-out you mention.
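A minimal sketch of such a broker follows (error handling omitted; the queue names "/to_broker" and "/proc_N", the payload size, and NCHILD are illustrative assumptions). Every process sends to the broker's single inbound queue; the broker forwards each message to the private queue of its target, so no process can steal a message meant for another.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

#define NCHILD 4

struct msg {
    int  target;    /* intended recipient: 0 = parent, 1..NCHILD = children */
    char payload[64];
};

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = sizeof(struct msg) };

    /* One inbound queue: every process sends to the broker. */
    mqd_t inbound = mq_open("/to_broker", O_RDONLY | O_CREAT, 0600, &attr);

    /* One outbound queue per process: only its owner reads from it. */
    mqd_t out[NCHILD + 1];
    for (int i = 0; i <= NCHILD; i++) {
        char name[32];
        snprintf(name, sizeof name, "/proc_%d", i);
        out[i] = mq_open(name, O_WRONLY | O_CREAT, 0600, &attr);
    }

    /* Route each message to its target's queue -- all routing happens
       in this one place. */
    struct msg m;
    while (mq_receive(inbound, (char *)&m, sizeof m, NULL) >= 0) {
        if (m.target >= 0 && m.target <= NCHILD)
            mq_send(out[m.target], (const char *)&m, sizeof m, 0);
    }
    return 0;
}

(Link with -lrt on Linux.)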
Good Luck

VXML Beep at recording timeout

The VXML application I mentioned in a previous question is now in testing. After allowing the user to record a message (max length 5 minutes) we go into a standard menu (submit, playback, re-record, etc).
One of our testers, bored as anything and tired of leaving a 5-minute message, was reading an email we had sent, including the phone number. She said 'Two' just after the menu started, having heard only a split second of it.
Needless to say, she was very confused.
The right way to fix this, to me, seems to be to add a definitive stop to the recording, like the beep that begins it.
The <record> element has a beep attribute that plays a tone at the start of the recording, which we use. I can't find reference to any attribute that would beep when the user reaches the maximum time.
How can I add an uninterruptible beep at the end of a <record> when it has reached maxtime?
Well, a bit more digging tells me that there's no attribute for it on the <record> element.
But I did turn up a property on the resulting variable, name$.maxtime, that tells you whether the recording ended due to a maximum-time-exceeded event.
By checking that in the <filled> section, before sending the caller to the menu, I am able to use an <audio> block inside a <prompt bargein="false"> to play an audio file.
It's a bit more code than I'd like, but it seems to work.
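
For reference, a sketch of what this looks like (the audio file name and the menu target are illustrative; msg$.maxtime is the <record> shadow variable that is true when recording stopped because maxtime was reached):

<record name="msg" beep="true" maxtime="300s">
  <filled>
    <!-- Recording hit the 5-minute cap: play an uninterruptible beep -->
    <if cond="msg$.maxtime">
      <prompt bargein="false">
        <audio src="end_beep.wav"/>
      </prompt>
    </if>
    <goto next="#menu"/>
  </filled>
</record>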
