AnyLogic Initialize Parameters Before Sim Run - initialization

I'm definitely an AnyLogic newbie. I am trying to set up an initialization period to initialize a couple of defined variables for, say, 10 seconds prior to each simulation run. I want the output to show this "stabilization" period before the actual simulation begins. Can this be done from the "Before Simulation Run" section of the Simulation properties? If so, how?
Thanks,
Chris

No, there is no built-in warm-up period setting.
The easiest way I found is to add a statechart to Main where you move from a "warmup" state to a "running" state after some time.
Whenever you collect data, first check myStateChart.isStateActive(Main.running) so you only collect data when the model is not warming up.

Related

Delay between tasks in Airflow or any other option?

We are using Airflow 2.0. I am trying to implement a DAG that does two things:
1. Trigger reports via an API
2. Download reports from source to destination
There needs to be at least a 2-3 hour gap between tasks 1 and 2. From my research I found two options:
1. Two DAGs for the two tasks, scheduling the 2nd DAG two hours apart from the 1st DAG
2. A delay between the two tasks, as mentioned here
Is there a preference between the two options? Is there a 3rd option with Airflow 2.0? Please advise.
The other option would be to have a sensor waiting for the report to be present. You can utilise the reschedule mode of sensors to free up worker slots.
generate_report = GenerateOperator(...)
wait_for_report = WaitForReportSensor(mode='reschedule', poke_interval=5 * 60, ...)
download_report = DownloadReportOperator(...)
generate_report >> wait_for_report >> download_report
A third option would be to use a sensor between two tasks that waits for a report to become ready. An off-the-shelf one if there is one for your source, or a custom one that subclasses the base sensor.
The first two options are different implementations of a fixed waiting time. Two problems with it: 1. What if the report is still not ready after the predefined time? 2. Unnecessary waiting if the report is ready earlier.
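If you go the custom-sensor route, a minimal sketch could look like this (assuming Airflow 2.0; ReportReadySensor, the report_id parameter, and the is_report_ready() helper are hypothetical stand-ins for whatever your reporting API actually exposes):

from datetime import datetime
from airflow import DAG
from airflow.sensors.base import BaseSensorOperator

class ReportReadySensor(BaseSensorOperator):
    """Polls the reporting source until the requested report is ready."""
    def __init__(self, report_id, **kwargs):
        super().__init__(**kwargs)
        self.report_id = report_id

    def poke(self, context):
        # Return True once the report can be downloaded; returning False makes
        # the sensor check again after poke_interval. With mode='reschedule'
        # the worker slot is released between checks.
        return is_report_ready(self.report_id)  # hypothetical helper for your API

with DAG("reports", start_date=datetime(2021, 1, 1), schedule_interval="@daily") as dag:
    wait_for_report = ReportReadySensor(
        task_id="wait_for_report",
        report_id="{{ ds }}",      # e.g. key the report by execution date
        mode="reschedule",
        poke_interval=10 * 60,     # poll every 10 minutes
        timeout=6 * 60 * 60,       # fail the task if not ready within 6 hours
    )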

How to clear error messages that occur in Corda flow tests

I am trying to run tests that I created for Corda 3.3 on Corda 4.1.
I have 2 test cases for testing the flow.
In the first test I expect a failure that comes from the contract,
and the result of the first test is correct, as I expected,
but the error I got from the first test was sent to the flow hospital,
and that error then shows up in the second test.
The error from the first test does not actually affect the second test, but it makes the second test slow.
I really don't know how to clear the error message before running the second test.
If someone has any idea, please let me know. Thank you.
Note: If there is a way to avoid stopping the nodes and re-creating the mock nodes before running each new test, that is the solution I am looking for.
==============================
I have 6 tests in one file.
At first I tried to create the network once and use that network for all 6 tests; this way I can reduce the time spent initializing the network,
but I need to clear the database after each test finishes to avoid creating duplicate data.
Everything worked until I changed to Corda 4.1.
In 4.1, I don't know why, but the way I used to clear the database in Corda 3.3 no longer works as before (in 4.1 it takes a long time to truncate the tables),
so I had to switch to creating the network and stopping it after each test finishes.
This way it takes more time to initialize the network (around 20-30 seconds per test),
and the point that surprised me is that after I finish 5 tests, the 6th test takes a long time (the log shows a housekeeper clean); it takes 6 minutes to finish,
but when I run only that test, it finishes in 1 minute.
My questions are:
1. How do I clear everything after each test finishes?
2. Is there another way to initialize the network once and use it for every test, and how do I clear the database and messages after each test finishes?
The actual cause of the exception is not visible here.
But be aware that for Corda 4.x you have to put
subFlow(ReceiveFinalityFlow(otherPartySession))
as the last operation.
Not sure if this helps.
It sounds like you are sharing state between tests, which is generally bad. Consider creating a MockNetwork in JUnit's @Before method, or use the DriverDSL to create an isolated test environment for each test case.

How to run Airflow DAG for specific number of times?

How do I run an Airflow DAG a specified number of times?
I tried using TriggerDagRunOperator, and this operator works for me.
In the callable function we can check states and decide whether to continue or not.
However, the current count and states need to be maintained.
Using the above approach I am able to repeat the DAG run.
I need an expert opinion: is there any other, more robust way to run an Airflow DAG for X number of times?
Thanks.
I'm afraid that Airflow is ENTIRELY about time-based scheduling.
You can set a schedule to None and then use the API to trigger runs, but you'd be doing that externally, and thus maintaining the counts and states that determine when and why to trigger externally.
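For illustration, external triggering against the Airflow 2.x stable REST API could look roughly like this (a sketch only: it assumes the API's basic-auth backend is enabled, and the URL, DAG id, and credentials are placeholders):

import time
import requests

AIRFLOW_API = "http://localhost:8080/api/v1"   # placeholder
DAG_ID = "my_dag"                              # placeholder
AUTH = ("user", "password")                    # placeholder credentials

# Trigger 10 runs, waiting for each one to finish before starting the next,
# i.e. the count and the "is it done yet" state live in this script, not in Airflow.
for i in range(10):
    run_id = f"manual_run_{i}"
    resp = requests.post(f"{AIRFLOW_API}/dags/{DAG_ID}/dagRuns",
                         json={"dag_run_id": run_id}, auth=AUTH)
    resp.raise_for_status()
    while True:
        state = requests.get(f"{AIRFLOW_API}/dags/{DAG_ID}/dagRuns/{run_id}",
                             auth=AUTH).json()["state"]
        if state in ("success", "failed"):
            break
        time.sleep(60)   # poll once a minute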
When you say that your DAG may have 5 tasks which you want to run 10 times and a run takes 2 hours and you cannot schedule it based on time, this is confusing. We have no idea what the significance of 2 hours is to you, or why it must be 10 runs, nor why you cannot schedule it to run those 5 tasks once a day. With a simple daily schedule it would run once a day at approximately the same time, and it won't matter that it takes a little longer than 2 hours on any given day. Right?
You could set the start_date to 11 days ago (a fixed date though, don't set it dynamically), and the end_date to today (also fixed) and then add a daily schedule_interval and a max_active_runs of 1 and you'll get exactly 10 runs and it'll run them back to back without overlapping while changing the execution_date accordingly, then stop. Or you could just use airflow backfill with a None scheduled DAG and a range of execution datetimes.
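As a sketch of that fixed-window setup (the dates, DAG id, and task are placeholders; the daily schedule over the bounded start/end range gives the fixed number of back-to-back runs):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="run_fixed_number_of_times",        # placeholder
    start_date=datetime(2021, 1, 1),           # fixed, not dynamic
    end_date=datetime(2021, 1, 11),            # fixed; bounds the number of runs
    schedule_interval="@daily",
    max_active_runs=1,                         # runs go back to back, never overlapping
    catchup=True,                              # let the scheduler work through the range
) as dag:
    do_work = BashOperator(task_id="do_work", bash_command="echo run for {{ ds }}")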
Do you mean that you want it to run every 2 hours continuously, but sometimes it will be running longer and you don't want it to overlap runs? Well, you definitely can schedule it to run every 2 hours (0 0/2 * * *) and set the max_active_runs to 1, so that if the prior run hasn't finished the next run will wait then kick off when the prior one has completed. See the last bullet in https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled.
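That every-2-hours, no-overlap variant comes down to two DAG arguments (same imports and placeholders as the sketch above):

with DAG(
    dag_id="every_two_hours",                  # placeholder
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 0/2 * * *",           # every 2 hours
    max_active_runs=1,                         # next run waits for the prior one to finish
) as dag:
    do_work = BashOperator(task_id="do_work", bash_command="echo ...")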
If you want your DAG to run exactly every 2 hours on the dot [give or take some scheduler lag, yes that's a thing] and to leave the prior run going, that's mostly the default behavior, but you could add depends_on_past to some of the important tasks that themselves shouldn't be run concurrently (like creating, inserting to, or dropping a temp table), or use a pool with a single slot.
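The per-task knobs mentioned here are depends_on_past and pool; a sketch, reusing the BashOperator import from the sketch above (the single_slot pool is hypothetical and would have to be created with one slot via the UI or CLI first):

load_temp_table = BashOperator(
    task_id="load_temp_table",
    bash_command="echo create / insert into / drop temp table here",
    depends_on_past=True,     # don't start until this task's previous run succeeded
    pool="single_slot",       # or: serialize via a pool that has exactly one slot
)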
There isn't any feature to kill the prior run if your next schedule is ready to start. It might be possible to skip the current run if the prior one hasn't completed yet, but I forget how that's done exactly.
That's basically most of your options there. Also, you could create manual dag_runs for an unscheduled DAG, creating 10 at a time whenever you feel like it (using the UI or CLI instead of the API, but the API might be easier).
Do any of these suggestions address your concerns? Because it's not clear why you want a fixed number of runs, how frequently, or with what schedule and conditions, it's difficult to provide specific recommendations.
This functionality isn't natively supported by Airflow,
but by exploiting the meta-db we can cook up this functionality ourselves:
we can write a custom operator / Python operator that,
before running the actual computation, checks whether 'n' runs of the task (TaskInstance table) already exist in the meta-db (refer to task_command.py for help),
and if they do, just skips the task (raise AirflowSkipException, reference); see the sketch below.
This excellent article can be used for inspiration: Use apache airflow to run task exactly once
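A rough sketch of that guard as a Python callable (Airflow 2.0 import paths assumed; counting only successful TaskInstances and the MAX_RUNS constant are my assumptions about what 'n runs' means here):

from airflow.exceptions import AirflowSkipException
from airflow.models import TaskInstance
from airflow.utils.session import provide_session
from airflow.utils.state import State

MAX_RUNS = 10   # assumed limit

@provide_session
def run_at_most_n_times(session=None, **context):
    ti = context["ti"]   # the current task instance, injected by Airflow
    # Meta-db lookup: how many times has this exact task already succeeded?
    done = (
        session.query(TaskInstance)
        .filter(TaskInstance.dag_id == ti.dag_id,
                TaskInstance.task_id == ti.task_id,
                TaskInstance.state == State.SUCCESS)
        .count()
    )
    if done >= MAX_RUNS:
        # Skips (not fails) this task instance.
        raise AirflowSkipException(f"{ti.task_id} already succeeded {done} times")
    # ... otherwise do the actual computation here ...

The callable would then be wired up with a PythonOperator (python_callable=run_at_most_n_times).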
Note:
The downside of this approach is that it assumes historical runs of the task (TaskInstances) will be preserved forever (and correctly).
In practice, though, I've often found task_instances to be missing (we have catchup set to False).
Furthermore, on large Airflow deployments one might need to set up routine cleanup of the meta-db, which would make this approach impossible.

openCL : No. of iterations in profiling API

Trying to use clGetEventProfilingInfo for timing my kernels.
Is there any facility to specify a number of iterations over which the start-time and end-time values are reported?
If the kernel is run only once then, of course, it has a lot of overhead associated with it. So to get the best timing we should run the kernel several times and take the average time.
Do we have such a parameter when profiling with the API? (We do have such parameters when we use third-party profiling tools.)
The clGetEventProfilingInfo function will return profiling information for a single event, which corresponds to a single enqueued command. There is no built-in mechanism to automatically report information across a number of calls; you'll have to code that yourself.
It's pretty straightforward to do - just query the start and end times for each event you care about and add them up. If you are only running a single kernel (in a loop), then you could just use a wall-clock timer (with clFinish before you start and stop timing), or take the difference between the time the first event started and the last event finished.
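For example, with the Python bindings (pyopencl) rather than the C API, an averaging loop could look like the following sketch; the kernel is a trivial placeholder, and event.profile.start/end carry the same data clGetEventProfilingInfo returns for CL_PROFILING_COMMAND_START/END:

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
# Profiling must be enabled on the queue, otherwise the event has no timing data.
queue = cl.CommandQueue(ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

prog = cl.Program(ctx, """
__kernel void scale(__global float *a) { a[get_global_id(0)] *= 2.0f; }
""").build()

a = np.ones(1 << 20, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=a)

# One warm-up launch: the first enqueue typically carries one-off overhead.
prog.scale(queue, a.shape, None, buf).wait()

iterations = 100
total_ns = 0
for _ in range(iterations):
    evt = prog.scale(queue, a.shape, None, buf)
    evt.wait()   # ensure the command finished so the profiling counters are final
    total_ns += evt.profile.end - evt.profile.start   # both in nanoseconds

print("average kernel time: %.3f ms" % (total_ns / iterations / 1e6))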

steady state initialization in Modelica

For example, I have a multibody vehicle model with an initial height of, say, 0.1 meter (all wheel vertical loads = 0). As the simulation runs, the vehicle drops onto the ground, and after 10 seconds it reaches its steady state.
I wonder if it is possible to initialize the model exactly at the steady state? I read something about the homotopy operator, but I was not even sure if it is what I am looking for due to the lack of examples, so I was not able to apply it to my model. I wonder if there are any other solutions to this kind of initialization problem?
Thanks in advance!
Thanks for Matth's comments.
The web page Matth provided is very helpful, and if anyone wants to start their simulation from steady state, you should take a look.
I found some details on simulation continuation and more commands in User Manual 1, in the "Simulator API" section.
Here's one more additional question based on this one:
Is there an equivalent C function in the Dymola/source folder for ImportInitial() or ImportInitialResult()? Thanks.
