In TBB, is there a way to find out if there is an existing task scheduler?

In Threading Building Blocks (TBB), if I initialize two task schedulers within the same scope, the argument of the second initialization is ignored unless the first initialization was deferred. In order to avoid any conflicts, I would like to find out whether a task scheduler has already been initialized earlier in my program. Is there a way to do that? And if so, can I also find out which argument it was initialized with?

You may want to consider the tbb::this_task_arena::current_thread_index() and tbb::this_task_arena::max_concurrency() functions.
The tbb::this_task_arena::current_thread_index() function returns "tbb::task_arena::not_initialized if the thread has not yet initialized the task scheduler." (documentation link).
If the task scheduler is already initialized you can obtain the requested number of threads with the tbb::this_task_arena::max_concurrency() function. However, you cannot get the stack size used during the previous task scheduler initialization.
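For illustration, here is a minimal C++ sketch (assuming the classic tbb::task_scheduler_init API; the thread count of 4 is just an example) that probes for an existing scheduler before initializing its own:

```cpp
#include <iostream>
#include <tbb/task_arena.h>           // tbb::this_task_arena, tbb::task_arena::not_initialized
#include <tbb/task_scheduler_init.h>  // tbb::task_scheduler_init (classic TBB API)

int main() {
    if (tbb::this_task_arena::current_thread_index() == tbb::task_arena::not_initialized) {
        // No scheduler has been initialized by this thread yet, so our own
        // argument (4 threads here, as an example) will take effect.
        tbb::task_scheduler_init init(4);
        std::cout << "Initialized scheduler, max_concurrency = "
                  << tbb::this_task_arena::max_concurrency() << "\n";
        // ... run parallel work while `init` stays in scope ...
    } else {
        // A scheduler already exists: we can query its concurrency, but not
        // the stack size it was created with.
        std::cout << "Scheduler already initialized, max_concurrency = "
                  << tbb::this_task_arena::max_concurrency() << "\n";
    }
    return 0;
}
```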

Related

Is there a way to make_ref() for spawned processes in erlang?

I have tried several ways to make references to spawned processes in Erlang in order to make them compatible with the logging of From in a call to gen_server. So far I have tried P1ID = {spawn(fun() -> self() end), make_ref()}, to capture the structure of from() as stated in the documentation for gen_server:reply (Erlang documentation). I have not yet succeeded, and the documentation about make_ref() is rather scarce.
Were you attempting to build that {Pid, Ref} tuple in order to test the handle_call() gen_server callback from your tests?
If yes, you should not test these gen_server internals directly. Instead, add higher-level functions to your module (that call the gen_server call/cast/... functions) and test those; see the sketch below.
Also note that spawn() already returns a pid(), so there is no reason to return self() from the spawned process.
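For example, a minimal sketch (with a hypothetical module my_server and a hypothetical get_value call) of exposing a higher-level API function instead of building {Pid, Ref} tuples and calling handle_call/3 yourself:

```erlang
-module(my_server).
-behaviour(gen_server).

%% Public API: tests call these functions, not handle_call/3 directly.
-export([start_link/0, get_value/1]).
%% gen_server callbacks.
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

%% Higher-level wrapper: gen_server:call/2 builds the From = {Pid, Ref}
%% tuple internally, so there is no need to construct it by hand.
get_value(Pid) ->
    gen_server:call(Pid, get_value).

init([]) ->
    {ok, #{value => 42}}.

handle_call(get_value, _From, State = #{value := V}) ->
    {reply, V, State};
handle_call(_Other, _From, State) ->
    {reply, {error, unknown_call}, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

A test can then simply do {ok, Pid} = my_server:start_link(), 42 = my_server:get_value(Pid), and the From tuple never has to be forged.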
Hope it helps

Airflow tasks not running in correct order?

cleanup_pbp is downstream of all four of load_pbp_30629, load_pbp_30630, load_to_bq_30629, and load_to_bq_30630. cleanup_pbp started at 2021-12-05T08:54:48.
However, load_pbp_30630, one of the four upstream tasks, did not end until 2021-12-05T09:02:23.
How is cleanup_pbp running before load_pbp_30630 ends? I've never seen this before. I know our task dependencies have a bit of criss-cross going on, but that shouldn't explain why the tasks run out of order, should it?
We had exactly the same problem, and after some checking we finally found that it was caused by the loop in the DAG script.
We use a for loop to create the tasks as well as their relationships. Each iteration creates the upstream task and the downstream task (like cleanup_pbp in your case), always giving the same id to the downstream one and defining its relations (e.g. load_pbp_xxx >> cleanup_pbp). In the graph or tree view this downstream id appears to have several dependencies, but when the DAG runs it only keeps the relation defined in the last iteration. If you check the Task Instance you will see only one task in its upstream_list.
The solution is to move the definition of this final task out of the loop but keep the definition of the dependencies inside the loop, as in the sketch below. This resolved our ordering problem: the final task won't start until all dependencies are finished (with trigger rule all_success, of course).
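To illustrate, a minimal sketch (hypothetical DAG id, dummy callables, and only two game ids) of that fix, with the shared cleanup task created once outside the loop and only the relationships declared inside it:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _noop(**_):
    """Placeholder callable standing in for the real load/cleanup logic."""


with DAG(
    dag_id="pbp_example",
    start_date=datetime(2021, 12, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Defined once, outside the loop: a single task instance that every
    # branch feeds into.
    cleanup_pbp = PythonOperator(task_id="cleanup_pbp", python_callable=_noop)

    for game_id in ["30629", "30630"]:
        load_pbp = PythonOperator(task_id=f"load_pbp_{game_id}", python_callable=_noop)
        load_to_bq = PythonOperator(task_id=f"load_to_bq_{game_id}", python_callable=_noop)

        # Only the relationships are declared inside the loop.
        load_pbp >> load_to_bq
        [load_pbp, load_to_bq] >> cleanup_pbp
```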
I'm not sure whether you are in the same situation as us, but I hope this gives you some ideas.

Is a subprocess required in the activiti diagram for this use case?

Use case Description:
Person1 starts the workflow, assigning user tasks to multiple assignees (in parallel); similarly, those assignees assign user tasks to multiple assignees.
Confusion:
Is a subprocess required for this case?
First of all, I don't think the diagram you provided is a valid BPMN definition: you can't make a sequence flow that goes to a start event.
Is a subprocess required for this case?
It's not required, but you can use one. The main reasons to use a sub-process are:
Clarity: sub-processes make it easier to communicate your process to your clients.
Reusability: you can reuse the sub-process logic in another process.
Event separation: when creating a sub-process you are also creating a new scope for events.
Looping*: you can make your sub-process repeat until it reaches a specific condition, just like a looping task.
Multiple instances*: you can use a sub-process when you want to allow multiple executions in parallel.
P.S.: (*) Looping and multiple instances are also achievable using simple tasks, but if the process is fairly complex, using a sub-process is a better approach for maintenance and clarity reasons.

Apache Airflow "greedy" DAG execution

Situation:
We have a DAG (daily ETL process) with 20 tasks. Some tasks are independent and most have a dependency structure.
Problem:
When an independent task fails, Airflow stops the whole DAG execution and marks it as failed.
Question:
Would it be possible to force Airflow to keep executing the DAG as long as all dependencies are satisfied? This way one failed independent task would not block the whole execution of all the other streams.
It seems like such a trivial and fundamental problem that I was really surprised nobody else has an issue with this behaviour. (Maybe I'm just missing something.)
You can set the trigger rule for each individual operator; see the sketch after the list below.
All operators have a trigger_rule argument which defines the rule by which the generated task gets triggered. The default value for trigger_rule is all_success and can be described as “trigger this task when all directly upstream tasks have succeeded”. The other rules are based on direct parent tasks and are values that can be passed to any operator while creating tasks:
all_success: (default) all parents have succeeded
all_failed: all parents are in a failed or upstream_failed state
all_done: all parents are done with their execution
one_failed: fires as soon as at least one parent has failed, it does not wait for all parents to be done
one_success: fires as soon as at least one parent succeeds, it does not wait for all parents to be done
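For example, a minimal sketch (hypothetical DAG and task ids, dummy callables) where a downstream task uses the all_done trigger rule, so a failed independent upstream task does not prevent it from running:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.trigger_rule import TriggerRule


def _noop(**_):
    """Placeholder callable standing in for the real ETL steps."""


with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    independent_load = PythonOperator(task_id="independent_load", python_callable=_noop)
    extract = PythonOperator(task_id="extract", python_callable=_noop)
    transform = PythonOperator(task_id="transform", python_callable=_noop)

    # Runs once all upstream tasks are done, whether they succeeded or failed,
    # so a failure in independent_load does not block this step.
    finalize = PythonOperator(
        task_id="finalize",
        python_callable=_noop,
        trigger_rule=TriggerRule.ALL_DONE,
    )

    extract >> transform
    [independent_load, transform] >> finalize
```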

Windows Workflow 4 Abort from Internal Object?

I have a workflow (simple sequence) that calls InvokeMethod on an object. I would like to abort the entire workflow based on code within the object.
This is kind of like http://msdn.microsoft.com/en-us/site/dd560894, but that is aborting from the top down, and I want to halt the whole workflow from the bottom up. How can I do that?
Thanks.
You can do this from a NativeActivity by calling NativeActivityContext.Abort. How are you calling the InvokeMethod?
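For reference, a minimal sketch (hypothetical activity name AbortWorkflow and input argument ShouldAbort) of a custom NativeActivity that calls NativeActivityContext.Abort:

```csharp
using System;
using System.Activities;

// Drop this activity into the sequence after the InvokeMethod and bind
// ShouldAbort to a workflow variable set from the method's result.
public sealed class AbortWorkflow : NativeActivity
{
    // Hypothetical input argument deciding whether to abort.
    public InArgument<bool> ShouldAbort { get; set; }

    protected override void Execute(NativeActivityContext context)
    {
        if (context.GetValue(ShouldAbort))
        {
            // Aborts the entire workflow instance from inside the activity tree.
            context.Abort(new InvalidOperationException("Aborted by AbortWorkflow."));
        }
    }
}
```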
Do you really need to abort the workflow or is it OK if the workflow just ends? If you are happy with it ending, then I think you have two options depending on the complexity of the workflow:
Use an If activity and have one branch doing nothing when the condition (based on the result of your InvokeMethod call) is not met.
Use a Flowchart instead of a Sequence, as that will allow you to implement more complex exit logic.
