I am a newbie in the field of OS and am trying to learn it by hacking on xv6. My question is: can we decide, before making a call to fork(), whether the parent or the child runs first, using a system call? That is, could I have a function pass an argument to kernel space that decides whether the parent or the child gets to run first? The argument could be:
1-parent
0-child.
I think the problem is that fork() just creates a copy of the process and makes it runnable, but the module responsible for allowing it to run is the scheduler. Therefore, the parameter you mentioned should also provide this information to the scheduler in some way.
If you manage to do that, I think you can enqueue the two processes in the order you prefer in the runnable queue and let the scheduler pick the first runnable process.
However, you cannot control for how long the first process will run. In fact, at the next scheduling event another process might be allowed to run and the previous one would be suspended.
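For what it's worth, one low-tech way to bias the ordering without touching the scheduler's run queue is to have the parent voluntarily yield right after the fork. Below is a rough sketch for x86 xv6; the system call name forkorder, the argument convention (1 = parent first, 0 = child first) and the wiring into the syscall table are my own assumptions, not stock xv6 code:

// Hypothetical xv6 system call (sketch only, syscall-table wiring omitted).
// forkorder(int parentfirst): fork, then, if the child should go first,
// have the parent give up the CPU so the scheduler can pick the child.
int
sys_forkorder(void)
{
  int parentfirst;
  int pid;

  if(argint(0, &parentfirst) < 0)   // fetch the user-space argument
    return -1;

  pid = fork();                     // ordinary xv6 fork(): the child is now RUNNABLE
  if(pid > 0 && parentfirst == 0)
    yield();                        // parent re-enters the scheduler; the child is
                                    // usually (but not necessarily) picked next
  return pid;                       // parent gets the child's pid, the child gets 0
}

From user space you would then call something like forkorder(0) instead of fork() when you want the child to go first. Note this only biases the decision: yield() hands control back to the scheduler, which still picks whichever RUNNABLE process it likes, so the caveat above about not controlling how long a process runs still holds.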
I have an old Xamarin Android JobIntentService that I'm replacing to support Android 12.0.
I now inherit Worker instead of JobIntentService, and use WorkManager to enqueue its job.
A couple of questions:
Is there a way to use await inside the DoWork override method?
Is it better to inherit ListenableWorker instead in order to use await? Do I lose anything if I switch to it?
If I call Task.Factory.StartNew(LongRunning) from Worker's DoWork and immediately follow that with returning a success result, will my long-running task run to completion, or will all the associated work be terminated?
I think you should get to know WorkManager a little better first.
From the document Schedule tasks with WorkManager we know that:
WorkManager is the recommended solution for persistent work. Work is persistent when it remains scheduled through app restarts and system reboots. Because most background processing is best accomplished through persistent work, WorkManager is the primary recommended API for background processing.
WorkManager handles three types of persistent work:
Immediate: Tasks that must begin immediately and complete soon. May be expedited.
Long Running: Tasks which might run for longer, potentially longer than 10 minutes.
Deferrable: Scheduled tasks that start at a later time and can run periodically.
And from the Work chaining entry under Features, we know:
For complex related work, chain individual work tasks together using an intuitive interface that allows you to control which pieces run sequentially and which run in parallel.
WorkManager.getInstance(...)
.beginWith(Arrays.asList(workA, workB))
.then(workC)
.enqueue();
For each work task, you can define input and output data for that work. When chaining work together, WorkManager automatically passes output data from one work task to the next.
Note:
If you still have any problem, get back to me.
I have a single Camunda job that is configured as a multi-instance call to another process. At present, multi-instance asynchronous before, multi-instance asynchronous after, and multi-instance exclusive are all checked. We have multiple pods deployed to handle the calls (1k at a time), and right now when I try to run this, it seems like no matter what I do, it is running them serially, or close to it. What is needed to actually send all 1000 elements to multiple instances of the child process?
I tried configuring the multi-instance async settings:
Multi Instance
Loop Cardinality: blank
Collection: builtJobList
Element Variable: builtRequestObject
I then have all three multi-instance values checked. The Asynchronous Continuations are not checked.
Camunda BPM will only run a single thread (execution) within a given process instance at a time by default. You can change that behavior for a given task/activity by checking the "Asynchronous Before" and/or "Asynchronous After" checkboxes - thus electing to use the Job Executor - and deselecting the "Exclusive" checkbox. (This also applies to the similar checkboxes for multi-instance activities.) If you do that, beware that the behavior may not be what you want; specifically:
You will likely receive OptimisticLockingExceptions if you have a decent number of threads running simultaneously on a single instance. These are thrown when the individual threads attempt to update the information in the relational database for the process instance and discover that the data has been modified while they were performing their processing.
If those OptimisticLockingExceptions occur, the Job Executor will automatically retry the Job without decrementing the available retries. This will re-execute the Job, re-executing any included integration logic as well. This may not be desirable.
Although Camunda BPM has been proven to be fantastic at executing large numbers of process instances in parallel, it isn't designed to execute multiple threads simultaneously within an individual process instance. If you want that behavior within a given process instance, I would suggest that you handle the threading yourself within an individual Service Task, launching the threads you need in a fire-and-forget fashion and letting the Service Task complete within Camunda immediately after launching them, assuming that's feasible given your application's desired behavior.
Hi, I am facing a strange situation where I am trying to mark a set of tasks as complete, all concurrently.
The first one goes through, and the second one goes through sometimes (rarely), but mostly it doesn't.
When I do these individually, they work.
It feels like something to do with database locking. Is there some workaround or code for executing task and variable updates concurrently?
Do they belong to the same process instance?
And yes, there will be a db locking mechanism in place, because when you complete each task a process instance will need to move forward.
Can you please clarify what you are trying to solve? What is your business scenario?
Cheers
Activiti uses optimistic locking and this can cause problems for parallel tasks.
Typically if you use the "exclusive" flag the problems go away (https://www.activiti.org/userguide/#exclusiveJobs).
Keep in mind that jobs never actually run in parallel: the job engine selects jobs to run, and if there are multiple they will be run sequentially (which appears parallel to the user).
I recently learned about the exec() system call in Unix. Consider a process executing an exec(), the "transformed" process again executing an exec(), and so on. Then suddenly the currently executing program fails, so the context of the previous process has to be restored.
My question is: if the failures keep occurring in a cascading fashion, would the "original" context still be available? In other words, how much memory can Unix spend on saving contexts?
The exec() family are replacing system calls: they completely replace the original process image with the new one, so there is no turning back and no saved context to restore. To keep the original context, use the system() call (which is essentially a wrapper around fork(), exec(), and wait()).
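To make that concrete, here is a minimal sketch (my own example, not from the question) of the usual fork()/exec() pattern that system() hides: only the child is replaced, so the parent's context never needs "restoring"; if the exec'd program fails, the parent simply sees its exit status.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                       // duplicate the current process
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        // Child: exec replaces this image; on success nothing below runs.
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                     // reached only if exec itself failed
        _exit(127);
    }
    // Parent: untouched by the child's exec, so there is no context to restore.
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

So there is no chain of saved contexts to run out of memory on: each exec() throws the old image away, and whatever context you want to keep has to live in a separate (parent) process.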
Is the new parent always "init" or is there some way to control who gets to be the new parent?
Wikipedia seems to indicate that it's always "init". I really hope that this is not the case. I have tried everything I can think of with setpgid and setsid, but no luck. And now that I see this Wikipedia article, I need advice.
In a Unix-like operating system any orphaned process will be immediately adopted by the special init system process. This operation is called re-parenting and occurs automatically. Even though technically the process has the "init" process as its parent, it is still called an orphan process since the process that originally created it no longer exists.
Taken from wikipedia
The reason I'm asking is that I'm making a Mac app that runs a number of worker processes. I want these worker processes to appear as children of the main process in the process hierarchy of the task manager. Some of the workers run as different users, and on Mac OS X I need to fork twice to pass privileges to the child process. Because I "double fork", the workers currently run as daemons, and when looking with the task manager I see the workers have "init" as their parent process.
Orphaned children are always adopted by init. There is no Unix way of changing the parent to some non-init process.
As of Linux 3.4 this is no longer strictly true. There's still no portable Unix way of doing this, but as Andy Lutomirski points out, Linux 3.4 adds PR_SET_CHILD_SUBREAPER for prctl().
In effect, a subreaper fulfills the role of init(1) for its descendant processes.
On Linux, you can use PR_SET_CHILD_SUBREAPER to indicate that your orphaned descendants should be re-parented to you instead of to init.
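A minimal, Linux-only sketch of what that looks like (error handling trimmed; the sleep and printf are only there so you can observe the re-parenting):

#include <stdio.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    // Mark this process as a "subreaper": orphaned descendants get
    // re-parented to us instead of to init (needs Linux >= 3.4).
    if (prctl(PR_SET_CHILD_SUBREAPER, 1) == -1) {
        perror("prctl");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        // Intermediate child: fork a grandchild and exit right away,
        // orphaning it. Thanks to the subreaper it becomes our child.
        if (fork() == 0) {
            sleep(1);
            printf("grandchild: my parent is now pid %d\n", (int)getppid());
            _exit(0);
        }
        _exit(0);
    }

    wait(NULL);    // reap the intermediate child
    wait(NULL);    // reap the re-parented grandchild once it exits
    return 0;
}

With the prctl() call removed, the grandchild would typically be re-parented to init (PID 1); with it, it stays under the top process, which is the process-hierarchy behaviour the question is after.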
I think reptyr can do what you want. Check it out:
https://linux.die.net/man/1/reptyr
https://github.com/nelhage/reptyr