cascading failures in exec system call - unix

I recently learned about the exec() system call in Unix. Consider a process that executes exec(), then the "transformed" process executes exec() again, and so on. Now suppose the currently executing program fails, so the context of the previous process has to be restored.
My question is: if failures keep occurring in this cascading fashion, would the "original" context still be available? In other words, how much memory can Unix spend on saving contexts?

The exec() family are replacing calls: they completely replace the current process image with the new one, so nothing of the old program is kept and there is no turning back. The kernel never stacks up saved contexts, so the cascade you describe cannot happen. If you want to keep the original context, fork() first and exec() in the child, or use the system() call (which is essentially a wrapper around fork(), exec(), and wait()).
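For illustration, here is a minimal C sketch of that fork()/exec()/wait() pattern; the command being run ("ls -l") is just a placeholder:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the current process */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child: replace this process image; on success execlp() never returns. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if the exec failed */
            _exit(127);
        }
        /* Parent: its context is untouched; it simply waits for the child. */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }

If the exec() fails, only the child is affected; the parent's "original" context was never at risk.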

Related

How to kill a thread, stop a promise execution in Raku

I am looking for a way to stop (send an exception to) a running promise on SIGINT. The examples given in the doc exit the whole process, not just one worker.
Does someone know how to "kill", "unschedule", or "stop" a running thread?
This is for a p6-jupyter-kernel issue or this REPL issue.
My current solution restarts the REPL but does not kill the blocked thread:
    await Promise.anyof(
        start {
            ENTER $running = True;
            LEAVE $running = False;
            CATCH {
                say $_;
                reset;
            }
            $output :=
                self.repl-eval($code, :outer_ctx($!save_ctx), |%adverbs);
        },
        $ctrl-c
    );
Short version: don't use threads for this, use processes. Killing the running process probably is the best thing that can be achieved in this situation in general.
Long answer: first, it's helpful to clear up a little confusion in the question.
First of all, there's no such thing as a "running Promise"; a Promise is a data structure for conveying a result of an asynchronous operation. A start block is really doing three things:
Creating a Promise (which it evaluates to)
Scheduling some code to run
Arranging that the outcome of running that code is reflected by keeping or breaking the Promise
That may sound a little academic, but really matters: a Promise has no awareness of what will ultimately end up keeping or breaking it.
Second, a start block is not - at least with the built-in scheduler - backed by a dedicated thread, but rather runs on the thread pool. Even if you could figure out a way to "take out" the thread, the thread pool scheduler is not going to be happy about having one of the threads it expects to eat from the work queue disappear. You could write your own scheduler that really does back each piece of work with a fresh thread, but that still isn't a complete solution: what if the piece of code the user has requested execution of schedules work of its own, and then awaits that? Then there is no single thread to kill to really bring things to a halt.
Let's assume, however, that we did manage to solve all of this, and we get ourselves a list of one or more threads that we really want to kill without their cooperation (cooperative situations are fairly easy; we use a Promise and have code poll that every so often and die if that cancellation Promise is ever kept/broken).
Any such mechanism that wants to be able to stop a thread blocked on anything (not just compute, but also I/O, locking, etc.) would need deep integration and cooperation from the underlying runtime (such as MoarVM). For example, trying to cancel a thread that is currently performing garbage collection will be a disaster (most likely deadlocking the VM as a whole). Other unfortunate cancellation times could lead to memory corruption if it was half way through an operation that is not safe to interrupt, deadlocks elsewhere if the killed thread was holding locks, and so forth. Thus one would need some kind of safe-pointing mechanism. (We already have things along those lines in MoarVM to know when it's safe to GC, however cancellation implies different demands. It probably cross-cuts numerous parts of the VM codebase.)
And that's not all: the same situation repeats at the Raku language level too. Lock::Async, for example, is not a kind of lock that the underlying runtime is aware of. Probably the best one can do is try to tear down the callstack and run all of the LEAVE phasers; that way there's some hope (if folks used the .protect method; if they just called lock and unlock explicitly, we're done for). But even if we manage not to leak resources (already a big ask), we still don't know - in general - if the code we killed has left the world in any kind of consistent state. In a REPL context this could lead to dubious outcomes in follow-up executions that access the same global state. That's probably annoying, but what really frightens me is folks using such a cancellation mechanism in a production system - which they will if we implement it.
So, effectively, implementing such a feature would entail doing a significant amount of difficult work on the runtime and Rakudo itself, and the result would be a huge footgun (I've not even enumerated all the things that could go wrong, just the first few that came to mind). By contrast, killing a process clears up all resources, and a process has its own memory space, so there's no consistency worries either.
There is currently no way to stop a thread if it doesn't want to be stopped.
A thread can check a flag every so often and decide to call it quits if that flag is set. It would be very nice if we had a way to throw an exception inside a thread from another thread, but we do not, at least not as far as I know.
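That flag-checking approach is not Raku-specific; here is a minimal sketch of the same cooperative pattern in C with POSIX threads (the name stop_requested is just illustrative), where the worker polls an atomic flag and exits on its own:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_bool stop_requested;           /* the cancellation "flag" */

    static void *worker(void *arg) {
        (void)arg;
        while (!atomic_load(&stop_requested)) {  /* poll the flag between work units */
            /* ... do one small unit of work ... */
            usleep(100 * 1000);
        }
        puts("worker: stop requested, cleaning up");
        return NULL;                             /* the thread ends cooperatively */
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);                                /* let the worker run for a while */
        atomic_store(&stop_requested, true);     /* ask it to stop */
        pthread_join(t, NULL);                   /* wait for it to comply */
        return 0;
    }

The key point is the same as in the answers above: the worker stops because it chooses to check the flag, not because anything forcibly kills it.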

activiti taskService complete fails when executed concurrently

Hi, I am facing a strange situation where I am trying to mark a set of tasks as complete, all concurrently.
The first one goes through, and the second one goes through sometimes (rarely), but mostly it doesn't.
When I complete them individually, they work.
I suspect it has something to do with database locking. Is there some workaround or code for executing task completions and variable updates concurrently?
Do they belong to the same process instance?
And yes, there will be a db locking mechanism in place, because when you complete each task a process instance will need to move forward.
Can you please clarify what you are trying to solve? What is your business scenario?
Cheers
Activiti uses pre-emptive locking and this can cause problems for parallel tasks.
Typically if you use the "exclusive" flag the problems go away (https://www.activiti.org/userguide/#exclusiveJobs).
Keep in mind that jobs never actually run in parallel, the job engine selects jobs to run and if there are multiple they will be run sequentially (which appears to be parallel to the user).

Race Condition in xv6

I am a newbie in the field of OS and am trying to learn it by hacking on xv6. My question is: can we decide, before making the call to fork, whether to run the parent or the child first, using a system call? I.e., could a function pass an argument to kernel space that decides whether the parent or the child runs first? The argument could be:
1 - parent
0 - child
I think the problem is that fork() just creates a copy of the process and makes it runnable, but the module responsible for allowing it to run is the scheduler. Therefore, the parameter you mentioned should also provide this information to the scheduler in some way.
If you manage to do that, I think you can enqueue the two processes in the order you prefer in the runnable queue and let the scheduler pick the first runnable process.
However, you cannot control for how long the first process will run. In fact, at the next scheduling event another process might be allowed to run and the previous one would be suspended.
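If changing the kernel is not a hard requirement, a purely user-space workaround is to synchronize the two processes yourself. The sketch below (plain C; xv6 has the same fork/pipe/read/write primitives, though its library calls differ slightly) does not change which process the scheduler dispatches first, but it does guarantee the child's work happens before the parent continues:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* "Child first": the parent blocks on read() until the child has
     * finished its part and written one byte into the pipe.          */
    int main(void) {
        int fd[2];
        char buf;

        pipe(fd);
        if (fork() == 0) {
            printf("child runs its part first\n");
            write(fd[1], "x", 1);     /* signal the parent */
            _exit(0);
        }
        read(fd[0], &buf, 1);         /* parent waits for the signal */
        printf("parent continues after the child\n");
        wait(NULL);
        return 0;
    }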

process re-parenting: controlling who is the new parent

Is the new parent always "init", or is there some way to control who gets to be the new parent?
Wikipedia seems to indicate that it's always "init". I really hope that this is not the case. I have tried everything I can think of with setpgid and setsid, but no luck. And now that I see this Wikipedia article I need advice.
In a Unix-like operating system any orphaned process will be immediately adopted by the special init system process. This operation is called re-parenting and occurs automatically. Even though technically the process has the "init" process as its parent, it is still called an orphan process since the process that originally created it no longer exists.
Taken from Wikipedia
The reason I'm asking is that I'm making a Mac app that runs a number of worker processes. I want these worker processes to appear as children of the main process in the process hierarchy of the task manager. Some of the workers run as different users, and on Mac OS X I need to fork twice to pass privileges to the child process. Because I "double fork", the workers currently run as daemons, and when looking with the task manager I see that the workers have "init" as their parent process.
Orphaned children are always adopted by init. There is no Unix way of changing the parent to some non-init process.
As of Linux 3.4 this is no longer strictly true. There's still no portable Unix way of doing this but as Andy Lutomirski points out Linux 3.4 adds PR_SET_CHILD_SUBREAPER for prctl.
In effect, a subreaper fulfills the role of init(1) for its descendant processes.
On Linux, you can use PR_SET_CHILD_SUBREAPER to indicate that your orphaned descendants should be re-parented to you instead of to init.
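Note that PR_SET_CHILD_SUBREAPER is Linux-only, so it won't help directly on Mac OS X, but for reference, a minimal C sketch of marking a process as a subreaper before double-forking might look like this:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/wait.h>

    int main(void) {
        /* Mark this process as a "subreaper": orphaned descendants are
         * re-parented to it instead of to init (Linux >= 3.4).         */
        if (prctl(PR_SET_CHILD_SUBREAPER, 1) == -1) {
            perror("prctl");
            return 1;
        }

        if (fork() == 0) {                  /* intermediate child */
            if (fork() == 0) {              /* grandchild: the "worker" */
                sleep(1);                   /* by now the middle process is gone */
                printf("worker: my parent is now pid %d\n", (int)getppid());
                _exit(0);
            }
            _exit(0);                       /* middle process exits immediately */
        }

        wait(NULL);                         /* reap the intermediate child */
        wait(NULL);                         /* reap the re-parented worker */
        return 0;
    }

On a kernel with subreaper support, the worker should report the top process's PID rather than 1.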
I think reptyr can perform what you want. Check it out:
https://linux.die.net/man/1/reptyr
https://github.com/nelhage/reptyr

what are threads in actionscript functions?

I've seen a lot of other developers refer to threads in ActionScript functions. As a newbie I have no idea what they are referring to so:
What is a thread in this sense?
How would I run more than one thread at a time?
How do I ensure that I am only running one thread at a time?
Thanks
~mike
Threads are a way to have a program appear to perform several jobs concurrently, although whether the jobs can actually occur simultaneously depends on several factors (most importantly, whether the CPU the program is running on has multiple cores available to do the work). Threads are useful because they allow work to be done in one context without interfering with another context.
An example will help to illustrate why this is important. Suppose that you have a program which fetches the list of everyone in the phone book whose name matches some string. When people click the "search" button, it will trigger a costly and time-consuming search, which might not complete for a few seconds.
If you have only a single-threaded execution model, the UI will hang and be unresponsive until the search completes. Your program has no choice but to wait for the results to finish.
But if you have several threads, you can offload the search operation to a different thread, and then have a callback -- a trigger which is invoked when the work is completed -- to let you know that things are ready. This frees up the UI and allows it to continue to respond to events.
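Since ActionScript itself has no threads (as the rest of this answer explains), here is what that offload-to-a-worker pattern looks like in a language that does have them - a minimal C/POSIX-threads sketch where do_search and the two-second delay are purely illustrative:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* A stand-in for the slow phone-book search that would otherwise freeze the UI. */
    static void *do_search(void *arg) {
        const char *query = arg;
        sleep(2);                                   /* pretend this takes a while */
        printf("search for \"%s\" finished\n", query);
        return NULL;                                /* here one would fire the callback */
    }

    int main(void) {
        pthread_t worker;
        pthread_create(&worker, NULL, do_search, (void *)"smith");

        /* The "UI" thread keeps responding while the search runs elsewhere. */
        for (int i = 0; i < 3; i++) {
            printf("main thread still handling events...\n");
            sleep(1);
        }

        pthread_join(worker, NULL);                 /* collect the result */
        return 0;
    }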
Unfortunately, because ActionScript's execution model doesn't support threads natively, it's not possible to get true threading. There is a rough approximation called "green threads": threads that are controlled by an execution context or virtual machine rather than by the operating system, which is how it's usually done. Several people have taken a stab at it, although I can't say how widespread their usage is. You can read more at Alex Harui's blog here and see an example of green threads for ActionScript here.
It really depends on what you mean. The execution model for ActionScript is single-threaded, meaning it cannot run a process in the background.
If you are not familiar with threading, it is essentially the ability to have something executed in the background of a main process.
So, if you needed to do a huge mathematical computation in your Flex/Flash project, with a multi-threaded program you could do that in the background while simultaneously updating your UI. Because ActionScript is not multi-threaded, you cannot do such things. However, you can create a pseudo-threading class as demonstrated here:
http://blogs.adobe.com/aharui/pseudothread/PseudoThread.as
The others have described what threading is; you'd need threading if you were getting hardcore into C++, 3D game engines, and other computationally expensive work, in languages that support multi-threading.
Actionscript doesn't have multi-threading. It executes all code in one frame. So if you create a for loop that processes 100,000,000 items, it will cause the app to freeze. That's because the Flash Player can only execute one thread of code at a time, per frame.
You can achieve pseudo-threading by using:
Timers
Event.ENTER_FRAME
Those allow you to jump around and execute code.
Tween engines like TweenMax can operate on 1000's of objects at once over a few seconds by using Timers. You can also do this with Event.ENTER_FRAME. There is something called "chunking" (check out Grant Skinner's AS3 Optimizations Presentation), which says "execute computationally expensive tasks over a few frames", like drawing complex bitmaps, which is a pseudo-multi-threading thing you can do with actionscript.
A lot of other things are asynchronous, like service calls. If you make an HTTPService request in Flex, it will send a request to the server and then continue executing code in that frame. Once it's done, the server can still be processing that request (say it's saving a 30mb video to a database on the server), and that might take a minute. Then it will send something back to Flex and you can continue code execution with a ResultEvent.RESULT event handler.
So Actionscript basically uses:
Asynchronous events, and
Timers...
... to achieve pseudo-multi-threading.
A thread allows you to execute two or more blocks of ActionScript simultaneously. By default you will always be executing on the same default thread unless you explicitly start a new thread.
