Include dump of child processes when running procdump

Is there a procdump option such that when you run procdump on a process P it also takes a dump of all the child processes of process P?

Turns out the answer was no, so I ended up writing a wrapper around procdump that takes dumps of the child processes as well.
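For reference, a minimal sketch of such a wrapper in Python, assuming psutil is available to walk the process tree and that procdump.exe is on PATH (the dump flags and output directory are illustrative, not what the original wrapper used):

import subprocess
import psutil  # assumption: psutil is installed

def dump_process_tree(pid, out_dir="."):
    # Dump the parent first, then every child, recursively.
    parent = psutil.Process(pid)
    for proc in [parent] + parent.children(recursive=True):
        # -ma writes a full memory dump; -accepteula suppresses the EULA prompt.
        subprocess.run(["procdump", "-accepteula", "-ma", str(proc.pid), out_dir])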

Related

The Sleep built-in keyword pauses the spawned process

I am using the Process library in Robot Framework to spawn a process (a bash shell is spawned to execute the required process) to prepare the runtime environment for test cases. But when I use the Sleep keyword in my robot files, the spawned process also becomes paused. How can I avoid this issue?
Found the cause of this problem. It is not related to the process.
You can try the keyword Wait For Process: https://robotframework.org/robotframework/latest/libraries/Process.html#Wait%20For%20Process

How to stop the running cell if interrupting the kernel does not work in Jupyter Notebook

I have been using Jupyter Notebook for a while. Often when I try to stop a cell execution, interrupting the kernel does not work. In this case, what else can I do, other than just closing the notebook and relaunching it again? I guess this might be a common situation for many people.
Currently this is an open issue in the GitHub Jupyter repository as well:
https://github.com/ipython/ipython/issues/3400
There seems to be no exact solution for it except killing the kernel.
If you're ok with losing all currently defined variables, then going to Kernel > Restart will stop execution without closing the notebook.
This worked for me:
- Put the laptop to sleep (one of the power options)
- Wait 10 s
- Wake up computer (with power button)
The kernel then says it is reconnecting, and either the cell is already interrupted or you can press interrupt.
This probably isn't foolproof, but it's worth a try so you don't waste previous computation time.
(I had Windows 10 running a Jupyter Notebook that wouldn't stop running a piece of Selenium code)
There are a few options here:
- Change the folder name of the data: works if the cell is already running and pulling data from a particular folder. For example, I had a for loop that, when interrupted this way, just moved on to the next item in the list it was processing.
- Change the code in the cell to generate an error: works if the cell has not run yet but is just in the queue.
- Restart the kernel: if all else fails.
Recently I also faced a similar issue.
It turns out there is a long-standing issue in IPython, https://github.com/ipython/ipython/issues/3400, which was open for some six years and was resolved as of 1 March 2020.
One thing that might work is hitting interrupt a bunch of times. It's possible that a library you are using catches the interrupt signal and only stops after receiving the signal multiple times.
For example, when using sklearn's cross_val_score() I found that I have to interrupt once for each cross validation fold.
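As a sketch of why this happens: if each unit of work catches KeyboardInterrupt itself, one interrupt only cancels the current unit. Here run_fold is a hypothetical stand-in for one fold's work:

for fold in range(5):
    try:
        run_fold(fold)  # hypothetical long-running work for one fold
    except KeyboardInterrupt:
        # The library swallows the interrupt and moves on, so you must
        # press interrupt once per remaining fold to stop everything.
        print(f"fold {fold} interrupted")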
If you know in advance that you might want to stop without losing all your variables, the following solution might be useful:
In cells that take a while because of long loops, you may implement something like this in the loop:
import os

if os.path.exists(os.path.join(os.getcwd(), 'stop_true.txt')):
    break
Then, if you want to stop, just create the file 'stop_true.txt', and the loop stops before the next round.
Usually the file is called 'stop_false.txt' until I rename it to stop the loop.
Additionally, the results of each loop iteration are stored separately in a dictionary. Therefore I can keep all results up to the point where the break happened and restart the loop from there.
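Put together, that pattern might look like this (items and process are hypothetical placeholders for your own loop and per-item work):

import os

results = {}  # keeps every finished iteration, so nothing is lost on break
for i, item in enumerate(items):
    # Rename stop_false.txt to stop_true.txt to stop before the next round.
    if os.path.exists(os.path.join(os.getcwd(), 'stop_true.txt')):
        break
    results[i] = process(item)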
If the IPython kernel did not die, you might be able to inject Python code into it that saves important data, using pyrasite. You need to install and run pyrasite as root, i.e. with sudo python -m pip install pyrasite (or python3 as needed). Then you need to figure out the process id (PID) of the IPython kernel (e.g. via htop or ps aux | grep ipython), say 3873. Then write a script that saves the state, in a file inject.py, say, for example pickling a Pandas dataframe df that lives in the global scope:
df.to_pickle("rescued_df.pkl")
Finally, inject it into the process as follows:
sudo pyrasite 3873 inject.py
You may need to relax the ptrace restrictions first, like so:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
For me, setting a time limit worked: https://github.com/scipopt/PySCIPOpt/issues/197. Specifically, I added model.setRealParam("limits/time", 60), which automatically stops the calculation after 60 seconds; you can set any time instead of 60. This is for the pyscipopt package (solving an optimization model), though; I am not sure how to set a time limit for your specific problem.
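For illustration, here is how that parameter is set on a PySCIPOpt model (the model construction itself is omitted):

from pyscipopt import Model

model = Model()
# ... build variables and constraints here ...
model.setRealParam("limits/time", 60)  # stop the solve after 60 seconds
model.optimize()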
Try this:
- Close the browser tab in which Jupyter is running
- Run jupyter server list
- Kill each running server with jupyter server stop <PORT>
You can force the termination by deleting the cell. I copy the code, delete the cell, create a new cell, paste, and execute again. Works like a charm.
I suggest restarting the kernel (Kernel -> Restart Kernel), as suggested by #hamdog.
It will be ready to use after that. However, it will certainly delete all variables stored in memory.

Is it possible to pause an R script that is running?

I am running some analysis in R that is going to take at least 24 hours to finish. Is it possible to pause the function midway, so that I can take my computer to work and back?
This is not possible AFAIK, but I believe you can just suspend your computer and the processes will automatically be paused.
If you are using Linux, you can also stop and continue a process manually using killall -STOP R and killall -CONT R commands. Take a look at this article and the comment section there, which contain useful information regarding this.
On Windows, you can maybe use the Task Manager or install special software that is capable of doing that. But I really do not know as I do not use Windows on a regular basis.
EDIT: even if you use kill or killall to pause the process, you will still lose the data if you shut down the computer.

Running unix scripts in Java EE env?

Can anyone please share their experience of invoking unix scripts from a Java EE environment, either servlets or EJBs? Note that these scripts are to be invoked for real-time processing, not offline processing.
Spawning processes from a Java EE container is probably not the right way to do this.
If these are shell scripts they will not be portable.
If you want to use transaction support, the scripts could be rewritten as Jobs using Quartz Scheduler.
This is more likely the Java EE way to do things like that.
EDIT: With the requirements you added in the comments, this should work:
Process process = new ProcessBuilder(command).start();
More details here
Please note that if you use scripts and/or pipes (not native executables), you must include the shell to invoke the command (and to set up the pipes).
One possibility would be to write a small application that listens to a JMS queue and invokes the scripts. That way, the script execution is separated from the app server, so doesn't run into any spec limitations.
The biggest problem you will have is if your app server memory image is large, when you fork to run the script you may well run out of memory and have the fork fail. When you fork, the system needs to make a complete copy of the executable image. It doesn't make a physical copy, but it does need to make a virtual one. So, if you have a large Java EE heap, like 4G of real memory (i.e. not just Java heap, total process size), then you need an extra "free" 4G of real RAM and/or Swap for the fork to have enough virtual space to happen.
Yes, you're going to immediately exec sh or some other command that isn't going to suck up a gazillion resources. But the system can't know that, and so it needs to act as if it's going to have to run two copies of your Java EE container at once, even for a nanosecond.
If you don't have the resources for the fork, the fork fails.
If you're strapped for space, then what you can do is create a little mini exec launcher daemon. Then instead of your Java EE app forking the process, you just open a socket to your daemon, and IT forks the process. Obviously the expectation is that this little daemon is consuming much fewer resources than your container, so it's cheap to fork.
The daemon can be as simple as taking the command line to execute over the socket and just exec'ing what it gets (potentially unsafe, naturally, but...), or simple RPC with a command code and some arguments; whatever is appropriate for your project. You can write it in Java, a scripting language (Python, Perl, Ruby), whatever. Lots of ways to do that.
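As one minimal Python sketch of such a daemon, assuming a one-command-per-connection protocol (the host, port, and sh -c invocation are illustrative, and executing arbitrary received strings is exactly as unsafe as noted above):

import socket
import subprocess

HOST, PORT = "127.0.0.1", 9999  # hypothetical daemon address

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            # One command line per connection; the fork happens in this
            # small process, not in the large app-server process.
            cmd = conn.makefile().readline().strip()
            if cmd:
                out = subprocess.run(["sh", "-c", cmd], capture_output=True)
                conn.sendall(out.stdout or b"(no output)\n")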

What is the ASP.NET process for IIS 7.0?

Looking at what's running, nothing jumps out.
Thanks!
It should be w3wp.exe
EDIT: In line with Darren's comment, you should also check the "Show processes from all users" in Task Manager if that is where you are looking for the process.
Just to add something here: Process Explorer comes in handy when trying to track down a process:
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
It beats Task Manager hands down and can even be set to replace it.
Make sure you have "show all processes" checked (in Visual Studio).
Furthermore, if you need to look at the .NET/unmanaged stack, just download Process Explorer and look at your w3wp.exe processes to examine memory and other stats without having to do remote/local debugging (just look at the .NET tab in the process's properties). It will show all the .NET performance counters for that particular process.
Awesome tool!
Task Manager should show you the process: w3wp.exe is the IIS worker process. If you have multiple instances, adding the "Command Line" column will show you which application pool is hosted on each of them via the -ap switch.
Also, in the IIS Manager UI there is a "Worker Processes" feature; double-click it and you will see the list of processes and the memory and CPU they are consuming, and double-clicking an instance will show you the list of requests executing on it. Really useful when trying to figure out a "misbehaving" request.
