My program runs (via exec) an external program.
While running, the external program asks the user [Yes/No] whether to proceed to the next step.
Instead of typing [Yes] on the command line, how can I pass [Yes] to the external program from my program?
Unless the external program supports a respective flag (see Jonathan Leffler's answer), or you have control over that program's source and can add one, you have to simulate the "yes" input.
Options:
Try launching the external program with the output of the yes helper application piped to its stdin: yes | external_program. yes, should you not know it, is a simple tool that just writes "y" to its stdout continually.
Manually write "yes" to the stdin of the external program.
Both options require you to use pipes in one way or the other. See this for more information on how to do that.
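As a sketch of both options, with `head` standing in for the external program (since the real program reading from stdin isn't known here):

```shell
#!/bin/sh
# Option 1: the yes(1) helper answers every prompt; `head -n 3` stands
# in for an external program that reads several answers from stdin.
yes | head -n 3

# Option 2: write the answer yourself, e.g. when a single "yes" suffices.
printf 'yes\n' | head -n 1
```

Option 1 is the simplest when the program may prompt an unknown number of times; option 2 gives you control over exactly what is sent.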
The classic way to provide a 'yes' response on the command line is a -y option (usually with a parallel -n option to indicate a 'no' — see fsck(1)).
There's also room to argue that running the program should be a 'yes, I mean to do it' operation. However, there are times when it makes sense to specify 'yes, I really mean to do it' (such as one-time initialization of an instance of a DBMS).
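A minimal sketch of the -y/-n convention in a hypothetical script (the flag handling is illustrative, not taken from any real tool):

```shell
#!/bin/sh
# Hypothetical sketch of the -y / -n convention (not a real tool).
confirm() {
    assume_yes=no
    OPTIND=1
    while getopts yn opt "$@"; do
        case $opt in
            y) assume_yes=yes ;;
            n) assume_yes=no ;;
        esac
    done
    if [ "$assume_yes" = yes ]; then
        echo "proceeding without prompting"
    else
        # Interactive fallback: ask, defaulting to No on EOF.
        printf 'Proceed? [Yes/No] ' >&2
        read -r answer || answer=No
        [ "$answer" = Yes ] && echo "proceeding" || echo "aborted"
    fi
}

confirm -y
```

With -y the prompt is skipped entirely, which is exactly what callers that cannot type an answer (such as another program) need.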
When running GNU make rules with -jN, make creates a jobserver for managing the job count across sub-makes. Additionally, you can "pass the jobserver environment" to a make recipe by prefixing the line with +, e.g.:
target :
+./some/complex/call/to/another/make target
Now, instead of a sub-make, I have a (Python) script which runs some complex packaging actions (too complex for make). One of the actions it performs can itself spawn a make command.
package.stamp : $(DEPS)
+./packaging.py $(ARGS)
touch $@
Now when that make command is invoked inside packaging.py, I see:
make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
This makes some sense, because whatever environment make sets up may not be honoured or passed through by the Python script.
Is it possible to pass the jobserver references through the Python program to the sub-make, and if so, how?
There are two aspects to the jobserver that must be preserved. The first is an actual environment variable, which make uses to send options to sub-makes. That value is being preserved properly, or else make would not even know to look for the jobserver and you would not see that warning message.
The second aspect is a pair of open file descriptors which make passes to its children. Your script MUST preserve these two descriptors and leave them open when it invokes the sub-make.
You don't show us what Python code is being used to invoke the sub-make. In Python 2 the subprocess module did not close file descriptors by default (unless you passed close_fds=True), but since Python 3.2 close_fds=True is the default. Either way, the jobserver descriptors must stay open (e.g. via close_fds=False or pass_fds) if you want parallel make invocations to work properly with the jobserver.
If you're not using subprocess, then you'll have to show us what you are doing.
You should probably mark this with a python tag as it's mainly a Python question.
To summarise and clarify the answer - for the jobserver to work in your sub-processes you need to preserve:
Environment variables
The jobserver fds
One of the environment variables passed looks (for me) as follows:
MAKEFLAGS= --jobserver-fds=3,4 -j -- NAME=VALUE
jobserver-fds communicates which fds make has opened to communicate with the jobserver. For the sub-make to be able to use the jobserver you should thus preserve, or arrange to be available, those specific fds (or else rewrite the environment variable to point at whichever fds they end up on).
NAME=VALUE stands for arguments I passed to the original make invocation.
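Putting the two requirements together, here is a minimal Python sketch (the helper names and the regex are mine, for illustration). Note that close_fds=True is the default on Python 3.2+, so the jobserver descriptors must be whitelisted with pass_fds; MAKEFLAGS itself is inherited automatically through the environment:

```python
import os
import re
import subprocess

def jobserver_fds(makeflags):
    """Extract the jobserver pipe fds advertised in MAKEFLAGS, if any."""
    # Older makes advertise --jobserver-fds=R,W; newer ones use
    # --jobserver-auth=R,W for the same pair of descriptors.
    m = re.search(r"--jobserver-(?:fds|auth)=(\d+),(\d+)", makeflags)
    return tuple(int(fd) for fd in m.groups()) if m else ()

def run_submake(args):
    """Invoke a sub-make without closing the jobserver descriptors."""
    fds = jobserver_fds(os.environ.get("MAKEFLAGS", ""))
    # pass_fds keeps the two pipe ends open across exec, which is what
    # the sub-make needs to reach the parent's jobserver.
    return subprocess.call(["make"] + list(args), pass_fds=fds)
```

The recipe line invoking packaging.py must still be prefixed with + so that make exports the jobserver information in the first place.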
I believe what I am about to ask will not be possible; still, I am trying to find out whether there is a way or approach I am not aware of.
I got a broken-pipe error, and I have the PID of the destination process but not its name. Is there any way I can find out the process name (of a process which has possibly already terminated) using the PID?
As Barmar said in his comment, this isn't possible normally. The system forgets all information about processes as soon as they terminate.
But of course your processes might cooperate to help you find out more. In case you can modify the processes in question, you can let them log their PID to a special place where you can later look up which PID belonged to which process.
This won't work for programs you cannot modify, though. In these cases it still might be possible to put a wrapper around them which first logs the PID and then execs to the wanted program.
#!/bin/bash
echo "$$: $*" >> /home/alfe/var/pid.log
exec "$@"
In case you are neither starting the program in question nor can you modify it, you are out of options I fear.
I created a client-server application and now I would like to deploy it.
During development I started the server in a terminal, and when I wanted to stop it I just typed Ctrl-C.
Now I want to be able to start it in the background and stop it whenever I want by just typing:
/etc/init.d/my_service {start|stop}
I know how to write an init script, but the problem is: how do I actually stop the process?
I first thought to retrieve the PID with something like:
ps aux | grep "my_service"
Then I found a better idea, still using the PID: storing it in a file so it can be read back when stopping the service.
That still seemed too dirty and unsafe, so I eventually thought about using sockets to let the "stop" invocation tell the running process to shut down.
I would like to know how this is usually done, or rather, what the best way to do it is.
I checked some of the files in init.d, and some of them use PID files, but via a particular command, start-stop-daemon. I am a bit suspicious about this method, which seems unsafe to me.
If you have a utility like start-stop-daemon available, use it.
start-stop-daemon is flexible and can use 4 different methods to find the process ID of the running service. It uses this information (1) to avoid starting a second copy of the same service when starting, and (2) to determine which process ID to kill when stopping the service.
--pidfile: Check whether a process has created the file pid-file.
--exec: Check for processes that are instances of this executable.
--name: Check for processes with the name process-name.
--user: Check for processes owned by the user specified by username or uid.
The best one to use in general is probably --pidfile. The others are mainly intended to be used in case the service does not create a PID file. --exec has the disadvantage that you cannot distinguish between two different services implemented by the same program (i.e. two copies of the same service). This disadvantage would typically apply to --name also, and, additionally, --name has a chance of matching an unrelated process that happens to share the same name. --user might be useful if your service runs under a dedicated user ID which is used by nothing else. So use --pidfile if you can.
For extra safety, the options can be combined. For example, you can use --pidfile and --exec together. This way, you can identify the process using the PID file, but don't trust it if the PID found in the PID file belongs to a process that is using the wrong executable (it's a stale/invalid PID file).
I have used the option names provided by start-stop-daemon to discuss the different possibilities, but you need not use start-stop-daemon: the discussion applies just as well if you use another utility or do the matching manually.
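For illustration, here is a stripped-down init-script sketch combining --pidfile with --exec as described above; the daemon path and PID-file location are made-up placeholders:

```shell
#!/bin/sh
# Hypothetical /etc/init.d/my_service skeleton; DAEMON and PIDFILE
# are illustrative placeholders, not real paths.
DAEMON=/usr/local/bin/my_service
PIDFILE=/var/run/my_service.pid

my_service_ctl() {
    case "$1" in
    start)
        # --make-pidfile has start-stop-daemon write the PID file itself,
        # for services that do not create one of their own.
        start-stop-daemon --start --background \
            --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON"
        ;;
    stop)
        # Combining --pidfile with --exec ignores a stale PID file whose
        # PID now belongs to an unrelated process.
        start-stop-daemon --stop --pidfile "$PIDFILE" --exec "$DAEMON"
        ;;
    *)
        echo "Usage: $0 {start|stop}" >&2
        return 1
        ;;
    esac
}

# In the installed script, the last line dispatches:  my_service_ctl "$1"
```

The start branch also shows why --pidfile is the preferred identification method: the PID file pins down exactly one process, whereas name- or executable-based matching can be ambiguous.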
I have a java program, let's say Test.class.
When I execute java Test, the program asks for a password and then continues.
The problem is that stdout is redirected to a log and the program is launched with a trailing & (we are on UNIX).
How can I interact with the stdin and stdout of this program launched as java Test &?
One possible solution is to start the program in the foreground and then, once some condition is met, move it to the background from Java.
Thanks!
If the program can read the password from stdin, you can have a Unix script prompt for the password, then start the Java application and pass the password to it, e.g.:
echo $PASSWORD | java Test >log.out &
Or you can consider to split your Java application in two parts; there could be one interactive "front-end" part that validates the password, and then once the password is validated this could launch a "back-end" part as a background process and exit.
One option is to pipe the input to your program using echo commands:
(echo input1
echo input2
....
) | java Test >& logfile &
Alternatively, if the number of inputs is large, you can put your inputs in a file and redirect the file contents:
< input_file java Test >& logfile &
I don't see anything Java-specific in this question. If you want to drive the stdin based on the application's output, you can use the Expect utility: http://en.wikipedia.org/wiki/Expect
Beware though: Expect is notoriously fragile, so you would do well to refrain from using it in production scenarios.
Actually, if you only want to be able to enter the password, perhaps you can try launching your app in the foreground (without the trailing &).
Then, after you have entered the password, press Ctrl+Z (or, in another shell, run kill -TSTP <pid>) in order to suspend your program. Finally, type bg to put it in the background.
Read the docs about your shell's job-control functionality for details.
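The same suspend/resume dance can be scripted with signals, which is essentially what Ctrl+Z and bg do under the hood; here sleep stands in for the Java process:

```shell
#!/bin/sh
# Non-interactive illustration of job control by signal: Ctrl+Z sends
# SIGTSTP to the foreground job, and `bg` resumes it with SIGCONT.
# `sleep` stands in for the real program.
sleep 5 &
pid=$!
kill -TSTP "$pid"              # suspend, as Ctrl+Z would
kill -CONT "$pid"              # resume in the background, as bg does
kill "$pid" 2>/dev/null || :   # clean up the stand-in
```

Interactively you would not need any of this: the shell's job control sends the signals for you.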
I have an executable (no source) that I need to wrap, to make sure that it is not called more than once at a time. I immediately think of some sort of queue wrapper, but how do I actually make it so that my wrapper is called instead of the executable itself? Is there a better way to do this? The solution needs to be invisible because the users are other applications. Any information/recommendations are appreciated.
Method 1: Put the executable in some location not in the standard path. Create a shell script that checks for a sentinel file and, if the sentinel file is absent, creates it, executes the program, waits for the program to complete, then deletes the sentinel file. If the sentinel file is present, the script enters a loop with a short delay (one second? How long does a typical run of this program take? Take that and halve it), checks the sentinel file again, and so on.
Method 2: Create a separate program that does the same thing as the script, but using a system-level semaphore or lock instead. You could even simply use a read/write lock on a file. The program would do a fork() and exec() on the real program, waiting for child exit before clearing the sentinel.
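On Linux, the file-lock variant of Method 2 is nearly a one-liner with flock(1); a sketch, where the lock path and the serialised command are placeholders:

```shell
#!/bin/sh
# Serialise invocations of a command with an exclusive file lock.
# flock -x blocks until the previous holder releases the lock, so
# concurrent callers queue up instead of running in parallel.
run_exclusive() {
    lock=${TMPDIR:-/tmp}/name.lock
    flock -x "$lock" "$@"
}

# In the real wrapper this would be:  run_exclusive /usr/local/bin/name.real "$@"
run_exclusive echo "ran exclusively"
```

Because the kernel releases the lock when the process exits, this approach cannot leave a stale sentinel behind the way a hand-rolled sentinel file can.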
If the users are other applications, you can just rename the executable (e.g. name -> name.real) and call the wrapper with the original name. To make sure that it's only called once at a time, you can use the pidof command (e.g. pidof name.real) to check if the program is running already (pidof actually gives you the PID of the running process, so that you can use stuff such as kill or whatever to send signals to it).
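A sketch of that rename-and-check approach (name.real is the renamed binary, and the wrapper is installed under the original name):

```shell
#!/bin/sh
# Wrapper installed under the executable's original name; it refuses to
# start a second copy of the renamed binary name.real.
wrapper() {
    if pidof name.real >/dev/null 2>&1; then
        echo "name.real is already running" >&2
        return 1
    fi
    name.real "$@"
}
```

Note this variant rejects concurrent calls instead of queueing them, so it fits callers that can retry; use a lock-based wrapper if callers must wait their turn.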