I am using the PuTTY plink command-line utility to run a few scripts on my UNIX server. I use the -m option as:
plink -ssh -pw xxx myserver -m file.txt
The file file.txt contains a list of commands to be executed and is generated dynamically by an application program. Some of the commands in file.txt can run for hours, which makes the user wait a long time. Moreover, I am only interested in the execution of the first line of each script.
So I want to make sure a Ctrl+C is sent just after each script starts, so that the complete script is not run. So instead of using the following in my file.txt:
script1
script2
script3
I want to use:
script1
Ctrl+C command
script2
Ctrl+C command
script3
Ctrl+C command
Can anyone help me write this Ctrl+C into my file.txt?
Thanks a lot
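A literal Ctrl+C cannot be embedded in an -m command file, since Ctrl+C is an interactive keystroke interpreted by the terminal rather than a command. As a rough sketch of one workaround, assuming GNU coreutils' timeout is available on the server and a fixed duration long enough for the first line of each script is acceptable, the generated file.txt could wrap each script so it is killed shortly after it starts:

# hypothetical file.txt: each script is sent SIGTERM after 5 seconds;
# the 5-second duration is a placeholder, not something from the question
timeout 5 script1
timeout 5 script2
timeout 5 script3

This kills on a timer rather than exactly after the first line, so the duration has to be tuned to how long the first command takes.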
Apologies, as I have not tried this before.
Hi,
I need to create an Oozie workflow that executes a shell script. The shell script has a curl command which downloads a specific file from the client's repo.
As the commands in shell scripts are only able to recognize HDFS directories, how could I execute the script?
Let's say below is the sample code:
curl -o ~/test.jar http://central.maven.org/maven2/commons-lang/commons-lang/2.6/commons-lang-2.6.jar
hdfs dfs -copyFromLocal ~/test.jar /user/sr/test2
How can I execute the script with the above two commands using Oozie?
I found the answer...
data=$(curl http://central.maven.org/maven2/commons-lang/commons-lang/2.6/commons-lang-2.6.csv)
echo "$data" | hdfs dfs -appendToFile - /path/to/hdfs/directory/PPP.csv
I am trying to run a Unix script, which populates our Aged Debt table for our finance department, from SSIS, but cannot get my head around it. The script has to be run as user "username", and the script to run is:
P1='0*99999999' P2='2015_03_25*%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r
I believe that I need to have SSH configured on both sides to do this, and I believe that I can do this from the "Execute Process Task", but I don't think that I am populating the parameters correctly.
Can anyone help?
I currently do this using PuTTY/plink. Like sorrell says above, you use an Execute Process Task to call a batch file. That batch file calls plink. I pass plink the shell script on the UNIX server that I want it to execute.
example of batch file:
echo y | "d:\program files\putty\plink.exe" [username@yourserver.com] -pw [password] -v sh /myremotescriptname.sh
The echo y at the beginning tells plink to accept the server's host key, so the batch file is not stopped by an interactive prompt.
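For the Aged Debt script above, the batch file would quote the whole remote command line. A sketch, with host, password, and paths as placeholders (note that percent signs are special in .bat files, so each % in the remote command is doubled):

rem hypothetical wrapper.bat, called from the SSIS Execute Process Task
echo y | "d:\program files\putty\plink.exe" username@yourserver.com -pw password "P1='0*99999999' P2='2015_03_25*%%%%YY*Y' P3='Y*0.0' P4='Y*0.0' P5='Y*0.0' P6='Y*0.0' P7='Y*0.0' P8='Y*0.0' /cer_cerprod1/exe/par50219r"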
I've got a script on my computer named test.py. What I've been doing so far to run the program is type python test.py into the terminal.
Is there a command on Unix operating systems that doesn't require the user to specify the program he/she uses to run the script but that will instead run the script using whichever program the shebang line is pointing to?
For example, I'm looking for a command that would let me type some_command test.txt into the terminal, and if the first line of test.txt is #!/usr/bin/python, the script would be interpreted as a Python script, but if the first line is #!/path/to/javascript/interpreter, the script would be interpreted as JavaScript.
This is the default behavior of the shell (or of executing a file in general); all you have to do is make the script executable with
chmod u+x test.txt
Then (assuming test.txt is in your current directory) every time you type
./test.txt
it will look at the shebang line and use the program named there to run test.txt.
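For instance, a minimal sketch (the file name and contents are just illustrative):

#!/bin/sh
# test.txt: the kernel reads the #! line above and runs this file with /bin/sh
echo "hello from the interpreter named on the first line"

Swap the first line for #!/usr/bin/python and the same ./test.txt invocation would run the file with Python instead.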
If you really want to duplicate built-in functionality, try this.
#!/bin/sh
# Extract the interpreter from the file's #! line and exec it with the
# file and any remaining arguments; otherwise try to run the file directly.
x=$1
shift
p=$(sed -n 's/^#!//p;q' "$x" | grep .) && exec $p "$x" "$@"
exec "$x" "$@"
echo "$0: $x: No can do" >&2
Maybe call it start to remind you of the similarly useful Windows command.
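For example, assuming the script above is saved as start and made executable:

chmod u+x start
./start test.txt arg1   # runs test.txt via whatever its #! line names, passing arg1
./start prog.bin arg1   # no #! line, so it falls back to executing the file directly

prog.bin here is just a hypothetical binary to show the fallback path.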
I want to save the Unix terminal commands and output to a file for a session, because sometimes a command's output is so large that I cannot get back to it by scrolling up in the terminal.
Why not pipe the output to tee? That will record it in a file and dump it to the console, so you can see in real time what's going on.
$ mycommand | tee filename.log
Note that the above will only record stdout. If you need to record stderr too, then redirect accordingly:
$ mycommand 2>&1 | tee filename.log
(assuming you're using sh or a compatible shell - most likely)
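To append to an existing log instead of overwriting it, add tee's -a flag:

$ mycommand 2>&1 | tee -a filename.log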
Use the script filename command.
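For example (script is the standard util-linux/BSD session recorder; with no argument it writes to a file named typescript):

$ script session.log   # start recording; everything typed and printed is saved
$ ls -l /tmp           # ...work as usual...
$ exit                 # stop recording; session.log now holds the transcript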
I am trying to remotely execute a Perl script that takes data from stdin, over SSH.
The tricky part is that I don't want to upload the script itself to the remote server.
The data that the remote script will read from stdin is produced by another Perl script run locally.
Let's assume the following:
my local script producing data is called cron_extract_section.pl
my local script that will be run remotely is called cron_update_section.pl
both scripts take one argument on the command line, a simple word
I manage to execute the script remotely, if the script is present on the remote machine:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote ~/path/to/remote/script/cron_update_section.pl ${SECTION}
I also know that I can run a script on a remote server without having to upload it first, using the following syntax:
ssh user@remote "perl - ${SECTION}" < ./cron_update_section.pl
What I can't figure out is how to feed the local script cron_update_section.pl over SSH to perl, AND also pipe the result of the local script cron_extract_section.pl to it.
I tried the following; the Perl script executes fine, but there is nothing to read from stdin:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote perl - ${SECTION} < ./cron_update_section.pl
Do you know if it's possible to do so without modifying the scripts ?
Use the DATA file handle. For example:
Local script to be run on the remote machine:
# script.pl
while(<DATA>) {
print "# $_";
}
__DATA__
Then, run it as:
(cat script.pl && ./cron_extract_section.pl ${SECTION}) | ssh $host perl
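Mapped back to the original file names, the combined command might look like the sketch below: everything up to __DATA__ in cron_update_section.pl is what the remote perl executes, and the appended output of cron_extract_section.pl is what the script then reads from the DATA handle (host and section are placeholders):

(cat cron_update_section.pl && ./cron_extract_section.pl ${SECTION}) | ssh user@remote perl - ${SECTION}

Note that the script reads from DATA rather than STDIN here, which is what lets the single pipe carry both the program and its data.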