nohup - don't want nohup.out but want log going to a different file on the remote server - unix

I'm running the following command (the variables hold valid values for the ssh command, and $file is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I ran the above command without "nohup" before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) would get a "Connection reset...." error/message and hang my bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to get and need your help.
Change the command shown above so that it will NOT create a nohup.out.
(Did I read that I can use > instead of | tee ... and use 2>&1?)
I DO NOT want to run the command with a trailing "&" (background).
I DO want a LOG file for the sqlplus session that's running on the target DB server via the ssh command/connection (initiated from the source server).
Thanks.

You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend copying the .sql file to the target server with scp, then ssh'ing in to the server, opening a screen session and running the command from there (or running it under nohup). Is that an option?
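If that is an option, here is a minimal sketch of that approach, reusing the variables from the question; the /tmp paths, the screen session name and the omission of the set commands from the heredoc are illustrative assumptions:
# copy the .sql file to the DB server, then run sqlplus there inside a
# detached screen session, logging to a file on the remote server
sql_base=$(basename "${file}")
scp "${file}" "${ssh_user}@${dbs}:/tmp/${sql_base}"
ssh -qn "${ssh_user}@${dbs}" \
  "screen -dmS sqlrun bash -c 'sqlplus ${dbuser}/${dbpswd}@${dbname} @/tmp/${sql_base} > /tmp/sqlrun.log 2>&1'"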

Related

Running ssh script with background process

Below is a simple example of what I'm trying to accomplish. I'm trying to force an ssh script to not wait for all child processes to exit before returning. The purpose is to launch a daemon process on a remote host via ssh.
test.sh
#!/bin/bash
(
sleep 2
echo "done"
) &
When I run the script on the console it returns immediately, with "done" appearing 2 seconds later.
When I run the script over ssh, the ssh command does not return immediately; it appears to wait until all child processes have terminated before ssh exits.
ssh example
$ ssh mike@127.0.0.1 /home/mike/test.sh
(2 seconds)
done
standard terminal example
$ ./test.sh
$
(2 seconds)
done
How can I make ssh return when the parent/main process has terminated?
EDIT:
I'm aware of the -f option to ssh, which runs the command in the background. It leaves the ssh process and connection open on the source host, so for my purposes this is unsuitable.
ssh mike@127.0.0.1 /home/mike/test.sh
When you run ssh in this fashion, the remote ssh server will create a set of pipes (or socketpairs) which become the standard input, output, and error for the process which you requested it to run, in this case the script process. The ssh server doesn't end the session based on when the script process exits. Instead, it ends the session when it reads an end-of-file indication on the script process's standard output and standard error.
In your case, the script process creates a child process which inherits the script's standard input, output, and error. A pipe (or socketpair) only returns EOF when all possible writers have exited or closed their end of the pipe. As long as the child process is running and has a copy of the standard output/error file descriptors, the ssh server won't read an EOF indication on those descriptors and it won't close the session.
You can get around this by redirecting standard input and standard output in the command that you pass to the remote server:
ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'
(note the quotes are important)
This avoids passing the standard output and standard error created by the ssh server to the script process or the subprocesses that it creates.
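A quick way to observe the difference (a sketch; any host you can ssh to will do):
# holds the session open for ~5 seconds: the backgrounded sleep still owns
# the stdout/stderr pipes created by the ssh server
ssh mike@127.0.0.1 'sleep 5 & echo started'
# returns immediately: the child has no copy of those descriptors left
ssh mike@127.0.0.1 'sleep 5 > /dev/null 2>&1 & echo started'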
Alternately, you could add a redirection to the script:
#!/bin/bash
(
exec > /dev/null 2>&1
sleep 2
echo "done"
) &
This causes the script's child process to close its copies of the original standard output and standard error.
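For the daemon-launching case in the question, this is typically combined with nohup as well, so the remote child also ignores the hangup when the session goes away; a sketch using the host and script from the example above:
# nothing inherits the session's stdout/stderr, and the child ignores SIGHUP
ssh mike@127.0.0.1 'nohup /home/mike/test.sh > /dev/null 2>&1'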

paramiko and nohup

OK, so I have paramiko v2.2.1 and I am trying to log in to a machine and restart a service. Inside the service script it basically starts a process via nohup. However, if I allow paramiko to disconnect as soon as it is done, the started process terminates with a PIPE signal when it writes to stdout.
If I start the service by ssh'ing into the box and starting it manually, there is no issue and it runs in the background fine. Also, if I add a sleep of 10 seconds before disconnecting (closing) paramiko, it also seems to work just fine.
The service is started via a init.d script via a line like this:
env LD_LIBRARY_PATH=$bin_path nohup $bin_path/ServerLoop.sh \
"$bin_path/Service service args" "$#" &
Where ServerLoop.sh simply calls the service forever in a loop like this so it will never die:
SERVER=$1
shift
ARGS=$@
logger $ARGS
while [ 1 ]; do
$SERVER $ARGS
STATUS=$?
logger "$SERVER terminated with exit code: $STATUS. Server has been restarted"
sleep 1
done
I have noticed that when I start the service by ssh'ing into the box, I get a nohup.out file written to the root. However, when I run through paramiko, I get no nohup.out written anywhere on the system. I.e., this is after I manually ssh into the box and start the service:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
/nohup.out
And this is after I run through paramiko:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
As I understand it, nohup will only redirect the output to nohup.out "if standard output is a terminal" (from the manual); otherwise it assumes the output is already going to a file and does not redirect. Hence I tried the following:
In [43]: import paramiko
In [44]: paramiko.__version__
Out[44]: '2.2.1'
In [45]: ssh = paramiko.SSHClient()
In [46]: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
In [47]: ssh.connect(ip, username='root', password=not_for_so_sorry, look_for_keys=False, allow_agent=False)
In [48]: stdin, stdout, stderr = ssh.exec_command("tty")
In [49]: stdout.read()
Out[49]: 'not a tty\n'
So I am thinking that nohup is not redirecting to nohup.out when I run it through paramiko because tty does not report a terminal. I don't know why adding a sleep(10) would fix this, though, as the service, when run on the command line, is quite verbose.
I have also noticed that if the service is started from a manual ssh, its tty in the ps ax output is still set to the ssh tty; however, if the process is started by paramiko, its tty in the ps ax output is set to "?". Since both processes are run through nohup, I would have expected this to be the same.
If the problem is that nohup is indeed not redirecting the output to nohup.out because there is no tty, is there a way to force this to happen, or a better way to run this sort of command via paramiko?
Thanks all, any help with this would be great :)
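Since nohup only creates nohup.out when stdout is a terminal, one common workaround is to make the redirections explicit in the command string handed to exec_command, so that nothing the init script starts is left writing to the paramiko channel. A hedged sketch; the init script path and the log path are illustrative assumptions, not taken from the question:
# command string to pass to ssh.exec_command(); stdout/stderr go to a log
# file and stdin comes from /dev/null, so no child holds the ssh channel open
nohup /etc/init.d/myservice restart > /tmp/myservice.log 2>&1 < /dev/null &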

Unable to login to unix server using Plink in batch file

We are facing an issue with Plink while running batch files. We run the batch files with AutoSys; the batch files live on my Windows client server, and one of them calls Plink to connect to the Unix server. That connection is failing: when I run the batch script from a command prompt, Plink connects to the Unix server fine, but it does not work when AutoSys runs the batch scripts. Below is the Plink command...
call %aScrDir%plink -l %hypSrvUser% -pw %hypSrvPwd% %essSvr% "sh /xxxxxxxxxxx"
When we look at the error file generated by AutoSys, we see errors like this:
"The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is.
The server's rsa2 key fingerprint is:
ssh-rsa 2048 f9:5e:2a:4a:11:ed:40:91:80:3a:13:04:08:05:e7:ac
If you trust this host, enter "y" to add the key to
PuTTY's cache and carry on connecting.
If you want to carry on connecting just once, without
adding the key to the cache, enter "n".
If you do not trust this host, press Return to abandon the
connection.
Store key in cache? (y/n) Connection abandoned."
Could you please suggest how to avoid this situation, and where do we add the host key on the server?
We appreciate your help with this.
Option 1 : If allowed, first start a manual connection to the server and confirm the SSH key
Option 2 : If you are running the latest version, there is a -hostkey switch to specify the expected host key on the command line (see the sketch after these options)
Option 3 : Use something like
echo N | %aScrDir%plink -l %hypSrvUser% -pw %hypSrvPwd% %essSvr% "sh /xxxxxxxxxxx"
That is, pipe the "n" character to the plink command to answer "no" to the prompt asking whether to save the host key.
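A sketch of option 2, reusing the command from the question and the rsa2 fingerprint shown in the error output (this assumes a Plink version recent enough to support -hostkey):
call %aScrDir%plink -hostkey f9:5e:2a:4a:11:ed:40:91:80:3a:13:04:08:05:e7:ac -l %hypSrvUser% -pw %hypSrvPwd% %essSvr% "sh /xxxxxxxxxxx"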
Alternatively, try this; it should work fine.
cd to the plink.exe file directory
echo Y | .\plink.exe -pw xxyyxxyy root@host_ip 'ls -lah'

ssh to execute all commands in guest machine

I created a bash script, my_vp.sh, that uses two commands:
setterm -cursor off
setterm -powersave off
[...]
#execute video commands
[...]
and it is on computerA,
but when I execute it over ssh from another computer's (computerB's) terminal:
ssh pi@192.168.1.1
the video commands work correctly on computerA (the same machine where the script is),
but the setterm commands take effect on computerB (the terminal where I execute the ssh command).
Can somebody help me solve this?
thank you very much!
I am not sure I understood the question:
to execute a local script, but on another machine:
scp /path/to/local/script.bash pi@192.168.1.1:/tmp/copy_of_script.bash
and then, if it's copied correctly, execute it:
ssh pi@192.168.1.1 "chmod +x /tmp/copy_of_script.bash"
ssh pi@192.168.1.1 "bash /tmp/copy_of_script.bash"
to have the remote video (Xwindows, etc) commands appear on the originating machine:
replace ssh with ssh -X (to allow X11 forwarding, which will automatically allocate a DISPLAY on the remote machine that is tunneled back to the originating machine)
for the X forwarding to work, there are some requirements (usually OK by default, but YMMV): read more about those requirements in this Unix.se answer
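For example, combining both points with the copy made in the previous step (a sketch):
ssh -X pi@192.168.1.1 "bash /tmp/copy_of_script.bash"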

How to redirect local output to stdin over ssh to remotely execute a local script?

I am trying to remotely execute, over ssh, a Perl script that takes data from stdin.
The tricky part is that I don't want to upload the script itself to the remote server.
The data that the remote script will read from stdin is produced by another perl script run locally.
Let's assume the following:
my local script producing data is called cron_extract_section.pl
my local script that will be run remotely is called cron_update_section.pl
both scripts take one argument on the command line, a simple word
I manage to execute the script remotely, if the script is present on the remote machine:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote ~/path/to/remote/script/cron_update_section.pl ${SECTION}
I know also that i can run a script on a remote server without having to upload it first, using the following syntax:
ssh user#remote "perl - ${SECTION}" < ./cron_update_section.pl
What I can't figure out is how to feed the local script cron_update_section.pl to perl over ssh AND, at the same time, pipe the output of the local script cron_extract_section.pl to it.
I tried the following, the perl script executes fine, but there is nothing to read from stdin:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote perl - ${SECTION} < ./cron_update_section.pl
Do you know if it's possible to do so without modifying the scripts ?
Use the DATA file handle. For example:
Local script to be run on the remote machine:
# script.pl
while(<DATA>) {
print "# $_";
}
__DATA__
Then, run it as:
(cat script.pl && ./cron_extract_section.pl ${SECTION}) | ssh $host perl
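This works because perl, when reading the program from standard input, stops at the __DATA__ marker and leaves the rest of the stream to be read through <DATA>, which is where the extract script's output ends up. A quick local check, with printf standing in for the extract script (a sketch; no ssh needed to try it):
( cat script.pl && printf 'line 1\nline 2\n' ) | perl
# prints:
# # line 1
# # line 2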
