This is a small snippet of code from a larger script I'm having difficulties with:
sftp -o "StrictHostKeyChecking=no" -o "IdentityFile=/export/home/myid/.ssh/id_rsa" data_cc@111.222.333.444 <<EOF
put /u01/myfiles/file.zip
EOF
The above does the task fine, but...
How do I log what the sftp command is doing? I've tried various options, but nothing seems to work.
e.g.
sftp -o "StrictHostKeyChecking=no" -o "IdentityFile=/export/home/myid/.ssh/id_rsa" data_cc@111.222.333.444 > mysftp.log <<EOF
put /u01/myfiles/file.zip
EOF
Thanks.
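For what it's worth, OpenSSH's sftp writes its progress meter and (with -v) its debug trace to stderr rather than stdout, so `> mysftp.log` alone captures almost nothing. A sketch that captures both streams, reusing the host and paths from the question:

```shell
# sftp logs to stderr, so redirect both streams into the log file:
#
#   sftp -v -o "StrictHostKeyChecking=no" \
#        -o "IdentityFile=/export/home/myid/.ssh/id_rsa" \
#        data_cc@111.222.333.444 > mysftp.log 2>&1 <<EOF
#   put /u01/myfiles/file.zip
#   EOF
#
# The same redirection pattern, demonstrated with a stand-in command
# that writes one line to each stream:
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/sftp_demo.log 2>&1
cat /tmp/sftp_demo.log
```

With `2>&1` placed after the stdout redirection, both lines land in the log file.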
Start of script looks like this:
salt-key -L >/path/minions
SERVER=$(shuf -n 1 /path/minions)
salt -C $SERVER file.write /tmp/salt_eicar.sh args='X5O!P%@
and so on...
This works fine. No problem.
This does not work:
salt -C '$SERVER and not G@os:Windows' file.write /tmp/salt_eicar.sh args='X5O!P%....
But the above line works fine at the prompt with a literal minion name instead of $SERVER. So it's something to do with using a variable inside -C, I suppose?
It's a shell quoting issue: single quotes prevent bash from expanding $SERVER. Use double quotes (") so the variable is expanded before salt parses the compound target:
salt -C "$SERVER and not G@os:Windows" file.write /tmp/salt_eicar.sh args='X5O!P%....
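To make the quoting difference concrete, here is a minimal sketch (the minion name is made up):

```shell
SERVER="minion01"   # hypothetical minion name, for illustration only

# Single quotes suppress expansion: salt would receive the literal text $SERVER
echo '$SERVER and not G@os:Windows'    # prints: $SERVER and not G@os:Windows

# Double quotes let bash expand the variable before salt ever sees it
echo "$SERVER and not G@os:Windows"    # prints: minion01 and not G@os:Windows
```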
I'm working on a home automation system using a Raspberry Pi. As part of this, I'd like the rPi to pull a file from my web server once a minute. I've been using rsync, but I've run into an error I can't figure out.
This command works fine when run at the command line on my rPi:
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress username@example.com:/home/user/example.com/cmd.txt /home/pi/sprinkler/input/cmd.txt
...but when it runs in cron, it produces this error in my log:
Unexpected local arg: /home/pi/sprinkler/input/
If arg is a remote file/dir, prefix it with a colon (:).
rsync error: syntax or usage error (code 1) at main.c(1375) [Receiver=3.1.2]
...and I just answered my own question. Extensive googling didn't turn up an answer, but putting my rsync command into a bash script and running the script from cron instead of the raw command made everything work!
I'll put this here in case anyone else stumbles over this issue. Here's a script I called "sync.sh"
#!/bin/bash
# attempting a bash shell to use rsync to grab our file
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
    --progress user@example.com:/home/user/example.com/vinhus/tovinhus/cmd.txt \
    /home/pi/sprinkler/input/
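With the command wrapped in sync.sh, the crontab entry only needs to invoke the script. A sketch, assuming the script lives in /home/pi/sprinkler and is executable:

```
# m h dom mon dow  command
* * * * * /home/pi/sprinkler/sync.sh >> /home/pi/sprinkler/sync.log 2>&1
```

Appending stdout and stderr to a log file also preserves any future rsync errors that cron would otherwise mail or silently drop.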
I have a system() command that fails in RStudio, whereas the same command works from R run in the Terminal on a Mac. Could someone hint at what is wrong here?
From RStudio:
> system('/bin/qt query -i /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/9c810678b567d9a52dec9a86/5.gz -v -d /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/94047c0ddd9fb36a892047be/0.phe.db -p "Phe=2" -g "count(UNKNOWN)<=0" -p "Phe=2" -g "HOM_ALT" > /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/9c810678b567d9a52dec9a86/out.vcf')
gqt: SQL error 'no such table: ped' in query 'SELECT BCF_ID FROM ped WHERE Phe=2 ORDER BY BCF_ID;': No such file or directory
The error message appears to come from the tool itself, but it also reports "No such file or directory", whereas the same command works from the Terminal, as shown below.
From R Terminal:
> system('/bin/qt query -i /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/9c810678b567d9a52dec9a86/5.gz -v -d /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/94047c0ddd9fb36a892047be/0.phe.db -p "Phe=2" -g "count(UNKNOWN)<=0" -p "Phe=2" -g "HOM_ALT" > /var/folders/z0/kms9x7hd6hgdtbtk3kxnjcjxw2_l57/T//RtmprObPeS/9c810678b567d9a52dec9a86/out.vcf')
The terminal command writes the output to the out.vcf file.
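One common cause of this kind of RStudio-vs-Terminal difference is the environment: a GUI-launched RStudio does not inherit the login shell's PATH or working directory. A quick way to check is to dump the environment from both sessions and diff them (the file names here are arbitrary):

```shell
# From a Terminal window:
env | sort > /tmp/env_terminal.txt

# From the RStudio console:
#   system('env | sort > /tmp/env_rstudio.txt')

# Then compare the two dumps:
#   diff /tmp/env_terminal.txt /tmp/env_rstudio.txt
wc -l < /tmp/env_terminal.txt   # a non-zero count confirms the dump worked
```

Differences in PATH or the working directory would explain a command that runs in one session and fails in the other.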
Does tcsh support launching itself in a remote directory via an argument?
The setup I am dealing with does not allow me to chdir to the remote directory before invoking tcsh, and I'd like to avoid having to create a .sh file for this workflow.
Here are the available arguments I see for v6.19:
> tcsh --help
tcsh 6.19.00 (Astron) 2015-05-21 (x86_64-unknown-Linux) options wide,nls,dl,al,kan,rh,color,filec
-b file batch mode, read and execute commands from 'file'
-c command run 'command' from next argument
-d load directory stack from '~/.cshdirs'
-Dname[=value] define environment variable `name' to `value' (DomainOS only)
-e exit on any error
-f start faster by ignoring the start-up file
-F use fork() instead of vfork() when spawning (ConvexOS only)
-i interactive, even when input is not from a terminal
-l act as a login shell, must be the only option specified
-m load the start-up file, whether or not owned by effective user
-n file no execute mode, just check syntax of the following `file'
-q accept SIGQUIT for running under a debugger
-s read commands from standard input
-t read one line from standard input
-v echo commands after history substitution
-V like -v but including commands read from the start-up file
-x echo commands immediately before execution
-X like -x but including commands read from the start-up file
--help print this message and exit
--version print the version shell variable and exit
This works, but is suboptimal because it launches two instances of tcsh:
tcsh -c 'cd /tmp && tcsh'
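One workaround, assuming the tcsh in question supports the usual `exec` builtin, is to let the first shell replace itself so only one instance remains: `tcsh -c 'cd /tmp && exec tcsh'`. The cd-then-exec behaviour can be demonstrated non-interactively with any POSIX shell:

```shell
# cd in the outer shell, then exec replaces it with a new shell
# whose working directory is already /tmp
sh -c 'cd /tmp && exec sh -c pwd'    # prints: /tmp
```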
How can I redirect only DTrace's output when running a script with the -c flag?
like in this case:
dtrace -s dscript.d -c date
Note: I found the answer to my question before posting it, but I'm putting it here so it's part of SO.
One solution with pipes is:
dtrace -o /dev/fd/3 -s dscript.d -c date 3>&1 1>out 2>err
which says:
dtrace's stdout goes to fd 3, which is duplicated from the original stdout
dtrace's stderr goes to 'err'
date's stdout is redirected to 'out'
date's stderr is redirected to 'err'
Or the simpler method is to do:
dtrace -o log.txt -s dscript.d -c date