Automatically move a log file from a Unix-based server to a PC - unix

I'm trying to fetch some data from a Unix-based server to my PC automatically, i.e. I want the data to be transferred to my PC, say, every 30 minutes. I have the Unix command for fetching the data, but I run it through PuTTY and the output is stored on the server only. I would like the data to be stored in a folder on my local PC instead.
tail -n 10000 conveyor2.log | grep -P 'curing result OK' | sed 's/FT\/FT//g' | awk '{print $5 $13}' | uniq | sort -n | uniq >> my_data.txt

If you are currently using PuTTY to connect to the server, then you could also use "pscp" or "plink" on the Windows side to execute a transfer to your PC.
You would first need to understand how to do that from a command line.
For example:
pscp -i mykey.ppk user@serverName:logfileName targetName
(Using "-i mykey.ppk" allows you to bypass password prompts. You will need to create "mykey.ppk" using puttygen.)
You could then put that in a .BAT file or PowerShell script and run it as a Windows "scheduled task", or get fancy and set up a service (which is well beyond the scope of this question).
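A minimal sketch of such a batch file and scheduled task, where the key path, server name, and file paths are placeholders you would replace with your own:

rem fetch_log.bat - pull the generated data file down to the PC
pscp -i C:\keys\mykey.ppk user@serverName:/home/user/my_data.txt C:\logs\my_data.txt

rem register it to run every 30 minutes
schtasks /create /tn "FetchConveyorLog" /tr "C:\scripts\fetch_log.bat" /sc minute /mo 30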

For this, you can first create a mount point of your PC on the Unix server; this is done with Samba (CIFS).
You need root access on both the Unix server and the Windows machine:
mount -t cifs //"ip-address of Windows system"/e$/ftp -o username="username",password="password" /"mount point name"
After doing this, you can write the log file directly to the Windows machine.
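For instance, assuming the share is mounted at /mnt/windows_share and the IP address, username, and password below are placeholders for your own, the original pipeline can then append straight into a folder on the Windows machine:

mkdir -p /mnt/windows_share
mount -t cifs //192.168.1.10/e$/ftp -o username=winuser,password=secret /mnt/windows_share
tail -n 10000 conveyor2.log | grep -P 'curing result OK' | sed 's/FT\/FT//g' | awk '{print $5 $13}' | uniq | sort -n | uniq >> /mnt/windows_share/my_data.txt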

Related

rsync: how to copy only the latest file from target to source

We have a main Linux server, say M, where we have files like below (for 2 months, and new files arriving daily)
Folder1
PROCESS1_20211117.txt.gz
PROCESS1_20211118.txt.gz
..
..
PROCESS1_20220114.txt.gz
PROCESS1_20220115.txt.gz
We want to copy only the latest file to our processing server, say P.
So far, we have been using the command below on our processing server.
rsync --ignore-existing -azvh -rpgoDe ssh user@M:${TargetServerPath}/${PROCSS_NAME}_*txt.gz ${SourceServerPath}
This process worked fine until now, but going forward the processing server can keep files for only 3 days, whereas the main server keeps files for 2 months.
So when we remove older files from the processing server, the rsync command copies all the files from the main server to the processing server again.
How can I change the rsync command to copy only the latest file from the main server?
*Note: the example above is only for one file. We have multiple files on which we have to use the same command. Hence we cannot hardcode any filename.
What I tried:
There are multiple solutions, but they all seem to cover copying the latest file from the server I am running rsync on, not from the remote server.
I also tried running the command below to get the latest file from the main server, but I cannot pass variables to SSH at my company, as it is not allowed. So the command below works if I pass an individual path/file name, but not with variables.
ssh M 'ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz|tail -1'
Would really appreciate any suggestions on how to implement this solution.
OS: Linux 3.10.0-1160.31.1.el7.x86_64
ssh quoting is confusing: the remote command is assembled locally, so to expand local variables you have to double-quote it locally (and quote the expanded values properly for the remote shell).
The handy printf %q trick helps here: use it to quote the relevant parts.
file=$(
  ssh M "ls -1 $(printf "%q" "${TargetServerPath}/${PROCSS_NAME}")_*.txt.gz" |
    tail -1
)
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"$file" "${SourceServerPath}"
Or, maybe nicer, run tail -n1 on the remote side, so that a minimum amount of data is transferred (we only need one filename, not all of them); invoke an explicit shell and pass the variables as shell arguments:
file=$(ssh M "$(printf "%q " bash -c \
  'ls -1 "$1"_*.txt.gz | tail -n1' \
  '_' "${TargetServerPath}/${PROCSS_NAME}"
)")
Overall, I recommend writing a function and using declare -f:
sshqfunc() { echo "bash -c $(printf "%q" "$(declare -f "$1"); $1 \"\$@\"")"; };
work() {
  ls -1 "$1"_*txt.gz | tail -1
}
tmp=$(ssh M "$(sshqfunc work)" _ "${TargetServerPath}/${PROCSS_NAME}")
Or you can also use the mighty declare to transfer variables to the remote side, then run your command inside single quotes:
ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
'
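Whichever variant you use to obtain the name of the latest file, you can then feed the result to the same rsync invocation so that only that one file is copied; a sketch reusing the variables above:

file=$(ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
')
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"${file}" "${SourceServerPath}"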

File Transfer to Hadoop HDFS from remote linux server

I need to transfer files from a remote Linux server directly to HDFS.
I have a keytab placed on the remote server; after the kinit command it is activated, however I cannot browse the HDFS folders. I know that from the edge nodes I can directly copy files to HDFS, but I need to skip the edge node and transfer the files directly to HDFS.
How can we achieve this?
Let's assume a couple of things first. You have one machine on which the external hard drive is mounted (named DISK) and one cluster of machines with ssh access to the master (we denote by master in the command line the user@hostname part of the master machine). You run the script on the machine with the drive. The data on the drive consists of multiple directories with multiple files in each (around 100); the numbers don't matter, they just justify the loops. The path to the data is stored in the ${DIR} variable (on Linux it would be /media/DISK and on Mac OS X /Volumes/DISK). Here is what the script looks like:
DIR=/Volumes/DISK;
for d in $(ls ${DIR}/); do
  for f in $(ls ${DIR}/${d}/); do
    cat ${DIR}/${d}/${f} | ssh master "hadoop fs -put - /path/on/hdfs/${d}/${f}";
  done;
done;
Note that we go over each file and we copy it into a specific file because the HDFS API for put requires that "when source is stdin, destination must be a file."
Unfortunately, it takes forever. When I came back the next morning, it had only done a fifth of the data (100 GB) and was still running, basically taking 20 minutes per directory! I ended up going with the solution of copying the data temporarily onto one of the machines of the cluster and then copying it locally to HDFS. For space reasons, I did it one folder at a time, deleting each temporary folder immediately afterwards. Here is what that script looks like:
DIR=/Volumes/DISK;
PTH=/path/on/one/machine/of/the/cluster;
for d in $(ls ${DIR}/); do
  scp -r -q ${DIR}/${d} master:${PTH}/
  ssh master "hadoop fs -copyFromLocal ${PTH}/${d} /path/on/hdfs/";
  ssh master "rm -rf ${PTH}/${d}";
done;
Hope it helps!

Running a shell script on a remote Linux server from a local Windows machine?

I am trying to run a shell script that executes a binary on a remote Linux box. Both the binary and the shell script are on my local Windows machine. Is there any way I can run the binary on the remote machine directly from Windows through command-line tools like PLINK?
I don't want to copy the binary and the script to all the remote Linux boxes I want them to run on; instead, I want to run, directly from my local machine, the shell script that will in turn invoke the binary and do the desired work.
You can run the shell script remotely, just by piping it through ssh:
cat my_script.sh | ssh -T my_server
(Or whatever the Windows/plink equivalent is.)
However, you can't run the binary remotely through a pipe; the file will have to exist on the remote server. You can do this by pushing the file from your Windows machine to a known location on the remote server, and then editing your script to expect the file to exist in that location:
scp my_binary my_server:/tmp
cat my_script.sh | ssh -T my_server
And then just have your script run that binary:
/tmp/my_binary
Or you can write the script so that it pulls the binary file from a central location where you're hosting it:
wget -O /tmp/my_binary http://my_fileserver/my_binary
/tmp/my_binary
Note, if the shell script doesn't do anything else besides invoke the binary, then you don't need it. You can just fire the commands directly through ssh:
ssh -T my_server "cd /tmp && wget http://my_fileserver/my_binary && ./my_binary"
You will have to copy the binary to the remote Linux box before it can be executed. However, you could have a script on the Windows machine that uses sftp to transfer the binary program to a temporary directory under /tmp before running it, so there is no manual setup required.
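A rough sketch of what that wrapper could look like on the Windows side, using the PuTTY tools mentioned above (the host name and paths are placeholders):

rem push the binary and run it, no manual setup on the Linux box
pscp my_binary my_server:/tmp/my_binary
plink -batch my_server "chmod +x /tmp/my_binary && /tmp/my_binary"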

nohup - don't want nohup.out but want the log going to a different file on the remote server

I'm running the following command (where the variables have valid values for the ssh command and $file is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I used the above command without "nohup" before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) would get a "Connection reset...." error/message and hang my bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to get and need your help.
Change the command shown above so that it will NOT create a nohup.out.
(Did I read that I can use > instead of | tee ... and use 2>&1?)
I DO NOT want to run the command with a trailing "&" (background).
I DO want a LOG file for the sqlplus session that's running on the target DB server via ssh command/connection (initiated from source server).
Thanks.
You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend that you copy the sql file via scp to the target server, then ssh in to the server, open a screen session and run the command from there (or run it under nohup). Is that an option?
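If you do stay with the ssh-from-source approach: nohup only creates nohup.out when stdout is still attached to the terminal, so redirecting stdout and stderr to your own log file avoids it, and a | tee on the remote side leaves a log on the DB server as well. A sketch along those lines (heredoc dropped for brevity, and /tmp/sqlplus_run.log is just a placeholder for the remote log location):

nohup ssh -qn ${ssh_user}@${dbs} \
  "sqlplus ${dbuser}/${dbpswd}@${dbname} @${file} | tee /tmp/sqlplus_run.log" \
  > "${sql_run_output_file}.ssh.log" 2>&1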

How to redirect local output to stdin over ssh to remotely execute a local script?

I am trying to remotely execute, over ssh, a Perl script that takes data from stdin.
The tricky part is that I don't want to upload the script itself to the remote server.
The data that the remote script will read from stdin is produced by another Perl script run locally.
Let's assume the following:
my local script producing data is called cron_extract_section.pl
my local script that will be run remotely is called cron_update_section.pl
both scripts take one argument on the command line, a simple word
I manage to execute the script remotely, if the script is present on the remote machine:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote ~/path/to/remote/script/cron_update_section.pl ${SECTION}
I know also that i can run a script on a remote server without having to upload it first, using the following syntax:
ssh user@remote "perl - ${SECTION}" < ./cron_update_section.pl
What I can't figure out is how to feed the local script cron_update_section.pl to perl over ssh, AND also pipe the output of the local script cron_extract_section.pl to perl.
I tried the following; the perl script executes fine, but there is nothing to read from stdin:
./cron_extract_section.pl ${SECTION} 2> /dev/null | ssh user@remote perl - ${SECTION} < ./cron_update_section.pl
Do you know if it's possible to do so without modifying the scripts ?
Use the DATA file handle. For example:
Local script to be run on the remote machine:
# script.pl
while (<DATA>) {
  print "# $_";
}
__DATA__
Then, run it as:
(cat script.pl && ./cron_extract_section.pl ${SECTION}) | ssh $host perl
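Since both of your scripts take ${SECTION} on the command line, the same trick should carry the argument through if you invoke perl with "- ${SECTION}" on the remote side (perl reads the program from stdin, and DATA then continues from the same stream). This assumes cron_update_section.pl reads its input via the DATA handle after a __DATA__ token, as in the example above:

(cat cron_update_section.pl && ./cron_extract_section.pl ${SECTION} 2> /dev/null) | ssh user@remote "perl - ${SECTION}"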
