Delete files older than one day with SFTP in remote server - sftp

I want to establish a cron job in order to delete some files from a remote server in which I have only SFTP access. I don't have any shell access.
What is the best way to connect to the remote server and do that?
I have installed sshpass and did something like this:
sshpass -p pass sftp user@host
But how can I pass commands in order to list the old files and delete them?

It's rather difficult to implement this using the OpenSSH sftp client.
You would have to:
list the directory using ls -l command;
parse the results (in shell or other script) to find names and times;
filter the files you want;
generate another sftp script to remove (rm) the files you found.
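For illustration, the four steps above can be sketched in shell. The server-interaction lines are commented out (they need a live server), the host, paths and listing contents are made up, and the date-filtering step is assumed to happen in an external filter, since parsing `ls -l` timestamps portably is exactly the hard part:

```shell
# Hypothetical sketch of the batch-file round trip (host and paths are made up).
workdir=$(mktemp -d)
cd "$workdir"
# Step 1 (commented out, needs a live server):
#   sshpass -p "$pass" sftp -b - user@host <<< 'ls -l /remote/dir' > listing.txt
# Fake a listing for this sketch; the exact format varies by server:
cat > listing.txt <<'EOF'
-rw-r--r--    1 user group  1024 Jan  1  2020 old.log
-rw-r--r--    1 user group  2048 Jun  5 09:30 new.log
EOF
# Steps 2-4: suppose some external filter decided old.log is stale;
# generate an sftp batch script that removes it:
awk '$NF == "old.log" { print "rm /remote/dir/" $NF }' listing.txt > rmbatch.txt
cat rmbatch.txt
# Final step (commented out): sshpass -p "$pass" sftp -b rmbatch.txt user@host
```

The fragility of the awk filter is the point: every server formats listings slightly differently, which is why the answer recommends a scripting language with a real SFTP library instead.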
A much easier and more reliable approach is to give up on the command-line sftp client. Instead, use your favorite scripting language (Python, Perl, PHP) and its native SFTP implementation.
For an example, see:
Python SFTP download files older than x and delete networked storage

In Perl:
# untested:
use Net::SFTP::Foreign;
use Fcntl ':mode';

my ($host, $user, $pwd, $dir) = (...);
my $deadline = time - 24 * 60 * 60;

my $sftp = Net::SFTP::Foreign->new($host, user => $user, password => $pwd);
$sftp->setcwd($dir);

# The "wanted" callback receives a reference to a hash with 3 keys per file:
# "filename", "longname" (like ls -l) and "a", a Net::SFTP::Foreign::Attributes
# object containing the atime, mtime, permissions and size of the file.
# If the callback returns true, the file is included in the returned array.
# A similar "no_wanted" callback exists with the opposite effect.
my $files = $sftp->ls('.',
    wanted => sub {
        my $attr = $_[1]->{a};
        return $attr->mtime < $deadline
            && S_ISREG($attr->perm);
    }) or die "Unable to retrieve file list";

for my $file (@$files) {
    $sftp->remove($file->{filename})
        or warn "Unable to remove '$file->{filename}'\n";
}


How to prevent specific files from an R package to be pushed/uploaded to Github? [duplicate]

I have some files in my repository, and one contains a secret Adafruit key. I want to use Git to store my repository, but I don't want to publish the key.
What's the best way to keep it secret, without having to blank it out every time I commit and push something?
Depending on what you're trying to achieve you could choose one of those methods:
keep the file in the tree managed by Git, but ignore it with an entry in .gitignore,
keep the file content in an environment variable,
don't use a file with the key at all; keep the key content elsewhere (external systems like HashiCorp's Vault, a database, the cloud (iffy, I wouldn't recommend that), etc.)
The first approach is easy and doesn't require much work, but you still have the problem of securely passing the secret key to every other location where you'd use the same repository. The second approach requires slightly more work and has the same drawback as the first one.
The third certainly requires more work than the 1st and 2nd, but could lead to a setup that's really secure.
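As a minimal offline sketch of the first approach (the file name and key value below are made up), you can verify the ignore rule works with `git check-ignore`:

```shell
# Hypothetical demo of option 1: the secret file stays in the working tree
# but is never committed, thanks to a .gitignore entry.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo 'secrets.py' > .gitignore
echo 'ADAFRUIT_KEY = "not-a-real-key"' > secrets.py
git check-ignore secrets.py   # prints the path when the file is ignored
```

`git check-ignore` exits 0 and echoes the path when a file matches an ignore rule, which makes it handy in CI as a guard against accidentally tracked secrets.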
This highly depends on the requirements of the project.
Basically, the best strategy security-wise is not to store keys, passwords, and in general any sensitive information inside the source control system. If that's the goal, there are many different approaches:
"Supply" this kind of information in Runtime and keep it somewhere else:
./runMyApp.sh -db.password=
Use specialized tools (like, for example, Vault by HashiCorp) to manage secrets
Encrypt the secret value offline and store only the encrypted value in Git. Without the key used for decryption, the encrypted value alone is useless. Decrypt the value again at runtime, using some kind of shared-key infrastructure or an asymmetric key pair; in the latter case you can use the public key for encryption and the private key for decryption.
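As an illustrative sketch of that last approach with an asymmetric key pair (the file names and the sample secret are made up, and OpenSSL's RSA tooling stands in for whatever key infrastructure you actually use):

```shell
# Work in a throwaway directory; in practice the private key lives
# outside the repository, e.g. only on the deployment host.
workdir=$(mktemp -d)
cd "$workdir"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem 2>/dev/null
openssl pkey -in private.pem -pubout -out public.pem
printf 'my-db-password' > secret.txt
# Encrypt with the public key; the base64 blob is what you would commit.
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt | base64 | tr -d '\n' > secret.enc.b64
# At runtime, decrypt with the private key kept outside the repo.
base64 -d secret.enc.b64 | openssl pkeyutl -decrypt -inkey private.pem
```

Note that raw RSA encryption caps the payload at roughly the key size; for anything bigger than a short password you would encrypt a symmetric key this way and use it to encrypt the data.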
I want to use Git to store my repository, but I don't want to publish the key.
For something as critical as a secret key, I would use a dedicated keyring infrastructure located outside the development environment, optionally coupled to a secret passphrase.
Aside from this case, I personally use submodules for this. Check out:
git submodule
In particular, I declare a global Git repository, in which I declare in turn another Git repository that will contain the actual project that will go public. This enables us to store at top level everything that is related to the given project but not necessarily relevant to it, and that is not to be published. This could be, for instance, all my drafts, automation scripts, worknotes, project specifications, tests, bug reports, etc.
Among all the advantages this facility provides, we can highlight the fact that you can declare as a submodule an already existing repository, this repository being located inside or outside the parent one.
And what's really interesting with this is that both the main repository and the submodules remain distinct Git repositories that can still be configured independently. This means that you don't need your parent repository to have its remote servers configured.
Working that way, you get all the benefits of a versioning system wherever you work, while still ensuring that you'll never accidentally push something that is not stored inside the public submodule.
If you don't have too many secrets to manage and you do want to keep the secrets in version control, you can make the parent repository private. It contains two folders: a secrets folder, and a Git submodule for the public repository (in another folder). I use Ansible Vault to encrypt anything in the secrets folder, and a bash script to decrypt the contents and load those secrets as environment variables, so the secrets file always stays encrypted at rest.
Ansible Vault can encrypt and decrypt an environment variable, which I wrap in a bash script to do these functions like so:
testsecret=$(echo 'this is a test secret' | ./scripts/ansible-encrypt.sh --vault-id $vault_key --encrypt)
result=$(./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret)
echo $result
testsecret here holds the encrypted base64 result, and it can be stored safely in a text file. Later you can source that file to keep the encrypted result in memory, and finally, when you need to use the secret, you can decrypt it: ./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret
The bash script referenced above (ansible-encrypt.sh) is below. It wraps the ansible-vault functions so that encrypted variables can be stored as base64, which avoids some problems that can occur with encoding.
#!/bin/bash
# This scripts encrypts an input hidden from the shell and base 64 encodes it so it can be stored as an environment variable
# Optionally can also decrypt an environment variable
vault_id_func () {
if [[ "$verbose" == true ]]; then
echo "Parsing vault_id_func option: '--${opt}', value: '${val}'" >&2;
fi
vault_key="${val}"
}
secret_name=secret
secret_name_func () {
if [[ "$verbose" == true ]]; then
echo "Parsing secret_name option: '--${opt}', value: '${val}'" >&2;
fi
secret_name="${val}"
}
decrypt=false
decrypt_func () {
if [[ "$verbose" == true ]]; then
echo "Parsing secret_name option: '--${opt}', value: '${val}'" >&2;
fi
decrypt=true
encrypted_secret="${val}"
}
IFS='
'
optspec=":hv-:t:"
encrypt=false
parse_opts () {
local OPTIND
OPTIND=0
while getopts "$optspec" optchar; do
case "${optchar}" in
-)
case "${OPTARG}" in
vault-id)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
opt="${OPTARG}"
vault_id_func
;;
vault-id=*)
val=${OPTARG#*=}
opt=${OPTARG%=$val}
vault_id_func
;;
secret-name)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
opt="${OPTARG}"
secret_name_func
;;
secret-name=*)
val=${OPTARG#*=}
opt=${OPTARG%=$val}
secret_name_func
;;
decrypt)
val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
opt="${OPTARG}"
decrypt_func
;;
decrypt=*)
val=${OPTARG#*=}
opt=${OPTARG%=$val}
decrypt_func
;;
encrypt)
encrypt=true
;;
*)
if [ "$OPTERR" = 1 ] && [ "${optspec:0:1}" != ":" ]; then
echo "Unknown option --${OPTARG}" >&2
fi
;;
esac;;
h)
help
;;
*)
if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
echo "Non-option argument: '-${OPTARG}'" >&2
fi
;;
esac
done
}
parse_opts "$#"
if [[ "$encrypt" = true ]]; then
read -s -p "Enter the string to encrypt: `echo $'\n> '`";
secret=$(echo -n "$REPLY" | ansible-vault encrypt_string --vault-id $vault_key --stdin-name $secret_name | base64 -w 0)
unset REPLY
echo $secret
elif [[ "$decrypt" = true ]]; then
result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
echo $result
else
# if no arg is passed to encrypt or decrypt, then we assume the function will decrypt the firehawksecret env var
encrypted_secret="${firehawksecret}"
result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
echo $result
fi
Storing encrypted values as environment variables is much more secure than decrypting something at rest and leaving the plaintext result in memory, which is extremely easy for any process to siphon off.
If you only wish to share the code with others and not the secrets, you can use a git template for the parent private repo structure so that others can inherit that structure, but use their own secrets. This also allows CI to pickup everything you would need for your tests.
Alternatively, if you don't want your secrets in version control, you can simply git-ignore a containing folder that houses your secrets.
Personally, this makes me nervous: it's still possible for user error to result in publicly committed secrets. Since those files live under the root of a public repo, any number of things could go wrong that could be embarrassing with that approach.
For Django
add another file called secrets.py (or whatever you want) and also another called .gitignore
add secrets.py to .gitignore
paste the secret key into the secrets.py file and import it in the settings file using
from foldername.secrets import *
This worked for me.

how to do ftp which will not ask for username and password in shell script?

I tried like this :
#!/bin/bash
hostname="xx.xx.xx.xx"
username="ftp"
password="123456"
ftp $username:$password@$hostname <<EOF
read filename
put $filename
quit
EOF
erorr is coming as below :
ftp: ftp:123456@10.64.40.11: Name or service not known
?Invalid command
Not connected.
If my question is too easy , please don't bother to answer.
I am beginner and trying to learn. Any help will be appreciated.
The problem you're running into is that the default FTP client doesn't allow you to specify the user and password along with the host on the command line like that. It's looking for a host named ftp:123456@10.64.40.11, which clearly doesn't exist. You can specify the host on the command line, but that's it. This situation and its solution are well described in this article, which contains other versions and examples.
The basic idea is to turn off "auto-login" with -n and specify the user and password inside the HERE document instead of at the command line:
#!/bin/bash
hostname="xx.xx.xx.xx"
username="ftp"
password="123456"
ftp -n $hostname <<EOF
user $username $password
read $filename
put $filename
quit
EOF
(Notice that I added the $ to read filename, which appeared to be a typo in your original version.)
There are other FTP clients that allow for user and password specification at the command line (such as ncftp), but using the one you have seems the simplest option.
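For completeness, the stock ftp client can also log in without prompting when the credentials are stored in ~/.netrc (the file must be readable only by you, i.e. chmod 600, or ftp will refuse to use it). A sketch with the placeholder values from the question:

```
machine xx.xx.xx.xx
login ftp
password 123456
```

With that in place, a plain `ftp xx.xx.xx.xx` connects and logs in automatically, which avoids embedding the password in the script at all.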

Get the latest file from a remote server from an FTP in Unix

I need to get a file from a remote host in Unix. I am using the ftp command. The issue is I need the latest file from that location. This is how I am doing it:
dir=/home/user/nyfolders
latest_file=$(ls *abc.123.* | tail -1)
ftp -nv <<EOF
open $hostname
user $username $password
binary
cd $dir
get $latest_file
bye
EOF
But I get this error:
(remote-file) usage: get remote-file [ local-file ]
I think the way I am trying to get the file from within the ftp command is incorrect, can someone please help me out?
You cannot use shell features, like aliases, piping, variables, etc., in an ftp command script.
ftp does not support such advanced features in any syntax.
You can, however, do it in two steps (making use of shell features between the steps).
First, get a listing of the remote directory into a local file (/tmp/listing.txt):
ftp -nv <<EOF
open $hostname
user $username $password
cd $dir
nlist *abc.123.* /tmp/listing.txt
bye
EOF
Find the latest file:
latest_file=`tail -1 /tmp/listing.txt`
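As an offline illustration of this selection step (file names are made up): `tail -1` returns the last line of the listing, which is the newest file only when the names sort chronologically, as date-stamped names typically do:

```shell
# Fake the listing that nlist would have written:
cat > /tmp/listing.txt <<'EOF'
abc.123.20240101
abc.123.20240215
abc.123.20240310
EOF
latest_file=$(tail -1 /tmp/listing.txt)
echo "$latest_file"   # -> abc.123.20240310
```

If the server returns names unsorted, pipe through `sort` before `tail -1`; and note this breaks entirely when the names don't encode the date, which is the subject of the next question.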
And download it:
ftp -nv <<EOF
open $hostname
user $username $password
binary
cd $dir
get $latest_file
bye
EOF

Unix FTP connection get file based on time stamp

I have a requirement to connect to an FTP server from Unix, and download a particular file which has the most recent date/time stamp.
For example here is what the file name might look like: FILE_NAME_W5215.ZIP
The "W5215" part is the date/time stamp.
If I was to try and get the latest file locally I would do something like this:
ls -t FILE_NAME_W*.ZIP | head -1
however that doesn't work on the remote server.
I don't know what OS the FTP server is running on. I know that when I establish the connection, a lot of the commands that I can do locally on Unix don't work when I'm connected to the FTP.
Any ideas, thoughts, suggestions would be greatly appreciated.
You can do something like this:
Get the file list on the ftp server into a temp file
ftp -n $SERVER >tempfile <<EOF
user $USER $PASSWORD
ls -t
quit
EOF
Get the latest filename from the list
filename=`cut -c57- tempfile|head -1`
Note: in the ls file listing, the filename starts at the 57th character position; change that if necessary
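The fixed 57-column cut is fragile, since different FTP servers format listings differently. As a hedged alternative (sample line made up), taking the last whitespace-separated field of each line is usually more robust, as long as filenames contain no spaces:

```shell
# A typical ls -l style listing line, as an FTP server might return it:
line='-rw-r--r--   1 owner    group      4096 Mar 10 12:00 FILE_NAME_W5215.ZIP'
# Print the last field, i.e. the filename:
echo "$line" | awk '{print $NF}'   # -> FILE_NAME_W5215.ZIP
```

So the latest-file selection becomes `filename=$(awk '{print $NF}' tempfile | head -1)` under the same assumption that `ls -t` sorted the listing newest-first.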
Now get that particular file from the ftp server
ftp -n $SERVER<<EOF
user $USER $PASSWORD
get $filename
quit
EOF

moving from one to another server in shell script

Here is the scenario,
$hostname
server1
I have the below script on server1:
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I log into another server, say server2, and execute the script "envscripts.ksh", which sets a few aliases (alias "qdisplay") defined in it.
I am able to log in to server2 successfully, but unable to use the aliases set by "envscripts.ksh".
Getting the below error:
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here?
Thanks,
Vignesh
The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable, however attempting to do it as remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for M072 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
my @Results;
chomp;
next if /^\s*[#*]/; # Allow comments using # or *
s/^\s+//; # Delete leading whitespace
s/\s+$//; # Delete trailing whitespace
# Do not accept hosts with embedded spaces in the name
die "ERROR: Invalid host name '$_'\n" if /\s/;
# Silently skip blank lines
next unless ($_);
my $QMgrName = uc($_);
#----------------------------------------------------------------------------
# Run the parm file in
eval {
alarm(10);
@Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
};
if ($@) {
if ($@ =~ /timeout/) {
print "Timed out connecting to $_\n";
} else {
print "Unexpected error connecting to $_: $!\n";
}
}
alarm(0);
if (@Results) {
print join("\t", @Results, "\n");
}
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
I think the real problem is that the r(o)sh command only executes the remote envscripts.ksh file, and that your script then tries to execute qdisplay on your local machine.
You need to 'glue' the two commands together so they are both executed remotely.
EDITED per comment from Gilles (He is correct)
rosh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, or give it a +1 (or -1) as a useful answer.
