Is there a way to read shell options in Deno? For example, to detect whether the current shell is in interactive mode, I would normally check to see if $- has an i in it:
if [[ $- == *i* ]]; then
echo "interactive"
else
echo "not interactive"
fi
I can of course use Deno.run to execute ['bash', '-c', 'echo $-'], but is there any more elegant way to get access to this information?
EDIT: Actually running a bash command to print out the shell options doesn't appear to work for me either. The subprocess always reports itself as non-interactive.
You can use Deno.isatty to make this determination. Example:
const isInteractive = Deno.isatty(Deno.stdin.rid);
console.log(`${isInteractive ? '' : 'not '}interactive`);
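A quick way to see both outcomes from a shell, assuming the snippet above is saved as check.ts (the file name is just an example):
deno run check.ts            # run from a terminal: prints "interactive"
echo hi | deno run check.ts  # stdin is a pipe: prints "not interactive"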
I need to programmatically check that a file has been distributed to an array of workers. I log into a master server and would then like to check for the file on the workers using simple ssh. So far I have:
ssh $HOSTNAME "[ -e '$HOSTNAME:/directory/filename' ] && echo 'Exists'"
Based on some of the logging output, I know the ssh is successful, but how can I get the test to return a message to the master server? Running the above returns nothing.
SSH will exit with the same exit code as the command that you run on the remote host. If that command is a test, then the exit code will match what you would normally expect from a test.
I would suggest the following:
Simplify your command to only run the test over SSH
Run the echo on your local machine
Also, the $HOSTNAME: prefix in front of your path doesn't look right: the test already runs on the remote machine, so it should be given a plain local path.
ssh "$HOSTNAME" "test -e '/directory/filename'" && echo 'Exists'
I personally find if statements to be much more easily understandable, which is an optional change if you are willing to go that route:
if ssh "$HOSTNAME" "test -e '/directory/filename'"; then
echo "Exists"
else
echo "Does not exist" >&2
exit 1
fi
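Since the question mentions an array of workers, a minimal sketch of applying the same test to all of them (assuming a bash array named WORKERS holding the hostnames) could look like this:
WORKERS=(worker1 worker2 worker3)   # hypothetical hostnames

for host in "${WORKERS[@]}"; do
  if ssh "$host" "test -e '/directory/filename'"; then
    echo "$host: exists"
  else
    echo "$host: missing" >&2
  fi
done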
I am trying to write a script that will send a notification email after a successful server restart. What is the best way to do this?
Weblogic 8.1
Probably not the best way, but assuming you are working in a Linux/Unix environment you could try this script. It watches your WebLogic server log for a keyword (I chose "in RUNNING mode").
COUNTER=0
while [ $COUNTER -le 5 ]
do
  # look for the keyword in the server log (replace the placeholder with your log path)
  grep -q "started in RUNNING mode" <full path and name of log file>
  if [ $? -eq 0 ]
  then
    mail -s 'Server started' your_email@mail.com < /dev/null
    break
  fi
  COUNTER=`expr $COUNTER + 1`
  sleep 6
done
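One way to hook this in, sketched under the assumption that the loop above is saved as watch_startup.sh and that your domain uses the usual startWebLogic.sh start script (both names are placeholders; you may also want to raise the counter limit or sleep interval, since the loop above only watches for about half a minute):
# start the server in the background, then watch its log for the keyword
nohup ./startWebLogic.sh > /dev/null 2>&1 &
./watch_startup.sh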
I want to check in a shell script if a local unix-user's passed username and password are correct. What is the easiest way to do this?
Only thing that I found while googling was using 'expect' and 'su' and then checking somehow if the 'su' was successful or not.
The usernames and password hashes are stored in the /etc/shadow file.
Just get the user's entry and password hash from there (sed would help), hash the supplied password the same way and compare.
Use mkpasswd to generate the hash.
You have to check which salt and hashing scheme your version is using. The newest shadow uses SHA-512, so:
mkpasswd -m sha-512 password salt
The man pages can help you a lot there.
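A minimal sketch of that comparison, assuming a plain SHA-512 shadow entry (no rounds= option) and the mkpasswd that ships with the whois package; it must run as root to read /etc/shadow, and the user name and password here are placeholders:
USERNAME=someuser
PASSWORD='candidate-password'

# field 2 of the shadow entry holds "$6$<salt>$<hash>" for SHA-512
stored=$(getent shadow "$USERNAME" | cut -d: -f2)
salt=$(echo "$stored" | cut -d'$' -f3)

# re-hash the candidate password with the same salt and compare
computed=$(mkpasswd -m sha-512 "$PASSWORD" "$salt")

if [ "$computed" = "$stored" ]; then
  echo "password correct"
else
  echo "password wrong"
fi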
Easier would be to use PHP and the PAM auth module; with that you can check the user, password and group access from PHP.
OK, this is the script that I used to solve my problem. I first tried to write a small C program as suggested by Aaron Digulla, but that proved much too difficult.
Perhaps this script is useful to someone else.
#!/bin/bash
#
# login.sh $USERNAME $PASSWORD
# This script doesn't work if run as root, since 'su' doesn't ask root for a password.
if [ $(id -u) -eq 0 ]; then
echo "This script can't be run as root." 1>&2
exit 1
fi
if [ ! $# -eq 2 ]; then
echo "Wrong Number of Arguments (expected 2, got $#)" 1>&2
exit 1
fi
USERNAME=$1
PASSWORD=$2
# Setting the language to English for the expected "Password:" string, see http://askubuntu.com/a/264709/18014
export LC_ALL=C
# Since we use expect inside a bash script, Tcl's $ must be escaped as \$.
expect << EOF
spawn su $USERNAME -c "exit"
expect "Password:"
send "$PASSWORD\r"
#expect eof
set wait_result [wait]
# check if it is an OS error or a return code from our command
# index 2 should be -1 for an OS error, 0 for a command return code
if {[lindex \$wait_result 2] == 0} {
exit [lindex \$wait_result 3]
} else {
exit 1
}
EOF
On Linux, you will need to write a small C program which calls pam_authenticate(). If the call returns PAM_SUCCESS, then the login and password are correct.
A partial answer would be to check whether the user name is defined in the passwd/shadow files in /etc,
and then calculate the password's MD5 hash with the salt from the shadow entry. That only works if the user's password is sent to you over SSL (or at least over some secured server terminal service).
It's just a hint, because I don't know what you actually need.
"su" is mainly there for authentication purposes.
Other topics you might look at are Kerberos/LDAP services, but those are hard topics.
I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log - and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three-digit codes are called 'series codes' and a list can be found here.
I wrote a script to transfer only one file at a time and in that script use grep to check for the 226 Transfer complete message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
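One way to act on that grep result, assuming the same "$2"_ftp.tmp command file as above:
if ftp -niv < "$2"_ftp.tmp | grep -q "^226 "; then
  echo "transfer completed"
else
  echo "transfer failed" >&2
  exit 1
fi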
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
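A minimal sketch with ncftpput (host, credentials and paths here are placeholders; the meaning of each nonzero exit code is in the man page):
ncftpput -u "$FTPUSER" -p "$FTPPASS" "$FTPHOST" /remote/dir ./datafile.csv
rc=$?
if [ $rc -ne 0 ]; then
  echo "upload failed, ncftpput exit code $rc" >&2
fi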
I think it is easier to run ftp and check its exit code to see whether something went wrong.
I did it like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is ok, everything else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but admittedly that's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which you can use. This lets you check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, the user needs to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command does not report success or failure through its exit code, so the user should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether the regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions, necessitating all that parsing, but there is a whole series of other Unix programs that can be used to transfer files by FTP if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with a small amount of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
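For example, a minimal sketch around the curl variant above (ftp_upload.log and the mail recipient are placeholders; the trailing slash makes curl keep the local file name):
curl --user "$FTPUSER:$FTPPASS" -T "$FILE" "ftp://$FTPHOST/%2f$REMOTEDIR/" > ftp_upload.log 2>&1
rc=$?
if [ $rc -ne 0 ]; then
  echo "FTP upload failed, curl exit code $rc" >> ftp_upload.log
  mail -s 'FTP upload failed' admin@example.com < ftp_upload.log
fi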
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://...) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if they contain special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
If you really, absolutely MUST use the regular FTP client, one way you can test for failure is by including, inside your script, first a command that PUTs the file, followed by another that does a GET of the same file, returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah, that stinks, but in my opinion it is better to have code that is easy to read than to do tons of parsing for every possible error condition. BSD FTP is just not all that great.
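A rough sketch of that PUT-then-GET round trip (host, credentials and file names are placeholders):
rm -f datafile.check
ftp -inv "$FTPHOST" > ftp_roundtrip.log 2>&1 <<EOF
user $FTPUSER $FTPPASS
cd $REMOTEDIR
put datafile.csv
get datafile.csv datafile.check
bye
EOF

# if the downloaded copy exists and matches the original, the upload worked
if [ -f datafile.check ] && cmp -s datafile.csv datafile.check; then
  echo "transfer verified"
else
  echo "transfer failed" >&2
fi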
Here is what I finally went with. Thanks for all the help. All the answers help lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?