check unix username and password in a shell script - unix

I want to check in a shell script whether a local unix user's supplied username and password are correct. What is the easiest way to do this?
The only thing I found while googling was an approach using 'expect' and 'su', then somehow checking whether the 'su' succeeded.

The usernames and password hashes are stored in the /etc/shadow file.
Just get the user and the password hash from there (sed would help), hash the supplied password the same way, and compare.
Use mkpasswd to generate the hash.
You have to check which scheme and salt your version is using. The newest shadow uses SHA-512, so:
mkpasswd -m sha-512 password salt
The man pages can help you there a lot.
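As an illustration, here is a minimal sketch of that compare-the-hashes approach (assumptions on my part: it runs as root, since /etc/shadow is root-readable; the hash is SHA-512 in the form $6$salt$hash with no rounds= field; and mkpasswd from the whois package is installed):
#!/bin/bash
# checkpw.sh USERNAME PASSWORD  (hypothetical helper; passing a password as an
# argument is visible in `ps`, so treat this strictly as a sketch)
USERNAME=$1
PASSWORD=$2
# Pull the stored hash field for the user out of /etc/shadow.
stored=$(awk -F: -v u="$USERNAME" '$1 == u {print $2}' /etc/shadow)
# For $6$salt$hash, the salt is the third $-separated field.
salt=$(echo "$stored" | cut -d'$' -f3)
# Recompute the hash with the same salt and compare.
computed=$(mkpasswd -m sha-512 "$PASSWORD" "$salt")
if [ -n "$stored" ] && [ "$computed" = "$stored" ]; then
    echo "password correct"
else
    echo "password incorrect"
fi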
Easier would be to use PHP and its PAM auth module; there you can check group access, password, and user via PHP.

OK, now this is the script that I used to solve my problem. I first tried to write a small C program as suggested by Aaron Digulla, but that proved much too difficult.
Perhaps this script is useful to someone else.
#!/bin/bash
#
# login.sh $USERNAME $PASSWORD
# This script doesn't work if it is run as root, since 'su' then doesn't ask for a password.
if [ $(id -u) -eq 0 ]; then
    echo "This script can't be run as root." 1>&2
    exit 1
fi
if [ ! $# -eq 2 ]; then
    echo "Wrong number of arguments (expected 2, got $#)" 1>&2
    exit 1
fi
USERNAME=$1
PASSWORD=$2
# Set the language to English for the expected "Password:" string, see http://askubuntu.com/a/264709/18014
export LC_ALL=C
# Since we use expect inside a bash script, we have to escape the Tcl $.
expect << EOF
spawn su $USERNAME -c "exit"
expect "Password:"
send "$PASSWORD\r"
expect eof
set wait_result [wait]
# Check if it is an OS error or a return code from our command.
# Index 2 should be -1 for an OS error, 0 for a command return code.
if {[lindex \$wait_result 2] == 0} {
    exit [lindex \$wait_result 3]
} else {
    exit 1
}
EOF

On Linux, you will need to write a small C program which calls pam_authenticate(). If the call returns PAM_SUCCESS, then the login and password are correct.
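If writing C is not an option, one hedged alternative (my suggestion, not part of this answer) is the third-party pamtester utility, which wraps pam_authenticate() and reports the result through its exit code:
# Assumes the pamtester package is installed; "login" is an assumed PAM
# service name - use one that exists on your system. Prompts for the password.
if pamtester login "$USERNAME" authenticate; then
    echo "credentials OK"
else
    echo "authentication failed"
fi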

A partial answer would be to check the user name: is it defined in the passwd/shadow files in /etc?
Then calculate the password's MD5 with the salt. This assumes the user's password is sent over SSL (or at least some secure terminal service).
It's just a hint, because I don't know what you actually need;
"su" is mainly meant for authentication purposes.
Other topics you might look at are Kerberos/LDAP services, but those are hard topics.

Related

How to prevent specific files from an R package to be pushed/uploaded to Github? [duplicate]

I have some files in my repository, and one contains a secret Adafruit key. I want to use Git to store my repository, but I don't want to publish the key.
What's the best way to keep it secret, without having to blank it out every time I commit and push something?
Depending on what you're trying to achieve, you could choose one of these methods:
keep the file in the tree managed by git but ignore it with an entry in .gitignore
keep the file content in an environment variable
don't use a file for the key at all; keep the key content elsewhere (external systems like HashiCorp's Vault, a database, the cloud (iffy, I wouldn't recommend that), etc.)
The first approach is easy and doesn't require much work, but you still have the problem of passing the secret key in a secure manner to the different locations where you'd use the same repository; a sketch of the first two approaches follows below. The second approach requires slightly more work and has the same drawback as the first one.
The third certainly requires more work than the 1st and 2nd, but could lead to a setup that's really secure.
This highly depends on the requirements of the project.
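Here is a minimal sketch of the first two approaches (the file and program names are hypothetical placeholders):
# Approach 1: keep the file out of git.
echo "secrets.env" >> .gitignore
git rm --cached secrets.env   # stop tracking it if it was ever committed

# Approach 2: keep the key in an environment variable instead of a file.
export ADAFRUIT_KEY="paste-key-here"   # set in your shell profile or CI settings
./myapp --key "$ADAFRUIT_KEY"          # hypothetical consumer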
Basically, the best strategy security-wise is not to store keys, passwords, and in general any vulnerable information inside the source control system. If that is the goal, there are many different approaches:
"Supply" this kind of information at runtime and keep it somewhere else:
./runMyApp.sh -db.password=
Use specialized tools (like, for example, Vault by HashiCorp) to manage secrets
Encode the secret value offline and store the encoded value in git. Without the secret key used for decoding, this encoded value alone is useless. Decode the value again at runtime, using some kind of shared-keys infra / an asymmetric key pair; in this case, you can use a public key for encoding and a private key for decoding (a sketch follows below).
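A hedged sketch of the asymmetric variant using openssl (key and file names are placeholders; assumes an RSA key pair already exists):
# Offline: encrypt the secret with the public key, commit only secret.enc.
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.enc

# Runtime: decrypt with the private key, which never enters the repository.
openssl pkeyutl -decrypt -inkey private.pem -in secret.enc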
I want to use Git to store my repository, but I don't want to publish the key.
For something as critical as a secret key, I would use a dedicated keyring infrastructure located outside the development environment, optionally coupled to a secret passphrase.
Aside from this case, I personally use submodules for this. Check out:
git submodule
In particular, I declare a global Git repository, in which I declare in turn another Git repository that will contain the actual project that will go public. This enables us to store at top level everything that is related to the given project but is not necessarily relevant to it and is not to be published. This could be, for instance, all my drafts, automation scripts, worknotes, project specifications, tests, bug reports, etc.
Among all the advantages this facility provides, we can highlight the fact that you can declare as a submodule an already existing repository, whether that repository is located inside or outside the parent one.
And what's really interesting with this is that both the main repository and the submodules remain distinct Git repositories that can still be configured independently. This means that you don't need your parent repository to have its remote servers configured.
Working this way, you get all the benefits of a versioning system wherever you work, while still ensuring that you'll never accidentally push something that is not stored inside the public submodule.
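A minimal sketch of that layout (repository names and URLs are hypothetical):
git init myproject-private            # private parent repository
cd myproject-private
git submodule add git@github.com:user/myproject-public.git public
# Everything under public/ may be published; drafts, notes, and secrets
# live in the parent repository and are never pushed to the public remote.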
If you don't have too many secrets to manage, and you do want to keep the secrets in version control, you can do what I do and make the parent repository private. It contains 2 folders - a secrets folder, and a git submodule for the public repository (in another folder). I use Ansible Vault to encrypt anything in the secrets folder, and a bash script to pass on the decrypted contents and load those secrets as environment vars, ensuring the secrets file always stays encrypted at rest.
Ansible Vault can encrypt and decrypt an environment variable, which I wrap in a bash script to perform these functions like so:
testsecret=$(echo 'this is a test secret' | ./scripts/ansible-encrypt.sh --vault-id $vault_key --encrypt)
result=$(./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret)
echo $result
testsecret here is the encrypted base64 result; it can be stored safely in a text file. Later you can source that file to keep only the encrypted result in memory, and finally, when you need to use the secret, you can decrypt it with ./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret
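For instance (the file name secrets.env is a hypothetical placeholder):
echo "testsecret=$testsecret" > secrets.env   # only the encrypted value is at rest
source secrets.env                            # later: load the encrypted value into memory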
The bash script referenced above is below (ansible-encrypt.sh). It wraps the Ansible Vault functions so that encrypted variables can be stored in base64, which resolves some problems that can occur with encoding.
#!/bin/bash
# This script encrypts an input hidden from the shell and base64-encodes it
# so it can be stored as an environment variable.
# Optionally it can also decrypt an environment variable.

vault_id_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing vault_id_func option: '--${opt}', value: '${val}'" >&2
    fi
    vault_key="${val}"
}

secret_name=secret
secret_name_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing secret_name option: '--${opt}', value: '${val}'" >&2
    fi
    secret_name="${val}"
}

decrypt=false
decrypt_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing decrypt option: '--${opt}', value: '${val}'" >&2
    fi
    decrypt=true
    encrypted_secret="${val}"
}

# Set IFS to a newline so option values may contain spaces.
IFS='
'
optspec=":hv-:t:"
encrypt=false

parse_opts () {
    local OPTIND
    OPTIND=0
    while getopts "$optspec" optchar; do
        case "${optchar}" in
            -)
                case "${OPTARG}" in
                    vault-id)
                        val="${!OPTIND}"; OPTIND=$(( OPTIND + 1 ))
                        opt="${OPTARG}"
                        vault_id_func
                        ;;
                    vault-id=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        vault_id_func
                        ;;
                    secret-name)
                        val="${!OPTIND}"; OPTIND=$(( OPTIND + 1 ))
                        opt="${OPTARG}"
                        secret_name_func
                        ;;
                    secret-name=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        secret_name_func
                        ;;
                    decrypt)
                        val="${!OPTIND}"; OPTIND=$(( OPTIND + 1 ))
                        opt="${OPTARG}"
                        decrypt_func
                        ;;
                    decrypt=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        decrypt_func
                        ;;
                    encrypt)
                        encrypt=true
                        ;;
                    *)
                        if [ "$OPTERR" = 1 ] && [ "${optspec:0:1}" != ":" ]; then
                            echo "Unknown option --${OPTARG}" >&2
                        fi
                        ;;
                esac;;
            h)
                help   # (help function not shown here)
                ;;
            *)
                if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
                    echo "Non-option argument: '-${OPTARG}'" >&2
                fi
                ;;
        esac
    done
}
parse_opts "$@"   # pass the actual arguments, not "$#" (the argument count)

if [[ "$encrypt" = true ]]; then
    read -s -p "Enter the string to encrypt: `echo $'\n> '`"
    secret=$(echo -n "$REPLY" | ansible-vault encrypt_string --vault-id "$vault_key" --stdin-name "$secret_name" | base64 -w 0)
    unset REPLY
    echo "$secret"
elif [[ "$decrypt" = true ]]; then
    result=$(echo "$encrypted_secret" | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id "$vault_key")
    echo "$result"
else
    # If no encrypt/decrypt arg is passed, we assume the function will decrypt the firehawksecret env var.
    encrypted_secret="${firehawksecret}"
    result=$(echo "$encrypted_secret" | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id "$vault_key")
    echo "$result"
fi
Storing encrypted values as environment variables is much more secure than decrypting something at rest and leaving the plaintext result lying around; plaintext at rest is extremely easy for any process to siphon off.
If you only wish to share the code with others and not the secrets, you can use a git template for the parent private repo structure so that others can inherit that structure but use their own secrets. This also allows CI to pick up everything you would need for your tests.
Alternatively, if you don't want your secrets in version control, you can simply use git ignore on a containing folder that will house your secrets.
Personally, this makes me nervous: it's still possible for user error to result in publicly committed secrets, and since those files live under the root of a public repo, any number of things could go wrong that could be embarrassing with that approach.
For Django:
add another file called secrets.py (or whatever you want) and also another called .gitignore
type secrets.py in .gitignore
paste the secret key into the secrets.py file and import it in the settings file using
from foldername.secrets import *
This worked for me.

if statement unix shell script

I am using an if statement to check for a condition and assign values. A mail should be sent after that. I am executing the script (bash), but nothing really happens and I have to exit manually; could anyone tell me what I am doing wrong?
if [ $var -eq 0 ]
then
    subject="there are zero issues"
else
    subject="there are issues"
fi
mail -s "$subject" abc@gmail.com
The Unix mail command is waiting to receive a message body to send, but you have not provided one in your script. Try this:
$ mail -s "$subject" abc@gmail.com < /home/user/yourmessage.txt
where /home/user/yourmessage.txt contains some message you wish to include in the email.
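If you don't want a separate message file, piping a message body works the same way (a small variation on the same idea):
echo "status report attached in the log" | mail -s "$subject" abc@gmail.com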

ssh remote variable assignment?

The following does not work for me:
ssh user@remote.server "k=5; echo $k;"
it just returns an empty line.
How can I assign a variable on a remote session (ssh)?
Note: My question is not about how to pass local variables to my ssh session, but rather how to create and assign remote variables. (should be a pretty straight forward task?)
Edit:
In more detail I am trying to do this:
bkp=/some/path/to/backups
ssh user@remote.server "bkps=( $(find $bkp/* -type d | sort) );
echo 'number of backups: '${#bkps[@]};
while [ ${#bkps[@]} -gt 5 ]; do
    echo ${bkps[${#bkps[@]}-1]};
    #rm -rf $bkps[${#bkps[@]}-1];
    unset bkps[${#bkps[@]}-1];
done;"
The find command works fine, but for some reason $bkps does not get populated.
So my guess was that it would be a variable assignment issue, since I think I have checked everything else...
Given this invocation:
ssh user@remote.server "k=5; echo $k;"
the local shell expands $k (which most likely isn't set) before executing ssh .... So the command that actually gets passed to the remote shell once the connection is made is k=5; echo ; (or k=5; echo something_else_entirely; if k is actually set locally).
To avoid this, escape the dollar sign like this:
ssh user@remote.server "k=5; echo \$k;"
Alternatively, use single quotes instead of double quotes to prevent the local expansion. However, while that would work on this simple example, you may actually want local expansion of some variables in the command that gets sent to the remote side, so the backslash-escaping is probably the better route.
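For example, with single quotes nothing is expanded locally:
ssh user@remote.server 'k=5; echo $k;'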
For future reference, you can also type set -x in your shell to echo the actual commands that are being executed as a help for troubleshooting.
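Applied to the backup example from the question's edit, the escaped version would look roughly like this (a sketch only: $bkp is still expanded locally on purpose, while every backslash-escaped $ is expanded remotely):
bkp=/some/path/to/backups
ssh user@remote.server "bkps=( \$(find $bkp/* -type d | sort) );
echo 'number of backups: '\${#bkps[@]};
while [ \${#bkps[@]} -gt 5 ]; do
    echo \${bkps[\${#bkps[@]}-1]};
    unset bkps[\${#bkps[@]}-1];
done;"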

Using expect script for gpg - password decryption - does not work

Hi, I am fairly new to expect scripting. I am trying to use gpg for password encryption/decryption. Encryption has no issues. For decryption, I am trying to automate it using an expect script.
The basic command I am trying to use is: gpg -o <output file> -d <.gpg file with encrypted password>
When I run this command stand-alone, it asks for the passphrase, and when I enter it, it creates the output file as expected. The output file has the password in it.
When I run this command using an expect script, so that the passphrase can be provided automatically at run time, expect does not create the output file.
Any help is appreciated. It does not show any errors! The output is:
spawn gpg -o /home/gandhipr/passwdfile -d /home/gandhipr/passfile.gpg
gpg: CAST5 encrypted data
Enter passphrase:
Below is my expect script.
#!/usr/bin/expect
set timeout 1
set passdir [lindex $argv 0]
set passfile [lindex $argv 1]
set passfilegpg [lindex $argv 2]
set passphrase [lindex $argv 3]
spawn gpg -o $passdir$passfile -d $passdir$passfilegpg
expect "Enter passphrase:"
send "$passphrase\n"
exp_internal 1
exit 0;
interact
Use \r instead of \n in the send command: \r is the carriage return character, which mimics the user hitting Enter.
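So the corrected line is:
send "$passphrase\r"
A further hedged suggestion (an assumption about how expect tears down spawned processes, not something the answer above states): replacing the trailing exit 0; and interact with expect eof makes the script wait for gpg to finish writing the output file before exiting.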

Checking ftp return codes from Unix script

I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log - and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three digit codes are called 'series codes' and a list can be found here
I wrote a script to transfer only one file at a time, and in that script I use grep to check for the 226 Transfer complete message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
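Acting on that grep result is then straightforward (a sketch; the "$2"_ftp.tmp command file is the one from my script above):
if ftp -niv < "$2"_ftp.tmp | grep -q "^226 "; then
    echo "transfer complete"
else
    echo "transfer failed" >&2
fi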
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
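For example (a hedged illustration; the credential and path variables are placeholders in the style of a later answer here):
ncftpput -u "$FTPUSER" -p "$FTPPASS" "$FTPHOST" "$REMOTEDIR" "$FILE" \
    || echo "upload failed with exit code $?" >&2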
I think it is easier to run ftp and check its exit code to see if something went wrong.
I did this like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is OK, everything else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which can be used. This enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, the user needs to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST > $FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command itself won't report success or failure, so the user should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether or not the regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions, necessitating all that parsing, but there is a whole series of other Unix programs that can be used to transfer files by FTP, if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with small amounts of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload, without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if they contain special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
IF you really, absolutely MUST use the regular FTP client, one way you can test for failure is, inside your script, to first include a command that PUTs the file, followed by another that does a GET of the same file, returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah, that stinks, but in my opinion it is better to have code that is easy to read than to do tons of parsing for every possible error condition. BSD FTP is just not all that great.
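A sketch of that PUT-then-GET round trip (host, credentials, and file names are placeholders):
ftp -in "$FTPHOST" << EOF
user $FTPUSER $FTPPASS
put datafile.csv
get datafile.csv datafile.check
bye
EOF
# The round trip succeeded only if the downloaded copy exists and matches.
if [ -f datafile.check ] && cmp -s datafile.csv datafile.check; then
    echo "transfer verified"
else
    echo "transfer failed" >&2
fi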
Here is what I finally went with. Thanks for all the help; all the answers helped lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?
