grep/sed not found error, not in $path - unix

I'm a Unix newbie and I have a command I'm trying to run, but I get a "GREP: not found" error. I looked at $PATH and didn't see anything resembling grep (not sure if that's what I'm looking for either, though)...
The command is this:
testabcd=$(bteq << EOF 2>&1 |grep '^>' |sed -e "s/^>//"
.LOGON server/user, pass
DATABASE schema;
.set width 2000;
.set titledashes off;
SELECT '>'||COUNT(*) FROM schema1.table1;
.LOGOFF;
.QUIT;
.EXIT
EOF)
echo "The count is: " $testabcd
then I get these errors:
-ksh: SED: not found (No such file or directory)
>echo "The count is: " $testvarabcd
THE DATA IS:
>-ksh: GREP: not found
*** Error: The following error was encountered on the output file.
*** Error: Broken pipe
*** Warning: Canceling the rest of the output
If grep is not in my PATH, do I need to install it? If not, can I set the path inside the command, and how do I find out where grep actually lives?

Replace grep with /bin/grep and sed with /bin/sed.
Ultimately you need to add /bin to your PATH.
Do you like ksh? In my experience csh, tcsh, or bash are
more commonly used; using one of those may give you a saner default path,
so you would not need to edit your path at all.
The file that contains your path is a hidden file (it has a . in front
of the file name) in your home directory. Try ls .* in your home directory.
Even better, try
/bin/grep PATH .*
which will locate the file that sets the PATH variable.
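A minimal sketch of the fix in ksh (the directories shown are typical; check where grep actually lives on your system first):
ls -l /bin/grep /usr/bin/grep                              # see which of the usual locations exists
export PATH=/bin:/usr/bin:$PATH                            # fix the current session
echo 'export PATH=/bin:/usr/bin:$PATH' >> $HOME/.profile   # make it stick for future ksh logins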

Related

Check if file exists or not in KORN Shell

I want to check whether a file exists or not in a Korn shell, but I'm not able to find proper documentation for that. I have the following code, which checks whether the file is missing or of size zero; if the file size is more than zero, the test is false.
if [[ ! -s ${abs_file_name} ]]
I need a list of the possible options (like -s, -e, -x, etc., as in the above example), with descriptions, for checking whether a file exists in the Korn shell, NOT the Bash shell.
You can check if a node exists with
if [ -e "${abs_file_name}" ]
You can check if a node is a file.
if [ -f "${abs_file_name}" ]
Note that -f follows symlinks, so it also returns true if abs_file_name is a symlink that points to a regular file.
Also: -d for directory, -r for readable, -x for executable.
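A minimal ksh sketch of the common file-test operators (the file name is illustrative):
f=/tmp/example.txt
[[ -e $f ]] && echo "exists (any node type)"
[[ -f $f ]] && echo "regular file (follows symlinks)"
[[ -d $f ]] && echo "directory"
[[ -s $f ]] && echo "exists and has size greater than zero"
[[ -r $f ]] && echo "readable"
[[ -w $f ]] && echo "writable"
[[ -x $f ]] && echo "executable"
[[ -L $f ]] && echo "symbolic link"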

Using "find" in csh script file

I run a .csh file in UNIX that contains the following script
#!/bin/tcsh -f
set path = "$1"
find "$path" -name myfolder
And get the following message
find: Command not found.
What am I missing?
Thanks
The $path variable is special - it tells the shell where to find tools like find. :-) Use a different variable name.
From your interactive shell, you can see what $path normally looks like by echoing it. The following is my path on my FreeBSD server:
ghoti% echo $path
/usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin /home/ghoti/bin /usr/X11R6/bin /usr/games
If this list is replaced with something else, for example the contents of $1, then tcsh doesn't know to look in /usr/bin to find find:
ghoti% which find
/usr/bin/find
ghoti% set path = "hello world"
ghoti% which find
find: Command not found.
ghoti%
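A minimal sketch of the corrected script (target_dir is an illustrative name; anything other than path works):
#!/bin/tcsh -f
# use a name other than "path" so tcsh can still locate commands
set target_dir = "$1"
find "$target_dir" -name myfolder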

how to tail -f on multiple files with a script?

I am trying to tail multiple files in a ksh. I have the following script:
test.sh
#!/bin/ksh
for file in "$#"
do
# show tails of each in background.
tail -f $file>out.txt
echo "\n"
done
It only reads the first file argument I provide to the script, not the other files given as arguments.
When I do this:
./test.sh /var/adm/messages /var/adm/logs
it only reads /var/adm/messages, not the logs. Any ideas what I might be doing wrong?
You should use the double ">>" syntax to append the stream to the end of your output file.
A simple ">" redirection truncates the file each time it is opened, so every new tail would remove the previous content.
So try :
#!/bin/ksh
for file in "$#"
do
# show tails of each in background.
tail -f $file >> out.txt & # Don't forget to add the last character
done
EDIT: If you want to use multitail, it's not installed by default. On Debian or Ubuntu you can install it with apt-get install multitail.
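Note also that GNU and BSD tail accept several files at once and print a ==> file <== header when switching between them, which may remove the need for the loop entirely (older System V tails may only take one file). A sketch:
tail -f /var/adm/messages /var/adm/logs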

Using grep to find a file that contains a string

My .htaccess file in my htdocs folder does not work. I tried to redirect to Google when accessing a filename. I want to find out where the settings for my httpd.conf are, so I can enable mod_rewrite. I ran the following UNIX command to find out whether an httpd.conf file exists on my hard drive:
find * -name "httpd.conf"
The file does not exist. I am thinking that maybe there is another file that controls mod_rewrite. I want to see if "AllowOverride" exists in any directory. I entered the following UNIX command:
grep -r "AllowOverride" *
But it's hard to read because it prints out so many folders. The messages that accompany the folders are "Permission denied" or "No such file or directory". How do I get only the file paths of files that contain AllowOverride?
Many Unix and similar systems provide a locate(1) command that uses a database to speed finding individual files. Try this:
locate httpd.conf
Note, of course, that Apache configurations are stored in files of all sorts of names; I've seen apache.conf, httpd.conf, httpd2.conf, and then there's the giant pile of /etc/apache2/conf.d/ -- entire directory structures set aside for configuring Apache. Your distribution may vary.
Perhaps apachectl configtest will show the paths? (currently not installed on my machine, so I can't easily test.)
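A related sketch, assuming apachectl is available on the machine: its -V flag prints the compiled-in defaults, including the config file location.
apachectl -V | grep -i server_config_file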
Try this command:
find / -name "httpd.conf" 2>1 | grep -v "Permission denied"
the 2>1 funnels stderr to stdout so that both can be piped into the grep utility. grep in turn will print anyline that doesn't have the string "Permission denied" in it (the -v negates/inverts the matching of the search string)
If you don't redirect stderr to stdout, the output of stderr to the console would bypass the rest of the command line.
You could extend the above command line by appending this:
| grep -v "No such file or directory"
if that string was coming up and you wanted to suppress it too.
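If you simply want to discard every error message rather than filter specific ones, send stderr to /dev/null instead (sketch):
find / -name "httpd.conf" 2>/dev/null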
Use the following:
find / -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
To scan files containing the "AllowOverride" string from the root, if you want to run the search in a particular directory, use the following instead:
find /path/to/directory -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
The output should print only the files containing the specified string, along with the number of each matching line.
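If your grep supports the -r and -l flags (GNU grep does), a shorter equivalent that prints only the matching file paths and silences the permission errors:
grep -rl "AllowOverride" / 2>/dev/null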

Checking ftp return codes from Unix script

I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log - and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three-digit codes are called 'series codes'; they are the standard FTP reply codes defined in RFC 959.
I wrote a script to transfer only one file at a time, and in that script I use grep to check for the 226 Transfer complete message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
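A sketch of that approach with ncftpput (the -u/-p flags pass credentials; host, directory, and file names are illustrative):
ncftpput -u "$FTPUSER" -p "$FTPPASS" "$FTPHOST" /remote/dir datafile.csv
rc=$?
if [ $rc -ne 0 ]; then
echo "ncftpput failed with exit code $rc" >&2
fi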
I think it is easier to run ftp and check its exit code to see if something has gone wrong.
I did it like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is ok, everything else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python, or Ruby. They all have an FTP module you can use, which enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, you need to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command doesn't report success or failure in its exit code, so you should not depend on it:
EXITFTP=$?
Lame answer I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use any of the other error codes, as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether or not the regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions, necessitating all that parsing, but there is a whole series of other Unix programs that can be used to transfer files by FTP if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with very little code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if they contain special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
IF you really, absolutely MUST use regular FTP client, one way you can test for failure is by, inside your script, including first a command that PUTs the file, followed by another that does a GET of the same file returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah that stinks, but in my opinion it is better to have code that is easy to read than do tons of parsing for every possible error condition. BSD FTP is just not all that great.
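A sketch of that last trick with the stock ftp client (file names are illustrative; cmp -s compares the two files byte for byte and stays silent):
ftp -i -n $FTPHOST << EOF
user $FTPUSER $FTPPASS
binary
put datafile.csv
get datafile.csv datafile.check
bye
EOF
if cmp -s datafile.csv datafile.check; then
echo "transfer verified"
else
echo "transfer failed" >&2
fi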
Here is what I finally went with. Thanks for all the help. All the answers help lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?
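A sketch of that simpler approach, reusing the curl one-liner from above (the log and mail-list file names are illustrative):
curl --user "$FTPUSER:$FTPPASS" -T "$FILE" "ftp://$FTPHOST/%2f$REMOTEDIR/" > ftp.log 2>&1
if [ $? -ne 0 ]; then
mailx -s 'FTP error' `cat email.lst` < ftp.log
fi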
