Problem while running gzip over ssh

I am getting the error below when running the gzip command over ssh:
ssh 123@HPUX "gzip"
ksh: gzip: not found
whereas if I run tar the same way, it works properly:
ssh 123@HPUX "tar"
tar: usage tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
Can you please suggest why I am getting this error and how I can overcome it?
When I try the following steps, gzip works properly:
ssh 123@HPUX
gzip
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
which means that gzip is working.

Your $PATH may be set differently for an interactive login session versus executing a single command via ssh. Does it work if you specify an absolute path to gzip?
Try logging in interactively, and use the command which gzip to show where the binary is. Perhaps it's something like /usr/local/gnu/gzip. (You might want to do echo $PATH too, and make a note of it for comparison purposes.) Then try using that path in your batch SSH command, i.e. ssh 123@HPUX "/usr/local/gnu/gzip", to see what happens.
The command ssh 123@HPUX 'echo $PATH' (note the single quotes!) should tell you how your $PATH is set in that context. If you compare that to your interactive $PATH, you'll probably see a difference that explains why gzip isn't found in the first version of your batch command.

Wild guess: it's ksh raising the error the first time. When you do a full ssh login, are you using ksh? Are you running any startup scripts that modify its PATH?

Related

How do I turn off wget proxy?

I had been using a proxy for a long time, and now I need to remove it, but I have forgotten how I added the proxy to wget. Can someone please help me get back to normal wget, where it doesn't use any proxy? As of now, I'm using
wget <link> --proxy=none
But I'm facing a problem when I'm installing using a pre-written script. It's painstaking to search through all the scripts and change each command.
Any simpler solution will be very much appreciated.
Thanks
Check your
~/.wgetrc
/etc/wgetrc
and remove the proxy settings.
Or use the wget --no-proxy command-line option to override them.
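For reference, the proxy configuration in those files usually looks something like this (the proxy host here is a placeholder); delete or comment out these lines:
use_proxy = on
http_proxy = http://proxy.example.com:8080/
https_proxy = http://proxy.example.com:8080/
With them removed, or with the override on the command line, wget connects directly:
wget --no-proxy http://example.com/file.tar.gz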
If your OS is Alpine/BusyBox, then the wget binary differs from the one used by @Logu.
There the correct command is
wget --proxy off http://server:port/
Running wget --help outputs:
/ # wget --help
BusyBox v1.31.1 () multi-call binary.
Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document FILE]
[-o|--output-file FILE] [--header 'header: value'] [-Y|--proxy on/off]
[-P DIR] [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL...
Retrieve files via HTTP or FTP
--spider Only check URL existence: $? is 0 if exists
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-S Show server response
-T SEC Network read timeout is SEC seconds
-O FILE Save to FILE ('-' for stdout)
-o FILE Log messages to FILE
-U STR Use STR for User-Agent header
-Y on/off Use proxy

Redirect not working correctly, 2> /dev/null becomes 2 > /dev/null and stderr doesn't get redirected

I am hoping someone can help me figure out what setting I might need to override. I am working on a Unix terminal server, running a Linux xterm shell. Every time I use a command like grep "blah" 2> /dev/null at the shell prompt, the command is run as grep "blah" 2 > /dev/null and, needless to say, the redirection fails.
xterm version is X.Org 6.8.99.903(238)
I cannot update or install anything; this is a locked-down production server.
Thanks for any help and illumination on the topic; it is making my grep useless at high directory levels with recursion.
That's Bourne shell syntax, and it doesn't work in csh.
The best you can do is
( command > stdout_file ) >& stderr_file
where you get stdout in one file and stderr in another. Redirecting just stderr is not directly possible in csh.
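One common csh workaround when you only want to discard stderr and keep stdout on the screen is the /dev/tty trick (interactive use only, since /dev/tty must refer to a real terminal):
( grep -r "blah" . > /dev/tty ) >& /dev/null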
In a comment, you say "A minor note, this is csh". That's not a minor note, that's the cause of the problem. xterm is just a terminal emulator, not a shell; all it does is set up a window that provides textual input and output. csh (or bash, or ...) is the shell, the program that interprets the commands you type.
csh has different syntax for redirection, and doesn't let you redirect just stderr. command > file redirects stdout; command >& file redirects both stdout and stderr.
You say the system doesn't have bash, but it does have ksh. I suggest just using ksh; it will be a lot more familiar to you. Both bash and ksh are derived from the old Bourne shell.
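To try that without changing your login shell (assuming ksh is in your PATH), just start it from the csh prompt; Bourne-style redirection then works as expected:
ksh
grep -r "blah" . 2> /dev/null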
All (?) Unix-like systems will have a Bourne-like shell installed as /bin/sh. Even if you're using csh (or tcsh?) as your interactive shell, you can still invoke sh, even in a one-liner. For example:
sh -c 'command 2>/dev/null'
will invoke sh, which in turn will invoke command and redirect just its stderr to /dev/null.
The purpose of an interactive shell is (mostly) to let you use other commands that are available on the system. sh, or any shell, can be used as just another command.
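Applied to the recursive grep from the question (the directory path is illustrative):
sh -c 'grep -r "blah" /path/to/tree 2> /dev/null'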

ftp mget not working when used in a script

I am trying to get a number of files from a Unix machine using an MS-DOS ftp script (Windows 7). I am new to this, so I have been trying to modify an online example. The code is as follows:
@echo off
SETLOCAL
REM ##################################
REM Change these parameters
set FTP_HOST=host
set FTP_USER=user
set FTP_REMOTE_DIR=/users/myAcc/logFiles
set FTP_REMOTE_FILE=*.log
set FTP_LOCAL_DIR=C:\Temp
set FTP_TRANSFER_MODE=ascii
REM ##################################
set FTP_PASSWD=password
set SCRIPT_FILE=%TEMP%\ftp.txt
(
echo %FTP_USER%
echo %FTP_PASSWD%
echo %FTP_TRANSFER_MODE%
echo lcd %FTP_LOCAL_DIR%
echo cd %FTP_REMOTE_DIR%
echo prompt
echo mget %FTP_REMOTE_FILE%
) > %SCRIPT_FILE%
ftp -s:%SCRIPT_FILE% %FTP_HOST%
del %SCRIPT_FILE%
ENDLOCAL
However, when I run this, the mget command fails and the following output is given:
Note: the output from the rest of the script shows that all of the previous steps are working as expected. I have even added ls commands to verify the script is in the correct directory.
...
ftp> mget *.log
200 Type set to A; form set to N.
mget logFile1_SystemOut_22-01-13.log? mget logFile2_SystemOut_22-01-13.log? mget
logFile3_SystemOut_22-01-13.log? ftp>
I have run through this manually repeating the exact same steps and it works fine - no problems and the files are successfully transferred to the C:\Temp directory.
I have checked numerous forums and other websites and I can't see any reason why it should behave like this. Any pointers as to why this doesn't work in the script would be great!
Thanks
The usual option for turning off the prompt generated by ftp mget is
ftp -i
By default ftp waits with a prompt for each file found by the mget "wildcard" string you generate in your script.
I call ftp scripts on Windows like this:
ftp -i -s:%SCRIPT_FILE% %FTP_HOST%
This is because ftp -si:%SCRIPT_FILE% %FTP_HOST% (with the switches run together) doesn't work.
I guess it's the same on unix - the switches have to be separated.
ftp -i worked for me.
Change ftp -s:%SCRIPT_FILE% %FTP_HOST% to ftp -i -s:%SCRIPT_FILE% %FTP_HOST% in your script, as @jim mcnamara suggested.
ftp -i -s:%SCRIPT_FILE% %FTP_HOST% worked for me, too.
Another option is to toggle prompt in the ftp script before you invoke mget (which you already do), and I've also read about mget -i somewhere.
But note: prompt in the ftp script switches prompting back on if it was already off, so use either ftp -i OR prompt, but not both!
You can check whether your script otherwise works by echoing a few y's in the ftp script after mget, so it answers yes to the prompts as they come up.
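As a sketch, that check would go in the block that generates %SCRIPT_FILE% (the number of echo y lines must at least cover the number of files mget will ask about):
echo mget %FTP_REMOTE_FILE%
echo y
echo y
echo y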

Issues logging in with the LDAP root DN

I have been stuck for almost a day on the following issue.
I installed LDAP using: apt-get install slapd
and used the following configuration:
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
allow bind_v2
loglevel 0
moduleload back_sbdb.la
database bdb
suffix "dc=test,dc=nl"
rootdn "cn=Directory Manager,dc=test,dc=nl"
rootpw test
directory /var/lib/ldap
index objectClass eq
index userPassword eq,pres
index givenName,mail,mobile,sn,title,cn,description eq,sub,pres
index displayName eq,sub,pres
index postalAddress,facsimileTelephoneNumber pres
access to *
by self write
by * read
and I then try to bind using
ldapsearch -D cn=Directory Manager,dc=test,dc=nl -w test
but I still receive the error ldap_bind: Invalid Credentials (49)
Does anyone have any ideas or clues what this could be?
Thanks in advance
Try it using quotes, like:
ldapsearch -D "cn=Directory Manager,dc=test,dc=nl" -w test
The space character in Directory Manager may be causing the problem.
Edit: Also, are you sure you don't need -h -p parameters?
-h The host name of the directory server
-p The port number of the directory server
Edit 2: Just figured out what is wrong. You are using rootpw unencrypted in your slapd config file. You should use an encrypted password created from the output of the slappasswd tool. This may cause problems under special circumstances.
Check this link for details: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ldap-quickstart.html
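A minimal sketch of that change (the {SSHA} value below is a placeholder; use the one slappasswd prints for you):
slappasswd -s test
# prints something like: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxx
# then replace the clear-text line in slapd.conf with:
rootpw {SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxx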
A few things you could try:
Turn on more verbose logging (loglevel 255), and see if anything shows up in the log file.
Verify that the server really is reading the configuration file you think it is by checking the access time on the slapd.conf file (ls -lu slapd.conf)
Try binding using an invalid dn (ldapsearch -D cn=no-such-user -w test) and see if the error message changes (if so, that confirms that the problem is with the password, not the dn).
Try man ldapsearch.
I'm not really sure about Debian/Ubuntu, but on FreeBSD you need to add -x to use simple authentication instead of SASL. I think this might be your issue?
Also, you could use -W instead of passing the password in plain text on the command line.
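Putting those two suggestions together (the -b search base is taken from the suffix in your config):
ldapsearch -x -D "cn=Directory Manager,dc=test,dc=nl" -W -b "dc=test,dc=nl"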

How do you use an identity file with rsync?

This is the syntax I think I should be using with rsync to use an identity file to connect:
rsync -avz -e 'ssh -p1234 -i ~/.ssh/1234-identity' \
"/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"
But it's giving me an error:
Warning: Identity file ~/.ssh/1234-identity not accessible: No such file or directory.
The file is fine and the permissions are set correctly; it works when doing ssh, just not with rsync, at least with my syntax. What am I doing wrong? Is it trying to look for the identity file on the remote machine? If so, how do I specify that I want to use an identity file on my local machine?
Use either $HOME:
rsync -avz -e "ssh -p1234 -i \"$HOME/.ssh/1234-identity\"" dir remoteUser@server:
or the full path to the key:
rsync -avz -e "ssh -p1234 -i /home/username/.ssh/1234-identity" dir user@server:
Tested with rsync 3.0.9 on Ubuntu
You may want to use ssh-agent and ssh-add to load the key into memory. ssh will try identities from ssh-agent automatically if it can find them. Commands would be
eval $(ssh-agent) # Create agent and environment variables
ssh-add ~/.ssh/1234-identity
ssh-agent is a user daemon which holds unencrypted ssh keys in memory. ssh finds it through environment variables which ssh-agent outputs when run; using eval to evaluate this output creates those environment variables. ssh-add is the command which manages the keys held in memory. The agent can be locked using ssh-add, and a default lifetime for a key can be specified when ssh-agent is started and/or per key when it is added.
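Once the agent holds the key, rsync no longer needs -i at all, only the port (host and paths reused from the question):
eval $(ssh-agent)
ssh-add ~/.ssh/1234-identity
rsync -avz -e 'ssh -p1234' "/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"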
You might also want to set up a ~/.ssh/config file to supply the port and key definition. (See man ssh_config for more options.)
host 22.33.44.55
IdentityFile ~/.ssh/1234-identity
Port 1234
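With that config in place, neither the port nor the key needs to appear on the command line:
rsync -avz "/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"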
Single quoting the ssh command will prevent shell expansion which is needed for ~ or $HOME. You could use the full or relative path to the key in single quotes.
You have to specify the absolute path to your identity key file. This is probably some sort of quirk in rsync. (It can't be perfect, after all.)
I ran into this issue just a few days ago :-)
This works for me
rsync -avz --rsh="ssh -p1234 -i ~/.ssh/1234-identity" \
"/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"
Use a key file with rsync:
rsync -rav -e "ssh -i /home/test/pkey_new.pem" /var/www/test/ ubuntu@231.210.24.48:/var/www/test
Are you executing the command in bash or sh? This might make a difference. Try replacing ~ with $HOME. Try double-quoting the string for the -e option.
