ftp mget not working when used in a script - unix

I am trying to get a number of files from a Unix machine using an MS-DOS ftp script (Windows 7). I am new to this, so I have been trying to modify an online example. The code is as follows:
@echo off
SETLOCAL
REM ##################################
REM Change these parameters
set FTP_HOST=host
set FTP_USER=user
set FTP_REMOTE_DIR=/users/myAcc/logFiles
set FTP_REMOTE_FILE=*.log
set FTP_LOCAL_DIR=C:\Temp
set FTP_TRANSFER_MODE=ascii
REM ##################################
set FTP_PASSWD=password
set SCRIPT_FILE=%TEMP%\ftp.txt
(
echo %FTP_USER%
echo %FTP_PASSWD%
echo %FTP_TRANSFER_MODE%
echo lcd %FTP_LOCAL_DIR%
echo cd %FTP_REMOTE_DIR%
echo prompt
echo mget %FTP_REMOTE_FILE%
) > %SCRIPT_FILE%
ftp -s:%SCRIPT_FILE% %FTP_HOST%
del %SCRIPT_FILE%
ENDLOCAL
However, when I run this, the mget command fails and the following output is given. (Note: the output from the rest of the script shows that all of the previous steps work as expected; I have even added ls commands to verify that the script is in the correct directory.)
...
ftp> mget *.log
200 Type set to A; form set to N.
mget logFile1_SystemOut_22-01-13.log? mget logFile2_SystemOut_22-01-13.log? mget
logFile3_SystemOut_22-01-13.log? ftp>
I have run through this manually, repeating the exact same steps, and it works fine: no problems, and the files are successfully transferred to the C:\Temp directory.
I have checked numerous forums and other websites and I can't see any reason why it should behave like this. Any pointers as to why this doesn't work in the script would be great!
Thanks

The usual option for turning off the prompt generated by ftp mget is
ftp -i
By default, ftp stops and prompts for confirmation on each file matched by the mget wildcard pattern your script generates.

I call ftp scripts on Windows like this:
ftp -i -s:%SCRIPT_FILE% %FTP_HOST%
This is because ftp -si:%SCRIPT_FILE% %FTP_HOST% doesn't work.
I guess it's the same on unix - the switches have to be separated.

ftp -i worked for me.
Change ftp -s:%SCRIPT_FILE% %FTP_HOST% to ftp -i -s:%SCRIPT_FILE% %FTP_HOST% in your script, as @jim mcnamara suggested (note that the switches must stay separate; -si does not work).

ftp -i -s:%SCRIPT_FILE% %FTP_HOST% worked for me, too.
Another option is to issue prompt in the ftp script before you invoke mget (which your script already does); I have also seen mget -i mentioned somewhere.
Note, however, that prompt is a toggle: it switches prompting back on if it was already off. So use either ftp -i or the prompt command, but not both!
You can check whether your script otherwise works by echoing a few y's into the ftp script after mget, so it answers yes to the prompts as they come up, as sketched below.
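For example, the generated %SCRIPT_FILE% could then look like this (a sketch assuming the three log files shown in the output above; with the prompt line dropped, each y answers one per-file question):
user
password
ascii
lcd C:\Temp
cd /users/myAcc/logFiles
mget *.log
y
y
y
bye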

Related

What is the first step when the shell executes an scp copy command in the terminal?

I am using this command to copy a file from a remote server to my local machine:
scp -r app:/home/dolphin/model* .
In bash it works fine. In zsh it throws this error: zsh: no matches found: app:/home/dolphin/model*. I have searched Google and understand that bash and zsh have different globbing rules. Here are my questions:
What are the detailed steps the shell goes through to execute this command?
Could anyone tell me how the shell executes it? Is the first step to expand the glob in the path?
I know I can use -v (verbose) to watch the scp execution.
I am unfamiliar with Zsh, but as far as I can tell, Bash passes the original string to the program as an argument if the glob matches nothing locally, whereas Zsh raises an error in this case.
To ensure the unglobbed string is passed as an argument to scp(1), you can escape the asterisk:
scp -r app:/home/dolphin/model\* .
^^
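If you would rather not escape individual characters, quoting the whole remote path works too, and zsh has its own switches for this behaviour (a sketch using the host and path from the question):
scp -r 'app:/home/dolphin/model*' .        # quote the remote glob so zsh leaves it alone
noglob scp -r app:/home/dolphin/model* .   # zsh: disable globbing for this one command
setopt nonomatch                           # zsh: pass unmatched globs through unchanged, like bash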

UNIX basic ftp upload

I'm trying to get terminal to upload a file for me, in this case: file.txt
Unfortunately, it won't work, no matter what I try.
#!/bin/bash
HOST=*
USER=*
PASS=*
# I'm 100% sure the host/user/pass are correct.
#Terminal also connects with the host provided
ftp -inv $HOST << EOF
user $USER $PASS
cd /Users/myname/Desktop
get file.txt #which is located on my desktop
bye
EOF
I've tried 100 different scripts but it just won't upload :(
This is the output after saving it to an .sh file, running chmod +x, and executing the script with sudo:
Connected to *hostname*.
220 ProFTPD 1.3.4b Server ready.
331 Password required for *username*
230 User *username* logged in
Remote system type is UNIX.
Using binary mode to transfer files.
550 /Users/myname/Desktop: No such file or directory
local: file.txt remote: file.txt
229 Entering Extended Passive Mode (|||35098|)
550 file.txt: No such file or directory
221 Goodbye.
myname:Desktop Myname$
I've browsed through many other topics about the same issue here, but I just can't figure it out. I only started playing with UNIX this morning, so excuse me for this (probably) foolish question.
Try:
#!/bin/bash
HOST=*
USER=*
PASS=*
# I'm 100% sure the host/user/pass are correct.
#Terminal also connects with the host provided
cd /Users/myname/Desktop # go to the local folder the file is located in
ftp -inv $HOST << EOF
user $USER $PASS
cd /User/$USER/Desktop
put file.txt
bye
EOF
So use put, make sure the file is in your current working directory, and make sure the remote directory you cd to (the folder you want to upload into) actually exists. Note that no comments appear inside the here-document: everything between << EOF and EOF is sent to the ftp client verbatim, so a trailing # comment would reach ftp as extra arguments.
You are using get but you talk about an upload; you probably just want put?
Anyway, I'm not sure this can be done using the basic ftp client. I always use ncftp for things like this. It comes with command-line utilities like ncftpput, which accept command-line arguments and options to perform the task.
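For example, a one-shot upload with ncftpput might look like this (a sketch; the host, credentials, and remote directory are placeholders):
ncftpput -u myuser -p mypass ftp.example.com /remote/dir /Users/myname/Desktop/file.txt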
Alfe is right: you need to use put <filename> to upload a file over FTP. It should be possible with the basic ftp tool, but I would also recommend ncftp :-)
You need to use put to upload a file.

xmlstarlet not working correctly on mac

I created a batch file for Windows that executes some xmlstarlet commands. I want to rewrite it as a .sh file so that I can run it on a Mac. The problem is that some commands work fine in Windows but not on the Mac, and they show no error either. E.g.
xml ed -L -d //intent-filter//category[@android:name='android.intent.category.LAUNCHER'] my_folder\AndroidManifest.xml
In Windows, the above command deletes the mentioned xml tag, but it does nothing on the Mac.
Yet the command
xml sel -t -m //manifest -v //manifest/@package mim_apk_proj\AndroidManifest.xml
works fine on both Mac and Windows.
I have installed the xml tool and checked /usr/local/bin; it has libxslt.dylib and libxml2.dylib. I don't know where the problem lies.
Can someone help?
The quoting rules for bash (that's the shell on your Mac, right?) are different from cmd.exe (the Windows shell); in particular, cmd.exe treats ' as a normal character, while to bash it is a quoting character and so is not passed on to the program. In bash you therefore need to quote the 's as well (and quote only the XPath expression; the filename stays a separate argument, with a forward slash so bash does not swallow the backslash):
xml ed -L -d //intent-filter//category[@android:name='android.intent.category.LAUNCHER'] my_folder\AndroidManifest.xml
# becomes
xml ed -L -d "//intent-filter//category[@android:name='android.intent.category.LAUNCHER']" my_folder/AndroidManifest.xml
# or, since XPath treats both kinds of quotes identically, you can also use
xml ed -L -d '//intent-filter//category[@android:name="android.intent.category.LAUNCHER"]' my_folder/AndroidManifest.xml
The second fix is safer because it also stops bash from expanding variables if the expression contains a $, but the first fix has the advantage of working in Windows as well.
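To check that the delete actually matches something, you can count the nodes before running the edit (a sketch reusing the same manifest path):
xml sel -t -v "count(//intent-filter//category[@android:name='android.intent.category.LAUNCHER'])" my_folder/AndroidManifest.xml
If this prints 0, the problem is in the XPath or the quoting, not in xml ed itself.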

problem while doing gzip over ssh

I am getting the error below while running the gzip command over ssh:
ssh 123@HPUX "gzip"
ksh: gzip: not found
whereas if I run tar the same way, it works properly:
ssh 123@HPUX "tar"
tar: usage tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
Can you please suggest why I am getting this error and how I can overcome this problem?
When I tried the following steps, gzip worked properly:
ssh 123@HPUX
gzip
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
which means that gzip itself is working.
Your $PATH may be set differently for an interactive login session than for executing a single command via ssh. Does it work if you specify an absolute path to gzip?
Try logging in interactively and use the command which gzip to show where the binary is. Perhaps it's something like /usr/local/gnu/gzip. (You might want to do echo $PATH too, and make a note of it for comparison purposes.) Then try using that path in your batch SSH command, i.e. ssh 123@HPUX "/usr/local/gnu/gzip", to see what happens. The command ssh 123@HPUX 'echo $PATH' (note the single quotes!) should tell you how $PATH is set in that context; if you compare that with your interactive $PATH, you'll probably see a difference that explains why gzip isn't found in the first version of your batch command.
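Putting those steps together (the gzip location shown is only an assumption; substitute whatever which gzip reports):
ssh 123@HPUX                        # interactive login
which gzip                          # e.g. /usr/contrib/bin/gzip (assumed location)
echo $PATH                          # note the interactive search path
exit
ssh 123@HPUX 'echo $PATH'           # compare the non-interactive search path
echo test | ssh 123@HPUX '/usr/contrib/bin/gzip -c' > test.gz   # call gzip by absolute path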
Wild guess: it's ksh raising the error the first time. When you do a full ssh login, are you using ksh? Are you running any scripts that modify its PATH?

strange behavior of fc -l command

I have two unix machines, both running AIX 5.3
My $HOME is mounted on machine1.
Using NFS, logging in to machine2 goes to the same $HOME.
I log in to machine2 first, then machine1, both using telnet.
The two sessions share the same .sh_history file.
I have found the behavior of fc -l very strange.
On machine2, I issue these commands in the telnet session:
fc -l
ksh fc -l
Both give the same output.
On machine1,
fc -l
ksh fc -l
give DIFFERENT results.
The result of ksh fc -l is the same as that of /usr/bin/fc -l.
Also, when I run a script like this:
#!/usr/bin/ksh
fc -l
The result is the same as that of /usr/bin/fc -l.
Could anyone tell me what is happening?
Alvin SIU
Ah, wisdom of the ancients... (since this post is over a year old).
Anyway, I just encountered this problem on Solaris 10. The issue seems to be this: when you define a function in /etc/profile, or in any file called by /etc/profile, your HISTFILE variable gets ignored by the Korn shell, and the shell uses .sh_history instead when accessing its history. I am not sure why this is.
The result is that you see other root shells' commands. You can test it with:
lsof -p $$
or
cat /proc/$$/fd/63
It's possible that the login shell is not ksh or that $HISTFILE is being reset. One thing you can do is echo $HISTFILE in each of the situations and see whether it differs. Another thing to check is which shell you are actually in, using ps.
Bash (default $HOME/.bash_history), for example, has a different default $HISTFILE than ksh (default $HOME/.sh_history).
Another possible reason for the difference is that the built-in fc may see in-memory history that has not been written to disk yet, which the external /usr/bin/fc cannot see. If so, this may be version dependent: bash, for example, does not write history to the file until the shell exits, while ksh (at least the version I am using) writes it immediately.
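A quick way to run those checks in each telnet session (plain ksh commands; the grep pattern is just illustrative):
echo $HISTFILE              # is it set, and to what?
ps -p $$                    # which shell is this session really running?
lsof -p $$ | grep hist      # which history file does the shell hold open?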
