I am using lftp mirror -R to sync a local dir to a remote sftp dir
Just to be completely clear, the script I am running with lftp -f is as follows:
open sftp://hostname port
user username password
mirror -R local_dir sftp_dir
exit
However, I keep getting exit code 1 from mirror -R, even though stdout suggests the upload succeeded, and I can verify over SFTP that the files were indeed uploaded.
So I am wondering why that is happening and how I can get the correct exit code.
A non-zero exit code without error messages means that something has failed silently. Most often it is the "chmod" operation. Try adding the --no-perms option. To be sure, enable debug and watch the interaction with the server.
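A rough version of the script with both suggestions applied might look like this (--no-perms is mirror's option for skipping permission changes; debug level 3 is just an arbitrary verbosity choice):
debug 3
open sftp://hostname port
user username password
mirror -R --no-perms local_dir sftp_dir
exit
After lftp -f returns, echo $? should then show whether mirror itself still reports a failure.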
I am trying to run rsync as follows and am running into the error sshpass: Failed to run command: No such file or directory. I verified that the source directory /local/mnt/workspace/common/sectool and the destination directory /prj/qct/wlan_rome_su_builds are available and accessible. What am I missing, and how do I fix this?
username@xxx-machine-02:~$ sshpass –p 'password' rsync –progress –avz –e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
sshpass: Failed to run command: No such file or directory
Would it be possible for you to check whether 'rsync' works without 'sshpass'?
Also, check whether the port used by rsync is enabled. You can find the port info via cat /etc/services | grep rsync
The first thing is to make sure that the ssh connection is working smoothly. You can check this via "sudo ssh -vvv cnssbldsw@hydwclnxbld4" (please post the output). If you receive a message such as "ssh: connect to host hydwclnxbld4 port 22: Connection refused", the issue is with openssh-server (not installed, or a broken package). Let's see what you get for the first command.
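Putting the suggestions above together, one possible order of checks (hostnames and paths are taken from the question) is:
# 1. Confirm plain ssh works first and post the output:
ssh -vvv cnssbldsw@hydwclnxbld4
# 2. Then try rsync without sshpass:
rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool \
    cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
# 3. And check the rsync port information:
cat /etc/services | grep rsync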
When I umount the Lustre FS it displays:
[root@cn17663-ens4 mnt]# umount /mnt/lustre
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
and if I add the force option -f it gives the same result:
[root@cn17663-ens4 mnt]# umount /mnt/lustre -f
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
When I try to list the directory it gives me :
[root@cn17663-ens4 mnt]# ls
ls: cannot access lustre: Cannot send after transport endpoint shutdown
lustre
and I cannot work out what the reason is or how to solve it.
Did you actually try running lsof /mnt/lustre (as the error message recommends) to see what is using the filesystem? This problem is not unique to Lustre, but true of any local filesystem as well - if there is a process using the filesystem (current working directory or open file) then it can't be unmounted until that process stops using it (cd out of /mnt/lustre or close the open file(s)).
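For example, a typical sequence (with <PID> standing in for whatever lsof or fuser reports) would be:
lsof /mnt/lustre          # or: fuser -vm /mnt/lustre -- list processes holding the mount
kill <PID>                # stop the offending process (or cd out of /mnt/lustre in its shell)
umount /mnt/lustre        # should now succeed once nothing is using the filesystem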
I find I can use umount -l /mnt/xx to solve this problem!
I am developing an application that can establish a server-client connection using QTcp*
The client sends the server a number.
The received string is checked for its length and validity (is it really a number?).
If everything is OK, then the server replies back with a file path (which depends on the sent number).
The client checks if the file exists and if it is a valid image. If the file complies with the rules, it executes a command on the file.
What security concerns exist on this type of connection?
The program is designed for Linux systems and the external command on the image file is executed using QProcess. If the string sent contained something like (do not run the following command):
; rm -rf /
then it would be blocked by the file-existence check (because it isn't a valid file path). If there were no check on the validity of the sent string, then the following command would be executed:
command_to_run_on_image ; rm -rf /
which would cause panic! But this cannot happen.
So, is there anything I should take into consideration?
If you open a console and type command ; rm -rf /*, something bad would likely happen. That is because the command line is processed by the shell: it parses the text input, e.g. splits commands at the ; delimiter and splits arguments on spaces, and then executes the parsed commands with the parsed arguments using the system API.
However, when you use process->start("command", QStringList() << "; rm -rf /*");, there is no such danger. QProcess will not invoke a shell; it executes the command directly through the system API. The result is similar to running command "; rm -rf /*" (quotes included) in the shell, i.e. the whole string reaches the program as a single argument.
So you can be sure that only your command will be executed, and the parameter will be passed to it as-is. The only danger is the possibility for an attacker to call the command with any file path they can construct. The consequences depend on what the command does.
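A rough shell-only illustration of the same distinction (harmless echo/printf commands stand in for the real image command, so nothing dangerous is executed):
sh -c 'echo image.png; echo injected'       # shell parsing: the ';' starts a second command, so both run
printf '%s\n' 'image.png; echo injected'    # passed as one argument: printed as plain data, nothing else runs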
I am trying to use SFTP to upload an entire directory to a remote host, but I get an error. (I know SCP works, but I really want to figure out the problem with SFTP.)
I used the command as below:
(echo "put -r LargeFile/"; echo quit)|sftp -vb - username#remotehost:TEST/
But I got the errors "Couldn't canonicalise: No such file or directory" and "Unable to canonicalise path '/home/s1238262/TEST/LargeFile'".
I thought it was caused by access rights, so I opened an SFTP connection to the remote host in interactive mode and tried to create a new directory "LargeFile" in TEST/, which succeeded. Then I used the same command as above to upload the entire directory "LargeFile", and that also succeeded; the subdirectories in LargeFile were created and copied automatically.
So I am confused. It seems that only the LargeFile/ directory itself cannot be created in non-interactive mode. What is wrong with it, or with my command?
With SFTP you can only copy into a directory that already exists, so:
> mkdir LargeFile
> put -r path_to_large_file/LargeFile
Same as the advice in the link from @Vidhuran, but this should save you some reading.
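If you need to stay non-interactive, the same two steps can presumably be fed through the batch pipe used in the question (untested sketch, with the paths from the question):
(echo "mkdir LargeFile"; echo "put -r LargeFile/"; echo quit) | sftp -vb - username@remotehost:TEST/
If LargeFile may already exist on the remote side, prefix the command as -mkdir LargeFile so batch mode does not abort on that error.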
This error could possibly be caused by the -r option. Refer to https://unix.stackexchange.com/questions/7004/uploading-directories-with-sftp
A better way is to use scp:
scp -r LargeFile/ username@remotehost:TEST/
The easiest way for me was to zip my folder locally into LargeFile.zip and simply put LargeFile.zip:
zip -r LargeFile.zip LargeFile
sftp www.mywebserver.com (or the IP of the web server)
put LargeFile.zip (it will end up in your remote login directory)
Then unzip LargeFile.zip on the remote server (over a normal ssh login; you cannot run unzip from inside the sftp session).
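For reference, the whole round trip as a non-interactive sequence (the user name is a placeholder, and unzip is assumed to be installed on the server):
zip -r LargeFile.zip LargeFile
echo "put LargeFile.zip" | sftp -b - user@www.mywebserver.com
ssh user@www.mywebserver.com unzip LargeFile.zip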
If you are using Ubuntu 14.04, the sftp client has a bug: if you have a '/' appended to the directory name, you will get the Couldn't canonicalize: Failure error.
For example:
sftp> cd my_inbox/ ##will give you an error
sftp> cd my_inbox ##will NOT give you the error
Notice how the forward-slash is missing in the correct request. The forward slash appears when you use the TAB key to auto-populate the names in the path.
I'm running the following command (the variables hold valid values for the ssh command, and $file is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I was using the above command without "nohup" before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) would get a "Connection reset...." error and hang my bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to achieve and where I need your help.
Change the command shown above so that the command will NOT create a nohup.out
(Did I read correctly that I can use > instead of | tee ... together with 2>&1?)
I DO NOT want to run the command with a "&" (background)
I DO want a LOG file for the sqlplus session that's running on the target DB server via ssh command/connection (initiated from source server).
Thanks.
You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend copying the sql file to the target server with scp, then ssh-ing into that server, opening a screen session there and running the command from it (or running it under nohup). Is that an option?
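A sketch of that suggestion, reusing the variable names from the question (the /tmp path, the session name and the log file name are placeholders, and screen is assumed to be available on the DB server):
scp "${file}" ${ssh_user}@${dbs}:/tmp/        # copy the .sql file over first
ssh ${ssh_user}@${dbs}                        # log in to the DB server
screen -S sqlrun                              # on the server: start a screen session
sqlplus $dbuser/${dbpswd}@${dbname} @/tmp/your_file.sql | tee sqlplus_run.log
# detach with Ctrl-a d; the sqlplus run and its log survive a dropped ssh connection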