So I run the command:
edmund@cat:/images/edmund/gallery$ rsync -rzvO --exclude='.svn' ./ edmund@dog.com:/images/edmund/gallery/
My local directory is empty, while the directory on the remote server is full of pics. This is the result of running the command:
sending incremental file list
sent 24 bytes received 12 bytes 5.54 bytes/sec
total size is 0 speedup is 0.00
However, nothing is in my folder. Does anyone know what I'm doing wrong? Could it be an SSH issue?
If you are on the local machine, which has no pictures, you need to pull from the remote server. You need to reverse your syntax:
edmund@cat:/images/edmund/gallery$ rsync -rzvO --exclude='.svn' edmund@dog.com:/images/edmund/gallery/ ./
The syntax is rsync [OPTIONS] [SOURCE] [DESTINATION]. See the rsync man page.
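If you're ever unsure which way a transfer will go, rsync's standard -n/--dry-run flag previews the file list without copying anything:
rsync -rzvOn --exclude='.svn' edmund@dog.com:/images/edmund/gallery/ ./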
Atom can open a project and show the whole tree of the project on the left side, which is a really nice feature.
I'm currently using SSH from the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) running in VirtualBox. Is there a way for Atom, installed on the host OS, to open a project located on the RHEL guest?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
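For example, to mount the RHEL guest's project directory on the host (the user, host name, and paths here are placeholders):
$ mkdir -p ~/mnt/rhel-project
$ sshfs devuser@rhel-guest:/home/devuser/project ~/mnt/rhel-project
To unmount when you're done:
$ fusermount -u ~/mnt/rhel-project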
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
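Roughly, that setup looks like this (user, host, paths, and IDs are placeholders; see the post for the full walkthrough):
# /etc/auto.master
/mnt/ssh /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost
# /etc/auto.sshfs
devhost1 -fstype=fuse,rw,allow_other,max_read=65536 :sshfs\#devuser@devhost1\:/var/www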
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
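Note also that arcfour has been removed from recent OpenSSH releases. Where it is still available, you can scope the weakening to a single host (the host name here is a placeholder) instead of applying it globally:
Host devhost1
    Ciphers arcfour
    Compression no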
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
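A sketch of that (the allow_other option, plus user_allow_other enabled in /etc/fuse.conf, is needed if users other than the one who mounted should see the files):
$ sudo mkdir -p /media/sshfs
$ sshfs devuser@rhel-guest:/home/devuser/project /media/sshfs -o allow_other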
I would recommend the remote-sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
A complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote host to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 Apr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 Apr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 Apr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 Apr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details, see man sshfs.
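The explicit variant looks roughly like this, assuming an sshfs recent enough to support idmap=file (uidfile entries map a local user name to a remote UID; nomap=ignore leaves unmatched IDs untranslated):
$ echo 'locuser:1001' > ~/.sshfs_uidmap
$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=file,uidfile=$HOME/.sshfs_uidmap,nomap=ignore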
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (@Remco Haszing's answer) was a brilliant one,
but in my case Atom wanted to index the whole remote project, which is a heavy one, and it became unresponsive.
Using the remote-sync package was good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant for this job (editing files remotely over SSH).
The problem is that it has been abandoned.
These helped me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
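Both install the usual way with apm, Atom's package manager:
$ apm install remote-edit-ni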
This is what I get when I try to rm a file on an SFTP server
(when I run rename it's the same):
sftp> rm file
debug3: Sent message fd 3 T:7 I:34854
debug3: Received stat reply T:105 I:34854
Removing file
debug2: Sending SSH2_FXP_REMOVE "file"
debug3: Sent message fd 3 T:13 I:34855
debug3: SSH2_FXP_STATUS 4
Couldn't delete file: Failure
The file's permissions are:
-rw-rw-rw- 0 --NA-- --NA-- 6862 Sep 9 17:05 file
Am I missing something? Can somebody help?
Thanks in advance.
On certain networked file systems, like Hetzner backup space, files can no longer be deleted over SFTP once the disk space is 100% full. Trying to delete a file over SFTP then leads to this exact error message.
As a workaround, one can extend the allocated backup space by ordering a larger backup space plan.
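You can check the usage from within the sftp session itself, provided the server supports OpenSSH's statvfs extension (OpenSSH servers do); a full disk will show as 100% used:
sftp> df -h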
I am trying to mount an NFS share on my Linux machine.
My /etc/fstab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/ I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the NFS server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so it is worth doing before looking for other problems. Note that if you do have to change any entries, the NFS server has to be stopped and restarted, as it reads the hosts file only when it is started.
Is there a config file on the NAS where you can put allowances for clients? E.g. on Debian-based OSes the config file is /etc/exports; you would put "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" in it and activate this with exportfs -a (your NAS may do this automatically if you update the config via a web interface, I guess). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might also need to restart the NFS config and NFS services on the NFS server, as well as run the export again:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
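From the client, you can also verify what the server actually exports (showmount ships with the NFS client utilities, nfs-common or nfs-utils):
$ showmount -e 192.168.0.5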
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive in the VM. I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on the mounted file system.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error, presumably due to an older NFS client, and adding -o nfsvers=3 fixed the issue for me, e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/
I'm trying to rsync files between 64-bit Windows 7 and 64-bit Linux. I have rsync 3.0.7 installed on both machines (the Cygwin version on Windows).
This is the command I'm running on Windows. I can see it establishes a connection to the remote machine, but it just dies after authenticating, before transferring files. What could cause this problem?
rsync -vvvv localfile REMOTE-HOST:~/
FILE_STRUCT_LEN=16, EXTRA_LEN=4
cmd=<NULL> machine=REMOTE-HOST user=guest path=~/
cmd[0]=ssh cmd[1]=-l cmd[2]=guest cmd[3]=REMOTE-HOST cmd[4]=rsync cmd[5]=--server cmd[6]=-vvvvv.Lsf cmd[7]=. cmd[8]=~/
opening connection using: ssh -l guest REMOTE-HOST rsync --server -vvvve.Lsf. "~/"
note: iconv_open("UTF-8", "UTF-8") succeeded.
Password:
FILE_STRUCT_LEN=24, EXTRA_LEN=4
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
[Receiver] _exit_cleanup(code=12, file=io.c, line=601): entered
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [Receiver=3.0.7]
[Receiver] _exit_cleanup(code=12, file=io.c, line=601): about to call exit(12)
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
[sender] _exit_cleanup(code=12, file=/home/lapo/packaging/rsync-3.0.7-1/src/rsync-3.0.7/io.c, line=601): entered
rsync error: error in rsync protocol data stream (code 12) at /home/lapo/packaging/rsync-3.0.7-1/src/rsync-3.0.7/io.c(601) [sender=3.0.7]
[sender] _exit_cleanup(code=12, file=/home/lapo/packaging/rsync-3.0.7-1/src/rsync-3.0.7/io.c, line=601): about to call exit(12)
The problem was a conflicting Git installation. Git included its own ssh, and its path came before the Cygwin path, so rsync was using Git's ssh. The fix was to swap the order of the Git and Cygwin bin paths so rsync used the right version of ssh.
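An alternative to reordering the PATH is to point rsync at the right ssh explicitly via its -e option (assuming Cygwin's ssh is at the default /usr/bin/ssh inside the Cygwin shell):
rsync -vvvv -e /usr/bin/ssh localfile REMOTE-HOST:~/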
I found a way around this error by adding the absolute path to ssh in my call:
rsync -avzphi --chmod=ugo=rwX --checksum --delete-after -e "C:\cwRsync\ssh.exe -p 9999 -i c:\cwRsync\ssh\test_id_rsa" "/cygdrive/c/repository/blah.com/" test@xx.xx.xxx.xxx:/home/test/public_html
I have a server where I store data from Mac A and Mac B.
I use rsync to keep the files updated between my Macs.
I run the following script unsuccessfully:
#!/bin/zsh
# to copy files from my server to my folder
rsync -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
# to copy files from my folder to my server
rsync -Pav ~/Dropbox/Courses/math $Masi:~/private/
I get the following error message:
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.5]
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.5]
I have SSH keys in place, so the connection should work; I can use scp without problems.
How can I use rsync between my server and one of my Macs?
I used to do a lot of this. I just ran a test; a few suggestions:
Spell out your entire user@host pattern.
Run the SSH connection without rsync first; you may need to approve the host's fingerprint first.
You do not seem to pass a flag to protect extended attributes; this can yield broken files on OS X. If you do not need resource forks you are OK, but most of the time you do need them.
My test case:
$ rsync -Pav ~/Desktop/ me@remote.example.com:~/rsyc-test
In that case, all the files within ~/Desktop were copied to the remote host, into my home dir. Since the directory 'rsyc-test' did not exist, it was made for me. I had a .app on my Desktop; it made it over and, surprisingly, it works. Even some .webloc files made it and appear to work, though I do not trust them.
I would strongly suggest adding in the -E flag
-E, --extended-attributes
Apple specific option to copy extended attributes, resource
forks, and ACLs. Requires at least Mac OS X 10.4 or suitably
patched rsync.
I ran a new test: I moved an Interarchy bookmark to my desktop; I know for a fact these break if they are copied without resource forks. Running without the -E versus with the -E, there is a difference of 152 bytes in transferred data. The first file did not work on the remote machine; the second transferred file did work.
I cannot help but notice that one of the paths in your example is ~/Dropbox, so this may all not matter, since Dropbox, the app, currently does not support resource forks at all, though I hear there are plans to in the future.
You are also not sending the --delete flag. If your end goal is a mirror of your data, you are not getting that; if your end goal is a backup that continually grows, keeping everything that was ever on the source, the lack of --delete is good.
Other notes:
You can exclude those silly .DS_Store files
--exclude '.DS_Store'
You can also set rsync up to be a true mirror, so you would not need to run your other command; see the man page for details.
My final working command to shove the Desktop of my laptop to a remote machine:
$ rsync -PEav --delete --exclude '.DS_Store' ~/Desktop/ me@remote.example.com:~/rsycn-test
Check "$Masi". Is that the hostname you are trying to reach?
Try the following command to debug it:
rsync -e 'ssh -v' -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
A Connection refused usually happens when there is a connection issue to the remote host (e.g. a firewall).
In your case the problem is that the $Masi variable is empty. If it's not meant to be a variable, use Masi.
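A quick way to verify, and a sketch of the fix (the value shown is a placeholder):
echo "Masi: '$Masi'"             # prints an empty string if the variable is unset
Masi=user@server.example.com     # define it before the rsync calls
rsync -Pav $Masi:~/private/ ~/Dropbox/Courses/math/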
As per this error:
ssh: connect to host port 22: Connection refused
Notice the double space above after the word host.
The 'connect to host' message doesn't say which host, so you're trying to connect to an empty host name. It sounds like a typo, or an unset variable, in the host name.