I have a server where I store data from Mac A and Mac B.
I use rsync to keep the files updated between my Macs.
I run the following script, unsuccessfully:
#!/bin/zsh
# to copy files from my server to my folder
rsync -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
# to copy files from my folder to my server
rsync -Pav ~/Dropbox/Courses/math $Masi:~/private/
I get the following error message
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.5]
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.5]
I have ssh keys in place so the connection should work, since I can use scp without problems.
How can I use rsync between my server and one of my Macs?
I used to do a lot of this. I just ran a test; a few suggestions:
Spell out your entire user@host pattern.
Run the ssh connection without rsync first; you may need to approve the host fingerprint first (see the quick check after this list).
You do not seem to pass a flag to protect extended attributes, which can yield broken files on OS X. If you do not need resource forks you are OK, but most of the time you do need them.
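For the second suggestion, a minimal check might look like this (the user and host names are placeholders, not taken from the question):
# verify the SSH connection on its own, with verbose output;
# accept the host key fingerprint if prompted
ssh -v user@remote.example.com 'echo connection OK'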
My test case:
$ rsync -Pav ~/Desktop/ me@remote.example.com:~/rsyc-test
In that case, all the files within ~/Desktop were copied to the remote host, into my home dir. Since the directory 'rsyc-test' did not exist, it was created for me. I had a .app on my Desktop; it made it over and, surprisingly, it works. Even some .webloc files made it and appear to work, though I do not trust them.
I would strongly suggest adding the -E flag:
-E, --extended-attributes
Apple specific option to copy extended attributes, resource
forks, and ACLs. Requires at least Mac OS X 10.4 or suitably
patched rsync.
I ran a new test: I moved an Interarchy bookmark to my desktop; I know for a fact these break if they are copied without resource forks. Running without -E versus with -E, there is a difference of 152 bytes in transferred data. The first file on the remote machine did not work; the second transferred file did work.
I cannot help but notice that one of the paths in your example is ~/Dropbox, so this may not matter at all, since Dropbox, the app, does not currently support resource forks, though I hear there are plans to in the future.
You are also not passing the --delete flag. If your end goal is a mirror of your data, you are not getting that; if your end goal is a backup that continually grows, keeping everything that was ever on the source, the lack of --delete is good.
Other notes:
You can exclude those silly .DS_Store files:
--exclude '.DS_Store'
You can also set rsync up in a way to be a true mirror, so you would not need to run your other command, see the man page for details.
My final working command to shove the Desktop of my laptop to a remote machine:
$ rsync -PEav --delete --exclude '.DS_Store' ~/Desktop/ me@remote.example.com:~/rsycn-test
Check "$Masi". Is that the hostname you are trying to reach?
Try the following command to debug it:
rsync -e 'ssh -v' -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
A Connection refused error usually happens when there is a connection issue to the remote host (e.g. a firewall).
In your case the problem is that the $Masi variable is empty. If it is not meant to be a variable, use Masi literally.
As per this error:
ssh: connect to host port 22: Connection refused
Notice the double space above after the word host.
The connect to host message doesn't say which host, so you are trying to connect to an empty host name. It sounds like a typo or an unset variable in the host name.
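A quick way to confirm this, sketched with a hypothetical user and host (substitute your own):
# print the variable; an empty value means it is unset in this shell
echo "Masi='$Masi'"
# define it, e.g. in ~/.zshrc, then re-run the sync
export Masi=user@myserver.example.com
rsync -Pav "$Masi":~/private/ ~/Dropbox/Courses/math/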
I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS over the ZeroTier tunnel, but I can't make it work.
I have used mount and mount.cifs plenty of times, with automounting too; this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
when trying to access one of the shares (with ls, lsof, cd or any other command), it succeeds for only one of the shares (always the same one), and only the first time a command is given:
$ ls /temp
folder1 folder2 folder3
any following command just "hangs" as if the system is working on something, but it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
the remaining two "mounted" shares do not respond to any command, not even the very first one; they just hang like the share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in logs, htop or anything else. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively and reconfigured users groups permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as such anywhere. It also seems to be an issue not related to folder/file permissions, but rather something related to networking.
A note on something that may or may not be related to this: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it remains OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say that this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI - the other nodes show public IPv4 addresses.
If anyone has any clue on this, I'll be happy to support and reproduce any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the existing SSH rule; the second line below is the one to add:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces line in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different; use ip link show to find it. You may also need to change the IP range to suit ZeroTier's; 100.0.0.0/24 is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
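If you prefer to apply and verify the changes without a full reboot, something like the following should work on Ubuntu (a sketch, assuming iptables-persistent style rules and systemd-managed Samba services):
sudo iptables-restore < /etc/iptables/rules.v4   # reload the firewall rules
testparm -s                                      # sanity-check smb.conf syntax
sudo systemctl restart smbd nmbd                 # restart the Samba services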
I am trying to install OpenMPI and MPICH2 on a multi-node cluster and I am having trouble running on multiple machines in both cases. Using MPICH2 I am able to run on a specific host from the head node, but if I try to run something from a compute node to a different node I get:
HYDU_sock_connect (utils/sock/sock.c:172): unable to connect from "destination_node" to "parent_node" (No route to host)
[proxy:0:0@destination_node] main (pm/pmiserv/pmip.c:189): unable to connect to server parent_node at port 56411 (check for firewalls!)
If I try to use SGE to set up a job I get similar errors.
On the other hand, if I try to use OpenMPI to run jobs, I am not able to run on any remote machine, even from the head node. I get:
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
The machines are connected to each other; I can ping and ssh passwordlessly from any of them to any other, and MPI_LIB and PATH are correctly set on all machines.
Usually this is caused by not setting up a hostfile or not passing the list of hosts on the command line.
For MPICH, you do this by passing the flag -host on the command line, followed by a list of hosts (host1,host2,host3,etc.).
mpiexec -host host1,host2,host3 -n 3 <executable>
You can also put these in a file:
host1
host2
host3
Then you pass that file on the command line like so:
mpiexec -f <hostfile> -n 3 <executable>
Similarly, with Open MPI, you would use:
mpiexec --host host1,host2,host3 -n 3 <executable>
and
mpiexec --hostfile hostfile -n 3 <executable>
You can get more information at these links:
MPICH - https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Hydra_with_Non-Ethernet_Networks
Open MPI - http://www.open-mpi.org/faq/?category=running#mpirun-hostfile
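For the ORTE error specifically, the message itself points at PATH/LD_LIBRARY_PATH on the remote nodes; one hedged workaround is Open MPI's --prefix option (the install path below is only an assumption, adjust it to where Open MPI actually lives on your nodes), or rebuilding with --enable-orterun-prefix-by-default:
mpiexec --prefix /usr/local/openmpi --hostfile hostfile -n 3 <executable>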
I want to back up folders from my remote server (Ubuntu server) to another remote server (Linux server), but when I run this command from the first server it displays an error message:
rsync -raz --progress firstdirectoy root@serverIP:/home
The displayed message is:
ssh: connect to host <serverIP> port 22: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]
But the same command from server 2 to server 1 works fine and the folder is copied nicely onto server 1.
How can I avoid the connection error so I can copy my folder from server 1 to server 2 through rsync?
It seems like server 2 has no active ssh daemon, while server 1 does.
Try running an ssh daemon, or use the raw rsync protocol with an rsync daemon.
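A minimal sketch of the rsync-daemon route (module name and paths are hypothetical; rsyncd listens on TCP port 873, which must be reachable):
# /etc/rsyncd.conf on server 2
[backup]
    path = /home
    read only = false
# start the daemon on server 2
rsync --daemon
# then from server 1, no ssh involved (note the rsync:// syntax)
rsync -raz --progress firstdirectoy rsync://serverIP/backup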
If it's a connection timeout because your SSH server is slow to respond, you can tweak the timeout in rsync:
rsync -raz --progress -e 'ssh -o ConnectTimeout=120' firstdirectoy user@serverIP:/home
Otherwise it may be a missing SSH daemon (sshd) on server 2, as stated by @geov, or a closed port on your firewall. You may start by testing an SSH login:
ssh user@serverIP
and see whether it works or not. Running nmap serverIP will probably help you too, showing whether SSH is listening or not.
And please do NOT use root user for your rsync copy!
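For example (a quick sketch; serverIP stands for the real address and user for your remote account):
ssh -v user@serverIP    # verbose output shows where the connection fails
nmap -p 22 serverIP     # reports whether port 22 is open, closed or filtered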
If you have to wait a long time before the prompt with the error appears, I think your server 2's IP address is wrong.
For me, this error appeared when attempting to rsync between two AWS EC2 instances that were not part of the same security group.
Overview of how to create security groups
How to change the security groups of the instances
Allow instances within the same security group to communicate
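As a rough illustration of the last point, the rule can also be added with the AWS CLI (the security group ID below is a placeholder; this assumes you want port 22 allowed between members of the same group):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --source-group sg-0123456789abcdef0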
I am trying to mount an NFS share on my Linux machine.
My /etc/fstab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/ I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the nfs server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that, if you do have to change any entries then the nfs-server has to be stopped and re-started, as it reads the hosts file only when it is started.
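A quick way to check this on the server (the client name and address below are placeholders for your actual client):
# on the NFS server: confirm the client's name and address agree
grep clienthost /etc/hosts
getent hosts 192.168.1.1
# if an entry had to be corrected, restart the NFS server so it is re-read
# (the unit may be called nfs-kernel-server on Debian/Ubuntu)
sudo systemctl restart nfs-server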
Is there a config file on the NAS where you can put allowances for clients? E.g. on a Debian-based OS the config file is /etc/exports; you would put there "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" and activate it with "exportfs -a" (your NAS may do this automatically if you update the config via a web interface, I guess). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
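Sketched out with the paths and addresses from the question (adjust the client address to match your setup):
# /etc/exports on the NAS
/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)
# re-export after editing
exportfs -a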
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might need to restart the nfs-config and nfs services on the NFS server, as well as run the export again:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive in the VM. I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution after many hours was to add the no_all_squash option in the exports file. This is supposed to be the default but I needed to add it explicitly. As soon as I did that the problem went away and I could mount the file system. Unfortunately I could not see the files on the fs.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to actually see the files that were on the exported file system.
I saw this error, presumably due to an older NFS client, and adding -o nfsvers=3 fixed the issue for me, e.g.:
mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/
I used this command from the command prompt on a Windows box (my Linux machine is at work):
ftp -u ftp://cran.R-project.org/incoming/ qdap_0.1.0.tar.gz
I used the info from:
https://github.com/hadley/devtools/wiki/Release
http://cran.r-project.org/doc/manuals/R-exts.html#Submitting-a-package-to-CRAN
I expected to see it show up here: ftp://cran.r-project.org/incoming/ but I do not see it.
Am I just being impatient or did my package not upload? Here is the command line output:
C:\Users\trinker\GitHub>ftp -u ftp://cran.R-project.org/incoming/ qdap_0.1.0.tar.gz
Transfers files to and from a computer running an FTP server service
(sometimes called a daemon). Ftp can be used interactively.
FTP [-v] [-d] [-i] [-n] [-g] [-s:filename] [-a] [-A] [-x:sendbuffer] [-r:recvbuffer] [-b:asyncbuffers] [-w:windowsize] [host]
-v Suppresses display of remote server responses.
-n Suppresses auto-login upon initial connection.
-i Turns off interactive prompting during multiple file
transfers.
-d Enables debugging.
-g Disables filename globbing (see GLOB command).
-s:filename Specifies a text file containing FTP commands; the
commands will automatically run after FTP starts.
-a Use any local interface when binding data connection.
-A login as anonymous.
-x:send sockbuf Overrides the default SO_SNDBUF size of 8192.
-r:recv sockbuf Overrides the default SO_RCVBUF size of 8192.
-b:async count Overrides the default async count of 3
-w:windowsize Overrides the default transfer buffer size of 65535.
host Specifies the host name or IP address of the remote
host to connect to.
Notes:
- mget and mput commands take y/n/q for yes/no/quit.
- Use Control-C to abort commands.
Make sure you are not looking at a page cached earlier by your browser.
To perform the actual upload you might want to try the free cross-platform FileZilla FTP software. You can upload and concurrently view the contents of the source directory on your machine (in the left pane) and the target directory on CRAN (in the right pane), view a log of what is happening in the top pane and a progress indicator in the bottom pane. It also has a site manager to store the sites you upload to, so you don't need to keep typing in their URLs each time you upload.
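If you would rather stay on the command line, curl can also upload over FTP; a rough sketch, assuming CRAN's incoming directory accepts anonymous uploads (replace the email address with your own):
curl -T qdap_0.1.0.tar.gz ftp://cran.R-project.org/incoming/ --user anonymous:you@example.com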