rsync via gateway with ssh, destination reachable only on the rsync port (873)

server A --> server B (port 22 open) --> (firewall) --> server C (port 873 open)

IP addresses:
server B: 10.10.10.5
server C: 10.10.10.6
module name: dog
I connect to server B through port 22, and I want to move a file from server B to server C through port 873.
So I wrote the command below to test the connection.
rsync --list-only -e "ssh 10.10.10.5 rsync" 10.10.10.6::dog
However, the result was as follows.
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1666) [Receiver=3.1.2]
Is it possible to perform rsync with the structure above?
Server C cannot open port 22; only port 873 is open, and the rsync daemon is running there.
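As far as I can tell, rsync's -e option combined with the host::module syntax expects the remote shell to log in to the daemon host itself and spawn a single-use daemon there; since server C accepts no SSH, the greeting never arrives. A workaround I would try is SSH local port forwarding through server B. A sketch, run from server A, assuming you have a shell account on server B (user is a placeholder; local port 8730 is an arbitrary choice):

# forward a local port through server B (10.10.10.5) to server C's rsync daemon (10.10.10.6:873)
ssh -f -N -L 8730:10.10.10.6:873 user@10.10.10.5

# then point rsync at the tunnel endpoint
rsync --list-only rsync://127.0.0.1:8730/dog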

Related

rsync "connection refused" error

I'm going crazy with rsync, which gives me a "connection refused" error. Here's my problem:
I have two servers used to store data, with rsync installed on both, because I need the two servers to stay synchronized: changes on one server should cause the same changes on the other, and vice versa. The first node (sn1) works while the second (sn2) does not. In detail:
- sn1 has 192.168.13.131 as ip address
- sn2 has 192.168.13.132 as ip address
If I run rsync rsync://192.168.13.131 from sn1 or sn2, it works fine; but if I run rsync rsync://192.168.13.132 from sn1 or sn2, I get this error:
rsync: failed to connect to 192.168.13.132 (192.168.13.132): Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]
Here's some information about sn2.
/etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.130.132
[account]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/account.lock
[container]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
[object]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
and /etc/default/rsync
# defaults file for rsync daemon mode
#
# This file is only used for init.d based systems!
# If this system uses systemd, you can specify options etc. for rsync
# in daemon mode by copying /lib/systemd/system/rsync.service to
# /etc/systemd/system/rsync.service and modifying the copy; add required
# options to the ExecStart line.
# start rsync in daemon mode from init.d script?
# only allowed values are "true", "false", and "inetd"
# Use "inetd" if you want to start the rsyncd from inetd,
# all this does is prevent the init.d script from printing a message
# about not starting rsyncd (you still need to modify inetd's config yourself).
RSYNC_ENABLE=true
# which file should be used as the configuration file for rsync.
# This file is used instead of the default /etc/rsyncd.conf
# Warning: This option has no effect if the daemon is accessed
# using a remote shell. When using a different file for
# rsync you might want to symlink /etc/rsyncd.conf to
# that file.
# RSYNC_CONFIG_FILE=
# what extra options to give rsync --daemon?
# that excludes the --daemon; that's always done in the init.d script
# Possibilities are:
# --address=123.45.67.89 (bind to a specific IP address)
# --port=8730 (bind to specified port; default 873)
RSYNC_OPTS=''
# run rsyncd at a nice level?
# the rsync daemon can impact performance due to much I/O and CPU usage,
# so you may want to run it at a nicer priority than the default priority.
# Allowed values are 0 - 19 inclusive; 10 is a reasonable value.
RSYNC_NICE=''
# run rsyncd with ionice?
# "ionice" does for IO load what "nice" does for CPU load.
# As rsync is often used for backups which aren't all that time-critical,
# reducing the rsync IO priority will benefit the rest of the system.
# See the manpage for ionice for allowed options.
# -c3 is recommended, this will run rsync IO at "idle" priority. Uncomment
# the next line to activate this.
# RSYNC_IONICE='-c3'
# Don't forget to create an appropriate config file,
# else the daemon will not start.
Now some logs. /var/log/rsyncd.log
2018/05/04 15:10:16 [889] rsyncd version 3.1.2 starting, listening on port 873
2018/05/04 15:10:16 [889] bind() failed: Cannot assign requested address (address-family 2)
2018/05/04 15:10:16 [889] unable to bind any inbound sockets on port 873
2018/05/04 15:10:16 [889] rsync error: error in socket IO (code 10) at socket.c(555) [Receiver=3.1.2]
Output of ps aux | grep rsync command on sn2:
sn2 1555 0.0 0.1 13136 1060 pts/0 S+ 15:46 0:00 grep --color=auto rsync
Output of ps aux | grep rsync command on sn1:
sn1 12875 0.0 0.1 13136 1012 pts/0 S+ 15:48 0:00 grep --color=auto rsync
root 21281 0.0 0.2 12960 2800 ? Ss 13:31 0:00 /usr/bin/rsync --daemon --no-detach
This is the main difference I see between the two nodes.
Output of the command sudo systemctl status rsync on sn1:
rsync.service - fast remote file copy program daemon
Loaded: loaded (/lib/systemd/system/rsync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-05-04 13:31:10 UTC; 2h 19min ago
Main PID: 21281 (rsync)
Tasks: 1 (limit: 1113)
CGroup: /system.slice/rsync.service
└─21281 /usr/bin/rsync --daemon --no-detach
May 04 13:31:10 sn1 systemd[1]: Started fast remote file copy program daemon.
Output of the same command in sn2:
rsync.service - fast remote file copy program daemon
Loaded: loaded (/lib/systemd/system/rsync.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-05-04 15:10:16 UTC; 41min ago
Process: 889 ExecStart=/usr/bin/rsync --daemon --no-detach (code=exited, status=10)
Main PID: 889 (code=exited, status=10)
May 04 15:10:15 sn2 systemd[1]: Started fast remote file copy program daemon.
May 04 15:10:16 sn2 systemd[1]: rsync.service: Main process exited, code=exited, status=10/n/a
May 04 15:10:16 sn2 systemd[1]: rsync.service: Failed with result 'exit-code'.
Output of the command sudo netstat -lptu | grep rsync on sn1:
tcp 0 0 sn1:rsync 0.0.0.0:* LISTEN 21281/rsync
while in sn2 it returns nothing...
Finally, the sn2 /etc/hosts contains:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#ADDED BY ME
#10.0.2.15 sn2
192.168.13.130 proxy-server
192.168.13.131 sn1
192.168.13.132 sn2
#192.168.13.133 sn3
#192.168.13.134 sn4
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
And sn1's as well:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#ADDED BY ME
#10.0.2.15 sn1
192.168.13.130 proxy-server
192.168.13.131 sn1
192.168.13.132 sn2
#192.168.13.133 sn3
#192.168.13.134 sn4
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
I'm running Ubuntu 18.04 on each server node, in a virtual machine.
Can you help me figure out what is happening? Unfortunately I must use rsync because I'm working on OpenStack Swift, so no changes are allowed :)
OK, I've solved it. It was simple: just gain superuser privileges, then enable and start the rsync service:
sudo su
Enter your password, then run:
systemctl enable rsync
systemctl start rsync
If your system doesn't use systemd, use "service" instead:
service rsync restart
You can check that rsync is now working by looking at /var/log/rsyncd.log. The bind() error is now gone.
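Two things worth checking afterwards. First, verify the daemon is really listening, for example:

sudo ss -tlnp | grep 873        # should show rsync bound to port 873
rsync rsync://192.168.13.132/   # should print the module names (account, container, object)

Second, note that the rsyncd.conf quoted above says address = 192.168.130.132, while sn2's address is 192.168.13.132. rsync cannot bind to an address the host doesn't own, which is exactly what the "Cannot assign requested address" line in the log says, so that line needs fixing (or removing) for the daemon to keep starting.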
With the default config, I always do rsync -rtP /home/me/source/ x.x.x.x:/home/someoneelse/source (where x.x.x.x is an actual IP address). I am not aware of when, or if, rsync:// would ever need to be specified as the protocol. In my case, I had your error until I enabled sshd. I also installed rsync-daemon, but I don't actually know whether that is needed. Here is the entire solution (I only did this on the remote computer; I believe the local computer, on which I could then run the command above successfully, only had the openssh and rsync packages):
sudo dnf -y install rsync-daemon openssh
sudo systemctl enable rsyncd
sudo systemctl start rsyncd
sudo systemctl enable sshd
sudo systemctl start sshd
To be clear, I have Fedora 27, sudo systemctl status firewalld says the firewall is running, and I did not have to manually create any firewall rules that I'm aware of (there are no instances of firewall or iptables in my bash history).
In the command that successfully runs (at the top of this answer) I used some options, but they are not required: -r: recursive, -t: copy timestamps to destination files, -P: show progress. A forward slash (/) is at the end of only the source path, so that rsync doesn't create a directory called /home/someoneelse/source/source at the destination.
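For reference, the path syntax is what selects the transport; a quick sketch (x.x.x.x and module are placeholders):

# remote-shell transport: needs sshd on the remote side, no rsync daemon
rsync -rtP /home/me/source/ x.x.x.x:/home/someoneelse/source

# daemon transport: needs rsyncd listening on port 873, and 'module' defined in rsyncd.conf
rsync -rtP /home/me/source/ rsync://x.x.x.x/module
# the double-colon form is equivalent: x.x.x.x::module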

FileZilla: able to connect via SFTP, but failed to list directories

I used FileZilla to connect to one of my Linux servers via the SFTP protocol, but got the error trace below.
Status: Connecting to <server_ip>...
Response: fzSftp started, protocol_version=5
Command: keyfile "C:\ruifeng_ibm.ppk"
Command: open "root@<server_ip>" 22
Status: Connected to <server_ip>
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
On the server when I ran lsof -i, I was able to see the established sshd connection.
sshd 12333 root 3u IPv4 109406 0t0 TCP <server_hostname>:ssh-><workstation_ip>:54315 (ESTABLISHED)
How could the directories not be listed when the connection is successful? I had no idea how to debug it, either.
Turned out to be a silly problem.
I had put the welcome message below in my .bashrc file.
echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>"
Either it contained some characters FileZilla doesn't honor, or output there simply isn't supported by FileZilla; I was too lazy to dig further. After removing this message, the connection worked and the directories got listed.
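If you want to keep the greeting, a common workaround is to print it only in interactive shells, since SFTP runs over a non-interactive channel that chokes on unexpected output. A sketch for .bashrc:

# only emit the banner in interactive shells; sftp/scp sessions stay clean
case $- in
  *i*) echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>" ;;
esac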

Checking if localhost is making ftp connection

Is there a way to check whether localhost is making an FTP connection to another server?
The requirement is like this: local host -> serverA,
remote server -> serverB.
I need to check whether serverA is making FTP connections to serverB.
Whenever serverA makes an FTP connection to serverB, how do I get notified?
I tried ps -ef | grep -i ftp; however, the ps/grep process itself shows up in the results, so I can't use this in a shell script. Is there a better way to check whether serverA is making FTP connections to serverB and, if so, to get notified or log it to a file?
Thanks
Your problem of "ps -ef | grep -i ftp" also reporting the grep process results from grep searching for the string "ftp"; it would also hit any other process with the word "ftp" in its command line.
To fix that, check whether you have the procps tools "pgrep" and "pkill" installed. They are very helpful for 'grepping' processes and their command lines.
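For example, something like this matches only processes whose executable name is exactly ftp and avoids the self-matching problem (-a, which prints the full command line, is in procps-ng; on older procps use -l instead):

pgrep -ax ftp   # -x: exact name match, -a: show the command line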
To solve your initial problem, check whether you have the 'ss' command ("show sockets", from the iproute2 package) installed.
Its output might be useful (11.22.33.44 is your local IP, 130.133.3.130 the remote):
root:sigkill:~/# ss -p|cat
State Recv-Q Send-Q Local Address:Port Peer Address:Port
[...]
ESTAB 0 0 11.22.33.44:43681 130.133.3.130:ftp users:(("ftp",19729,4),("ftp",19729,3))
[...]
There are a few approaches that you could take:
- You could poll running processes for ftp. This wouldn't catch other FTP clients (if you care about that), and it wouldn't catch very short ftp sessions that slip between polls.
- If your system supports execution logging, you could log all executions of ftp. Again, this wouldn't catch other FTP clients.
- You could watch for outbound connections on port 21/tcp using some mechanism provided by your system (for instance, on Linux, use an iptables rule that matches outbound FTP connections to any servers you care about and logs them using the LOG target; see the sketch after this list). This would catch all connections regardless of client, but tracking down the process and user would be a little more complicated.
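A sketch of that third approach (<serverB-ip> is a placeholder; --syn limits logging to new connection attempts):

iptables -A OUTPUT -p tcp --syn --dport 21 -d <serverB-ip> -j LOG --log-prefix "ftp-out: "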
You can use $ grep ftp /etc/services to see which port numbers are assigned to the FTP-related services (this lists the ports to watch, not live connections):
$ grep ftp /etc/services
ftp-data 20/tcp
ftp-data 20/udp
...
ftp 21/tcp
ftp 21/udp fsp fspd
...
sftp 115/tcp
sftp 115/udp
...
ftp-data 20/sctp # FTP
ftp 21/sctp # FTP
...
ftps-data 989/tcp # ftp protocol, data, over TLS/SSL
ftps-data 989/udp # ftp protocol, data, over TLS/SSL
ftps 990/tcp # ftp protocol, control, over TLS/SSL
ftps 990/udp # ftp protocol, control, over TLS/SSL
Use netstat to see the open connections, e.g. for plain FTP:
$ netstat -tan | grep \:21
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN
tcp 0 0 :::21 :::* LISTEN
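Putting this together, a minimal polling sketch that appends a line to a log file whenever an FTP control connection to serverB is established (<serverB-ip>, the 10-second interval, and the log path are placeholders):

#!/bin/sh
# poll established connections to port 21 and log any that involve serverB
while sleep 10; do
  if ss -tn state established '( dport = :21 )' | grep -q '<serverB-ip>'; then
    echo "$(date): ftp connection to serverB detected" >> /var/log/ftp-watch.log
  fi
done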

rsync port 22: Connection timed out

I want to make a backup of my remote server folders (Ubuntu server) to another remote server (Linux server), but when I run this command from the first server it displays an error message:
rsync -raz --progress firstdirectoy root@serverIP:/home
The displayed message is:
ssh: connect to host <serverIP> port 22: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]
But the same command from server 2 to server 1 works fine, and the folder is nicely copied onto server 1.
How can I avoid the connection error so I can copy my folder from server 1 to server 2 with rsync?
Seems like server 2 has no active SSH daemon while server 1 has one.
Try running the SSH daemon, or use the raw rsync protocol and the rsync daemon.
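A daemon-protocol sketch, assuming you define a module (here called backup, a placeholder) in rsyncd.conf on server 2 and start rsync --daemon there:

rsync -raz --progress firstdirectoy rsync://serverIP/backup
# equivalent double-colon form: serverIP::backup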
If it's a connection timeout because your SSH server is slow to respond, you can tweak the timeout in rsync:
rsync -e 'ssh -o ConnectTimeout=120'
Otherwise it may be a missing SSH daemon (sshd) on server 2, as stated by @geov, or a closed port on your firewall. You may start by testing an SSH login:
ssh user@serverIP
and see whether it works. nmap serverIP will probably help you too, showing whether SSH is running or not.
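For a quick probe of just the SSH port, either of these works (nc is netcat; both only test TCP reachability, not authentication):

nmap -p 22 serverIP
nc -vz serverIP 22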
And please do NOT use root user for your rsync copy!
If you wait for a long time, the prompt eventually appears. I think your server 2's IP is wrong.
For me, this error appeared when attempting to rsync between two AWS EC2 instances that were not part of the same security group. To fix it:
- Get an overview of how to create security groups
- Change the security groups of the instances
- Allow instances within the same security group to communicate

Binding external IP address to Rabbit MQ server

I have box A, which has a consumer on it that listens to a RabbitMQ server.
I have box B, which will publish a message to the listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
telnet on port 22 is fine; I can ssh into box A from box B.
So I assume I need to change the IP that the RabbitMQ server uses.
I found this: http://www.rabbitmq.com/configure.html and I now have a config file in the location the documentation said to use, named rabbitmq.config, containing:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started the RabbitMQ server again. It failed. Here are the errors from the error logs; it's a little over my head (in fact most of this is).
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
and here is some more from the startup log:
Erlang has closed
Error: {node_start_failed,normal}
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})
Please help
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
Also, per that documentation, the default is to bind to all interfaces. Perhaps a configuration setting or environment variable already set on your system restricts the server to localhost, overriding anything else you do.
UPDATE: After reading again, I realize that the telnet should have returned "Connection refused", not "No route to host". I would also check whether you are having a firewall-related issue.
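Separately, the {case_clause,{{"box.a.ip.addy",5672}}} in the crash report points at the tcp_listeners value: rabbit expects a list of listeners there, not a bare tuple. A sketch of the corrected rabbitmq.config (same address placeholder as in the question):

[
  {rabbit, [{tcp_listeners, [{"box.a.ip.addy", 5672}]}]}
].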
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric@dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a port:
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
-A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
COMMIT
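Note that 15672 in this sample is RabbitMQ's management UI port (in recent versions); for the AMQP listener discussed in this question you would open 5672 instead:

-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT

Then reload the rules, e.g. with sudo service iptables restart.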
