[WARNING ] Unable to bind socket, error: [Errno 49] Can't assign requested address
The ports are not available to bind
Why is this happening, especially in the salt-master role?
The salt-master is already running and is using the ports.
Stop the salt-master service and then run sudo salt-master -l debug
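To confirm which process is holding the ports before restarting, something like this works (a sketch assuming the default Salt publish/return ports 4505 and 4506 and a systemd-based system):
# see what is listening on the default salt-master ports
sudo lsof -i :4505 -i :4506
# stop the running master, then start it in the foreground with debug logging
sudo systemctl stop salt-master
sudo salt-master -l debug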
I am trying to set up a reverse ssh tunnel between a local machine behind a router and a machine on the Internet, so that the Internet machine can tunnel back and mount a disk on the local machine.
On the local machine, I type
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:2222 root@ip_of_remote_machine
This causes the remote machine to listen on port 2222. But when I try to mount the sshfs disk on the remote machine, I get "connection refused" on the local machine. Interestingly, port 2222 doesn't show up on the local machine as being bound. However, I'm definitely talking to ssh on the local machine since it complains
debug1: channel 0: connection failed: Connection refused
I have GatewayPorts set to yes on both machines. I also have AllowTcpForwarding yes on both machines as well.
First, the line needs to be
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:22 root@ip_of_remote_machine
where port 22 is the SSH server on the local machine.
Second, since I am using sshfs, the following line needs to be in the local machine's sshd_config:
Subsystem sftp /usr/lib64/misc/sftp-server
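Putting both fixes together, the flow looks roughly like this (the user and mount paths are placeholders, and sshfs must be installed on the remote machine):
# on the local machine: expose the local sshd (port 22) as port 2222 on the remote machine
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:22 root@ip_of_remote_machine
# on the remote machine: mount the local machine's disk back through the tunnel
sshfs -p 2222 user@127.0.0.1:/path/to/disk /mnt/local_disk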
I have created a virtual machine instance from a snapshot taken of the production server. The SSH key is set, but I am unable to SSH into the instance, either from PuTTY or from the Google Cloud browser SSH option.
I have searched around and found that the issue is a new release which does not set the
default IP gateway for the instance. I have set the IP gateway and restarted the instance, but it still shows the same error.
I have also checked the firewall rules, and port 22 traffic is allowed to the instance.
All other instances in the same zone work over SSH; only the instance newly created from the snapshot does not.
Looking into the logs from the serial port, I see: ifup: failed to bring up lo
@Patrick's answer helped me get to the solution; here are the explanatory steps.
1) Serial console.
Go to your instance details and enable the serial port.
Connect to your instance through the serial port and log in with your user and password.
If you do not have a user, create one with the following startup script:
#!/bin/bash
# startup scripts run as root, so sudo is not needed here
useradd -G sudo user
echo 'user:password' | chpasswd
Run sudo systemctl status networking.service to check the networking status.
Remove the /etc/network/interfaces.d/setup file, then edit your /etc/network/interfaces so it contains:
auto lo
iface lo inet loopback
Restart the networking service by running sudo systemctl restart networking.service.
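For reference, the whole of this step can be done non-interactively; this is a sketch assuming a Debian-style image that manages networking through /etc/network/interfaces:
sudo rm -f /etc/network/interfaces.d/setup
# recreate a minimal interfaces file containing only the loopback
printf 'auto lo\niface lo inet loopback\n' | sudo tee /etc/network/interfaces
sudo systemctl restart networking.service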
2) The following startup script also worked for me:
#!/bin/bash
sudo dhclient eth0
It seems the issue here is that the network interface of your new instance is not coming up. You can try one of two steps:
1) Try connecting through the serial console. This does not connect through port 22 or use SSH. However, if the network card is not coming up at all, this may also fail.
2) Add a startup script to the instance which will run the commands needed to configure the network card; a sketch follows.
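The startup script can be attached from outside the broken instance; a sketch using gcloud (the instance name my-instance is a placeholder):
# save the commands to a local file
printf '#!/bin/bash\ndhclient eth0\n' > startup.sh
# attach it as the instance's startup script, then reboot so it runs
gcloud compute instances add-metadata my-instance --metadata-from-file startup-script=startup.sh
gcloud compute instances reset my-instance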
I have installed minikube and kubectl on Ubuntu 16.04 LTS.
However, when I try any command with kubectl, it gives the error below:
Unable to connect to the server: dial tcp x.x.x.x:x i/o timeout
kubectl version only gives the client version; the server version is not displayed.
Is there any workaround to fix this?
I had to ensure the interface was up and running.
So a sudo ifconfig vboxnet0 up resolved the issue.
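To verify the fix, you can check the interface state and then retry kubectl:
ip link show vboxnet0   # should now report the interface as UP
kubectl version         # should print both client and server versions
kubectl cluster-info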
We are implementing a 3-node OpenStack cloud using GlusterFS as the storage solution. The three nodes (controller, compute, and network) are peers in Gluster. We need to add another compute node as a peer, but it shows the following error:
[root@newcompute2 ~]# gluster peer probe 192.168.10.3
peer probe: failed: Probe returned with Transport endpoint is not connected
where 192.168.10.3 is the IP of the controller node. The logs show the same error.
Please suggest what the reason for this may be and the required solution.
This indicates that no ports are open for proper communication between the two hosts.
To fix this, run the following commands on the servers:
1) firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
2) firewall-cmd --zone=public --add-port=24009/tcp --permanent
3) firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent
4) firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent
5) firewall-cmd --reload
Then try again:
gluster peer probe x.y.z.n (the peer's IP)
peer probe: success.
Please check, and reply if you have any further questions.
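To double-check from either node that the peer really joined, gluster peer status lists the known peers and their connection state:
gluster peer status
# expected: the other node listed with State: Peer in Cluster (Connected)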
You must be sure that the glusterd service has been started on the probed node. You can start it with the following command:
sudo service glusterd start
or you can restart the service with the following command:
sudo service glusterd restart
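On systemd-based distributions the equivalent commands, which also make glusterd start on boot, would be:
sudo systemctl enable glusterd
sudo systemctl restart glusterd
sudo systemctl status glusterd   # confirm it is active before probing again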
I have had this error come up when GlusterFS is configured to use SSL and there is a problem loading the certificates (an expired cert, etc.). You can check this by removing the file /var/lib/glusterd/secure-access, which disables SSL for the management connection. If this file does not exist in the first place, then you probably do not have SSL configured and need to look for the issue elsewhere.
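If the file does exist, one quick thing to check is the certificate's expiry date. As a sketch, assuming the default certificate location /etc/ssl/glusterfs.pem (adjust the path if your setup differs):
# print the expiry date; an expired cert will break the probe
openssl x509 -in /etc/ssl/glusterfs.pem -noout -enddate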
I know it's very late, but I too got stuck on this problem and eventually solved it. For others facing this issue: one reason could be that, once the cluster is formed, new nodes can only be added from one of the trusted nodes.
The quick start doc says very clearly:
Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
for example:
node-1 and node-2: part of the trusted cluster group
node-3: needs to be added to this cluster
then from either node-1 or node-2 run
sudo gluster peer probe <node-3>
This is because of a firewall issue:
peer probe: failed: Probe returned with Transport endpoint is not connected
You need to run the following commands on all of your peers:
systemctl stop firewalld
iptables -I INPUT -p all -s 192.168.10.3 -j ACCEPT
(accept traffic from one node, and then do the same from the second)
It works on my machines :)
The instructions are here: http://gluster.readthedocs.org/en/latest/Install-Guide/Configure/
From the link https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/, the step below should solve the problem.
Step 4 - Configure the firewall
The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other node:
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
Alternatively, turn off the firewall on both nodes:
systemctl stop firewalld.service
I've got two servers on a LAN with fresh installs of CentOS 6.4 minimal and R 3.0.1. Both computers have the doParallel, snow, and snowfall packages installed.
The servers can ssh to each other fine.
When I attempt to make clusters in either direction, I get a prompt for a password, but after entering the password, it just hangs there indefinitely.
makePSOCKcluster("192.168.1.1",user="username")
How can I troubleshoot this?
edit:
I also tried calling makePSOCKcluster on the above-mentioned computer with a host that IS capable of being used as a slave (from other computers), but it still hangs. So, is it possible there is a firewall issue? I also tried using makePSOCKcluster with port 22:
> makePSOCKcluster("192.168.1.1",user="username",port=22)
Error in socketConnection("localhost", port = port, server = TRUE, blocking = TRUE, :
cannot open the connection
In addition: Warning message:
In socketConnection("localhost", port = port, server = TRUE, blocking = TRUE, :
port 22 cannot be opened
Here's my iptables configuration:
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
You could start by setting the "outfile" option to an empty string when creating the cluster object:
makePSOCKcluster("192.168.1.1",user="username",outfile="")
This allows you to see error messages from the workers in your terminal, which will hopefully provide a clue to the problem. If that doesn't help, I recommend using manual mode:
makePSOCKcluster("192.168.1.1",user="username",outfile="",manual=TRUE)
This bypasses ssh, and displays commands for you to execute in order to manually start each of the workers in separate terminals. This can uncover problems such as R packages that are not installed. It also allows you to debug the workers using whatever debugging tools you choose, although that takes a bit of work.
If makePSOCKcluster doesn't respond after you execute the specified command, it means that the worker wasn't able to connect to the master process. If the worker doesn't display any error message, it may indicate a networking problem, possibly due to a firewall blocking the connection. Since makePSOCKcluster uses a random port by default in R 3.X, you should specify an explicit value for port and configure your firewall to allow connections to that port.
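For example, with the iptables configuration shown in the question, only port 22 is open. To use a fixed port such as 11234, you would first open it on the master (a sketch for iptables on CentOS 6) and then pass port=11234 to makePSOCKcluster:
# on the master: allow inbound TCP connections to the chosen port
iptables -I INPUT -p tcp --dport 11234 -j ACCEPT
service iptables save   # persist the rule across reboots on CentOS 6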
To test for networking or firewall problems, you could try connecting to the master process using "netcat". Execute makePSOCKcluster in manual mode, specifying the hostname of the desired worker host and the port on the local machine that should allow incoming connections:
> library(parallel)
> makePSOCKcluster("node03", port=11234, manual=TRUE)
Manually start worker on node03 with
'/usr/lib/R/bin/Rscript' -e 'parallel:::.slaveRSOCK()' MASTER=node01
PORT=11234 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
Now start a terminal session on "node03" and execute "nc" using the indicated values of "MASTER" and "PORT" as arguments:
node03$ nc node01 11234
The master process should immediately return with the message:
socket cluster with 1 nodes on host ‘node03’
while netcat should display no message, since it is quietly reading from the socket connection.
However, if netcat displays the message:
nc: getaddrinfo: Name or service not known
then you have a hostname resolution problem. If you can find a hostname that does work with netcat, you may be able to get makePSOCKcluster to work by specifying that name via the "master" option: makePSOCKcluster("node03", master="node01", port=11234).
If netcat returns immediately, that may indicate that it wasn't able to connect to the specified port. If it returns after a minute or two, that may indicate that it wasn't able to communicate with the specified host at all. In either case, check netcat's return value to verify that it was an error:
node03$ echo $?
1
Hopefully that will give you enough information about the problem that you can get help from a network administrator.