Google Cloud virtual machine instance created from snapshot not allowing ssh - networking

I have created a virtual machine instance from a snapshot taken of the production server. The SSH key is set, but I am unable to SSH into the instance, either from PuTTY or from the Google Cloud SSH option in the browser.
I searched around and found that the issue is a new release which does not set the default IP gateway for the instance. I set the IP gateway and restarted the instance, but it still shows the same error.
I have also checked the firewall rules, and port 22 traffic is allowed to the instance.
All other instances in the same zone work over SSH; only the instance newly created from the snapshot does not.
Looking into the logs from the serial port: ifup: failed to bring up lo
[Screenshot of the error]

@Patrick's answer helped me get to the solution; here are the explanatory steps.
1) Serial console.
Go to your instance details and enable the serial port.
Connect to your instance through the serial port and log in with your user and password.
If you do not have a user, create one by running the following as a startup script:
#!/bin/bash
# startup scripts run as root, so sudo is not needed here;
# note that "sudo echo ... | chpasswd" would not elevate chpasswd
useradd -G sudo user
echo 'user:password' | chpasswd
Run sudo systemctl status networking.service to check the networking status.
Remove the /etc/network/interfaces.d/setup file, then edit your /etc/network/interfaces so it contains:
auto lo
iface lo inet loopback
Restart the networking service by running sudo systemctl restart networking.service.
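If you prefer the CLI, enabling and connecting to the serial console can also be done with gcloud; a sketch, assuming the gcloud CLI is installed and your instance is named my-instance (substitute your own instance name):
# enable interactive serial console access on the instance
gcloud compute instances add-metadata my-instance --metadata serial-port-enable=TRUE
# connect to the serial console
gcloud compute connect-to-serial-port my-instance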
2) The following startup script also worked for me:
#!/bin/bash
# request a DHCP lease on the primary interface
dhclient eth0
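A sketch for attaching such a startup script from the CLI, assuming it is saved locally as startup.sh and the instance is named my-instance (both names are examples):
gcloud compute instances add-metadata my-instance --metadata-from-file startup-script=startup.sh
The script runs at boot, so reset or restart the instance after adding it.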

It seems the issue here is that the network interface of your new instance is not coming up. You can try one of two things:
1) Try connecting through the serial console. This does not connect through port 22 or use SSH. However, if the network card is not coming up at all, this may also fail.
2) Add a startup script to the instance that runs the commands you need to configure the network card, as sketched below.
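For example, a minimal startup script along those lines might look like this (assuming the primary NIC inside the guest is eth0):
#!/bin/bash
# bring the loopback and primary interfaces up, then request a DHCP lease
ip link set lo up
ip link set eth0 up
dhclient eth0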

Related

Set GITLAB to be accessible on LAN

After much research I have not found anything...
I installed GitLab on a CentOS VM. The CentOS IP address is 192.168.100.1.
In the file /etc/gitlab/gitlab.rb, I modified the line:
external_url 'http://192.168.100.1:1234'
I executed the command gitlab-ctl reconfigure and no errors appeared.
Using Firefox, I can access my GitLab on all of the CentOS interfaces:
192.168.100.1:1234
127.0.0.1:1234
This is expected, because when I execute netstat -ntlp, I can see:
tcp 0 0 0.0.0.0:1234 0.0.0.0:* LISTEN 22222/nginx: master
What is the problem?
I cannot access GitLab from outside the VM, even from within the same network, 192.168.100.0/24.
From another VM on the same network (192.168.100.2), I can ping 192.168.100.1 and open an SSH connection, but if I run:
curl 192.168.100.1:1234
the result is "Time out".
Thanks,
Vincent
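Since nginx is listening on 0.0.0.0:1234 but a remote curl times out, a host firewall is a likely suspect; a quick check on CentOS, assuming the default firewalld is in use:
sudo firewall-cmd --list-ports
# if 1234/tcp is missing, open it and reload
sudo firewall-cmd --permanent --add-port=1234/tcp
sudo firewall-cmd --reload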

DevStack instances can't be reached outside devstack node

Following the official documentation, I'm trying to deploy DevStack on an Ubuntu 18.04 Server OS in a virtual machine. The DevStack node has only one network card (ens160), connected to a network with the CIDR 10.20.30.0/24. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So, again following the official floating-IP documentation, I came up with this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This results in a br-ex bridge with the IP address 10.20.30.40 and a secondary IP address of 10.20.30.1. (That gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?)
Now, after a successful deployment, disabling ufw (according to this), creating a CirrOS instance with a security group allowing ping and SSH, and attaching a floating IP, I can only access my instance from the DevStack node, not from the rest of the network! Also, from within the CirrOS instance I cannot reach the outside world (even though I can reach it from the DevStack node).
Afterwards, following this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I can still access my instance only from the DevStack node and not from the outside! But the good news is that I can now reach the outside world from within the CirrOS instance.
Any help would be appreciated!
Update
With the second configuration, while pinging the instance's floating IP and watching packets in tcpdump, I observed that the ARP who-has broadcast for the instance's floating IP reaches the DevStack node from the network router; however, no is-at reply is generated, so the ICMP packets are never routed to the DevStack node and the instance.
So, with some tricks, I crafted the response myself and everything worked fine afterwards; but this certainly isn't a solution, and I would expect DevStack to work out of the box without any tweaking, so this is probably down to a DevStack misconfiguration.
After 5 days of tests, research, and reading, I found this: Openstack VM is not accessible on LAN
Enter the following commands on the DevStack node:
# answer ARP requests for the floating IPs on behalf of the instances
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
# masquerade instance traffic leaving through ens160
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
Cheers!
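Note that both settings are lost on reboot. A sketch for making them persistent on Ubuntu (the iptables-persistent package is an assumption; any equivalent rule-saving mechanism works):
# persist the proxy_arp setting
echo 'net.ipv4.conf.ens160.proxy_arp = 1' | sudo tee /etc/sysctl.d/99-proxy-arp.conf
sudo sysctl --system
# persist the NAT rule
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save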

Docker: Unable to run Docker commands

I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and after making the following changes to enable the Remote API, I'm no longer able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
Ran /etc/init.d/docker start.
The following is the error received:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the login user to the docker group.
If you configure the Docker daemon to listen on a TCP socket (as you do), you should use the -H command-line option with the docker command to point it at that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either make your options listen on both sockets:
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
and restart, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the Unix socket is pure user security. If your Docker host is TLS-enabled, you can ensure that only people with signed certificates access the host, not just anyone with access to the system.
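As an alternative to passing -H on every invocation, the docker client also honors the DOCKER_HOST environment variable:
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps   # now talks to the TCP socket without -H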

Failed to ssh to machine after installing Docker

My SSH stopped working after I successfully installed Docker (following the official instructions at https://docs.docker.com/engine/installation/) on an Ubuntu machine A. Now my laptop cannot SSH to A, but it works for other machines, say B, sitting in the same network environment as A. A can SSH to B, and B can also SSH to A. What could be the problem? Can anyone suggest how I can diagnose this?
If you are using a VPN service, you might be encountering an IP conflict between the docker0 interface and your VPN.
To resolve this:
Stop the Docker service:
sudo service docker stop
Remove the old docker0 interface created by Docker:
sudo ip link del docker0
Configure the docker0 bridge (in my case I only had to define the "bip" option; see the sketch below).
Start the Docker service:
sudo service docker start
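A sketch of that "bip" configuration, written to /etc/docker/daemon.json (10.10.0.1/24 is only an example; pick a range that does not overlap your VPN):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "10.10.0.1/24"
}
EOF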
Most probably there is an IP conflict between the docker0 interface and your VPN service. As already answered, the way to fix it is to stop the Docker service, remove the docker0 interface, and configure the daemon.json file. I added the following lines to my daemon.json:
{
  "default-address-pools": [
    {"base": "10.10.0.0/16", "size": 24}
  ]
}
My VPN was handing me an IP in the 192.168 range, so I chose a base IP that does not fall in that range. Note that the daemon.json file does not exist by default, so you have to create it in /etc/docker/.
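After restarting the daemon, you can confirm that docker0 picked up an address from the new pool:
sudo service docker restart
ip addr show docker0   # should now show an address from 10.10.0.0/16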

Unable to use SSH after changing DHCP to static IP in coreOS

I could connect to CoreOS through PuTTY on Windows 10.
But after changing from DHCP to a static IP in CoreOS, I suddenly became unable to use SSH through PuTTY (I cannot connect to CoreOS through PuTTY on Windows 10).
I wonder why this happened and how I can solve this problem.
I investigated the status of ssh in CoreOS, and it says inactive.
What should I do to solve this problem?
If anyone knows, please help me.
I have no clue... TT
If your sshd is inactive, you might be able to restart it. I'd be interested in whether you used networkd (as documented here) when you changed from DHCP to static IP, as I think that change should have been picked up automagically by CoreOS.
If the following command shows sshd as "inactive (dead)":
sudo systemctl status sshd
You can start sshd with:
sudo systemctl start sshd
And just in case you need it, here is the documentation on how to customize the SSH daemon.
Are you sure that your network unit was formatted correctly and is being accepted?
Did you restart networkd afterwards if you added the network unit manually?
sudo systemctl restart systemd-networkd
Are you using cloud-config to add the network unit? See if there is anything in the journal:
journalctl _EXE=/usr/bin/coreos-cloudinit
You can also validate your cloud-config here: https://coreos.com/validate/
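For reference, a static-IP networkd unit on CoreOS usually looks like the following (the interface name, addresses, and file name are examples; adjust to your setup), placed in /etc/systemd/network/static.network and followed by a networkd restart:
[Match]
Name=eth0

[Network]
Address=192.168.0.10/24
Gateway=192.168.0.1
DNS=8.8.8.8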
