How do I ssh into a KVM with minimal configuration/infrastructure?

As per another Q&A, it's possible to set up an Ubuntu KVM guest with minimal infrastructure, directly with qemu/kvm alone (without virsh or any such tooling).
What's missing is the ability to ssh into it. (Using the default serial console is slow and some key bindings don't work, e.g., Ctrl+A doesn't jump to the start of the line.)
What's the simplest hackish way to bind a single port on the host machine (e.g., 8022) to a given port on the virtualised one (e.g., 22), without setting up extra bridge networks, firewall rules or configuration files?
The simplest non-KVM-specific way I could think of would be to ssh from the guest back to the host using the -R [bind_address:]port:host:hostport option of ssh, e.g., ssh -R "8022:[::1]:22" guest@10.0.2.2, but that requires setting up a new user on the host and sharing login credentials between the host and the guest. Is there a simpler way?
P.S. The network on the guest already works, and you can already access the host from the guest, but I couldn't find a way to access the guest from the host over IP (without setting up complex bridge networks).

The answer appears to be pretty straightforward: as per https://unix.stackexchange.com/questions/124681/how-to-ssh-from-host-to-guest-using-qemu, just add the following to the kvm options to forward port 1810 on the host to port 22 on the guest:
-net nic -net user,hostfwd=tcp::1810-:22
E.g.,
kvm -m 2048 -smp 2 -hda ubuntu-18.10-server-cloudimg-amd64.img -hdb user-data.img -net nic -net user,hostfwd=tcp::1810-:22 -nographic
Then you can ssh into the machine with ssh ubuntu@localhost -p 1810.
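For what it's worth, newer QEMU versions spell the same forwarding with the -netdev/-device pair instead of the legacy -net options (the virtio-net model below is just one choice of NIC):
kvm -m 2048 -smp 2 -hda ubuntu-18.10-server-cloudimg-amd64.img -hdb user-data.img -netdev user,id=net0,hostfwd=tcp::1810-:22 -device virtio-net-pci,netdev=net0 -nographic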

Related

Docker on CentOS with bridge to LAN network

I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far resulted in a lost SSH session, and I had to go into the VM from a console to revert the steps I did.
There are multiple ways this can be done. The two I've had most success with are routing a subnet to a Docker bridge and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure Docker. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with a new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X, where X is the IP of your docker host. This is the external router/gateway/"network guy" config. On a Linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or run something like RIP/OSPF on the network, or Calico, which takes care of routing, then this is the cleanest solution.
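If a container needs a fixed, predictable address on that subnet, user-defined networks also accept --ip (the address and image below are just examples on the routed0 network created above):
docker run -d --net routed0 --ip 10.101.11.10 --name web nginx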
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the docker host is more complex. The main interface needs to join this bridge at boot time, so it's not a native docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which needs additional "promiscuous" config first to allow this to work.
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but they will not survive a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.Y
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
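For the "manual container setup" route without pipework, a rough sketch looks like the following (it reuses $CONTAINERID from above; the veth names are arbitrary and the address/gateway match the example addresses used earlier):
PID=$(docker inspect -f '{{.State.Pid}}' "$CONTAINERID")
ip link add veth-host type veth peer name veth-cont       # veth pair: one end on the host, one for the container
ip link set veth-host master shared0                       # attach the host end to the bridge
ip link set veth-host up
ip link set veth-cont netns "$PID"                          # move the other end into the container's netns
nsenter -t "$PID" -n ip link set veth-cont up
nsenter -t "$PID" -n ip addr add 10.101.10.43/24 dev veth-cont
nsenter -t "$PID" -n ip route add default via 10.101.10.1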
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/softswitch problems, where the network and interface will need to be promiscuous with regard to MAC addresses.
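As a rough illustration (the container address below is made up, and whether the promisc step is needed depends on your setup), you would allow extra MAC addresses on the parent interface and then attach a container to the macvlan network:
ip link set eth0 promisc on
docker run --rm --net pub_net --ip 10.101.10.50 busybox ping -c 3 10.101.10.1
On a VM you typically also have to allow promiscuous mode / forged MAC addresses in the hypervisor's NIC settings.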

How to create ssh tunnel and keep it running

I want to access machine A, which is behind a firewall, through a jump host from machine B.
I want to do this either via SSH keys or via username and password.
What are the steps and commands to achieve this?
The feature is called port forwarding:
ssh -L localport:machine-a-address.domain:remote-port machine-b
Then you can simply use localport on localhost to access the remote service on machine-a, for example:
telnet localhost localport
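To keep such a tunnel up in the background, one common pattern (the placeholders are the same as above, and autossh is assumed to be installed) is to background a forward-only ssh with keep-alives, or to let autossh restart it when the connection drops:
ssh -f -N -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes -L localport:machine-a-address.domain:remote-port machine-b
autossh -M 0 -f -N -o ServerAliveInterval=30 -L localport:machine-a-address.domain:remote-port machine-b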

Communication between two hosts both on a different network connected to the same switch

How can I make two machines on a different network, connected to the same switch, communicate with each other?
On Ubuntu you can use the ssh command for this communication.
For this, install the OpenSSH server (see this guide):
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
then use sudo ssh user@ip-address
Make sure the firewall is disabled (or at least allows SSH).
If there is no DHCP server, you will have to configure the IP addresses manually on the devices, as sketched below.
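A minimal sketch of that manual step, assuming the second machine sits on 192.168.2.0/24 (all addresses and the interface name are made up for illustration), is to give the first machine an address on the same subnet:
sudo ip addr add 192.168.2.10/24 dev eth0    # run on the first machine
ssh user@192.168.2.5                          # the second machine is now reachable across the shared switch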

How do I get the external host's IP address from inside a docker container

We are transitioning (we hope) from CoreOS to CentOS, from fleet to swarm. I need to determine the ip address of the machine running docker from inside the container.
The problem is that we use nginx to determine which of the machines in our docker cluster runs which service. To make this work we need the container to be able to post to our etcd repository the ip address of the machine upon which it is located. Everything I have seen so far has been able to get me to a 172.17.0.1 ip address for the external machine, but ALL of our containers on ALL of our dockers will have that private address. I need an EXTERNAL address that nginx may use to get to the service.
I could use the '--hostname ip ...' option or the '-e EXT_HOST_IP=ip ...' option to set an ip address, but if I include these in the 'docker run' command, the shell processing the docker command will expand the 'ip...' and return the ip address of the current machine -- NOT the machine upon which swarm will eventually run the container.
The best I have come up with so far is to create a file/directory on the host machine that contains the ip address of the host machine. I can then use the docker '-v' option to mount the directory inside the container, and get the ip address from that. It just seems like there should be an easier way to do this.
Swarm issue 1106 mentions in a recent answer
inside swarm I can get the ip of the host machine like so
ip route | awk '/default/ { print $3 }'
Which is fine for many purposes.
But when I talk to that IP I need to use the proper dns name for TLS to work.
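If you also need a DNS name for that address (for example for TLS), one option, assuming the resolver inside the container can do a reverse lookup for the host, is:
HOST_IP=$(ip route | awk '/default/ { print $3 }')
getent hosts "$HOST_IP"
Note that getent may be missing from minimal images such as busybox.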

SSH from Virtualbox Guest to DynDNS address

I have Windows 10 as host with a Manjaro installation as Guest on Virtualbox.
I have set a Debian server on another house with ssh installed. I have setup a dyndns on Debian's network so I can access it remotely.
For example..
From address 12.34.56.78 I ssh to foo.dyndns.org:1234. This port redirects me to 192.168.1.5:22 always as this is my Debian machine and the connection is established. I am able to do this from Windows 10 as well as my android and any other device in 12.34.56.78 or by 3G.
But..
When I try to do this
$ ssh foo.dyndns.org:1234
from the Manjaro Guest in Virtualbox I get the following error:
ssh: Could not resolve hostname foo.dyndns.org:1234: Name or service not known
So I ran ifconfig and saw my inet address was 10.0.2.15. I changed VirtualBox's network adapter from NAT to Bridged so I could get a LAN IP, and I got the host's IP, 192.168.2.4. I gave it another try and it still didn't work.
Also, if I try to connect from the VM to the server while I'm on the same network,
$ ssh user@192.168.2.5:22
it works. In this case VirtualBox's network adapter was NAT.
This command also works if I try from my Android phone (ConnectBot).
I can connect the same way from PuTTY on Windows.
So my questions are:
Can it be done?
If so, how? (and why?)
Can a VBox guest get a LAN IP that's not the same as the host's?
Is there any more information I should provide?
I have searched for a couple of days here and on Google, and all I found were solutions for how someone can ssh INTO a VM. No one (from what I saw) asked the opposite.
Checking the manual page for ssh reveals the format of the command-line options:
ssh [...] [-p port] [...] [user@]hostname
This simply means that you need to change
ssh foo.dyndns.org:1234
to
ssh -p 1234 foo.dyndns.org
provided the domain resolves correctly to the IP address.
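If you connect regularly, an entry in ~/.ssh/config avoids retyping the port (the Host alias and User below are placeholders):
Host homeserver
    HostName foo.dyndns.org
    Port 1234
    User user
Then ssh homeserver is enough.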
