I am using Windows, and I'm also running an Ubuntu server in VirtualBox. I've SSH'd into the guest machine countless times recently, when the guest was connected to a network using IPv4 addresses; this worked both at home and at work. Right now I'm connected to the university network. Here's the output of ifconfig when executed in my VM:
eth0 Link encap:Ethernet HWaddr 08:00:27:ae:e4:a0
inet6 addr: fe80::a00:27ff:feae:e4a0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:65404 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7588239 (7.5 MB) TX bytes:10610 (10.6 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KB) TX bytes:1296 (1.2 KB)
I did some research, and found this post on SO.
IPv6 link-local address format
So I ran netsh interface ipv6 show address on my host machine; here is my VirtualBox network info:
Interface 208: VirtualBox Host-Only Network
Addr Type DAD State Valid Life Pref. Life Address
--------- ----------- ---------- ---------- ------------------------
Other Preferred infinite infinite fe80::f8cd:e410:b1b1:c081%208
I then tried pinging the address, and it was successful. I then tried to SSH into the server using the following command:
ssh -6 fe80::f8cd:e410:b1b1:c081%208
And I got this error
"no address associated with name"
I don't understand why I'm getting this error. I've SSH'd into machines by specifying their IPv4 addresses before, and I've never hit this error. Could anyone tell me what I might be doing wrong?
Thanks for the help!
Try specifying the interface (zone) to the ssh client. ssh does not have a dedicated switch for that; instead, you append it to the address itself using this syntax:
<ipv6-address>%<interface>
For example:
fe80::f8cd:e410:b1b1:c081%eth0
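As an aside, the reason ifconfig shows fe80::a00:27ff:feae:e4a0 for MAC 08:00:27:ae:e4:a0 is the EUI-64 derivation: flip the universal/local bit of the first octet and insert ff:fe between the two halves of the MAC. A small shell sketch of that rule (the helper name is mine, not part of any tool):

```shell
# mac_to_linklocal: derive the EUI-64 IPv6 link-local address from a MAC.
# Hypothetical helper for illustration only.
mac_to_linklocal() {
    set -- $(echo "$1" | tr ':' ' ')      # split the six octets
    local first=$(( 0x$1 ^ 0x02 ))        # flip the universal/local bit
    printf 'fe80::%x:%xff:fe%02x:%x\n' \
        $(( first * 256 + 0x$2 )) "0x$3" "0x$4" $(( 0x$5 * 256 + 0x$6 ))
}

mac_to_linklocal 08:00:27:ae:e4:a0   # -> fe80::a00:27ff:feae:e4a0
```

On Windows the zone suffix is the numeric interface index (%208 in the netsh output above); on Linux it is the interface name (%eth0).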
I'm developing a WordPress site with the Vagrant box "vccw-team/xenial64", which can be found at vccw.cc. The website was slow, with waiting times averaging around 5 seconds. Some googling showed many people pointing at Vagrant's synced folder, which is slow in combination with VirtualBox. The suggested solution: NFS. NFS doesn't exist natively on Windows, which gave rise to the Vagrant plugin winnfsd.
I installed the plugin and changed the Vagrantfile as such:
config.vm.network :private_network, ip: "192.168.33.10"
config.vm.synced_folder _conf['synced_folder'],
_conf['document_root'], :create => "true", :mount_options => ['dmode=755', 'fmode=644'], type: "nfs"
On vagrant up, I receive this message:
==> vccw.dev: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp,dmode=755,fmode=644 192.168.33.1:/D/_projects/Vagrant/vccw/wordpress /var/www/html
Stdout from the command:
Stderr from the command:
mount.nfs: an incorrect mount option was specified
I guess the portion 192.168.33.1:/D/_projects/Vagrant/vccw/wordpress might be wrong, because D/_projects/Vagrant/vccw/wordpress exists on the host and not on the guest (192.168.33.1 is the host's address on the private network).
Other people managed to get the plugin working. Does anyone know what I'm doing wrong?
Versions:
Vagrant: 2.0.0
vagrant-winnfsd: 1.3.1
Virtualbox: 5.1.26 r117224 (Qt5.6.2)
I enabled DHCP in Vagrantfile like so:
config.vm.network :private_network, ip: "192.168.33.11", type: "dhcp"
But that resulted in the error:
NFS requires a host-only network to be created. Please add a host-only network to the machine (with either DHCP or a static IP) for NFS to work
Then I read in another question on StackOverflow that one can configure a host-only network by using the code below in the Vagrantfile:
config.vm.network :private_network, ip: "192.168.33.11"
config.vm.network :public_network, ip: "192.168.44.12"
config.vm.synced_folder _conf['synced_folder'],
_conf['document_root'], type: "nfs"
I think 192.168.44.0 255.255.255.0 is now the host-only network, derived from ip: "192.168.44.12". Note that this configuration also drops the dmode/fmode mount options, which are vboxsf options and likely caused the "incorrect mount option" error under NFS. It works now, and my WordPress site is faster, with loading times averaging around 3 seconds. I appreciate the improvement, but I'll keep hawking for other tweaks.
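A quick way to sanity-check that derivation: the network address is just the configured IP ANDed with the netmask. A small shell sketch (the helper name is mine):

```shell
# network_of: print the network address for an IP/netmask pair.
network_of() {
    set -- $(echo "$1 $2" | tr '.' ' ')   # eight octets: four IP, four mask
    printf '%d.%d.%d.%d\n' \
        $(( $1 & $5 )) $(( $2 & $6 )) $(( $3 & $7 )) $(( $4 & $8 ))
}

network_of 192.168.44.12 255.255.255.0   # -> 192.168.44.0
```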
Extra info, the output of ifconfig in the guest:
vagrant@vccw:~$ ifconfig
enp0s3 Link encap:Ethernet HWaddr 08:00:27:a8:df:8b
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fea8:df8b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1156 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:277602 (277.6 KB) TX bytes:97056 (97.0 KB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:da:65:a0
inet addr:192.168.33.11 Bcast:192.168.33.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feda:65a0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12461 errors:0 dropped:0 overruns:0 frame:0
TX packets:7004 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10059912 (10.0 MB) TX bytes:2671763 (2.6 MB)
enp0s9 Link encap:Ethernet HWaddr 08:00:27:62:47:ec
inet addr:192.168.44.12 Bcast:192.168.44.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe62:47ec/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:204 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:18654 (18.6 KB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1658 (1.6 KB) TX bytes:1658 (1.6 KB)
I'm trying to have a CentOS container with two network interfaces.
After going through the Docker docs and a bit of googling, I found this GitHub issue comment that specifies how to achieve this.
Following it, I created a new network (default type: bridge)
docker network create my-network
Inspecting the new network, I can see that Docker assigned it the subnet 172.18.0.0/16 and the gateway 172.18.0.1.
Then, when creating the container, I specifically attach the new network:
docker create -ti --privileged --net=my-network --mac-address 08:00:AA:AA:AA:FF <imageName>
Inside the container, I can check with ifconfig that the interface is indeed present, with that IP and MAC address:
eth0 Link encap:Ethernet HWaddr 08:00:AA:AA:AA:FF
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::a00:aaff:feaa:aaff/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:258 (258.0 b) TX bytes:258 (258.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
The problem comes when I connect the container to the default Docker network (the bridge network, backed by the docker0 interface):
docker network connect bridge <my-container>
Checking now the interfaces in the container:
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2941 (2.8 KiB) TX bytes:508 (508.0 b)
eth1 Link encap:Ethernet HWaddr 08:00:AA:AA:AA:FF
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::a00:aaff:feaa:aaff/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2941 (2.8 KiB) TX bytes:508 (508.0 b)
The interface for my new network gets moved to eth1, while the interface for the default network gets eth0.
Also, when checking the configuration file for the interface (/etc/sysconfig/network-scripts/ifcfg-eth0), I can see that the MAC address specified there differs from the one I manually set up when running the container (08:00:AA:AA:AA:FF):
DEVICE="eth0"
BOOTPROTO="dhcp"
HWADDR="52:54:00:85:11:33"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
MTU="1500"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="25016937-1ff9-40d7-b4c3-18e08af0f98d"
In /etc/sysconfig/network-scripts there is only the configuration file for eth0. The file for eth1 (the newly added interface) is missing.
Due to the requirements of the work I'm involved in, the first interface always has to be disabled and its MAC address has to be specifically set.
Any other network-related work must go through the newly attached NIC.
My question is:
How can I attach a new NIC to the container so that eth0 ends up with the desired MAC address?
Doing this at image level is also fine.
The goal is to have a running container with two NICs: eth0 and eth1.
eth0 will have a specific MAC address (let's say, AA:AA:AA:AA:AA:AA) and will be disabled. All networking will be done through eth1.
I will assume that the Docker image has a user with rights to execute ifdown and/or ifconfig.
eth0 is already present in the image and "talks" to the default Docker network, bridge (created when Docker was installed).
We have to modify the config file for eth0 in the image (/etc/sysconfig/network-scripts/ifcfg-eth0) to change its MAC address: the HWADDR field in the file.
After this, we have to commit the changes to a new image. Let's call it myImage.
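A sketch of that edit, shown here on a temporary copy of the file so the snippet runs anywhere; inside the real container you would target /etc/sysconfig/network-scripts/ifcfg-eth0 before committing:

```shell
# Rewrite the HWADDR field to the MAC we want eth0 to keep.
cfg=$(mktemp)                                          # demo copy of the file
printf 'DEVICE="eth0"\nHWADDR="52:54:00:85:11:33"\n' > "$cfg"
sed -i 's/^HWADDR=.*/HWADDR="AA:AA:AA:AA:AA:AA"/' "$cfg"
grep HWADDR "$cfg"   # -> HWADDR="AA:AA:AA:AA:AA:AA"
```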
Now, we have to create a new network for the second interface:
docker network create myNetwork
By default it is a bridge network (which is enough in my case).
Since the requirement is to have eth0 with a custom MAC address, we have to create the container without specifying a network; which will connect it to the default bridge network.
docker create -ti --mac-address=AA:AA:AA:AA:AA:AA --privileged --hostname=myHostnane --name=myContainer myImage
It is important to create the container with the --privileged switch so we can take down the eth0 interface.
Now, before starting the container, we connect it to the new network:
docker network connect myNetwork myContainer
Now the container has two interfaces: the original eth0 for the bridge network and the new eth1 for myNetwork network.
At this point, we can start the container:
docker start myContainer
and then execute the order to take down eth0:
docker exec myContainer /bin/bash -c "sudo ifdown eth0"
The interface must be taken down on the running container. The reason is that changes to the networking files only persist in the running container, so it's not possible to commit an image with the interface already down (an old limitation, but still relevant).
I am trying to deploy my Shiny App on a Virtual Machine (CentOS 6.7). I have configured a bridged connection (I think I did it correctly) for the Virtual Machine, and I have my Static IP Address for the web application. The sample application works on localhost:3838.
I am behind a corporate proxy, so I am using a proxy to connect to the internet. The proxy is set in http_proxy. I can also connect to the internet successfully on the Virtual Machine.
When I try to access <my_VM_static_IP_Address>:3838 the website does not connect.
I can ping both the host IP address and the guest (static) IP address successfully from another PC that is connected to the network.
br0 Link encap:Ethernet HWaddr 70:F3:95:03:B5:CC
inet addr:<My Static IP Address> Bcast:<my_broadcast_address> Mask:255.255.254.0
inet6 addr: fe80::72f3:95ff:fe03:b5cc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:69753 errors:0 dropped:0 overruns:0 frame:0
TX packets:9698 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:34003791 (32.4 MiB) TX bytes:843817 (824.0 KiB)
eth0 Link encap:Ethernet HWaddr 70:F3:95:03:B5:CC
inet6 addr: fe80::72f3:95ff:fe03:b5cc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:130187 errors:0 dropped:0 overruns:0 frame:0
TX packets:9704 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:45704171 (43.5 MiB) TX bytes:845299 (825.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:439 errors:0 dropped:0 overruns:0 frame:0
TX packets:439 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:814679 (795.5 KiB) TX bytes:814679 (795.5 KiB)
My host default gateway and subnet mask are the same on the VM Guest and the host.
Any support is greatly appreciated!
Investigate whether a firewall on the host prevents port 3838 from being accessible from the rest of the network. It seems like you have a Windows host, correct?
A quick and easy way to check is to telnet from another machine on the network to your guest VM's IP on port 3838. A blocked port will cause telnet to return an error quickly; if telnet sits at a blank screen, the port is open.
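If telnet is not installed, bash can make the same check with its /dev/tcp pseudo-device. A sketch (the helper name and the GUEST_IP placeholder are mine):

```shell
# port_check: print "open" if a TCP connect to host:port succeeds within
# 2 seconds, "closed" otherwise.
port_check() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo open
    else
        echo closed
    fi
}

port_check "$GUEST_IP" 3838   # substitute your VM's static IP
```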
I have configured a NAT connection on Centos 6.5 VM and it seems it does not connect to my NIC.
service network restart returns
Bringing up interface eth1: Error: No suitable device found: no device found for connection 'eth1'
ifconfig -a is shown below
eth2 Link encap:Ethernet HWaddr 00:0C:29:90:6C:31
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:90 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 Base address:0x2024
Well, your ifconfig shows that you have an eth2 device. My assumption is that you originally had the adapter bridged and then changed it to NAT; on CentOS 6, udev's persistent-net rules pin each MAC address to an interface name, so an adapter with a new MAC gets the next free name, here eth2.
In /etc/sysconfig/network-scripts/, create a file named ifcfg-eth2 and fill it so that the interface is configured automatically by the DHCP server:
DEVICE=eth2
BOOTPROTO=dhcp
ONBOOT=yes
The file does not need execute rights (chmod +x); the network scripts source it rather than execute it.
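For reference, the whole file can be created in one go with a heredoc. NET_DIR is my stand-in so the sketch runs anywhere; on the actual VM it would be /etc/sysconfig/network-scripts:

```shell
# Create the DHCP config for eth2 (DEVICE should match the interface name).
NET_DIR="${NET_DIR:-$(mktemp -d)}"
cat > "$NET_DIR/ifcfg-eth2" <<'EOF'
DEVICE=eth2
BOOTPROTO=dhcp
ONBOOT=yes
EOF
cat "$NET_DIR/ifcfg-eth2"
```

followed by service network restart to bring the interface up.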
I have a FreedomPop Ubee stick that I would like to connect to my BeagleBone Black (running Angstrom with a 3.2.0-54-generic kernel). After solving some issues with hotswapping (it's apparently not possible), I can see the interface in ifconfig. But when I try bringing it up, nothing happens:
root@beaglebone:~# ifconfig eth1 up
root@beaglebone:~# udhcpc eth1
udhcpc (v1.20.2) started
Sending discover...
Sending discover...
Sending discover...
Something else that's strange is that the interface initially has an address:
root@beaglebone:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1D:88:53:2F:52
inet addr:192.168.14.2 Bcast:192.168.14.255 Mask:255.255.255.0
inet6 addr: fe80::21d:88ff:fe53:2f52/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2542 (2.4 KiB) TX bytes:9062 (8.8 KiB)
But a few moments (< 1 minute) later, if I run the same command, eth1 no longer has an address, broadcast, etc.:
root@beaglebone:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1D:88:53:2F:52
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25 errors:0 dropped:0 overruns:0 frame:0
TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2730 (2.6 KiB) TX bytes:9240 (9.0 KiB)
Under no circumstance (before or after the address disappears in ifconfig) can I ever ping anything.
I have tried re-assigning the address, mask, etc., but nothing helps. Bringing the interface up or down does not help. I tried manually creating an interfaces file, and that didn't help either.
To solve this problem, I had to:
Add an inet dhcp interface in /etc/network/interfaces:
iface eth1 inet dhcp
Add the FreedomPop device as a nameserver in /etc/resolv.conf:
nameserver 192.168.14.1
Bring up the interface:
ifup eth1