I'm setting up VPC-only mode in SageMaker Studio. Based on the documentation here, https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-and-internet-access.html, it is stated that "This is required for connectivity between the JupyterServer app and the KernelGateway apps. You must allow access to at least ports in the range 8192-65535." I have set up the following security group. I understand that these ports need to be open for connectivity between the JupyterServer and the KernelGateway apps, so what should one use for CidrIp instead of 0.0.0.0/0?
SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: "Open tcp"
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8192
        ToPort: 65535
        CidrIp: 0.0.0.0/0
    SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 8192
        ToPort: 65535
        CidrIp: 0.0.0.0/0
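One approach I'm considering (this is my own guess, not something the documentation snippet above spells out) is to drop the CIDR entirely and use a self-referencing rule, so that only resources attached to this same security group, i.e. the Studio apps themselves, can reach ports 8192-65535. A rough sketch, keeping the names from the template above and adding a separate ingress resource (the SelfIngress name is arbitrary) to avoid the circular reference:

SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: "Open tcp"
    VpcId: !Ref VPC

SelfIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref SecurityGroup
    IpProtocol: tcp
    FromPort: 8192
    ToPort: 65535
    # Self-reference instead of 0.0.0.0/0
    SourceSecurityGroupId: !Ref SecurityGroup

Is that what is intended here, or is there a recommended CIDR?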
Background
I have a strange use-case where my VPN cannot be on any of the private subnets, but also cannot use a TAP interface. The machine will be moving through different subnets and requires access to the entire private address space by design; a single blocked IP would be considered a failure of the design.
So, these are all off limits:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
In searching for a solution, I came across RFC 5735, which defines:
192.0.2.0/24 TEST-NET-1
198.51.100.0/24 TEST-NET-2
203.0.113.0/24 TEST-NET-3
As:
For use in documentation and example code. It is often used in conjunction with domain names
example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Which was a "jackpot" moment for me and my use case.
Config
I configured an OpenVPN server as such:
local 0.0.0.0
port 443
proto tcp
dev tun
topology subnet
server 203.0.113.0 255.255.255.0 # TEST-NET-3 RFC 5735
push "route 203.0.113.0 255.255.255.0"
...[Snip]...
With Client:
client
nobind
dev tun
proto tcp
...[Snip]...
And ufw rules:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT
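For context, those NAT rules live at the top of /etc/ufw/before.rules, and ufw also needs forwarding enabled; a sketch of the related pieces (the file paths are the stock Ubuntu ones, and ens160 is the egress interface from the rule above):

# /etc/ufw/before.rules -- the NAT block needs a *nat table header
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT

# /etc/default/ufw -- let ufw forward packets between interfaces
DEFAULT_FORWARD_POLICY="ACCEPT"

# /etc/ufw/sysctl.conf -- enable kernel IP forwarding
net/ipv4/ip_forward=1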
However, upon running I get "/sbin/ip route add 203.0.113.0/24 via 203.0.113.1 RTNETLINK answers: File exists" in the error logs, while the VPN completes the rest of its connection successfully.
No connection
Running the following commands:
Server: sudo python3 -m http.server 80
Client: curl -X GET / 203.0.113.1
Results in:
curl: (28) Failed to connect to 203.0.113.1 port 80: Connection timed out
I have tried:
/sbin/ip route replace 203.0.113.0/24 dev tun 0 on client and server.
/sbin/ip route change 203.0.113.0/24 dev tun 0 on client and server.
Adding route 203.0.113.0 255.255.255.0 to the server.
Adding push "route 203.0.113.0 255.255.255.0 127.0.0.1" to server
And none of it seems to work.
Does anyone have any idea how I can force the client to push this traffic over the VPN to my server, instead of to the public IP?
This does actually work!
Just don't forget to allow the connections in your firewall. I fixed my config with:
sudo ufw allow in on tun0
However, 198.18.0.0/15 and 100.64.0.0/10, defined as benchmarking and shared address space respectively, may be more appropriate choices, since being able to forward TEST-NET addresses may be considered a bug.
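If you do switch ranges, only the addresses in the server config need to change; a sketch using a /24 carved from the benchmarking range:

# Sketch: same server.conf as in the question, but with a /24 from
# 198.18.0.0/15 (RFC 2544 benchmarking range) instead of TEST-NET-3.
server 198.18.0.0 255.255.255.0
push "route 198.18.0.0 255.255.255.0"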
I have 2 VM instances using the same network (default) and the same subnet (default), but in 2 different zones. I accessed one VM and then pinged the other, but there was no response. What do I have to do to make them communicate? Below is the information about the system:
Network:
- Name: default
Subnet:
- Name: default
- Network: default
- Ip range: 10.148.0.0/20
- Region: asia-southeast1
VM1:
- Subnet: default
- IP: 10.148.0.54
- Zone: asia-southeast1-c
VM2:
- Subnet: default
- IP: 10.148.0.56
- Zone: asia-southeast1-b
Please help me! Thank you!
First check if the ARP is resolved for the remote VM you want to ping.
Also check whether there is a firewall rule on the default network blocking the communication between the VMs.
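If that rule is missing (the pre-populated default-allow-icmp rule may have been deleted), a sketch of recreating it with gcloud (the rule name is arbitrary, and the source range is the subnet from the question):

gcloud compute firewall-rules create allow-internal-icmp \
    --network=default \
    --allow=icmp \
    --source-ranges=10.148.0.0/20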
I used Vagrant to install KVM, following this guide:
https://www.cyberciti.biz/faq/installing-kvm-on-ubuntu-16-04-lts-server/
Here I changed the bridged networking, following "Step 3: Configure bridged networking":
$ sudo cp /etc/network/interfaces /etc/network/interfaces.bakup-1-july-2016
$ sudo vi /etc/network/interfaces
auto br1
iface br0 inet static
    address 10.18.44.26
    netmask 255.255.255.192
    broadcast 10.18.44.63
    dns-nameservers 10.0.80.11 10.0.80.12
    # set static route for LAN
    post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.18.44.1
    post-up route add -net 161.26.0.0 netmask 255.255.0.0 gw 10.18.44.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# br1 setup with static wan IPv4 with ISP router as a default gateway
auto br1
iface br1 inet static
    address 208.43.222.51
    netmask 255.255.255.248
    broadcast 208.43.222.55
    gateway 208.43.222.49
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
$ sudo systemctl restart networking
When I restart Vagrant, it always stops here:
$ vagrant up
==> default: Attempting graceful shutdown of VM...
default: Guest communication could not be established! This is usually because
default: SSH is not running, the authentication information was changed,
default: or some other networking issue. Vagrant will force halt, if
default: capable.
==> default: Forcing shutdown of VM...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
How can I fix it?
If you're using Vagrant, you don't need to make those modifications; you can just change the network configuration in your Vagrantfile (see Public Networks):
Vagrant.configure("2") do |config|
...
config.vm.network "public_network"
...
end
Vagrant will take care of updating the right configuration file.
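If you want the VM bridged onto a specific host interface rather than being prompted for one, a sketch (the bridge name "eth0" is a placeholder for your host NIC):

Vagrant.configure("2") do |config|
  # Bridge onto a named host interface instead of letting Vagrant ask.
  config.vm.network "public_network", bridge: "eth0"
end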
I need help understanding my networking logs, which relate to docker-compose networking.
I'm ssh'd into a VM, and I have two projects with docker-compose. The first is launched simply with docker-compose up. When I try to launch the second, my ssh session freezes, and I can no longer ssh into the VM. After lots of trial and error, and after reading this, I tried appending the following to my 2nd project's docker-compose.yml file:
networks:
  default:
    external:
      name: abcdef_default
where abcdef_default is the name of the network created by docker-compose up of the 1st project. With this, the docker-compose up on the 2nd project doesn't kick me out of the ssh session.
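For reference, the network name can be looked up by listing the Compose-created bridge networks; Compose normally names it <project-directory>_default:

docker network ls --filter driver=bridge   # abcdef_default should appear here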
I tailed the logs in /var/log/*.log, and here's the output with the networks section in the docker-compose.yml file (the timestamp prefix, e.g. Jan 19 09:13:42 hostname kernel: [420096.305357], has been stripped from each line):
aufs au_opts_verify:1597:dockerd[13813]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth6a84537 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth6a84537: link is not ready
eth0: renamed from veth2480623
IPv6: ADDRCONF(NETDEV_CHANGE): veth6a84537: link becomes ready
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
aufs au_opts_verify:1597:dockerd[25317]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth1a3c1e3 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth1a3c1e3: link is not ready
br-fe0deb0149df: port 22(veth1a3c1e3) entered forwarding state
br-fe0deb0149df: port 22(veth1a3c1e3) entered forwarding state
eth0: renamed from veth54e576d
IPv6: ADDRCONF(NETDEV_CHANGE): veth1a3c1e3: link becomes ready
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
veth54e576d: renamed from eth0
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
device veth1a3c1e3 left promiscuous mode
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
and here's the output without the networks section (i.e. when I get kicked out of the ssh session):
IPv6: ADDRCONF(NETDEV_UP): br-55349b03453a: link is not ready
aufs au_opts_verify:1597:dockerd[26982]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[26982]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[3051]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth7a1bcde entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth7a1bcde: link is not ready
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
eth0: renamed from veth5d8a2ea
IPv6: ADDRCONF(NETDEV_CHANGE): veth7a1bcde: link becomes ready
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
IPv6: ADDRCONF(NETDEV_CHANGE): br-55349b03453a: link becomes ready
aufs au_opts_verify:1597:dockerd[13814]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[13814]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[13922]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth3253bd4 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth3253bd4: link is not ready
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered disabled state
eth0: renamed from veth9c8aaa3
IPv6: ADDRCONF(NETDEV_CHANGE): veth3253bd4: link becomes ready
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered disabled state
veth9c8aaa3: renamed from eth0
br-55349b03453a: port 2(veth3253bd4) entered disabled state
device veth3253bd4 left promiscuous mode
br-55349b03453a: port 2(veth3253bd4) entered disabled state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
veth5d8a2ea: renamed from eth0
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
device veth7a1bcde left promiscuous mode
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
I don't really understand how to read these logs.
Here is my ifconfig also.
Can someone help me read the logs and figure out what the problem is?
Diagnosis
Our team is using AWS EC2 instances running Ubuntu 18.04 as devservers. We recently got reports that docker-compose broke SSH connections: even after restarting, the devservers were still inaccessible. So I started investigating.
I was able to rule out docker-compose as the cause by reproducing the issue using docker alone.
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
aca5884d60f146cef81ac55c8cccd231a43f40927d645168642d9b28c5e009a6
ubuntu@ip-172-31-115-116:~$ docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
my-bridge-network
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
f0a7a06a9627bc2de00eb60091a92010451690626d95e077f622f3058cc3a07c
ubuntu@ip-172-31-115-116:~$ docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
my-bridge-network
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
Connection reset by 172.31.115.116 port 22
Then the root cause occurred to me.
Root cause
Our docker-compose files use the bridge network mode, which creates a new bridge network by default. When docker-compose down or docker network prune is run, the bridge network is torn down, and the next docker-compose run or docker network create creates a new bridge network.
The default IP range for the docker0 bridge adapter is 172.17.0.0/16.
When I first ran the docker network create -d bridge my-bridge-network command, it created a new bridge adapter for 172.18.0.0/16.
The second bridge adapter was created for 172.19.0.0/16.
Naturally, the 3rd bridge adapter was created for 172.20.0.0/16. However, that is our Engineering VPN IP range. The overlap therefore made the server unable to communicate with our laptops.
Solutions
The solution is to make sure new docker bridge networks will skip our VPN IP range.
Temp solution
If we add the IP ranges to be skipped to the system route table, Docker will automatically avoid them. Therefore, we can run the command below whenever the devserver gets rebooted.
sudo route add -net [our VPN IP range] netmask 255.255.0.0 gw [our gateway]
This solution is imperfect in that the new routes are discarded after the machine restarts.
Main solution
We should permanently apply the route changes to all devservers.
echo " routes:" | sudo tee -a /etc/netplan/50-cloud-init.yaml
echo " - to: [our VPN IP range]" | sudo tee -a /etc/netplan/50-cloud-init.yaml
echo " via: [our gateway]" | sudo tee -a /etc/netplan/50-cloud-init.yaml
sudo netplan apply
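For clarity, this is roughly how those appended keys need to nest inside 50-cloud-init.yaml (indentation must line up with the existing interface stanza; "eth0" and the dhcp4 line are placeholders for whatever cloud-init already configured, and the bracketed values remain placeholders):

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true   # keep whatever is already in the file
            routes:
                - to: [our VPN IP range]
                  via: [our gateway]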
Docker IP changes
We also plan to modify the Docker default-address-pools setting to redefine the Docker IP ranges. Refer to https://github.com/docker/compose/issues/4336#issuecomment-457326123. I would say modifying /etc/docker/daemon.json is the better approach.
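A sketch of that daemon.json change (the pools below are only examples, chosen to stay clear of a 172.20.0.0/16 VPN range; restart the Docker daemon afterwards with sudo systemctl restart docker):

{
    "default-address-pools": [
        { "base": "172.18.0.0/16", "size": 24 },
        { "base": "172.19.0.0/16", "size": 24 }
    ]
}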
br-xxxxxxx are Docker's bridge interfaces and vethxxxxxxx are the virtual interfaces of your containers. Docker uses those veth interfaces, but you do not interact with them directly; they use an IPv6 address and don't have an IPv4 one. Docker can't create NAT interfaces; it can only create bridges and veths with IPv6 for the containers. You can link your bridge to any physical or virtual interface of your host.
So it works like this:
eth0 (your interface, or a v-interface if you want) ↔ br-xxxxx (Docker bridge) ↔ vethxxxxx (v-interface of your container)
That's all I can say. I'm not sure anyone else will answer; there aren't many Docker experts around, so I'm giving you all the information I can to help you understand your logs.
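If you want to see that chain on the host, you can list which veth interfaces are attached to a given bridge (br-fe0deb0149df is the bridge name from your logs):

ip link show master br-fe0deb0149df   # veth interfaces enslaved to the Docker bridge
brctl show br-fe0deb0149df            # same information via bridge-utils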
I finally ended up running docker network ls. The output was a list of more than 15 networks, all of them very old. I ran docker ps to make sure that nothing related to these networks was still running. One container was indeed still running (redis), and it was on a network called bridge. I stopped the container. Then I went through all the networks with docker network rm <network name> until I was left with 4 networks: bridge, host, none, and the only network that was still in use. Then I could start new networks with docker-compose up again as usual.
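Roughly, that cleanup boils down to the following (a sketch of the steps above; container and network names will differ):

docker network ls        # list all networks, old ones included
docker ps                # check whether any container still uses them
docker stop <container>  # stop the leftover container (redis in my case)
docker network prune     # remove every custom network with no containers attached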
I had the same issue; I solved it by setting the network_mode option in docker-compose (see the docs here; the solution came from this thread):
services:
  my_service:
    image: ...
    network_mode: "host"
Context: I have set up a demo cloud on my laptop using VirtualBox with two virtual machines: one hosts the client and the other the server. I created a small instance using the server; the running instance is TinyLinux.
Problem: How do I send data to that instance and store it there?
Some pointers would be very helpful.
Well, with libvirt you have several options for how to do the networking. The default is to use NAT. In that case libvirt creates a bridge, plus a virtual NIC for every virtual NIC configured this way:
$ brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.525400512fc8 yes virbr0-nic
vnet0
Then it sets up iptables rules to NAT (masquerade) the packets leaving that bridge:
Chain POSTROUTING (policy ACCEPT 19309 packets, 1272K bytes)
pkts bytes target prot opt in out source destination
8 416 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
216 22030 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
11 460 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24
enables forwarding:
# cat /proc/sys/net/ipv4/ip_forward
1
and spawns a DHCP server (dnsmasq is both DHCP and DNS in one):
ps aux | grep dnsmasq
nobody 1334 0.0 0.0 13144 568 ? S Feb06 0:00 \
/sbin/dnsmasq --strict-order --local=// --domain-needed \
--pid-file=... --conf-file= --except-interface lo --bind-dynamic \
--interface virbr0 --dhcp-range 192.168.122.2,192.168.122.254 \
--dhcp-leasefile=.../default.leases --dhcp-lease-max=253 --dhcp-no-override
If there were two virtual network interfaces (two machines, each with one NIC on the same network), there would be two NICs in that bridge. The machines get their addresses in the range 192.168.122.2-254 from the dnsmasq DHCP server. So if you know those addresses, you should be able to connect from one VM to the other, as both are on the same broadcast domain (connected by the bridge). To the outside of your computer, the machines all appear as "one IP address".
The more "advanced" option is to use bridged networking, which again puts the virtual interfaces into one bridge, but it puts some physical device there as well, so the machines appear as if they were several machines connected to a switch...
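For reference, the NAT setup described above corresponds to libvirt's stock "default" network definition, which virsh net-dumpxml default prints roughly as follows (addresses can differ on your host):

<!-- Sketch: typical output of `virsh net-dumpxml default` -->
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>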
I usually bind a web server to the gateway interface the VMs use to NAT with the physical host.
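For example (a sketch; 192.168.122.1 is the default libvirt gateway address shown above, and the port is arbitrary):

python3 -m http.server 8000 --bind 192.168.122.1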