WebRTC ICE candidates - networking

I set up RTCPeerConnection, but it only works locally (between two laptops on my wireless network). For other connections I see a black stream. I suspect the ICE candidates are not being gathered properly; they contain only local IPs:
RTCIceCandidate {sdpMLineIndex: 0, sdpMid: "", candidate: "a=candidate:2999745851 1 udp 2113937151 192.168.56.1 51411 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 0, sdpMid: "", candidate: "a=candidate:3366620645 1 udp 2113937151 192.168.0.17 44628 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 1, sdpMid: "", candidate: "a=candidate:2999745851 1 udp 2113937151 192.168.56.1 51411 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 1, sdpMid: "", candidate: "a=candidate:3366620645 1 udp 2113937151 192.168.0.17 44628 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 0, sdpMid: "", candidate: "a=candidate:4233069003 1 tcp 1509957375 192.168.56.1 0 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 0, sdpMid: "", candidate: "a=candidate:2250862869 1 tcp 1509957375 192.168.0.17 0 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 1, sdpMid: "", candidate: "a=candidate:4233069003 1 tcp 1509957375 192.168.56.1 0 typ host generation 0"}
RTCIceCandidate {sdpMLineIndex: 1, sdpMid: "", candidate: "a=candidate:2250862869 1 tcp 1509957375 192.168.0.17 0 typ host generation 0"}
Here is the iceServers config:
this.configuration = {
  'iceServers': [
    { 'url': 'stun:stun.l.google.com:19302' }
  ]
};
However, on another deployment machine this same configuration works for remote peers, and I receive candidates with a public IP.
EDIT
Actually running tests with yet another peer outputs the following:
handling offer from radu1
caching candidate from radu1 (×15; cached locally because the remote description is not received/set yet, and adding them earlier throws errors like: Illegal string...)
Set remote description from radu1
sdp: "v=0
o=- 7594479116751954142 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio video
a=msid-semantic: WMS iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1
m=audio 1 RTP/SAVPF 111 103 104 0 8 106 105 13 126
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:nFjsr4JB2b6hTc4K
a=ice-pwd:z3BUY0Mlga5JywRNw9lLGqeF
a=ice-options:google-ice
a=fingerprint:sha-256 64:76:B6:98:ED:FA:6D:D5:E2:40:B6:FE:98:00:29:F7:28:93:C5:6A:CF:2F:59:D2:B7:82:14:BF:38:FD:3B:83
a=setup:actpass
a=mid:audio
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=sendrecv
a=rtcp-mux
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:xGSOTjjxbfNVNAxoRxY6UFHTJY86bFnGqK1p23Tm
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10
a=rtpmap:103 ISAC/16000
a=rtpmap:104 ISAC/32000
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:106 CN/32000
a=rtpmap:105 CN/16000
a=rtpmap:13 CN/8000
a=rtpmap:126 telephone-event/8000
a=maxptime:60
a=ssrc:4260698723 cname:8jJISPnQEaP+YvYy
a=ssrc:4260698723 msid:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1 iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1a0
a=ssrc:4260698723 mslabel:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1
a=ssrc:4260698723 label:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1a0
m=video 1 RTP/SAVPF 100 116 117
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:nFjsr4JB2b6hTc4K
a=ice-pwd:z3BUY0Mlga5JywRNw9lLGqeF
a=ice-options:google-ice
a=fingerprint:sha-256 64:76:B6:98:ED:FA:6D:D5:E2:40:B6:FE:98:00:29:F7:28:93:C5:6A:CF:2F:59:D2:B7:82:14:BF:38:FD:3B:83
a=setup:actpass
a=mid:video
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=sendrecv
a=rtcp-mux
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:xGSOTjjxbfNVNAxoRxY6UFHTJY86bFnGqK1p23Tm
a=rtpmap:100 VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 goog-remb
a=rtpmap:116 red/90000
a=rtpmap:117 ulpfec/90000
a=ssrc:1805691906 cname:8jJISPnQEaP+YvYy
a=ssrc:1805691906 msid:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1 iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1v0
a=ssrc:1805691906 mslabel:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1
a=ssrc:1805691906 label:iuzaFLXbo6HCbnWGdobaYN2gSPQmAFKZQaP1v0
"
type: "offer"
RTC: adding stream from radu1
Sending answer to radu1
Set candidate from cache for radu1 (x 15)
RTCIceCandidate {sdpMLineIndex: 0, sdpMid: "", candidate: "a=candidate:826241329 1 udp 2113937151 169.254.159.173 52996 typ host generation 0"}
...
The above leaves peerconnection.iceConnectionState = 'checking'. Is this the right order of events for a callee?
1. Receive offer
2. Receive ICE candidates from the other peer, but cache them rather than adding them, because the setRemoteDescription callback has not fired yet
3. Remote description successfully set
4. Remote stream is received
5. Send answer
6. Add cached candidates
Note that this exact setup works between 2 laptops on my LAN; I can view remote streams. It just doesn't work across different networks: black screen and iceConnectionState = 'checking'.
What does that mean?
How can I solve/debug this problem?
Do I need to set up any other STUN/TURN servers?

Solved by properly setting up a STUN/TURN server. It seems some peers need a TURN server to relay traffic because STUN alone fails (typically when a peer sits behind a symmetric NAT).
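For reference, this is the general shape of an iceServers configuration that includes a TURN fallback, written here as plain Python data (the TURN hostname and credentials are placeholders, not a real deployment; note also that the WebRTC spec names the key urls, while older Chrome builds also accepted url as in the snippet above):

```python
# The shape of an RTCPeerConnection iceServers list with a TURN fallback.
# A STUN entry only helps a peer discover its public address; the TURN
# entry (which requires credentials) relays media when direct paths fail.
ice_servers = [
    {"urls": "stun:stun.l.google.com:19302"},
    {
        "urls": "turn:turn.example.org:3478",   # placeholder TURN server
        "username": "webrtc-user",              # placeholder credentials
        "credential": "webrtc-secret",
    },
]
```

In JavaScript the same structure is passed as the configuration object's iceServers member when constructing the RTCPeerConnection.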

Related

Getting TCP Retransmission instead of ACK on TUN device

I'm trying to implement a TCP stack over a TUN device in Linux, following RFC 793. By default, my program is in the LISTEN state and waits for a SYN packet to establish a connection. I use nc to send the SYN:
$ nc 192.168.20.99 20
My program responds with SYN, ACK, but nc never sends the final ACK. This is the flow:
# tshark -i tun0 -z flow,tcp,network
1 0.000000000 192.168.20.1 → 192.168.20.99 TCP 60 39284 → 20 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1691638570 TSecr=0 WS=128
2 0.000112185 192.168.20.99 → 192.168.20.1 TCP 40 20 → 39284 [SYN, ACK] Seq=0 Ack=1 Win=10 Len=0
3 1.001056784 192.168.20.1 → 192.168.20.99 TCP 60 [TCP Retransmission] [TCP Port numbers reused] 39284 → 20 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1691639571 TSecr=0 WS=128
|Time | 192.168.20.1 |
| | | 192.168.20.99 |
|0.000000000| SYN | |Seq = 0
| |(39284) ------------------> (20) |
|0.000112185| SYN, ACK | |Seq = 0 Ack = 1
| |(39284) <------------------ (20) |
|1.001056784| SYN | |Seq = 0
| |(39284) ------------------> (20) |
More info about my TCP header:
Frame 2: 40 bytes on wire (320 bits), 40 bytes captured (320 bits) on interface tun0, id 0
Raw packet data
Internet Protocol Version 4, Src: 192.168.20.99, Dst: 192.168.20.1
Transmission Control Protocol, Src Port: 20, Dst Port: 39310, Seq: 0, Ack: 1, Len: 0
Source Port: 20
Destination Port: 39310
[Stream index: 0]
[Conversation completeness: Incomplete, CLIENT_ESTABLISHED (3)]
[TCP Segment Len: 0]
Sequence Number: 0 (relative sequence number)
Sequence Number (raw): 0
[Next Sequence Number: 1 (relative sequence number)]
Acknowledgment Number: 1 (relative ack number)
Acknowledgment number (raw): 645383655
0101 .... = Header Length: 20 bytes (5)
Flags: 0x012 (SYN, ACK)
Window: 10
[Calculated window size: 10]
Checksum: 0x99b0 [unverified]
[Checksum Status: Unverified]
Urgent Pointer: 0
NOTE: I'm aware of the ISN prediction attack, but this is just a test, and 0 for the sequence number is just as random as any other number in this case.
UPDATE: This is the output of tcpdump, which says I'm calculating the checksum wrong:
# tcpdump -i tun0 -vv -n
...
IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40, bad cksum 16f3 (->911b)!)
192.168.20.99.20 > 192.168.20.1.39308: Flags [S.], cksum 0x9bb0 (incorrect -> 0x1822), seq 0, ack 274285560, win 10, length 0
...
Here is my checksum calculator (from RFC 1071):
uint16_t checksum(void *addr, int count)
{
    uint32_t sum = 0;
    uint16_t *ptr = addr;

    while (count > 1) {
        sum += *ptr++;
        count -= 2;
    }
    if (count > 0)
        sum += *(uint8_t *)ptr;

    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    return ~sum;
}
And I pass the pseudo-header concatenated with the TCP segment (in big-endian order) for the TCP checksum:
uint16_t tcp_checksum(struct tcp_header *tcph, uint8_t *pseudo_header)
{
    size_t len = PSEUDO_HEADER_SIZE + (tcph->data_offset * 4);
    uint8_t combination[len];

    memcpy(combination, pseudo_header, PSEUDO_HEADER_SIZE);
    dump_tcp_header(tcph, combination, PSEUDO_HEADER_SIZE);

    return checksum(combination, len / 2);
}
What am I doing wrong here?
Problem solved by calculating checksums via in_cksum.c from the tcpdump source code, which is a line-by-line implementation of RFC 1071. I also had to set IFF_NO_PI on the TUN device. For this use case, a TAP device is probably a better choice than a TUN device, since it exposes the EtherType.
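One detail worth flagging in the original code (an observation, not a confirmed root cause): checksum() consumes count as a number of bytes, subtracting 2 per 16-bit word, yet tcp_checksum() calls it with len / 2, so only half of the buffer would be summed. A minimal Python sketch of the RFC 1071 algorithm over the full byte count looks like this:

```python
def rfc1071_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words (RFC 1071)."""
    # Pad odd-length input with a trailing zero byte, as the RFC specifies.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):  # note: iterates over bytes, two at a time
        total += (data[i] << 8) | data[i + 1]
    # Fold the carries back into the low 16 bits.
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Worked example from RFC 1071 section 3 (bytes 00 01 f2 03 f4 f5 f6 f7):
assert rfc1071_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7") == 0x220D
```

For TCP, the input is the pseudo-header followed by the TCP header and payload, with the checksum field zeroed before summing.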

Openstack impossible to connect to Internet from instances or to instances from host

I installed OpenStack (all-in-one) using Kayobe, following all the configuration steps described here https://docs.openstack.org/kayobe/latest/installation.html and here https://docs.openstack.org/kayobe/latest/configuration/scenarios/all-in-one/overcloud.html#configuration
Everything seems fine (networks/floating IPs, flavors, images, instances), but I still cannot reach the Internet from my instances, nor reach the instances from the host.
I have 2 instances in my OpenStack, each with a floating IP in 192.168.213.xxx/24.
I can ping from both instances their floating IP and their local IP.
I can also ping from both instances the public interface of my public network (EXT).
Any idea or support would be welcome.
Thank you.
My configuration is this one:
On my host the network interfaces are the following:
brens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.213.36 netmask 255.255.255.0 broadcast 192.168.213.255
inet6 fe80::ec5d:5cff:fee4:e865 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:9f:e3 txqueuelen 1000 (Ethernet)
RX packets 975109 bytes 2110124730 (1.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 607990 bytes 243846445 (232.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:ec:7a:b0:c6 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::20c:29ff:fe26:9fe3 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:9f:e3 txqueuelen 1000 (Ethernet)
RX packets 1793551 bytes 2197893693 (2.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 706011 bytes 252766868 (241.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Boucle locale)
RX packets 48961080 bytes 56924742056 (53.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 48961080 bytes 56924742056 (53.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p-brens33-ovs: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::9040:7aff:fe13:333e prefixlen 64 scopeid 0x20<link>
ether 92:40:7a:13:33:3e txqueuelen 1000 (Ethernet)
RX packets 83424 bytes 9297973 (8.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 30 bytes 2056 (2.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p-brens33-phy: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::ccf3:e0ff:fe09:eeb prefixlen 64 scopeid 0x20<link>
ether ce:f3:e0:09:0e:eb txqueuelen 1000 (Ethernet)
RX packets 30 bytes 2056 (2.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 83424 bytes 9297973 (8.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qbr24033e67-96: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether 1a:08:1a:6e:23:fc txqueuelen 1000 (Ethernet)
RX packets 90 bytes 6932 (6.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qbr2aa4f6fc-67: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether 66:d4:18:69:3b:12 txqueuelen 1000 (Ethernet)
RX packets 79 bytes 5618 (5.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qvb24033e67-96: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::1808:1aff:fe6e:23fc prefixlen 64 scopeid 0x20<link>
ether 1a:08:1a:6e:23:fc txqueuelen 1000 (Ethernet)
RX packets 189 bytes 19262 (18.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 190 bytes 15978 (15.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qvb2aa4f6fc-67: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::64d4:18ff:fe69:3b12 prefixlen 64 scopeid 0x20<link>
ether 66:d4:18:69:3b:12 txqueuelen 1000 (Ethernet)
RX packets 1567 bytes 185484 (181.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1654 bytes 154488 (150.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qvo24033e67-96: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::c411:18ff:fe38:f94a prefixlen 64 scopeid 0x20<link>
ether c6:11:18:38:f9:4a txqueuelen 1000 (Ethernet)
RX packets 190 bytes 15978 (15.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 189 bytes 19262 (18.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qvo2aa4f6fc-67: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::8c85:c4ff:fe68:8145 prefixlen 64 scopeid 0x20<link>
ether 8e:85:c4:68:81:45 txqueuelen 1000 (Ethernet)
RX packets 1654 bytes 154488 (150.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1567 bytes 185484 (181.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tap24033e67-96: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::fc16:3eff:fee4:c722 prefixlen 64 scopeid 0x20<link>
ether fe:16:3e:e4:c7:22 txqueuelen 1000 (Ethernet)
RX packets 168 bytes 14342 (14.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 201 bytes 18968 (18.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tap2aa4f6fc-67: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::fc16:3eff:fef6:8f99 prefixlen 64 scopeid 0x20<link>
ether fe:16:3e:f6:8f:99 txqueuelen 1000 (Ethernet)
RX packets 1632 bytes 152852 (149.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1587 bytes 186940 (182.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tapbdd8182b-15: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::f816:3eff:fe61:78d5 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:61:78:d5 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24 bytes 1776 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host: VMware Workstation 16
My VM where my OpenStack instance is installed:
-----------------------------------------------
- Operating System: CentOS 8
- SELinux deactivated
- firewalld service inactive
A single network interface, set up in NAT mode with a static IP address:
IP: 192.168.213.36/24
DNS1: 192.168.213.2 (DNS from my VMware Workstation network)
DNS2: 8.8.8.8
GATEWAY: 192.168.213.2
My Openstack network configuration file:
FROM /etc/neutron directory:
##################
File: ml2_conf.ini
##################
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = flat,vlan,vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security,dns
[ml2_type_vlan]
network_vlan_ranges =
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vxlan]
vni_ranges = 1:1000
#################
File neutron.conf
#################
[DEFAULT]
debug = True
log_dir = /var/log/kolla/neutron
use_stderr = False
bind_host = 192.168.213.36
bind_port = 9696
api_paste_config = /usr/share/neutron/api-paste.ini
api_workers = 1
metadata_workers = 1
rpc_workers = 2
rpc_state_report_workers = 2
metadata_proxy_socket = /var/lib/neutron/kolla/metadata_proxy
interface_driver = openvswitch
allow_overlapping_ips = true
core_plugin = ml2
service_plugins = metering,router,vpnaas
transport_url = rabbit://openstack:uUmeMPEtFFu6IIQNoJ4Kdl026ZEG9TvlT5PdZnR9@192.168.213.36:5672//
dns_domain = sample.openstack.org.
external_dns_driver = designate
ipam_driver = internal
[nova]
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
region_name = Toulouse
project_name = service
username = nova
password = wxXLLtG3uRpUxExM3YJvCsofbmPMWFy0RbnZ92Wk
endpoint_type = internal
cafile =
[oslo_middleware]
enable_proxy_headers_parsing = True
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[database]
connection = mysql+pymysql://neutron:zRMXmINpPXB7QNBSROcks7MuWdzbYw3TIouRbI7m@192.168.213.120:3306/neutron
connection_recycle_time = 10
max_pool_size = 1
max_retries = -1
[keystone_authtoken]
www_authenticate_uri = http://192.168.213.120:5000
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = IujTCY3xxrnATYsc5BLw7MMDOjsvktHkaxnJvWux
cafile =
region_name = Toulouse
memcache_security_strategy = ENCRYPT
memcache_secret_key = vUHcQln5CqIRRbnABG6D4MO81Fb6jVEum4gPPMvY
memcached_servers = 192.168.213.36:11211
memcache_use_advanced_pool = True
[oslo_messaging_notifications]
transport_url = rabbit://openstack:uUmeMPEtFFu6IIQNoJ4Kdl026ZEG9TvlT5PdZnR9@192.168.213.36:5672//
driver = messagingv2
topics = notifications,notifications_designate
[designate]
url = http://192.168.213.120:9001/v2
auth_uri = http://192.168.213.120:5000
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = designate
password = ngYG2ZLmGwfdfWHgSq4u0fqZz1s4fGmIola9pfL9
allow_reverse_dns_lookup = True
ipv4_ptr_zone_prefix_size = 24
ipv6_ptr_zone_prefix_size = 116
cafile =
region_name = Toulouse
[placement]
auth_type = password
auth_url = http://192.168.213.120:35357
username = placement
password = fJmE67U1r91mM0n7PqvgI0JJ55qjbBxem1qahJuL
user_domain_name = Default
project_name = service
project_domain_name = Default
os_region_name = Toulouse
os_interface = internal
cafile =
region_name = Toulouse
[privsep]
helper_command = sudo neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper
################
File config.json
################
{
"command": "neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf ",
"config_files": [
{
"source": "/var/lib/kolla/config_files/neutron.conf",
"dest": "/etc/neutron/neutron.conf",
"owner": "neutron",
"perm": "0600"
},
{
"source": "/var/lib/kolla/config_files/neutron_vpnaas.conf",
"dest": "/etc/neutron/neutron_vpnaas.conf",
"owner": "neutron",
"perm": "0600"
},
{
"source": "/var/lib/kolla/config_files/ml2_conf.ini",
"dest": "/etc/neutron/plugins/ml2/ml2_conf.ini",
"owner": "neutron",
"perm": "0600"
}
],
"permissions": [
{
"path": "/var/log/kolla/neutron",
"owner": "neutron:neutron",
"recurse": true
}
]
}
From /etc/neutron-openvswitch-agent
################
File config.json
################
{
"command": "neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini",
"config_files": [
{
"source": "/var/lib/kolla/config_files/neutron.conf",
"dest": "/etc/neutron/neutron.conf",
"owner": "neutron",
"perm": "0600"
},
{
"source": "/var/lib/kolla/config_files/openvswitch_agent.ini",
"dest": "/etc/neutron/plugins/ml2/openvswitch_agent.ini",
"owner": "neutron",
"perm": "0600"
}
],
"permissions": [
{
"path": "/var/log/kolla/neutron",
"owner": "neutron:neutron",
"recurse": true
}
]
}
#################
File neutron.conf
#################
[DEFAULT]
debug = True
log_dir = /var/log/kolla/neutron
use_stderr = False
bind_host = 192.168.213.36
bind_port = 9696
api_paste_config = /usr/share/neutron/api-paste.ini
api_workers = 1
metadata_workers = 1
rpc_workers = 2
rpc_state_report_workers = 2
metadata_proxy_socket = /var/lib/neutron/kolla/metadata_proxy
interface_driver = openvswitch
allow_overlapping_ips = true
core_plugin = ml2
service_plugins = metering,router,vpnaas
transport_url = rabbit://openstack:uUmeMPEtFFu6IIQNoJ4Kdl026ZEG9TvlT5PdZnR9@192.168.213.36:5672//
dns_domain = sample.openstack.org.
external_dns_driver = designate
ipam_driver = internal
[nova]
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
region_name = Toulouse
project_name = service
username = nova
password = wxXLLtG3uRpUxExM3YJvCsofbmPMWFy0RbnZ92Wk
endpoint_type = internal
cafile =
[oslo_middleware]
enable_proxy_headers_parsing = True
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[database]
connection = mysql+pymysql://neutron:zRMXmINpPXB7QNBSROcks7MuWdzbYw3TIouRbI7m@192.168.213.120:3306/neutron
connection_recycle_time = 10
max_pool_size = 1
max_retries = -1
[keystone_authtoken]
www_authenticate_uri = http://192.168.213.120:5000
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = IujTCY3xxrnATYsc5BLw7MMDOjsvktHkaxnJvWux
cafile =
region_name = Toulouse
memcache_security_strategy = ENCRYPT
memcache_secret_key = vUHcQln5CqIRRbnABG6D4MO81Fb6jVEum4gPPMvY
memcached_servers = 192.168.213.36:11211
memcache_use_advanced_pool = True
[oslo_messaging_notifications]
transport_url = rabbit://openstack:uUmeMPEtFFu6IIQNoJ4Kdl026ZEG9TvlT5PdZnR9@192.168.213.36:5672//
driver = messagingv2
topics = notifications,notifications_designate
[designate]
url = http://192.168.213.120:9001/v2
auth_uri = http://192.168.213.120:5000
auth_url = http://192.168.213.120:35357
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = designate
password = ngYG2ZLmGwfdfWHgSq4u0fqZz1s4fGmIola9pfL9
allow_reverse_dns_lookup = True
ipv4_ptr_zone_prefix_size = 24
ipv6_ptr_zone_prefix_size = 116
cafile =
region_name = Toulouse
[placement]
auth_type = password
auth_url = http://192.168.213.120:35357
username = placement
password = fJmE67U1r91mM0n7PqvgI0JJ55qjbBxem1qahJuL
user_domain_name = Default
project_name = service
project_domain_name = Default
os_region_name = Toulouse
os_interface = internal
cafile =
region_name = Toulouse
[privsep]
helper_command = sudo neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper
##########################
File openvswitch_agent.ini
##########################
[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
bridge_mappings = physnet1:brens33-ovs
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
local_ip = 192.168.213.36
OpenStack security groups attached to the instances' network:
[root@controller0 kolla]# openstack security group show be8d2de7-6bd4-4702-9b9f-c3e0f96d451a
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2021-07-31T12:06:30Z |
| description | Project-00000 |
| id | be8d2de7-6bd4-4702-9b9f-c3e0f96d451a |
| location | cloud='', project.domain_id=, project.domain_name=, project.id='8d4b5b9558d64a04b34b797db97157bf', project.name=, region_name='Toulouse', zone= |
| name | Project-00000 |
| project_id | 8d4b5b9558d64a04b34b797db97157bf |
| revision_number | 6 |
| rules | created_at='2021-07-31T12:06:57Z', direction='ingress', ethertype='IPv4', id='5a896360-c712-4c93-bda9-be669662915e', port_range_max='22', port_range_min='22', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2021-07-31T12:06:57Z' |
| | created_at='2021-07-31T12:06:45Z', direction='ingress', ethertype='IPv6', id='7e8642ed-05d2-44a1-9034-216856d21492', protocol='ipv6-icmp', remote_ip_prefix='::/0', updated_at='2021-07-31T12:06:45Z' |
| | created_at='2021-07-31T12:06:31Z', direction='egress', ethertype='IPv6', id='85074747-5623-4142-86a3-045bf3b84ce6', updated_at='2021-07-31T12:06:31Z' |
| | created_at='2021-07-31T12:06:38Z', direction='ingress', ethertype='IPv4', id='86270591-3ad2-4feb-a278-5c2a5968fd27', protocol='icmp', remote_ip_prefix='0.0.0.0/0', updated_at='2021-07-31T12:06:38Z' |
| | created_at='2021-07-31T12:06:31Z', direction='egress', ethertype='IPv4', id='8fa6cb0e-7616-4b0e-9662-887dd012e777', updated_at='2021-07-31T12:06:31Z' |
| | created_at='2021-07-31T12:07:05Z', direction='ingress', ethertype='IPv6', id='acd0daab-a5b7-4eb3-9e2b-5c848f546ba6', port_range_max='22', port_range_min='22', protocol='tcp', remote_ip_prefix='::/0', updated_at='2021-07-31T12:07:05Z' |
| stateful | True |
| tags | [u'Project-00000', u'TrashProject'] |
| updated_at | 2021-07-31T12:07:05Z |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller0 kolla]# openstack security group show 88e4c862-eed0-4884-b205-defd8a50c67b
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2021-07-31T12:05:25Z |
| description | Default security group |
| id | 88e4c862-eed0-4884-b205-defd8a50c67b |
| location | cloud='', project.domain_id=, project.domain_name=, project.id='8d4b5b9558d64a04b34b797db97157bf', project.name=, region_name='Toulouse', zone= |
| name | default |
| project_id | 8d4b5b9558d64a04b34b797db97157bf |
| revision_number | 1 |
| rules | created_at='2021-07-31T12:05:26Z', direction='egress', ethertype='IPv4', id='26b1decc-0515-436a-a3a6-445e93c1797c', updated_at='2021-07-31T12:05:26Z' |
| | created_at='2021-07-31T12:05:26Z', direction='ingress', ethertype='IPv6', id='508e85c3-1bac-4407-903b-92be818e012b', remote_group_id='88e4c862-eed0-4884-b205-defd8a50c67b', updated_at='2021-07-31T12:05:26Z' |
| | created_at='2021-07-31T12:05:26Z', direction='egress', ethertype='IPv6', id='51abe797-cead-4790-86ee-3341af0253de', updated_at='2021-07-31T12:05:26Z' |
| | created_at='2021-07-31T12:05:26Z', direction='ingress', ethertype='IPv4', id='591c792d-c8c6-4282-8f6e-f66c9cc1c0f7', remote_group_id='88e4c862-eed0-4884-b205-defd8a50c67b', updated_at='2021-07-31T12:05:26Z' |
| stateful | True |
| tags | [] |
| updated_at | 2021-07-31T12:05:26Z |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Network topology (screenshot omitted)
One port of the network (screenshot omitted)
With tcpdump on the host where OpenStack (Docker) runs, I can see that the ICMP requests from the VM instances are received, but the instances never see any ICMP reply.
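One cheap sanity check in this topology (a suggestion, not a diagnosis): the floating-IP range and the hypervisor's NAT interface share 192.168.213.0/24, so the Neutron allocation pool must not collide with addresses already in use on that subnet (the host itself at .36, the VMware NAT gateway/DNS at .2). Python's ipaddress module makes such a check explicit; the pool bounds below are hypothetical:

```python
import ipaddress

# Addresses taken from the question; the allocation pool is a hypothetical example.
external_net = ipaddress.ip_network("192.168.213.0/24")
reserved = {ipaddress.ip_address("192.168.213.2"),    # VMware NAT gateway/DNS
            ipaddress.ip_address("192.168.213.36")}   # the all-in-one host
pool = [ipaddress.ip_address("192.168.213.%d" % i) for i in range(100, 200)]

# Every floating IP must sit inside the external network...
assert all(ip in external_net for ip in pool)
# ...and must not collide with addresses already in use on the subnet.
assert reserved.isdisjoint(pool)
```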

Firewalld port forward reports Connection refused

I want to forward port 9876 on 192.168.9.111 to 192.168.9.112:3333; it is configured as follows:
# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports: 9876/tcp
protocols:
masquerade: yes
forward-ports: port=9876:proto=tcp:toport=3333:toaddr=192.168.9.112
source-ports:
icmp-blocks:
rich rules:
# sysctl -a |grep forward |grep 4
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.docker0.mc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0
# the destination is listening:
# telnet 192.168.9.112 3333
Trying 192.168.9.112...
Connected to 192.168.9.112.
Escape character is '^]'.
# but the forward does not work
# telnet 192.168.9.111 9876
Trying 192.168.9.111...
telnet: connect to address 192.168.9.111: Connection refused
Any idea?
(From the comments: net.ipv4.ip_forward is already set to 1, and the client is not pinging; it connects via telnet.)
Turns out you also need to edit /etc/sysctl.conf to include:
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.ip_forward=1
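One gotcha when testing forwards like this (worth ruling out, though it may not apply here): firewalld's forward-ports rules rewrite packets as they arrive on the interface, so a telnet run on 192.168.9.111 itself never traverses them; test from a different machine. A small Python helper for such a check (the addresses come from the question, the helper itself is illustrative):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run from a third machine once the forward is in place:
#   can_connect("192.168.9.111", 9876)
```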

Malformed IP packet in Scapy

My goal is to develop a script that can send IP packets from any host to any other host in a different subnet. Right now everything seemingly works, except that my IP packet is malformed, so Scapy cannot send it properly.
def sendIPMessage(interfaceName, dst_ip, routerIP, message):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", port))
    src_addr = get_mac_address(interface=interfaceName)
    my_ip = get_ip_address(interfaceName)
    netmask = ipaddress.ip_address(dst_ip) in ipaddress.ip_network(my_ip)
    if netmask is True:  # if dst is in the same network
        arp_MAC = sendArpMesage(interfaceName, dst_ip)
    else:
        arp_MAC = sendArpMesage(interfaceName, routerIP)
    ether = Ether(src=str(src_addr), dst=str(arp_MAC))
    print(ether.show())
    size = len(message) + 14
    ip = IP(src=my_ip, dst=dst_ip, proto=17, ihl=5, len=size, ttl=5, chksum=0)
    # print(ip.show())
    payload = Raw(message)
    packet = ether / ip / payload
    del packet[IP].chksum
    packet = packet.__class__(bytes(packet))  # same as packet.show2()
    print(packet.show())
    success = send(packet)
    if success is not None:
        print(success.show)
    else:
        print("success is None")
Here is the show() information
Begin emission:
*Finished sending 1 packets.
Received 1 packets, got 1 answers, remaining 0 packets
###[ Ethernet ]###
dst = 4e:98:22:86:f6:75
src = 00:00:00:00:00:11
type = LOOP
None
###[ Ethernet ]###
dst = 4e:98:22:86:f6:75
src = 00:00:00:00:00:11
type = IPv4
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 28
id = 1
flags =
frag = 0
ttl = 5
proto = udp
chksum = 0xe9c2
src = 192.168.1.101
dst = 10.0.0.1
\options \
###[ UDP ]###
sport = 21608
dport = 26995
len = 8297
chksum = 0x7320
###[ Padding ]###
load = 'a test'
None
.
Sent 1 packets.
success is None
And this is what Wireshark currently shows (screenshot omitted):
I am not sure if the problem is that the checksum values do not align, but any help creating this packet would be appreciated.

I don't get an HTTP answer with the sr function, just an ACK

I am trying to send an HTTP GET request to google.com, but the answer I get is an ACK and not the HTML file. Here is the code:
def Make_Get():
    synR = IP(dst='www.google.com', ttl=64)/TCP(dport=80, sport=randint(1024, 65535), flags='S')
    synAckAN = sr1(synR)
    req = (IP(dst='www.google.com') / TCP(dport=80, sport=synAckAN[TCP].dport, seq=synAckAN[TCP].ack, ack=synAckAN[TCP].seq + 1, flags='A')/"GET / HTTP/1.0 \n\n")
    ans, a = sr(req)
    return ans
and these are the two packets I got in return from this function:
###[ IP ]###
version = 4
ihl = None
tos = 0x0
len = None
id = 1
flags =
frag = 0
ttl = 64
proto = tcp
chksum = None
src = 192.168.233.128
dst = 216.58.214.100
\options \
###[ TCP ]###
sport = 35534
dport = http
seq = 1
ack = 1964930533
dataofs = None
reserved = 0
flags = A
window = 8192
chksum = None
urgptr = 0
options = {}
###[ Raw ]###
load = 'GET / HTTP/1.0 \n\n'
None
###[ IP ]###
version = 4L
ihl = 5L
tos = 0x0
len = 40
id = 32226
flags =
frag = 0L
ttl = 128
proto = tcp
chksum = 0x6425
src = 216.58.214.100
dst = 192.168.233.128
\options \
###[ TCP ]###
sport = http
dport = 35534
seq = 1964930533
ack = 18
dataofs = 5L
reserved = 0L
flags = A
window = 64240
chksum = 0xe5e6
urgptr = 0
options = {}
###[ Padding ]###
load = '\x00\x00\x00\x00\x00\x00'
None
When I sniffed the traffic while I sent this packet, I got this:
###[ Ethernet ]###
dst= 00:0c:29:bb:8e:79
src= 00:50:56:e9:b8:b1
type= 0x800
###[ IP ]###
version= 4L
ihl= 5L
tos= 0x0
len= 517
id= 32136
flags=
frag= 0L
ttl= 128
proto= tcp
chksum= 0x5004
src= 172.217.20.100
dst= 192.168.233.128
\options\
###[ TCP ]###
sport= http
dport= 1928
seq= 1828330545
ack= 18
dataofs= 5L
reserved= 0L
flags= FPA
window= 64240
chksum= 0x8b5f
urgptr= 0
options= []
###[ HTTP ]###
###[ HTTP Response ]###
Status-Line= u'HTTP/1.0 302 Found'
Accept-Ranges= None
Age= None
E-Tag= None
Location= u'http://www.google.co.il/?gfe_rd=cr&ei=9fiTV6P6FuWg8weei7rQDA'
Proxy-Authenticate= None
Retry-After= None
Server= None
Vary= None
WWW-Authenticate= None
Cache-Control= u'private'
Connection= None
Date= u'Sat, 23 Jul 2016 23:08:37 GMT'
Pragma= None
Trailer= None
Transfer-Encoding= None
Upgrade= None
Via= None
Warning= None
Keep-Alive= None
Allow= None
Content-Encoding= None
Content-Language= None
Content-Length= u'261'
Content-Location= None
Content-MD5= None
Content-Range= None
Content-Type= u'text/html; charset=UTF-8'
Expires= None
Last-Modified= None
Headers= u'Date: Sat, 23 Jul 2016 23:08:37 GMT\r\nContent-Length: 261\r\nContent-Type: text/html; charset=UTF-8\r\nLocation: http://www.google.co.il/?gfe_rd=cr&ei=9fiTV6P6FuWg8weei7rQDA\r\nCache-Control: private'
Additional-Headers= None
###[ Raw ]###
load= '<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">\n<TITLE>302 Moved</TITLE></HEAD><BODY>\n<H1>302 Moved</H1>\nThe document has moved\nhere.\r\n</BODY></HTML>\r\n'
As you can see, the last layer of this one contains the code I need.
My question is: why don't I get that packet with sr(), and how can I obtain it to collect the HTML code?
EDIT:
The call to the function:
print Make_Get('www.google.com')[0][Raw]
The function:
def Make_Get(ipp):
    ip = DNS_Req(ipp)
    synR = IP(dst=ip)/TCP(dport=80, sport=randint(1024, 65535), flags='S')
    syn_ack = sr1(synR)
    getStr = "GET / HTTP/1.1\r\nHost: {}\r\n\r\n".format(ip)
    request = (IP(dst=ip) / TCP(dport=80, sport=syn_ack[TCP].dport, seq=syn_ack[TCP].ack, ack=syn_ack[TCP].seq + 1, flags='A')/getStr)
    an = sr(request)
    return an
The results:
Begin emission:
.Finished to send 1 packets.
*
Received 2 packets, got 1 answers, remaining 0 packets
Begin emission:
*Finished to send 1 packets.
Received 1 packets, got 1 answers, remaining 0 packets
[]
First, in HTTP, a correct newline is "\r\n", not "\n".
Second, is there any reason why you use HTTP/1.0 and not HTTP/1.1? If not, you should change your request to:
GET / HTTP/1.1\r\n
Host: www.google.com\r\n
\r\n
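To make those \r\n line endings concrete, here is the request built as a plain Python string; printing its repr() makes the CRLFs visible (no Scapy needed for this part):

```python
host = "www.google.com"
request = "GET / HTTP/1.1\r\nHost: {}\r\n\r\n".format(host)
print(repr(request))  # repr shows the \r\n pairs explicitly
```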
Third, the ACK you are getting is usually sent by the server before the actual HTTP response, to acknowledge your request faster. A second segment is then sent with the HTTP response; that is the one missing from your first show() output.
Have a look here.
To catch this segment, you can use the sr() function with its timeout and multi parameters:
ans, unans = sr(request, timeout=2, multi=True)
for snd, rcv in ans:
    if rcv.haslayer(Raw):
        print rcv[Raw]
    print("-----------")  # just a delimiter
timeout ensures that sr() eventually stops listening (the value 2 is arbitrary).
multi means "accept multiple answers for the same stimulus"; without it, sr() stops sniffing after the first answer to the request sent.
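As an aside: if the goal is just to retrieve the HTML rather than to craft raw segments, a plain-socket sketch (my own, Python 3, untested against live hosts) sidesteps these issues, since the kernel's TCP stack does the handshake and reassembly. With Scapy's hand-rolled handshake, the kernel, which never saw your SYN, may also RST the connection unless you drop outgoing RSTs with an iptables rule.

```python
import socket

def build_request(host, path="/"):
    # CRLF line endings and a Host header, as HTTP/1.1 requires.
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n".format(path, host)).encode("ascii")

def http_get(host, path="/", timeout=5):
    # The kernel handles SYN/ACK, retransmission and segment
    # reassembly, so multi-segment responses arrive intact.
    with socket.create_connection((host, 80), timeout=timeout) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. html = http_get("www.google.com")
```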

Resources