netfilter_queue IPv4 optional header removal - TCP

I'm implementing a netfilter_queue-based user program that deletes the IPv4 optional header 'Time Stamp'.
ping works well with this program, because it uses ICMP transmission.
But TCP-based applications don't work. I've checked with Wireshark, and the program deletes the timestamp correctly. However, the TCP-based application doesn't send an ACK for that packet, and the remote server retransmits the same packet indefinitely.
Is there any missing procedure for TCP packet processing? I modified only the IPv4 header part. Why doesn't TCP transmission work at all?
My main code is:
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    unsigned int timestamp = 0;
    bool ptype = true;   // stays true until a timestamp option is found
    int pnow = 20;       // current offset: options start after the 20-byte header
    int plast = 20;
    int ihl;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    unsigned char *rawBuff = NULL;
    int len;
    len = nfq_get_payload(nfa, &rawBuff);
    if (len < 0) printf("Failed to get payload");
    struct pkt_buff *pkBuff = pktb_alloc(AF_INET, rawBuff, len, 0x20);
    struct iphdr *ip = nfq_ip_get_hdr(pkBuff);
    ihl = ip->ihl;
    uint8_t *buff = NULL;
    if ((ip->daddr != 0x0101007f) && (ip->daddr != 0x0100007f) &&
        (ip->daddr != 0x0100A9C0) && (ip->saddr != 0x0100A9C0)) { // filter out DNS
        if (ip->version == 4) {
            if (ihl != 5) { // the IPv4 header is longer than the default 20 bytes
                buff = pktb_data(pkBuff); // packet buffer
                plast = ihl * 4;
                while (pnow != plast) {
                    if (buff[pnow] == 0x44) { // timestamp option type
                        ptype = false;
                        break;
                    } else {
                        if (buff[pnow + 1] == 0)
                            pnow = pnow + 4;
                        else
                            pnow = pnow + buff[pnow + 1]; // skip by option length
                    }
                }
            }
            if (!ptype) {
                timestamp = buff[pnow + 4] << 24 | buff[pnow + 5] << 16 |
                            buff[pnow + 6] << 8  | buff[pnow + 7];
                if (timestamp > 100000) { // if the TS is big, delete it
                    ip->ihl -= 2;         // 8 bytes = two 32-bit words
                    nfq_ip_mangle(pkBuff, pnow, 0, 8, "", 0);
                }
            }
        }
    }
    nfq_ip_set_checksum(ip);
    if (nfq_ip_set_transport_header(pkBuff, ip) < 0)
        printf("Failed to set transport header");
    int result = 0;
    result = nfq_set_verdict(qh, ntohl(ph->packet_id), NF_ACCEPT,
                             pktb_len(pkBuff), pktb_data(pkBuff));
    pktb_free(pkBuff);
    return result;
}
The iptables setup is:
sudo iptables -t mangle -A PREROUTING -p all -j NFQUEUE --queue-num 0
sudo iptables -t mangle -A POSTROUTING -p all -j NFQUEUE --queue-num 0

It seems that netfilter_queue is malfunctioning. After debugging the kernel, I identified that the skbuff's network_header pointer is not updated even though I changed netfilter_queue's equivalent pointer, so the mangled packet is dropped by the packet-length checking code.

Shouldn't the TCP header be placed right after the IP header?
0...............20.........28..........48..........
+---------------+----------+-----------+-----------+
|   IP HEADER   |IP OPTIONS| TCP HEADER|TCP OPTIONS|
+---------------+----------+-----------+-----------+
So by decreasing the ihl value you are creating a gap between the IP header and the TCP header: the stack now expects the TCP header at offset 20, but it is still sitting at offset 28. You need to use memmove to pull the rest of the packet forward, along with decreasing the ihl value.
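A minimal sketch of that idea, reusing the names from the question's callback (buff pointing at the IP header, pnow the offset of the 8-byte timestamp option, len the length returned by nfq_get_payload()); treat it as an illustration, not a drop-in fix:

/* Close the 8-byte gap left by the removed timestamp option so the
 * TCP header directly follows the IP header again. */
memmove(buff + pnow, buff + pnow + 8, len - pnow - 8);
ip->ihl -= 2;                                /* 8 bytes = two 32-bit words */
ip->tot_len = htons(ntohs(ip->tot_len) - 8); /* shrink the IP total length */
nfq_ip_set_checksum(ip);                     /* recompute the IP checksum  */
/* ...and pass len - 8, not the original length, to nfq_set_verdict(). */

The TCP checksum should not need recomputing: its pseudo-header uses the TCP segment length, which is unchanged when IP options are removed.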

Related

ESP8266 tcp recv returning errno 11 (EAGAIN) when handling large amounts of data

I am running ESP8266_RTOS_SDK_3.4 for an app that uses TCP to talk to a private port on a local server. In normal operation, it uploads large amounts of data to the server and receives acknowledgements that are usually less than 200 bytes: this always works fine. When I do an OTA update, where it's mainly receiving data, TCP recv fails on attempting to read the first block, and errno is set to 11 (EAGAIN). Even if I make the server send just 1024 bytes, the same thing happens.
This is the TCP connect and recv code:
bool net_tcp_connect (SENDER_DESTINATION * dest) {
    struct sockaddr_in destAddr;
    if (!find_host (dest->hostname)) {
        return false;
    }
    memset(&destAddr, 0, sizeof(destAddr));
    memcpy (&destAddr.sin_addr, findhost_ip (), sizeof (destAddr.sin_addr));
    destAddr.sin_family = AF_INET;
    destAddr.sin_port = htons (dest->port);
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        LOGEF("Create: errno %d", errno);
        return false;
    }
    struct timeval tv;
    tv.tv_sec = dest->timeout;
    tv.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (const char*)&tv, sizeof(tv));
    if (connect(sock, (struct sockaddr *)&destAddr, sizeof(destAddr)) != 0) {
        LOGEF("Connect: %s %d errno %d", findhost_str (), dest->port, errno);
        EVENT_HERE ( );
        net_tcp_close ();
        return false;
    }
    return true;
}
// --------------------------------------------------------------------------------------------
int net_tcp_recv (void * buffer, int max_length) {
    if (sock < 0)
        return false;
    int bytes_received = recv (sock, buffer, max_length, 0);
    if (bytes_received < 0) {
        LOGEF("Receive: errno= %d", errno);
        net_tcp_close ();
        bytes_received = 0;
    }
    return bytes_received;
}
EAGAIN can be a sign of a receive timeout, but the timeout is set to 30 seconds and the server usually sends out the first 32k bytes in less than a second.
The ESP8266 code does run OK on some access points and, as far as I can tell, the same code on an ESP32 runs OK on all access points.
Any suggestions for why this might happen, or things that I could try changing in the code or the ESP setup to make it work reliably on any access point?
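Not an answer to the root cause, but one way to make the receive path more forgiving is to treat EAGAIN/EWOULDBLOCK as a soft timeout and retry a bounded number of times instead of closing the socket immediately. A sketch, reusing the sock, LOGEF and net_tcp_close helpers from the question:

/* Sketch: retry on EAGAIN/EWOULDBLOCK (SO_RCVTIMEO expiry) before
 * tearing the connection down. Retry count is arbitrary. */
int net_tcp_recv (void * buffer, int max_length) {
    if (sock < 0)
        return 0;
    for (int attempt = 0; attempt < 3; attempt++) {
        int bytes_received = recv (sock, buffer, max_length, 0);
        if (bytes_received >= 0)
            return bytes_received;
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            break;                 /* a real error: give up */
        LOGEF("Receive timeout, retrying: errno= %d", errno);
    }
    LOGEF("Receive: errno= %d", errno);
    net_tcp_close ();
    return 0;
}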

Linux: inter-process communication by TCP 127.0.0.1 on the same node very slow

TCP communication over 127.0.0.1 or an eth IP (e.g. 10.10.253.12) on the same host is very slow.
The server listens on 0.0.0.0:2000; the client connects to 127.0.0.1:2000 or the local eth IP 10.10.253.12:2000; the client-server transfer speed is only 100 KB per second.
A program written in C using libevent and one written in Java using Netty show the same effect. The programs work as follows:
the server accepts a connection and echoes everything it receives;
the client sends arbitrary 128-byte data and, when the socket becomes writable, sends another 128 bytes; it reads and discards what it receives.
This client/server pair works fine if run on different machines, at a speed of 30 MB per second.
But a zeromq pair communicating over 127.0.0.1 has no such issue.
Server side code is:
---start listener
struct evconnlistener *listener = evconnlistener_new_bind(leader,
        listener_cb, NULL,
        LEV_OPT_REUSEABLE | LEV_OPT_CLOSE_ON_FREE, s_backlog, &addr,
        addrlen);
if (!listener) {
    logit("Could not create a listener!");
    return 1;
}
int fd = evconnlistener_get_fd(listener);
int keepAlive = 0; // a non-zero value enables the keepalive option
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (void *)&keepAlive, sizeof(keepAlive));
do {
    if (event_base_loop(leader, EVLOOP_NO_EXIT_ON_EMPTY) < 0) {
        break;
    }
} while (!event_base_got_exit(leader));
---connect processing
static void listener_cb(struct evconnlistener *listener, evutil_socket_t fd, struct sockaddr *sa, int socklen, void *user_data) {
    if (s_rcvbufsize > 0) {
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, (void *)&s_rcvbufsize, sizeof(s_rcvbufsize));
    }
    if (s_sndbufsize > 0) {
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&s_sndbufsize, sizeof(s_sndbufsize));
    }
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char*)&s_tcpnodelay, sizeof(s_tcpnodelay));
    int keepAlive = 0; // a non-zero value enables the keepalive option
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (void *)&keepAlive, sizeof(keepAlive));
    struct bufferevent *bev = bufferevent_socket_new(s_worker, fd, BEV_OPT_CLOSE_ON_FREE|BEV_OPT_THREADSAFE);
    if (!bev) {
        logit("Error constructing bufferevent!");
        evutil_closesocket(fd);
        return;
    }
    bufferevent_setcb(bev, conn_readcb, conn_writecb, conn_eventcb, NULL);
    bufferevent_enable(bev, EV_READ);
}
---read\write processing
static void conn_writecb(struct bufferevent *bev, void *user_data) {
}
static void conn_readcb(struct bufferevent *bev, void *user_data) {
    struct evbuffer *input = bufferevent_get_input(bev);
    int len = evbuffer_get_length(input);
    struct evbuffer *output = bufferevent_get_output(bev);
    evbuffer_add_buffer(output, input);
}
Client side code is:
---init connection
struct bufferevent* bev= bufferevent_socket_new(s_event_base, -1, BEV_OPT_CLOSE_ON_FREE|BEV_OPT_THREADSAFE);
if (!bev){
return 1;
}
struct timeval tv;
tv.tv_sec = 30; //connect timeout
tv.tv_usec = 0;
bufferevent_set_timeouts(bev, NULL, &tv);
bufferevent_setcb(bev, NULL, NULL, connect_eventcb, (void*)s_event_base);
int flag = bufferevent_socket_connect(bev, &s_target_sockaddr, s_target_socklen);
if (-1 == flag ){
bufferevent_free(bev);
return 1;
}
---connected processing
static void connect_eventcb(struct bufferevent *bev, short events, void *user_data) {
if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR | BEV_EVENT_TIMEOUT)){
bufferevent_free(bev);
}else if (events & BEV_EVENT_CONNECTED) {
int fd = bufferevent_getfd(bev);
if (s_sorcvbufsize > 0){
setsockopt(fd, SOL_SOCKET, SO_RCVBUF, (void *)&s_sorcvbufsize, sizeof(s_sorcvbufsize));
}
if (s_sosndbufsize > 0){
setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&s_sosndbufsize, sizeof(s_sosndbufsize));
}
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char*)&s_tcpnodelay, sizeof(s_tcpnodelay));
int keepAlive = 0; // 非0值,开启keepalive属性
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (void *)&keepAlive, sizeof(keepAlive));
bufferevent_setwatermark(bev, EV_WRITE, s_snd_wmark_l, s_snd_wmark_h);
bufferevent_setcb(bev, conn_readcb, conn_writecb, conn_eventcb, NULL);
bufferevent_enable(bev, EV_READ|EV_WRITE);
bufferevent_trigger(bev, EV_WRITE, BEV_TRIG_IGNORE_WATERMARKS|BEV_OPT_DEFER_CALLBACKS);
}
}
---read/write processing
static void conn_writecb(struct bufferevent *bev, void *user_data) {
struct evbuffer *output = bufferevent_get_output(bev);
for (int len = evbuffer_get_length(output); len < s_snd_wmark_h; len += s_sendsize){
if (0 != bufferevent_write(bev, s_send_buf, s_sendsize)){
break;
}
}
}
static void conn_readcb(struct bufferevent *bev, void *user_data) {
struct evbuffer *input = bufferevent_get_input(bev);
evbuffer_drain(input, 0x7FFFFFFF);
}
The tshark capture shows many TCP keep-alive probes no matter how SO_KEEPALIVE is set:
Tshark capture result 1
Tshark capture result 2
I have tested and resolved it now:
the main reason is that the send buffer size on the server side was too small (8K), smaller than the receive size, which caused the server's sends to congest.
When I adjusted both buffer sizes to 32K, the problem disappeared.
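For reference, a minimal sketch of that fix as it would look in the listener callback above (32 * 1024 is the value from the test; the same applies on the client side):

/* Sketch: give both socket buffers 32K before traffic starts,
 * per the resolution described above. */
int bufsize = 32 * 1024;
setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&bufsize, sizeof(bufsize));
setsockopt(fd, SOL_SOCKET, SO_RCVBUF, (void *)&bufsize, sizeof(bufsize));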

How to make a TAP adapter accept all traffic from the PC?

I'm developing a client application to capture traffic on a PC, using the OpenVPN tun/tap adapter. But it doesn't work: when I test the application, traffic goes through the Wireless LAN adapter. How do I make all traffic go through the tun/tap adapter?
DWORD active = 1;
DWORD len;
int status = DeviceIoControl(handle,
                             TAP_CONTROL_CODE(6, 0), // TAP_IOCTL_SET_MEDIA_STATUS
                             &active,
                             sizeof(active),
                             &active,
                             sizeof(active),
                             &len,
                             NULL
                             );
if (status == 0)
{
    return NULL;
}
int configtun[3] = {0x01000b0a, 0x01000b0a, 0x0000ffff}; // IP, NETWORK, MASK
configtun[0] = inet_addr(ip);
configtun[1] = inet_addr(ip);
char *p = (char*)(configtun + 1);
*(p + 3) = 0;
status = DeviceIoControl(handle,
                         TAP_CONTROL_CODE(10, 0), // TAP_IOCTL_CONFIG_TUN
                         &configtun,
                         sizeof(configtun),
                         &configtun,
                         sizeof(configtun),
                         &len,
                         NULL
                         );
if (status == 0)
{
    return NULL;
}
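Configuring the adapter alone will not move any traffic onto it: Windows keeps sending packets out whichever interface the routing table prefers. A common approach (what OpenVPN's redirect-gateway def1 option does) is to add two half-default routes that win over the existing default route without deleting it. A sketch, assuming the tunnel's gateway/peer address is 10.11.0.1 (a hypothetical value matching the 10.11.x.x sample config above):

route add 0.0.0.0 mask 128.0.0.0 10.11.0.1
route add 128.0.0.0 mask 128.0.0.0 10.11.0.1

The two /1 routes are more specific than the /0 default route, so they win the longest-prefix match while leaving the original default in place for cleanup.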

Adding nftables nat rules with libnftnl APIs

I want to add the following snat and dnat rules using nftables:
nft add rule nat post udp sport 29000 ip saddr 192.168.101.102 udp dport 40000 ip daddr 192.168.101.102 snat 192.168.101.55:35000
nft add rule nat pre udp sport 29000 ip saddr 192.168.101.103 udp dport 32000 ip daddr 192.168.101.55 dnat 192.168.101.102:40000
Here, pre and post are the names of the chains in the nat table, which I added with the following commands:
nft add table nat
nft add chain nat pre { type nat hook prerouting priority 0 \; }
nft add chain nat post { type nat hook postrouting priority 100 \; }
I have already checked this with the nft command and it works perfectly.
I want to achieve the same with the API approach from my application. There are some examples and test programs provided with the libnftnl library. Using nft-rule-add.c and nft-expr_nat-test.c as references, I have made a test application as follows:
/* Packets coming from */
#define SADDR "192.168.1.103"
#define SPORT 29000
/* Packets coming to (the machine on which nat happens) */
#define DADDR "192.168.101.55"
#define DPORT 32000
/* Packets should go from (the machine on which nat happens) */
#define SNATADDR DADDR
#define SNATPORT 35000
/* Packets should go to */
#define DNATADDR SADDR
#define DNATPORT 38000
static void add_payload(struct nftnl_rule *rRule, uint32_t base, uint32_t dreg,
                        uint32_t offset, uint32_t len)
{
    struct nftnl_expr *tExpression = nftnl_expr_alloc("payload");
    if (tExpression == NULL) {
        perror("expr payload oom");
        exit(EXIT_FAILURE);
    }
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_PAYLOAD_BASE, base);
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_PAYLOAD_DREG, dreg);
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_PAYLOAD_OFFSET, offset);
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_PAYLOAD_LEN, len);
    nftnl_rule_add_expr(rRule, tExpression);
}
static void add_cmp(struct nftnl_rule *rRule, uint32_t sreg, uint32_t op,
                    const void *data, uint32_t data_len)
{
    struct nftnl_expr *tExpression = nftnl_expr_alloc("cmp");
    if (tExpression == NULL) {
        perror("expr cmp oom");
        exit(EXIT_FAILURE);
    }
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_CMP_SREG, sreg);
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_CMP_OP, op);
    nftnl_expr_set(tExpression, NFTNL_EXPR_CMP_DATA, data, data_len);
    nftnl_rule_add_expr(rRule, tExpression);
}
static void add_nat(struct nftnl_rule *rRule, uint32_t sreg,
                    const void *ip, uint32_t ip_len,
                    const void *port, uint32_t port_len)
{
    struct nftnl_expr *tExpression = nftnl_expr_alloc("nat");
    if (tExpression == NULL) {
        perror("expr nat oom");
        exit(EXIT_FAILURE);
    }
    /* Performing snat */
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_TYPE, NFT_NAT_SNAT);
    /* Nat family IPv4 */
    nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_FAMILY, NFPROTO_IPV4);
    /* Don't know what to do with these four calls */
    /* How to add IP address and port that should be used in snat */
    // nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_REG_ADDR_MIN, NFT_REG_1);
    // nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_REG_ADDR_MAX, NFT_REG_3);
    // nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_REG_PROTO_MIN, 0x6124385);
    // nftnl_expr_set_u32(tExpression, NFTNL_EXPR_NAT_REG_PROTO_MAX, 0x2153846);
    nftnl_rule_add_expr(rRule, tExpression);
}
static struct nftnl_rule *setup_rule(uint8_t family, const char *table,
                                     const char *chain, const char *handle)
{
    struct nftnl_rule *rule = NULL;
    uint8_t proto = 0;
    uint16_t dport = 0, sport = 0, sNatPort = 0, dNatPort = 0;
    uint8_t saddr[sizeof(struct in6_addr)];    /* Packets coming from */
    uint8_t daddr[sizeof(struct in6_addr)];    /* Packets coming to (the machine on which nat happens) */
    uint8_t snataddr[sizeof(struct in6_addr)]; /* Packets should go from (the machine on which nat happens) */
    uint8_t dnataddr[sizeof(struct in6_addr)]; /* Packets should go to */
    uint64_t handle_num;
    rule = nftnl_rule_alloc();
    if (rule == NULL) {
        perror("OOM");
        exit(EXIT_FAILURE);
    }
    nftnl_rule_set(rule, NFTNL_RULE_TABLE, table);
    nftnl_rule_set(rule, NFTNL_RULE_CHAIN, chain);
    nftnl_rule_set_u32(rule, NFTNL_RULE_FAMILY, family);
    if (handle != NULL) {
        handle_num = atoll(handle);
        nftnl_rule_set_u64(rule, NFTNL_RULE_POSITION, handle_num);
    }
    /* Declaring tcp port or udp port */
    proto = IPPROTO_UDP;
    add_payload(rule, NFT_PAYLOAD_NETWORK_HEADER, NFT_REG_1,
                offsetof(struct iphdr, protocol), sizeof(uint8_t));
    add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, &proto, sizeof(uint8_t));
    /* Declaring dport */
    dport = htons(DPORT);
    add_payload(rule, NFT_PAYLOAD_TRANSPORT_HEADER, NFT_REG_1,
                offsetof(struct udphdr, dest), sizeof(uint16_t));
    add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, &dport, sizeof(uint16_t));
    /* Declaring dest ip */
    add_payload(rule, NFT_PAYLOAD_NETWORK_HEADER, NFT_REG_1,
                offsetof(struct iphdr, daddr), sizeof(daddr));
    inet_pton(AF_INET, DADDR, daddr);
    add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, daddr, sizeof(daddr));
    /* Declaring sport */
    sport = htons(SPORT);
    add_payload(rule, NFT_PAYLOAD_TRANSPORT_HEADER, NFT_REG_1,
                offsetof(struct udphdr, source), sizeof(uint16_t));
    add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, &sport, sizeof(uint16_t));
    /* Declaring src ip */
    add_payload(rule, NFT_PAYLOAD_NETWORK_HEADER, NFT_REG_1,
                offsetof(struct iphdr, saddr), sizeof(saddr));
    inet_pton(AF_INET, SADDR, saddr);
    add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, saddr, sizeof(saddr));
    /* Adding snat params */
    inet_pton(AF_INET, SNATADDR, snataddr);
    sNatPort = htons(SNATPORT);
    add_nat(rule, NFT_REG_1, snataddr, sizeof(snataddr), &sNatPort, sizeof(uint16_t));
    return rule;
}
int main(int argc, char *argv[])
{
    struct mnl_socket *nl;
    struct nftnl_rule *r;
    struct nlmsghdr *nlh;
    struct mnl_nlmsg_batch *batch;
    uint8_t family;
    char buf[MNL_SOCKET_BUFFER_SIZE];
    uint32_t seq = time(NULL);
    int ret, batching;
    if (argc < 4 || argc > 5) {
        fprintf(stderr, "Usage: %s <family> <table> <chain>\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    if (strcmp(argv[1], "ip") == 0)
        family = NFPROTO_IPV4;
    else if (strcmp(argv[1], "ip6") == 0)
        family = NFPROTO_IPV6;
    else {
        fprintf(stderr, "Unknown family: ip, ip6\n");
        exit(EXIT_FAILURE);
    }
    // Now creating rule
    if (argc != 5)
        r = setup_rule(family, argv[2], argv[3], NULL);
    else
        r = setup_rule(family, argv[2], argv[3], argv[4]);
    // Now adding rule through mnl socket
    nl = mnl_socket_open(NETLINK_NETFILTER);
    if (nl == NULL) {
        perror("mnl_socket_open");
        exit(EXIT_FAILURE);
    }
    if (mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
        perror("mnl_socket_bind");
        exit(EXIT_FAILURE);
    }
    batching = nftnl_batch_is_supported();
    if (batching < 0) {
        perror("cannot talk to nfnetlink");
        exit(EXIT_FAILURE);
    }
    batch = mnl_nlmsg_batch_start(buf, sizeof(buf));
    if (batching) {
        nftnl_batch_begin(mnl_nlmsg_batch_current(batch), seq++);
        mnl_nlmsg_batch_next(batch);
    }
    nlh = nftnl_rule_nlmsg_build_hdr(mnl_nlmsg_batch_current(batch),
                                     NFT_MSG_NEWRULE,
                                     nftnl_rule_get_u32(r, NFTNL_RULE_FAMILY),
                                     NLM_F_APPEND|NLM_F_CREATE|NLM_F_ACK, seq++);
    nftnl_rule_nlmsg_build_payload(nlh, r);
    nftnl_rule_free(r);
    mnl_nlmsg_batch_next(batch);
    if (batching) {
        nftnl_batch_end(mnl_nlmsg_batch_current(batch), seq++);
        mnl_nlmsg_batch_next(batch);
    }
    ret = mnl_socket_sendto(nl, mnl_nlmsg_batch_head(batch),
                            mnl_nlmsg_batch_size(batch));
    if (ret == -1) {
        perror("mnl_socket_sendto");
        exit(EXIT_FAILURE);
    }
    mnl_nlmsg_batch_stop(batch);
    ret = mnl_socket_recvfrom(nl, buf, sizeof(buf));
    if (ret == -1) {
        perror("mnl_socket_recvfrom");
        exit(EXIT_FAILURE);
    }
    ret = mnl_cb_run(buf, ret, 0, mnl_socket_get_portid(nl), NULL, NULL);
    if (ret < 0) {
        perror("mnl_cb_run");
        exit(EXIT_FAILURE);
    }
    mnl_socket_close(nl);
    return EXIT_SUCCESS;
}
Here, main() calls setup_rule(), where the whole rule is built. setup_rule() calls add_nat(), where the expression for snat is built. The IP and port that should be used for snat or dnat are passed as args to add_nat().
As I understand it, the philosophy of building a rule is that a rule is assembled from various expressions. In add_nat() an expression is built for doing snat, and that's where I start to fumble. I don't know what to do with the NFTNL_EXPR_NAT_REG_ADDR_MIN and NFTNL_EXPR_NAT_REG_PROTO_MIN kinds of macros, or how to pass the IP address and port which should be used in snat or dnat.
I run the above test application as follows:
./application ip nat pre
And with the following command:
nft list table nat -a
I got the following output:
table ip nat {
    chain pre {
        type nat hook prerouting priority 0; policy accept;
        udp dport 32000 ip daddr 192.168.101.55 #nh,160,96 541307071 udp sport 29000 ip saddr 192.168.1.103 ip daddr 0.0.0.0 #nh,160,64 3293205175 snat to # handle 250
    } # handle 33
    chain post {
        type nat hook postrouting priority 100; policy accept;
    } # handle 34
} # handle 0
Maybe the approach is wrong but, as shown above, it displayed snat to at the end, and that gives me hope that this is the way forward. I also dug into the nftables utility code, but to no avail.
So, to list all the queries:
How do I add snat or dnat rules with the nftables APIs?
What is NFT_REG_1? How should these registers be used, and where?
What is written after # in the output of the nft list table nat -a command? When I apply rules with the nft command it does not show info after # in the output of nft list table.
Also, there is an extra ip daddr 0.0.0.0 displayed in the output of the nft list table nat -a command. Where does that get added?
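Not a verified answer, but a sketch of how the register plumbing appears to work, judging from how nft itself builds the same rule: the nat expression carries no address or port of its own; you first load them into registers with "immediate" expressions, then point NFTNL_EXPR_NAT_REG_ADDR_MIN / NFTNL_EXPR_NAT_REG_PROTO_MIN at those registers. The attribute names below come from libnftnl's headers; treat the exact usage as an assumption to verify against your version:

/* Sketch: load the snat address/port into registers via "immediate"
 * expressions, then tell the nat expression which registers to read.
 * Verify attribute names against your libnftnl version. */
static void add_immediate(struct nftnl_rule *rule, uint32_t dreg,
                          const void *data, uint32_t data_len)
{
    struct nftnl_expr *e = nftnl_expr_alloc("immediate");
    if (e == NULL) {
        perror("expr immediate oom");
        exit(EXIT_FAILURE);
    }
    nftnl_expr_set_u32(e, NFTNL_EXPR_IMM_DREG, dreg);
    nftnl_expr_set(e, NFTNL_EXPR_IMM_DATA, data, data_len);
    nftnl_rule_add_expr(rule, e);
}

/* ...then, instead of the commented-out lines in add_nat(): */
add_immediate(rule, NFT_REG_1, snataddr, sizeof(uint32_t));  /* IPv4 addr  */
add_immediate(rule, NFT_REG_2, &sNatPort, sizeof(uint16_t)); /* port (BE)  */
struct nftnl_expr *nat = nftnl_expr_alloc("nat");
nftnl_expr_set_u32(nat, NFTNL_EXPR_NAT_TYPE, NFT_NAT_SNAT);
nftnl_expr_set_u32(nat, NFTNL_EXPR_NAT_FAMILY, NFPROTO_IPV4);
nftnl_expr_set_u32(nat, NFTNL_EXPR_NAT_REG_ADDR_MIN, NFT_REG_1);
nftnl_expr_set_u32(nat, NFTNL_EXPR_NAT_REG_PROTO_MIN, NFT_REG_2);
nftnl_rule_add_expr(rule, nat);

On the second query: NFT_REG_1 through NFT_REG_4 are scratch registers of the nftables rule VM; payload and immediate expressions write into a destination register and cmp/nat expressions read from a source register, which is why the add_payload()/add_cmp() pairs above share NFT_REG_1. The stray ip daddr 0.0.0.0 probably comes from comparing sizeof(daddr) (the 16-byte in6_addr-sized buffer) instead of 4 bytes for an IPv4 address: a 16-byte comparison spans several registers and gets printed back as extra matches.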

Problem in listening to multicast in C++ with multiple NICs

I am trying to write a multicast client on a machine with two NICs, and I can't make it work. I can see with a sniffer that once I start the program the NIC (eth4) starts receiving the multicast datagrams; however, I can't receive() any in my program.
When running "tshark -i eth4 -R udp.port==xxx" (xxx being the multicast port)
I get:
1059.435483 y.y.y.y (some IP) -> z.z.z.z (multicast IP, not my eth4 NIC IP) UDP Source port: kkk (some other port) Destination port: xxx (multicast port)
I searched the web for some examples/explanations, but it seems like I do what everybody else does. Any help will be appreciated. (Anything to do with route/iptables/code?)
bool connectionManager::sendMulticastJoinRequest()
{
    struct sockaddr_in localSock;
    struct ip_mreqn group;
    char* mc_addr_str = SystemManager::Instance()->getTCP_IP_CHT();
    char* local_addr_str = SystemManager::Instance()->getlocal_IP_TOLA();
    int port = SystemManager::Instance()->getTCP_Port_CHT();
    /* Create a datagram socket on which to receive. */
    CHT_UDP_Feed_sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (CHT_UDP_Feed_sock < 0)
    {
        perror("Opening datagram socket error");
        return false;
    }
    /* application to receive copies of the multicast datagrams. */
    {
        int reuse = 1;
        if (setsockopt(CHT_UDP_Feed_sock, SOL_SOCKET, SO_REUSEADDR, (char *)&reuse, sizeof(reuse)) < 0)
        {
            perror("Setting SO_REUSEADDR error");
            close(CHT_UDP_Feed_sock);
            return false;
        }
    }
    /* Bind to the proper port number with the IP address */
    /* specified as INADDR_ANY. */
    memset((char *) &localSock, 0, sizeof(localSock));
    localSock.sin_family = AF_INET;
    localSock.sin_port = htons(port);
    localSock.sin_addr.s_addr = inet_addr(local_addr_str); // htonl(INADDR_ANY);
    if (bind(CHT_UDP_Feed_sock, (struct sockaddr*)&localSock, sizeof(localSock)))
    {
        perror("Binding datagram socket error");
        close(CHT_UDP_Feed_sock);
        return false;
    }
    /* Join the multicast group mc_addr_str on the local local_addr_str */
    /* interface. Note that this IP_ADD_MEMBERSHIP option must be */
    /* called for each local interface over which the multicast */
    /* datagrams are to be received. */
    group.imr_ifindex = if_nametoindex("eth4");
    if (setsockopt(CHT_UDP_Feed_sock, SOL_SOCKET, SO_BINDTODEVICE, "eth4", 5) < 0)
        return false;
    group.imr_multiaddr.s_addr = inet_addr(mc_addr_str);
    group.imr_address.s_addr = htonl(INADDR_ANY); // also tried inet_addr(local_addr_str) instead
    if (setsockopt(CHT_UDP_Feed_sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *)&group, sizeof(group)) < 0)
    {
        perror("Adding multicast group error");
        close(CHT_UDP_Feed_sock);
        return false;
    }
    // Read from the socket.
    char databuf[1024];
    int datalen = sizeof(databuf);
    if (read(CHT_UDP_Feed_sock, databuf, datalen) < 0)
    {
        perror("Reading datagram message error");
        close(CHT_UDP_Feed_sock);
        return false;
    }
    else
    {
        printf("Reading datagram message...OK.\n");
        printf("The message from multicast server is: \"%s\"\n", databuf);
    }
    return true;
}
Just a thought (I've not done much work with multicast), but could it be because you're binding to a specific IP address? The socket could be accepting only packets destined for its bound IP address and rejecting the multicast ones.
Well, it has been some time since I last played with multicast. Don't you need root/admin rights to use it? Did you enable them when launching your program?
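If the first comment is on the right track, a sketch of the usual alternative (untested, reusing the names from the code above): bind to INADDR_ANY so datagrams addressed to the group address are not filtered out by a unicast bind, and select the interface only through imr_ifindex:

/* Sketch: bind to INADDR_ANY so datagrams sent to the multicast
 * group address pass the bind filter; pick the interface via
 * imr_ifindex rather than SO_BINDTODEVICE. */
memset(&localSock, 0, sizeof(localSock));
localSock.sin_family = AF_INET;
localSock.sin_port = htons(port);
localSock.sin_addr.s_addr = htonl(INADDR_ANY);
if (bind(CHT_UDP_Feed_sock, (struct sockaddr*)&localSock, sizeof(localSock)) < 0)
    perror("Binding datagram socket error");
group.imr_multiaddr.s_addr = inet_addr(mc_addr_str);
group.imr_address.s_addr = htonl(INADDR_ANY);
group.imr_ifindex = if_nametoindex("eth4");
if (setsockopt(CHT_UDP_Feed_sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
               (char *)&group, sizeof(group)) < 0)
    perror("Adding multicast group error");

On a machine with two NICs it is also worth checking the reverse-path filter (net.ipv4.conf.eth4.rp_filter), which can silently drop multicast arriving on an interface that has no matching route back to the source.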
