Capturing berr-counter tx/rx from ip link show

I would like to be able to capture the berr-counter values in a shell script. I can view the values with:
ip -det link show can0
which gives:
2: can0: <NOARP,ECHO> mtu 16 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
link/can promiscuity 0
can state STOPPED (berr-counter tx 144 rx 128) restart-ms 100
bitrate 125000 sample-point 0.866
tq 133 prop-seg 6 phase-seg1 6 phase-seg2 2 sjw 1
flexcan: tseg1 4..16 tseg2 2..8 sjw 1..4 brp 1..256 brp-inc 1
clock 30000000
I could just parse this output and capture the tx/rx berr-counter, but I would rather capture these values directly. So, I have been trying to find where to access these values. I dug into the code at https://github.com/shemminger/iproute2 and found where these values are printed, in ip/iplink_can.c, in the function:
static void can_print_opt(struct link_util *lu, FILE *f, struct rtattr *tb[])
There is the code:
if (tb[IFLA_CAN_BERR_COUNTER]) {
struct can_berr_counter *bc =
RTA_DATA(tb[IFLA_CAN_BERR_COUNTER]);
fprintf(f, "(berr-counter tx %d rx %d) ", bc->txerr, bc->rxerr);
}
And at the bottom of the same file there is a struct:
struct link_util can_link_util = {
.id = "can",
.maxattr = IFLA_CAN_MAX,
.parse_opt = can_parse_opt,
.print_opt = can_print_opt,
.print_xstats = can_print_xstats,
.print_help = can_print_help,
};
But I can't find where can_print_opt or can_link_util.print_opt is called, and I haven't had any success sifting through all of the struct rtattr handling in the repo.
I'm not sure where to go from here to get these values, other than just grabbing them from the output of ip -det link show can0.

Maybe a little bit late, but I was trying to do the same thing: access the CAN interface state and error counters from within a userspace application, without calling ip and parsing its output.
As you did, I explored iproute2's code, and then read some documentation about using netlink to interact with network devices. Essentially, you send an RTM_GETLINK message to a netlink socket and then parse the response, which is a nested list of netlink attributes.
I found this very interesting starting point: http://iijean.blogspot.com/2010/03/howto-get-list-of-network-interfaces-in.html
In that blog post the link to the full code is broken, but it's available here: https://gist.github.com/cl4u2/5204374.
Note that instead of doing all this "manually", it is also possible to use libnetlink.
Based on this, I was able to write some test code - quick and dirty - that does what you want. You only need to determine the ifIndex_ variable, which is the integer index of your CAN network interface (it can be obtained with a SIOCGIFINDEX ioctl on your SocketCAN socket).
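For illustration, here is a minimal sketch of that lookup (not from the original post): it assumes can_socket is an already-open SocketCAN socket and takes the interface name (e.g. "can0") as a parameter; if_nametoindex() from <net/if.h> would work just as well.

#include <string.h>
#include <net/if.h>
#include <sys/ioctl.h>

/* Returns the interface index to use as ifIndex_, or -1 on failure. */
static int can_ifindex(int can_socket, const char *ifname)
{
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(can_socket, SIOCGIFINDEX, &ifr) < 0)
        return -1;              /* errno describes the failure */
    return ifr.ifr_ifindex;
}

With the interface index in hand, the test code itself follows: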
printf("Starting rtnetlink stats reading ...\n");
struct sockaddr_nl local;
struct {
struct nlmsghdr nlh;
struct ifinfomsg ifinfo;
} request;
struct sockaddr_nl kernel;
struct msghdr rtnl_msg;
struct iovec io;
pid_t pid = getpid();
int rtnetlink_socket = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
memset(&local, 0, sizeof(local));
local.nl_family = AF_NETLINK;
local.nl_pid = pid;
local.nl_groups = 0;
if (bind(rtnetlink_socket, (struct sockaddr *) &local, sizeof(local)) < 0) {
printf("Binding failed !\n");
return true;
}
printf("Binding successful.\n");
memset(&rtnl_msg, 0, sizeof(rtnl_msg));
memset(&kernel, 0, sizeof(kernel));
memset(&request, 0, sizeof(request));
kernel.nl_family = AF_NETLINK;
request.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
request.nlh.nlmsg_type = RTM_GETLINK;
request.nlh.nlmsg_flags = NLM_F_REQUEST; // NLM_F_ROOT|NLM_F_MATCH| were originally specified and return all interfaces.
request.nlh.nlmsg_pid = pid;
request.nlh.nlmsg_seq = 1; // Must be monotonically increasing, but we send only one.
// Interface is specified only with index.
request.ifinfo.ifi_family = AF_PACKET;
request.ifinfo.ifi_index = ifIndex_;
request.ifinfo.ifi_change = 0;
io.iov_base = &request;
io.iov_len = request.nlh.nlmsg_len;
rtnl_msg.msg_iov = &io;
rtnl_msg.msg_iovlen = 1;
rtnl_msg.msg_name = &kernel;
rtnl_msg.msg_namelen = sizeof(kernel);
if (sendmsg(rtnetlink_socket, &rtnl_msg, 0) < 0) {
printf("Sendmsg finished with an error.\n");
return true;
}
printf("Sendmsg finished successfully.\n");
// Reply reception
int end = 0;
int replyMaxSize = 8192;
char reply[replyMaxSize];
while (!end) {
int len;
struct nlmsghdr *msg_ptr;
struct msghdr rtnl_reply;
struct iovec io_reply;
memset(&io_reply, 0, sizeof(io_reply));
memset(&rtnl_reply, 0, sizeof(rtnl_reply));
io_reply.iov_base = reply;
io_reply.iov_len = replyMaxSize;
rtnl_reply.msg_iov = &io_reply;
rtnl_reply.msg_iovlen = 1;
rtnl_reply.msg_name = &kernel;
rtnl_reply.msg_namelen = sizeof(kernel);
printf("Waiting for data ...\n");
len = recvmsg(rtnetlink_socket, &rtnl_reply, 0);
printf("Received data with length %d.\n", len);
if (len > 0) {
for (msg_ptr = (struct nlmsghdr *) reply; NLMSG_OK(msg_ptr, len); msg_ptr = NLMSG_NEXT(msg_ptr, len)) {
switch(msg_ptr->nlmsg_type) {
case NLMSG_DONE:
end++;
printf("Received NLMSG_DONE end message.\n");
break;
case RTM_NEWLINK:
printf("Received RTM_NEWLINK message with multipart flag : %d.\n", msg_ptr->nlmsg_flags & NLM_F_MULTI);
if (!(msg_ptr->nlmsg_flags & NLM_F_MULTI)) { end++; }
struct ifinfomsg *iface;
struct rtattr *attribute;
struct rtattr *subAttr;
int msgLen, attrPayloadLen;
iface = (struct ifinfomsg*)NLMSG_DATA(msg_ptr);
msgLen = msg_ptr->nlmsg_len - NLMSG_LENGTH(sizeof(*iface));
for (attribute = IFLA_RTA(iface); RTA_OK(attribute, msgLen); attribute = RTA_NEXT(attribute, msgLen)) {
switch(attribute->rta_type) {
case IFLA_IFNAME:
printf("Interface %d name : %s\n", iface->ifi_index, (char *) RTA_DATA(attribute));
break;
case IFLA_LINKINFO:
attrPayloadLen = RTA_PAYLOAD(attribute);
printf("Found link information. Parsing %d payload bytes ...\n", attrPayloadLen);
for (subAttr = (struct rtattr *)RTA_DATA(attribute); RTA_OK(subAttr, attrPayloadLen); subAttr = RTA_NEXT(subAttr, attrPayloadLen)) {
struct rtattr *subSubAttr;
int subAttrPayloadLen = RTA_PAYLOAD(subAttr);
printf("Found sub-attribute. Type : %d, length : %d.\n", subAttr->rta_type, subAttr->rta_len);
switch (subAttr->rta_type) {
case IFLA_INFO_KIND:
printf("\t Link kind : %s.\n", (char *) RTA_DATA(subAttr));
break;
case IFLA_INFO_DATA:
printf("Found link information data. Parsing %d payload bytes ...\n", RTA_PAYLOAD(subAttr));
for (subSubAttr = (struct rtattr *)RTA_DATA(subAttr); RTA_OK(subSubAttr, subAttrPayloadLen); subSubAttr = RTA_NEXT(subSubAttr, subAttrPayloadLen)) {
printf("Found sub-sub-attribute. Type : %d, length : %d.\n", subSubAttr->rta_type, subSubAttr->rta_len);
switch (subSubAttr->rta_type) {
case IFLA_CAN_STATE:
{
int state = *(int *)RTA_DATA(subSubAttr);
printf("State : %d\n", state);
break;
}
case IFLA_CAN_BERR_COUNTER:
{
struct can_berr_counter *bc = (struct can_berr_counter *)RTA_DATA(subSubAttr);
printf("Error counters : (berr-counter tx %d rx %d)\n", bc->txerr, bc->rxerr);
break;
}
default:
break;
}
}
break;
case IFLA_INFO_XSTATS:
default:
break;
}
}
break;
default:
printf("New attribute. Type : %d, length : %d.\n", attribute->rta_type, attribute->rta_len);
break;
}
}
printf("Finished parsing attributes.\n");
break;
case NLMSG_ERROR:
printf("Could not read link details for interface %d.\n", ifIndex_);
end++;
break;
default:
printf("Received unexpected message ID : %d.\n", msg_ptr->nlmsg_type);
break;
}
printf("Finished parsing message.\n");
}
printf("Finished parsing data.\n");
}
}
close(rtnetlink_socket);
return true;

Related

Implementing cloning and (de/en)capsulation of packets using eBPF

I am trying to create a TC program that will clone a packet, encapsulate it with a modified L3 header and send the clone to a different host ("Monitor host"). Can I do that using a combination of bpf_skb_adjust_room and bpf_clone_redirect?
Kernel examples do not shed much light on this use case (for example, here).
My current attempt seems to be mutating the original packet:
// Represents the redirect destination.
struct destination {
__u32 destination_ip;
__u8 destination_mac[ETH_ALEN];
};
// Contains the destination to redirect traffic to.
struct bpf_map_def SEC("maps") destinations = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(__u32),
.value_size = sizeof(struct destination),
.max_entries = 1,
.map_flags = BPF_F_NO_PREALLOC,
};
SEC("tc")
int tc_ingress(struct __sk_buff *skb) {
__u32 key = 0;
struct destination *dest = bpf_map_lookup_elem(&destinations, &key);
if (dest != NULL) {
void *data_end = (void *)(long)skb->data_end;
void *data = (void *)(long)skb->data;
// Necessary validation: if L3 layer does not exist, ignore and continue.
if (data + sizeof(struct ethhdr) > data_end) {
return TC_ACT_OK;
}
struct ethhdr *eth = data;
struct iphdr encapsulate_iphdr = {};
struct iphdr *original_iphdr = data + sizeof(struct ethhdr);
if ((void*) original_iphdr + sizeof(struct iphdr) > data_end) {
return TC_ACT_OK;
}
// Change the L2 destination to the provided MAC destination
// and the source to the MAC addr of the receiving host.
memcpy(&eth->h_source, &eth->h_dest, ETH_ALEN);
memcpy(&eth->h_dest, dest->destination_mac, ETH_ALEN);
// Change the L3 destination to the provided destination IP
// and the source to the IP addr of the receiving host.
memcpy(&encapsulate_iphdr.daddr, &dest->destination_ip, IPV4_ADDR_LEN);
memcpy(&encapsulate_iphdr.saddr, &original_iphdr->daddr, IPV4_ADDR_LEN);
// Adjust room for another iphdr after the L2 layer.
if (bpf_skb_adjust_room(skb, sizeof(struct iphdr), BPF_ADJ_ROOM_NET, 0)) {
return TC_ACT_OK;
}
// Store the new header right after the L2 header, at the original L3 header offset.
unsigned long offset = (unsigned long) original_iphdr;
if (bpf_skb_store_bytes(skb, (int)offset, &encapsulate_iphdr, sizeof(struct iphdr), 0)) {
return TC_ACT_OK;
}
// Route back to the egress path.
// Zero flag means that the socket buffer is
// cloned to the iface egress path.
bpf_clone_redirect(skb, skb->ifindex, 0);
}
return TC_ACT_OK;
}
I believe that's not possible within the same BPF program run today, because bpf_clone_redirect will redirect the clone as soon as it's called, and there is no clone helper that doesn't also redirect.
You could, however, implement this with a recirculation to the same interface. The pseudocode would look something like this:
if (skb->mark == ORIGINAL_PACKET) {
skb->mark = 0;
return TC_ACT_OK;
}
skb->mark = ORIGINAL_PACKET;
bpf_clone_redirect(skb, skb->ifindex, BPF_F_INGRESS);
skb->mark = 0;
... implement changes ...
return bpf_redirect(skb, skb->ifindex, 0);
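Fleshed out a little, that recirculation pattern might look like the sketch below. This is only an illustration: the mark value, section name and program name are arbitrary, and the actual room adjustment and header rewrite are left as a comment.

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define ORIGINAL_PACKET 0xbeef  // arbitrary non-zero mark used for illustration

SEC("tc")
int tc_clone_then_modify(struct __sk_buff *skb)
{
    if (skb->mark == ORIGINAL_PACKET) {
        // Second pass: this is the untouched clone; clear the mark and let it
        // continue along the normal path.
        skb->mark = 0;
        return TC_ACT_OK;
    }

    // First pass: mark the packet and clone it back to the ingress of the
    // same interface; the clone will take the branch above.
    skb->mark = ORIGINAL_PACKET;
    bpf_clone_redirect(skb, skb->ifindex, BPF_F_INGRESS);
    skb->mark = 0;

    // Now mutate only this copy (bpf_skb_adjust_room(), header rewrite, ...)
    // and send it out of the egress path towards the monitor host.
    return bpf_redirect(skb->ifindex, 0);
}

char _license[] SEC("license") = "GPL";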

Using BPF/XDP with Mininet

I've created a network topology in Mininet to run an algorithm I've implemented using the Linux kernel's eXpress Data Path (XDP).
The objective is to sample packets on the incoming link s1-eth1 on Switch 1 using XDP and store metadata in a shared BPF map. The execution is successful when run on multiple VMs (instead of using Mininet to create an emulation).
However, when using XDP on Mininet (to listen on the emulated network interface), packets aren't recorded.
To further diagnose the cause, I ran Wireshark to listen on the s1-eth1 interface, which does record packets hitting the interface, but for some reason these same packets aren't being registered through the XDP pipeline.
#define KBUILD_MODNAME "foo"
#include <linux/bpf.h>
#include <linux/in.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/if_vlan.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
//BPF_TABLE("percpu_array", uint32_t, long, dropcnt, 256);
BPF_HASH(proto_map, uint32_t, uint32_t, 256);
//Packet Counter to keep track of number of packets flowing through XDP
BPF_ARRAY(pkt_count, uint64_t, 1);
//Map to keep track of the current EPOCH SIZE
BPF_ARRAY(epoch_size_map, uint64_t, 1);
static inline int parse_ipv4(void *data, u64 nh_off, void *data_end,
__be32 *src, __be32 *dest)
{
struct iphdr *iph = data + nh_off;
if (iph + 1 > data_end)
return 0;
*src = iph->saddr;
*dest = iph->daddr;
return iph->protocol;
}
static inline int bitXor(int* x, int* y)
{
int a = *x & *y;
int b = ~*x & ~*y;
int z = ~a & ~b;
return z;
}
int xdp_dsa(struct CTXTYPE *ctx) {
void* data_end = (void*)(long)ctx->data_end;
void* data = (void*)(long)ctx->data;
struct ethhdr *eth = data;
// drop packets
int rc = RETURNCODE; // let pass XDP_PASS or redirect to tx via XDP_TX
uint32_t *value;
uint32_t *counter_value;
uint32_t *epoch_size;
uint16_t h_proto;
uint64_t nh_off = 0;
uint32_t ipproto;
uint64_t magic_value = 12345678;
uint32_t packet = 0;
__be32 src_ip = 0, dest_ip = 0;
nh_off = sizeof(*eth);
if (data + nh_off > data_end) {
pkt_count.increment(packet);
return rc;
}
h_proto = eth->h_proto;
if (h_proto == htons(ETH_P_IP))
ipproto = parse_ipv4(data, nh_off, data_end, &src_ip, &dest_ip);
/*
else if (h_proto == htons(ETH_P_IPV6))
index = parse_ipv6(data, nh_off, data_end);
*/
else
ipproto = 0; //i.e. unknown protocol
/*XOR the srcIP, destIP, and ipproto to encode, then hash*/
int xor_src_dest = bitXor(&src_ip, &dest_ip);
int xor_srcdst_ipproto = bitXor(&xor_src_dest, &ipproto);
uint32_t zero = 0;
//Predecided initial epoch size
uint32_t init_epoch_size = 10;
//Variable to store the current epoch size (to check end of epoch)
uint32_t cur_epoch_size;
//Lookup epoch size from shared map (to check whether intialized else read)
epoch_size = epoch_size_map.lookup(&zero);
// Start condition (epoch size map is initialized with zero), then set to initial epoch size
// Else read the current epoch size into a variable
if(epoch_size)
{
if(*epoch_size == 0)
{
*epoch_size = init_epoch_size;
}
else
{
cur_epoch_size = *epoch_size;
}
}
counter_value = pkt_count.lookup(&packet);
if (counter_value)
{
if (*counter_value < cur_epoch_size)
{
value = proto_map.lookup_or_init(&xor_srcdst_ipproto, &zero);
if (value)
{
pkt_count.increment(packet);
*value += 1;
}
}
else if (*counter_value == cur_epoch_size)
{
pkt_count.update(&packet, &magic_value);
}
else if(*counter_value == magic_value)
{
return rc;
}
}
return rc;
}
Any ideas?

GCM-AEAD support for ubuntu system running linux kernel-3.10

I am trying to implement AEAD sample code for encryption using GCM. But I always get an invalid argument error while setting the key:
static int init_aead(void)
{
printk("Starting encryption\n");
struct crypto_aead *tfm = NULL;
struct aead_request *req;
struct tcrypt_result tresult;
struct scatterlist plaintext[1] ;
struct scatterlist ciphertext[1];
struct scatterlist gmactext[1];
unsigned char *plaindata = NULL;
unsigned char *cipherdata = NULL;
unsigned char *gmacdata = NULL;
const u8 *key = kmalloc(16, GFP_KERNEL);
char *algo = "rfc4106(gcm(aes))";
unsigned char *ivp = NULL;
int ret, i, d;
unsigned int iv_len;
unsigned int keylen = 16;
/* Allocating a cipher handle for AEAD */
tfm = crypto_alloc_aead(algo, 0, 0);
init_completion(&tresult.completion);
if(IS_ERR(tfm)) {
pr_err("alg: aead: Failed to load transform for %s: %ld\n", algo,
PTR_ERR(tfm));
return PTR_ERR(tfm);
}
/* Allocating request data structure to be used with AEAD data structure */
req = aead_request_alloc(tfm, GFP_KERNEL);
if(IS_ERR(req)) {
pr_err("Couldn't allocate request handle for %s:\n", algo);
return PTR_ERR(req);
}
/* Registering a callback function to be called when the request completes */
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, aead_work_done,&tresult);
crypto_aead_clear_flags(tfm, ~0);
/* Set key */
get_random_bytes((void*)key, keylen);
if((ret = crypto_aead_setkey(tfm, key, 16) != 0)) {
pr_err("Return value for setkey is %d\n", ret);
pr_info("key could not be set\n");
ret = -EAGAIN;
return ret;
}
/* Set authentication tag length */
if(crypto_aead_setauthsize(tfm, 16)) {
pr_info("Tag size could not be authenticated\n");
ret = -EAGAIN;
return ret;
}
/* Set IV size */
iv_len = crypto_aead_ivsize(tfm);
if (!(iv_len)){
pr_info("IV size could not be authenticated\n");
ret = -EAGAIN;
return ret;
}
plaindata = kmalloc(16, GFP_KERNEL);
cipherdata = kmalloc(16, GFP_KERNEL);
gmacdata = kmalloc(16, GFP_KERNEL);
ivp = kmalloc(iv_len, GFP_KERNEL);
if(!plaindata || !cipherdata || !gmacdata || !ivp) {
printk("Memory not availaible\n");
ret = -ENOMEM;
return ret;
}
for (i = 0, d = 0; i < 16; i++, d++)
plaindata[i] = d;
memset(cipherdata, 0, 16);
memset(gmacdata, 0, 16);
for (i = 0,d=0xa8; i < 16; i++, d++)
ivp[i] = d;
sg_init_one(&plaintext[0], plaindata, 16);
sg_init_one(&ciphertext[0], cipherdata, 16);
sg_init_one(&gmactext[0], gmacdata, 128);
aead_request_set_crypt(req, plaintext, ciphertext, 16, ivp);
aead_request_set_assoc(req, gmactext, 16);
ret = crypto_aead_encrypt(req);
if (ret)
printk("cipher call returns %d \n", ret);
else
printk("Failure \n");
return 0;
}
module_init(init_aead);
module_exit(exit_aead);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("My code for aead encryption test");
On inserting the module I get the following output:
Starting encryption
Return value for setkey is -22
key could not be set
According to the AEAD specification, AEAD here uses AES-128 for encryption, hence the block size should be 128 bits.
But my system shows only a 1-byte block size for this AEAD algorithm:
name : rfc4106(gcm(aes))
driver : rfc4106-gcm-aesni
module : aesni_intel
priority : 400
refcnt : 1
selftest : passed
type : nivaead
async : yes
blocksize : 1
ivsize : 8
maxauthsize : 16
geniv : seqiv
Is the invalid argument error thrown because of the block size? If so, what should I do to make it work?
The block size of AES is indeed always 128 bits. The block size of GCM is a different matter, though. GCM (Galois/Counter Mode) is, as the name suggests, built on top of the CTR (counter) mode of operation, sometimes also called the SIC (Segmented Integer Counter) mode of operation. This turns AES into a stream cipher. Stream ciphers, by definition, have a block size of one byte (or, more precisely, one bit, but bit-level operations are usually not supported by APIs).
Block size, however, has little to do with the key size passed in the setkey call, and that argument takes bytes rather than bits (the unit in which key lengths are usually specified).
The size of the IV should be 12 bytes (the default). Otherwise additional calculations may be needed by the GCM implementation (if those exist at all).
For AES-GCM per RFC 4106 the key material must be 20 bytes. I don't know yet why. I've looked into the IPsec source code to see how the encryption is done there.
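For what it's worth, the kernel's rfc4106 template takes the last 4 bytes of the key material as the nonce salt defined by RFC 4106, which also fits the ivsize of 8 shown in the /proc/crypto listing above (4-byte salt plus 8-byte explicit IV). A minimal sketch of the setkey call under that reading, reusing the tfm from the question's code:

u8 keybuf[20];  /* 16-byte AES-128 key followed by the 4-byte RFC 4106 salt */
int err;

get_random_bytes(keybuf, sizeof(keybuf));
err = crypto_aead_setkey(tfm, keybuf, sizeof(keybuf));
if (err)
    pr_err("setkey failed: %d\n", err);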

Multithreaded client server problem

I am facing a peculiar problem with my client-server application.
I have written a console-based multithreaded client-server program.
Multiple clients try to send and receive data thousands of times.
When I run more than one client, my old client stops sending and receiving data and the new client takes over.
I am unable to understand why my first client gets blocked when I start another client.
Here I have added some code.
These are TCP operations. I have only included the key functions.
bool LicTCPServer::Initialize()
{
m_socketAccept = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP );
if (m_socketAccept == INVALID_SOCKET)
return false;
int *p_int;
p_int = (int*)malloc(sizeof(int));
*p_int = 1;
if( (setsockopt(m_socketAccept, SOL_SOCKET, SO_REUSEADDR, (char*)p_int, sizeof(int)) == -1 )||
(setsockopt(m_socketAccept, SOL_SOCKET, SO_KEEPALIVE, (char*)p_int, sizeof(int)) == -1 ) ){
printf("Error setting options %d\n", WSAGetLastError());
free(p_int);
}
free(p_int);
/*int iMode = 1;
ioctlsocket(m_socketAccept, FIONBIO, (u_long FAR*) &iMode);*/
SOCKADDR_IN oSockAddr;
::ZeroMemory(&oSockAddr, sizeof(SOCKADDR_IN));
oSockAddr.sin_family = AF_INET;
oSockAddr.sin_port = htons(m_wPortNoServer);
oSockAddr.sin_addr.s_addr = m_dwInetAddrServer;
int ret = bind(m_socketAccept, (const sockaddr*) &oSockAddr, sizeof(SOCKADDR_IN));
int error;
if (ret == SOCKET_ERROR)
{
closesocket(m_socketAccept);
error = WSAGetLastError();
return false;
}
error = listen(m_socketAccept, SOMAXCONN);
DWORD temp = GetLastError();
if (error == SOCKET_ERROR)
return false;
return true;
}
bool LicTCPServer::CheckConnection(struct sockaddr_in *clientAddress, SOCKET *sockValue)
{
//struct sockaddr_in clientAddress1;
int clientAddressLen = sizeof(struct sockaddr_in);
//struct sockaddr_in clientAddress; // Address of the client that sent data
SOCKET socket = accept(m_socketAccept, (struct sockaddr *)clientAddress, &clientAddressLen);
printf("Port - %d \n",clientAddress->sin_port);
//m_socketConnect = socket;
*sockValue = socket;
return true;
}
int LicTCPServer::ReceiveData(SOCKET socketNo, char* pszBuf, int & bufLen)
{
/*struct timeval tvTimeout;
tvTimeout.tv_sec = 0;
tvTimeout.tv_usec = (long) (10 * 1000);
fd_set fdSet;
FD_ZERO(&fdSet);
FD_SET(socketNo, &fdSet);
long lStatus = select(socketNo + 1, &fdSet, NULL, NULL, &tvTimeout);
if (lStatus <= 0)
{
FD_ZERO(&fdSet);
}
if (!FD_ISSET(socketNo, &fdSet))
{
return 0;
}*/
/*if (!CanReadOnBlockingSocket(socketNo))
{
return TELEGRAM_RECEIVE_ERROR;
}*/
bufLen = recv(socketNo, pszBuf, 10, 0);
if(bufLen == -1)
return WSAGetLastError();
else if(bufLen == 0)
{
closesocket(socketNo);
return -1;
}
else
return 0;
}
bool LicTCPServer::SendData(SOCKET socketNo, BYTE *puchBuf, int iBufLen)
{
int ret = send(socketNo, (char*)puchBuf, iBufLen,0);
//printf("Sent from server: %d %d\n\n",test2.a,test2.b);
return true;
}
Here is my main() function:
void ClientServerCommunication(void *dummy);
struct informationToClient
{
LicTCPServer *serverFunctions;
SOCKET clientSocketNo;
}infoToClient;
int _tmain(int argc, _TCHAR* argv[])
{
DWORD dwInetAddrServer = inet_addr("127.0.0.1");
if(dwInetAddrServer == INADDR_NONE)
return 0;
WORD dwPortNumber = 2001;
LicTCPServer server(dwInetAddrServer, dwPortNumber);
server.Initialize();
struct sockaddr_in clientAddress;
SOCKET sockValue;
infoToClient.serverFunctions = &server;
while(1)
{
bool retValue = server.CheckConnection(&clientAddress, &sockValue);
if( sockValue == INVALID_SOCKET )
{
Sleep(10000);
continue;//or exit from thread
}
infoToClient.clientSocketNo = sockValue;
//retrieve client information from Make Connection and put it into LicenseClientData class
//Create thread for each client which will receive or send data
_beginthread(ClientServerCommunication, 0, (void *)&infoToClient);
//delete
}
return 0;
}
void ClientServerCommunication(void *dummy)
{
int iRetValue;
informationToClient *socketInfo = (informationToClient *)dummy;
char szBuf[10];
int iBufLen = 0;
while(1)
{
iRetValue = socketInfo->serverFunctions->ReceiveData(socketInfo->clientSocketNo, szBuf, iBufLen);
printf("Data received on socket %d %s\n", socketInfo->clientSocketNo, szBuf);
if(iRetValue == WSAECONNRESET)
{
_endthread();
}
socketInfo->serverFunctions->SendData(socketInfo->clientSocketNo, (BYTE*)szBuf, iBufLen);
printf("Data sent on socket %d %s\n", socketInfo->clientSocketNo, szBuf);
}
}
All the code above is test code. Once it works correctly, I will use the LicTCPServer class in my application.
There are many possible errors.
Your description is consistent with you storing the 'connection' as a global variable, rather than giving each thread on the server its own connection instance.
Update:
A quick glance at the code confirms it: infoToClient is global, so each new connection overwrites the data that the previous client's thread is still using.
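For illustration, one way around this (a sketch reusing the question's informationToClient type, not a drop-in patch) is to give every accepted connection its own heap-allocated copy and let the thread release it when it finishes:

/* In the accept loop, instead of reusing the global infoToClient: */
informationToClient *info = (informationToClient *)malloc(sizeof(*info));
if (info != NULL)
{
    info->serverFunctions = &server;
    info->clientSocketNo = sockValue;
    _beginthread(ClientServerCommunication, 0, (void *)info);
}

/* In the thread, free the private copy once the connection is finished: */
void ClientServerCommunication(void *param)
{
    informationToClient *socketInfo = (informationToClient *)param;
    /* ... same receive/send loop as before ... */
    free(socketInfo);
    _endthread();
}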

Unix Client and Server Stuck in an Infinite Loop After Reading a File to the Client

I am currently making a simple client and server, but I have run into an issue. Part of the system is for the client to query about a local file on the server. The contents of that file must then be sent to the client. I am able to send all the text within a file to the client; however, the client seems to be stuck in its read loop. Below are the code snippets for both the client and server that are meant to deal with this:
Client Code That Runs the Read Loop
else if(strcmp(commandCopy, get) == 0)
{
char *ptr;
int total = 0;
char *arguments[1024];
char copy[2000];
char * temp;
int rc;
strcpy(copy, command);
ptr = strtok(copy," ");
while (ptr != NULL)
{
temp = (char *)malloc(sizeof(ptr));
temp = ptr;
arguments[total] = temp;
total++;
ptr = strtok (NULL, " ");
}
if(total == 4)
{
if (strcmp(arguments[2], "-f") == 0)
{
printf("1111111111111");
send(sockfd, command, sizeof(command), 0 );
printf("sent %s\n", command);
memset(&command, '\0', sizeof(command));
cc = recv(sockfd, command, 2000, 0);
if (cc == 0)
{
exit(0);
}
}
else
{
printf("Here");
strcpy(command, "a");
send(sockfd, command, sizeof(command), 0 );
printf("sent %s\n", command);
memset(&command, '\0', sizeof(command));
cc = recv(sockfd, command, 2000, 0);
}
}
else
{
send(sockfd, command, sizeof(command), 0 );
printf("sent %s\n", command);
memset(&command, '\0', sizeof(command));
while ((rc = read(sockfd, command, 1000)) > 0)
{
printf("%s", command);
}
if (rc)
perror("read");
}
}
Server Code That Reads the File
char* getRequest(char buf[], int fd)
{
char * ptr;
char results[1000];
int total = 0;
char *arguments[1024];
char data[100];
FILE * pFile;
pFile = fopen("test.txt", "r");
ptr = strtok(buf," ");
while (ptr != NULL)
{
char * temp;
temp = (char *)malloc(sizeof(ptr));
temp = ptr;
arguments[total] = temp;
total++;
ptr = strtok (NULL, " ");
}
if(total < 2)
{
strcpy(results, "Invaild Arguments \n");
return results;
}
if(pFile != NULL)
{
while(fgets(results, sizeof(results), pFile) != NULL)
{
//fputs(mystring, fd);
write(fd,results,strlen(results));
}
}
else
{
printf("Invalid File or Address \n");
}
fclose(pFile);
return "End of File \0";
}
Server Code to execute the command
else if(strcmp(command, "get") == 0)
{
int pid = fork();
if (pid == -1)
{
printf("Failed To Fork...\n");
return -1;
}
if (pid !=0)
{
wait(NULL);
}
else
{
char* temp;
temp = getRequest(buf, newsockfd);
strcpy(buf, temp);
send(newsockfd, buf, sizeof(buf), 0 );
exit(1);
}
}
The whole else if clause in the client code is a bit large for a function, let alone a part of a function as it presumably is. The logic in the code is ... interesting. Let us dissect the first section:
else if (strcmp(commandCopy, get) == 0)
{
char *ptr;
int total = 0;
char *arguments[1024];
char *temp;
ptr = strtok(copy, " ");
while (ptr != NULL)
{
temp = (char *)malloc(sizeof(ptr));
temp = ptr;
arguments[total] = temp;
total++;
ptr = strtok(NULL, " ");
}
I've removed immaterial declarations and some code. The use of strtok() is fine in context, but the memory allocation is leaky. You allocate enough space for a character pointer, and then copy the pointer from strtok() over the only pointer to the allocated space (thus leaking it). Then the pointer is copied to arguments[total]. The code could, therefore, be simplified to:
else if (strcmp(commandCopy, get) == 0)
{
char *ptr;
int total = 0;
char *arguments[1024];
ptr = strtok(copy, " ");
while (ptr != NULL)
{
arguments[total++] = ptr;
ptr = strtok(NULL, " ");
}
Nominally, there should be a check that you don't overflow the arguments list, but since the original limits the string to 2000 characters, you can't have more than 1000 arguments (all single characters separated by single spaces).
What you have works - it achieves the same assignment the long way around, but it leaks memory prodigiously.
The main problem seems to be that the server sends all the contents, but it doesn't close the socket, so the client has no way of knowing the server's done. If you close the socket after you finish sending the data (or just call shutdown()), then the client's read() will return 0 when it finishes reading the data.
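For instance, a minimal sketch of that change in the server's get branch, keeping the question's variable names:

/* Child process, after sending the file contents: */
char *temp = getRequest(buf, newsockfd);
strcpy(buf, temp);
send(newsockfd, buf, sizeof(buf), 0);
shutdown(newsockfd, SHUT_WR);  /* no more data: the client's read() now returns 0 */
exit(1);

The client's while ((rc = read(sockfd, command, 1000)) > 0) loop then terminates normally once it has consumed all the data.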
FWIW, there are lots of other problems with this code:
getRequest: you call malloc() but never free(); in fact, malloc()'s return value is immediately overwritten and thrown away.
Why bother forking if you're just going to wait() on the child?
You probably want to use strlcpy() instead of strcpy() to avoid buffer overruns.

Resources