DHCP: Can't receive reply from server - networking

I am working on Ubuntu 9.04. I am running this on VMware workstation. Here is my C code:
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sockfd, cnt;
    socklen_t addrlen;
    const int on = 1;
    struct sockaddr_in servaddr, cliaddr;
    char reply[512];

    sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd < 0) {
        perror("socket");
        exit(1);
    }
    setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    /* listen on UDP port 68 on all interfaces */
    bzero(&cliaddr, sizeof(cliaddr));
    cliaddr.sin_family = AF_INET;
    cliaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    cliaddr.sin_port = htons(68);

    addrlen = sizeof(servaddr);
    if (bind(sockfd, (struct sockaddr *) &cliaddr, sizeof(cliaddr)) < 0) {
        perror("bind");
        exit(1);
    }
    while (1) {
        cnt = recvfrom(sockfd, reply, sizeof(reply), 0,
                       (struct sockaddr *) &servaddr, &addrlen);
        if (cnt < 0) {
            perror("recvfrom");
            exit(1);
        }
        printf("\nReply Received\n");
    }
}
I run this program in one terminal and run 'dhclient' on another. I receive no datagrams. What am I doing wrong?

Looks like you're listening on UDP port 68 for a broadcast message from the client? If I'm reading DHCP correctly, the client sends its broadcast 'discover' request FROM UDP port 68, but TO UDP port 67 on the server, so you would need to be listening on port 67 to receive it.
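In other words, only the port in the bind needs to change for your program to see those packets. A minimal sketch against the code above:

/* DHCPDISCOVER goes client:68 -> server:67, so a program that wants to
 * see it must bind the server-side port, not the client-side one: */
cliaddr.sin_port = htons(67);   /* was htons(68) */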
An easy first test of your code, before trying it with dhclient, would be to talk to your server with netcat. A command line like
echo "Foo" | netcat -u localhost 68
should cause a packet to be received by your current code.
Another good debugging tool is Wireshark, which will let you see exactly what UDP packets dhclient sends and what they contain.

I'm not sure what you're doing wrong, but if I were you I'd write my own client, which is very simple, and see if it can talk to your server code above (who knows what dhclient might do besides contacting your code). I'd also temporarily change the port number to something that isn't well-known; that way I wouldn't be interfering with any other programs or interfaces.

I recommend this tutorial. Also, are you running as root? You can't bind that low-numbered port otherwise.
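If you keep port 68, a quick guard before the bind() makes that failure mode obvious (a sketch of my own, not from the original post; geteuid() comes from <unistd.h>):

/* on Linux, ports below 1024 require root (or CAP_NET_BIND_SERVICE) */
if (geteuid() != 0)
    fprintf(stderr, "warning: not root; bind() to port 68 will likely fail with EACCES\n");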

Related

Why does a TCP BSD-sockets server get stuck in read() even when data is received?

I've created a TCP server application using BSD sockets on a NUCLEO-H743ZI2 development board with STM32CubeMX 5.6.0 and LwIP 2.0.3 in Keil MDK-ARM.
I noticed that:
- If a client connects and sends 11 bytes or more at first, the server receives the data correctly and read() returns, displaying the data.
- However, if the client's first send is shorter than 11 bytes, read() blocks until the client disconnects, even if later data exceeds 11 bytes. After the disconnection, all the queued data is displayed.
Namely, if the first data sent from a client to my server is shorter than 11 bytes, the event_callback for a rcvevent is not triggered until disconnection.
My aim is to make the server able to receive as little as one byte.
I've pasted my server task/thread below. Let me have your kind response at your earliest convenience, and feel free to request other related files/libraries (lwip.h, lwipopts.h, ...).
Kind Regards
void StartTask01(void const * argument)
{
  /* USER CODE BEGIN StartTask01 */
  MX_LWIP_Init();

  /* start a listening TCP server */
  int iServerSocket;
  struct sockaddr_in address;
  if ((iServerSocket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
  {
    printf("Socket could not be created\n");
  }
  else
  {
    address.sin_family = AF_INET;
    address.sin_port = htons(80);
    address.sin_addr.s_addr = INADDR_ANY;
    if (bind(iServerSocket, (struct sockaddr *)&address, sizeof(address)) < 0)
    {
      printf("socket could not be bound\n");
    }
    else
    {
      listen(iServerSocket, MEMP_NUM_NETCONN);
    }
  }

  /* server started listening */
  struct sockaddr_in remoteHost;
  socklen_t addrlen;
  int newconn;
  char caReadBuffer[1500];
  memset(caReadBuffer, 0, sizeof(caReadBuffer));
  for (;;)
  {
    /* block until an incoming connection is accepted */
    addrlen = sizeof(remoteHost);
    newconn = accept(iServerSocket, (struct sockaddr *)&remoteHost, &addrlen);
    if (newconn != -1) /* if accepted well */
    {
      /* block until data arrives */
      read(newconn, caReadBuffer, sizeof(caReadBuffer));
      printf("data read: %s\n", caReadBuffer);
      memset(caReadBuffer, 0, sizeof(caReadBuffer));
    }
  }
  /* USER CODE END StartTask01 */
}
The problem that's causing this issue is that you only call read once on each connection. If you don't happen to receive all the data from that single call to read (which is entirely unpredictable), you will never call read on that connection again.
When you call read on a blocking TCP connection, it will only block if there is no data available. Otherwise, it will give you whatever data is available up to the maximum number of bytes you ask for. It will not wait for more data if only some is available. It's up to you to call read again if you didn't receive all the data you expected.
On your second iteration of the for loop, you overwrite newconn with a new connection. You don't close the old connection, so you have a socket leak.
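Putting both fixes together, a minimal sketch of the accept/read loop (your variable names, error handling trimmed):

for (;;)
{
    socklen_t addrlen = sizeof(remoteHost);
    newconn = accept(iServerSocket, (struct sockaddr *)&remoteHost, &addrlen);
    if (newconn == -1)
        continue;

    /* keep calling read() until the peer closes (0) or an error (-1) */
    int n;
    while ((n = read(newconn, caReadBuffer, sizeof(caReadBuffer) - 1)) > 0)
    {
        caReadBuffer[n] = '\0';   /* NUL-terminate before printing */
        printf("data read: %s\n", caReadBuffer);
    }
    close(newconn);               /* fixes the socket leak */
}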
SOLVED:
The problem was that my server was listening on port 80. I changed it to port 7 and thankfully the bug is resolved; read() now works as expected.
This bug makes me think that LwIP has a problem listening on the web port (80) as opposed to others; there seems to be some kind of special treatment of certain well-known ports, even for unimplemented protocols.
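A hedged guess as to why (an assumption on my part, not something the post confirms): STM32CubeMX can generate LwIP with its built-in HTTP server enabled, and that module already owns port 80, which would make port 80 behave differently from port 7. It may be worth checking the generated lwipopts.h:

/* Assumption: if the generated configuration enables LwIP's own httpd,
 * it binds port 80 itself and will conflict with a user socket there. */
#define LWIP_HTTPD 0   /* hypothetical check; 1 would enable the built-in httpd */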

ETH_P_IPV6 in dev_start_xmit

I have a network driver (called gtp, a UDP encapsulation protocol) implemented as a kernel module in an OpenWrt system, and a netlink library to control the driver by opening sockets etc. The driver seems to work fine, but when the kernel calls gtp_start_xmit() through the .ndo_start_xmit pointer, the skb passed to this function has protocol 0x86dd (ETH_P_IPV6), i.e. ntohs(skb->protocol) == 0x86dd. I was hoping to see 0x0800 (ETH_P_IP) instead. I don't know how the Ethernet frame arrives at my driver with this field set. Could someone help me figure it out?
The netlink library opens socket as:
int fd1 = socket(AF_INET, SOCK_DGRAM, 0);
struct sockaddr_in sockaddr_fd1 = {
    .sin_family = AF_INET,
    .sin_port = htons(3386),
    .sin_addr = {
        .s_addr = INADDR_ANY,
    },
};
if (bind(fd1, (struct sockaddr *) &sockaddr_fd1,
         sizeof(sockaddr_fd1)) < 0) {
    perror("bind");
    exit(EXIT_FAILURE);
}
Also, my device is big-endian.

QTcpSocket - Connection Refused until Port Changed

I apologize; I wasn't sure how to give this issue a concise title.
Background: I am using QTcpSocket to connect to a PLC Simulator application. Previously I was using PLCQTLIB to connect to the simulator and everything was working fine, but that library did not offer enough functionality for my project, so I created my own library for interfacing Qt with the libnodave library.
The simulator runs on IP Address 192.168.32.1 and Port 102
Current Issue: I enter the IP (192.168.32.1) and Port (102) and press connect. I receive:
TCP Error = QAbstractSocket::ConnectionRefusedError
TCP Error = QAbstractSocket::SocketTimeoutError
If I change the port to 80 and press connect, the connection is successful. However, the connection to the PLC Simulator will fail because it is not listening on Port 80.
Now that a successful connection has been established to 192.168.32.1 and the current state of the connection is disconnected, I can enter the correct port 102 and connect successfully.
Question: Why does the TCP Socket not establish a connection to Port 102 until after a connection has been previously opened on Port 80? No firewalls exist and all communication is occurring on the local machine.
Declared in Header:
QTcpSocket *tcp;
Source File:
PLCLibNoDave::PLCLibNoDave()
{
    tcp = new QTcpSocket();
    connect(tcp, SIGNAL(stateChanged(QAbstractSocket::SocketState)),
            this, SLOT(tcpStateChanged(QAbstractSocket::SocketState)));
    connect(tcp, SIGNAL(error(QAbstractSocket::SocketError)),
            this, SLOT(tcpErrorHandler(QAbstractSocket::SocketError)));
    connect(tcp, SIGNAL(hostFound()), this, SLOT(tcpHostFound()));
    connect(tcp, SIGNAL(connected()), this, SLOT(tcpConnected()));
    connect(tcp, SIGNAL(disconnected()), this, SLOT(tcpDisconnected()));
}

void PLCLibNoDave::connectTCP(int port, QString ip)
{
    tcp->connectToHost(ip, port);
    if (!tcp->waitForConnected(3000)) {
        tcp->disconnectFromHost();
        return;
    }
    tcpHandle = tcp->socketDescriptor();
    if (tcpHandle == -1) {
        tcp->disconnectFromHost();
        qDebug() << "Invalid Socket Descriptor on Connect";
        return;
    }
    return;
}

void PLCLibNoDave::disconnectTCP()
{
    tcp->disconnectFromHost();
    if (tcp->state() == QAbstractSocket::UnconnectedState ||
        tcp->waitForDisconnected(1000)) {
        tcpError = tcp->error();
    }
    else {
        tcpError = tcp->error();
        qDebug() << "Disconnect Failed: " << tcp->errorString();
    }
    return;
}

Winsock bind() failing with WSAEADDRNOTAVAIL for directed broadcast address

I am setting up a UDP socket and trying to bind what should be a valid network broadcast address to it (192.168.202.255 : 23456), but bind fails with error 10049, WSAEADDRNOTAVAIL. If I use a localhost broadcast address, 127.0.0.255, it succeeds.
WSAEADDRNOTAVAIL's documentation says that "The requested address is not valid in its context. This normally results from an attempt to bind to an address that is not valid for the local computer. This can also result from connect, sendto, WSAConnect, WSAJoinLeaf, or WSASendTo when the remote address or port is not valid for a remote computer (for example, address or port 0)." But I think this address, 192.168.202.255, should be a valid broadcast address because of the following entry when running ipconfig:
What might be the problem?
Code
I am new to Winsock programming and am probably making an elementary error, but I can't find it. The code I have so far is:
m_ulAddress = ParseIPAddress(strAddress);

// Winsock 2.2 is supported in XP
const WORD wVersionRequested = MAKEWORD(2, 2);
WSADATA oWSAData;
const int iError = WSAStartup(wVersionRequested, &oWSAData);
if (iError != 0) {
    PrintLine(L"Error starting the network connection: WSAStartup error " + IntToStr(iError));
} else if (LOBYTE(oWSAData.wVersion) != 2 || HIBYTE(oWSAData.wVersion) != 2) {
    PrintLine(L"Error finding version 2.2 of Winsock; got version " + IntToStr(LOBYTE(oWSAData.wVersion)) + L"." + IntToStr(HIBYTE(oWSAData.wVersion)));
} else {
    m_oSocket = socket(AF_INET, SOCK_DGRAM /*UDP*/, IPPROTO_UDP);
    if (m_oSocket == INVALID_SOCKET) {
        PrintLine(L"Error creating the network socket");
    } else {
        // Socket needs to be able to send broadcast messages
        int iBroadcast = true; // docs say int sized, but boolean values
        if (setsockopt(m_oSocket, SOL_SOCKET, SO_BROADCAST, (const char*)&iBroadcast, sizeof(iBroadcast)) != 0) {
            PrintLine(L"Error setting socket to allow broadcast addresses; error " + IntToStr(WSAGetLastError()));
        } else {
            m_oServer.sin_family = AF_INET;
            m_oServer.sin_port = m_iPort;
            m_oServer.sin_addr.S_un.S_addr = m_ulAddress;
            // !!! This is the failing call
            if (bind(m_oSocket, (sockaddr*)&m_oServer, sizeof(m_oServer)) == -1) {
                PrintLine(L"Error binding address " + String(strAddress.c_str()) + L":" + IntToStr(m_iPort) + L" to socket; error " + IntToStr(WSAGetLastError()));
            } else {
                m_bInitialisedOk = true;
            }
        }
    }
}
Comments
- ParseIPAddress is a wrapper around inet_addr; inspecting the value of m_oServer.sin_addr.S_un.S_addr, it appears to be correct. m_oSocket is a SOCKET.
- I added the call to setsockopt since you can't broadcast via anything but TCP by default (see the second paragraph in sendto's Remarks); this call doesn't make any difference.
- PrintLine is a wrapper around console output.
- The odd String / c_str() casts convert between C++ wstrings and VCL Unicode strings, since I am using C++ Builder and its VCL libraries. The IP address is a narrow (char) string.
The sendto documentation states that "If a socket is opened, a setsockopt call is made, and then a sendto call is made, Windows Sockets performs an implicit bind function call." This implies that bind is not needed at all. If I omit the call, then calling sendto like so:
const int iLengthBytes = strMessage.length() * sizeof(char); // Narrow string
const int iSentBytes = sendto(m_oSocket, strMessage.c_str(), iLengthBytes, 0, (sockaddr*)&m_oServer, sizeof(m_oServer));
if (iSentBytes != iLengthBytes) {
    PrintLine(L"Error sending network message; error: " + IntToStr(WSAGetLastError()));
}
fails with error 10047, WSAEAFNOSUPPORT, "Address family not supported by protocol family."
The output of netsh winsock show catalog (mentioned at the bottom of socket's Remarks) is lengthy but does include several entries mentioning UDP and IPv4.
A possible complication is that this is running in a VMware Fusion guest; Fusion does have an odd setup for networks. I also have a Cisco VPN configured, running back to my office; connecting and disconnecting it makes no difference.
One thing that seems dodgy to me is hard-casting the sockaddr_in m_oServer to sockaddr, but this seems to be normal practice in the Winsock examples I've been reading. It may be required, since the underlying interpretation depends on the protocol family, but it still seems a potential source of error, and I'm not sure how to avoid it.
Any ideas? I am stumped :)
Setup
Windows 7 Pro running on VMWare Fusion 4.1.3
The program is compiled as 32-bit with Embarcadero C++ Builder 2010.
The program is a console program only
Much confusion here. I'll address it point by point for your edification, but if you just want working code, skip to the end.
// Winsock 2.2 is supported in XP
Actually, Winsock 2.2 goes back to NT 4 SP4, which dates it to 1998. Because of that, I wouldn't bother checking oWSAData.wVersion in the error case. There's basically no chance this is going to happen any more.
If broad portability is your goal, I'd target Winsock 1.1, which is all you need for the code you show, and will let the code build and run on anything that supports Winsock, even back to Windows 3.x.
m_oSocket = socket(AF_INET, SOCK_DGRAM /*UDP*/, IPPROTO_UDP);
Bad style. You should use PF_INET here instead of AF_INET. They have the same value, but you're not specifying an address family (AF) here, you're specifying a protocol family (PF). Also, the third parameter can safely be zero, because it's implied by the first two parameters. Again, it's just a style fix, not a functional fix.
int iBroadcast = true; // docs say int sized, but boolean values
Yup. Don't second-guess the docs by using bool here. Remember, Winsock is based on BSD sockets, and that goes back to the days before C++ existed.
m_oServer.sin_addr.S_un.S_addr = m_ulAddress;
You really shouldn't be digging into the internals of the sockaddr_in structure this way. The sockets API has a shortcut for that, which is shorter and hides some of the internal implementation details. It is:
m_oServer.sin_addr.s_addr = m_ulAddress;
Moving on...
if (bind(m_oSocket, ...
Although Remy is right that the bind() call isn't correct, you actually don't need it at all. You can depend on your system's routing layer to send the packet out the right interface. You don't need to "help" it with a bind() call.
you can't broadcast via anything but TCP by default (see the second paragraph in sendto's Remarks);
You've misunderstood what MSDN is telling you. When you see the term "TCP/IP", it often (but not always!) includes UDP. They're using it in that generic sense here.
The MSDN bit you point to talks about TCP/IP because Winsock was created in a world when TCP/IP had not yet won the network protocol wars. They're trying to restrict the discussion to TCP/IP (UDP, really) so you don't get the idea that what they're saying applies to other network transports supported by Winsock stacks in the early days: NetBIOS, IPX, DECNet...
In fact, you can only broadcast (or multicast) using UDP sockets. TCP is point-to-point, only.
One thing that seems dodgy to me is hard-casting the sockaddr_in m_oServer to sockaddr,
That's also part of the multiple network transport support in sockets. In addition to sockaddr_in, there's sockaddr_ipx for IPX, sockaddr_dn for DECnet... Winsock is a C API, not a C++ API, so we can't subclass sockaddr and pass a reference to the base class, or create function overloads for each of the variations. This trick of casting structures is a typical C way to get a kind of polymorphism.
Here's a working example, which builds with MinGW, g++ foo.cpp -o foo.exe -lwsock32:
#include <winsock.h>
#include <iostream>
#include <string>
#include <string.h>
#include <stdint.h>

using namespace std;

int main(int argc, char* argv[])
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(1, 1), &wsa)) {
        cerr << "Failed to init Winsock!" << endl;
        return 1;
    }

    // Get datagram socket to send message on
    SOCKET sd = socket(PF_INET, SOCK_DGRAM, 0);
    if (sd == INVALID_SOCKET) {   // SOCKET is unsigned, so don't test "< 0"
        cerr << "socket() failed: " << WSAGetLastError() << endl;
        return 1;
    }

    // Enable broadcasts on the socket
    int bAllow = 1;
    if (setsockopt(sd, SOL_SOCKET, SO_BROADCAST, (char*)&bAllow,
            sizeof(bAllow)) < 0) {
        cerr << "setsockopt() failed: " << WSAGetLastError() << endl;
        closesocket(sd);
        return 1;
    }

    // Broadcast the request
    string msg = "Hello, world!";
    const int kMsgLen = msg.length();
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    const uint16_t kPort = 54321;
    sin.sin_port = htons(kPort);
    sin.sin_family = AF_INET;
    if (argc == 1) {
        sin.sin_addr.s_addr = INADDR_BROADCAST;
    }
    else if ((sin.sin_addr.s_addr = inet_addr(argv[1])) == INADDR_NONE) {
        cerr << "Couldn't parse IP '" << argv[1] << "'!" << endl;
        return 1;
    }
    int nBytes = sendto(sd, msg.c_str(), kMsgLen, 0,
            (sockaddr*)&sin, sizeof(struct sockaddr_in));
    closesocket(sd);

    // How well did that work out, then?
    if (nBytes < 0) {
        cerr << "sendto() IP " << inet_ntoa(sin.sin_addr) <<
                " failed: " << WSAGetLastError() << endl;
        return 1;
    }
    else if (nBytes < kMsgLen) {
        cerr << "WARNING: Short send, " << nBytes << " bytes! "
                "(Expected " << kMsgLen << ')' << endl;
        return 1;
    }
    else {
        cerr << "Sent " << kMsgLen << "-byte msg to " <<
                inet_ntoa(sin.sin_addr) << ':' << kPort << '.' << endl;
    }
    return 0;
}
It sends to 255.255.255.255 (INADDR_BROADCAST) by default, but if you pass a directed broadcast IP (such as your 192.168.202.255 value) as the first parameter, it will use that instead.
You should not bind() to a broadcast IP address. You need to bind() to an individual network adapter IP instead. If you want to send out a broadcast message, you bind() to the adapter that is going to send the broadcast, and then sendto() the broadcast IP. If you want to receive a broadcast message, you bind() to the specific adapter whose IP matches the broadcast IP being sent to.
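For example, a minimal receiving sketch following that rule (the adapter IP 192.168.202.1 and port 23456 are placeholders for the question's subnet; error handling trimmed):

/* Hypothetical receiver: bind the adapter's own IP, not 192.168.202.255. */
SOCKET rd = socket(PF_INET, SOCK_DGRAM, 0);
struct sockaddr_in me;
memset(&me, 0, sizeof(me));
me.sin_family = AF_INET;
me.sin_port = htons(23456);
me.sin_addr.s_addr = inet_addr("192.168.202.1");  /* placeholder adapter IP */
if (bind(rd, (struct sockaddr *)&me, sizeof(me)) == 0) {
    char buf[512];
    int n = recvfrom(rd, buf, sizeof(buf), 0, NULL, NULL);
    /* n bytes of any datagram sent to 192.168.202.255:23456 are now in buf */
}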

TCP sockets over WLAN

I have a project that uses TCP sockets to communicate between a server and one client. Until now I have run both on one computer, so I have just used the local address "127.0.0.1" for the address to bind and connect to on both sides, and it's worked fine. Now I have a second computer to act as a client, but I don't know how to change the addresses accordingly. The two machines are connected through a network that is not connected to the Internet. Before, the code looked like this -
Server -
struct addrinfo hints;
struct addrinfo *servinfo; // will point to the results

// store the connecting address and size
struct sockaddr_storage their_addr;
socklen_t their_addr_size;

memset(&hints, 0, sizeof hints); // make sure the struct is empty
hints.ai_family = AF_INET;       // IPv4
hints.ai_socktype = SOCK_STREAM; // tcp
hints.ai_flags = AI_PASSIVE;     // use local-host address

// get server info, put into servinfo
if ((status = getaddrinfo("127.0.0.1", port, &hints, &servinfo)) != 0) {
    fprintf(stderr, "getaddrinfo error: %s\n", gai_strerror(status));
    return false;
}

// make socket
fd = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol);
if (fd < 0) {
    printf("\nserver socket failure %m", errno);
    return false;
}

// allow reuse of port
int yes = 1;
if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char *) &yes, sizeof(int)) == -1) {
    perror("setsockopt");
    return false;
}

// unlink and bind
unlink("127.0.0.1");
if (bind(fd, servinfo->ai_addr, servinfo->ai_addrlen) < 0) {
    printf("\nBind error %m", errno);
    return false;
}
Client -
struct addrinfo hints;
struct addrinfo *servinfo; // will point to the results

memset(&hints, 0, sizeof hints); // make sure the struct is empty
hints.ai_family = AF_INET;       // IPv4
hints.ai_socktype = SOCK_STREAM; // tcp
hints.ai_flags = AI_PASSIVE;     // use local-host address

// get server info, put into servinfo
if ((status = getaddrinfo("127.0.0.1", port, &hints, &servinfo)) != 0) {
    fprintf(stderr, "getaddrinfo error: %s\n", gai_strerror(status));
    return false;
}

// make socket
fd = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol);
if (fd < 0) {
    printf("\nclient socket failure %m", errno);
    return false;
}

// connect
if (connect(fd, servinfo->ai_addr, servinfo->ai_addrlen) < 0) {
    printf("\nclient connection failure %m", errno);
    return false;
}
I know it should be simple, but I can't figure out how to change the IPs to get them to work. I tried setting the server computer's IP address in the quotes in these lines -
if ((status = getaddrinfo("127.0.0.1", port, &hints, &servinfo)) != 0)
and
unlink("127.0.0.1");
and then change the address in the client code to the client computer's IP address in this line -
if ((status = getaddrinfo("127.0.0.1", port, &hints, &servinfo)) != 0)
Whenever I do that, it tells me connection refused. I have also tried the opposite, putting the server's address in the client's line and the client's address in the server's lines, along with a few other attempts, but at this point I feel like I am just guessing. So can someone please help me understand how to change this from using the local address on one computer to connecting two computers? Any help is appreciated.
First, unlink("127.0.0.1"); is totally wrong here, don't do that.
Then, you have two computers connected by some network, and both should have IP addresses. Replace 127.0.0.1 with the server's IP address in both the client and the server. The server does not have to know the client's address beforehand; it'll get that information from the accept(2) call. The client needs the server's address to know where to connect. The server needs its own address for the bind(2) call.
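For instance, a minimal sketch using the question's variables on the server side:

/* after listen(): accept() fills in the client's address for you */
their_addr_size = sizeof(their_addr);
int conn = accept(fd, (struct sockaddr *)&their_addr, &their_addr_size);
/* their_addr now holds the peer's address; nothing was known beforehand */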
The main problem is that you're putting AI_PASSIVE in your client code. AI_PASSIVE is meant for servers only (that's what it signals).
Also, on the server side you should first of all not call unlink; that's for AF_UNIX sockets only, not AF_INET. Secondly, you don't need to put "127.0.0.1" in the getaddrinfo line on the server side. It's better to use NULL to bind to all available addresses.
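Concretely, a sketch of the corrected server-side lookup (only these lines change):

hints.ai_flags = AI_PASSIVE; /* server side only: ask for a wildcard address */
/* NULL host + AI_PASSIVE binds all available local addresses */
if ((status = getaddrinfo(NULL, port, &hints, &servinfo)) != 0) {
    fprintf(stderr, "getaddrinfo error: %s\n", gai_strerror(status));
    return false;
}
/* and drop the unlink() call entirely; it is for AF_UNIX paths */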
If you change those things, I believe your code should work. Note, however, that you're actually supposed to loop over the getaddrinfo results using the ai_next pointer, trying each one and using the first that succeeds.
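A minimal sketch of that loop on the client side (the standard getaddrinfo idiom, with the question's variable names):

struct addrinfo *p;
int fd = -1;
for (p = servinfo; p != NULL; p = p->ai_next) {
    fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
    if (fd < 0)
        continue;
    if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
        break;  /* success */
    close(fd);
    fd = -1;
}
freeaddrinfo(servinfo);
if (fd < 0) {
    /* every candidate address failed */
}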
Connection refused usually means your client received a RST in response to its SYN. This is most often caused by the lack of a listening socket on the server, on the port you're trying to connect to.
Run your server
On the CLI, type netstat -ant. Do you see an entry that's in LISTEN state on your port?
Something like:
tcp4 0 0 *.3689 *.* LISTEN
I bet you do not, and therefore have a problem with your server's listening socket. I also bet the changes you made to this line:
if ((status = getaddrinfo("127.0.0.1", port, &hints, &servinfo)) != 0) {
weren't quite right. Try changing that IP to 0.0.0.0 on the server to tell it to bind to any IP on the system. On the client, that line should have the IP address of the server. You should also remove the unlink() call in the server; it's unnecessary.
If you do have a listening socket, then there's probably a firewall or something between your boxes that's blocking the SYN. Try typing service iptables stop on the CLI of both systems.
