I'm implementing a simple UDP server-client app. The main logic is: the server starts sending broadcasts -> clients respond -> the server stops broadcasting and performs some work.
I'm stuck with the broadcasting loop:
sUDP::sUDP(QObject *parent) : QObject(parent)
{
    serverSocket = new QUdpSocket(this);
    serverIp = new QHostAddress("192.168.1.2");
    pickUpThePhone = false;
    if (serverSocket->state() != serverSocket->BoundState) {
        if (!serverSocket->bind(*serverIp, 4321)) {
            qFatal("Error binding server");
        }
    }
    connect(serverSocket, SIGNAL(readyRead()), SLOT(readSocket()));
    while (!pickUpThePhone) {
        QByteArray Data;
        Data.append("server");
        serverSocket->writeDatagram(Data, Data.size(), QHostAddress("192.168.255.255"), 1234);
    }
}
The readyRead() signal is actually never emitted, so broadcasting never stops:
void sUDP::readSocket()
{
    qDebug() << "read";
    while (serverSocket->hasPendingDatagrams()) {
        QByteArray buffer;
        buffer.resize(serverSocket->pendingDatagramSize());
        QHostAddress sender;
        quint16 senderPort;
        serverSocket->readDatagram(buffer.data(), buffer.size(), &sender, &senderPort);
        if (strcmp(buffer.data(), "stop") == 0) {
            pickUpThePhone = true;
        }
    }
}
The client works as it should - it responds to the "server" datagram with a "stop" message, but it looks like the while loop is never interrupted.
Any help will be useful, thank you.
It doesn't work because your program is busy sending broadcasts. Your while loop never lets the Qt event loop do its work, so your program can't do anything else.
You're also sending broadcasts constantly; your computer will do nothing but spam the network.
The solution to both problems is a timer. Create a repeating timer that sends out the broadcast once every second (or more, or less). Stop or pause the timer when you get a response.
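A minimal sketch of that idea, based on the sUDP class from the question; broadcastTimer and the sendBroadcast() slot are hypothetical additions to the class, and the 1-second interval is just an example:

sUDP::sUDP(QObject *parent) : QObject(parent)
{
    serverSocket = new QUdpSocket(this);
    if (!serverSocket->bind(QHostAddress("192.168.1.2"), 4321))
        qFatal("Error binding server");
    connect(serverSocket, SIGNAL(readyRead()), SLOT(readSocket()));

    // Repeating timer instead of the busy while-loop: the event loop keeps
    // running, so readyRead() can actually be delivered.
    broadcastTimer = new QTimer(this);                    // hypothetical QTimer* member
    connect(broadcastTimer, SIGNAL(timeout()), SLOT(sendBroadcast()));
    broadcastTimer->start(1000);                          // broadcast once per second
}

void sUDP::sendBroadcast()                                // hypothetical slot
{
    serverSocket->writeDatagram(QByteArray("server"),
                                QHostAddress("192.168.255.255"), 1234);
}

void sUDP::readSocket()
{
    while (serverSocket->hasPendingDatagrams()) {
        QByteArray buffer;
        buffer.resize(serverSocket->pendingDatagramSize());
        serverSocket->readDatagram(buffer.data(), buffer.size());
        if (buffer.startsWith("stop"))
            broadcastTimer->stop();                       // got a response, stop broadcasting
    }
}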
I implemented a small TCP client on an STM32F7 with FreeRTOS, LwIP and the netconn API.
I can establish a connection with the server and send some data to the network. My problem is a huge delay between the command and when I can actually see the Ethernet data on the network (seconds). It seems like the data is buffered and then sent in one go.
I'm aware of the TCP_NODELAY flag and I set it (with tcp_nagle_disable(conn->pcb.tcp)), but it doesn't make a difference. The Ethernet payload is around 50 bytes, TCP_MSS is 1460.
The netconn API sees the data as sent (the netbuf structure is updated, tcp_write() and tcp_output() are called with no errors), but I have the impression that after low_level_output() is called and the data buffer is passed to the DMA (with HAL_ETH_TransmitFrame()) it stays there until something happens and 3 or 4 Ethernet packets are sent in one go.
I don't want to wait forever for a reply, so I set a timeout on netconn_recv() by enabling LWIP_SO_RCVTIMEO and calling netconn_set_recvtimeout(). I set up the server to answer with an echo, but even with a timeout of 500 ms I lose most of the replies.
Here is some code:
conn = netconn_new(NETCONN_TCP);
if (conn != NULL)
{
    err = netconn_bind(conn, IP_ADDR_ANY, 0);
    if (err == ERR_OK)
    {
        connect_error = netconn_connect(conn, &dest_addr, SRV_PORT);
        if (connect_error == ERR_OK)
        {
            // set a timeout to avoid waiting forever
            netconn_set_recvtimeout(conn, 500);
            // TCP_NODELAY
            tcp_nagle_disable(conn->pcb.tcp);
            osSemaphoreRelease(tcpsem); // signal tcpsend
            if (netconn_recv(conn, &buf) == ERR_OK)
            {
                // Parse message
                do
                {
                    // do stuff..
                }
                while (netbuf_next(buf) > 0);
                netbuf_delete(buf);
            } else {
                // TIMEOUT
            }
        }
        /* Close connection */
        netconn_close(conn);
    }
    netconn_delete(conn);
}
vTaskDelete(tcpsendTaskHandle);
vTaskDelete(NULL);
tcpsend
void tcpsend (void *data, size_t len)
{
    // send the data to the connected connection
    netconn_write(conn, data, len, NETCONN_NOFLAG);
}
tcpsend_thread
static void tcpsend_thread (void *arg)
{
    for (;;)
    {
        // semaphore must be taken before accessing the tcpsend function
        osSemaphoreAcquire(tcpsem, portMAX_DELAY);
        // build msg
        if (ethPrepare(&ethMsg) == ERR_OK)
        {
            // send the data to the server
            tcpsend(&ethMsg, ethMsg.ncSize);
        }
        vTaskDelay(100);
    }
}
Found the problem:
I forgot to also configure the Tx buffer memory as not cacheable/bufferable.
Once I added the memory configuration in the linker script and in ethernetif.c for the Tx buffer as well (I had it only for the Rx buffer), I could see the Ethernet packets right away.
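(For reference: on the STM32F7 the same effect is often achieved with an MPU region covering the Ethernet DMA descriptors and buffers. A minimal sketch using the STM32 HAL follows; the base address and size are placeholders and must match where your linker script actually places the buffers.)

// Sketch: mark the region holding the Ethernet DMA descriptors and Tx/Rx
// buffers as non-cacheable, non-bufferable. Address and size are placeholders.
void MPU_Config(void)
{
    MPU_Region_InitTypeDef MPU_InitStruct = {0};

    HAL_MPU_Disable();

    MPU_InitStruct.Enable           = MPU_REGION_ENABLE;
    MPU_InitStruct.BaseAddress      = 0x20020000;            // placeholder: Ethernet buffer region
    MPU_InitStruct.Size             = MPU_REGION_SIZE_16KB;  // placeholder size
    MPU_InitStruct.AccessPermission = MPU_REGION_FULL_ACCESS;
    MPU_InitStruct.IsBufferable     = MPU_ACCESS_NOT_BUFFERABLE;
    MPU_InitStruct.IsCacheable      = MPU_ACCESS_NOT_CACHEABLE;
    MPU_InitStruct.IsShareable      = MPU_ACCESS_SHAREABLE;
    MPU_InitStruct.Number           = MPU_REGION_NUMBER0;
    MPU_InitStruct.TypeExtField     = MPU_TEX_LEVEL0;
    MPU_InitStruct.SubRegionDisable = 0x00;
    MPU_InitStruct.DisableExec      = MPU_INSTRUCTION_ACCESS_ENABLE;

    HAL_MPU_ConfigRegion(&MPU_InitStruct);
    HAL_MPU_Enable(MPU_PRIVILEGED_DEFAULT);
}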
I have an Arduino running a webserver that is receiving data every 2 seconds. I can connect to its IP address by typing it into the browser. I also created a web app that pulls data from this IP address every time new data comes in. The issue is that I need to access the IP address with the web app while another program is accessing it; currently only one program can access it at a time. I would like to have a Python script pulling data from the IP address constantly while still allowing the web app to connect and view it live. How can I achieve this?
Arduino code with a lot of other stuff removed...
WiFiServer server(80); //server socket
WiFiClient client_1 = server.available();

void setup() {
    Serial.begin(9600);
    enable_WiFi(); // function to enable wifi
    connect_WiFi(); // function to connect to wifi
    server.begin();
}

void loop() {
    client_1 = server.available();
    if (client_1) {
        printWEB(client_1); // this posts the data as text to the web IP address
    }
    delay(2000);
}
void printWEB(WiFiClient client) {
    if (client) {                              // if you get a client,
        Serial.println("new client");          // print a message out the serial port
        String currentLine = "";               // make a String to hold incoming data from the client
        while (client.connected()) {           // loop while the client's connected
            if (client.available()) {          // if there's bytes to read from the client,
                char c = client.read();        // read a byte, then
                Serial.write(c);               // print it out the serial monitor
                if (c == '\n') {               // if the byte is a newline character
                    // if the current line is blank, you got two newline characters in a row.
                    // that's the end of the client HTTP request, so send a response:
                    if (currentLine.length() == 0) {
                        // HTTP headers always start with a response code (e.g. HTTP/1.1 200 OK)
                        // and a content-type so the client knows what's coming, then a blank line:
                        client.println("HTTP/1.1 200 OK");
                        client.println("Access-Control-Allow-Origin: *");
                        client.println("Access-Control-Allow-Methods: GET");
                        client.println("Content-type: application/json");
                        client.println();
                        moistureReading = analogRead(A1);
                        tmpString = dataPosted;
                        tmpString.replace("%moistureData%", String(moistureReading));
                        client.flush();
                        client.print(tmpString);
                        // The HTTP response ends with another blank line:
                        client.println();
                        // break out of the while loop:
                        break;
                    }
                    else {                     // if you got a newline, then clear currentLine:
                        currentLine = "";
                    }
                }
                else if (c != '\r') {          // if you got anything else but a carriage return character,
                    currentLine += c;          // add it to the end of the currentLine
                }
            }
        }
        // close the connection:
        client.stop();
        Serial.println("client disconnected");
    }
}
The solution here was to take the delay out of the Arduino code by simply removing the delay() call (it turns out this is not required). I then had to add a delay to all of my fetching code: the React.js app needed a sleep function and the Python script needed a sleep function. This allows the two platforms to collect data from the same IP address simultaneously. I needed a minimum of about 5-10 seconds of sleep in both apps for this to work; I initially tried 1 second for each and it didn't work.
The webserver as programmed on your Arduino can only serve one request from a client at a time, not two simultaneously.
If two clients are doing requests in a loop, and are doing that fairly quickly, there will be situations where one client is blocking access for the other.
Try making the two clients much slower, i.e. retrieve information at a much lower frequency, just to see if the Arduino can keep up then.
Also, make the webserver on the Arduino as fast as possible, and call it as often as possible.
So, don't put delay()s in loop(), or anywhere else, and try to do the analogRead() and the response string preparation outside of the webserver code, in loop() every x seconds using millis(), so as little time as possible is spent in the webserver code. A sketch of that pattern is below.
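A minimal sketch of that millis() pattern, reusing the names from the question (moistureReading, dataPosted and tmpString are assumed to be declared elsewhere in the full sketch, and the 2-second interval is just an example):

unsigned long lastReading = 0;
const unsigned long readingInterval = 2000;   // example interval in ms

void loop() {
    // Non-blocking: refresh the reading and the response string every
    // readingInterval milliseconds instead of delaying the whole loop.
    if (millis() - lastReading >= readingInterval) {
        lastReading = millis();
        moistureReading = analogRead(A1);
        tmpString = dataPosted;
        tmpString.replace("%moistureData%", String(moistureReading));
    }

    // Serve clients as often as possible, with no delay() anywhere.
    WiFiClient client = server.available();
    if (client) {
        printWEB(client);   // printWEB() would now only have to send the pre-built tmpString
    }
}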
I am working with a protocol on Windows 10 Pro with VC++ 2013 Community; it basically involves three steps:
the client sends a GET header (e.g. authentication, etc.)
the server returns an HTTP header (e.g. status code 200 if everything is fine)
then the server keeps sending a binary data stream after the HTTP header
I send the header and call recv() in blocking mode to receive data from the server through the TCP stream. However, recv() blocks and never returns.
I used Wireshark to "follow" the TCP stream, and it shows that the server does keep sending binary data, and I do see ACK messages from the client side acknowledging every segment it receives. However, recv() still blocks and never returns.
I tried to use:
pure C implementation over WinSock
C# using TcpClient
C++ with Boost Asio
non-blocking WinSock (as in this article)
The first version was implemented with WinHTTP, and it eventually got a timeout.
None of them can receive any data. However, Wireshark can still tell that the server keeps sending binary data.
I tried to turn off my firewall, but the problem is still there.
The weirdest thing is that my first implementation actually did successfully get data from recv() about two days ago. On that day, recv() returned three times and then blocked again. The next day, with the same implementation, recv() was never able to return anything.
I am really confused. Thank you!
Here is the code, blocking Winsock version:
WSADATA wsaData;
int iResult;
SOCKET ConnectSocket = INVALID_SOCKET;
struct sockaddr_in clientService;
char recvbuf[DEFAULT_BUFLEN];
int recvbuflen = DEFAULT_BUFLEN;

//----------------------
// Initialize Winsock
iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
if (iResult != NO_ERROR) {
    printf("WSAStartup failed: %d\n", iResult);
    return 1;
}

//----------------------
// Create a SOCKET for connecting to server
ConnectSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (ConnectSocket == INVALID_SOCKET) {
    printf("Error at socket(): %ld\n", WSAGetLastError());
    WSACleanup();
    return 1;
}

//----------------------
// The sockaddr_in structure specifies the address family,
// IP address, and port of the server to be connected to.
clientService.sin_family = AF_INET;
auto ip = gethostbyname(name);
clientService.sin_addr = *(reinterpret_cast<struct in_addr *>(ip->h_addr));
clientService.sin_port = htons(DEFAULT_PORT);

//----------------------
// Connect to server.
iResult = connect(ConnectSocket, (SOCKADDR*)&clientService, sizeof(clientService));
if (iResult == SOCKET_ERROR) {
    closesocket(ConnectSocket);
    printf("Unable to connect to server: %ld\n", WSAGetLastError());
    WSACleanup();
    return 1;
}

// Send an initial buffer
iResult = send(ConnectSocket, sendbuf, (int)strlen(sendbuf), 0);
if (iResult == SOCKET_ERROR) {
    printf("send failed: %d\n", WSAGetLastError());
    closesocket(ConnectSocket);
    WSACleanup();
    return 1;
}
printf("Bytes Sent: %ld\n", iResult);

// shutdown the connection since no more data will be sent
iResult = shutdown(ConnectSocket, SD_SEND);
if (iResult == SOCKET_ERROR) {
    printf("shutdown failed: %d\n", WSAGetLastError());
    closesocket(ConnectSocket);
    WSACleanup();
    return 1;
}

// Receive until the peer closes the connection
do {
    iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
    if (iResult > 0)
        printf("Bytes received: %d\n", iResult);
    else if (iResult == 0)
        printf("Connection closed\n");
    else
        printf("recv failed: %d\n", WSAGetLastError());
} while (iResult > 0);

// cleanup
closesocket(ConnectSocket);
WSACleanup();
return 0;
Since the shutdown code is copied from the Microsoft page (https://msdn.microsoft.com/en-us/library/windows/desktop/ms740121(v=vs.85).aspx), I suppose that is how they indicate that sending has completed. I believe this explains the issue you're having (from the above page):
Note When issuing a blocking Winsock call such as recv, Winsock may need to wait for a network event before the call can complete. Winsock performs an alertable wait in this situation, which can be interrupted by an asynchronous procedure call (APC) scheduled on the same thread. Issuing another blocking Winsock call inside an APC that interrupted an ongoing blocking Winsock call on the same thread will lead to undefined behavior, and must never be attempted by Winsock clients.
I actually just realized that I've only ever used non-blocking sockets and always have the receiving code on its own thread. So try adding this:
u_long iMode = 1;                                  // non-zero puts the socket into non-blocking mode
iResult = ioctlsocket(ConnectSocket, FIONBIO, &iMode);
From here:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms738573(v=vs.85).aspx
Note: you're going to get mostly a series of failed attempts to receive data from the socket, so you'll need to handle those, and you'll also not want to spin in a tight loop.
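With the socket switched to non-blocking as above, a minimal sketch of the receive loop, reusing ConnectSocket, recvbuf and recvbuflen from the code in the question; the Sleep(10) back-off is just an example:

// Non-blocking receive loop: WSAEWOULDBLOCK just means "no data yet",
// so it is not treated as an error.
for (;;) {
    iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
    if (iResult > 0) {
        printf("Bytes received: %d\n", iResult);     // process the data here
    } else if (iResult == 0) {
        printf("Connection closed\n");               // peer closed the connection
        break;
    } else {
        int err = WSAGetLastError();
        if (err == WSAEWOULDBLOCK) {
            Sleep(10);                               // no data yet; avoid a tight loop
        } else {
            printf("recv failed: %d\n", err);
            break;
        }
    }
}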
I have to send data reliably from a client using QTcpSocket to a server using QTcpServer.
The connection is established once and the data is sent line by line with a terminator of "\r\n" (the server splits the incoming data on this terminator).
I need to know which lines were sent successfully and which were not.
This works as expected. But if I unplug the server's network cable from the network, the client keeps writing data and the "bytesWritten" signal is still emitted.
No write error occurs and the "error" signal is not emitted. It seems QTcpSocket writes the data to an internal buffer even if the TCP connection is lost. Using flush() has no effect.
Example code based on the fortune client:
Client::Client(QWidget *parent)
    : QDialog(parent), networkSession(0)
{
    m_messageCount = 0;
    QString ipAddress = "192.168.1.16";
    m_socket = new QTcpSocket(this);
    //No effect...
    //m_socket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);
    connect(m_socket, SIGNAL(connected()), this, SLOT(Connected()));
    connect(m_socket, SIGNAL(disconnected()), this, SLOT(DisConnected()));
    connect(m_socket, SIGNAL(bytesWritten(qint64)), this, SLOT(OnBytesWritten(qint64)));
    connect(m_socket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(displayError(QAbstractSocket::SocketError)));
}

void Client::connectToHost()
{
    m_socket->connectToHost("192.168.1.16", 1234);
}

void Client::Connected()
{
    qDebug() << "Connected()";
    QTimer::singleShot(1000, this, SLOT(SendNextRecord()));
}

void Client::DisConnected()
{
    qDebug() << "DisConnected()";
}

void Client::SendNextRecord()
{
    m_messageCount++;
    QByteArray singleRecord = QString("Nr: %1 Some Text").arg(m_messageCount).toUtf8();
    singleRecord.append("\r\n");
    Q_ASSERT(m_socket->isValid());
    qDebug() << "Sending: " << singleRecord;
    //bytesSend always > 0
    qint64 bytesSend = m_socket->write(singleRecord);
    //No effect
    m_socket->flush();
    qDebug() << "bytes Send:" << bytesSend;
}

//Signal is still emitted even if network cable is unplugged
void Client::OnBytesWritten(qint64 bytes)
{
    qDebug() << "OnBytesWritten:" << bytes;
    //No effect
    m_socket->flush();
    QTimer::singleShot(1000, this, SLOT(SendNextRecord()));
}

//Signal not emitted even if network cable is unplugged
void Client::displayError(QAbstractSocket::SocketError socketError)
{
    qDebug() << "Socket error";
}
Can I change this behaviour?
I'm trying to build a server with a PIC24F.
This is a piece of the code I'm using:
switch (TCPServerState) {
    case SM_HOME:
        // Allocate a socket for this server to listen and accept connections on
        socket.Socket = TCPOpen(0, TCP_OPEN_SERVER, SERVER_PORT, TCP_PURPOSE_GENERIC_TCP_SERVER);
        if (socket.Socket != INVALID_SOCKET) {
            TCPServerState = SM_LISTENING;
        }
        break;

    case SM_LISTENING:
        // See if anyone is connected to us
        //if(TCPIsConnected(socket.Socket)) {
        if (!TCPWasReset(socket.Socket)) {
            if (socket.Connected == 0) {
                socket.Connected = 1;
                printf("Socket is CONNECTED: %d\n", socket.Socket);
            }
            uint16_t avaible = TCPIsGetReady(socket.Socket);
            // Some stuff
        }
        else if (socket.Connected == 1) {
            printf("Socket RESET: %d\n", socket.Socket);
            TCPServerState = SM_CLOSING;
        }
        break;

    case SM_CLOSING:
        // Close the socket connection.
        socket.Connected = 0;
        TCPClose(socket.Socket);
        TCPServerState = SM_HOME;
        printf("Socket is CLOSED: %d\n", socket.Socket);
        break;
}
Everything works fine if I close my client socket properly, but if I disconnect the Ethernet cable I am not able to detect the disconnection, and my code does not close the socket because TCPWasReset stays FALSE (or TCPIsConnected stays TRUE).
So how can I detect the disconnection of the network cable (without adding a software keep-alive implementation)?
Thanks
Check a few items:
Call TickInit(); before StackInit(); (see the init-order sketch after this list)
Select the correct TIMER1 clock source for your application - internal clock or external clock (T1CON.TCS)
Otherwise, just step through the keepalive logic in TCP.C with a debugger; it should default to 10 seconds in the latest Microchip Library for Applications TCP/IP stack, 5.42.08.
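For the first item, a minimal sketch of the usual init and main-loop order from the Microchip TCP/IP stack (MLA 5.x) demo projects; InitAppConfig() and StackApplications() come from those demos and may differ in your project:

// Sketch: typical order from the stack demo applications. The keepalive and
// timeout logic in TCP.C relies on the tick module, so TickInit() must run
// before StackInit().
TickInit();          // start the tick timer (TIMER1) used by the stack's timeouts
InitAppConfig();     // load IP/MAC configuration (as in the demo apps)
StackInit();         // initialize the TCP/IP stack

while (1) {
    StackTask();         // handle incoming packets, timeouts, keepalives
    StackApplications(); // run the enabled stack application modules
    // ... your TCPServerState machine from above goes here ...
}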