Proper way of notifying the application that DPDK received messages - intel

Currently, I am using DPDK and sending/receiving packets through rte_rings. I am having difficulty finding the proper way to notify the application that DPDK has received incoming messages.
In order to check whether the rte_ring has received data or not, I run a busy loop on the rte_ring.
Here is an example:
while (1) {
    if (rte_ring_dequeue(rx_ring, &_msg) < 0) {
        usleep(5);
    } else {
        recv_msg = (char *) _msg;
        if (chara_debug) printf("[%d] Server merge data::[%.24s...]__length::[%ld]\n", batched_packets, recv_msg, strlen(recv_msg));
        collect_packets++;
        if (collect_packets > MERGE_PACKETS) break;
    }
}
However, my fellow developers say that this is neither an efficient nor a proper way of checking for received messages. Busy polling should only be done inside the DPDK API, not in the application.
Is there a way for DPDK to send a signal to the application, so that the application checks the rte_ring only when a message has actually been received?

Well, the direct answer is to use the DPDK event library: http://doc.dpdk.org/guides/prog_guide/eventdev.html
But it is not that smooth. Unless you have hardware which directly supports the event model, you still need at least one RX core to poll (i.e. run the busy loop), as shown in this diagram:
http://doc.dpdk.org/guides/prog_guide/eventdev.html#api-walk-through
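In the meantime, a common software workaround (this is not part of the DPDK API; names such as rx_ring and rx_poll_loop below are placeholders) is to confine the busy loop to one dedicated lcore and have it wake the application thread through a condition variable, so the application itself never spins. A rough sketch:

#include <pthread.h>
#include <rte_ring.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
static struct rte_ring *rx_ring;              /* set up elsewhere */

/* Runs on its own lcore: the only place that spins. */
static int rx_poll_loop(void *arg)
{
    (void)arg;
    void *msg;
    for (;;) {
        if (rte_ring_dequeue(rx_ring, &msg) == 0) {
            pthread_mutex_lock(&lock);
            /* hand 'msg' over to the application here (list, queue, ...) */
            pthread_cond_signal(&data_ready); /* wake the application */
            pthread_mutex_unlock(&lock);
        }
    }
    return 0;
}

/* Application thread: sleeps until the polling lcore signals it. */
static void app_wait_for_packets(void)
{
    pthread_mutex_lock(&lock);
    /* real code would loop on a predicate to guard against spurious wakeups */
    pthread_cond_wait(&data_ready, &lock);
    /* consume whatever the polling lcore handed over */
    pthread_mutex_unlock(&lock);
}

This keeps all of the polling on one core (the same role the eventdev RX adapter plays) while the rest of the application stays blocked until there is actual work.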

Related

Qt: Detect a QTcpSocket disconnection in a console app when the user closes it

My question title should be enough. I already tried (without success):
Using a C-style destructor in a function: __attribute__((destructor)):
void sendToServerAtExit() __attribute__((destructor)) {
    mySocket->write("$%BYE_CODE%$");
}
The application destructor is called, but the socket is already disconnected and I can't write to the server.
Using the standard C function atexit(), but the TCP connection is already lost so I can't send anything to the server.
atexit(sendToServerAtExit); // the same function as in point 1
The only solution I found is to check every second whether all connected sockets are still connected, but I don't want to do something so inefficient; it's only a temporary solution. Also, I want other apps (even web ones) to be able to join the chat room of my console app, and I don't want to request data every second.
What should I do?
Handle the signal below (QTcpSocket inherits from QAbstractSocket):
void QAbstractSocket::stateChanged(QAbstractSocket::SocketState socketState)
Inside the connected slot, check whether socketState is QAbstractSocket::ClosingState.
QAbstractSocket::ClosingState indicates the socket is about to close.
http://doc.qt.io/qt-5/qabstractsocket.html#SocketState-enum
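For example, a minimal sketch of hooking that signal up, assuming 'socket' is a QTcpSocket* owned by the surrounding object:

connect(socket, &QAbstractSocket::stateChanged,
        this, [](QAbstractSocket::SocketState state) {
    if (state == QAbstractSocket::ClosingState) {
        // The connection is about to close; react locally here.
        // (Writing to the peer at this point may already be too late.)
    }
});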
You can connect a slot to the disconnected signal.
connect(m_socket, &QTcpSocket::disconnected, this, &Class::clientDisconnected);
Check the documentation.
You can also find out which client has disconnected by using a slot like this:
void Class::clientDisconnected()
{
    QTcpSocket* client = qobject_cast<QTcpSocket*>(sender());
    if (client)
    {
        // Do something
        client->deleteLater();
    }
    else
    {
        // Handle error
    }
}
This method is useful if you have a connection pool. You can use it with a single connection as well, but do not forget to set the pointer to nullptr after client->deleteLater().
If I understand your question correctly, you want to send data over TCP to notify the remote computer that you are closing the socket.
Technically this can be done in Qt by listening to the QIODevice::aboutToClose() or QAbstractSocket::stateChanged() signals.
However, if you exit your program gracefully and close the QTcpSocket, a FIN packet is sent to the remote computer. This means that the program running on the remote computer
will be notified that the TCP connection has finished. For instance, if the remote program is also using QTcpSocket, the QAbstractSocket::disconnected()
signal will be emitted.
The real issues arise when one of the programs does not exit gracefully (crash, hardware issue, cable unplugged, etc.). In this case, the TCP FIN packet will
not be sent and the remote computer will never be notified that the other side of the TCP connection is gone. The TCP connection will just time out after a few minutes.
However, in this case you cannot send your final piece of data to the server either.
In the end, the only solution is to send an "I am here" packet every now and then. Even though you claim it is inefficient, it is a widely used technique and it also has the advantage that it works.
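For what it is worth, one possible sketch of such a heartbeat on the Qt side; 'socket', the "$%PING%$" payload and the 10-second interval are made-up placeholders:

QTimer *heartbeat = new QTimer(this);
connect(heartbeat, &QTimer::timeout, this, [this]() {
    if (socket->state() == QAbstractSocket::ConnectedState)
        socket->write("$%PING%$");    // the server resets its "last seen" timestamp
});
heartbeat->start(10 * 1000);          // a client that misses a few pings is considered gone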

How does a non-blocking web server work?

I'm trying to understand the idea of a non-blocking web server, and it seems like there is something I'm missing.
I understand there are several reasons why a web request can block (pseudocode):
CPU bound
string on_request(arg)
{
    DO_SOME_HEAVY_CPU_CALC
    return "done";
}
IO bound
string on_request(arg)
{
    DO_A_CALL_TO_EXTERNAL_RESOURCE_SUCH_AS_WEB_IO
    return "done";
}
sleep
string on_request(arg)
{
    sleep(VERY_VERY_LONG_TIME);
    return "done";
}
Can all three benefit from a non-blocking server?
How do the cases that do benefit from a non-blocking web server actually achieve it?
I mean, when looking at the Tornado server documentation, it seems
like it "frees" the thread. I know that a thread can be put to sleep
and wait for a signal from the operating system (at least in Linux);
is this the meaning of "freeing" the thread? Or is it some higher-level
implementation, something that actually creates a new thread
to wait for new requests instead of the "sleeping" one?
Am I missing something here?
Thanks
Basically, non-blocking socket I/O works by using polling and a state machine. So your scheme for many connections would be something like this:
Create many sockets and make them nonblocking
Switch the state of them to "connect"
Initiate the connect operation on each of them
Poll all of them until some events fire up
Process the fired up events (connection established or connection failed)
Switch the state of those that established to "sending"
Prepare the Web request in a buffer
Poll "sending" sockets for WRITE operation
Send the data for those which got the WRITE event set
For those which have all the data sent, switch the state to "receiving"
Poll "receiving" sockets for READ operation
For those which have the READ event set, perform read and process the read data according to the protocol
Repeat if the protocol is bidirectional, or close the socket if it is not
Of course, at each stage you need to handle errors, and keep in mind that the state of each socket may be different (one may be connecting while another is already reading).
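To make the steps above concrete, here is a compressed sketch of the connect/send/receive states for a single socket using poll(). Error handling is trimmed, 'addr' and the request line are placeholders, and a real server or client would keep a state per socket and poll many of them in one call:

#include <cstring>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

int fetch_once(const struct sockaddr_in &addr)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, O_NONBLOCK);                /* make it non-blocking */

    /* "connecting": the call returns immediately with EINPROGRESS */
    connect(fd, (const struct sockaddr *)&addr, sizeof(addr));
    struct pollfd pfd = { fd, POLLOUT, 0 };
    poll(&pfd, 1, -1);                             /* fires once the connect finishes */

    /* "sending": wait until the socket is writable, then push the request */
    const char *req = "GET / HTTP/1.0\r\n\r\n";    /* placeholder request */
    poll(&pfd, 1, -1);
    send(fd, req, strlen(req), 0);

    /* "receiving": wait until there is data, then read it */
    pfd.events = POLLIN;
    poll(&pfd, 1, -1);
    char buf[4096];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);     /* 0 would mean the peer closed */

    close(fd);
    return (int)n;
}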
Regarding polling I have posted an article about how different polling methods work here: http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/ - I suggest you check it.
To benefit from a non-blocking server, your code must also be non-blocking - you can't just run blocking code on a non-blocking server and expect better performance. For example, you must remove all calls to sleep() and replace them with non-blocking equivalents like IOLoop.add_timeout (which in turn involves restructuring your code to use callbacks or coroutines).
How To Use Linux epoll with Python http://scotdoyle.com/python-epoll-howto.html may give you some pointers on this topic.

Detecting Serial port disconnect in Chrome App

I'm using Chrome's Serial Port API (http://developer.chrome.com/apps/serial.html) in a Web App.
The problem I have is that pretty much all serial ports now are implemented via USB devices. If the user disconnects (or just resets) the USB device, I don't have any way of knowing. Not only that, but because the app hasn't disconnected the port in Chrome (because it didn't know), bad things happen when the USB device is plugged back in (on Linux it just gets a different name, but on Windows it is not usable at all).
The best I can manage is:
var checkConnection = function() {
    chrome.serial.getControlSignals(connectionInfo.connectionId, function (sigs) {
        var connected = "cts" in sigs;
        if (!connected) console.log("Disconnected");
    });
}; // called every second or so
Is there a better way? A callback would be ideal!
It looks like it should be safe on all platforms to assume that getting a read callback with 0 bytes means EOF, which in turn is a good indication that the device has been disconnected.
chrome.serial.read(connectionId, function(readInfo) {
    if (readInfo.bytesRead === 0) {
        // Safely assume the device is gone. Clean up.
        chrome.serial.close(connectionId);
        // ...
    }
});
The serial API will be improving over the next few weeks (in Canary at least) to add stability improvements, an event-based read API, and the ability to more clearly detect timeouts and error conditions (like a disconnected device). You can track that progress at http://crbug.com/307184.

In VxWorks, under what circumstances will the send() API get stuck or take more time?

I am using two machines, A and B, both with the same VxWorks image and the same hardware; the only difference is the application. Suppose machine A is the server and machine B is the client. While communicating over Ethernet, the client machine is not able to send the data: it gets stuck in send() and the task state goes to PEND.
wState = send(vstCCEUSerSocket.wCCEUAcceptFD, (char *)vstCCEUAppTask.rgubyCCEUTxPkt,
              sizeof(vstCCEUAppTask.rgubyCCEUTxPkt), 0);
/* logMsg("\nTrmtd = %d\t", wState); */
if (wState == ERROR)
{
    perror("write");
    /* close the FD */
}
From the VxWorks OS Libraries API Reference:
On pages 497/498 you can find info about connect(), but there is also a connectWithTimeout().
On pages 1203/1204 you might find some interesting items for TCP sockets, for example the KEEP_ALIVE option.
If you rely on a quick connection time and you want to keep control, you can combine connectWithTimeout() with keep-alive.
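As a rough sketch of that combination (all names are placeholders, and the exact header list varies a little between VxWorks versions):

#include <vxWorks.h>
#include <sockLib.h>     /* socket(), connectWithTimeout(), setsockopt() */
#include <ioLib.h>       /* close() */
#include <netinet/in.h>

int connectBounded(struct sockaddr_in *serverAddr)
{
    int sockFd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockFd == ERROR)
        return ERROR;

    struct timeval connTimeout;
    connTimeout.tv_sec  = 5;                 /* give up connecting after 5 seconds */
    connTimeout.tv_usec = 0;

    if (connectWithTimeout(sockFd, (struct sockaddr *)serverAddr,
                           sizeof(*serverAddr), &connTimeout) == ERROR)
    {
        close(sockFd);
        return ERROR;
    }

    int keepAlive = 1;                       /* let TCP probe an idle, dead peer */
    setsockopt(sockFd, SOL_SOCKET, SO_KEEPALIVE,
               (char *)&keepAlive, sizeof(keepAlive));
    return sockFd;                           /* ready for send()/recv() */
}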
It may take me another day to dig up old code and check how I solved this in one of my projects.
VxWorks 5.5 Network Programmer's Guide - Stream Sockets

TCP client-server SIGPIPE

I am designing and testing a client-server program based on TCP sockets (Internet domain). Currently, I am testing it on my local machine and am not able to understand the following about SIGPIPE.
1. SIGPIPE appears quite randomly. Can it be deterministic?
The first tests involved a single small (25-character) send operation from the client and a corresponding receive at the server. The same code, on the same machine, either runs successfully or fails with SIGPIPE, totally out of my control. The failure rate is about 45% of the time (quite high). So, can I tune the machine in any way to minimize this?
2. The second round of testing was to send 40000 small (25-character) messages from the client to the server (1 MB of total data) and then have the server respond with the total size of data it actually received. The client sends data in a tight loop and there is a SINGLE receive call at the server. It works only up to a maximum of 1200 bytes of total data sent, and again there are these non-deterministic SIGPIPEs, now about 70% of the time (really bad).
Can someone suggest some improvement to my design (probably it will be at the server)? The requirement is that the client shall be able to send a medium to very high amount of data (again, about 25 characters per message) after a single socket connection has been made to the server.
I have a feeling that multiple sends against a single receive will always be lossy and very inefficient. Should we be combining the messages and sending them in a single send() operation? Is that the only way to go?
SIGPIPE is raised when you try to write to a pipe/socket that is no longer connected. Ignoring the signal (or installing a handler for it) will make send() return an error instead:
signal(SIGPIPE, SIG_IGN);
Alternatively, you can disable SIGPIPE for a socket:
int n = 1;
setsockopt(thesocket, SOL_SOCKET, SO_NOSIGPIPE, &n, sizeof(n));
Also, the data amounts you're mentioning are not very high. Likely there's a bug somewhere that causes your connection to close unexpectedly, giving a SIGPIPE.
SIGPIPE is raised because you are attempting to write to a socket that has been closed. This does indicate a probable bug so check your application as to why it is occurring and attempt to fix that first.
Attempting to just mask SIGPIPE is not a good idea because you don't really know where the signal is coming from and you may mask other sources of this error. In multi-threaded environments, signals are a horrible solution.
In the rare cases where you cannot avoid this, you can mask the signal on send. If you set the MSG_NOSIGNAL flag on send()/sendto(), it will prevent SIGPIPE from being raised. If you do trigger this error, send() returns -1 and errno will be set to EPIPE. Clean and easy. See man send for details.
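A minimal sketch of that per-call approach on Linux ('sock', 'buf' and 'len' stand in for your own socket and data):

#include <cerrno>
#include <sys/socket.h>

ssize_t send_no_sigpipe(int sock, const void *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);  /* no SIGPIPE even if the peer is gone */
    if (n == -1 && errno == EPIPE) {
        /* the peer has closed the connection: clean up the socket here */
    }
    return n;
}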
