Qt QTcpSocket misses some data after starting to write data to the server

I receive a sine wave from a server through TCP and plot it. Everything seems fine until I start sending something back at c>1000. After one byte is sent, I still receive data, but the waveform of the sine wave changes. I'm sure some data is being missed, but I can't find the bug in my code. The transmission rate is about 1 Mbps.
My questions are:
When I write something to the server, how does it affect the socket?
Why does the socket miss some data?
How can I fix it?
ssTcpClient::ssTcpClient(QObject *parent) :
    QObject(parent)
{
    socket = new QTcpSocket(this);

    connect(socket, SIGNAL(connected()),
            this, SLOT(on_connected()));
    connect(socket, SIGNAL(disconnected()),
            this, SLOT(on_disconnected()));
}
void ssTcpClient::on_connected()
{
    qDebug() << "Client: Connection established.";

    connect(socket, SIGNAL(readyRead()),
            this, SLOT(on_readyRead()));

    in = new QDataStream(socket);
}
void ssTcpClient::on_readyRead()
{
    static quint32 c = 0;
    qDebug() << "c" << c++;

    QVector<quint8> data;
    quint8 buf;

    while (socket->bytesAvailable() > 0) {
        // read data into the buffer
        *in >> buf;
        data.append(buf);
    }

    // process data
    emit data_read(data);

    // if there have been more than 1000 reads, send something back
    if (c > 1000) {
        char msg[10];
        msg[0] = 'c';
        socket->write(msg, 1);
        socket->flush();
    }
}

You cannot rely on TCP traffic arriving as complete messages; the data arrives in chunks of indeterminate size.
You are using QDataStream to read data from the socket. This is a really bad idea, because QDataStream assumes a complete set of data is available. If there isn't enough data, it will silently fail.
I suggest you modify your data source so that it either sends a byte count first, or sends some kind of termination sequence that you can look for to tell you that you have received enough data to process.
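For illustration, here is a minimal sketch of the length-prefix approach, assuming the server is changed to send each block as a quint32 byte count followed by the payload. The buffer and expectedSize members and the processBlock() helper are hypothetical, not part of the original code:
void ssTcpClient::on_readyRead()
{
    // accumulate everything that has arrived so far
    buffer.append(socket->readAll());

    // keep extracting complete blocks from the buffer
    while (true) {
        if (expectedSize == 0) {
            if (buffer.size() < int(sizeof(quint32)))
                return;                        // header not complete yet
            QDataStream header(buffer.left(int(sizeof(quint32))));
            header >> expectedSize;            // read the announced payload size
            buffer.remove(0, int(sizeof(quint32)));
        }
        if (buffer.size() < int(expectedSize))
            return;                            // wait until the rest of the block arrives

        QByteArray block = buffer.left(expectedSize);
        buffer.remove(0, expectedSize);
        expectedSize = 0;

        // block now holds one complete message and is safe to process
        processBlock(block);
    }
}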

Related

Does this code send hex data the correct way in Qt C++?

I am new to Qt. I am working on a fingerprint module with this document. I want to send my data to the serial port in this format:
I wrote my code in this format, but I think my data has a mistake, because this code should turn on the LED on the device:
QByteArray ba;
ba.resize(24);
ba[0]=0x55;
ba[1]=0xAA;
ba[2]=0x24;
ba[3]=0x01;
ba[4]=0x01;
ba[5]=0x00;
ba[6]=0x00;
ba[7]=0x00;
ba[8]=0x00;
ba[9]=0x00;
ba[10]=0x00;
ba[11]=0x00;
ba[12]=0x00;
ba[13]=0x00;
ba[14]=0x00;
ba[15]=0x00;
ba[16]=0x00;
ba[17]=0x00;
ba[18]=0x00;
ba[19]=0x00;
ba[20]=0x00;
ba[21]=0x00;
ba[22]=0x27;
ba[23]=0x01;
p->writedata(ba);
Is this data correct?
You're just copying a drawing into code. It won't work without understanding what the drawing means. You seem to have missed that:
The LEN field seems to be a little-endian integer that gives the number of bytes in the DATA field - perhaps it's the number of bytes that carry useful information if the packet has a fixed size.
The CKS field seems to be a checksum of some sort. You need to calculate it based on the contents of the packet. The protocol documentation should indicate whether it's across the entire packet or not, and how to compute the value.
It seems like you are talking to a fingerprint identification module like FPM-1502, SM-12, ADST11SD300/310 or similar. If so, then you could obtain a valid command packet as follows:
QByteArray cmdPacket(quint16 cmd, const char *data, int size) {
    Q_ASSERT(size <= 16);
    QByteArray result(24, '\0');
    QDataStream s(&result, QIODevice::WriteOnly);
    s.setByteOrder(QDataStream::LittleEndian);
    s << quint16(0xAA55) << cmd << quint16(size);
    s.writeRawData(data, size);
    s.skipRawData(22 - s.device()->pos());
    quint16 sum = 0;
    for (int i = 0; i < 22; ++i)
        sum += result[i];
    s << sum;
    qDebug() << result.toHex();
    return result;
}

QByteArray cmdPacket(quint16 cmd, const QByteArray &data) {
    return cmdPacket(cmd, data.data(), data.size());
}
The command packet to turn the sensor led on/off can be obtained as follows:
QByteArray cmdSensorLed(bool on) {
    char data[2] = {'\0', '\0'};
    if (on) data[0] = 1;
    return cmdPacket(0x124, data, sizeof(data));
}

How to use QTcpSocket for high-frequency sending of small data packages?

We have two Qt applications. App1 accepts a connection from App2 through QTcpServer and stores it in an instance of QTcpSocket* tcpSocket. App1 runs a simulation at 30 Hz. For each simulation run, a QByteArray consisting of a few kilobytes is sent using the following code (from the main/GUI thread):
QByteArray block;
/* lines omitted which write data into block */
tcpSocket->write(block, block.size());
tcpSocket->waitForBytesWritten(1);
The receiver socket listens to the QTcpSocket::readDataBlock signal (in main/GUI thread) and prints the corresponding time stamp to the GUI.
When both App1 and App2 run on the same system, the packages are perfectly in sync. However, when App1 and App2 are run on different systems connected through a network, App2 is no longer in sync with the simulation in App1. The packages come in much slower. Even more surprising (and indicating our implementation is wrong) is the fact that when we stop the simulation loop, no more packages are received. This surprises us, because we expect from the TCP protocol that all packages will arrive eventually.
We built the TCP logic based on Qt's fortune example. The fortune server, however, is different, because it only sends one package per incoming client. Could someone identify what we have done wrong?
Note: we use MSVC2012 (App1), MSVC2010 (App2) and Qt 5.2.
Edit: With a package I mean the result of a single simulation experiment, which is a bunch of numbers, written into QByteArray block. The first bits, however, contain the length of the QByteArray, so that the client can check whether all data has been received. This is the code which is called when the signal QTcpSocket::readDataBlock is emitted:
QDataStream in(tcpSocket);
in.setVersion(QDataStream::Qt_5_2);

if (blockSize == 0) {
    if (tcpSocket->bytesAvailable() < (int)sizeof(quint16))
        return; // cannot yet read size from data block
    in >> blockSize; // read data size for data block
}

// if the whole data block is not yet received, ignore it
if (tcpSocket->bytesAvailable() < blockSize)
    return;

// if we get here, the whole object is available to parse
QByteArray object;
in >> object;

blockSize = 0; // reset blockSize for handling the next package
return;
The problem in our implementation was caused by data packages piling up and by incorrect handling of packages which had only arrived partially.
The answer goes in the direction of Tcp packets using QTcpSocket. However, that answer could not be applied in a straightforward manner, because we rely on QDataStream instead of plain QByteArray.
The following code (run each time QTcpSocket::readDataBlock is emitted) works for us and shows how a raw series of bytes can be read from QDataStream. Unfortunately, it seems that it is not possible to process the data in a clearer way (using operator>>).
QDataStream in(tcpSocket);
in.setVersion(QDataStream::Qt_5_2);

while (tcpSocket->bytesAvailable())
{
    if (tcpSocket->bytesAvailable() < (int)(sizeof(quint16) + sizeof(quint8) + sizeof(quint32)))
        return; // cannot yet read size and type info from data block

    in >> blockSize;
    in >> dataType;

    // read and ignore the quint32 value that QDataStream prepends when serializing a QByteArray
    char *temp = new char[4];
    int bufferSize = in.readRawData(temp, 4);
    delete[] temp;
    temp = NULL;

    QByteArray buffer;
    int objectSize = blockSize - (sizeof(quint16) + sizeof(quint8) + sizeof(quint32));
    temp = new char[objectSize];
    bufferSize = in.readRawData(temp, objectSize);
    buffer.append(temp, bufferSize);
    delete[] temp;
    temp = NULL;

    if (buffer.size() == objectSize)
    {
        // ready for parsing
    }
    else if (buffer.size() > objectSize)
    {
        // buffer size larger than expected object size, but still ready for parsing
    }
    else
    {
        // buffer size smaller than expected object size
        while (buffer.size() < objectSize)
        {
            tcpSocket->waitForReadyRead();
            char *chunk = new char[objectSize - buffer.size()];
            int chunkSize = in.readRawData(chunk, objectSize - buffer.size());
            buffer.append(chunk, chunkSize);
            delete[] chunk;
            chunk = NULL;
        }
        // now ready for parsing
    }

    if (dataType == 0)
    {
        // deserialize object
    }
}
Please note that the first three bytes of the expected QDataStream are part of our own protocol: blockSize indicates the number of bytes of a complete single package, and dataType helps with deserializing the binary chunk.
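For reference, a minimal sketch of a matching sender side under the same assumptions (blockSize as a quint16 holding the total block size, dataType as a quint8, the payload serialized as a QByteArray; the helper name sendBlock is made up for this example):
// Serializes one package as: quint16 blockSize, quint8 dataType, QByteArray payload.
void sendBlock(QTcpSocket *tcpSocket, quint8 dataType, const QByteArray &payload)
{
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out.setVersion(QDataStream::Qt_5_2);

    out << quint16(0) << dataType << payload;   // placeholder size, then type and payload

    // go back and patch in the real total size of the block
    out.device()->seek(0);
    out << quint16(block.size());

    tcpSocket->write(block);
}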
Edit
To reduce the latency of sending objects through the TCP connection, disabling packet bunching was very useful:
// disable Nagle's algorithm to avoid delay and bunching of small packages
tcpSocketPosData->setSocketOption(QAbstractSocket::LowDelayOption,1);

Resource temporarily unavailable

Consider this thread; it acts like a timer and sends a packet to the serial port:
void PlCThead::run()
{
    while (1)
    {
        const char str[] = {UPDATE_PACKET};
        QByteArray built;
        built.append(0x02);
        built.append(0x05);
        built.append(0x03);
        emit requestForWriteAndReceive(built);
        msleep(100);
    }
}
The emit works fine and the slot is reached, but there it writes only 78 (char 'x') to the serial port instead of the packet of 3 bytes.
bool RS::rs_ThreadPlcDataAqustn(QByteArray byteArray)
{
    QByteArray rd15Bytes;
    char *data = byteArray.data();
    int len = byteArray.length();

    if (!rs_serialWrite(data, len))
    {
        qDebug() << "Failure:( rs_dataqustn: rs_plcWrite(data, len)";
        emit plc_port_dscntd();
        return false;
    }
    return true;
}
bool RS::rs_serialWrite(char *buff, size_t length)
{
    int tries;
    int len;
    tries = 0;

    QByteArray built((char *)buff, length);
    qDebug() << built.toHex();

    len = write(fd, buff, length);
    qDebug() << len;
    qDebug() << strerror(errno);
    return true;
}
This is how fd is created:
fd = open(portPath, O_RDWR | O_NOCTTY | O_NDELAY | O_NONBLOCK, S_IWUSR | S_IRUSR | S_IXUSR);
This is how the thread is created in the main window:
rs_plc->rs_plcOpenPort((char *)"/dev/ttyS0"); /*/dev/ttyS3*/
PlCThead *thread = new PlCThead();
connect(thread, SIGNAL(requestForWriteAndReceive(QByteArray)), rs_plc, SLOT(rs_ThreadPlcDataAqustn(QByteArray )));
thread->start();
rs_plc is a private member of MainWindow.
strerror returns this warning:
> Resource temporarily unavailable
Any ideas? This code works fine with timers; it has been checked and tested thoroughly, but now I need to use this thread instead of the timer. Thanks.
Your question isn't complete enough for a full diagnosis since you aren't showing how fd is created, how the threads are set up (which you say is part of the problem), etc.
But... your "Resource temporarily unavailable" line is a big hint. The write() function isn't succeeding in writing everything because it's returning an error (probably EAGAIN or EWOULDBLOCK). The fd file descriptor is attached to something that either has a small buffer, no buffer, or a buffer which is already full. And it's full, and it's the application's job not to send it anything else until it can handle it. A common thing to do there is to sleep and then try the write again if the error code is EAGAIN or EWOULDBLOCK.
But you said it's returning 3, which actually indicates "no error" too. If that's the case then the error string isn't referring to write itself, and something else set errno previously (which could have been write itself in the past).
In short, if this is getting called more than once (likely), you probably need to watch out for writing too fast (and it looks like a serial buffer, which definitely falls into the category of easy-to-fill-the-buffer).
In short: if it's not writing all the bytes to the fd that you want, it's because it can't handle more than that.
This likely has absolutely nothing to do with Qt, by the way. It's all about the write() call.
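A minimal sketch of that retry pattern for a non-blocking descriptor; the helper name writeAll and the 10 ms delay are assumptions for illustration, not part of the original code:
#include <unistd.h>   // write, usleep
#include <cerrno>     // errno, EAGAIN, EWOULDBLOCK

// Writes the whole buffer to a (possibly non-blocking) fd, retrying on EAGAIN/EWOULDBLOCK.
bool writeAll(int fd, const char *buff, size_t length)
{
    size_t written = 0;
    while (written < length) {
        ssize_t n = write(fd, buff + written, length - written);
        if (n > 0) {
            written += n;               // partial write: continue with the remainder
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            usleep(10 * 1000);          // buffer full: wait a bit and retry
        } else {
            return false;               // real error (or write returned 0)
        }
    }
    return true;
}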

QProcess dies for no obvious reason

While coding a seemingly simple part of a Qt application that would run a subprocess and read data from its standard output, I have stumbled upon a problem that has me really puzzled. The application should read blocks of data (raw video frames) from the subprocess and process them as they arrive:
start a QProcess
gather data until there is enough for one frame
process the frame
return to step 2
The idea was to implement the processing loop using signals and slots – this might look silly in the simple, stripped-down example that I provide below, but seemed entirely reasonable within the framework of the original application. So here we go:
app::app() {
    process.start("cat /dev/zero");
    buffer = new char[frameLength];
    connect(this, SIGNAL(wantNewFrame()), SLOT(readFrame()), Qt::QueuedConnection);
    connect(this, SIGNAL(frameReady()), SLOT(frameHandler()), Qt::QueuedConnection);
    emit wantNewFrame();
}
I start here a trivial process (cat /dev/zero) so that we can be confident that it will not run out of data. I also make two connections: one starts the reading when a frame is needed and the other calls a data handling function upon the arrival of a frame. Note that this trivial example runs in a single thread so the connections are made to be of the queued type to avoid infinite recursion. The wantNewFrame() signal initiates the acquisition of the first frame; it gets handled when the control returns to the event loop.
bool app::readFrame() {
    qint64 bytesNeeded = frameLength;
    qint64 bytesRead = 0;
    char* ptr = buffer;
    while (bytesNeeded > 0) {
        process.waitForReadyRead();
        bytesRead = process.read(ptr, bytesNeeded);
        if (bytesRead == -1) {
            qDebug() << "process state" << process.state();
            qDebug() << "process error" << process.error();
            qDebug() << "QIODevice error" << process.errorString();
            QCoreApplication::quit();
            break;
        }
        ptr += bytesRead;
        bytesNeeded -= bytesRead;
    }
    if (bytesNeeded == 0) {
        emit frameReady();
        return true;
    } else
        return false;
}
Reading the frame: basically, I just stuff the data into a buffer as it arrives. The frameReady() signal at the end announces that the frame is ready and in turn causes the data handling function to run.
void app::frameHandler() {
    static qint64 frameno = 0;
    qDebug() << "frame" << frameno++;
    emit wantNewFrame();
}
A trivial data processor: it just counts the frames. When it is done, it emits wantNewFrame() to start the reading cycle anew.
This is it. For completeness, I'll also post the header file and main() here.
app.h:
#include <QDebug>
#include <QCoreApplication>
#include <QProcess>

class app : public QObject
{
    Q_OBJECT
public:
    app();
    ~app() { delete[] buffer; }
signals:
    void wantNewFrame();
    void frameReady();
public slots:
    bool readFrame();
    void frameHandler();
private:
    static const quint64 frameLength = 614400;
    QProcess process;
    char* buffer;
};
main.cpp:
#include "app.h"
int main(int argc, char** argv)
{
QCoreApplication coreapp(argc, argv);
app foo;
return coreapp.exec();
}
And now for the bizarre part. This program processes a random number of frames just fine (I've seen anything from fifteen to more than a thousand) but eventually stops and complains that the QProcess had crashed:
$ ./app
frame 1
...
frame 245
frame 246
frame 247
process state 0
process error 1
QIODevice error "Process crashed"
Process state 0 means "not running" and process error 1 means "crashed". I investigated and found out that the child process receives a SIGPIPE, i.e., the parent had closed the pipe on it. But I have absolutely no idea where or why this happens. Does anybody else?
The code is a bit weird looking (not using the readyRead signal and instead relying on delayed signals/slots). As you pointed out in the discussion, you've already seen the thread on the qt-interest ML where I asked about a similar problem. I've just realized that I, too, used the QueuedConnection at that time. I cannot explain why it is wrong -- the queued signals "should work", in my opinion. A blind shot is that the invokeMethod used by Qt's implementation somehow races with your signal delivery, so that you empty your read buffer before Qt gets a chance to process the data. This would mean that Qt ultimately reads zero bytes and (correctly) interprets that as an EOF, closing the pipe.
I cannot find the referenced "Qt task 217111" anymore, but there are a couple of reports in their Jira about waitForReadyRead not working as users expect, see e.g. QTBUG-9529.
I'd bring this to Qt's "interest" mailing list and stay clear of the waitFor... family of methods. I agree that their documentation deserves updating.
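A minimal sketch of what the readyRead-based version could look like, using the same class as in the question; the slot name onReadyRead and the bytesInFrame member are assumptions added for this example:
// In app's constructor, instead of the queued wantNewFrame()/readFrame() pair:
//   connect(&process, SIGNAL(readyRead()), this, SLOT(onReadyRead()));

void app::onReadyRead() {
    // append whatever has arrived to the frame buffer, emit a frame when it is full
    while (process.bytesAvailable() > 0) {
        qint64 n = process.read(buffer + bytesInFrame, qint64(frameLength) - bytesInFrame);
        if (n <= 0)
            return;
        bytesInFrame += n;
        if (bytesInFrame == qint64(frameLength)) {
            bytesInFrame = 0;
            emit frameReady();   // frameHandler() processes the completed frame
        }
    }
}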

QNetworkAccessManager read outgoingData and keep it in QIODevice

I'm trying to save all outgoing POST data in QtWebKit.
I do it by overriding the QNetworkReply *QNetworkAccessManager::createRequest(Operation op, const QNetworkRequest &request, QIODevice *outgoingData) method and reading outgoingData, which contains the outgoing POST data.
The problem is that after reading it, the data is no longer available in the QIODevice.
How can I save the outgoing (PUT, POST) data and keep it available for Qt's subsequent internal operations?
If I need to use another approach to save PUT/POST data, please let me know.
Code example:
QNetworkReply *MyNetworkAccessManager::createRequest(Operation op, const QNetworkRequest &request, QIODevice *outgoingData)
{
    QByteArray bArray = outgoingData->readAll();
    // save bArray (which contains the outgoing POST data) somewhere
    // do other things; outgoingData now has no data anymore, as it was already read into bArray
}
I have tried
QByteArray bArray = outgoingData->readAll();
outgoingData->write(bArray);
qDebug() << bArray;
But in this case I get the "QIODevice::write: ReadOnly device" message.
How to save the outgoing POST/PUT data in Qt?
Thanks.
qint64 QIODevice::peek(char *data, qint64 maxSize)
Reads at most maxSize bytes from the device into data, without side effects (i.e., if you call read() after peek(), you will get the same data). Returns the number of bytes read. If an error occurs, such as when attempting to peek a device opened in WriteOnly mode, this function returns -1. 0 is returned when no more data is available for reading.
EDIT
Forget about peek(), it's not good in this situation. You could use it, but you would have to do a lot of work to accomplish what you are asking for. Instead, read Tee is for Tubes, grab the code from there and use it.
Link by courtesy of peppe from #qt irc channel on http://irc.freenode.net.
I'd like to thank peppe and thiago who were so kind to discuss this problem on #qt channel with me.
In case one day you want to steal incoming (as opposed to outgoing) data from QNetworkAccessManager you'll find answer and code in How to read data from QNetworkReply being used by QWebPage? question.
Using pos() and seek() does not actually work in that special case. The idea of using peek() instead seems much better, but an example would be helpful. So here is an example of how to get a data buffer from the given QIODevice's outgoing data in createRequest() without affecting the original data.
if (outgoing != NULL)
{
    const qint64 delta = 100;
    qint64 length = delta;
    QByteArray array;

    while (true)
    {
        char *buffer = new char[length];
        qint64 count = outgoing->peek(buffer, length);
        if (count < length)
        {
            array = QByteArray(buffer, count);
            delete[] buffer;
            break;
        }
        length += delta;
        delete[] buffer;
    }
}
As an optimization, you may adjust the value of delta.
Save the IO device position with QIODevice::pos(). Read data from it. Then restore the position with QIODevice::seek().
This will only work if the device is a random-access one, but I think that covers most of them.
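A minimal sketch of that pos()/seek() pattern inside createRequest(); note that the answer above reports it does not work in this particular case (the outgoing device may be sequential), so this only illustrates the general idea:
QNetworkReply *MyNetworkAccessManager::createRequest(Operation op,
                                                     const QNetworkRequest &request,
                                                     QIODevice *outgoingData)
{
    QByteArray saved;
    if (outgoingData && !outgoingData->isSequential()) {
        const qint64 marker = outgoingData->pos();   // remember the current read position
        saved = outgoingData->readAll();             // copy the POST/PUT body
        outgoingData->seek(marker);                  // restore the position for Qt's own use
    }
    // store 'saved' somewhere, then let the base class do the real work
    return QNetworkAccessManager::createRequest(op, request, outgoingData);
}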
