So I have a strange issue: when I read data (via QDataStream) on a QTcpSocket, some of the data seems to be missing. The bytesAvailable() function returns the proper number of bytes to be read, but QDataStream doesn't seem to hold all of those bytes.
First of all, this is how the data looks:
bufferX always contains 768 floats and bufferY always contains 5376 floats. Therefore, I would expect the total data sent (excluding the block size) to be: int + 768 floats + 5376 floats = 4 + 3072 + 21504 = 24580 bytes.
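To spell out that arithmetic (this just mirrors my assumption above that each float takes 4 bytes on the wire; BUFFERX_SIZE and BUFFERY_SIZE are the constants used in the code below):
const int BUFFERX_SIZE = 768;
const int BUFFERY_SIZE = 5376;
// Expected payload, not counting the quint16 block-size prefix:
const int expectedBytes = sizeof(int)                    // NPoints     ->     4 bytes
                        + BUFFERX_SIZE * sizeof(float)   // 768 floats  ->  3072 bytes
                        + BUFFERY_SIZE * sizeof(float);  // 5376 floats -> 21504 bytes
// expectedBytes == 24580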
Now, here is the sender code:
void ClientSocket::serverTaskResult()
{
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out.setVersion(QDataStream::Qt_DefaultCompiledVersion);
    out << quint16(0);
    out << mServerTask->getNPoints();
    for (size_t i = 0; i < BUFFERX_SIZE; i++)
        out << mServerTask->getBufferX(i);
    for (size_t i = 0; i < BUFFERY_SIZE; i++)
        out << mServerTask->getBufferY(i);
    out.device()->seek(0);
    out << quint16(block.size() - sizeof(quint16));
    write(block);
    out << quint16(0xFFFF);
}
And here is the receiver code:
void TestClient::recoverResult()
{
    QDataStream in(&mTcpSocket);
    in.setVersion(QDataStream::Qt_DefaultCompiledVersion);
    float wBufferX[BUFFERX_SIZE];
    float wBufferY[BUFFERY_SIZE];
    int wNPoints;
    forever {
        if (mNextBlockSize == 0)
        {
            qint64 nBytesAvailable = mTcpSocket.bytesAvailable();
            if (nBytesAvailable < sizeof(quint16))
                break;
            in >> mNextBlockSize;
        }
        if (mNextBlockSize == 0xFFFF)
        {
            closeConnection();
            break;
        }
        if (mTcpSocket.bytesAvailable() < mNextBlockSize)
            break;
        for (size_t i = 0; i < BUFFERX_SIZE; i++)
            in >> wBufferX[i];
        for (size_t i = 0; i < BUFFERY_SIZE; i++)
            in >> wBufferY[i];
        in >> wNPoints;
        mNextBlockSize = 0;
    }
}
Now, the first odd thing I'm noticing is that nBytesAvailable always has a value of 49158, which is about double what I'm expecting. How is it that I'm receiving twice as many bytes as expected?
Secondly, since I have all these bytes available, I would expect the QDataStream to be able to fill the buffers properly. However, after anywhere between 315 and 350 floats, the QDataStream seems to run out of valid data. That is, wBufferX will have defined (and correct) values in its first 315-350 entries and unknown values afterwards. I don't understand how that can be, since bytesAvailable() clearly indicates that almost 50,000 bytes are sitting on the socket. What am I missing?
Your help is greatly appreciated! Thanks!
Related
I have a QByteArray like this:
// read file
QFile file("e:/test/test.dat");
if (!file.open(QIODevice::ReadOnly)) return;
QByteArray ba = file.readAll();
Now I want to divide the ba variable into 8 parts, each of a certain size, for example 100200 bytes. How do I do this?
Sorry for my English.
You can split your byte array like this:
std::vector<QByteArray> parts;
static const int size = 100200;
assert(ba.size() >= size * 8); // Make sure you have enough bytes.
for (int i = 0; i < 8; ++i) {
    parts.emplace_back(ba.mid(i * size, size));
}
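If ba.size() isn't an exact multiple of the part size, a small variant like this (just a sketch; splitIntoChunks is a made-up helper name) walks to the end of the array and lets the last part come out shorter:
#include <vector>
#include <QByteArray>

std::vector<QByteArray> splitIntoChunks(const QByteArray &ba, int chunkSize)
{
    std::vector<QByteArray> parts;
    for (int offset = 0; offset < ba.size(); offset += chunkSize)
        parts.push_back(ba.mid(offset, chunkSize)); // mid() clamps the last chunk to whatever is left
    return parts;
}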
I am storing some data with QDataStream and immediately reading it back, but the count shows zero when retrieving. The code looks fine to me, but the behaviour is unexpected.
//Overloading
QDataStream& operator<< (QDataStream& writeTO, const CascadeJobInfo& data)
{
    writeTO << data.m_infoJobType << data.m_connectionName << data.m_submitJobId << data.m_submitJobStat;
    return writeTO;
}

QDataStream& operator>> (QDataStream& readIn, CascadeJobInfo& data)
{
    readIn >> data.m_infoJobType >> data.m_connectionName >> data.m_submitJobId >> data.m_submitJobStat;
    return readIn;
}
void Fun()
{
    // Code Starts here
    projectFileName = /*Path to folder*/
    QFile file(projectFileName);
    file.open(QFile::ReadWrite);
    file.close();
    QDataStream dStream(&file);
    int jobLstCount = /*Get the Count, assume 4*/
    dStream << jobLstCount;
    for (int i = 0; i < jobLstCount; i++)
    {
        JobInfo.m_infoJobType = jobFlowItem->getJobType();
        JobInfo.m_connectionName = submitItem->connectionName();
        JobInfo.m_submitJobId = submitItem->jobID();
        JobInfo.m_submitJobStat = submitItem->jobState();
        // All valid data stored here
    }
    file.close();
    QDataStream dStreamOut(&file);
    dStreamOut >> jobLstCount; /*Count returns zero here instead of 4*/
    CascadeJobInfo jobInfo;
    // Why jobLstCount is getting zero here
    for (int i = 0; i < jobLstCount; i++)
    {
        dStreamOut >> jobInfo;
    }
}
file.open(QFile::ReadWrite);
file.close(); <--- HERE
QDataStream dStream(&file);
You are closing the file as soon as you open it, so every stream operation afterwards runs on a closed device and fails. Put file.close() at the end of the code, when you are done. You will also need to seek back to the beginning of the file (file.seek(0)) before reading the data back, since writing leaves the position at the end.
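For reference, roughly what the corrected flow could look like (just a sketch: jobInfoList stands in for however you collect your CascadeJobInfo items, and the explicit << inside the loop plus the seek(0) are my guesses at the intent, since the original snippet elides them):
QFile file(projectFileName);
if (!file.open(QFile::ReadWrite))
    return;

QDataStream dStream(&file);
dStream << jobLstCount;                 // jobLstCount already holds the number of items
for (int i = 0; i < jobLstCount; i++)
    dStream << jobInfoList[i];          // actually serialize each item

file.seek(0);                           // rewind before reading back
QDataStream dStreamOut(&file);
dStreamOut >> jobLstCount;              // now reads the stored count
CascadeJobInfo jobInfo;
for (int i = 0; i < jobLstCount; i++)
    dStreamOut >> jobInfo;

file.close();                           // close only once you are done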
Pointer-related question. I'm going through some example code that currently reads data from a file called dataFile into a buffer. The reading is done inside a loop as follows:
unsigned char* buffer = (unsigned char*)malloc(1024*768);
fread(buffer,1,1024*768,dataFile);
redPointer = buffer;
bluePointer = buffer+1024;
greenPointer = buffer+768;
Now, I want to try and write the entire contents of the array buffer to a file, so that I can save just those discrete images (and not have a large file). However, I am not entirely sure how to go about doing this.
I was trying cout statements, but I get a printout of garbage characters on the console and also a beep from the PC, so I end my program.
Is there an alternative method other than this:
for (int i=0; i < (1024*768); i++) {
    fprintf(myFile, "%6.4f , ", buffer[i]);
}
By declaring your buffer as an unsigned char*, any pointer arithmetic or array indexing will use sizeof(unsigned char) to calculate the offset. A char is 1 byte (8 bits).
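A tiny illustration of the difference (the float* line is only there to contrast element sizes, not to be dereferenced):
unsigned char *bytePtr  = buffer;            /* +1 moves 1 byte              */
float         *floatPtr = (float *)buffer;   /* +1 moves sizeof(float) bytes */

/* bytePtr + 1024 points 1024 bytes past the start of buffer;
   floatPtr + 1024 points 1024 * sizeof(float) = 4096 bytes past the start. */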
I'm not sure what you are trying to do with the data in your buffer. Here are some ideas:
Print the value of each byte in decimal, encoded as ASCII text:
for (int i=0; i < (1024*768); i++) {
    fprintf(myFile, "%d , ", buffer[i]);
}
Print the value of each byte in hexadecimal, encoded in ASCII text:
for (int i=0; i < (1024*768); i++) {
    fprintf(myFile, "%x , ", buffer[i]);
}
Print the value of each floating point number, in decimal, encoded in ASCII text (I think my calculation of the array index is correct to process adjacent non-overlapping memory locations for each float):
float value;
for (int i=0; i < (1024*768); i += sizeof(float)) {
    memcpy(&value, buffer + i, sizeof(float)); /* reinterpret 4 adjacent bytes as one float; needs <string.h> */
    fprintf(myFile, "%6.4f , ", value);
}
Split the buffer into three files, each one from a non-overlapping section of the buffer:
fwrite(redPointer, sizeof(char), 768, file1);
fwrite(greenPointer, sizeof(char), 1024-768, file2);
fwrite(bluePointer, sizeof(char), (1024*768)-1024, file3);
Reference for fwrite. Note that for the count parameter I simply hard-coded the offsets that you had hard-coded in your question. One could also subtract certain of the pointers to calculate the number of bytes in each region. Note also that the contents of these three files will only be sensible if those are sensibly independent sections of the original data.
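For example (just a sketch reusing the pointers from your question):
size_t redBytes   = greenPointer - redPointer;          /* 768                    */
size_t greenBytes = bluePointer  - greenPointer;        /* 1024 - 768             */
size_t blueBytes  = (buffer + 1024*768) - bluePointer;  /* the rest of the buffer */
fwrite(redPointer,   1, redBytes,   file1);
fwrite(greenPointer, 1, greenBytes, file2);
fwrite(bluePointer,  1, blueBytes,  file3);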
Maybe this gives you some ideas.
Updated: I created a complete program to compile and test the formatting behavior. It only prints the first 20 or so items from the buffer. It compiles (with gcc -std=c99) and runs. I created the file /tmp/data using ghex and simply filled in some random data.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    FILE* dataFile = fopen("/tmp/data", "rb");
    if (dataFile == NULL)
    {
        printf("fopen() failed");
        return -2;
    }

    unsigned char* buffer = (unsigned char*)malloc(1024*768);
    if (buffer == NULL)
    {
        printf("malloc failed");
        return -1;
    }

    const int bytesRead = fread(buffer, 1, 1024*768, dataFile);
    printf("fread() read %d bytes\n", bytesRead);

    // release file handle
    fclose(dataFile); dataFile = NULL;

    printf("\nDecimal:\n");
    for (int i=0; i < (1024*768); i++) {
        printf("%hd , ", buffer[i]);
        if (i > 20) { break; }
    }
    printf("\n");

    printf("\nHexadecimal:\n");
    for (int i=0; i < (1024*768); i++) {
        printf("%#0hx , ", buffer[i]);
        if (i > 20) { break; }
    }
    printf("\n");

    printf("\nFloat:\n");
    for (int i=0; i < (1024*768); i += sizeof(float)) {
        printf("%6.4f , ", (float)buffer[i]);
        if (i > 20) { break; }
    }
    printf("\n");

    return 0;
}
This is more or less Qt's example with some small changes.
The output is PcPcPcPc...etc. I don't understand why.
Namely, I am confused about how sProducer.acquire(256); works. I believe I understand how sProducer.acquire(1); works. It doesn't make sense to me to acquire anything more than 1, because I don't see how acquiring more than 1 makes any difference logically. Could someone explain this? On the surface, writing 1 byte and reading 1 byte doesn't seem very efficient due to semaphore overhead... but acquiring more resources doesn't seem to make a performance difference, nor does the code make sense.
Logically, I think the acquire and the release have to use the same count (whatever that number is). But how can I modify this code so I can acquire more (say 256) and thus reduce semaphore overhead? The code below just doesn't make sense to me when the acquire and release counts are not 1.
#include <QtCore>
#include <iostream>
#include <QTextStream>

//Global variables.
QTextStream out(stdout);
QTextStream in(stdin);

const int DataSize = 1024;
const int BufferSize = 512;
char buffer[BufferSize];

QSemaphore sProducer(BufferSize);
QSemaphore sConsumer(0);

//-----------------------------
class Producer : public QThread
{
public:
    void run();
};

void Producer::run()
{
    for (int i = 0; i < DataSize; ++i) {
        sProducer.acquire(256);
        buffer[i % BufferSize] = 'P';
        sConsumer.release(256);
    }
}

class Consumer : public QThread
{
public:
    void run();
};

void Consumer::run()
{
    for (int i = 0; i < DataSize; ++i) {
        sConsumer.acquire(256);
        std::cerr << buffer[i % BufferSize];
        out << "c";
        out.flush();
        sProducer.release(256);
    }
    std::cerr << std::endl;
}

int main()
{
    Producer producer;
    Consumer consumer;
    producer.start();
    consumer.start();
    producer.wait();
    consumer.wait();
    in.readLine(); //so i can read console text.
    return 0;
}
Since there is only one producer and one consumer, each can freely advance its own private cursor (its i variable) by however many bytes it wants, as long as there is enough room to do so. Be careful with the block sizes, though: free space and filled space always add up to the 512-byte buffer, so if both sides ask for more than 256 at once you can end up in a state where neither request can ever be satisfied, i.e. a deadlock.
Basically, when a thread successfully acquires 256 bytes, it means it can safely read or write those 256 bytes in one single operation, so you just have to put another loop inside the acquire/release block to handle that number of bytes.
For the producer:
void Producer::run()
{
    for (int i = 0; i < DataSize; /* i is advanced by the inner loop */) {
        const int blockSize = 256;
        sProducer.acquire(blockSize);
        for (int j = 0; j < blockSize; ++i, ++j) {
            buffer[i % BufferSize] = 'P';
        }
        sConsumer.release(blockSize);
    }
}
And for the consumer:
void Consumer::run()
{
    for (int i = 0; i < DataSize; /* i is advanced by the inner loop */) {
        const int blockSize = 128;
        sConsumer.acquire(blockSize);
        for (int j = 0; j < blockSize; ++i, ++j) {
            std::cerr << buffer[i % BufferSize];
            out << "c";
            out.flush();
        }
        sProducer.release(blockSize);
    }
    std::cerr << std::endl;
}
So far I had thought that the following syntax was invalid:
int B[ydim][xdim];
But today I tried it and it worked! I ran it many times to make sure it did not just work by chance, and even valgrind didn't report any segfault or memory leak! I am very surprised. Is it a new feature introduced in g++? I have always used 1D arrays to store matrices, indexing them with the correct strides, as done with A in the program below. But this new method, as with B, is the simple and elegant approach I have always wanted. Is it really safe to use? See the sample program.
PS. I am compiling it with g++-4.4.3, if that matters.
#include <cstdlib>
#include <iostream>

int test(int ydim, int xdim) {
    // Allocate 1D array
    int *A = new int[xdim*ydim]();   // with C++ new operator
    // int *A = (int *) malloc(xdim*ydim * sizeof(int)); // or with C style malloc
    if (A == NULL)
        return EXIT_FAILURE;

    // Declare a 2D array of variable size
    int B[ydim][xdim];

    // populate matrices A and B
    for (int y = 0; y < ydim; y++) {
        for (int x = 0; x < xdim; x++) {
            A[y*xdim + x] = y*xdim + x;
            B[y][x] = y*xdim + x;
        }
    }

    // read out matrix A
    for (int y = 0; y < ydim; y++) {
        for (int x = 0; x < xdim; x++)
            std::cout << A[y*xdim + x] << " ";
        std::cout << std::endl;
    }
    std::cout << std::endl;

    // read out matrix B
    for (int y = 0; y < ydim; y++) {
        for (int x = 0; x < xdim; x++)
            std::cout << B[y][x] << " ";
        std::cout << std::endl;
    }

    delete []A;
    // free(A); // or in C style
    return EXIT_SUCCESS;
}

int main() {
    return test(5, 8);
}
int B[ydim][xdim] declares a 2-D array on the stack. new, on the other hand, allocates the array on the heap.
For any non-trivial array size, it's almost certainly better to have it on the heap, lest you run out of stack space, or if you want to pass the array back to something outside the current scope.
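If you want to keep the B[y][x] syntax but still allocate on the heap, a vector of vectors is one common alternative (just a sketch, assuming std::vector is acceptable):
#include <vector>

// ydim rows of xdim ints each, value-initialized to 0, allocated on the heap
std::vector<std::vector<int> > B(ydim, std::vector<int>(xdim));
B[y][x] = y*xdim + x;   // same access syntax as the VLA
Note that, unlike the VLA or the 1D array A, the rows are separate allocations, so the data is not contiguous in memory.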
This is a C99 'variable length array', or VLA. g++ accepts them in C++ too, but only as a compiler extension; they are not part of the C++ standard.
Nice, aren't they?