QNetworkAccessManager: post HTTP multipart from a sequential QIODevice - Qt

I'm trying to use QNetworkAccessManager to upload HTTP multiparts to a dedicated server.
The multipart consists of a JSON part describing the data being uploaded, plus the data itself, which is read from a sequential QIODevice that encrypts it on the fly.
This is the code that creates the multipart request:
QHttpMultiPart *multiPart = new QHttpMultiPart(QHttpMultiPart::FormDataType);
QHttpPart metaPart;
metaPart.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
metaPart.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; name=\"metadata\""));
metaPart.setBody(meta.toJson());
multiPart->append(metaPart);
QHttpPart filePart;
filePart.setHeader(QNetworkRequest::ContentTypeHeader, QVariant(fileFormat));
filePart.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; name=\"file\""));
filePart.setBodyDevice(p_encDevice);
p_encDevice->setParent(multiPart); // we cannot delete the file now, so delete it with the multiPart
multiPart->append(filePart);
QNetworkAccessManager netMgr;
QScopedPointer<QNetworkReply> reply( netMgr.post(request, multiPart) );
multiPart->setParent(reply.data()); // delete the multiPart with the reply
If p_encDevice is an instance of QFile, the file gets uploaded just fine.
If the specialised encrypting QIODevice (a sequential device) is used, all of the data is read from my custom device, but QNetworkAccessManager::post() never completes (it hangs).
I read in the documentation of QHttpPart that:
if device is sequential (e.g. sockets, but not files),
QNetworkAccessManager::post() should be called after device has
emitted finished().
Unfortunately I don't know how to do that.
Please advise.
EDIT:
QIODevice doesn't have a finished() signal at all. What's more, nothing is read from my custom QIODevice until QNetworkAccessManager::post() is called, so the device would never get the chance to emit such a signal in the first place. (Catch-22?)
EDIT 2:
It seems that QNAM does not work with sequential devices at all. See discussion on qt-project.
EDIT 3:
I managed to "fool" QNAM into thinking it is reading from a non-sequential device, while the reimplemented seek() and reset() functions refuse any real repositioning. This will keep working only until QNAM actually tries to seek.
bool AesDevice::isSequential() const
{
    return false;
}

bool AesDevice::reset()
{
    if (this->pos() != 0) {
        return false;
    }
    return QIODevice::reset();
}

bool AesDevice::seek(qint64 pos)
{
    if (this->pos() != pos) {
        return false;
    }
    return QIODevice::seek(pos);
}

You'll need to refactor your code quite a lot so that the variables you pass to post() are available outside the function you've posted; then you'll need a new slot containing the code that performs the post. Lastly, you need connect(p_encDevice, SIGNAL(finished()), this, SLOT(yourSlot())); to glue it all together.
You're mostly there; you just need to refactor it out and add a new slot you can tie to a finished() signal from your device (QIODevice itself doesn't declare one, as you noted, so your encrypting device has to provide it).
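A minimal sketch of that refactoring, assuming your encrypting device buffers the whole stream and emits a custom finished() signal when it's done (QIODevice provides no such signal out of the box):

class Uploader : public QObject
{
    Q_OBJECT
public:
    Uploader(const QNetworkRequest &request, QHttpMultiPart *multiPart, QObject *parent = 0)
        : QObject(parent), m_request(request), m_multiPart(multiPart) {}

public slots:
    void onDeviceFinished()
    {
        // Only now is the full body available, so the POST can start.
        QNetworkReply *reply = m_netMgr.post(m_request, m_multiPart);
        m_multiPart->setParent(reply); // delete the multipart with the reply
    }

private:
    QNetworkAccessManager m_netMgr;
    QNetworkRequest m_request;
    QHttpMultiPart *m_multiPart;
};

// Glue it together; finished() here is the custom signal on the encrypting device:
// connect(p_encDevice, SIGNAL(finished()), uploader, SLOT(onDeviceFinished()));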

I've had more success creating the HTTP POST data manually than with QHttpPart and QHttpMultiPart. I know it's probably not what you want to hear, and it's a little messy, but it definitely works. In this example I am reading from a QFile, but you can call readAll() on any QIODevice. It is also worth noting that QIODevice::size() will help you check whether all the data has been read.
QByteArray postData;
QFile *file = new QFile("/tmp/image.jpg");
if (!(file->open(QIODevice::ReadOnly))) {
    qDebug() << "Could not open file for reading: " << file->fileName();
    return;
}

// Create a header that the server can recognize
postData.insert(0, "--AaB03x\r\nContent-Disposition: form-data; name=\"attachment\"; filename=\"image.jpg\"\r\nContent-Type: image/jpeg\r\n\r\n");
postData.append(file->readAll());
postData.append("\r\n--AaB03x--\r\n");

// Here you can add additional parameters at the end of the URL that your server may need to parse the data
QString check(QString(POST_URL) + "?fn=" + fn + "&md=" + md);
QNetworkRequest req(QUrl(check.toLocal8Bit()));
req.setHeader(QNetworkRequest::ContentTypeHeader, "multipart/form-data; boundary=AaB03x");
req.setHeader(QNetworkRequest::ContentLengthHeader, postData.length());

file->close();
delete file; // free up memory

// Post the data
reply = manager->post(req, postData);

// Connect the reply object so we can track the progress of the upload
connect(reply, SIGNAL(uploadProgress(qint64,qint64)), this, SLOT(updateProgress(qint64,qint64)));
Then the server can access the data like this:
<?php
$filename = $_REQUEST['fn'];
$makedir = $_REQUEST['md'];
if ($_FILES["attachment"]["type"] == "image/jpeg") {
    if (!move_uploaded_file($_FILES["attachment"]["tmp_name"], "/directory/" . $filename)) {
        echo "File Error";
        error_log("Uploaded File Error");
        exit();
    }
} else {
    print("no file");
    error_log("No File");
    exit();
}
echo "Success.";
?>
I hope some of this code can help you.

I think the catch is that QNetworkAccessManager does not support chunked transfer encoding when uploading (POST, PUT) data. This means that QNAM must know in advance the length of the data it's going to upload, in order to send the Content-Length header. This implies:
either the data does not come from sequential devices, but from random-access devices, which would correctly report their total size through size();
or the data comes from a sequential device, but the device has already buffered all of it (this is the meaning of the note about finished()), and will report it (through bytesAvailable(), I suppose);
or the data comes from a sequential device which has not buffered all the data, which in turn means
either QNAM itself reads and buffers all the data coming from the device (by reading until EOF),
or the user manually sets the Content-Length header for the request.
(About the last two points, see the docs for the QNetworkRequest::DoNotBufferUploadDataAttribute.)
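As a concrete illustration of that last option, the manual route looks roughly like this (a sketch; totalSize stands for a body size you must somehow know up front):

QNetworkRequest request(url);
// Tell QNAM not to buffer the upload body itself...
request.setAttribute(QNetworkRequest::DoNotBufferUploadDataAttribute, true);
// ...which in turn requires supplying the Content-Length manually.
request.setHeader(QNetworkRequest::ContentLengthHeader, totalSize);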
So, QHttpMultiPart somehow shares these limitations, and it's likely that it's choking on case 3. Supposing that you cannot possibly buffer in memory all the data from your "encoder" QIODevice, is there any chance you might know the size of the encoded data in advance and set the content-length on the QHttpPart?
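If you do know it, something along these lines might work (an untested sketch; knownEncryptedSize is hypothetical):

// Announce the part's size up front so nothing needs to be buffered to discover it.
filePart.setRawHeader("Content-Length", QByteArray::number(knownEncryptedSize));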
(As a last note, you shouldn't be using QScopedPointer there. It will delete the QNetworkReply when the smart pointer falls out of scope, but that's not what you want: you want to delete the reply when it emits finished().)
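The usual pattern for that is (a generic snippet, not specific to this code):

QNetworkReply *reply = netMgr.post(request, multiPart);
multiPart->setParent(reply); // delete the multipart with the reply
connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater())); // delete the reply once it's done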

From a separate discussion on qt-project and from inspecting the source code, it seems that QNAM doesn't work with sequential devices at all. Both the documentation and the code are wrong.

Related

http.Client rejects request with 'unsupported protocol scheme ""' even if it's set

I'm trying to upload some videos to YouTube. Somewhere in the stack it comes down to an http.Client, and this part behaves strangely.
The request and everything is created inside the youtube package.
After doing my request in the end it fails with:
Error uploading video: Post https://www.googleapis.com/upload/youtube/v3/videos?alt=json&part=snippet%2Cstatus&uploadType=multipart: Post : unsupported protocol scheme ""
I debugged the library a bit and printed the contents of URL.Scheme. As a string the result is https, and as []byte it is [104 116 116 112 115].
https://golang.org/src/net/http/transport.go on line 288 is the location where the error is thrown.
https://godoc.org/google.golang.org/api/youtube/v3 the library I use
My code where I prepare/upload the video:
// Create the video struct which holds info about the video
video := &yt3.Video{
    //TODO: set all required video info
}

// Create the insert call
insertCall := service.Videos.Insert("snippet,status", video)

// Attach media data to the call
insertCall = insertCall.Media(tmp, googleapi.ChunkSize(1*1024*1024)) // 1 MB chunks

video, err = insertCall.Do()
if err != nil {
    log.Printf("Error uploading video: %v", err)
    return
    //return errgo.Notef(err, "Failed to upload to youtube")
}
So I have no idea why the scheme check fails.
Ok, I figured it out. The problem was not the call to YouTube itself.
The library tried to refresh the token in the background but there was something wrong with the TokenURL.
Ensuring there is a valid URL fixed the problem.
A nicer error message would have helped a lot, but well...
This will probably apply to very, very few who arrive here: but my problem was that a RoundTripper was overriding the Host field with an empty string.

Memory leak while sending response from rebus handler

I saw very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process grows. I inspected the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the hierarchy below up to the root:
System.Message --> CachedBodyMessage --> stream
Please give me some pointers if anybody is aware of this issue.
I understand that a memory leak is a grave concern, but I believe it is unlikely that Rebus contains one.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You mention "CachedBodyMessage" - judging by the names of the fields inside System.Messaging.Message, it sounds like it's something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only do so in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope).
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have seen something rising on the graph.

Using ElectricImp server.show() and Arduino

I'm following the SparkFun tutorial for connecting an Arduino to an Electric Imp. I only have one Arduino and one imp, so I'm trying to get whatever I type in the Arduino serial monitor to display on the imp node using server.show().
I've modified one of the functions in the SparkFun code to look like this:
function pollUart()
{
    imp.wakeup(0.00001, pollUart.bindenv(this)); // schedule the next poll in 10us

    local byte = hardware.uart57.read(); // read the UART buffer
    // This will return -1 if there is no data to be read.
    while (byte != -1) // otherwise, we keep reading until there is no data to be read
    {
        // server.log(format("%c", byte)); // send the character out to the server log. Optional, great for debugging
        // impeeOutput.set(byte);          // send the valid character out the impee's outputPort
        server.show(byte);
        byte = hardware.uart57.read();     // read from the UART buffer again (not sure if it's a valid character yet)
        toggleTxLED();                     // toggle the TX LED
    }
}
server.show(byte) only displays seemingly random numbers. I have an idea of why this is; I just don't know how to fix it, because I'm not that familiar with UARTs or Squirrel.
local byte = hardware.uart57.read(); reads the characters from the Arduino as bytes (I think), and they're not being 'translated' into their ASCII characters before I use server.show(byte).
How do I do this in Squirrel?
Also, I think polling every 10us is the wrong way to go here. I'd like to poll only when there's new information, but I don't know how to do that in Squirrel either. Can someone point me to an example where this happens?
Thanks!
I think you are passing the wrong data type to the show method of the server object. The Electric Imp docs state that it takes a string: server.show(string). A local is the correct way to receive the value from hardware.uart57.read(); you can tell from the docs as well. So you need to find a way to convert your byte to a string. From what I've read, Squirrel uses Unicode, so there is probably a function that takes Unicode bytes and loads them into a string object - the commented-out server.log(format("%c", byte)) line in your own code already shows one way to turn a byte into a one-character string.

Qt Streaming Large File via HTTP and Flushing to eMMC Flash

I'm streaming a large file (1 GB) via HTTP to my server in Qt on a very memory-constrained embedded Linux device. When I first receive the header, I determine where to write the data on the filesystem, create a QFile pointer to that location, and open the file for appending. There is an 'accumulate' function in the server that is called each time new data arrives on the socket. From that accumulate function I want to stream the data right to the file via write(). You can see my accumulate function below.
My problem is memory usage when doing this - I run out of memory. Shouldn't I be able to flush() and fsync() on each iteration of the accumulation and not have to worry about RAM usage? What am I doing wrong, and how can I fix this? Thanks -
I open my file once before the accumulate function:
// Open the file
filePointerToWriteTo->open(QIODevice::WriteOnly | QIODevice::Append | QIODevice::Unbuffered);
Here is a portion of the accumulate function:
// Extract the QFile pointer from the QVariant
QFile *filePointerToWriteTo = (QFile *)(containerForPointer->pointer).value<void *>();

qDebug() << "APPENDING bytes: " << data.length();

// Write to the file and sync
filePointerToWriteTo->write(data);
filePointerToWriteTo->waitForBytesWritten(-1);
filePointerToWriteTo->flush();
fsync(filePointerToWriteTo->handle()); // make sure bytes are written to disk
EDIT:
I instrumented my code, and the waitForBytesWritten(-1) call ALWAYS returns false. The docs say it should wait until data has been written to the device.
Also, if I uncomment only the 'write(data)' line, my free memory never decreases. What could be going on? How does 'write' consume so much memory?
EDIT:
Now I am doing the following. I do not run out of memory, but my free memory drops to 2 MB and hovers there until the entire file has been transferred, at which point the memory is released. If I kill the transfer in the middle, the kernel seems to hold on to the memory, because it stays around 2 MB free until I restart the process and try to write to the same file. I still think I should be able to use and then free the memory on each iteration:
// Extract the QFile pointer from the QVariant
QFile *filePointerToWriteTo = (QFile *)(containerForPointer->pointer).value<void *>();

int numberOfBytesWritten = filePointerToWriteTo->write(data);
qDebug() << "APPENDING bytes: " << data.length() << " ACTUALLY WROTE: " << numberOfBytesWritten;

// Flush and sync
bool didWaitForWrite = filePointerToWriteTo->waitForBytesWritten(-1); // <-- this ALWAYS returns false!
filePointerToWriteTo->flush();
fsync(filePointerToWriteTo->handle());     // make sure bytes are written to disk
fdatasync(filePointerToWriteTo->handle()); // data-specific sync
sync();                                    // whole-system sync
EDIT:
This sounds like me misunderstanding Linux caching. After reading this post --> http://blog.scoutapp.com/articles/2010/10/06/determining-free-memory-on-linux, it's possible that I am misinterpreting the output of 'free -mt'. I have been watching the 'free' field in that output and seeing it drop to around 2 MB during the massive file transfer; I would just like to see it return to a high level of free memory when the transfer is done.
I think Linux is simply caching everything it can and freeing what it can spare as it approaches the 2 MB free-memory mark. I do not run out of memory when receiving or sending ~2 GB of files on a 512 MB RAM system. In my Qt program, after receiving all of the data, appending it to the file, and closing the file, I run the following in a QProcess to watch the 'free' memory come back in the 'free -mt' command in a separate terminal:
// Now we've returned a large file - so free up the cache in Linux
QProcess freeCachedMemory;
freeCachedMemory.start("sh");
freeCachedMemory.write("sync; echo 3 > /proc/sys/vm/drop_caches\n"); // sync to disk and clear the Linux page cache
freeCachedMemory.closeWriteChannel(); // close stdin so sh sees EOF and can exit
freeCachedMemory.waitForFinished();
freeCachedMemory.close();
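A gentler alternative than dropping the whole system cache is to advise the kernel, per file, that the freshly written pages won't be needed again (a sketch using the standard POSIX call; untested on the device in question):

#include <fcntl.h> // posix_fadvise

// After syncing, tell the kernel the cached pages for this file can be
// discarded, so the page cache doesn't fill up during a long transfer.
fsync(filePointerToWriteTo->handle());
posix_fadvise(filePointerToWriteTo->handle(), 0, 0, POSIX_FADV_DONTNEED);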

QImageReader with custom QIODevice implementation

I have a custom QIODevice that decrypts the data stream from another QIODevice (which might be a file). It is used to encrypt and decrypt files. Some of the files are images. QImageReader is then used to load the image directly from the decryption stream, but in some rare cases QImageReader fails to read the image from that stream. There is one PNG image that QImageReader can read properly from the unencrypted file, but when my custom QIODevice is layered over QFile and passed to QImageReader, it fails and prints
"libpng error: IDAT: CRC error"
I've done some intensive debugging and traced all the reads and seeks that QImageReader invokes on my QIODevice, and put them side by side with the same calls on a QFile reading the unencrypted file:
device.read(encData, 2);
file.read(pngData, 2);
Q_ASSERT(memcmp(encData, pngData, 2) == 0);

device.read(encData, 6);
file.read(pngData, 6);
Q_ASSERT(memcmp(encData, pngData, 6) == 0);

device.seek(0);
file.seek(0);
....
It turned out that all the data read from the file is exactly the same as the data coming from the stream...
So why would it return that libpng error?
Ok, I figured it out. It was the QIODevice::size() function that I hadn't implemented. The docs should probably be more specific about which functions need to be implemented...
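For reference, a minimal sketch of the missing piece (m_source is a hypothetical name for the underlying device; it assumes encryption does not change the stream length):

qint64 AesDevice::size() const
{
    // Assumption: ciphertext and plaintext are the same length,
    // so the source device's size is also the decrypted size.
    return m_source->size();
}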
