Loading media file - Qt

I'm using Qt with Phonon to play some mp3 files. The problem is that I need multiple mp3 files playing together, and they do not play in a synchronized fashion, especially after a seek.
I've noticed that synchronization is better from the hard drive than from a USB drive. It seems that the program doesn't load the whole file into memory. Since I need to put this program on a USB drive, is there any way to load a file into memory and then play from that?

If your concern is reading from the filesystem, maybe you can just cache your sound files into QBuffer objects ahead of time, and then use them with the Phonon::MediaSource(QIODevice *ioDevice) constructor.
That way you are no longer depending on the filesystem to maintain stable I/O. It's in memory, like you wanted.
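A minimal sketch of that idea (cachedSource() is a hypothetical helper; error handling is omitted):

    #include <QFile>
    #include <QBuffer>
    #include <phonon/mediasource.h>

    // Read the whole mp3 into RAM up front, then hand Phonon a QBuffer so
    // playback never touches the filesystem again. The buffer must outlive
    // the MediaObject that plays it, so give it a suitable parent in real code.
    Phonon::MediaSource cachedSource(const QString &path)
    {
        QFile file(path);
        file.open(QIODevice::ReadOnly);

        QBuffer *buffer = new QBuffer;
        buffer->setData(file.readAll());   // entire file now lives in memory
        buffer->open(QIODevice::ReadOnly);

        return Phonon::MediaSource(buffer);
    }

You would then pass the result to Phonon::MediaObject::setCurrentSource() as usual.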

Related

out of memory allocating 1073745919 bytes

I am trying to load a car mesh in QML, but the mesh.obj (under qrc) is apparently too big. I added CONFIG += resources_big to my .pro file, but nothing changed. Then I tried to load it from outside the application, but that doesn't work either. How can I resolve this? I am using Qt 5.10 and MinGW as the compiler.
You don't want to put huge files in a qrc resource. That results in significant overhead: it bloats your executable file, which takes up RAM; the file has to be additionally loaded into RAM in Qt's resource virtual file system; and you will still have to load it from there into RAM in order to use it.
Put it out on the file system in the app folder, where you can load it from disk straight into RAM, significantly reducing memory usage.
Also, 3d meshes lend themselves to compression really well, and Qt's QByteArray has compression support, so you might want to put that to work to reduce the deployment footprint.
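For example, a sketch of that compression idea using Qt's qCompress()/qUncompress() (the helper names and file layout are assumptions):

    #include <QFile>
    #include <QByteArray>

    // At packaging time: compress the mesh and ship the result with the app.
    void compressMesh(const QString &in, const QString &out)
    {
        QFile src(in);
        src.open(QIODevice::ReadOnly);
        QByteArray compressed = qCompress(src.readAll(), 9); // max compression

        QFile dst(out);
        dst.open(QIODevice::WriteOnly);
        dst.write(compressed);
    }

    // At run time: inflate the mesh straight into RAM before handing it
    // to the loader.
    QByteArray loadMesh(const QString &path)
    {
        QFile f(path);
        f.open(QIODevice::ReadOnly);
        return qUncompress(f.readAll());
    }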

MPI parallel write to a TIFF file

I'm trying to write a TIFF file in MPI code. Different processors have different parts of the image, and I want to write the image to the file in parallel.
The write fails; only the first processor can write to it.
How do I do this?
There are no errors in my implementation; it just does not work.
I used h = TIFFOpen(file, "a+") on each processor to open the same file (I am not sure whether this is the right way or not). Then each processor responsible for a directory writes the header at its own place using TIFFSetDirectory(h, directorynumber), writes the content of that directory, and finalizes with TIFFWriteDirectory(h). The result is that only the first directory is written to the file.
I thought I might need to open the file using MPI I/O, but then I could no longer use TIFFOpen?
Different MPI tasks are independent programs, running on independent hosts from the OS point of view. In your case the TIFF library is not designed to handle parallel operations, so when every process opens the file, the first will succeed and all the rest will fail because they find the file already opened (on a shared filesystem).
Unless you are dealing with huge images (e.g., astronomical images) where parallel I/O matters for performance (and you need a filesystem that supports it; I am aware of IBM GPFS), I would avoid writing a custom TIFF driver with MPI I/O.
Instead, the typical solution is to gather (MPI_Gather()) the image parts on the process with rank == 0 and let only that process save the TIFF file.
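A minimal sketch of that gather-then-write pattern, assuming each rank holds the same number of full rows of a packed RGBA image (the geometry constants are hypothetical):

    #include <mpi.h>
    #include <tiffio.h>
    #include <vector>
    #include <cstdint>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int width = 1024, rowsPerRank = 64;         // hypothetical geometry
        std::vector<uint32_t> part(width * rowsPerRank);  // this rank's strip
        // ... fill `part` with this rank's pixel data ...

        std::vector<uint32_t> full;
        if (rank == 0)
            full.resize(static_cast<size_t>(width) * rowsPerRank * size);

        // Collect every strip on rank 0, in rank order.
        MPI_Gather(part.data(), width * rowsPerRank, MPI_UINT32_T,
                   full.data(), width * rowsPerRank, MPI_UINT32_T,
                   0, MPI_COMM_WORLD);

        if (rank == 0) {                                  // single writer
            TIFF *tif = TIFFOpen("out.tif", "w");
            TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
            TIFFSetField(tif, TIFFTAG_IMAGELENGTH, rowsPerRank * size);
            TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 4);
            TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
            TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
            for (int row = 0; row < rowsPerRank * size; ++row)
                TIFFWriteScanline(tif, &full[static_cast<size_t>(row) * width], row, 0);
            TIFFClose(tif);
        }

        MPI_Finalize();
        return 0;
    }

For unequal strip sizes you would use MPI_Gatherv() instead.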

Is it better to execute a file over the network or copy it locally first?

My winforms app needs to run an executable that's sitting on a share. The exe is about 50MB (it's a setup.exe type of file). My app will run on many different machines/networks with varying speeds (some fast, but some awfully slow, like barely 10baseT speeds).
Is it better to execute the file straight from the share or is it more efficient to copy it locally and then execute it? I am talking in terms of annoying the user the least.
Locally is better. A copy reads each byte of the file a single time, no more, no less. As you execute, you may revisit code that has fallen out of cache and has to be pulled over the network again.
As a setup program, I would assume that the engine will want to do some kind of CRC or other integrity check too, which means it's reading the entire file anyway.
It is always better to execute it locally than to run it over the network.
If your application is small and does not need to load many different resources at runtime, then it is OK to run it over the network. It might even be preferable, because running it over the network reads the code (download and load into memory) once, as opposed to manually downloading the file and then running it, which reads the code twice. For example, you could run a clock-widget application over the network.
On the other hand, if your application reads a lot of resources at runtime, then it is absolutely a bad idea to run it over the network, because each read of a resource will go over the network, which is very slow. For example, you probably don't want to run Eclipse over the network.
Another factor to take into consideration is how many concurrent users will be accessing the application at the same time. If there are many, you should copy the application locally and run it from there.
I believe the OS always copies the file to a local temp folder before it is actually executed. There are no round trips to the network after it gets a copy; that happens only once. This is sort of like how a browser works: it first retrieves the file, saves it locally, then runs it off the local temp location where it saved it. In other words, there is no need to copy it manually unless you want to keep a copy for yourself.

Writing to and reading from the same file, at the same time (disk being asynchronous?)

We're creating a web service where we're writing files to disk. Sometimes these files will be read at the same time as they are written.
If we do this - writing and reading from the same file - we sometimes end up with files that are of the same length but where some of the data inside is not the same. So with a 350 MB file we get maybe 20-40 bytes that differ.
This problem mostly occurs when we have 3-4 files being written and read at the same time. Could the problem be that there is no guarantee that after a "write" to disk the data is actually written, i.e., that the disk is asynchronous?
Also, the computer we're testing on is just a standard MacBook Pro, so no fancy disks of any kind.
The bug might be somewhere else, but we just wanted to ask the question and see if anybody knew something about this writing+reading thing.
All modern OSs support concurrent reading and writing of files (given a single writer, obviously), so this is not an OS-level bug. But do make sure you do not have multiple threads/processes trying to append data to the file.
Check your application code. Check the buffers you are using. Make sure your application is synchronized and there are no race conditions between readers and writers.
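As an illustration of that synchronization advice, here is a minimal in-process sketch using C++17's std::shared_mutex, assuming all readers and the single writer live in the same service process (the class and its interface are assumptions):

    #include <fstream>
    #include <iterator>
    #include <mutex>
    #include <shared_mutex>
    #include <string>

    // Many readers may hold the lock concurrently; the writer gets it
    // exclusively, so a reader can never observe a half-finished write.
    class SharedFile {
    public:
        explicit SharedFile(std::string path) : path_(std::move(path)) {}

        void append(const char *data, std::streamsize n) {
            std::unique_lock lock(mutex_);       // exclusive: the one writer
            std::ofstream out(path_, std::ios::binary | std::ios::app);
            out.write(data, n);
            out.flush();                         // hand the data to the OS now
        }

        std::string readAll() const {
            std::shared_lock lock(mutex_);       // shared: any number of readers
            std::ifstream in(path_, std::ios::binary);
            return {std::istreambuf_iterator<char>(in),
                    std::istreambuf_iterator<char>()};
        }

    private:
        std::string path_;
        mutable std::shared_mutex mutex_;
    };

If readers and writers are separate processes, you would need file locks (e.g., flock/fcntl) or some other cross-process coordination instead.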

Get current CPU, RAM and Disk drive usage using Qt

The question speaks for itself: is there a convenient wrapper for system-specific functions in Qt, so I can tell what the current resource usage is?
I want to execute some expensive task when the system is idle. For your information (I might need to put that in another question), I want to calculate the content hash of a file. I thought of doing it with streams instead of a basic readAll() followed by a call to QCryptographicHash, but I haven't found how to do that yet, so I'm stuck with reading the whole file and calling hash()...
You need to use platform-specific code to detect resource usage.
For the fastest file access when calculating hashes, use memory-mapped files (QFile::map()).
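A minimal sketch of that hashing approach, with a streaming fallback via QCryptographicHash::addData() for files that cannot be mapped (hashFile() is a hypothetical helper):

    #include <QFile>
    #include <QCryptographicHash>

    QByteArray hashFile(const QString &path)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return QByteArray();

        QCryptographicHash hash(QCryptographicHash::Sha1);
        if (uchar *data = file.map(0, file.size())) {
            // Memory-mapped path: hash the file contents without copying them.
            hash.addData(reinterpret_cast<const char *>(data),
                         static_cast<int>(file.size()));
            file.unmap(data);
        } else {
            // Streaming fallback: feed the hash in 1 MiB chunks.
            while (!file.atEnd())
                hash.addData(file.read(1 << 20));
        }
        return hash.result();
    }

This also answers the streaming part of the question: addData() lets you hash incrementally, so you never need readAll().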
To get the system data in Linux, you can read '/proc' and display info.
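For instance, a small sketch for Linux (oneMinuteLoad() is a hypothetical helper): the first field of /proc/loadavg is the 1-minute load average, a cheap proxy for whether the system is busy.

    #include <QFile>
    #include <QTextStream>

    double oneMinuteLoad()
    {
        QFile f("/proc/loadavg");
        if (!f.open(QIODevice::ReadOnly | QIODevice::Text))
            return -1.0;                 // sentinel: /proc not available

        QTextStream in(&f);
        double load = 0.0;
        in >> load;                      // first whitespace-delimited field
        return load;
    }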
For Windows, you may want to look at WMI queries.
