Out of memory allocating 1073745919 bytes - Qt

I am trying to load a car mesh in QML, but the mesh.obj (under qrc) is apparently too big. I added CONFIG += resources_big to my .pro file, but nothing changed. Then I tried to load it from outside the application, but that doesn't work either. How can I resolve this? I am using Qt 5.10 with MinGW as the compiler.

You don't want to put huge files in a qrc resource; that results in significant overhead. It bloats your executable file, which takes up RAM; the file has to be additionally loaded into RAM by Qt's resource virtual file system; and you still have to load it from there into RAM yet again in order to use it.
Put it out on the file system in the app folder, where you can load it from disk straight into RAM, significantly reducing memory usage.
Also, 3D meshes lend themselves to compression really well, and Qt ships compression support for QByteArray (qCompress() and qUncompress()), so you might want to put that to work to reduce the deployment footprint.
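For example, a minimal sketch of that combination, assuming the compressed mesh ships as mesh.obj.z next to the executable (both file names are hypothetical, not from the question); qCompress() and qUncompress() are the stock Qt helpers:

    #include <QByteArray>
    #include <QCoreApplication>
    #include <QFile>

    // One-time packing step, e.g. in a small build-time helper tool.
    bool packMesh(const QString &src, const QString &dst)
    {
        QFile in(src), out(dst);
        if (!in.open(QIODevice::ReadOnly) || !out.open(QIODevice::WriteOnly))
            return false;
        out.write(qCompress(in.readAll(), 9)); // 9 = maximum compression level
        return true;
    }

    // At startup: read the compressed mesh from the app folder and inflate it
    // straight into one in-memory buffer, bypassing the resource system.
    QByteArray loadMesh()
    {
        QFile f(QCoreApplication::applicationDirPath() + "/mesh.obj.z");
        if (!f.open(QIODevice::ReadOnly))
            return QByteArray();
        return qUncompress(f.readAll());
    }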

Related

What are the pros and cons of locking the actual file vs. an empty lock file?

My program is writing to a binary file, and there could be multiple instances of the program accessing the same binary file for the same user. In Unix/Linux, I see some programs (particularly daemon processes) locking an empty lock file instead of the actual shared data that needs to be locked (so instead of locking ~/.data/foo they lock ~/.data/foo.lck). What are the pros and cons of locking the actual file vs an empty lock file?
flock is not supported over NFS or other network file systems on all versions of Unix (Linux did not even support it over NFS until 2.6.12). On the other hand, O_CREAT|O_EXCL is much more reliable over many more file systems, and has been for much longer.
Even on systems that do support flock on network filesystems (or in cases where you don't need that flexibility), O_CREAT|O_EXCL together with flock is very useful, because it distinguishes a clean shutdown from an unclean one: flock helpfully goes away automatically, but it also, unhelpfully, doesn't tell you why it went away.
flocking the file itself also prevents atomic replacement (write a copy, erase the old file, rename), or any other scheme where you might erase the existing file: "the actual file" doesn't necessarily keep the same inode over the entire run of the program. A separate file is much more convenient in those cases as well. This is very common in those foo.lck cases, because often you're locking foo for a short period of time and might erase it in the process.
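A minimal sketch of that O_CREAT|O_EXCL lock-file pattern on POSIX, assuming a foo.lck-style path (the helper names are mine, not from the question):

    #include <cerrno>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    // Try to take the lock. O_CREAT|O_EXCL makes creation atomic: exactly one
    // process can create the file; every other process gets EEXIST.
    bool acquireLock(const char *lockPath)
    {
        int fd = open(lockPath, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd == -1) {
            if (errno == EEXIST)
                fprintf(stderr, "lock %s is already held\n", lockPath);
            return false;
        }
        close(fd); // the file's existence is the lock, not the descriptor
        return true;
    }

    // On clean shutdown, remove the lock. After a crash the file survives,
    // which is exactly what lets you tell clean from unclean shutdown.
    void releaseLock(const char *lockPath)
    {
        unlink(lockPath);
    }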
I see three cons of an empty lock file:
The directory's permissions must allow you to create a file.
In case of disk space issues, this might fail.
In case your program crashes, the lockfile is still present.
I see one con of modifying the actual file's name:
In case your program crashes, your file has been altered (only the filename, but it might generate confusion).
Obviously, I see one big advantage of the empty lock file:
Your original file does not change at all.
By the way, I believe this question is better suited for the SoftwareEngineering community.

MPI parallel write to a TIFF file

I'm trying to write a TIFF file in MPI code. Different processors have different parts of the image, and I want to write the image to the file in parallel.
The write fails; only the first processor can write to the file.
How do I do this?
My implementation reports no errors; it just does not work.
I used h = TIFFOpen(file, "a+") on each processor to open the same file (I am not sure whether this is the right way or not). Then each processor that is responsible for a directory writes its header in its own place using TIFFSetDirectory(h, directorynumber), then writes the contents of that directory, and finally I call TIFFWriteDirectory(h). The result is that only the first directory is written to the file.
I thought I might need to open the file using MPI-IO, but then the handle would no longer come from TIFFOpen?
Different MPI tasks are independent programs, running on independent hosts from the OS point of view. The TIFF library is not designed to handle parallel operations, so when every process opens the same file (on a shared filesystem), the first one succeeds and all the rest fail, because they find the file already opened.
Unless you are dealing with huge images (e.g. astronomical ones), where parallel I/O matters for performance (and where you need a filesystem that supports it; I am aware of IBM GPFS), I would avoid writing a custom TIFF driver on top of MPI-IO.
Instead, the typical solution is to gather (MPI_Gather()) the image parts on the process with rank == 0 and let only that process save the TIFF file.
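A minimal sketch of that gather-then-write approach, assuming equal-sized 8-bit grayscale strips and illustrative dimensions (none of this comes from the question's code):

    #include <mpi.h>
    #include <tiffio.h>
    #include <vector>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int width = 512, stripRows = 64; // hypothetical dimensions
        // This rank's part of the image (filled with dummy data here).
        std::vector<unsigned char> strip(width * stripRows,
                                         static_cast<unsigned char>(rank));

        std::vector<unsigned char> image;
        if (rank == 0)
            image.resize(static_cast<size_t>(width) * stripRows * size);

        // Collect every rank's strip into one contiguous buffer on rank 0.
        MPI_Gather(strip.data(), width * stripRows, MPI_UNSIGNED_CHAR,
                   image.data(), width * stripRows, MPI_UNSIGNED_CHAR,
                   0, MPI_COMM_WORLD);

        if (rank == 0) { // only rank 0 ever touches the TIFF file
            TIFF *tif = TIFFOpen("out.tif", "w");
            TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
            TIFFSetField(tif, TIFFTAG_IMAGELENGTH, stripRows * size);
            TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
            TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
            TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
            for (int row = 0; row < stripRows * size; ++row)
                TIFFWriteScanline(tif, image.data() + row * width, row, 0);
            TIFFClose(tif);
        }
        MPI_Finalize();
        return 0;
    }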

Loading media file

I'm using Qt with Phonon to play some MP3 files. The problem is that I need multiple MP3 files playing together, and they do not play in a synchronized fashion, especially when I seek.
I've noticed that synchronization is better from the hard drive than from a USB drive, so it seems the program doesn't load the whole file into memory. Since I need to put this program on a USB drive, is there any way to load a file into memory and then play it from there?
If your concern is reading from the filesystem, maybe you can just cache your sound files into QBuffer objects ahead of time, and then hand them to the Phonon::MediaSource(QIODevice *ioDevice) constructor.
That way you are no longer depending on the filesystem to maintain stable I/O; it's in memory, like you wanted.
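A sketch of that caching approach with Phonon from Qt 4 (the file path and function name are placeholders); note the QBuffer must stay alive for as long as playback runs, hence the explicit parent:

    #include <Phonon/AudioOutput>
    #include <Phonon/MediaObject>
    #include <Phonon/MediaSource>
    #include <Phonon/Path>
    #include <QBuffer>
    #include <QFile>

    Phonon::MediaObject *playFromMemory(const QString &path, QObject *parent)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return 0;
        QByteArray data = file.readAll(); // pull the whole mp3 into RAM once

        QBuffer *buffer = new QBuffer(parent); // must outlive playback
        buffer->setData(data);
        buffer->open(QIODevice::ReadOnly);

        Phonon::MediaObject *media = new Phonon::MediaObject(parent);
        Phonon::AudioOutput *output =
            new Phonon::AudioOutput(Phonon::MusicCategory, parent);
        Phonon::createPath(media, output);
        media->setCurrentSource(Phonon::MediaSource(buffer)); // QIODevice source
        media->play();
        return media;
    }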

Is it better to execute a file over the network or copy it locally first?

My WinForms app needs to run an executable that's sitting on a share. The exe is about 50 MB (it's a setup.exe type of file). My app will run on many different machines/networks with varying speeds (some fast, but some awfully slow, at barely 10BaseT speeds).
Is it better to execute the file straight from the share or is it more efficient to copy it locally and then execute it? I am talking in terms of annoying the user the least.
Locally is better. A copy will read each byte of the file exactly once: no more, no less. As you execute, you may revisit code that has fallen out of cache and has to be pulled over the network again.
As a setup program, I would assume that the engine will want to do some kind of CRC or other integrity check too, which means it's reading the entire file anyway.
It is always better to execute it locally than to run it over the network.
If your application is small and does not need to load many different resources during runtime, then it is OK to run it over the network. It might even be preferable, because running it over the network reads the code (download plus load into memory) once, as opposed to manually downloading the file and then running it, which takes two reads. For example, you could run a clock widget application over the network.
On the other hand, if your application reads a lot of resources during runtime, then it is absolutely a bad idea to run it over the network, because each read of a resource goes over the network, which is very slow. For example, you probably don't want to be running Eclipse over the network.
Another factor to take into consideration is how many concurrent users will be accessing the application at the same time. If there are many, you should copy the application locally and run it from there.
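The app in the question is WinForms, but the copy-then-run pattern is language-agnostic; here is a C++17 sketch with a placeholder UNC path:

    #include <cstdlib>
    #include <filesystem>
    #include <string>

    int runInstallerLocally()
    {
        namespace fs = std::filesystem;
        const fs::path remote = R"(\\server\share\setup.exe)"; // hypothetical
        const fs::path local  = fs::temp_directory_path() / "setup.exe";

        // One sequential pass over the network instead of scattered reads
        // while the installer executes.
        fs::copy_file(remote, local, fs::copy_options::overwrite_existing);

        // Launch the local copy (quoted in case the temp path has spaces).
        return std::system(("\"" + local.string() + "\"").c_str());
    }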
I believe the OS always copies the file to a local temp folder before it is actually executed. There are no round trips to the network after it gets a copy; that happens only once. This is sort of like how a browser works: it first retrieves the file, saves it locally, then runs it off the local temp copy. In other words, there is no need to copy the file manually unless you want to keep a copy for yourself.

Config file performance question

I have around 60 web apps on a web server, and all of these apps have some of the same appSettings values in their web.config. These settings are loaded into memory as soon as each application starts. I would like to centralise these values in one config file for all the apps to load.
My question is: if I load all of the apps up at the same time, would there be any performance issues from accessing this same config file simultaneously?
Cheers
Read locks are generally not exclusive, so any number of applications can read from the same file at the same time. If you're not specifically requesting read exclusivity, you should be fine.
You should look at how you're loading the configuration file into each application.
See http://en.wikipedia.org/wiki/File_locking for more information.
Probably, because you need disk I/O to access the file; but if the values then stay in memory, I would say the performance issue afterwards is minimal. Just be sure to read the file without locking it (in .NET, I believe, by opening it with the FileShare.ReadWrite enum).
You might even see a small performance improvement, since the file will be cached by the operating system when the first application reads it; subsequent reads will come straight from memory.
But the only way to know for sure is to measure and see.
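For illustration, the non-exclusive read looks like this at the Win32 level (.NET's FileShare.ReadWrite maps to the same share-mode bits); the config path is a placeholder:

    #include <windows.h>
    #include <vector>

    std::vector<char> readSharedConfig(const wchar_t *path)
    {
        // FILE_SHARE_READ | FILE_SHARE_WRITE: don't block the other 59 apps
        // (or a writer) while we read.
        HANDLE h = CreateFileW(path, GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               nullptr, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE)
            return {};

        DWORD size = GetFileSize(h, nullptr);
        std::vector<char> buf(size);
        DWORD read = 0;
        ReadFile(h, buf.data(), size, &read, nullptr);
        CloseHandle(h);
        buf.resize(read); // keep only what was actually read
        return buf;
    }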
