Is there any reason to *not* compress a JAR?

Netbeans has an option to compress JARs after building them. Is there any reason to not do this? Does it make the JVM start slower maybe? Why aren't JARs compressed automatically by all compilers without giving you the option?

It may make the JVM start an infinitesimally small amount of time slower than usual, since compressed entries have to be inflated when their classes are first loaded.
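If you want to compare the two on a concrete build, here is a minimal sketch using the standard jar and unzip command-line tools; app.jar and build/classes are placeholder names for whatever your NetBeans project actually produces:

```
# Default for the jar tool: entries are DEFLATE-compressed
# (roughly what the NetBeans "compress" option gives you).
jar cf app.jar -C build/classes .

# The 0 (zero) option stores entries without any ZIP compression instead.
jar c0f app.jar -C build/classes .

# Compare: unzip -v lists the method (Defl:N vs Stored) and ratio per entry.
unzip -v app.jar
```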

Related

Profiling GPRBuild

I have a large GPR-based project that can take over 30 minutes to compile.
Having analyzed the build process, I noticed many obvious inefficiencies (multiple calls to gprbuild rather than aggregates, excessive use of alternative files rather than configurations, etc.). I am wondering if there is some means to 'profile' the build process to see what takes so long.
In particular, it takes about 5 minutes to recompile when even a single file changes and there is an error in it. In theory it should be pretty quick to realize that that file has to be recompiled (it's the only one that does) and start the compilation process, rapidly discovering the error.
From the verbose output it looks like it takes quite a while just parsing the massive web of gpr files used to define the build, but I would like to know where it spends most of its time.
Thus my question is: Is it possible to profile a build done by gprbuild? If so, how?
From low to high complexity (a rough command sketch for each follows below):
1. Ask gprbuild to report more details about what it is doing with the flag -vh.
2. Run gprbuild through strace.
3. Rebuild gprbuild with the required flags to profile it using gprof (but be aware that gprof doesn't always tell the truth).
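Roughly, those three approaches look like this on the command line; main.gpr and the file names are placeholders, and the exact switches may differ between gprbuild versions:

```
# 1. High verbosity: gprbuild reports in detail what it is doing and why.
gprbuild -vh -P main.gpr

# 2. strace: see where the time goes at the system-call level
#    (project-file parsing, compiler invocations, file accesses, ...).
strace -f -tt -o gprbuild.trace gprbuild -P main.gpr

# 3. gprof: rebuild gprbuild itself with -pg, run it once, then read the
#    profile it writes to gmon.out.
gprof /path/to/instrumented/gprbuild gmon.out > gprbuild-profile.txt
```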

How to use a patch that fixes a bug

I'm going crazy over a bad memory access in a Qt program when I use the QGLWidget::renderText function. My program is super simple, I only use one pointer, but the crash doesn't seem related to that, because the debugger stops sometimes when I call renderText and sometimes when I close the program. I'm not an experienced C++ programmer and this is driving me crazy.
But I've found this BUG REPORT. It seems recent (Updated: 25/Apr/13 8:47 AM), and since I don't know what else to do about this bad memory access, I think it's worth giving it a try.
The patch with the fix is posted there, but I don't know what to do with it. Do I have to recompile all of Qt 4.8? Only the OpenGL part? Can I avoid recompiling everything?
Go to the directory where you compiled Qt and change the file qt/src/opengl/qpaintengine_opengl.cpp. Make the changes that the author made, or download the author's file and replace it in your source directory. Change directory to the main qt directory and run make. Be sure not to re-run ./configure before you do the make or it will rebuild the whole thing.
After make has finished, run sudo make install and it will put the newly compiled QPaintEngine module into your install directory. Unfortunately, I don't know if this will work if you have a number of configurations (like static libraries), but it's worth a try.
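As a rough sketch of that sequence (the Qt source path and the patch file name are placeholders, and whether the patch applies with -p0 or -p1 depends on how it was generated):

```
# In the tree where Qt was originally configured and built:
cd ~/src/qt-everywhere-opensource-src-4.8

# Either edit src/opengl/qpaintengine_opengl.cpp by hand, or apply the
# patch from the bug report.
patch -p0 < qpaintengine_opengl_fix.patch

# Do NOT re-run ./configure here, or the whole tree will be rebuilt.
make

# Install the freshly built libraries over the existing installation.
sudo make install
```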
I have done this with modules in QtMobility hundreds of times. You also have to remember that you have a Frankenstein's Monster version of Qt now, and when you upgrade remember to re-patch if the change was not committed to the newest build.
Hope this helps.

Sluggishness with WebStorm and Meteor?

So, I'm working on a Mac Mini, using WebStorm to fuss with Meteor apps. I'm finding that WebStorm tends to get sluggish, and is constantly trying to index things. I have 4 gigs of RAM, of which 791M seem to be allocated to WebStorm at any one time. My disk drive is 500GB, and I make sure there's always at least 20% to 30% free space.
So, a few questions... is it Meteor's bundle process that's causing WebStorm to do the indexing? Is there any way to optimize the indexing? Make it run less frequently? Ignore the .meteor directory, perhaps? Is 20% of available RAM an appropriate amount to allocate to WebStorm for Meteor development? Are there any other things that people can recommend to optimize WebStorm so it's not so sluggish?
Thanks in advance for any recommendations!
As @Martin said, exclude the directories where Meteor stores its compiled files: .meteor/local and .meteor/meteorite (when using Meteorite).
To get Meteor suggestions / ..., add the Meteor source as an external library: /usr/lib/meteor/packages/. I'm using PhpStorm as well and added the path to the PHP include path (it doesn't matter that it's not a PHP library).
When adding it as a JavaScript library in the project settings, the directory structure gets lost and you have to repeat this when upgrading Meteor.
I'm using PhpStorm for my Meteor development and I am having the same issue as you. I guess the engine in PhpStorm is identical to WebStorm's...
I'm unsure if increasing the amount of RAM available to the IDE will actually have any effect. The issue is related to the IDE re-indexing the folder tree whenever changes are made to any file(s) in the tree.
When Meteor is running and a change is made to a file, Meteor bundles the whole application into the .meteor folder, which is why the tree is re-indexed.
I haven't tried it out yet, but I guess what will actually help is to add the .meteor folder to the ignore list so it won't be re-indexed every time a file change happens.

My Qt static builds run out of memory on other systems

When I build my application statically, it comes out to just over 5 MB, so it's a small, simple program. However, any system that has under 3 GB of RAM can't run the program, saying there's not enough memory. There is nothing very memory intensive in the program, and I did nothing to allocate memory specifically. Any thoughts on what's causing this?
I believe that less than 1 MB of built code can easily fill 10 GB of memory. Make sure that your code does not allocate redundant memory.
There was a problem with the static build. I first got it to work by exporting from the Visual Studio plugin, and then I rebuilt the SDK and the program again, and everything worked fine from Qt Creator.

Any ideas why incremental flex compilation would not work for successive compilations of identical source?

I am running mxmlc in the command-line with -incremental=true. Flex is building the cache file using a checksum the first time. Subsequent compilations fail with this message:
Failed to match the compile target with path_to_cache/projectname_329043.cache. The cache file will not be reused.
- path_to_cache exists
- the cache file exists in path_to_cache
- the compiler is not trying to create a new cache file, so I assume it is generating the same checksum
My environment:
- Flex 3.0
- Mac OS X 10.4.x
I just ran across this issue myself and, after not finding the answer anywhere on the web, I bashed my head against mxmlc in practically trial-and-error fashion until finding the answer. In my case, I was regenerating the Flex config XML file each time I compiled from within Ant. It turns out that this is the error you get when the compiler thinks the config has changed. You can test this by simply touching your config file and running against unmodified sources. So, if the timestamp is changing on your Flex config.xml between compiles, that is likely the culprit.
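A minimal way to reproduce the behaviour described above, assuming a config file named flex-config.xml and an application class Main.mxml (both names are placeholders):

```
# The first compile creates the incremental cache file:
mxmlc -incremental=true -load-config+=flex-config.xml Main.mxml

# Per the answer above, a timestamp change alone (contents unchanged) should be
# enough to trigger "Failed to match the compile target ..." on the next run:
touch flex-config.xml
mxmlc -incremental=true -load-config+=flex-config.xml Main.mxml
```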
It could be a permissions issue. Have you tried running with sudo? I wouldn't recommend doing that permanently, but if using sudo makes the error message go away, then you know it's a permissions issue; and you can move on to the proper way to resolve it.
You could also try going into Disk Utility and doing a check/repair of disk permissions. OSX has been notorious for needing this done occasionally.
