I'm working on some general-purpose math libraries that I will want to use in multiple projects. Obviously, it makes sense for all the projects to reference the same code files, so that if I fix something it affects all projects.
One way to do this would be to simply have them all use an INCLUDEPATH and DEPENDPATH that point to the same directory of code files.
But I was reading this: http://qt-project.org/wiki/IncludingProFiles
And I'm wondering if it is better to create a .pri file, i.e. a Qt project file intended for inclusion only. Would this be better? Do you then have to manually change the extension from .pro to .pri?
What's the best setup for sharing code between projects?
Obviously you know the way of including them with INCLUDEPATH and DEPENDPATH, which is fairly easy to set up but a bit annoying. Using a .pri file alleviates some of the headaches, and it makes it easier to add them into a new project as you go along, together with any additional customisation you like to have on your projects, or when you're working on a large multi-application project and need to keep the build settings consistent.
The third option is building them as a library file and just including them, the same as you would any library. Trickier to set up initially than just using a .pri file or the include directives, but it does mean the code is kept as its own separate unit.
If it's a small amount of code but you plan on using it often, I'd use a .pri file; if it's a reasonably large amount of code, I'd go for a library; and if you only plan on using it rarely, I'd use the include directives.
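For reference, a minimal sketch of the .pri approach (file and directory names here are just placeholders): the shared directory gets a mathlib.pri that lists its own sources relative to $$PWD, and each consuming .pro pulls it in with include().

# mathlib.pri -- lives in the shared code directory (hypothetical names)
INCLUDEPATH += $$PWD
DEPENDPATH  += $$PWD
HEADERS += $$PWD/matrix.h
SOURCES += $$PWD/matrix.cpp

# myapp.pro -- any project that wants the shared code
include(../shared/mathlib/mathlib.pri)

Because $$PWD resolves to the directory of the .pri file itself, the paths keep working no matter which project includes it.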
The best way is to create a versioned repository of the source in e.g. git. Make it possible to build a versioned SDK (containing lib*.so/lib*.a and *.h) from that source to share. Most successful projects grow over time, and then this investment will pay off. Once you have learned how to do this, re-use becomes easier in the future and across several platforms. Re-use is very important.
Related
We have a multi-module project consisting of two modules, modA and modB.
modA depends on modB.
modB in turn depends on a set of libraries (libA and libB) for which we also have the source code. These sources have already been adapted by us.
Finally, libA and libB are independent of each other, but both depend on a third library, libC.
What I want to have is a setup, where the three libraries (which are in principle also a multi-module SBT project) can just be "included" in the top level project.
The point here is also that these libraries can be re-used for other projects, too, so the changed sources should not belong to this super project only.
Currently I have tried to solve it by including the libraries as a Git submodule.
Unfortunately, SBT does not (seem to) support hierarchical submodules, so I cannot simply have a second, also multi-module SBT build for all the libraries that just gets included in the "super-super" project.
This current setup is clearly not the SBT way.
What is the intended method of solving this?
Adapting the library separately and re-using it just as a JAR file in the super project is possible, but clumsy, because the consuming project(s) are the main reason to hack the library, so it would be nice if this worked smoothly.
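For what it's worth, here is a rough, untested sketch of how the described layout might be wired up with ProjectRef, assuming the adapted libraries live in a git submodule at libs/ whose own build defines projects with the ids libA and libB (all ids and paths are assumptions, not something from the question):

// build.sbt of the top-level project
lazy val libA = ProjectRef(file("libs"), "libA")
lazy val libB = ProjectRef(file("libs"), "libB")

lazy val modB = (project in file("modB"))
  .dependsOn(libA, libB)

lazy val modA = (project in file("modA"))
  .dependsOn(modB)

lazy val root = (project in file("."))
  .aggregate(modA, modB)

ProjectRef points into another build's directory, so the libraries keep their own multi-module build and can still be re-used by other projects.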
Qt's resource system lets you build an executable that has resources (.qml's, images...) embedded inside it. I'd like to load the files from the filesystem instead, and ship them alongside my executable. Is this a technique supported by Qt? Any gotchas? Any advantages to one way or the other?
Qt doesn't limit you in what you can do here. It's your choice. Qt's resource system is there if you want it, it's not forced down your throat. Not using it doesn't make you automatically wrong.
If you want to deploy files along with your application, go for it - if it makes sense for your particular application.
My personal preference for small (<0.25GB) applications is for a nice monolithic portable executable on Windows, with everything inside, that you don't have to install if you don't wish - mimicking how an app bundle would be on OS X.
The portability helps, as does the slightly stronger locality of reference: most filesystems try harder to keep a single file's blocks together than they do for separate files that merely sit in the same folder.
If there's any utility in power users tweaking the contents of the deployed files, then certainly using the files over using resources has advantages. You could also use the resource system as a fallback for files that are missing - that way a user could provide a replacement file optionally, if it's something you could use.
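A minimal sketch of that fallback idea (the helper name and file layout are just an assumption): look for the file next to the executable first, and fall back to the embedded resource if it is missing.

#include <QCoreApplication>
#include <QFile>
#include <QString>

// Prefer a file shipped next to the executable; fall back to the embedded resource.
QString resolveResource(const QString &name)
{
    const QString onDisk = QCoreApplication::applicationDirPath() + QLatin1Char('/') + name;
    if (QFile::exists(onDisk))
        return onDisk;                      // user-provided replacement file
    return QStringLiteral(":/") + name;     // compiled-in fallback
}

Usage would then be something like QFile f(resolveResource("main.qml"));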
To make it short: if it is appropriate for your application and "business model", it is OK.
Loading from the filesystem is also supported by Qt (using QFile, for example), but Qt's resource system has some advantages:
compression (zlib)
simple usage in the application
no need to worry about missing files (typically)
If adding all resources to the executable will not work for you and you want to keep them separate, look at the option of separate (binary) resource files with Qt.
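A small sketch of that external binary resource option (file names and paths are placeholders): compile the .qrc with rcc into an .rcc file shipped next to the executable, then register it at runtime, e.g. near the top of main().

rcc -binary extras.qrc -o extras.rcc

#include <QCoreApplication>
#include <QResource>

// Returns false if the external .rcc file is missing or invalid.
QResource::registerResource(QCoreApplication::applicationDirPath() + "/extras.rcc");

After registration the contents are reachable through the usual ":/..." paths.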
I'm using premake5 to build a complex application on multiple platforms. My application links against both static and shared/dynamic external libraries.
There seem to be significant build-chain dependencies that break premake-generated 'gmake' makefiles in this case.
Case in Point:
If you mix shared and static libraries in premake 'links' statements, GCC seems to get confused and expects your shared library references to actually be static libraries. When it can't find them, the link stage fails. This is normally handled by prefixing your shared libs with '-Bdynamic' on the link line. Unfortunately there is no way to tell premake5 that an external link lib is static or dynamic, so you have to manually fix up the generated makefiles, which defeats the purpose of a build utility.
This is kind of a showstopper. I don't think you can just feed "-Bdynamic" into the linkoptions because it must be followed by the list of shared libraries.
This seems like a bug in the gmake action (or at least missing functionality).
For those, the best approach is to go to the Premake page on GitHub (https://github.com/premake/premake-core) and create a new issue.
And if you have the time to provide a small reproducible project (a static lib project, a dynamic one, and an application using both, each with only 1 cpp or some simple stuff, plus the premake script) and attach it to the issue, it would also be really appreciated (and would make the issue much easier to deal with ^^)
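Something along these lines might serve as the reproduction script (untested; project names and file lists are made up for illustration):

-- premake5.lua (minimal repro sketch)
workspace "MixedLinking"
   configurations { "Debug" }

project "staticdep"
   kind "StaticLib"
   language "C++"
   files { "staticdep.cpp" }

project "shareddep"
   kind "SharedLib"
   language "C++"
   files { "shareddep.cpp" }

project "app"
   kind "ConsoleApp"
   language "C++"
   files { "main.cpp" }
   links { "staticdep", "shareddep" }   -- mixing both kinds triggers the problem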
I'm frustrated by the lack of flexibility in the Visual Studio project/solution, but I realized that now that it uses MSBuild it might be quite powerful, just without exposing that power in the IDE. So I took a look at the MSBuild docs and don't know where to start! I wish there was a Nutshell book for that. Is there any good tutorial someone could point me to?
More specifically, here are the kinds of things I want to do:
Run a utility pre-processor to generate .cpp and .h files, which are then used by a regular C++ project. There are multiple inputs (whose dependencies need to be tracked; specifically, it should know if a normal .h file it uses has changed) and multiple outputs (at least one .cpp and one .h file) that are used as files in another project.
FWIW, the most complex case involves using Qt in a "normal" C++ project that can be built using VS Express 2010 or MSBuild directly from a script on a server. Since that is a common library, there might be some guides or whatever to help? Note that a VS plug-in is not useful for the building stage, but could be used to initially generate project files that then rely only on MSBuild and stuff included with the source code.
Would somebody please point me in the right direction?
--John
It gets worse from there, but that's my first goal.
I found the kind of information I was looking for in a book MSBuild Trickery: 99 Ways to Bend the Build Engine to Your Will by Brian Kretzler.
In the first 18 pages I found a few key pieces of information that, along with the online documentation I've already gone through, help clear things up enough to try tackling my project. Details of interest include the order in which MSBuild reads and processes the things in the file, quick points on when wildcards in items are expanded and how to handle generated files, and how to see what's happening in practical cases or even step through it in the debugger.
FWIW, I managed to attack my problem without using the murky ".targets"/rules files that I have yet to understand, using only better documented/exampled features. In particular, a Target that has wildcard items doesn't care that the file name extension is not in any ".targets" file; it is simple enough to copy from an example, and it allows the files to be seen in the IDE project and added to the list using the IDE; again, the FileExtension there just works OK.
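As an illustration only (the generator tool "mygen.exe" and all file names are hypothetical, not from the original post), an incremental code-generation step along those lines can be expressed with a Target whose Inputs/Outputs let MSBuild skip it when nothing has changed:

<!-- Fragment of a .vcxproj -->
<ItemGroup>
  <GenInput Include="defs\*.def" />
</ItemGroup>

<Target Name="GenerateSources"
        BeforeTargets="ClCompile"
        Inputs="@(GenInput);common.h"
        Outputs="generated\out.cpp;generated\out.h">
  <Exec Command="mygen.exe @(GenInput, ' ') -o generated" />
</Target>

Listing common.h among the Inputs is what makes the target rerun when that header changes.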
I'm trying to write a simple Qt app that will access zip files and read their content (the content is text files). Many posts say that QuaZip is the solution.
Being new to Qt and coming from a .NET background, I really don't know how to use QuaZip. I downloaded the QuaZip source, but I'm not sure whether I should compile it or use the source code directly in my project. I really have no clue.
Any help is much appreciated.
Regards.
It seems you have various options. You can use the source code directly by copying the relevant files into your project. By default, however, a static library is compiled when you run
> make install
and in that case you need to add the relevant paths to your .pro file so that it finds the static library.
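Something like this in the consuming .pro, for example (the install location is just a placeholder, adjust it to wherever QuaZip ended up):

INCLUDEPATH += /path/to/quazip/include
LIBS += -L/path/to/quazip/lib -lquazip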
Well, you have several options:
Just add the sources to your project. Pros: you can modify them if you want without affecting your other projects. Cons: updating QuaZip is probably going to be a maintenance headache.
Compile it as a static library (qmake CONFIG+=staticlib). Pros: updating is easier as you don't have to deal with structural changes, only recompile QuaZip and rebuild/relink your projects. Cons: you still have to recompile and relink.
Compile QuaZip as a shared (DLL) library. Pros: updating is extremely easy provided that the new version maintains binary compatibility, plus the code is shared among various applications running at the same time. Cons: it will break everything (that is, until you rebuild/relink your apps) if the new version doesn't maintain binary compatibility.
If you just need to read some zip files in some random project, any of these three will do fine. You probably won't have to update QuaZip either, unless you find some bugs that need to be fixed.
As for the binary compatibility: it is guaranteed that third level version changes (x.y.z1 -> x.y.z2) are binary compatible. As for minor version changes (x.y1 -> x.y2), they probably won't be binary compatible for some time, until the Pimpl idiom is implemented properly.