Why compile GraphicsMagick with --with-modules

According to the docs, GraphicsMagick may be compiled with the --with-modules option to "enable building dynamically loadable modules". Put simply, why would one want to do this?

Using modules decouples GraphicsMagick from many of its add-on libraries. It provides the opportunity to add new formats to GraphicsMagick without modifying GraphicsMagick itself. Since much less code (shared libraries and coders) is loaded into the running application, on some systems there is considerably less overhead when executing the 'gm' utility. The effect is operating-system dependent, though, and on some systems modules actually make it slower.
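For concreteness, a build with modules enabled looks something like the following sketch (the prefix and everything besides --with-modules are illustrative):

```sh
# Sketch: enable loadable coder modules at configure time.
./configure --with-modules --prefix=/usr/local
make
make install

# Verify: the "Feature Support" section of the version report should
# now show loadable module support as enabled.
gm version
```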

Related

Simple image operations for .NET Core 6 and beyond

I have some C# code that relies on doing really basic graphics operations such as getting and setting pixels, and drawing text over images. It uses the System.Drawing package, which Microsoft has dropped the ball on.
Microsoft says this: migrate to one of the following libraries: ImageSharp, SkiaSharp, Microsoft.Maui.Graphics. The last of these appears to be ill-documented and unstable.
Could anyone with experience with these packages suggest an easy and simple way forward?
I am even wondering if it might pay to write my own library for the Windows bitmap format (which is sufficient for my purposes).
Yours frustratingly...
It depends on which platform you are building your app for, but SkiaSharp 2.80.2 or 2.80.3 can be a good choice. There are quite a few problems in newer packages, so you need to be careful which version you install so that you do not run into performance problems or bugs.
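For the operations the question mentions (pixel get/set and drawing text over an image), a minimal sketch against the SkiaSharp 2.80.x API might look like this; the file paths are placeholders, and note that newer SkiaSharp versions move the text settings from SKPaint to SKFont:

```csharp
// Sketch only: basic pixel access and a text overlay with SkiaSharp 2.80.x.
// "input.png" and "output.png" are placeholder paths.
using System;
using System.IO;
using SkiaSharp;

using var bitmap = SKBitmap.Decode("input.png");

// Get and set individual pixels.
Console.WriteLine(bitmap.GetPixel(10, 10)); // read a pixel
bitmap.SetPixel(10, 10, SKColors.Red);      // write a pixel

// Draw text directly onto the bitmap via a canvas.
using (var canvas = new SKCanvas(bitmap))
using (var paint = new SKPaint { Color = SKColors.White, TextSize = 24, IsAntialias = true })
{
    canvas.DrawText("Hello", 20, 40, paint);
}

// Encode the result back out as PNG.
using var image = SKImage.FromBitmap(bitmap);
using var data = image.Encode(SKEncodedImageFormat.Png, 100);
using var output = File.OpenWrite("output.png");
data.SaveTo(output);
```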

JIT compilers for math

I am looking for a JIT compiler or a small compiler library that can be embedded in my program. I intend to use it to compile dynamically generated code that performs complex-number arithmetic. The generated code is very simple in structure: no loops, no conditionals, but it can be quite long (a few MB when compiled by GCC). The performance of the resulting machine code is important, while I don't really care about the speed of compilation itself. Which JIT compiler is best for my purpose? Thanks!
Detailed requirements
Support double-precision complex-number arithmetic
Support basic optimization
Support many CPUs (x86 and x86-64 at least)
Make use of SSE on supported CPUs
Support a stack or a large set of registers for local variables
ANSI-C or C++ interface
Cross platform (mainly Linux, Unix)
You might want to take a look at LLVM.
CINT is a C++(ish) environment that offers the ability to mix compiled code and interpreted code. There is a set of optimization tools for the interpreter. ROOT extends this even further by supporting compile-and-link at run-time (see the last section of http://root.cern.ch/drupal/content/cint-prompt), though it appears to use the system compiler and thus may not help. All the code is open source.
I make regular use of all these features as part of my work.
I don't know if it makes active use of SIMD instructions, but it seems to meet all your other requirements.
Since you seem to be using the compile-to-dynamic-library-and-link-on-the-fly method already, you might consider TCC, though I don't believe that it does much optimization, and I suspect that it does not support SIMD.
Sounds like you want to be able to compile on the fly and then dynamically load the compiled library (.DLL or .so). This would give you the best performance, with an ANSI-C or C++ interface. So, forget about JITing and consider spawning a C/C++ compiler to do the compilation.
This of course assumes that a compiler can be installed at the point where the dynamically generated code is actually generated.
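To make the spawn-and-load approach concrete, here is a rough sketch of the pattern (shown in C# merely as one host language; from C or C++ the same flow uses system() plus dlopen/dlsym). It assumes gcc is on the PATH and Linux-style .so naming; the generated evaluate function is purely illustrative:

```csharp
// Sketch: emit straight-line C code, spawn the system compiler, load the
// resulting shared library, and call the generated function.
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

class OnTheFlyCompile
{
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate double Evaluate(double re, double im);

    static void Main()
    {
        string src = Path.Combine(Path.GetTempPath(), "gen.c");
        string lib = Path.Combine(Path.GetTempPath(), "libgen.so");

        // The dynamically generated code: no loops, no conditionals.
        File.WriteAllText(src,
            "double evaluate(double re, double im) { return re * re + im * im; }");

        // Spawn the compiler with optimization and SSE enabled.
        using var gcc = Process.Start("gcc", $"-O2 -msse2 -shared -fPIC -o {lib} {src}");
        gcc.WaitForExit();
        if (gcc.ExitCode != 0) throw new InvalidOperationException("compile failed");

        // Load the library and bind the generated entry point.
        IntPtr handle = NativeLibrary.Load(lib);
        var evaluate = Marshal.GetDelegateForFunctionPointer<Evaluate>(
            NativeLibrary.GetExport(handle, "evaluate"));

        Console.WriteLine(evaluate(3.0, 4.0)); // 25
        NativeLibrary.Free(handle);
    }
}
```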

What is currently the best build system [closed]

A few years ago I looked into using some build system that isn't Make, and tools like CMake and SCons seemed pretty primitive. I'd like to find out if the situation has improved. So, under the following criteria, what is currently the best build tool:
platform agnostic: should work on Windows, Linux, Mac
language agnostic: should have built-in support for common things like building C/C++ and other static languages. I guess it doesn't need to support the full autotools suite.
extensible: I need to be able to write rules to generate files, e.g. from reStructuredText, LaTeX, custom formats, etc. I don't really care what language I have to write the rules in, but I would prefer a real language rather than a DSL.
I would prefer to avoid writing any XML by hand, which I think Ant, for example, requires.
Freely available (preferably open source)
The term "best" is slightly subjective, but I think answers can be rated objectively by the criteria above.
I'd definitely put my vote up for premake. Although it is not as powerful as its older brothers, its main advantage is absurd simplicity and ease of use. It makes writing multi-compiler, multi-platform code a breeze, and natively generates Visual Studio solutions, XCode projects, Makefiles, and others, without any additional work needed.
So, judging purely by the criteria set forth in the question, the build system that seems like the best fit is probably waf - pure Python, provides support for C++ and other languages, general, powerful, not a DSL.
However, from my personal experience, I prefer CMake for C++ projects. (I tried CMake, SCons, and waf, and liked them in roughly that order). CMake is a general solution, but it has built-in support for C++ that makes it nicer than a more generic solution when you're actually doing C++.
CMake's build model for C++ is more declarative and less imperative, and thus, to me, easier to use. The CMake language syntax isn't great, but a declarative build with odd syntax beats an imperative build in Python. Of the three, CMake also seems to have the best support for "advanced" things like precompiled headers. Setting up precompiled headers reduced my rebuild time by about 70%.
Other pluses for CMake include decent documentation and a sizable community. Many open source libraries have CMake build files either in-tree or provided by the CMake community. There are major projects that already use CMake (OGRE comes to mind), and other major projects, like Boost and LLVM, are in the process of moving to CMake.
Part of the issue I found when experimenting with build systems is that I was trying to build a NPAPI plugin on OS X, and it turns out that very few build systems are set up to give XCode the exact combination of flags required to do so. CMake, recognizing that XCode is a complex and moving target, provides a hook for manually setting commands in generated XCode projects (and Visual Studio, I think). This is Very Smart as far as I'm concerned.
Whether you're building a library or an application may also determine which build system is best. Boost still uses a jam-based system, in part because it provides the most comprehensive support for managing build types more complex than "Debug" and "Release". Most Boost libraries have five or six different versions, especially on Windows, anticipating people needing compatible libraries that link against different versions of the CRT.
I didn't have any problems with CMake on Windows, but of course your mileage may vary. There's a decent GUI for setting up build dependencies, though it's clunky to use for rebuilds. Luckily there's also a command-line client. What I've settled on so far is to have a thin wrapper Makefile that invokes CMake from an objdir; CMake then generates Makefiles in the objdir, and the original Makefile uses them to do the build. This ensures that people don't accidentally invoke CMake from the source directory and clutter up their repository. Combined with MinGW, this "CMake sandwich" provides a remarkably consistent cross-platform build experience!
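A minimal version of that wrapper Makefile might look like this (the directory name is illustrative, and recipe lines must be indented with tabs):

```make
# Thin "CMake sandwich" wrapper: CMake is only ever invoked from the
# objdir, so generated files never land in the source tree.
BUILDDIR := build

.PHONY: all clean

all: $(BUILDDIR)/Makefile
	$(MAKE) -C $(BUILDDIR)

$(BUILDDIR)/Makefile:
	mkdir -p $(BUILDDIR)
	cd $(BUILDDIR) && cmake ..

clean:
	rm -rf $(BUILDDIR)
```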
Of course that depends on what your priorities are. If you are looking primarily for ease of use, there are at least two new build systems that hook into the filesystem to automatically track dependencies in a language agnostic fashion.
One is tup:
http://gittup.org/tup/
and the other is fabricate:
http://code.google.com/p/fabricate/
The one that seems to be the best-performing, portable, and mature (and the one I have actually used) is tup. The guy who wrote it even maintains a toy Linux distro where everything is a git submodule, and everything (including the kernel) is built with tup. From what I've read about the kernel's build system, this is quite an accomplishment.
Also, Tup cleans up old targets and other cruft, and can automatically maintain your .gitignore files. The result is that it becomes trivial to experiment with the layout and names of your targets, and you can confidently jump between git revisions without rebuilding everything. It's written in C.
If you know haskell and are looking for something for very advanced use cases, check out shake:
http://community.haskell.org/~ndm/shake/
Update: I haven't tried it, but this new "buildsome" tool also hooks into the filesystem, and was inspired by tup, so is relevant:
https://github.com/ElastiLotem/buildsome
CMake
CMake is an extensible, open-source system that manages the build process in an operating system and in a compiler-independent manner.
Gradle seems to match all the criteria mentioned above.
It's a build system that combines the best of Maven and Ant. To me, that's the best.
The Selenium project is moving over to Rake, not because it's the best but because it handles multiple languages slightly better than all the other build tools and is cross-platform (developed in Ruby).
All build tools have their issues and people learn to live with them. Something that runs on the JVM tends to be really good for building apps, so: Ant, Maven (I know it's hideous), Ivy, Rake.
FinalBuilder is well known in the Windows world.
smooth build matches most of your requirements.
platform agnostic: yes, it's written in Java
language agnostic: it doesn't support C/C++ yet, only Java, but it is extensible via plugins written in Java, so adding support for more compilers is not a problem
extensible: yes, you can implement a smooth function via a Java plugin; you can also create a smooth function by defining it as an expression built from other smooth functions
I would prefer to avoid writing any XML: you won't see a single line of it in smooth build
Freely available: yes, Apache 2 license
Disclosure: I'm the author of smooth build.

How do small software patches correct big software?

One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions of their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seems to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time the patches I see can't possibly replace entire applications, or even small files that are used within applications. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
This is usually implemented using binary diff algorithms -- diff the most recently released version against the new code. If the user's running the most recent version, you only need to apply the diff. It works particularly well for software, because compiled code is usually pretty similar between versions. Of course, if the user's not running the most recent version, you'll have to download the whole thing anyway.
There are a couple of implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic, it shouldn't be too difficult to port them if you feel like a project.
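As a toy illustration of the core idea (the record layout below is invented for this example; it is not the bsdiff or xdelta format), a patch applier that overwrites only the changed byte ranges of an existing binary could be as small as:

```csharp
// Toy patch applier: the patch is a sequence of records, each being
// (long offset, int length, byte[length] replacementBytes). This format
// is invented for illustration -- not bsdiff or xdelta.
using System.IO;

static class TinyPatcher
{
    public static void Apply(string targetPath, string patchPath)
    {
        using var target = new FileStream(targetPath, FileMode.Open, FileAccess.ReadWrite);
        using var patch = new BinaryReader(File.OpenRead(patchPath));

        while (patch.BaseStream.Position < patch.BaseStream.Length)
        {
            long offset = patch.ReadInt64();
            int length = patch.ReadInt32();
            byte[] data = patch.ReadBytes(length);

            // Overwrite only the changed region; the rest of the file is
            // left untouched, which is why patches can stay small.
            target.Seek(offset, SeekOrigin.Begin);
            target.Write(data, 0, data.Length);
        }
    }
}
```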
If you are talking about patching Windows applications, then what you want to look at are .MSP files. These are similar to an .MSI but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application install. These are typically updated DLLs and resource files, but could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then, if the user selects "Repair" from Add/Remove Programs, the updated patch files are used as well.
I'm thinking that the binary diff method discussed by John Millikin must be used in other operating systems. Although you could make it work in Windows, it would be somewhat alien.

When should one use a project reference as opposed to a binary reference?

My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system, as changes would be easily identifiable from within the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project has no impact on the consuming application, as a stable version of the binary is referenced rather than an in-flux version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is referencing the binary files to gain all the advantages you note, but having the binaries built by a common build system, with the source code in a common location accessible from all developer machines (at least when they're on the network at work), so that any debugging can in fact dive into library code if necessary.
However, on the same note, we've also tagged a lot of the base classes with appropriate attributes to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would otherwise be swamped by code from the base libraries. This way, when you hit the Step Into shortcut key on a call into a library class, you resurface in the next piece of code at your current level, instead of having to wade through tons of library code.
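The attributes in question are presumably the System.Diagnostics debugger attributes; here is a sketch of the tagging (the class is invented for illustration):

```csharp
using System.Diagnostics;

// Library code tagged so the debugger skips it: Step Into on a call from
// application code resurfaces at the application level instead of
// descending into the library. (ConnectionPool is an invented example.)
[DebuggerNonUserCode]
public class ConnectionPool
{
    [DebuggerStepThrough]
    public void Acquire()
    {
        // ... proven, stable library code ...
    }
}
```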
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file that contains all the projects and, basically, just everything, ReSharper 4 seems to have some kind of coronary, and Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as third-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whoever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume, by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (.pdb) files with each build of the library and make those available as well. The other option is to create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want it in your solution, or it may later be split out of your solution, send all library output to a common bin directory and reference it there.
I have done this in order to allow developers to open a tight solution that only has the Domain, test, and Web projects. Our Windows services, Silverlight stuff, and web control libraries are in separate solutions that include the projects you need when looking at those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application is, by definition, closely related to the app; however, you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals spaced further apart than normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion
In short, I separate it by concept.
