I am new to FontForge. When creating new fonts, I know there are two ways to set up a project: one is a single .sfd file, which can become very large; the other is a .sfdir folder, which may contain too many files. I want to group the glyphs by Unicode ranges, so that there are only a certain number of (not-so-large) files. How can I do that?
Those are the only two options: single file or single directory. I don't recommend splitting into multiple fonts, as it would be much harder to manage font-wide features including kerning.
You could perhaps save as sfdir then create a parallel directory structure containing shortcuts or symlinks to the individual glyph files. But without knowing what your pain point is I can't say if this would be useful or worth the effort.
Over the past few weeks I have been getting into Ada, for various reasons. My personal reasons for using Ada are out of scope for this question.
Recently I started using the gprbuild command that comes with the Windows version of GNAT, in order to manage my applications on a per-project basis. That is, I want to be able to define certain attributes per project, rather than setting up the compile phase manually each time.
Currently I name my files according to what seems to be the standard for gprbuild, although I could very well be wrong: each period in the package name becomes a - in the file name, and underscores stay as _. As such, a package named App.Test.File_Utils would live in files named app-test-file_utils, with .ads and .adb extensions accordingly.
In the .gpr project file I have specified:
for Source_Dirs use ("app/src/**");
so that I am allowed to use multiple directories for storing my files, rather than needing to have them all in the same directory.
The Problem
The problem that arises, however, is that file names tend to get very long. As I am already putting the files in a directory based on the package name contained by the file, I was wondering if there is a way to somehow make the compiler understand that the package name can be retrieved from the file's directory name.
That is, rather than having to name the file for App.Test.File_Utils app-test-file_utils, I would like it to reside in the app/test directory under the name file_utils.
Is this doable, or will I be stuck with the horrors of eventually having to name my files along the lines of: app-test-some-then-one-has-more_files-another_package-knew-test-more-important_package.ads? That is, assuming I have not missed something about how an Ada application should actually be structured.
What I have tried
I tried looking for answers in the Naming package configuration for gpr files in the documentation, but to no avail. I have also been browsing the web for information, but decided it might be better to ask on Stack Overflow, so that other people who might struggle with this problem in the future (granted it is a problem in the first place) can also find help.
Any pointers in the right direction would be very helpful!
In the top-secret GNAT documentation there is a description of how to use non-default file names. It's a great deal of effort. You will probably give up, use the default names, and put them all in a single directory.
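The effort comes from the fact that each compilation unit needs its own naming exception. A sketch of what that looks like in a .gpr file, using the question's App.Test.File_Utils unit as the example (the file names here are the ones the asker wants, not defaults):

```ada
--  Hypothetical sketch: per-unit naming exceptions in a project file.
--  One Spec/Body pair is needed for every unit with a non-default name,
--  which is why this does not scale well to large projects.
project App is
   for Source_Dirs use ("app/src/**");

   package Naming is
      for Spec ("App.Test.File_Utils") use "file_utils.ads";
      for Body ("App.Test.File_Utils") use "file_utils.adb";
   end Naming;
end App;
```

Note that with `Source_Dirs` using `**`, each such file name must still be unique across the whole source tree, since gprbuild does not derive unit names from directory names.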
You can also simplify much of the effort by using GPS and letting it build your project file as you add files to your source directories.
I have a Plone instance which contains some structures which I need to copy to a new Plone instance (but much more which should not be copied). Those structures are document trees ("books" of Archetypes folders and documents) which use resources (e.g. images and animations, by UID) outside those trees (in a separate structure which of course contains lots of resources not needed by the ones which need to be copied).
I tried already to copy the whole data and delete the unneeded parts, but this takes very (!) long, so I'm looking for a better way.
Thus, the idea is to traverse my little forest of document trees and transfer them and the resources they need (sparsely rebuilding that separate structure) to the new Plone instance. I have full access to both of them.
Is there a suggested way to accomplish this? Or should I export all of them, including the resources structure, and delete all unneeded stuff afterwards?
I found out that each time that I make this type of migration by hand, I make mistakes that force me to do it again.
OTOH, if migration is automated, I can run it, find out what I did wrong, fix the migration, and do it all over again until I am satisfied.
In this context, to automate your migration, I advise you to look at collective.transmogrifier.
I recommend jsonmigrator, which is a twist on the collective.transmogrifier that Godefroid mentioned. See my blog post on it here
You can even use it to migrate from Archetypes to Dexterity types (you just need matching field names and, roughly speaking, matching types).
Selecting the resources to import will be tricky, though. Perhaps you can find a way to iterate through your document trees and "touch" (in the Unix sense) every resource you use, then copy across only the resources whose timestamp indicates that they have been touched.
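The mark-then-copy idea can be sketched in plain Python. Everything below is a stand-in: real Plone objects, UID resolution, and the copy step would replace the dicts and helper functions used here for illustration.

```python
# Hypothetical sketch of "touch then copy": walk the document trees,
# record every resource UID they reference, then keep only those resources.
# Plain dicts stand in for real Plone content objects.

def collect_used_uids(docs):
    """Walk the documents and gather every resource UID they reference."""
    used = set()
    for doc in docs:
        used.update(doc.get("resource_uids", []))
    return used

def copy_needed_resources(resources, used_uids):
    """Keep only resources whose UID was 'touched' during the walk."""
    return {uid: res for uid, res in resources.items() if uid in used_uids}

docs = [
    {"title": "Book A", "resource_uids": ["img-1", "anim-2"]},
    {"title": "Book B", "resource_uids": ["img-1"]},
]
resources = {"img-1": "...", "anim-2": "...", "img-3": "never referenced"}

used = collect_used_uids(docs)
needed = copy_needed_resources(resources, used)
print(sorted(needed))  # → ['anim-2', 'img-1']
```

The same two-pass structure (collect references, then filter the resource folder) should map onto a transmogrifier pipeline as well.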
Suppose I have a directory that contains volumetric data from a variety of sources, say PET and a 4D CT. For any given dataset I know how to use, say, vtkGDCMImageReader to load a 3D image from a series of files. To handle multiple modalities and/or 4D datasets, I am currently just peeking at tags manually, dividing the files up into lists, and parsing them separately.
Is there a particularly general way of doing this, or even better, a method within GDCM? What I am doing seems to work but feels like a bit of a hack; there must be a proper way of doing it, I just can't seem to find it.
You can check out the following example
For example, can I put 100,000,000 documents in one Plone folder?
In recent Plone releases, folders use a BTree-backed storage, so you can store as many objects in there as you like.
The biggest folder that I can access in a production environment currently stores 25k items.
You will, of course, need to deal with large numbers of items in one location appropriately. The usual caveats about really large numbers of content in a site apply.
I work in the field of bioinformatics. My daily work involves processing several data files (DNA sequences, alignments, etc.) and producing many result files, so I want to use something like Unix make to automate the whole process, and especially to resolve dependencies between different data.
However, Unix make supports only one output per target, as it was designed for software builds, which typically generate one object file from several source files, or one executable from several object files. If you use custom phony targets instead, you lose the benefit of timestamp checking. Is there any build system that supports multiple output files per target? If there isn't, I'm going to reinvent the wheel.
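For completeness: GNU make itself has two partial workarounds for the one-output-per-target limitation, sketched below (the `align` and `process` commands are placeholders, not real tools):

```make
# A pattern rule with several targets runs its recipe once per stem,
# so both outputs are treated as products of a single invocation:
%.aln %.log: %.fasta
	align $< --out $*.aln --log $*.log

# GNU make 4.3+ "grouped targets" (&:) do the same for explicit rules:
result.csv summary.txt &: input.dat
	process input.dat
```

Note that an ordinary explicit rule with two targets (without `&:`) does *not* behave this way: it runs the recipe once per target, which is exactly the trap the question describes.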
Have a look at Drake, a replacement for make designed for data workflow management (make for data).
Another option is makepp, which is an improved make. Among other features it supports multiple targets.