Can we use a small part of Enyo's ilib package?

I have a small requirement to internationalize strings. The topic itself is vast, but I only want to use ilib's ResourceBundle functionality: include a strings.json file for each language and use $L("some key") in my Enyo app. Is this possible with a minimal number of individual dependent JavaScript files?
Thanks in advance for your efforts.

The enyo-ilib package has 3 "sizes" of ilib in it. By default, you get the "standard" size, which has a reasonable set of things that people might need, like date formatting, number formatting, etc. What you want is the "core" size, which includes only the resource bundle class and the string formatter, plus all the classes they depend on, like the locale class. In order to use the core size of ilib in your Enyo app, do the following in your package.js file:
enyo.depends(
    "$lib/enyo-ilib/core-package.js",
    <other libraries>,
    ...
);
Then, set up a resources directory right beside where your index.html is:
index.html
resources/
    de/
        strings.json   -- strings for German
    fr/
        strings.json   -- strings for French
    etc.
Then you should be able to use $L without the memory footprint of the standard size of ilib.
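For illustration, here is a minimal sketch of the two pieces involved (the key, translation, and component name below are made-up placeholders, not part of the original answer):
// resources/de/strings.json
{
    "hello world": "Hallo Welt"
}
// anywhere in your app, once core-package.js has been loaded:
this.$.greeting.setContent($L("hello world"));
With ilib's resource bundles, the source string itself acts as the lookup key, so locales without a translation simply fall back to the original text.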

Problems with GraphHopper application

I am trying to make a map with GraphHopper, but after I choose and load the map completely on the screen there are some problems. (The original post illustrated them with two screenshots: one of when the map loaded, and one of holding a tap on the screen for routing.)
Why does this happen, and how can I fix it?
Hard to figure out without logs, but I encountered a similar problem recently.
If you're in the same situation as I was:
it means that your berlin-gh folder does not contain the data for the EncodingManager you are trying to use.
First, it is important to know which kind of EncodingManager you are using. It can be any of the following:
"foot", "car", "bike", "bike2", etc.
Now let's say you want to get the foot path from one point to another on your map. Somewhere in your code, you must be building your graph request:
GHRequest yourRequest = new GHRequest(latitudeStart, longitudeStart, latitudeEnd, longitudeEnd);
yourRequest.setVehicle(EncodingManager.FOOT); // or EncodingManager.CAR, etc.
This data is built by the graphhopper.sh script when you import the map of your choice:
Edit the config.properties file inside the graphhopper folder:
##### Vehicles #####
# Possible options: car,foot,bike,bike2,mtb,racingbike,motorcycle (comma separated)
# bike2 takes elevation data into account (e.g. up-hill is slower than down-hill)
# and requires enabling graph.elevation.provider below
graph.flagEncoders=foot
Set graph.flagEncoders to the comma-separated list of vehicles you want to support.
Now delete any folder created by GraphHopper in previous import attempts (otherwise the command below will fail),
then launch: graphhopper.sh import "your map path"
At this point it should have created the (let's say) berlin-gh folder with support for the vehicles you chose.
Put this folder in your application (wherever you load it from), and you can now configure your GHRequest to load paths for the EncodingManager of your choice.
Hope I'm clear enough.
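To make the whole flow concrete, here is a minimal sketch of loading such an imported graph folder and routing over it, written against the older GraphHopper mobile API that the snippet above uses; the folder path and coordinates are placeholders:
// load the imported graph folder; its stored properties already record
// which flag encoders it was built with
GraphHopper hopper = new GraphHopper().forMobile();
hopper.load("/path/to/berlin-gh");

// route between two placeholder coordinates using the foot profile
GHRequest req = new GHRequest(52.52, 13.40, 52.53, 13.42);
req.setVehicle(EncodingManager.FOOT);
GHResponse rsp = hopper.route(req);
if (!rsp.hasErrors()) {
    double distanceInMeters = rsp.getDistance();
}
If load() returns false or the response reports errors, the graph folder and the requested vehicle almost certainly don't match, which is exactly the symptom described above.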

Is it possible to intermix Modular templating and legacy VBScript CT?

In particular, the case I have in mind is this:
##RenderComponentPresentation(Component, "<vbs-legacy-ct-tcm-uri>")##
The problem I'm having is that in my case the VBScript code breaks when it tries to access component fields, giving "Error 13 Type mismatch ..".
(So, if I were to give the answer, I'd say: "Partially, and of no practical use.")
EDIT
The DWT above is from another CT, so effectively it's a rendering of a component link; that's why the parameterless overload as per Nuno's suggestion unfortunately won't work. BTW, the following lines inside the VBScript don't break and give correct values:
WriteOut Component.ID
WriteOut Component.Schema.Title
EDIT 2
Dominic was absolutely right: it's missing dependencies.
A bit more insight to make this info generally useful:
Suppose the original CT looked like this ("VBScript [Legacy]" type):
[%
Call RenderComponent(Component)
%]
This CT was meant to be called from a PT, also VBScript-based. That PT had a big chunk of "#include" statements at the beginning.
Now the story changes: the same CT is being called from another, DWT-based, CT. Obviously (thank you all for your invaluable help!), the dependencies are now not being included anywhere.
The solution to make the original CT work again is to explicitly hand-pick and include all the necessary VBScript TBBs, so the original CT becomes:
[%
#include "tcm:<uri-of-vbs-tbb>"
Call RenderComponent(Component)
%]
Yes - it's perfectly possible to mix and match legacy and modular templates. Perhaps obviously, you can't mix and match template building blocks between the two techniques.
In VBScript, "Error 13 Type mismatch" is sometimes used as a secret code that really means "I don't recognise the name of one of your variables (including the names of Functions and Subs)". In the VBScript templating engine, variables from the page template could be in scope in your component template; it was very common, for example, to put the #includes in the PT so they could be used by the CT. My guess is that your component template is trying to use such a Function, and not finding it.
I know that you can render a modular Page Template with VBScript Component Presentations, and also that a VBScript Page Template can render a modular Component Template.
Your error is possibly due to something else. Have you tried just using the regular ##RenderComponentPresentation()## call without specifying which template?
The Page Template can render Compound Templates of different flavors - for example Razor, VBS, or XSLT.
The problem comes from the TBBs included in the templates. Often the Razor templates will need to call functions that only exist in VBScript. So when migrating templates, always start with the helper functions and utility libraries. Then migrate the most generic PT/CT you have to the new format (Razor, XSLT, DWT, etc.). This provides a nice basis for migrating the rest of the templates to the new format as you have time.

boost-build/bjam constant for path to Jamroot

Is there a way to get the location of the Jamroot file, for use as a constant in another Jamfile in the project?
Right now, I have this kludge in my Jamroot:
import os ;
constant HOME : [ os.environ HOME ] ;
constant MYPROJECT_ROOT : $(HOME)/src/myproject ;
And then later I might do something like this in another Jamfile, to allow me to include headers with paths from the root of the project.
<include>$(MYPROJECT_ROOT)
It's especially unsatisfactory because it means that if I share this project with others, they have to either keep it in exactly the same location relative to their $HOME or they have to update the Jamroot.
I'm interested in the smart way to do this specific include (instead of my ignorant beginner way of using constants). But I'd also be interested in solving the problem the way I asked -- by getting the Jamroot location into a constant -- because this might be useful in other ways too.
Use the path-constant rule.
path-constant MYPROJECT_ROOT : . ;
Then in sub-projects, you can get the directory of the Jamroot with $(MYPROJECT_ROOT).
Note that usually people name this variable TOP instead of MYPROJECT_ROOT, but that's just a convention.
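For instance, a minimal sketch (the project layout and target name are invented for illustration):
# Jamroot
path-constant TOP : . ;

# subdir/Jamfile
exe myapp : main.cpp : <include>$(TOP) ;
Because path-constant resolves the value relative to the Jamfile that defines it, $(TOP) always points at the project root no matter where the tree is checked out.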

What determines sorting of files in a QFileDialog?

Users open files in our app through a QFileDialog. The order of the filenames is bizarre. What is determining the sorting order, and how can we make it sort by filenames, or otherwise impose our own sorting, perhaps giving it a pointer to our own comparison function?
The documentation and online forums haven't been helpful. Unless it's well hidden, there doesn't seem to be any sorting method, property, etc.
This is a primarily Linux app, but also runs on Macs. (I know nothing about Mac.)
Here is the juicy part of the source code:
QtFileDialog chooser(parent, caption, directory, filter);
// QtFileDialog is our class derived from QFileDialog
chooser.setModal(true);
chooser.setAcceptMode(acceptMode);
chooser.setFileMode(fileMode);
QStringList hist = chooser.history();
chooser.setHistory(hist);
/* point "x" */
if (chooser.exec()) {
    QStringList files = chooser.selectedFiles();
    // ...blah blah blah...
From one of the answers, I tried an evil experiment, adding this ill-informed guesswork code at "point x":
QSortFilterProxyModel *sorter = new QSortFilterProxyModel();
sorter->sort(1); // ???
chooser.setProxyModel(sorter);
But this crashed spectacularly at a point about 33 subroutine calls deep from this level of code. I admit, even after reading the Qt4 documentation and sample code, I have no idea of the proper usage of QSortFilterProxyModel.
Are you using QFileDialog by calling exec()? If you are, you should have a button to switch the view to Detail View. This will give you some column headers that you can click on to sort the files. It should remember that mode the next time the dialog opens but you can force it by calling setViewMode(QFileDialog::Detail) before calling exec().
An alternative is to call the static function QFileDialog::getOpenFileName() which will open a file dialog that is native to the OS on which you are running. Your users may like the familiarity of this option better.
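For example (a sketch reusing the parent and filter variables from the question's snippet; the caption and start directory are arbitrary):
// force the sortable Detail view on an existing dialog:
chooser.setViewMode(QFileDialog::Detail);

// or let the OS-native dialog handle it via the static convenience function:
QString fileName = QFileDialog::getOpenFileName(
    parent, QObject::tr("Open File"), QDir::homePath(), filter);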
Update 1:
About the sort order in the screen capture from the OP:
This screen capture is actually showing a sorted list. I don't know whether the listing behaviour originates in the Qt dialog or the underlying file system, but I know Windows XP and later do it this way.
When sorting filenames with embedded numbers, any runs of consecutive digits are treated as a single number. With the more classic plain string sorting, files would be sorted like this:
A_A_10e0
A_A_9a05
Going character by character, the first 1 sorts before the 9.
But with numerical interpretation (as in Windows 7, at least), they are sorted as:
A_A_9a05
A_A_10e0
The 9 sorts before the 10.
So, the sorting you are seeing is alphabetical with numerical interpretation and not just straight character by character. Some deep digging may be required to see if that is Qt behaviour or OS behaviour and whether or not it can be configured.
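In code terms, that "numerical interpretation" amounts to something like the following sketch (purely illustrative; this is not an API that Qt or the OS exposes):
#include <cctype>
#include <cstdlib>

// Compare two names, treating runs of digits as whole numbers,
// so "A_A_9a05" sorts before "A_A_10e0".
int naturalCompare(const char* a, const char* b)
{
    while (*a && *b) {
        if (std::isdigit((unsigned char)*a) && std::isdigit((unsigned char)*b)) {
            char* endA;
            char* endB;
            long numA = std::strtol(a, &endA, 10);
            long numB = std::strtol(b, &endB, 10);
            if (numA != numB)
                return numA < numB ? -1 : 1;
            a = endA;   // skip past the digit run in both names
            b = endB;
        } else {
            if (*a != *b)
                return (unsigned char)*a < (unsigned char)*b ? -1 : 1;
            ++a;
            ++b;
        }
    }
    return *a ? 1 : (*b ? -1 : 0);   // shorter name sorts first
}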
Update 2:
The QSortFilterProxyModel will sort the strings alphabetically by default, so there is not much work in using it to get the behavior you are looking for. Use the following code where you have "point x" in your example (you almost had it):
QSortFilterProxyModel *sorter = new QSortFilterProxyModel();
sorter->setDynamicSortFilter(true); // This ensures the proxy will resort when the model changes
chooser.setProxyModel(sorter);
I think what you need to do is create a QSortFilterProxyModel which you then set in your QFileDialog with QFileDialog::setProxyModel(QAbstractProxyModel * proxyModel)
Here are some relevant links to the Qt 4.6 docs about it.
http://doc.trolltech.com/4.6/qfiledialog.html#setProxyModel
http://doc.trolltech.com/4.6/qsortfilterproxymodel.html#details
I don't think it depends on the Qt library implementation, but on the native OS implementation.
For example, on Windows, if you use QFileDialog it displays the files and directories sorted by name, and it is the same in other applications: if you try to open a file through MS Word, it also displays the files and directories sorted by name by default.
I am not sure about other environments, since I am not used to them.
But on Windows, you can change the sort order by right-clicking in the area where the files and directories are displayed and selecting the option you like, e.g. name, size, type, or modified, just as in an MS Word application.
So I believe it depends on the native OS implementation and not on QFileDialog's.

Are there solutions for streamlining the update of legacy code in multiple places?

I'm working in some old code which was originally designed for handling two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what lists were named to how the file is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch-statements that only branched for the original two file types.
Unfortunately there is no consistency in this: there are methods which operate half from the XML file and half from hardcode. Some of the files which look like they would operate off the XML file don't, and some where I would expect to need to update the hardcode don't need it. So the only way to find the majority of these is to test the whole system while only part of it is operational, find the one step to fix (when I'm lucky, the error logging actually tells me what is going on), and then run the whole thing again. This wastes time testing the parts of the code which are already confirmed to work, time better spent testing the new parts I have to add on top of it all.
It's a hassle and a half, and to my luck I can expect that I will have to add yet another new kind of file in the near future.
Are there any solutions out there which can aid in this kind of endeavour? Something into which I can feed some parameters of the current features, which documents which points in a whole code project actually need to be updated, and which I can run the next time I need to add a new feature to the code. It needn't even be fully automated; something that helps me navigate straight to the specific points in everything, and maybe even records what kinds of parameters need to be loaded, would do.
Doubt it matters specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple big Visual Studio 2008 projects.
Not exactly what you are describing, but if you can introduce a seam into the code and lay down some interfaces you can break out and mock, a suite of unit/integration tests would go a long way toward helping you modify old code you may not fully understand.
I completely agree with the comment about using Michael Feathers' book (Working Effectively with Legacy Code) to learn how to wedge new tests into legacy code. I'd also strongly recommend Refactoring, by Martin Fowler. What it sounds like you need to do for your code is to implement the "Replace Conditional with Polymorphism" refactoring.
I imagine your code today looks somewhat like this:
if (filetype == 23)
{
    type23parser.parse(file);
}
else if (filetype == 69)
{
    filestore = type69reader.read(file);
    File newfile = convertFSto23(filestore);
    type23parser.parse(newfile);
}
What you want to do is to abstract away all the "if (type == foo)" kinds of logic into strategy patterns that are created in a factory.
class FileRules
{
private:
    FileReaderRules *pReader;
    FileParserRules *pParser;
    friend class FileRulesFactory; // lets the factory install the strategies
public:
    FileRules() : pReader(NULL), pParser(NULL) {}
    void read(File* inFile) { pReader->read(inFile); }
    void parse(File* inFile) { pParser->parse(inFile); }
};
class FileRulesFactory
{
public:
    static FileRules* GetRules(int inputFiletype, int parserType)
    {
        FileRules* rules = new FileRules;
        switch (inputFiletype)
        {
        case 23:
            rules->pReader = new ASCIIReader;
            break;
        case 69:
            rules->pReader = new EBCDICReader;
            break;
        }
        switch (parserType)
        ... etc...
Then your main line of code looks like this:
FileRules* rules = FileRulesFactory::GetRules(filetype, parsertype);
rules->read(file);
rules->parse(file);
Pull off this refactoring, and adding a new set of file types, parsers, readers, etc. becomes as simple as writing one class exclusive to your new type, as sketched below.
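For instance, supporting a hypothetical new file type 42 would mean adding one strategy class like this (the class name is invented, and it assumes FileReaderRules declares a virtual read), plus one new case in FileRulesFactory::GetRules to construct it:
class UnicodeReader : public FileReaderRules
{
public:
    void read(File* inFile)
    {
        // type-42-specific reading goes here
    }
};
None of the existing call sites need to change.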
Of course, go read the book. I vastly oversimplified things here, and probably got stuff wrong, but you should get the general idea of how to approach it from this. I can also recommend another book, "Head First Design Patterns", which has a great section on the Factory patterns (if you like those "Head First" kinds of books).
