How do I override a bitbake conf which references an include - bitbake

I have an OpenEmbedded build for some custom hardware that has a BSP layer. When I build, the BSP layer throws an error because the size of one of the filesystems is too large. I traced the issue to (names deliberately changed):
meta-my-bsp/conf/machine/machineA.conf
In machineA.conf, there's a line:
require conf/machine/include/machineA.inc
machineA.inc contains the value I need to change to resize my filesystem. When I edit it directly in the meta-my-bsp layer, the build compiles and creates the filesystem correctly.
Now I need to put the change into meta-my-layer, which has a higher priority than the BSP layer (it overrides the BSP layer's recipes). So I copied the files into my layer:
meta-my-layer/conf/machine/include/machineA.inc <--- Modified with the value I need
meta-my-layer/conf/machine/machineA.conf (No modifications)
However, when I rebuild everything, it doesn't look like my machineA.inc is getting picked up. Is this the right way to go about it, or am I missing something? I could pull the whole meta-my-bsp conf and recipes into meta-my-layer, but that seems like total overkill and bad design. What is the proper way to override this configuration?

To use the machineA.inc from meta-my-layer, you should use:
require ${COREBASE}/meta-my-layer/conf/machine/include/machineA.inc
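The reason the copied include is not picked up is that a relative require like conf/machine/include/machineA.inc is resolved through BitBake's BBPATH search, and layer priority only governs recipe (.bb/.bbappend) selection, not which conf file wins, so the BSP's copy can still be found first. Spelling out the path makes the lookup unambiguous. A minimal sketch of what the copied machineA.conf in meta-my-layer could then look like (assuming COREBASE resolves to the directory that contains both layers; adjust the path if your layers live elsewhere):
# meta-my-layer/conf/machine/machineA.conf
# pull the modified include from this layer explicitly instead of relying on the BBPATH search
require ${COREBASE}/meta-my-layer/conf/machine/include/machineA.inc
# ...the rest of the original machineA.conf contents stay unchanged...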

Related

Benefits of using pyuic vs uic.loadUi

I am currently working with Python and Qt, which is kind of new for me coming from the C++ version, and I realised that the official documentation says that a UI file can either be loaded directly from the .ui file or turned into a Python class by converting the file into a .py file.
I get the benefit of using .ui directly: it is loaded dynamically, so there is no need to convert it into a Python file after every change. But what are the benefits of converting it? Do you get any improvement at run time? Is it something else?
Thanks
Well, this question is dangerously close to the "Opinion-based" flag, but it's also a common one and I believe it deserves at least a partial answer.
Conceptually, the pyuic approach and the uic.loadUi() method are the same and behave in very similar ways, but with some slight differences.
To better explain all this, I'll use the documentation about using Designer as a reference.
pyuic approach, or the "python object" method
This is probably the most popular method, especially amongst beginners. It creates a Python object that builds the UI and, when used with the "single inheritance" approach, also behaves as an "interface" to the UI itself, since the ui object has all the widgets available as its attributes: if you create a push button, it will be available as ui.pushButton, the first label as ui.label, and so on.
In the first example of the documentation linked above, that ui object is stand-alone; that's a very basic example (I believe it was given just to demonstrate its usage, since it wouldn't provide a lot of interaction besides the connections created within Designer) and is not very useful, but it's very similar to the single inheritance method: the button would be self.ui.pushButton, etc.
IF the "multiple inheritance" method is used, the ui object will coincide with the widget subclass. In that case, the button will be self.pushButton, the label self.label, etc.
This is very important from the python point of view, because it means that those attribute names will overwrite any other instance attribute that will use the same name: if you have a function named "saveFile" and you name the button "saveFile", you won't have any [direct] access to that instance method any more as soon as setupUi is returned. In this case, using the single inheritance method might be helpful - but, in reality, you could just be more careful about function and object names.
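To make the two variants concrete, here is a minimal sketch (the ui_mainwindow module, the Ui_MainWindow class and the widget/method names are hypothetical, as if produced by something like pyuic5 mainwindow.ui -o ui_mainwindow.py):
from PyQt5 import QtWidgets
from ui_mainwindow import Ui_MainWindow  # hypothetical pyuic-generated module

# single inheritance: widgets live on a separate "ui" attribute
class MainWindowSingle(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        # the button is reached through self.ui, so it cannot shadow methods of the window
        self.ui.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')

# multiple inheritance: the window itself is also the ui object
class MainWindowMulti(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        # widgets become attributes of self; a widget named "saveFile" would shadow the method below
        self.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')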
Finally, if you don't know what the pyuic-generated file does and what it's for, you might be inclined to edit it to create your program. That is wrong for a lot of reasons but, most importantly, because you will almost certainly realize at some point that you have to edit your ui, and merging the new changes with your modified code is clearly a PITA you don't want to face.
I recently answered a related question, trying to explain what happens when setupUi() is called in much more depth.
Using uic.loadUi
I'd say that this is a more "modular" approach, mostly because it's much more direct: as already pointed out in the question, you don't have to regenerate the Python files every time the ui files are modified.
But, there's a catch.
First of all: obviously, loading, parsing and building a UI from an XML file is not as fast as creating the ui directly from code (which is exactly what the pyuic file does within setupUi()).
Then, there is at least one relatively small bug with layout content margins: when using loadUi, the default system/form margins might be completely ignored and set to 0 if not explicitly set. There is a workaround for that, explained in Size of verticalLayout is different in Qt Designer and PyQt program (thanks to eyllanesc).
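For comparison, the equivalent loadUi-based version is a one-step affair (a sketch; mainwindow.ui is a placeholder file name):
from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # parses the .ui XML at run time and creates the widgets as attributes of self
        uic.loadUi('mainwindow.ui', self)
        self.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')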
A comparison
pyuic approach
Pros:
it's faster; in a very simple test with a hundred buttons and a tablewidget with more than 1200 items I measured the following bests:
pyuic loading: 33.2ms
loadUi loading: 51.8ms
this ratio is obviously not linear for a multitude of reasons, but you can get the idea
if used with the single inheritance method, it can prevent accidentally overwriting instance attributes, and it also means a more "contained" object structure
using python imports ensures a more coherent project structure, especially in the deployment process (having non-python files is a common source of problems)
the contents of those files are actually instructive, especially for beginners
Cons:
you must always remember to regenerate the Python files every time you update a UI; we all know how easy it is to forget an apparently meaningless step like this, especially after hours of coding: I've seen plenty of situations in which people were banging their heads on their desks (hopefully both their own) for hours because of untraceable issues, before realizing that they had just forgotten to run pyuic or had run it on the wrong files; my own forehead still hurts ;-)
file tracking: you have to keep track of two files for each UI, and you might lose one of them along the way when migrating/forking/etc.; if you lose a .ui file, it possibly means you have to recreate it completely from scratch
n00b alert: beginners are commonly led to think that the generated Python file is the one to be used to create their programs, which is obviously wrong; unfortunately, the # WARNING! message is not clear enough (I've been almost begging the head PyQt developer about this); while this is not an actual problem of the approach itself, right now it ends up being one
some of the contents of a pyuic-generated file are usually unnecessary (most importantly the object name, which is only used in specific cases), and that's pretty obvious, since it's automatically generated ("you might need that, so better safe than sorry"); also, related to the issue above, people might be led to think that everything pyuic creates is actually needed for a GUI, resulting in unnecessary code that decreases readability
loadUi method
Pros:
it's direct and immediate: you edit your ui in Designer, you save it (or, at least, you remember to do it...), and when you run your code it's already there; no fuss, no muss, and desks/foreheads are safe(r)
file tracking and deployment: it's just one file per ui, you can put all those ui files in a separate folder, you don't have to do anything else and you don't risk forgetting something along the way
direct access to widgets (but this can be achieved using the multiple inheritance approach also)
Cons:
the layout issue mentioned above
possible instance attribute overwriting and no "ui" object "containment"
slightly slower loading
path and deployment: loading is done using OS-relative paths and system separators, so if you put the .ui files in a directory different from the one of the .py file that loads them, you'll have to account for that (see the sketch after this list); also, some package managers tend to compress everything, resulting in access errors unless paths are correctly managed
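About that last point, a common way to sidestep the working-directory problem is to resolve the .ui path relative to the module that loads it (a sketch; the ui subfolder and file name are just an example layout):
import os
from PyQt5 import QtWidgets, uic

# directory of this .py file, so the lookup does not depend on the current working directory
UI_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'ui')

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi(os.path.join(UI_DIR, 'mainwindow.ui'), self)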
In my opinion, all things considered, the loadUi method is usually the better choice. It doesn't distract me, it allows better conceptual compartmentalization (which is usually good and also follows a pattern much closer to MVC, conceptually speaking), and I strongly believe it to be far less prone to programmer errors, for a multitude of reasons.
But that's clearly a matter of choice.
We should also always remember that, like every other choice we make, using ui files is an option.
There are people who avoid them completely (just as there are people who use them for literally everything), but, like everything else, it always depends on the context.
A big benefit of using pyuic is that code autocompletion will work.
This can make programming much easier and faster.
Then there's the fact that everything loads faster.
pyuic6-Tool can be used to automate the call of pyuic6 when the application is run and only convert .ui files when they change.
It takes a little longer to set up than just using uic.loadUi, but the autocompletion is well worth it if you use something like PyCharm.

cmake: qt resources inside a module

I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications use some of the modules.
Now, I'd like to put some resources inside a module (like some very colorful icons inside a widget used by several applications), but something goes wrong.
Inside the module's CMakeLists.txt, if I use only:
set(${MODULE_NAME}_RCS
colors.qrc
)
...
qt4_add_resources (${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I tried to add:
ADD_EXECUTABLE (${MODULE_NAME}
${${MODULE_NAME}_RHEADERS}
)
but I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the path of the error of course)
So I don't know what to think, because I'm new to both CMake and Qt.
What can I try?
EDIT:
If I add ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, the qrc_colors.cxx is created, BUT it ends up in repository/modules/module1/built and is not copied into the application's build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that; just use MODULE_NAME. The difference is the raw string vs. the variable, and it is usually recommended to avoid double variable dereferences if possible.
2) More importantly, you seem to be using ${MODULE_NAME} (which is "ColorModule" according to the error output) as the target name in more than one place. You should have distinct target names for different binaries.
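A minimal sketch of what point 2) implies for the module's CMakeLists.txt: keep a single target and feed the generated resource sources into the existing add_library call instead of adding a second target with the same name (MODULE_SRCS is a hypothetical list of the module's sources):
set(MODULE_RCS colors.qrc)
qt4_add_resources(MODULE_RHEADERS ${MODULE_RCS})
# the module is already built as a static library, so the qrc_*.cxx output
# goes into that target; do not call add_executable with the same name
add_library(ColorModule STATIC ${MODULE_SRCS} ${MODULE_RHEADERS})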
Also, the resource file is a bit of a red herring here. There are several other issues with your project:
You name cmake files CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd, and it should probably be rewritten.
Most importantly, you should use config files rather than find modules.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you would like to use a find module, you need to have it at hand already. It will tell you what to look for, where things are, or whether they are missing where necessary. It is not something that you should write yourself. You should just reuse existing ones for projects that do not use CMake, which is why the find modules are shipped separately.
It is a bit like putting the treasure map right next to the treasure. Do you see the irony? :) Once you find the map, you automatically have the treasure as well, i.e. you would not look for it anymore.

Closure: --namespace Foo does not include Foo.Bar, and related issues

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the closure compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the closure compiler directly, because my code also requires things like "goog.asserts", so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that #require's everything. Surely that can't be right?
This is my main issue.
However, later the closure compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. Now I can fix that by adding "#export" all over the place, which I am not happy about, but should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I am mentioning it because it is sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that #require's everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script. I still use it for some projects.
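For illustration, a closurebuilder.py invocation along those lines might look like this (all paths are placeholders, and the namespaces are the ones from the question):
python closure-library/closure/bin/build/closurebuilder.py \
  --root=closure-library/ \
  --root=foolibrary/src/ \
  --namespace="FooLibrary.Bar" \
  --namespace="FooLibrary.Qux.Rumps" \
  --namespace="FooLibrary.Qux.Scrooge" \
  --output_mode=compiled \
  --compiler_jar=compiler.jar \
  > foolibrary.min.js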
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However I can still include the library source (without the exports) into a project and get full dead code elimination. The work/payoff balance of this though must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
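A sketch of what such an exports file could contain, using goog.exportSymbol and goog.exportProperty (the FooLibrary namespaces come from the question; the frobnicate method is hypothetical):
// foolibrary_exports.js - only included when compiling the standalone build
goog.require('FooLibrary.Bar');
goog.require('FooLibrary.Qux.Rumps');

goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
goog.exportProperty(FooLibrary.Qux.Rumps.prototype, 'frobnicate',
    FooLibrary.Qux.Rumps.prototype.frobnicate);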
My GeolocationMarker library has an example of this strategy.

How to query the containing partition of a file with KDE/Qt4?

I'm using KDE, and I'm toying with the idea of hacking the code for Dolphin File Manager (and potentially Konqueror if necessary) to get context-sensitive drag and drop behaviour (i.e. files are moved within the same partition, or copied if they're moved across partitions or the source is read only).
To do this, I think I'd need to find out the containing partition of the source and destination (easy enough on Windows using the drive letter, but on Linux, as mount points can be almost anywhere, it can't be reliably derived from the file path), and compare them.
Does anyone know how I can find out the partition that contains a given file?
It must be possible - I know Nautilus provides this sort of behaviour, but I'm not familiar enough with GTK to track down the appropriate section in the source code to see how it's done...
Qt doesn't provide API for this. For POSIX, have a look at stat.
For KDE, you can use KIO::stat() to get mostly the same info as POSIX's stat function, but asynchronously.
The device id should be in the field UDS_DEVICE_ID of the result.
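A minimal sketch of the POSIX side of that suggestion: two paths are on the same partition exactly when stat() reports the same st_dev for both (plain C, no Qt/KDE involved):
#include <stdio.h>
#include <sys/stat.h>

/* returns 1 if both paths live on the same device, 0 if not, -1 on error */
int same_partition(const char *a, const char *b)
{
    struct stat sa, sb;
    if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
        return -1;
    return sa.st_dev == sb.st_dev;
}

int main(int argc, char **argv)
{
    if (argc != 3)
        return 1;
    printf("%s\n", same_partition(argv[1], argv[2]) == 1 ? "same partition" : "different partitions or error");
    return 0;
}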

Comparing views in ClearCase

I have two dynamic views in ClearCase which, as far as I know, are supposed to be "equal".
One is supposed to look at the "Main branch" and one at some other branch (let's call it A).
I did a merge from A to Main (in the Main view), and for some reason the code in the A view compiles while the code in Main does not.
Is there a way to compare the views for differences?
The simplest way is to use an external diff tool on those two views (like WinMerge or BeyondCompare on Windows, KDiff3 on Unix or Windows, ...).
I would actually create two new views (with the same config specs as the two initial views), to remove any "cache" effect, and start the comparison there.
Once that initial examination is done, I would start the compilation in those two views and see if one of them still doesn't compile.
Don't forget that merging A to Main will not always result in the same set of files after the Merge.
It would be the same only if no evolution has taken place in Main since A started (or since the last merge from A to Main).
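As a starting point for the comparison, a quick way to see which versions each dynamic view selects is to diff the output of cleartool ls from both views (a sketch, assuming the views are mounted under /view on Unix; the view tags and VOB name are placeholders):
cd /view/view_A/vobs/myvob && cleartool ls -recurse -short > /tmp/versions_A.txt
cd /view/view_Main/vobs/myvob && cleartool ls -recurse -short > /tmp/versions_Main.txt
diff /tmp/versions_A.txt /tmp/versions_Main.txt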
The setcs -current you mention will:
-cur/rent
causes the view_server to flush its caches and reevaluate the current config spec, which is stored in file config_spec in the view storage directory. This includes:
Evaluating time rules with nonabsolute specifications (for example, now, Tuesday)
Reevaluating –config rules, possibly selecting different derived objects than previously
Re-reading files named in include rules
If your config spec depends on an "include file" that was at the wrong version, the first setcs would select the right version of that file, and the second one would read its (updated) content and select the right versions for the rest.
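In other words, the workaround amounts to simply running the command twice from within the view:
cleartool setcs -current
cleartool setcs -current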
