Somewhat new to salt here. I set up salt and managed to get everything working rather nicely. After the setup, I decided to try to make small state files and run those from another state file, the main reason being that troubleshooting and changing a small file is much easier than doing the same in one huge state file. Unfortunately, outside of the top file, I haven't been successful in getting a state to be called from another state.
For example, let's say I have foo.sls and bar.sls, and bar.sls is a state that installs packages properly. I have tried the following.
#foo.sls
packages:
  state.apply:
    - source: salt://packages/bar.sls
Also
#foo.sls
packages/bar.sls:
  state.apply
And also
#foo.sls
state.apply:
  - source: salt://packages/bar.sls
And a few others that I'm not remembering right now.
Most of the times I've tried, though, I get an error stating that state.apply is not available, leading me to believe this is either not possible or I'm going about it wrong.
Can this be done? If so, how? If not, maybe I'll file a feature request for this, as it seems like it could be useful.
background
It sounds like your issue may stem from mixing state modules and execution modules when you are writing your states.
A brief recap: "states" are the declarative files you write (foo.sls, bar.sls), "state modules" are the directives you list inside those states (e.g. pkg.installed), and "execution modules" provide the commands that salt actually knows how to run (state.apply, test.ping, etc.).
state.apply is simply the execution module that knows how to interpret states. It may help to note that the fully qualified name of state.apply in the docs (or if you browse the salt source tree) is actually salt.modules.state.apply, whereas pkg.installed is salt.states.pkg.installed. A module in the modules namespace generally cannot be accessed from the states namespace and vice versa, though there are exceptions. Knowing the full namespace also matters when an execution module and a state module share a virtual name, e.g. test exists as both salt.modules.test and salt.states.test.
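To make the distinction concrete, here is a minimal sketch (the target and package name are placeholders): an execution module like state.apply is invoked from the salt CLI, while a state module like pkg.installed only ever appears inside an SLS file.

$ salt '<tgt>' state.apply foo    # execution module, run ad hoc

# inside foo.sls -- state module, declared rather than invoked
install_vim:
  pkg.installed:
    - name: vim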
solution
If I understand correctly, you probably want to include your state files within each other.
For example, say you have the following folder structure:
$ tree srv
srv
└── salt
    ├── foo.sls
    └── packages
        └── bar.sls
and bar.sls has the following contents
# bar.sls
packages_bar_install_fun:
  pkg.installed:
    - pkgs:
      - cowsay
      - fortune
      - sl
To include bar.sls in foo.sls, you just need to reference it using dot notation that mirrors your folder structure:
# foo.sls
include:
  - packages.bar

foo_another_example_state:
  test.show_notification:
    - text: |
        foo.sls can have other states inside of it,
        though you may need to use `require` if you want
        them interspersed between multiple includes
Now you can either just include - foo in your top.sls, or run salt '<tgt>' state.apply foo test=True, and you should see that packages.bar is also applied.
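For reference, a minimal top.sls for the first option might look like this (assuming the default base environment and a catch-all target):

# top.sls
base:
  '*':
    - foo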
The salt docs also include a section titled Moving Beyond a Single SLS which discusses using include and extend to glue multiple states together.
Splitting up an SLS for organizational purposes is also a common use for init.sls.
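For example (desktop.sls and server.sls are hypothetical names), packages could become a directory whose init.sls ties the pieces together; state.apply packages would then pick up packages/init.sls:

# packages/init.sls
include:
  - packages.desktop
  - packages.server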
As a brief aside, there are some states which go the other direction and allow you to run execution modules from within an SLS. A few examples are salt.states.module.run and salt.states.saltmod.state, though the uses for these are far more specialized than what it seems you're trying to do.
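As an illustration (the state ID here is made up), a module.run state that calls the test.ping execution module from within an SLS could look like this:

# inside an SLS file -- runs an execution module during state.apply
ping_minion_from_state:
  module.run:
    - name: test.ping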
Related
Assume we have a project with the following structure:
root/
├── .hhconfig
├── directory1
├── directory2
├── directory3
...
└── directory10
Is there a way to have a single .hhconfig file and exclude only directory8 from the typechecking? I think it would be really painful to put separate .hhconfig files inside every directory, or to declare all the files in directory8 as UNSAFE in order to exclude them from the typechecking.
This is not supported. A Hack project is designed to be checked as a single project, with full analysis going across all of the different parts of it. If it doesn't typecheck as a whole, then the behavior of HHVM on it is undefined.
You should really, really carefully consider why you're trying to exclude part of the project from typechecking. You really shouldn't have a large body of type-incorrect code. You may want to consider leaving that code back in PHP -- it sounds unlikely to be valid Hack code, or to become such soon. Hiding these type errors is crippling the typechecker's ability to help you find problems in the other code in your project.
You may also be able to use a different mode, decl mode, which will exclude all the code in a file from having function bodies typechecked (but which will still make the definitions available to other files). But again, this is just shoving the problem under the rug. Ideally you'd fix all of the type errors instead!
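For reference, a file opts into decl mode through the marker in its header, along these lines:

<?hh // decl
// Function bodies in this file are skipped by the typechecker,
// but the definitions stay visible to the rest of the project.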
Also, definitely don't put separate .hhconfig files in each directory -- they'll be checked as separate subprojects and none of the analysis will look across the borders of the subdirectories!
I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications are using some modules.
Now, I'd like to put some resources inside a module (like very colorful icons inside a widget used by several applications), but something goes wrong.
Inside the module CMakeLists.txt, if I use only:
set(${MODULE_NAME}_RCS
  colors.qrc
)
...
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I've tried to add:
ADD_EXECUTABLE(${MODULE_NAME}
  ${${MODULE_NAME}_RHEADERS}
)
but I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the path of the error of course)
So I don't know what to think, because I'm new to both cmake and qt.
What can I try?
EDIT:
If I add the ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, the qrc_colors.cxx is created, BUT it is in repository/modules/module1/built and not copied into the application build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that; MODULE_NAME alone is the raw string, whereas ${MODULE_NAME} dereferences the variable. It is usually recommended to avoid double variable dereferences like ${${MODULE_NAME}_RCS} if possible.
2) More importantly, you seem to be reusing the same ${MODULE_NAME} value, "ColorModule" according to the error output, for more than one target. You should have individual target names for different binaries.
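As a minimal sketch of the direction this could take (the ${MODULE_NAME}_SRCS variable is an assumption about how your sources are collected), keep the module as a library and feed the generated resource sources into that same target, rather than calling add_executable with a name that is already taken:

# module CMakeLists.txt -- sketch, not a drop-in fix
set(${MODULE_NAME}_RCS
  colors.qrc
)
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})

add_library(${MODULE_NAME} STATIC
  ${${MODULE_NAME}_SRCS}      # assumed variable holding the module sources
  ${${MODULE_NAME}_RHEADERS}  # the generated qrc_colors.cxx lands here
)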
Also, the focus on the resource file is a bit of a red herring here. There are several other issues with your project.
You have cmake files named CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd and should probably be rewritten.
Most importantly, you should use config files rather than find modules.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you would like to use a find module, you need to have it at hand already. It will tell you what to look for, where things are, or whether they are missing. It is not something that you should write yourself. You should just reuse existing ones for projects that are not using cmake, which is why the find modules are shipped separately.
It is a bit like putting the treasure map right next to the treasure. Do you see the irony? :) Once you find the map, you would automatically have the treasure as well, i.e. you would not look for it anymore.
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the closure compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the closure compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, later the closure compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. Now I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that #require's everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated calcdeps.py script. I still use it for some projects.
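As a sketch of such an invocation (every path here is an assumption about your layout):

python closure/bin/build/closurebuilder.py \
  --root=closure-library \
  --root=foolibrary/src \
  --namespace="FooLibrary.Qux.Rumps" \
  --namespace="FooLibrary.Qux.Scrooge" \
  --output_mode=compiled \
  --compiler_jar=compiler.jar > foolibrary.min.js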
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However I can still include the library source (without the exports) into a project and get full dead code elimination. The work/payoff balance of this though must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
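A rough sketch of what such an exports file can look like (FooLibrary.Bar and its frob method are placeholder names):

// exports.js -- only compiled into the standalone distribution build
goog.require('FooLibrary.Bar');

goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportSymbol(
    'FooLibrary.Bar.prototype.frob',
    FooLibrary.Bar.prototype.frob);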
My GeolocationMarker library has an example of this strategy.
I'm seeing that some SF2 components have a folder named stubs placed inside the Resources/ folder. I wonder what it is for? And for my bundle I need to define some global functions; would the stubs folder be a good location for the files containing these functions?
It seems there are a few meanings for stub. The most relevant I could find was one that described a stub as 'code that is used to stand in for some other programming functionality'.
I.e. acting as a substitute for code that is yet to be developed, or to simulate the behaviour of existing code that isn't usable (or viably usable) under certain circumstances, for example, in a development environment.
I have two dynamic views in ClearCase which, as far as I know, are supposed to be "equal".
One is supposed to look at the "Main branch" and one at some other branch (let's call it A).
I did a merge from A to Main (in the Main view) and for some reason the code at the A view compiles while Main does not.
Is there a way to compare the views for differences?
The simplest way is to use an external diff tool on those two views (like WinMerge or BeyondCompare on Windows, KDiff3 on Unix or Windows, ...).
I would actually create two new views (with the same config specs as the two initial views), to remove any "cache" effect, and start the comparison there.
Once that initial examination is done, I would start the compilation in those two views and see if one of them still doesn't compile.
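For example (the view tags and VOB name here are placeholders), with dynamic views mounted under /view you can point a recursive diff at the same VOB path as seen through each view:

# compare the same source tree as selected by each view
diff -r /view/main_view/vobs/myvob/src /view/a_view/vobs/myvob/src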
Don't forget that merging A to Main will not always result in the same set of files after the Merge.
It would be the same only if no evolution has taken place in Main since A started (or since the last merge from A to Main).
The setcs -current you mention will:
-cur/rent
causes the view_server to flush its caches and reevaluate the current config spec, which is stored in file config_spec in the view storage directory. This includes:
Evaluating time rules with nonabsolute specifications (for example, now, Tuesday)
Reevaluating -config rules, possibly selecting different derived objects than previously
Re-reading files named in include rules
If, within your config spec, you depend on an "include file" which was at the wrong version, the first setcs would select the right version of that include file, and the second one would read its content and set the right versions for the rest.
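As an illustration, a config spec with such an include rule might look like this (the paths are hypothetical):

element * CHECKEDOUT
# shared rules pulled in from a versioned file; a stale cache can keep
# reading an old version of this file until setcs -current runs twice
include /vobs/admin/shared_config_spec
element * /main/LATEST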