How do I make one custom state dependent on another? - salt-stack

How do I make one custom state dependent on another with a requisite in an sls file?
Example: Two custom states in a _states/seuss.py module:
# _states/seuss.py
def green_eggs(name):
    return {'name': name, 'result': True, 'comment': '', 'changes': {}}

def ham(name):
    return {'name': name, 'result': True, 'comment': '', 'changes': {}}
I want ham to be dependent on green_eggs:
# init.sls
have_green_eggs:
  seuss.green_eggs:
    - require:
      - user: seuss
have_ham:
  seuss.ham:
    - require:
      - ???
How do I make ??? a dependency on the successful completion of green_eggs?

You would want:
have_ham:
  seuss.ham:
    - require:
      - seuss: have_green_eggs
However, you are currently defining two states of a seuss resource, which means that either a seuss.ham or a seuss.green_eggs with the ID have_green_eggs could fulfil that requirement.
If you don't want that, then you will have to define the states in separate state modules (e.g. seuss_ham.exists and seuss_green_eggs.exists).
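Requisites key off the result field the state function returns: have_ham runs only if have_green_eggs finished with result: True. A minimal sketch of a custom state signalling success or failure (the should_succeed flag is purely illustrative, standing in for a real check):

```python
# _states/seuss.py (sketch) - the returned dict's 'result' key is what
# a require requisite checks: True lets dependents run, False blocks them.
def green_eggs(name, should_succeed=True):
    # 'should_succeed' is a hypothetical stand-in for a real check
    if should_succeed:
        return {'name': name, 'result': True,
                'comment': 'eggs are green', 'changes': {}}
    return {'name': name, 'result': False,
            'comment': 'no green eggs found', 'changes': {}}
```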


Vue3: redundant route.push, unexpected route redirect after some navigation

Good day all!
I have an unwanted redirect occurring from a manual route.push after some navigation steps, as if it had been cached in the history stack.
Edit: the only alternative I have found is to no longer depend on home and to copy its code into [A]; then there is no need to push the route that causes the issue (but I don't like this code duplication).
More exactly, [A] has two uses: the first is a data listing, the second is to fetch and print data for elements of this list (home already has the code for fetching and printing), so I call [home] from [A] with suitable parameters to avoid code duplication.
The parameters are provided by a custom store (not a Pinia store).
Any clue?
Note: I'm using vue3/router4 and only composition features are used.
Code overview:
component Home
  // do something...
  // click methods to go to [B]

component A
  setup() {
    // called by @click
    search = (id) => {
      store.query.value = id
      route.push({ name: 'home' })
    }
    ...
  }

component B
  // do something
Steps to reproduce:
- opening the app on the [home] component
- going to [A]
- then clicking search (to prepare the query for the home API call)
- coming back from [A] to home (via route.push...)
- clicking a button to go to [B] from home
- [B] loads, but the route unexpectedly changes back to home before the [B] route finishes
My logs say the route has already been redirected by the time we arrive in the beforeEach() route guard.
Here is a summary of the router code:
const routes = [
  {
    path: '/', // is [home]
    name: 'root',
    component: Home,
    meta: { title: false }
  },
  {
    path: '/categories', // is [A]
    name: 'categories',
    component: () => import('./views/Categories.vue'),
    meta: { title: 'route.categories' }
  },
  {
    path: '/tool/:id', // is [B]
    name: 'tool',
    component: () => import('./views/Tool.vue'),
    meta: { title: false }
  },
  {
    path: '/:pathMatch(.*)*',
    name: '404',
    component: () => import('./components/404.vue'),
    meta: { title: 'route.404' }
  }
]

export const router = createRouter({
  history: createWebHistory(process.env.BASE_URL),
  base: process.env.BASE_URL,
  routes,
  navigationFallback: {
    rewrite: '/',
    exclude: ['/images/*.{png,jpg,gif}', '/css/*']
  }
})
Expected behavior:
I would like to avoid this unwanted redirect.
I tried using route.replace instead of push; nothing changed.
I commented out many pieces of code, and only when I comment out the route.push in [A] does the redirect stop occurring.
I suspect the route.push in [A] is cached somewhere. I can't understand why it runs twice (at the call, and again after two route navigation steps...).

Chunked upload validation: "The file could not be uploaded."

I am currently trying to let Symfony's Validator component handle the validation of uploaded files, which works perfectly fine for normal files. However, files above a certain size are uploaded as chunks, which are then merged and then validated. Both upload paths are validated by the same function, which basically looks like this:
public function validateFile(UploadedFile $uploadedFile): ConstraintViolationList {
    return $this->validator->validate(
        $uploadedFile,
        [
            new FileConstraints([
                'maxSize' => '1000M',
            ]),
        ]
    );
}
But somehow, the merged uploads trigger a violation, which, unfortunately, is quite uninformative to me:
Symfony\Component\Validator\ConstraintViolation {#658 ▼
  -message: "The file could not be uploaded."
  -messageTemplate: "The file could not be uploaded."
  -parameters: []
  -plural: null
  -root: Symfony\Component\HttpFoundation\File\UploadedFile {#647 ▶}
  -propertyPath: ""
  -invalidValue: Symfony\Component\HttpFoundation\File\UploadedFile {#647 ▶}
  -constraint: Symfony\Component\Validator\Constraints\File {#649 ▶}
  -code: "0"
  -cause: null
}
The logs are clean: no errors, only INFO regarding matched routes and deprecated stuff, as well as DEBUG regarding authentication tokens and such.
If I dump'n'die the UploadedFile objects, the only difference is that the chunked & merged one has executable: true and that it's not stored in /tmp.
Can someone here explain to me what causes this violation and what has to be done to prevent it, or point me to some documentation regarding it?
EDIT: The upload of chunks and the merging seem to work perfectly fine - uploaded images can be viewed, text docs/PDFs can be read, etc. I have also used all the other code for quite a while now with different validation; I just wanted to make everything a bit more professional and organized by using the existing Validator infrastructure. To provide additional info regarding the uploaded objects, here is the dd output, starting with a regular file upload:
Symfony\Component\HttpFoundation\File\UploadedFile {#20 ▼
  -test: false
  -originalName: "foo.jpg"
  -mimeType: "image/jpeg"
  -error: 0
  path: "/tmp"
  filename: "phpEu7Xmw"
  basename: "phpEu7Xmw"
  pathname: "/tmp/phpEu7Xmw"
  extension: ""
  realPath: "/tmp/phpEu7Xmw"
  aTime: 2021-05-27 10:47:56
  mTime: 2021-05-27 10:47:54
  cTime: 2021-05-27 10:47:54
  inode: 1048589
  size: 539474
  perms: 0100600
  owner: 1000
  group: 1000
  type: "file"
  writable: true
  readable: true
  executable: false
  file: true
  dir: false
  link: false
}
For chunked upload:
Symfony\Component\HttpFoundation\File\UploadedFile {#647 ▼
  -test: false
  -originalName: "foo.jpg"
  -mimeType: "image/jpeg"
  -error: 0
  path: "/home/vagrant/MyProject/var/uploads"
  filename: "foo.jpg"
  basename: "foo.jpg"
  pathname: "/home/vagrant/MyProject/var/uploads/foo.jpg"
  extension: "jpg"
  realPath: "/home/vagrant/MyProject/var/uploads/foo.jpg"
  aTime: 2021-05-27 10:43:58
  mTime: 2021-05-27 10:43:58
  cTime: 2021-05-27 10:43:58
  inode: 8154
  size: 539474
  perms: 0100777
  owner: 1000
  group: 1000
  type: "file"
  writable: true
  readable: true
  executable: true
  file: true
  dir: false
  link: false
}
When the File constraint receives an UploadedFile instance, it triggers a call to isValid, which in turn calls PHP's is_uploaded_file:

    Returns true if the file named by filename was uploaded via HTTP POST.
    This is useful to help ensure that a malicious user hasn't tried to
    trick the script into working on files upon which it should not be
    working

After reassembling the chunks into a new file, this check no longer passes and the constraint fails.
You could use your last file fragment to reassemble the original file, or you could return a File from your function: File is not subject to that check, and the constraint will accept it along with UploadedFile.
When creating your UploadedFile object programmatically, use 'test mode'. I use this with the VichUploaderBundle, and the use of test mode is documented here.
new Count([
    'min' => 1,
    'minMessage' => 'Please select a file to upload'
]),
I think the NotBlank constraint is the problem here; I don't think it should be used on UploadedFile.
Try using only the File and Count constraints (to make sure there is a minimum of 1 file attached).

What does 'with context' mean when doing an import?

I've been looking for an explanation in the SaltStack docs about what 'with context' means. But there's only examples of using context.
What is 'context'?
What does it do here? And why is Debian ignored in the map.jinja file? (for example map.log_dir seems to "jump down" a level)
# config.sls
{% from "bind/map.jinja" import map with context %}

include:
  - bind

{{ map.log_dir }}:
  file.directory:
    - user: root
    - group: {{ salt['pillar.get']('bind:config:group', map.group) }}
    - mode: 775
    - require:
      - pkg: bind
# map.jinja
{% set map = salt['grains.filter_by']({
    'Debian': {
        'pkgs': ['bind9', 'bind9utils', 'dnssec-tools'],
        'service': 'bind9',
        'config_source_dir': 'bind/files/debian',
        'zones_source_dir': 'zones',
        'config': '/etc/bind/named.conf',
        'local_config': '/etc/bind/named.conf.local',
        'key_config': '/etc/bind/named.conf.key',
        'options_config': '/etc/bind/named.conf.options',
        'default_config': '/etc/default/bind9',
        'default_zones_config': '/etc/bind/named.conf.default-zones',
        'named_directory': '/var/cache/bind/zones',
        'log_dir': '/var/log/bind9',
        'user': 'root',
        'group': 'bind',
        'mode': '644'
    },
    'RedHat': {
        'pkgs': ['bind'],
        'service': 'named',
        'config_source_dir': 'bind/files/redhat',
        'zones_source_dir': 'zones',
        'config': '/etc/named.conf',
        'local_config': '/etc/named.conf.local',
        'default_config': '/etc/sysconfig/named',
        'named_directory': '/var/named/data',
        'log_dir': '/var/log/named',
        'user': 'root',
        'group': 'named',
        'mode': '640'
    },
Since this page is the top search result for "jinja import with context" (and the other answer doesn't actually say what it does), and I keep coming back to this page every couple of months when I need to mess with Salt but forget what with context does:
When you import foo in Jinja, the macros defined in foo normally don't have access to the variables of the file you're importing it into. As an optimization, Jinja will cache the imported template, so importing it again later in the file is cheap. If instead you do import foo with context, then the macros in foo can access the variables of the file it's being imported into. The trade-off is that Jinja no longer caches foo, so you pay in render time.
When you do include, your variables do get passed into the other file: you render the contents of the other file and paste them in. If you do include foo without context, you don't pass the current file's variables in. This is useful because Jinja can then cache the contents of foo, speeding up your render.
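The macro-visibility difference is easy to reproduce with the jinja2 library itself (a sketch; the template names and the greet macro are made up):

```python
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({
    # the macro references 'user', which it does not define itself
    "macros.jinja": '{% macro greet() %}Hello {{ user }}{% endmacro %}',
    # with context: the macro can see the importing template's variables
    "with_ctx.jinja": '{% import "macros.jinja" as m with context %}{{ m.greet() }}',
    # without context: 'user' is undefined inside the macro
    "no_ctx.jinja": '{% import "macros.jinja" as m %}{{ m.greet() }}',
}))

print(env.get_template("with_ctx.jinja").render(user="Salt"))  # Hello Salt
print(env.get_template("no_ctx.jinja").render(user="Salt"))    # just "Hello"
```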
with context is part of the Jinja template engine.
You can read more about it in the Jinja docs:
- import context behavior
- context API
Regarding the missing Debian data: is this your complete map.jinja? The snippet is missing }, default='Debian') %} according to grains.filter_by.

SaltStack - grains.filter_by, specify grain key and filter by sub key

I'm not sure if I'm wording this correctly, but I was hoping I could get an example of filtering by matching on grain keys and then filtering by values (or sub key:values). My concern is that another grain could be added some time in the future and be picked up by filter_by incorrectly. Example below...
Example list of grains:
Host1
  role:
    webserver
  secondary:
    none

Host2
  role:
    appserver1
  secondary:
    appserver2

Host3
  role:
    appserver1
    appserver2
  secondary:
    none
Example map file:
{% set java = salt['grains.filter_by']({
    'default': {
        'target': '/some/default/file/path',
    },
    'appserver1': {
        'target': '/app/server1/path',
    },
    'appserver2': {
        'target': '/app/server2/path',
    },
},
default='default'
) %}
In this example, imagine secondary was the additional grain that was added at a future time. What would the mapfile pick up for Host2 after this secondary grain is added? I know this isn't the best example, but this came up when code reviewing some states I wrote, and I didn't have a good answer as to how we can target grain keys. In this case, I would want to target the grain 'role' and filter on the values within that grain. How would I do that?
I completely missed this in the docs until I read them multiple times...
Solution is to add grain value to filter on like so:
{% set java = salt['grains.filter_by']({
    'default': {
        'target': '/some/default/file/path',
    },
    'appserver1': {
        'target': '/app/server1/path',
    },
    'appserver2': {
        'target': '/app/server2/path',
    },
},
grain='role',
default='default'
) %}
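Conceptually, what grain='role' does is select the lookup entry keyed by that grain's value, falling back to the default key when nothing matches. A simplified pure-Python sketch of that selection (ignoring salt's merge option and list-valued grains; this filter_by is a stand-in, not salt's actual implementation):

```python
def filter_by(lookup, grain_value, default='default'):
    """Return the entry keyed by the grain's value, else the default entry."""
    return lookup.get(grain_value, lookup.get(default))

lookup = {
    'default':    {'target': '/some/default/file/path'},
    'appserver1': {'target': '/app/server1/path'},
    'appserver2': {'target': '/app/server2/path'},
}

# Host2's role grain is 'appserver1', so it gets that entry;
# a role like 'webserver' has no key and falls back to 'default'.
print(filter_by(lookup, 'appserver1')['target'])  # /app/server1/path
print(filter_by(lookup, 'webserver')['target'])   # /some/default/file/path
```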

How should a Rebol-structured data file (which contains no code) be written and read?

If you build up a block structure, convert it to a string with MOLD, and write it to a file like this:
>> write %datafile.dat mold [
    [{Release} 12-Dec-2012]
    [{Conference} [12-Jul-2013 .. 14-Jul-2013]]
]
You can LOAD it back in later. But what about headers? If a file contains code, it is supposed to start with a header like:
rebol [
    title: "Local Area Defringer"
    date: 1-Jun-1957
    file: %defringe.r
    purpose: {
        Stabilize the wide area ignition transcriber
        using a double ganged defringing algorithm.
    }
]
If you are just writing out data and reading it back in, are you expected to have a rebol [] header, and extend it with any properties you want to add? Should you come up with your own myformat [] header concept with your own properties?
Also, given that LOAD does binding, does it make sense to use it for data or is there a different operation?
Rebol data doesn't have to have a header, but it is best practice to include one (even if it's just data).
Some notes:
SAVE is your best bet for serializing to file! or port! and has a mechanism for including a header.
MOLD and SAVE both have an /ALL refinement that corresponds to LOAD (without /ALL, some data from MOLD and SAVE cannot be reliably recovered, including Object, Logic and None values).
LOAD discards the header, though you can load it using the /HEADER refinement.
Putting this together:
save/all/header %datafile.dat reduce [next "some" 'data][
    title: "Some Data"
]

header: take data: load/header %datafile.dat
To use a header other than Rebol [], you'd need to devise a separate loader/saver.
For the case of reading, construct works very well alongside load to prevent evaluation (of code as opposed to data):
prefs: construct/with load %options.reb default-prefs
It is:
Similar to context:
obj: [
    name: "Fred"
    age: 27
    city: "Ukiah"
]
obj-context: context obj
obj-construct: construct obj
In this case, the same:
>> obj-context = obj-construct
== true
Different when it comes to evaluating code:
obj-eval: [
    name: uppercase "Fred"
    age: 20 + 7
    time: now/time
]
obj-eval-context: context obj-eval
obj-eval-construct: construct obj-eval
This time they evaluate differently:
>> obj-eval-context = obj-eval-construct
== false
>> ?? obj-eval-construct
obj-eval-construct: make object! [
    name: 'uppercase
    age: 20
    time: now/time
]
Aside:
This is the point where I realized the following code wasn't behaving as I expected:
obj-eval: [
    title: uppercase "Fred"
    age: 20 + 7
    city: "Ukiah"
    time: now/time
]
which gives in Red (and by extension, Rebol2):
>> obj-eval-construct: construct obj-eval
== make object! [
    title: 'uppercase
    age: 20
    city: "Ukiah"
    time: now/time
]
The handling of lit-word! and lit-path! is different.
It also has a useful refinement, /with, which can be used for defaults, similar to make.
