Execute a function on the master in a SaltStack .sls file and run it on a minion

I'm planning to use salt-cloud to take a snapshot before applying a yum update patch.
The command that needs to be run on the master is: salt-cloud -a create_snapshot vmname snapshot_name
I have a state .sls file that runs the patching on the target. Is it possible to mix commands that execute on the master with ones that execute on the target?
Thanks.

Yes, you can, using orchestration.
Orchestration is a set of states that run against the master, telling the master what to do.
An example I normally give that runs both runner functions and Salt remote execution functions is this update script: https://github.com/whytewolf/salt-phase0-orch/blob/master/orch/sys/salt/update.sls. It runs through a large series of commands to bring both the master and all minions into an updated, ready state.
For your example, you would use salt.runner to call cloud.action (https://docs.saltproject.io/en/latest/ref/runners/all/salt.runners.cloud.html#salt.runners.cloud.action), followed by salt.state to call the state that runs the patching.
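A rough sketch of what that orchestration state could look like (untested; the VM name, snapshot name, and patching sls are placeholders, and the exact create_snapshot arguments depend on your cloud driver):

take_snapshot:
  salt.runner:
    - name: cloud.action
    - func: create_snapshot
    - instances:
      - vmname
    - snapshot_name: pre_patch_snapshot

patch_target:
  salt.state:
    - tgt: vmname
    - sls:
      - patching
    - require:
      - salt: take_snapshot

Saved on the master as, say, orch/patch_with_snapshot.sls, it would be run with: salt-run state.orchestrate orch.patch_with_snapshot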
More about orchestration:
https://docs.saltproject.io/en/latest/topics/orchestrate/orchestrate_runner.html
https://docs.saltproject.io/en/latest/ref/states/all/salt.states.saltmod.html

Related

Do Airflow workers share the same file system, or are they isolated?

I have a task in Airflow which downloads a file from GitHub to the local file system, passes it to spark-submit, and then deletes it. I wanted to know if this will create any issues.
Is it possible that two workers running the same task concurrently in two different DAG runs reference the same file?
Sample code:
def python_task_callback():
    download_file(file_name='script.py')
    spark_submit(path='/temp/script.py')
    delete_file(path='/temp/script.py')
For your use case, if you do all of the actions you mentioned (download, parse, delete) in a single task, then you will have no problems regardless of which executor you are running.
If you split the actions between several tasks, then you should use shared storage like S3, Google Cloud Storage, etc. In that case it will also work regardless of which executor you are using.
A possible workflow can be:
1st task: copy the file from GitHub to S3
2nd task: submit the file for processing
3rd task: delete the file from S3
As for your general question of whether tasks share a disk: that depends on the executor you are using.
With the LocalExecutor you have only one worker, so all tasks run on the same machine and share its disk.
With the CeleryExecutor, KubernetesExecutor and others, tasks may run on different workers.
However, as mentioned, don't assume that tasks share a disk. If you ever need to scale up from the LocalExecutor to the CeleryExecutor, you don't want to find yourself in a situation where you have to refactor your code.
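For illustration, the single-task variant from the question could be wired up roughly like this (a sketch assuming Airflow 2.x import paths; download_file, spark_submit and delete_file are the question's own helpers, and the DAG and task names are made up):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def python_task_callback():
    # all three steps run on the same worker, inside one task
    download_file(file_name='script.py')   # fetch the script from GitHub to local disk
    spark_submit(path='/temp/script.py')   # hand the local copy to spark-submit
    delete_file(path='/temp/script.py')    # clean up on that same worker

with DAG(
    dag_id='github_spark_job',
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    PythonOperator(
        task_id='download_submit_cleanup',
        python_callable=python_task_callback,
    )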

How to run an execution module as a specific user?

I would like to use Salt to manage user settings that are exposed through GSettings. I have Python code that can manage the GSettings, but it needs to run as a specific user.
Salt execution modules (and everything else, actually) run by default as the user that started salt-minion, which is root by default. I couldn't find anything in the documentation about how to run a specific module as someone else.
I can work around it by executing a shell with su -l <username>, which in turn would call my Python code, but I hope there is a more elegant built-in way, like a user: <username> option in the module.
There are at least two ways to run a command as a specified user:
From a state, you can do something like this (docs):
mycommand:
  cmd.run:
    - name: python my_gsettings_script.py
    - runas: alternate_user
There is also the cmd.run execution module, which provides the same runas option.
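Invoked ad hoc from the master, the execution-module form would look something like this (the minion ID and user are placeholders):

salt 'my-minion' cmd.run 'python my_gsettings_script.py' runas=alternate_user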

Jenkins - How to stall a job until a notification is received?

Is there any way that a Jenkins job can be paused until a notification is received, ideally with a payload as well?
I have a "test" job which runs a whole bunch of remote tests, and I'd like it to wait until the tests are done, at which point I send an HTTP notification via curl with a payload including a test success code.
Is this possible with any default Jenkins plugins?
If Jenkins 2.x is an option for you, I'd consider taking a look at writing a pipeline job.
See https://jenkins.io/doc/book/pipeline/
Perhaps you could create a pipeline with multiple stages, where:
The first batch of work (your test job) is launched by the first pipeline stage.
That stage is configured (via Groovy code) to wait until your tests are complete before continuing. This is easy if the command that runs your tests blocks; if your tests launch and then detach without an easy way to tell when they have finished, you can add extra Groovy code to the stage to poll the machine where the tests are running and discover whether the work is complete.
Subsequent stages can be run once the first stage exits.
As for passing a payload from one stage to another, that's possible too - for exit codes and strings, you can use Groovy variables, and for files, I believe you can have a stage archive a file as an artifact; subsequent stages can then access the artifact.
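A minimal scripted-pipeline sketch of that shape (the shell scripts are placeholders for whatever launches your tests and checks their status):

node {
    stage('Run remote tests') {
        // launch the tests; assume this script detaches after starting them
        sh './launch-remote-tests.sh'
        // poll until the remote side reports completion
        waitUntil {
            // hypothetical check script that exits 0 once the tests have finished
            return sh(script: './check-tests-done.sh', returnStatus: true) == 0
        }
    }
    stage('Process results') {
        // runs only after the first stage has completed
        sh './collect-results.sh'
    }
}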
Or, as Hani mentioned in a comment, you could create two Jenkins jobs, and have your tests (launched by the first job) use the Jenkins API to launch the second job when they complete successfully.
As you suggested, curl can be used to trigger jobs via the API, or you can use a Jenkins API wrapper package for your preferred language (I've had success using the Python jenkinsapi package for this sort of work: http://pythonhosted.org/jenkinsapi/).
If you need to pass parameters from your API client code to the second Jenkins job, that's possible by adding parameters to the second job using the Parameterized Build feature built into Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build

Oozie - Where does a custom EL function execute?

I am writing a custom EL function which will be used in Oozie workflows. This custom function is just plain Java code; it doesn't contain any Hadoop code.
My question is: where will this EL function be executed while the workflow is running? Will Oozie execute it on the Oozie node itself, or will it push my custom Java code to one of the data nodes and execute it there?
Oozie is a workflow scheduler system to manage jobs in the Hadoop cluster itself. It is integrated with the rest of the Hadoop stack, supporting several types of Hadoop jobs out of the box (such as Java map-reduce, streaming map-reduce, Pig, Hive, Sqoop and DistCp) as well as system-specific jobs (such as Java programs and shell scripts). (Source)
This means that if you submit a job in Oozie, it will run on any of the available DataNodes; if your Oozie service is configured on a DataNode, it can run there as well.
To check which node a job is running on, look at the JobTracker in Hadoop 1 or YARN in Hadoop 2, which will point you to the TaskTracker node where the job is being processed.
According to Apache Oozie: The Workflow Scheduler for Hadoop, page 177:
"It is highly recommended that the new EL function be simple, fast and robust. This is critical because Oozie executes the EL functions on the Oozie server."
So it will be executed on your Oozie node itself.
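For context, a custom EL function is just a public static Java method that the Oozie server loads and evaluates itself; a hypothetical sketch (class, method and prefix names are made up):

// Plain Java, no Hadoop dependencies: evaluated by the Oozie server
// while it materializes the workflow, not on a data node.
package com.example.oozie;

public class MyELFunctions {
    // usable in a workflow as ${myel:upper(...)} once registered with the
    // Oozie server (via an oozie.service.ELService.ext.functions.* property
    // in oozie-site.xml, followed by a server restart)
    public static String upper(String value) {
        return value == null ? null : value.toUpperCase();
    }
}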

TFS2010 Team build - waiting for an "InvokeProcess" step to complete

I am performing a database restore as part of our TFS 2010 Team build. Since a number of databases are being restored, I am using a batch file which is invoked via the InvokeProcess activity.
I have a number of issues that I am uncertain about:
1. Does TFS wait for all the commands in the batch file to complete, or does it move on to the next activity as soon as it kicks off the InvokeProcess?
2. Is there a way to have the build process wait for successful completion of the batch command?
I am using it as follows:
The FileName property of InvokeProcess is set to "c:\windows\system32\cmd.exe".
The Arguments property contains the full path of my batch file.
Yes, the InvokeProcess activity will wait for the external command to finish.
