How to run a shell script on a remote host using salt-ssh - salt-stack

My web server generates a shell script with more than 100 lines of code based on complex user selections. I need to orchestrate this over several machines using salt-ssh. What I need is to copy this shell script to the remote hosts and execute it there, on all devices. How do I achieve this with salt-ssh? I cannot install minions on the remote devices.

Just as with a normal minion: write a state...
add script:
  file.managed:
    - name: /path/to/file.sh        # absolute destination path on the target
    - source: salt://file.sh        # file.sh placed in the master's file_roots (e.g. /srv/salt)
    - mode: '0755'

run script:
  cmd.run:
    - name: /path/to/file.sh
    - require:
      - file: add script
...and apply it:
salt-ssh 'minions*' state.apply mystate
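If you only need to push the script out and run it in one step, the cmd.script execution module may also work over salt-ssh: it fetches a script from the master's file_roots and executes it on the target. A minimal sketch, assuming the generated script has been saved as /srv/salt/file.sh:
salt-ssh 'minions*' cmd.script salt://file.sh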

Related

How to run an existing shell script using airflow?

I wanted to run some existing bash scripts using Airflow without modifying the code in the scripts themselves. Is it possible to do this without copying the shell commands from the script into a task?
Not entirely sure if I understood your question, but you can load your shell commands into Variables through the Admin >> Variables menu as a JSON file, and in your DAG read the variable and pass it as a parameter to the BashOperator.
Airflow variables in more detail:
https://www.applydatascience.com/airflow/airflow-variables/
Example of variables file:
https://github.com/tuanavu/airflow-tutorial/blob/v0.7/examples/intro-example/dags/config/example_variables.json
How to read the variable:
https://github.com/tuanavu/airflow-tutorial/blob/v0.7/examples/intro-example/dags/example_variables.py
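For illustration, a minimal sketch of reading a Variable and handing it to a BashOperator (the Variable key script_command is a made-up example, and dag is assumed to be defined elsewhere in the file):
from airflow.models import Variable
from airflow.operators.bash_operator import BashOperator

# "script_command" is a hypothetical Variable key set under Admin >> Variables,
# e.g. "bash /path/to/existing_script.sh"
script_command = Variable.get("script_command")

run_script = BashOperator(
    task_id='run_script',
    bash_command=script_command,
    dag=dag)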
I hope this post helps you.
As long as the shell script is on the same machine that the Airflow worker is running on, you can just call the shell script using the BashOperator, like the following:
t2 = BashOperator(
    task_id='bash_example',
    # Just call the script
    bash_command="/home/batcher/test.sh ",
    dag=dag)
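Note that the trailing space after test.sh appears intentional: Airflow runs bash_command through Jinja templating, and a value ending in .sh with no trailing space gets treated as a template file to load, which typically fails with a TemplateNotFound error.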
You have to 'link' the local folder where your shell script lives with the worker, which means you need to add a volume in the worker part of your docker-compose file.
So I added a volume line under the worker settings, and the worker now sees this folder on the local machine:
airflow-worker:
  <<: *airflow-common
  command: celery worker
  healthcheck:
    test:
      - "CMD-SHELL"
      - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
    interval: 10s
    timeout: 10s
    retries: 5
  restart: always
  volumes:
    - /LOCAL_MACHINE_FOLDER/WHERE_SHELL_SCRIPT_IS:/folder_in_root_folder_of_worker
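With that mount in place, the bash_command in the DAG can then point at the container-side path, e.g. (keeping the placeholder folder names from above): bash_command="/folder_in_root_folder_of_worker/test.sh "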

How to put seed data into SQL Server docker image?

I have a project using ASP.NET Core and SQL Server. I am trying to put everything in docker containers. For my app I need to have some initial data in the database.
I am able to use the Docker SQL Server image from Microsoft (microsoft/mssql-server-linux), but it is (obviously) empty. Here is my docker-compose.yml:
version: "3"
services:
  web:
    build: .\MyProject
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: "microsoft/mssql-server-linux"
    environment:
      SA_PASSWORD: "your_password1!"
      ACCEPT_EULA: "Y"
I have an SQL script file that I need to run on the database to insert the initial data. I found an example for MongoDB, but I cannot find which tool I can use instead of mongoimport.
You can achieve this by building a custom image. I'm currently using the following solution. Somewhere in your dockerfile should be:
RUN mkdir -p /opt/scripts
COPY database.sql /opt/scripts
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=Y
RUN /opt/mssql/bin/sqlservr --accept-eula & sleep 30 & /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/database.sql
Alternatively, you can wait for certain text to be output, which is useful when iterating on the Dockerfile setup because it is immediate. It's less robust, as it relies on some 'random' text of course:
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -i /opt/scripts/database.sql
Don't forget to put a database.sql (with your script) next to the dockerfile, as that is copied into the image.
Roet's answer https://stackoverflow.com/a/52280924/10446284 didn't work for me.
The trouble was with the bash ampersands firing the sqlcmd too early, not waiting for sleep 30 to finish.
Our Dockerfile now looks like this:
FROM microsoft/mssql-server-linux:2017-GA
RUN mkdir -p /opt/scripts
COPY db-seed/seed.sql /opt/scripts/
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=true
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/seed.sql
Footnotes:
Now the bash command works like this:
run-async(sqlservr) & run-async(wait-and-run-sqlcmd)
We chose sleep 60 because the build of the Docker image happens "offline", before the runtime environment is set up, so those 60 seconds don't occur at container runtime anymore. Giving the sqlservr command more time gives our teammates' machines more time to complete the docker build phase successfully.
One simple option is to just navigate to the container file system, copy the database files in, and then use a script to attach them.
This quickstart https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker has an example of using sqlcmd in a Docker container, although I'm not sure how you would add this to whatever build process you have.
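A rough sketch of that copy-and-attach approach, assuming an existing .mdf/.ldf pair (the container name mssql_container and the MyDb file names are only placeholders):
# copy the database files into the running container
docker cp MyDb.mdf mssql_container:/var/opt/mssql/data/
docker cp MyDb_log.ldf mssql_container:/var/opt/mssql/data/
# attach them with sqlcmd
docker exec mssql_container /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'your_password1!' \
  -Q "CREATE DATABASE MyDb ON (FILENAME = '/var/opt/mssql/data/MyDb.mdf'), (FILENAME = '/var/opt/mssql/data/MyDb_log.ldf') FOR ATTACH;"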

restrict commands that salt-minion is able to publish

I configured the salt-stack environment like below:
machine1 -> salt-master
machine2 -> salt-minion
machine3 -> salt-minion
This setup is working for me, and I can publish e.g. the command "ls -l /tmp/" from machine2 to machine3 with:
salt-call publish.publish 'machine3' cmd.run 'ls -l /tmp/'
How is it possible to restrict the commands that can be published?
In the current setup it's possible to execute every command on machine3, and that would be very risky. I was looking in the salt-stack documentation, but unfortunately I didn't find any example of how to configure this.
SOLUTION:
On machine1, create the file /srv/salt/_modules/testModule.py
and insert some code like:
#!/usr/bin/python

def test():
    # __salt__ is injected by the Salt loader, so this function only works when run by Salt
    return __salt__['cmd.run']('ls -l /tmp/')
To distribute the new module to the minions, run:
salt '*' saltutil.sync_modules
On machine2, run:
salt-call publish.publish 'machine3' testModule.test
The peer configuration in the Salt master config can limit which commands certain minions are able to publish, e.g.:
peer:
  machine2:
    machine1:
      - test.*
      - cmd.run
    machine3:
      - test.*
      - disk.usage
      - network.interfaces
This will allow minion machine2 to publish test.* and cmd.run commands to machine1, and test.*, disk.usage and network.interfaces commands to machine3.
P.S. Allowing minions to publish the cmd.run command is generally not a good idea; it is only used here as an example.

Is it mandatory to put files in /srv/salt folder to transfer from master to minion

Is it mandatory to put files in the /srv/salt folder of the master to transfer a file/directory from the master to the connected minions?
1) Can we transfer files without using the salt file server?
2) Also, the documentation says "You can't run interactive scripts". Does that mean there are limitations on executing arbitrary Linux commands with cmd.run, e.g. salt "*" cmd.run 'ls -l /home'? Similarly, can we run commands like scp and ssh with cmd.run?
You don't have to use the salt file server. You can also set the source to, for example, an http location. In that case, though, you must also declare the hash of the file, for example:
/etc/nginx/sites-enabled/mysite:
  file.managed:
    - source: http://example.com/mysite
    - source_hash: abc123....
The documentation mentions /srv/salt as the default location.

Saltstack - Preview Highstate

Is there a way to preview what files will be served to a minion on a state.highstate? I know that you can run state.show_highstate, but that is not the output I am looking for. For example, inside /path/to/recurse/dir/ I have foo.txt and bar.txt, and in my sls file I have:
/path/to/recurse/dir/:
  file.recurse:
    - source: salt://dir/
I would like to run something like state.preview_highstate and have it show me the contents of foo.txt and bar.txt. Does anyone know how to go about this without just running state.highstate?
If you are able to run the state on the minion but just don't want to apply any changes, you can append test=True to your command:
salt '*' state.highstate test=True
This will run the highstate on the minion but will not make any changes to the system. Changes that would be applied are shown in yellow.
