I'm getting my feet wet with SaltStack. I've made my first state (a Vim installer with a static configuration) and I'm working on my second one.
Unfortunately, there isn't an Ubuntu package for the application I'd like my state to install. I will have to build the application myself. Is there a "best practice" for doing "configure-make-install" type installations with Salt? Or should I just use cmd?
In particular, if I was doing it by hand, I would do something along the lines of:
wget -c http://example.com/foo-3.4.3.tar.gz
tar xzf foo-3.4.3.tar.gz
cd foo-3.4.3
./configure --prefix=$PREFIX && make && make install
There are state modules to abstract the first two lines, if you wish.
file.managed: http://docs.saltstack.com/ref/states/all/salt.states.file.html
archive.extracted: http://docs.saltstack.com/ref/states/all/salt.states.archive.html
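For example, a rough sketch of how those two states might replace the wget and tar steps (the URL and paths come from the question, the checksum is a placeholder, and option names such as archive_format, if_missing and the older tar_options vary between Salt releases):
# fetch the tarball onto the minion
foo-tarball:
  file.managed:
    - name: /tmp/foo-3.4.3.tar.gz
    - source: http://example.com/foo-3.4.3.tar.gz
    - source_hash: sha256=<checksum of the tarball>   # a hash is needed for http sources

# or fetch and unpack it in one state
foo-unpacked:
  archive.extracted:
    - name: /tmp
    - source: http://example.com/foo-3.4.3.tar.gz
    - source_hash: sha256=<checksum of the tarball>
    - archive_format: tar
    - if_missing: /tmp/foo-3.4.3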
But you could also just run the commands on the target minion(s).
install-foo:
  cmd.run:
    - name: |
        cd /tmp
        wget -c http://example.com/foo-3.4.3.tar.gz
        tar xzf foo-3.4.3.tar.gz
        cd foo-3.4.3
        ./configure --prefix=/usr/local
        make
        make install
    - cwd: /tmp
    - shell: /bin/bash
    - timeout: 300
    - unless: test -x /usr/local/bin/foo
Just make sure to include an unless argument to make the script idempotent.
Alternatively, distribute a bash script to the minion and execute. See:
How can I execute multiple commands using Salt Stack?
As for best practice? I would recommend using fpm to create a .deb or .rpm package and install that. At the very least, copy that tarball to the salt master and don't rely on external resources to be there three years from now.
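For instance, a rough sketch of the fpm route, assuming the project's Makefile honours DESTDIR (the flags shown are fpm's standard -s/-t/-n/-v/-C options):
./configure --prefix=/usr/local
make
make install DESTDIR=/tmp/foo-root      # stage the install into a throwaway root
fpm -s dir -t deb -n foo -v 3.4.3 -C /tmp/foo-root .
# copy the resulting .deb to the salt master and install it with pkg.installed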
Let's assume foo-3.4.3.tar.gz is checked into GitHub. Here is an approach that you might pursue in your state file:
git:
  pkg.installed

https://github.com/nomen/foo.git:
  git.latest:
    - rev: master
    - target: /tmp/foo
    - user: nomen
    - require:
      - pkg: git

foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - user: nomen
    - name: |
        ./configure --prefix=/usr/local
        make
        make install
    - require:
      - git: https://github.com/nomen/foo.git
Your configuration prefix location could be passed as a salt pillar. If the build process is more complicated, you may consider writing a custom state.
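For example, a minimal sketch assuming a (hypothetical) pillar key foo:prefix:
foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - name: |
        ./configure --prefix={{ salt['pillar.get']('foo:prefix', '/usr/local') }}
        make
        make install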
I have a simple .NET Core 2.0 project with a simple issue that SonarLint flags: an unused variable.
The code is stored in a public github repository (here). A travis job (here) runs and has the SonarQube plugin and should post to SonarCloud (here).
The problem I have is that this issue is not being picked up by the analysis and published as an issue. I obviously have something set up incorrectly, but I don't know what.
My .travis.yml is below
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.0.0
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
before_script:
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - ./build.sh
  - ./run-tests.sh
  - sonar-scanner
My sonar-project.properties file is below
# Project identification
sonar.projectKey=Core:Dibware.Salon
sonar.projectVersion=1.0.0.0
sonar.projectName=Dibware.Salon
# Info required for SonarQube
sonar.sources=./Domain
sonar.language=cs
sonar.sourceEncoding=UTF-8
# C# Settings
sonar.dotnet.visualstudio.solution=Dibware.Salon.sln
# MSBuild
sonar.dotnet.buildConfiguration=Release
sonar.dotnet.buildPlatform=Any CPU
# StyleCop
sonar.stylecop.mode=
# SCM
sonar.scm.enabled=false
In the travis log I do have:
INFO: 27 files to be analyzed
WARN: Shallow clone detected, no blame information will be provided. You can convert to non-shallow with 'git fetch --unshallow'.
INFO: 0/27 files analyzed
WARN: Missing blame information for the following files:
WARN: *
.
<lots of files>
.
WARN: This may lead to missing/broken features in SonarQube
INFO: Calculating CPD for 0 files
INFO: CPD calculation finished
INFO: Analysis report generated in 216ms, dir size=381 KB
INFO: Analysis report compressed in 56ms, zip size=89 KB
INFO: Analysis report uploaded in 340ms
INFO: ANALYSIS SUCCESSFUL, you can browse https://sonarcloud.io/dashboard?id=Core%3ADibware.Salon
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at https://sonarcloud.io/api/ce/task?id=AWo0YQeAUanQDuOXxh79
INFO: Analysis total time: 11.484 s
Is this what is affecting the analysis? If so how do I resolve it? If not what else is stopping the analysis of the files, please?
EDIT:
I can see the following in the log, but it still does not get picked up by SonarQube.
Chair.cs(17,17): warning CS0219: The variable 'a' is assigned but its value is never used
Edit 2:
I managed to get the analyzed-files number to go up, see below...
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=6ms
INFO: SCM provider for this project is: git
INFO: 27 files to be analyzed
INFO: 27/27 files analyzed
INFO: Calculating CPD for 0 files
... using the following in my .travis.yml
install:
  - git fetch --unshallow --tags
That came from here:
https://stackoverflow.com/a/47441734/254215
Ok, I am not out of the woods yet, but I am getting some analysis using the following .travis.yml:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*" /d:sonar.cs.opencover.reportsPaths="lcov.opencover.xml" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true
In the end, the .travis.yml file I used, which worked, is this:
language: csharp
dist: xenial
sudo: required
mono: none
dotnet: 2.1.300
solution: Dibware.Salon.sln
addons:
  sonarcloud:
    organization: "dibley1973-github" # the key of the org you chose at step #3
    token:
      secure: $SONAR_TOKEN
branches:
  only:
    - master
install:
  - dotnet tool install --global dotnet-sonarscanner
  - git fetch --unshallow --tags
before_script:
  - export PATH="$PATH:$HOME/.dotnet/tools"
  - chmod +x build.sh
  - chmod +x run-tests.sh
script:
  - dotnet sonarscanner begin /k:"Core:Dibware.Salon" /d:sonar.login="$SONAR_TOKEN" /d:sonar.cs.opencover.reportsPaths="**/coverage.opencover.xml" /d:sonar.exclusions="**/bin/**/*,**/obj/**/*,**/Dibware.Salon.Web/**/*" || true
  - ./build.sh
  - ./run-tests.sh
  - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN" || true
The build file is this.
#!/usr/bin/env bash
dotnet restore
dotnet clean -c Release
dotnet build Dibware.Salon.sln -c Release
The test is this.
# Run the tests and collate code coverage results
dotnet test -c Release --no-build --no-restore Domain/SharedKernel/Dibware.Salon.Domain.SharedKernel.UnitTests/Dibware.Salon.Domain.SharedKernel.UnitTests.csproj /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
I did not use a sonar-project.properties file.
HTH someone, one day
My roughly two-month-old SLS files are not working any more. I've tried to put a minimal example below.
salt 'myserver.internal' state.highstate gave:
myserver.internal:
Data failed to compile:
----------
Requisite declaration dhparam in SLS nginx is not formed as a single key dictionary
----------
Requisite declaration /etc/nginx/sites-available/myapp.conf in SLS nginx is not formed as a single key dictionary
ERROR: Minions returned with non-zero exit code
with the following nginx.sls:
/etc/nginx/sites-available/myapp.conf:
  file.managed:
    - name: /etc/nginx/sites-available/myapp.conf
    - source: salt://nginx-myapp.conf.jinja
    - template: jinja
    - require:
      - dhparam

dhparam:
  cmd:
    - run
    - name: "mkdir -p /etc/nginx/ssl/; openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048"
    - unless: ls /etc/nginx/ssl/dhparam.pem
And there are tens of those errors when I run the whole configuration. Am I missing something? Maybe some crucial dependency is not installed, not updated, or broken? yamllint did not find any problems in my SLS files. The same files worked well on another server two months ago.
Versions:
salt-master 2016.11.6+ds-1
salt-minion 2015.8.8+ds-1
The system is Ubuntu Xenial 16.04.2 LTS on both master and minion.
The problem is the version of the minion. While I added the repo key for SaltStack, I forgot to add
deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main
to /etc/apt/sources.list.d/saltstack.list and to run apt update before installing salt-minion.
When I corrected that, files started to work again.
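For reference, the complete sequence on the minion looks roughly like this (the exact key URL is an assumption based on the old repo.saltstack.com layout and may have moved since):
# add the SaltStack repo key and repo, then install a matching minion
wget -O - https://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" | sudo tee /etc/apt/sources.list.d/saltstack.list
sudo apt update
sudo apt install salt-minion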
I have a multi-container Symfony application that uses docker-compose to handle the relationships between the containers. To simplify a little, I have 4 main services:
code:
  image: mycode
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The "mycode" image contains the code of my application and is built from the following Dockerfile :
FROM composer/composer
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libmcrypt-dev \
libxml2-dev \
libicu-dev \
libcurl4-openssl-dev \
libssl-dev \
pkg-config
RUN docker-php-ext-install iconv mcrypt mbstring bcmath json ctype iconv posix intl
RUN pecl install mongo \
&& echo extension=mongo.so >> /usr/local/etc/php/conf.d/mongo.ini
COPY . /code
WORKDIR /code
RUN rm -rf /code/app/cache/* \
&& rm -rf /code/app/logs/* \
&& chown -R root /code/app/cache \
&& chown -R root /code/app/logs \
&& chmod -R 777 /code/app/cache \
&& chmod -R 777 /code/app/logs \
&& composer install \
&& rm -f /code/web/app_dev.php \
&& rm -f /code/web/config.php
VOLUME ["/code", "/code/app/logs", "/code/app/cache"]
At first, deploying this application was easy. I just had to do a simple docker-compose up -d and it created all the containers and ran them without any issue. But then I had to deploy a new version.
This configuration uses volumes to store data:
- the source code is mounted on the /code volume, and shared between 3 containers (code, web, php-fpm). It has to be replaced by a new version when deploying.
- the MongoDB data is on another volume, mounted only by the mongo container. I have to keep this data between deployments.
When I deploy an update to my code, I publish the new version of the mycode image and re-create the container. But since the /code volume is still used by the web and php-fpm containers, the old volume can't be replaced by the new one. I have to stop all the running services to delete the old volume, and if I use the docker-compose rm -v command, it will delete the MongoDB data too!
Can't I replace only one volume with a new version, without any downtime?
So I'm kind of stuck here. I'm thinking of having a permanent volume to store the code and updating it through SSH with Capistrano, old style. That would also allow me to run Doctrine migration scripts after deployment. But I have other issues with it, as Capistrano uses symlinks to handle versions, so I can't just mount the /current folder to /code.
Do you have a solution to handle the deployment of a Docker application without losing data and without downtime?
Should I use manual scripts instead of docker-compose?
the source code is mounted on the /code volume
This is the problem; it is not what you want.
Code never goes into a volume, it should change when the image changes. Volumes are for things that you want to preserve between changes to the image (data, logs, state, etc).
Code is the immutable thing that you want to replace when you change a container. So remove the /code volume from the Dockerfile entirely, and instead do an ADD . /code in the mynginx and myphpfpm Dockerfiles.
With that change, you can deploy with just up -d. It will recreate any containers that have changed, and your volumes will be copied over. You don't need an rm anymore.
If you have your Dockerfile for myphpfpm and mynginx in a different directory, you can build using docker build -f path/to/dockerfile .
Using a host volume (as suggested in another answer) is another option, but that's not usually what you want outside of development. With a host volume you would still remove the /code VOLUME from the Dockerfile.
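For example, a minimal sketch of what the mynginx Dockerfile could look like after that change (the nginx base image here is an assumption; only the ADD . /code part comes from the suggestion above):
FROM nginx
# bake the application code into the image instead of sharing it via a volume,
# so a rebuilt image fully replaces the old code on the next `docker-compose up -d`
ADD . /code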
Do not copy the code via the Dockerfile; just attach volumes to the 'code' container.
A few edits:
code:
  image: mycode
  volumes:
    - .:/code
    - /code
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The same thing applies to mongo: mount it to an external volume so it persists when the container shuts down (see the sketch after the quoted note below). There is actually another method as well; they mention it on the Docker Hub page: https://hub.docker.com/_/mongo/
Where to Store Data
Important note: There are several ways to store data used by
applications that run in Docker containers. We encourage users of the
mongo images to familiarize themselves with the options available,
including:
Let Docker manage the storage of your database data by writing the
database files to disk on the host system using its own internal
volume management. This is the default and is easy and fairly
transparent to the user. The downside is that the files may be hard to
locate for tools and applications that run directly on the host
system, i.e. outside containers.
Create a data directory on the host system (outside the container) and
mount this to a directory visible from inside the container. This
places the database files in a known location on the host system, and
makes it easy for tools and applications on the host system to access
the files. The downside is that the user needs to make sure that the
directory exists, and that e.g. directory permissions and other
security mechanisms on the host system are set up correctly.
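A minimal sketch of that second option in compose syntax (the host path is a placeholder):
mongo:
  image: mongo
  volumes:
    - /srv/mongo-data:/data/db   # /data/db is where the official image keeps its database files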
I am trying to install Meteor on the HP 14 Chromebook. It is a Linux x86_64 Chrome OS system.
Each time I try to install it I run into errors.
The first time I tried to install it the installer just downloaded the Meteor preengine but never downloaded the tarball or installed the actual meteor application structure.
So, I decided to try as sudo.
sudo curl https://install.meteor.com | /bin/sh
This definitely installed it, because you can see it with ls:
chronos@localhost ~/projects $ ls /home/chronos/user/.meteor/
Now when I try to run meteor --version or meteor create myapp without sudo I get the following error.
chronos@localhost ~/projects $ meteor create myapp
'/home/chronos/user/.meteor' exists, but '/home/chronos/user/.meteor/meteor' is not executable.
Remove it and try again.
When I try to run sudo meteor --version or sudo meteor create myapp I get this error.
chronos@localhost ~/projects $ sudo meteor create myapp
mkdir: cannot create directory ‘/root/.meteor-install-tmp’: Read-only file system
Any ideas? Thinking I have to make that partition writeable. I made partition 4 writeable.
Put your Chromebook into dev mode.
http://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices
Boot into dev mode.
ctrl-alt-t to open crosh
shell
sudo su -
cd /usr/share/vboot/bin/
./make_dev_ssd.sh --remove_rootfs_verification --partitions 4
reboot
After rebooting
sudo su -
mount -o remount,rw /
mount -o remount,exec /mnt/stateful_partition
Write yourself a read/write script
sudo vim /sbin/rw
#!/bin/bash
echo "Making FS Read/Write"
sudo mount -o remount,rw /
sudo mount -o remount,exec /mnt/stateful_partition
sudo mount -i -o remount,exec /home/chronos/user
echo "You should now have full Read/Write access"
exit
Change permissions on script
sudo chmod a+x /sbin/rw
Run to set read/write root
sudo rw
Install Meteor as indicated on www.meteor.com via curl and meteor create works!
Alternatively, you can edit chromeos_startup, though that might not be the best idea. It is probably best to have read/write on demand as illustrated above.
cd /sbin
sudo vim chromeos_startup
Go to lines 51 and 58 and remove the noexec options from the mount command.
Down at the bottom of the script, above the note about ureadahead and below the if statement, add in:
mount -o remount,exec /mnt/stateful_partition
#uncomment this to mount root r/w on boot
mount -o remount,rw /
Again, editing chromeos_startup probably isn't the best idea unless you are so lazy you can't type sudo rw.
Enjoy.
This is super easy to fix!!
Just run this (or put it in .bashrc or .zshrc to make it permanent):
sudo mount -i -o remount,exec /home/chronos/user
Based on your question (you are using sudo) I assume you already have Dev Mode enabled, which is required for the above sudo command to work.
ChromeOS mounts the home folder using the noexec option by default, and this command remounts it with exec instead. And boom, Meteor will work just fine after that (and so will a bunch of other programs running out of your home folder).
Original tip: https://github.com/dnschneid/crouton/issues/928
I tried to add:
mypack:
  pkg:
    - installed
    - pkgs:
      - mercurial
      - git
  cmd.run:
    - name: 'mkdir -p /opt/mypack'
  cmd.run: 'hg pull -u -R /opt/mypack || hg clone -R /opt https://...'
  cmd.run: 'ln -s /opt/mypack/etc/init.d/xxx /etc/init.d/xxx'
But for some reason the state seems to execute/install, yet the commands are not executed, or at least not all of them.
I need a solution to run multiple commands and to fail the deployment if any of these fails.
I know that I could write a bash script and include this bash script, but I was looking for a solution that would work with only the YAML file.
You want this:
cmd-test:
  cmd.run:
    - name: |
        mkdir /tmp/foo
        chown dan /tmp/foo
        chgrp www-data /tmp/foo
        chmod 2751 /tmp/foo
        touch /tmp/foo/bar
Or this, which I would prefer, where the script is downloaded from the master:
cmd-test:
  cmd.script:
    - source: salt://foo/bar.sh
    - cwd: /where/to/run
    - user: fred
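One note, since the question also asks for the deployment to fail if any command fails: the multi-line name form runs as a single shell script whose exit status is that of the last command, so adding set -e at the top makes the state fail on the first error. A sketch based on the block above:
cmd-test:
  cmd.run:
    - name: |
        set -e
        mkdir /tmp/foo
        chown dan /tmp/foo
        chgrp www-data /tmp/foo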
In addition to the above (better) suggestions, you can do this:
cmd-test:
  cmd.run:
    - names:
      - mkdir -p /opt/mypack
      - hg pull -u -R /opt/mypack || hg clone -R /opt https://...
      - ln -s /opt/mypack/etc/init.d/xxx /etc/init.d/xxx
For reasons I don't understand yet (I'm a Salt novice), the names are iterated in reverse order, so the commands are executed backwards.
You can do as Dan pointed out, using the pipe or a cmd.script state. But it should be noted that you have some syntax problems in your original post. Each new state needs a name arg, you can't just put the command after the colon:
mypack:
  pkg:
    - installed
    - pkgs:
      - mercurial
      - git
  cmd.run:
    - name: 'my first command'
  cmd.run:
    - name: 'my second command'
However, that actually may fail as well, because I don't think you can put multiple of the same state underneath a single ID. So you may have to split them out like this:
first:
  cmd.run:
    - name: 'my first command'

second:
  cmd.run:
    - name: 'my second command'
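If the second command must run after the first, a require can chain them explicitly (a sketch reusing the same IDs):
second:
  cmd.run:
    - name: 'my second command'
    - require:
      - cmd: first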
As one of the users pointed out above, this works in proper order (salt 3000.2)
install_borg:
  cmd.run:
    - names:
      - cd /tmp
      - wget https://github.com/borgbackup/borg/releases/download/1.1.15/borg-linux64
      - mv borg-linux64 /usr/local/bin/borg
      - chmod u+x /usr/local/bin/borg
      - chown root:root /usr/local/bin/borg
      - ln -s /usr/local/bin/borg /usr/bin/borg
    - unless: test -f /usr/bin/borg