Ansible command to check the Java version on different servers - unix

I am writing a test case using Ansible. There are 9 servers in total on which I need to check whether the installed Java version is 1.7.0.
If it is less than 1.7.0, the test case should fail.
Can anyone help me write this test case, as I am very new to Ansible?
Thanks in advance.

Ansible has had a version_compare filter since 1.6. But since Ansible doesn't know about your Java version, you first need to fetch it in a separate task and register the output so you can compare it.
- name: Fetch Java version
  shell: java -version 2>&1 | grep version | awk '{print $3}' | sed 's/"//g'
  register: java_version

- assert:
    that:
      - java_version.stdout | version_compare('1.7', '>=')
On a side note, if your main use case for Ansible is to validate server state, you might want to have a look at an infrastructure test tool instead: serverspec, goss, inspec, testinfra.

Although you haven't specified in your question what you have tried, you can still run a command like this:
ansible your_host -m command -a 'java -version'
If you need to parse the output of java -version, there is a very good script from Glenn Jackman here; adapt it to your needs and use it (a rough shell sketch follows below).
If you are still looking for help, be more specific and show what you have tried.
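For instance, here is a minimal shell sketch of extracting and comparing the version on a single host; it assumes GNU sort (for the -V flag) and the usual java version "1.7.0_xx" output format:

#!/bin/sh
# Fail (exit 1) if the locally installed Java is older than 1.7.0.
required="1.7.0"
current=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)
[ "$lowest" = "$required" ] || { echo "java $current is older than $required"; exit 1; }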

Since Ansible 2.0 you can do this:
- name: Check if java is installed
  command: java -version
  become_user: '{{ global_vars.user_session }}'  # your user session
  register: java_result
  ignore_errors: True

- debug:
    msg: "Failed - Java is not installed"
  when: java_result is failed

- debug:
    msg: "Success - Java is installed"
  when: java_result is success
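If you just want a quick view of what each of the servers reports, an ad-hoc run is enough (the host pattern all is an assumption; adjust it to your inventory group):

# Prints the first line of java -version from every host in the inventory.
ansible all -m shell -a 'java -version 2>&1 | head -n 1'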

Related

AWS Sagemaker pipeline definition error while running from aws-cli

I'm trying to integrate a SageMaker pipeline with Jenkins. I'm using aws-cli (version 2.1.24).
Since this version doesn't support --pipeline-definition-s3-location, I'm trying to do something like the following:
aws s3 cp s3://some_bucket/folder1/pipeine_definition.json - | \
  jq -c . | \
  tee /dev/stderr | \
  xargs -0 -I{} aws sagemaker update-pipeline \
    --pipeline-name "pipelinename" \
    --role-arn "arn:aws:iam::<account_id>:role/sagemaker-role" \
    --pipeline-definition '{}'
And I get this error:
An error occurred (ValidationException) when calling the UpdatePipeline operation: Pipeline definition: At least 1 step must be provided
When I recheck the definition JSON, I can see the steps defined inside it.
Can someone help me?
I tried adding quotes around the value of --pipeline-definition, which isn't working.
Since Jenkins has aws-cli 2.1.24, I want to copy the contents of the JSON file from S3 and pass it to the --pipeline-definition argument of the aws sagemaker update-pipeline command.
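One hedged workaround sketch (not a verified fix): copy the definition to a local file first and let the CLI read it through the file:// prefix, which aws-cli accepts for string parameters, so the JSON never has to survive the xargs quoting:

# Download the definition, then reference it as a local file.
aws s3 cp s3://some_bucket/folder1/pipeine_definition.json /tmp/pipeline_definition.json
aws sagemaker update-pipeline \
    --pipeline-name "pipelinename" \
    --role-arn "arn:aws:iam::<account_id>:role/sagemaker-role" \
    --pipeline-definition file:///tmp/pipeline_definition.json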

Use other code coverage driver than xdebug

I want to use pcov instead of xdebug for code coverage generation.
I'm using Docker and I have xdebug installed.
Can I be sure that xdebug won't affect test execution if I run the following command?
php -d xdebug.default_enable=0 -d pcov.enabled=1 path/to/phpunit --coverage-text
I read that pcov might be faster, but as I understand it, xdebug has to be disabled.
Is it better to do the following instead of running the above command, to achieve the fastest coverage?
remove/truncate xdebug config
run tests
php -d pcov.enabled=1 path/to/phpunit --coverage-text
restore xdebug config
Xdebug and PCOV both overload the same parts of the engine; as a result, they are incompatible, and there's no sense in the authors trying to make them compatible.
Xdebug must not be loaded if you want to use PCOV as the driver for php-code-coverage.
Source: I wrote pcov ...
xdebug will cost performance even if disabled with:
xdebug.default_enable=0
The impact is not negligible.
You're better off disabling the xdebug extension completely before running your tests.
This will give you the best performance if you're using pcov to generate the code-coverage.
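As a minimal sketch for a typical Docker PHP image (the conf.d path is an assumption for a docker-php-ext-enable style setup), you can move the xdebug ini aside just for the coverage run instead of relying on xdebug.default_enable=0:

# Locate the ini that loads xdebug, move it aside, run coverage with pcov, restore it.
php --ini | grep -i xdebug
mv /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini /tmp/
php -d pcov.enabled=1 path/to/phpunit --coverage-text
mv /tmp/docker-php-ext-xdebug.ini /usr/local/etc/php/conf.d/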
Is it better to do the following instead of running the above command, to achieve the fastest coverage? [disable/enable xdebug/pcov instead of loading them]
As you're running PHP on the command line, you don't need to fiddle with ini files when invoking PHPUnit.
Instead you can make the runtime configuration explicit with command-line parameters, which often has an interesting side effect.
It works with the -n switch, which disables all configuration (ini) files (see php --help for usage info):
php -n [...]
where [...] stands for the arguments specific to the use case, first in general form and then specifically for PHPUnit:
php -n <php-file> [<php-file-argument>...]
       `------------- [...] -------------´

php -n path/to/phpunit --coverage-text
       `---------- [...] ----------´
The -n switch/option makes the runtime really naked, so you start with a clean slate.
First of all, running PHPUnit may not work at all, and will not for some features (like reading the configuration file), because PHPUnit needs some PHP extensions and -n tells PHP not to load any extensions (so PHP has only the core extensions, or those that are compiled in and cannot be deactivated).
Therefore you have to add them, e.g. dom for the XML configuration file and tokenizer for generating the HTML code-coverage report (soon):
php -n -d extension=dom -d extension=tokenizer [...]
Then your test suite most likely also tests code paths that require extensions; invoking PHPUnit will highlight these as failures. Therefore you have to add those as well (e.g. json here):
php -n -d extension=dom -d extension=tokenizer -d extension=json [...]
This is perhaps the interesting part, as you learn about the extension requirements your code has (at least for unit-testing).
Finally add the coverage extension of choice. Let's take pcov for the example:
php -n -d extension=dom -d extension=tokenizer -d extension=json \
       -d extension=pcov -d pcov.enabled=1 [...]
and then you get your results:
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.
Runtime: PHP 7.4.20 with PCOV 1.0.8
Configuration: phpunit-cfg.xml
............... 15 / 15 (100%)
Time: 00:00.191, Memory: 6.00 MB
OK (15 tests, 33 assertions)
Generating code coverage report in HTML format ... done [00:00.021]
Compare against xdebug? Why not:
php -n -d extension=dom -d extension=tokenizer -d extension=json \
       -d zend_extension=xdebug -d xdebug.mode=coverage [...]
          ^^^^^
and have the results:
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.
Runtime: PHP 7.4.20 with Xdebug 3.0.4
Configuration: phpunit-cfg.xml
............... 15 / 15 (100%)
Time: 00:00.222, Memory: 8.00 MB
OK (15 tests, 33 assertions)
Generating code coverage report in HTML format ... done [00:00.024]
The hinted phpunit-cfg.xml file was created with phpunit --generate-configuration and code coverage enabled. The output examples have been shortened for clarity.

Saltstack Packages Failed to Install on OpenBSD 5.8

I am kinda new to Saltstack so I may need some hand holding, but here it goes.
First some background info:
I am running a salt-master server on a CentOS 6.7 VM.
I am running a salt-minion on an OpenBSD 5.8 machine.
I have accepted the keys from the minion on the master and I am able to test.ping from the master to the minion. So the connection is fine.
I have created a bunch of .sls files for all of the packages I want to install under a directory called OpenBSD.
As an example, here is my bash/init.sls file:
bash:
  pkg:
    - installed
Very simple, right?
Now I run the command:
# salt 'machinename' state.sls OpenBSD/bash
However, this is what the salt master responds with:
Machinename:
----------
          ID: bash
    Function: pkg.installed
      Result: False
     Comment: The following packages failed to install/update: bash
     Started: 19:03:50.191735
    Duration: 1342.497 ms
     Changes:

Summary
------------
Succeeded: 0
Failed:    1
------------
What am I doing wrong?
Can you run it with the -l debug option attached and see if there is anything useful in the output? Also, can you run the following on the BSD box itself and paste any useful output:
salt-call -ldebug state.sls OpenBSD.bash
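If the debug output only repeats that pkg_add failed, a quick manual check on the OpenBSD minion itself (just a debugging suggestion, not part of the original answer) usually surfaces the real cause, such as an unset PKG_PATH:

# Run the same install by hand; pkg_add's own error message is more specific
# than Salt's summary.
pkg_add -v bash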

Saltstack for "configure make install"

I'm getting my feet wet with SaltStack. I've made my first state (a Vim installer with a static configuration) and I'm working on my second one.
Unfortunately, there isn't an Ubuntu package for the application I'd like my state to install. I will have to build the application myself. Is there a "best practice" for doing "configure-make-install" type installations with Salt? Or should I just use cmd?
In particular, if I was doing it by hand, I would do something along the lines of:
wget -c http://example.com/foo-3.4.3.tar.gz
tar xzf foo-3.4.3.tar.gz
cd foo-3.4.3
./configure --prefix=$PREFIX && make && make install
There are state modules to abstract the first two lines, if you wish.
file.managed: http://docs.saltstack.com/ref/states/all/salt.states.file.html
archive.extracted: http://docs.saltstack.com/ref/states/all/salt.states.archive.html
But you could also just run the commands on the target minion(s).
install-foo:
  cmd.run:
    - name: |
        cd /tmp
        wget -c http://example.com/foo-3.4.3.tar.gz
        tar xzf foo-3.4.3.tar.gz
        cd foo-3.4.3
        ./configure --prefix=/usr/local
        make
        make install
    - cwd: /tmp
    - shell: /bin/bash
    - timeout: 300
    - unless: test -x /usr/local/bin/foo
Just make sure to include an unless argument to make the script idempotent.
Alternatively, distribute a bash script to the minion and execute it. See:
How can I execute multiple commands using Salt Stack?
As for best practice? I would recommend using fpm to create a .deb or .rpm package and installing that. At the very least, copy the tarball to the Salt master and don't rely on external resources still being there three years from now.
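A minimal sketch of the fpm route, assuming the package name foo, version 3.4.3, and a Makefile that honours DESTDIR:

# Build into a staging directory, then wrap the result into a .deb with fpm.
./configure --prefix=/usr/local
make
make install DESTDIR=/tmp/foo-staging
fpm -s dir -t deb -n foo -v 3.4.3 -C /tmp/foo-staging .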
Let's assume foo-3.4.3.tar.gz is checked into GitHub. Here is an approach that you might pursue in your state file:
git:
  pkg.installed

https://github.com/nomen/foo.git:
  git.latest:
    - rev: master
    - target: /tmp/foo
    - user: nomen
    - require:
      - pkg: git

foo_deployed:
  cmd.run:
    - cwd: /tmp/foo
    - user: nomen
    - name: |
        ./configure --prefix=/usr/local
        make
        make install
    - require:
      - git: https://github.com/nomen/foo.git
Your configuration prefix location could be passed as a salt pillar. If the build process is more complicated, you may consider writing a custom state.

symfony2 app console: no available command

After an HDD crash, I had to re-import my Symfony2 app into Eclipse from my SVN server.
After syncing everything, I can't use the console anymore. I only get 2 commands: list and help.
I tried:
php bin/vendors install --reinstall
At the end, I got the following message:
[InvalidArgumentException]
There are no commands defined in the "assets" namespace.
[InvalidArgumentException]
There are no commands defined in the "cache" namespace.
My configuration is pretty simple:
- ubuntu server 11.04 (64bits)
- virtualbox OSE
How can I fix it?
Here is the result of app/console list command:
oc#ubuntu-server:/var/www/projets/Simoov2/src$ app/console list
Symfony version 2.0.0-RC4 - app/dev/debug
Usage:
[options] command [arguments]
Options:
--help -h Display this help message.
--quiet -q Do not output any message.
--verbose -v Increase verbosity of messages.
--version -V Display this program version.
--ansi Force ANSI output.
--no-ansi Disable ANSI output.
--no-interaction -n Do not ask any interactive question.
--shell -s Launch the shell.
--env -e The Environment name.
--no-debug Switches off debug mode.
Available commands:
help Displays help for a command
list Lists commands
OK. As I thought, this is not related to Symfony2 but to the VirtualBox shared-folder mounting.
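For anyone hitting something similar, a quick way to check that suspicion (an assumption on my part, since the answer gives no details) is to look at how the project directory is mounted:

# A vboxsf entry here means the code lives on a VirtualBox shared folder,
# which is known to cause odd file-sync issues.
mount | grep -i vbox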
