Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/toolchain - directory

I am trying to write a simple resource to copy the contents of a directory from the Puppet master to the Puppet agent.
file { "/usr/local/scaligent/" :
ensure => 'directory',
source => "puppet:///modules/toolchain",
recurse => 'true',
#owner => 'root',
#group => 'root',
#mode => '0755',
}
The source is /etc/puppetlabs/code/environments/production/modules/files/toolchain/ on the Puppet master and the destination is /usr/local/scaligent/ on the Puppet agent.
I am getting the error below on the Puppet agent:
[~]$ sudo puppet agent -tv --noop
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Applying configuration version '1600365429'
Error: /Stage[main]/Main/File[/usr/local/scaligent/]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/toolchain
Notice: Applied catalog in 0.04 seconds
[ ~]$

Per the Puppet resource type reference, the form of a puppet: URI is
puppet:///modules/<MODULE NAME>/<FILE PATH>
It refers to the content of a file or directory in a module; Puppet looks for it under that module's files directory. The corresponding file system path would be something like /etc/puppetlabs/code/environments/production/modules/<MODULE NAME>/files/toolchain
The URI you are trying to use, puppet:///modules/toolchain, is not well formed, and the path you are trying to reference is not in any module's files/ directory.
It would be conventional, albeit not required, to put the "toolchain" directory among the files of the module containing the resource declaration. But then, it would also be conventional to put the File declaration in a class, in a module, which you have not done. There are approximately zero circumstances under which it would be good style to declare that resource at top scope, as it appears you have done.
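For illustration, here is a minimal sketch of one conventional layout, assuming you create a module named toolchain and move the directory's contents under its files/ directory (the module name, class name, and scaligent subdirectory name here are assumptions, not taken from the question):

# /etc/puppetlabs/code/environments/production/modules/toolchain/manifests/init.pp
class toolchain {
  # The files would live on the master under .../modules/toolchain/files/scaligent/
  file { '/usr/local/scaligent':
    ensure  => directory,
    recurse => true,
    source  => 'puppet:///modules/toolchain/scaligent',
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
  }
}

You would then pull it in with something like include toolchain from a node definition or profile class, rather than declaring the File resource at top scope.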

How to handle the "wildcard" * asterisk on a Windows Command

Using:
Windows
Local by Flywheel
Composer (task runner)
I'm working on a child theme of a custom-made "Core" parent theme.
I'm running composer install in the root of my WP directory to install dependencies for the parent and child themes.
The problem is that when the installation runs, it errors out when it gets to src/*.js. I asked the team that set up the site and they told me it was a problem with how Windows handles wildcard characters (which I understand to be the asterisk *), and that they solved it in the past by getting their employee a Mac. Since these dependencies are not going through, there are files that aren't loading on the website.
Getting a Mac to solve this issue is not on the table, so I'm looking to figure out what other options I have to run this command successfully on Windows.
I tried going directly to the src/ folder and running npx wp-scripts start src on each .js file individually, and it gave a "successful build" message each time, but that didn't fix the issues on the page.
> wp-scripts build src/*.js
assets by status 95 bytes [cached] 2 assets
Entrypoint * = *.js *.asset.php
ERROR in *
Module not found: Error: Can't resolve './src/*.js' in 'C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName'
resolve './src/*.js' in 'C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName'
using description file: C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\package.json (relative path: .)
Field 'browser' doesn't contain a valid alias configuration
using description file: C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\package.json (relative path: ./src/*.js)
no extension
Field 'browser' doesn't contain a valid alias configuration
C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\src\*.js doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\src\*.js.js doesn't exist
.json
Field 'browser' doesn't contain a valid alias configuration
C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\src\*.js.json doesn't exist
.wasm
Field 'browser' doesn't contain a valid alias configuration
C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\src\*.js.wasm doesn't exist
as directory
C:\Users\UserName\Local Sites\localSiteName\app\public\wp-content\themes\parentThemeName\src\*.js doesn't exist
webpack 5.64.1 compiled with 1 error in 149 ms
Script cd wp-content/themes/parentThemeName && npm install && npm run build && composer install --no-interaction --ansi --ignore-platform-reqs handling the install-core-deps event returned with error code 1
Script composer install-core-deps && composer install-child-deps handling the post-install-cmd event returned with error code 1
Thanks for any and all help.
Cheers!
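For what it's worth, the webpack output above shows that ./src/*.js reached webpack literally, which is consistent with the team's explanation: cmd.exe does not expand * the way a POSIX shell does, so the glob is passed through unexpanded. One possible workaround, sketched here with made-up entry file names (check the src/ folder for the real ones), is to list the entry points explicitly in the parent theme's package.json build script so no shell expansion is needed:

"scripts": {
  "build": "wp-scripts build src/index.js src/admin.js"
}

If the installed version of @wordpress/scripts accepts explicit entry-point paths (newer releases do), this should behave the same on Windows and macOS; otherwise, running the build from a shell that does expand globs (e.g. Git Bash) is another option.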

SBT Basic Auth Problems

I'm setting up SBT on our build server (Bamboo) for multiple build agents. For this I created a separate directory for each agent, containing the agent-specific config and the .ivy home, to make sure agent isolation is fulfilled.
The build itself is called like this:
/sbt-launcher-packaging-0.13.13/bin/sbt -java-home /usr/lib/jvm/jdk1.7.0_79 -Dsbt.override.build.repos=true -Dsbt.repository.config=/data/bamboo/localbuildagents/${bamboo.agentId}/sbt/sbt.conf -Dsbt.ivy.home=/data/bamboo/localbuildagents/${bamboo.agentId}/.ivy2 clean compile dist
The credentials (basic realm) are stored under the home of the user that starts the Bamboo server (~/.sbt/.credentials and ~/.sbt/0.13/plugins/credentials.sbt).
Each sbt.conf contains the agent-specific repos, e.g. the agent-specific local Maven repo and URLs for the remote Artifactory.
[repositories]
local-buildagent-mvn: file:///data/bamboo/home/.m2/AGENT-xxxxxxxx/repository/
ivy-release: http://xxx/artifactory/ivy-release/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
mvn-release: http://xxx/artifactory/libs-release/
mvn-snapshot: http://xxx/artifactory/libs-snapshot/
[ivy]
ivy-home: file:///data/bamboo/localbuildagents/xxxxxxxx/.ivy2/
I'm encountering login problems while sbt is checking the remote Artifactory repos (first HTTP error 401 and then, surprisingly, 403). A curl with the same credentials and repo URL works as expected (first 401 and then 200).
I guess that if -D switches are used for sbt startup, the credentials are not considered. I'm really stuck; any advice is warmly welcome.
From your question I can't see whether you specified where your credentials are defined. In case you didn't, you must add something like this to your build definition (see the documentation):
// inline
credentials += Credentials("Some Nexus Repository Manager", "my.artifact.repo.net", "admin", "admin123")
// file
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
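For the file-based variant, ~/.ivy2/.credentials (or ~/.sbt/.credentials) is a plain properties file; a sketch with placeholder values:

realm=Artifactory Realm
host=xxx
user=admin
password=admin123

Note that the realm string has to match exactly the realm the server sends in its WWW-Authenticate header; a mismatched realm is a common reason why sbt never offers the credentials even though curl with the same user and password works.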

How to use the extension modules in saltstack from Git repository?

I have an extension Python module in a Git repository, named compute_pillar.py.
I want to use this as an external pillar; below are my extension_modules settings:
extension_modules: /var/cache/salt/master/gitfs
gitfs_ssl_verify: False
gitfs_provider: gitpython
gitfs_remotes:
  - git#git.corp.company.com:Saltstack/saltit-automation.git:
    - root: salt
    - base: master
  - file:///var/cache/salt/master/gitfs
Below is my pillar.conf:
ext_pillar:
- cmd_json: 'echo {\"arg\":\"value\"}'
- compute_pillar: True
Now when calling pillar.items, cmd_json is called since it is local, but compute_pillar never executes; below is the error message in the log:
[salt.utils.lazy ][DEBUG ][24791] Could not LazyLoad compute_pillar.ext_pillar: 'compute_pillar.ext_pillar' is not available.
[salt.pillar ][CRITICAL][24791] Specified ext_pillar interface compute_pillar is unavailable
What is the configuration setting to call the extension modules directly from the Git repository?
You do not need to point Salt to /var/cache/salt/master/gitfs.
Assuming your gitfs backend is configured properly and working, create a directory called _modules under your salt directory (for example, /srv/salt/_modules for the roots backend), put your extension Python module there, push it to Git, and wait 60 seconds or run salt-run fileserver.update.
Now just sync your minion (salt minion_A saltutil.sync_all) and you should be able to use the module.
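For reference, a minimal sketch of what such an external pillar module typically looks like; Salt calls its ext_pillar() function (the returned keys below are placeholders, not taken from the question):

# compute_pillar.py
def ext_pillar(minion_id, pillar, *args, **kwargs):
    '''Return a dict that Salt merges into the minion's pillar data.'''
    return {'compute': {'minion_id': minion_id}}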

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, so I guess SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
#echo $file
cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_before.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_after.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all the files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
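A sketch of how that might look for the plugin above (the creates path assumes the zip unpacks into a wordpress-https folder, which matches the ls -Rl observations in the question):

- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https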
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.

solr cloud: ERROR: Error loading config name for collection

I'm trying to create a new collection in a SolrCloud setup, and it fails with the following:
ERROR: Error loading config name for collection test
I tried deleting the collection:
sudo /opt/solr/bin/solr delete -c test
but with the same result.
My setup: SolrCloud with an external ZooKeeper and 5 Solr nodes.
How do I purge it or reload it again?
thanks
Solr is not able to find the configuration files in ZooKeeper. SolrCloud tries to recreate the core from the ZooKeeper configuration.
It looks like you have deleted the ZooKeeper configuration node for collection test.
Two steps to completely purge collection test:
Stop Solr and delete the folder "test", if it exists, from the Solr home folder (default: /var/lib/solr).
Navigate to the ZooKeeper node and edit clusterstate.json. Remove the entries for collection test. I wanted to start fresh, so I reset the clusterstate.json file to its default, i.e. {}.
You should disable the autoAddReplicas property of Solr before shutting down any Solr node.
Check whether the config folder exists in ZooKeeper. If it does, try to link the collection with the config name using the command:
zkcli.sh -cmd linkconfig -collection collectionname -confname configname
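If the config set is missing from ZooKeeper entirely, you can re-upload it first and then link it; a sketch with placeholder paths, host, and names:

zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/configset/conf -confname configname
zkcli.sh -zkhost localhost:2181 -cmd linkconfig -collection collectionname -confname configname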
