How to disable running builds on commit for a gitrepo resource - artifactory

I have a GitRepo resource, but I don't want to trigger a run on every commit to this repository. Is there any way I can achieve this?

You should disable the buildOn commit trigger here. For example:
- name: my_app_repo
  type: GitRepo
  configuration:
    gitProvider: my_github
    path: myuser/repo-name
    branches:
      include: master
    buildOn:
      commit: false


Salt state to enable re-run systemd service

I am trying to craft a salt state file to simply ensure-enabled and re-run my one-shot service. I thought it would be nice to re-run if any of the dependent files changed, but honestly this is simple enough and the short-lived service is almost never going to be running when I want to update.
Current attempt:
myown-systemd-service-unit-file:
  ...
myown-systemd-service-executable-file:
  ...
myown-service:
  systemd.force_reload:
    - name: myown
    - enable: True
    - watch:
      - myown-systemd-service-unit-file
      - myown-systemd-service-executable-file
It is failing with the error:
----------
ID: myown-service
Function: systemd.force_reload
Name: myown
Result: False
Comment: State 'systemd.force_reload' was not found in SLS 'something.myown'
Reason: 'systemd.force_reload' is not available.
Changes:
By enable, I mean to have the equivalent of this CLI call be applied:
sudo systemctl enable myown.service
Relevant docs: https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.systemd_service.html#module-salt.modules.systemd_service
The systemd_service module is an execution module, and the syntax for using such modules is slightly different; the state declaration you are using is for state modules. Also, the example in the documentation shows the use of service.force_reload rather than systemd.force_reload:
salt '*' service.force_reload <service name>
Considering all this, the example below restarts and enables the myown service when the service unit file changes.
myown-service:
  module.run:
    - service.restart:
      - name: myown
    - onchanges:
      - file: myown-systemd-service-unit-file
    - service.enable:
      - name: myown
Note that I've used restart instead of force_reload to bounce the service. Also, I'm using an onchanges requisite on the file state since you haven't shown how you manage the two files; substitute the appropriate module and state IDs. A sketch of those file states follows below.
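For completeness, here is a minimal sketch of what the two file states referenced by the requisite might look like; the paths and salt:// sources are assumptions, since the question doesn't show how the files are managed:
myown-systemd-service-unit-file:
  file.managed:
    - name: /etc/systemd/system/myown.service    # assumed unit file path
    - source: salt://myown/files/myown.service   # assumed source in the fileserver

myown-systemd-service-executable-file:
  file.managed:
    - name: /usr/local/bin/myown                 # assumed executable path
    - source: salt://myown/files/myown
    - mode: '0755'
With states like these in place, the onchanges requisite above fires whenever either managed file changes.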

How to keep fork master branch in sync with azerothcore master branch

How can I automatically keep my fork master branch in sync with the AzerothCore master branch?
You can use GitHub Actions to keep your fork's master branch in sync:
on:
  schedule:
    - cron: "0 */6 * * *"
jobs:
  repo-sync:
    runs-on: ubuntu-latest
    steps:
      - name: repo-sync
        uses: wei/git-sync@v2
        with:
          source_repo: "https://github.com/azerothcore/azerothcore-wotlk.git"
          source_branch: "master"
          destination_repo: "https://${{ secrets.GH_USERNAME }}:${{ secrets.GH_TOKEN }}@github.com/${{ secrets.GH_USERNAME }}/azerothcore-wotlk.git"
          destination_branch: "master"
Create the secrets in the fork repository's settings. You can refer here for how to add them:
https://docs.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#creating-encrypted-secrets-for-a-repository
In my case I created the secrets GH_USERNAME and GH_TOKEN.
GH_USERNAME should be set to your GitHub username.
GH_TOKEN should be set to a personal access token that you create.
Refer here for information on how to create one:
https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token
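If you have the GitHub CLI installed, the secrets can also be created from the command line. A sketch, assuming your fork keeps the azerothcore-wotlk repository name under your account:
# set the two secrets on the fork (values are placeholders)
gh secret set GH_USERNAME --repo your-username/azerothcore-wotlk --body "your-username"
gh secret set GH_TOKEN --repo your-username/azerothcore-wotlk --body "<personal access token>"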

Next.js - ERROR Build directory is not writeable on Google Cloud Build

I was trying to automate the deployment of my Next.js application to App Engine using Cloud Build, but it keeps failing at the build phase with:
Error: > Build directory is not writeable. https://err.sh/vercel/next.js/build-dir-not-writeable
I can't seem to figure out what to fix for this.
My current build file is below, and it keeps failing on step 2:
steps:
  # install dependencies
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # build the container image
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # deploy to app engine
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: "1600s"
app.yaml:
runtime: nodejs12
handlers:
  - url: /.*
    secure: always
    script: auto
env_variables:
  PORT: 8080
  NODE_ENV: 'production'
Any help would be appreciated.
I can reproduce the same behavior after upgrading to Next.js version 9.3.3.
Cause
The issue is related to the npm builder image, which is managed by Google. If you use gcr.io/cloud-builders/npm, your build runs inside Google Cloud Build on an old Node version.
Here you can find the currently supported versions:
https://console.cloud.google.com/gcr/images/cloud-builders/GLOBAL/npm?gcrImageListsize=30
As you can see, Google's latest Node version there is 10.10, while the newest Next.js version requires at least Node 10.13.
Solution
Change gcr.io/cloud-builders/npm to
- name: node
  entrypoint: npm
in order to use the official node Docker image, which runs on Node 12.
After those changes your build will be successful again.
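For reference, a sketch of what the cloudbuild.yaml steps from the question might look like after that change:
steps:
  # install dependencies using the official node image (Node 12)
  - name: node
    entrypoint: npm
    args: ['install']
  # build the Next.js application
  - name: node
    entrypoint: npm
    args: ['run', 'build']
  # deploy to app engine (unchanged)
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: "1600s"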
Sidenote
Switching to the official node image will increase the build duration (at least in my case); it takes around 2 minutes longer than the gcr.io npm builder.

How to use the extension modules in saltstack from Git repository?

I have an extension Python module, named compute_pillar.py, in a Git repository.
I want to use this as an external pillar; below are my extension_modules settings:
extension_modules: /var/cache/salt/master/gitfs
gitfs_ssl_verify: False
gitfs_provider: gitpython
gitfs_remotes:
  - git@git.corp.company.com:Saltstack/saltit-automation.git:
    - root: salt
    - base: master
  - file:///var/cache/salt/master/gitfs
Below is my pillar.conf:
ext_pillar:
  - cmd_json: 'echo {\"arg\":\"value\"}'
  - compute_pillar: True
Now when calling pillar.items, cmd_json is called as it is local, but compute_pillar never executes; below is the error message in the log:
[salt.utils.lazy ][DEBUG   ][24791] Could not LazyLoad compute_pillar.ext_pillar: 'compute_pillar.ext_pillar' is not available.
[salt.pillar     ][CRITICAL][24791] Specified ext_pillar interface compute_pillar is unavailable
What is the configuration setting to call the extension modules directly from git repository?
You do not need to point salt to /var/cache/salt/master/gitfs.
Assuming your gitfs backend is configured properly and working, create a directory called _modules under your salt directory (for example, /srv/salt/_modules for the roots backend), put your extension Python module there, push it to git, and wait 60 seconds or run salt-run fileserver.update.
Now just sync your minion with salt minion_A saltutil.sync_all and you should be able to use the module.
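For context, an external pillar module only needs to expose an ext_pillar function; a minimal sketch of what compute_pillar.py might look like (the returned data is purely illustrative, since the real module isn't shown in the question):
# compute_pillar.py - minimal external pillar skeleton
def ext_pillar(minion_id, pillar, *args, **kwargs):
    '''
    Called by the master for each minion; the dict returned here
    is merged into that minion's pillar data.
    '''
    return {'compute': {'minion_id': minion_id}}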

Symfony/Codeception Run Errors

I have a Symfony 2.4.4 site with a simple Codeception (2.1.7) acceptance test setup. When running the acceptance tests I get the following error:
[Codeception\Exception\ConfigurationException]
AcceptanceTester class doesn't exist in suite folder.
Run the 'build' command to generate it
The AcceptanceTester class does exist in the tests/acceptance directory. If I run a build I get the following error:
[Codeception\Exception\ConfigurationException]
Configuration file could not be found.
Run `bootstrap` to initialize Codeception.
The codeception.yml file does exist and contains the following:
actor: Tester
paths:
    tests: tests
    log: tests/_output
    data: tests/_data
    helpers: tests/_support
settings:
    bootstrap: _bootstrap.php
    colors: false
    memory_limit: 1024M
coverage:
    enabled: true
    remote: true
    include:
        - app/*
    exclude:
        - app/cache/*
include:
    ...
modules:
    config:
        Db:
            dsn: ''
            user: ''
            password: ''
            dump: tests/_data/dump.sql
If I run a bootstrap it confirms this:
Project is already initialized in '.'
acceptance.suite.yml contains:
class_name: AcceptanceTester
modules:
    enabled:
        - AcceptanceHelper
Any suggestions?
The AcceptanceTester.php file should be in the tests/_support directory.
Run codecept build to regenerate the Tester files.
You have no framework or browser module enabled in acceptance.suite.yml; you have to enable one of Symfony2, PhpBrowser, or WebDriver.
Your site uses an old version of Symfony, which can cause issues for Codeception if the Symfony2 module is used, so I recommend testing it with PhpBrowser or WebDriver.
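For example, a minimal acceptance.suite.yml using PhpBrowser might look like the following (the url value is an assumption for your local environment):
class_name: AcceptanceTester
modules:
    enabled:
        - PhpBrowser:
            url: 'http://localhost'
        - AcceptanceHelper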
I believe you need to run the command codecept bootstrap.
I had this problem, and I realised that I had run codecept bootstrap but the vendor and tests folders were in the wrong directory.
I followed this video and it helped me.
