How to modify a Jelastic installation when wrapping a jps manifest in my own manifest?

The Jelastic Marketplace is full of interesting software. However, it does not always comply with my security needs. In those cases, I would like to write my own manifest that installs the manifest from the marketplace and adds the components I need for my use case. Let's take an example: I would like to wrap the Kubernetes installation and add a load balancer. I would like to do something like this:
type: install
name: My Example Manifest
onInstall:
- install:
    jps: https://github.com/jelastic-jps/kubernetes/blob/1.23.6/manifest.jps
    envName: env-${fn.random}
    settings:
      deploy: cmd
      cmd: echo "do nothing"
      topo: 0-dev
      dashboard: general
      ingress-controller: Nginx
      storage: true
      api: true
      monitoring: true
      version: 1.23.6
      jaeger: false
- addNodes:
  - nodeType: nginx-dockerized
    nodeGroup: bl
    count: 1
    fixedCloudlets: 1
    flexibleCloudlets: 4
The issue I am having here is that the manifest cannot add the nodes, because of the following error:
user [xyz] doesn't have any access rights to app [dashboard]
What am I doing wrong? How can I make this manifest work? I tried to set user: root in the addNodes function but it doesn't help.
Of course, I am interested in suggestions involving a single install manifest. I know I could make it happen by first installing the Kubernetes manifest and then running an update manifest that adds my load-balancer nodes (see the sketch below), but I would like to package the whole thing in a single step, as described by my manifest above.
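For reference, the two-step workaround I mention would look roughly like this: after the marketplace install finishes, a separate add-on manifest of type update, applied to the same environment, adds the bl node group. This is only a sketch; the manifest name is illustrative and the addNodes block simply mirrors the one above.
type: update
name: Add LB Nodes
onInstall:
- addNodes:
  - nodeType: nginx-dockerized
    nodeGroup: bl
    count: 1
    fixedCloudlets: 1
    flexibleCloudlets: 4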

Related

Salt state to enable re-run systemd service

I am trying to craft a salt state file to simply ensure-enabled and re-run my one-shot service. I thought it would be nice to re-run if any of the dependent files changed, but honestly this is simple enough and the short-lived service is almost never going to be running when I want to update.
Current attempt:
myown-systemd-service-unit-file:
  ...
myown-systemd-service-executable-file:
  ...
myown-service:
  systemd.force_reload:
    - name: myown
    - enable: True
    - watch:
      - myown-systemd-service-unit-file
      - myown-systemd-service-executable-file
is failing with the error:
----------
ID: myown-service
Function: systemd.force_reload
Name: myown
Result: False
Comment: State 'systemd.force_reload' was not found in SLS 'something.myown'
Reason: 'systemd.force_reload' is not available.
Changes:
By enable, I mean to have the equivalent of this CLI call be applied:
sudo systemctl enable myown.service
Relevant docs: https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.systemd_service.html#module-salt.modules.systemd_service
The systemd_service module is an execution module, and the syntax for calling such modules is slightly different; the state declaration you are using is for state modules. Also, the documentation's examples use service.force_reload rather than systemd.force_reload:
salt '*' service.force_reload <service name>
Considering all this, the example below restarts and enables the myown service when the service unit file changes.
myown-service:
  module.run:
    - service.restart:
      - name: myown
    - onchanges:
      - file: myown-systemd-service-unit-file
    - service.enable:
      - name: myown
Note that I've used restart instead of force_reload to bounce the service. Also, I'm using onchanges against the file state since you haven't shown how you manage the two files; substitute the appropriate module and state IDs.
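If you also want the "ensure enabled" part expressed as a plain state rather than through the execution module, there is a dedicated service.enabled state function. A minimal sketch, reusing the same myown service name:
myown-service-enabled:
  service.enabled:
    - name: myown
This only manages enablement on boot and leaves starting or restarting the service to the module.run state above.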

Concourse unauthorized error pushing to Artifactory using docker-image-resource

I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage fails with the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straight-forward as an Artifactory permissions issue, except that I've tested locally with the docker cli and am able to push using the same user/pass as specified within destination_username and destination_password. I double checked the credentials to make sure I'm using the same ones and find that I am.
Question #1: is there any other known cause for getting this error? I've scoured the resource github page without finding anything. Any ideas why I may be getting the permissions error?
Without having an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so I use fly hijack to get a shell in the corresponding container. I notice that docker is installed in the container, so my next step would be to do a docker import of the tarball for the image I'm trying to push and then a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use docker commands from within the container? Perhaps this has something to do with the issue I'm seeing with pushing to repo when running the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought that the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))
jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. Found after much troubleshooting that the destination_password wasn't being picked up properly due to special characters and a lack of quotes. Fixed the issue by properly quoting the password within the YAML file passed with the --load-vars-from flag.
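Concretely, quoting the value in the vars file handed to fly set-pipeline was enough. A minimal sketch with made-up credentials and file name:
# vars.yml, loaded with: fly set-pipeline ... --load-vars-from vars.yml
destination_username: deploy-user
destination_password: "p@ss:word!%"   # quotes keep YAML from mis-parsing the special characters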

How to use extension modules in SaltStack from a Git repository?

I have an extension Python module named compute_pillar.py in a Git repository.
I want to use it as an external pillar; below are my extension_modules settings:
extension_modules: /var/cache/salt/master/gitfs
gitfs_ssl_verify: False
gitfs_provider: gitpython
gitfs_remotes:
  - git@git.corp.company.com:Saltstack/saltit-automation.git:
    - root: salt
    - base: master
  - file:///var/cache/salt/master/gitfs
Below is my pillar.conf:
ext_pillar:
  - cmd_json: 'echo {\"arg\":\"value\"}'
  - compute_pillar: True
Now when calling pillar.items, cmd_json runs since it is local, but compute_pillar never executes; below is the error message in the log:
[salt.utils.lazy ][DEBUG   ][24791] Could not LazyLoad compute_pillar.ext_pillar: 'compute_pillar.ext_pillar' is not available.
[salt.pillar     ][CRITICAL][24791] Specified ext_pillar interface compute_pillar is unavailable
What is the configuration setting to call the extension modules directly from git repository?
You do not need to point salt to /var/cache/salt/master/gitfs.
Assuming your gitfs backend is configured properly and working, create a directory called _modules under the salt directory (for example, /srv/salt/_modules for the roots backend), put your extension Python module there, push to Git, and then wait 60 seconds or run salt-run fileserver.update.
Now just sync your minion (salt minion_A saltutil.sync_all) and you should be able to use the module.

How to set a Floating IP for a VM using Apache Brooklyn? - "Floating IPs are required by options, but the extension is not available" error

I am trying to launch a very basic VM using Apache Brooklyn 0.8 on an OpenStack (Liberty) setup. I have set the option
auto-create-floating-ip true
in the YAML, but I see the following error:
java.lang.IllegalArgumentException: Floating IPs are required by options, but the extension is not available!
Blueprint used:
location:
  jclouds:openstack-nova:
    endpoint: https://myurl
    identity: tenant-name:username
    credential: "My-password"
    jclouds.openstack-nova.auto-create-floating-ips: true
name: VM
services:
- type: brooklyn.entity.basic.EmptySoftwareProcess
  name: Empty software process
  provisioning.properties:
    imageId: RegionOne/image-id
    keyPair: my-keypair-name
    securityGroups: my-security-group
    privateKeyFile: /path/to/my-key/in/brooklyn-machine
    loginUser: ubuntu
    templateOptions:
      availabilityZone: nova
Any help? Thanks in advance.
This error normally means one of two things: (1) the OpenStack endpoint you are targeting does not support the Nova floating IP extension; or (2) the namespace is different from a "normal" OpenStack setup, so jclouds fails to correctly retrieve the available extensions (e.g. this currently happens for OpenStack devtest).
Can you provision a VM using a floating IP manually? If not, it is likely (1) above - see the cloud provider's docs, or ask the administrator which extension should be used instead.
If yes, it is likely (2) - see the Jira issue JCLOUDS-1013. You can check this using the nova Python client by running the commands below:
nova list-extensions | grep FloatingIps
nova --debug list-extensions 2>&1 | grep namespace
If the namespace equals http://docs.openstack.org/compute/ext/fake_xml, then you'll need a special jclouds "provider" for openstack-devtest, to tell jclouds to expect this alternate namespace.
Work has been done by Andrea Turli at Cloudsoft for this. The code is at https://github.com/cloudsoft/jclouds-openstack-devtest, and there is a pre-built jar at https://drive.google.com/a/cloudsoftcorp.com/file/d/0Bxv4hWMwaFRKRWtsMFdhZlZnek0/view?usp=drive_web. This code may well move into the github jclouds org over time.
Note this code is written against jclouds 1.9.2. That means you'd have to upgrade to Brooklyn 0.9.0. Or if you really want to stick to Brooklyn 0.8.0, create a fork of jclouds-openstack-devtest so you can update the pom/code to be against jclouds 1.9.1.
To use the jclouds-openstack-devtest jar, put it into $BROOKLYN_HOME/lib/patch/, restart Brooklyn, and change your location definition to jclouds:openstack-devtest-compute (instead of jclouds:openstack-nova).
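Applied to the blueprint above, only the provider id in the location changes; a sketch (the endpoint and credentials stay whatever you already use):
location:
  jclouds:openstack-devtest-compute:   # was jclouds:openstack-nova
    endpoint: https://myurl
    identity: tenant-name:username
    credential: "My-password"
The services section of the blueprint is unchanged.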
Using the jclouds-openstack-devtest jar with Brooklyn 0.10 solved the above issue.

Deleting artifacts in artifactory

I want to delete artifacts in Artifactory. I googled and found this link:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API
Here, the Delete Build REST API is what we are going for at the moment. Can anyone give me a general idea of how the command should look using curl? Also, what do I need to specify in buildName?
For deleting a single artifact or folder you should use the Delete Item API, for example:
curl -uadmin:password -XDELETE http://localhost:8080/artifactory/libs-release-local/ch/qos/logback/logback-classic/0.9.9
Notice that you will need a user with delete permissions.
If all goes well you should expect a response with a 204 status and no content.
The Delete Builds API, on the other hand, is intended for deleting build information and is relevant if you are using the Artifactory build integration.
Nowadays there's a tool that can be used for it (note that I am a contributor to that tool):
https://github.com/devopshq/artifactory-cleanup
Assume I have 10 repositories and I want to keep only the last 20 artifacts in 5 of them and an unlimited number in the other 5.
The rule for the repositories that need cleanup would look like this:
# artifactory-cleanup.yaml
artifactory-cleanup:
  server: https://repo.example.com/artifactory
  # $VAR is auto populated from environment variables
  user: $ARTIFACTORY_USERNAME
  password: $ARTIFACTORY_PASSWORD

  policies:
    - name: reponame
      rules:
        - rule: Repo
          name: "reponame"
        - rule: KeepLatestNFiles
          count: 20
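Since policies is a list, the same pair of rules is simply repeated for each of the five repositories that need the limit (the repository names below are placeholders); the five "unlimited" repositories get no policy at all:
  policies:
    - name: repo-one
      rules:
        - rule: Repo
          name: "repo-one"
        - rule: KeepLatestNFiles
          count: 20
    - name: repo-two
      rules:
        - rule: Repo
          name: "repo-two"
        - rule: KeepLatestNFiles
          count: 20
    # ...and so on for the remaining repositories with a retention limit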
