I am trying to use Artifactory as a front for our Helm Charts. I have the following set up:
helm-remote-stable : stable community Helm Charts
helm-local-stable : stable company Helm Charts
helm-stable: virtual repo with both of the above as upstreams
The helm-stable virtual repo is supposed to handle merging the two upstream index.yaml files.
However, I am getting the following exception in the logs:
2018-03-20 18:58:04,483 [art-exec-276943] [ERROR] (o.a.a.h.r.m.HelmVirtualMerger:194) - Couldn't read index file in remote repository helm-remote-stable : (was com.github.zafarkhaja.semver.UnexpectedCharacterException) (through reference chain: org.jfrog.repomd.helm.model.HelmIndexYamlMetadata["entries"]->java.util.LinkedHashMap["grafana"]->java.util.TreeSet[6])
It looks like Artifactory is trying to enforce semver through some library and it's not parsing the community index.yaml file. This breaks the entire feature of the product.
Here's what's breaking from the community index.yaml:
- created: 2018-01-28T21:04:13.090211594Z
  description: The leading tool for querying and visualizing time series and metrics.
  digest: 6c25c79e16df4c31637d3f8b1b379bb4c0a34157fa5b817f4c518ef50d43911b
  engine: gotpl
  home: https://grafana.net
  icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
  maintainers:
  - email: zanhsieh@gmail.com
    name: Ming Hsieh
  name: grafana
  sources:
  - https://github.com/grafana/grafana
  urls:
  - https://kubernetes-charts.storage.googleapis.com/grafana-0.6.tgz
  version: "0.6"
Please note the version: "0.6", which is what is borking the entire thing.
Any idea on how to get around this? I am using the Artifactory cloud offering.
This was fixed in Artifactory version 5.9.0.
You can find more details here: https://www.jfrog.com/jira/browse/RTFACT-15668
Have you tried changing the version of the grafana chart from 0.6 to 0.6.0 and pushing it to helm-local-stable?
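For what it's worth, a rough sketch of that workaround could look like the following (this assumes the community stable repo is configured locally as stable, and uses a placeholder Artifactory host and credentials):
helm fetch stable/grafana --version 0.6 --untar
# edit grafana/Chart.yaml and change version: "0.6" to version: 0.6.0
helm package grafana
# upload the repackaged chart to the company repo (placeholder host and credentials)
curl -u <user>:<password> -T grafana-0.6.0.tgz "https://<your-instance>.jfrog.io/artifactory/helm-local-stable/grafana-0.6.0.tgz"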
The Jelastic Marketplace is full of interesting software. However, sometimes it does not comply with my security needs. In those cases, I would like to write my own manifest that installs the manifest from the marketplace and adds the components that I need for my use case. Let's take an example: I would like to wrap the Kubernetes installation with the addition of a load balancer. I would like to do something like this:
type: install
name: My Example Manifest
onInstall:
  - install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/1.23.6/manifest.jps
      envName: env-${fn.random}
      settings:
        deploy: cmd
        cmd: echo "do nothing"
        topo: 0-dev
        dashboard: general
        ingress-controller: Nginx
        storage: true
        api: true
        monitoring: true
        version: 1.23.6
        jaeger: false
  - addNodes:
      - nodeType: nginx-dockerized
        nodeGroup: bl
        count: 1
        fixedCloudlets: 1
        flexibleCloudlets: 4
The issue I am having here is that the manifest cannot add the nodes, because of the following error:
user [xyz] doesn't have any access rights to app [dashboard]
What am I doing wrong? How can I make this manifest work? I tried to set user: root in the addNodes function but it doesn't help.
Of course, I am interested in suggestions involving a single install manifest. I know I could make it work by first installing the Kubernetes manifest and then running an update manifest that adds my load-balancer nodes. I would like, however, to package the whole thing into a single step, as described by my manifest above.
I have set up Kong in DB-less mode on RHEL by following the documentation below:
https://docs.konghq.com/gateway/latest/install-and-run/rhel/
Kong Gateway starts successfully. Below is the configuration I added to the kong.conf file, where the database is turned off and the path to the declarative kong.yml is specified:
declarative_config = /temp/kong/kong.yml
database = off
Also, below is the current .yml file, in which I created a service by following this guide:
https://docs.konghq.com/gateway/2.8.x/get-started/comprehensive/expose-services/
_format_version: "1.1"
services:
- host: mockbin.org
  name: example_service
  port: 80
  protocol: http
  routes:
  - name: mocking
    paths:
    - /mock
    strip_path: true
I have also installed decK to sync the declarative configuration.
However, when I use the deck sync command to add this service to Kong, I get the error below:
creating service example_service
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service example_service failed: HTTP status 405 (message: "cannot create or update 'services' entities when not using a database")
I'd kindly appreciate ideas on what could be wrong, as I believe we can create a service in DB-less mode, and I also think this is the declarative format that should work. Looking forward to hearing from you. Thanks.
You are correct that we can create a service in DB-less mode; however, the approach is different.
If you already have the new config file in YAML format, you can load it into Kong using the /config endpoint.
I also think that decK should be process-agnostic and usable with both DB and DB-less modes, but as it stands, loading the YAML config file through the /config endpoint looks like the best option.
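For example, with the declarative file from the question at /temp/kong/kong.yml and the Admin API on its default port 8001, something along these lines should load it into the DB-less node:
curl -X POST http://localhost:8001/config \
  -F config=@/temp/kong/kong.yml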
I was trying to automate the deployment of my Next.js application to App Engine using Cloud Build, but at the build phase it keeps failing with:
Error: > Build directory is not writeable. https://err.sh/vercel/next.js/build-dir-not-writeable
I can't seem to figure out how to fix this.
My current build file is below, and it keeps failing on step 2:
steps:
  # install dependencies
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # build the container image
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # deploy to app engine
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: "1600s"
app.yaml:
runtime: nodejs12
handlers:
  - url: /.*
    secure: always
    script: auto
env_variables:
  PORT: 8080
  NODE_ENV: 'production'
Any help would be appreciated.
I can reproduce the same behavior after upgrading to Next.js version 9.3.3.
Cause
The issue is related to the npm builder image, which is managed by Google: if you use gcr.io/cloud-builders/npm, it seems your build runs inside Google Cloud Build on an old Node version.
Here you can find the currently supported versions:
https://console.cloud.google.com/gcr/images/cloud-builders/GLOBAL/npm?gcrImageListsize=30
As you can see, Google's latest Node version there is 10.10, while the newest Next.js version requires at least Node 10.13.
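If you want to double-check which Node version that builder actually runs, you can add a quick diagnostic step that prints it during a build, something like:
  # print the Node version bundled with the gcr.io npm builder
  - name: 'gcr.io/cloud-builders/npm'
    entrypoint: 'node'
    args: ['--version']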
Solution
Change gcr.io/cloud-builders/npm to
- name: node
  entrypoint: npm
in order to use the official node Docker image, which runs Node 12.
After those changes, your build will be successful again.
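Applied to the build file from the question, the steps would then look roughly like this (only the first two steps change; you could also pin a specific tag such as node:12 if you want to fix the version):
steps:
  # install dependencies using the official node image, with npm as the entrypoint
  - name: 'node'
    entrypoint: 'npm'
    args: ['install']
  # build the Next.js app
  - name: 'node'
    entrypoint: 'npm'
    args: ['run', 'build']
  # deploy to app engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: '1600s'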
Sidenote
Switching to the official node image will increase the build duration (at least in my case). It takes around 2 minutes longer than the gcr.io npm builder.
I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the docker CLI and am able to push using the same user/pass as specified within destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones and found that I am.
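For reference, the local test looked roughly like this (registry and image path taken from the push output above; the image ID is a placeholder):
docker login artifactory.server.com:2077    # same user/pass as destination_username/destination_password
docker tag <local-image-id> artifactory.server.com:2077/base/golang/alpine:latest
docker push artifactory.server.com:2077/base/golang/alpine:latest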
Question #1: is there any other known cause for getting this error? I've scoured the resource github page without finding anything. Any ideas why I may be getting the permissions error?
Without having an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so I use fly hijack to get a shell in the corresponding container. I notice that docker is installed in the container, so the next step, I think, would be to do a docker import on the tarball for the image I'm trying to push and then perform a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo while running the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))

jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. I found after much troubleshooting that the destination_password wasn't being picked up properly due to special characters and a lack of quotes. I fixed the issue by properly quoting the password within the YAML vars file included with the --load-vars-from flag.
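Concretely, the fix amounted to quoting the password value in the vars file passed to fly set-pipeline, roughly like this (the file name and values here are made up):
# credentials.yml (illustrative values; note the quotes around the special-character password)
destination_username: deploy-user
destination_password: "s0me:p@ss!w0rd&with%special*chars"
repo: base/golang/alpine
and then loading it with:
fly -t <target> set-pipeline -p update-image -c pipeline.yml --load-vars-from credentials.yml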
I was developing a Custom plugin for Kong.
To start off, I followed the guidelines listed in this tutorial:
http://streamdata.io/blog/developing-an-helloworld-kong-plugin/
A few changes that I made along the way were changing the dependency in the rockspec file for "lrexlib-pcre" from version 2.8.0-1 to 2.7.2-1, due to compilation problems I faced with the 2.8.0-1 version.
Please note that I am working in the next branch. The master branch has version 2.7.2-1 listed.
The tutorial assumes Kong version 0.4.2-1 while I am working with Kong version 0.5.2-1.
I have listed my plugin in kong.yml; the helloworld plugin is listed last:
plugins_available:
- ssl
- jwt
- acl
- cors
- oauth2
- tcp-log
- udp-log
- file-log
- http-log
- key-auth
- hmac-auth
- basic-auth
- ip-restriction
- mashape-analytics
- request-transformer
- response-transformer
- request-size-limiting
- rate-limiting
- response-ratelimiting
- helloworld
I have listed the helloworld files at the end of the rockspec file:
["kong.plugins.helloworld.handler"] = "kong/plugins/helloworld/handler.lua",
["kong.plugins.helloworld.access"] = "kong/plugins/helloworld/access.lua",
["kong.plugins.helloworld.schema"] = "kong/plugins/helloworld/schema.lua"
Compilation succeeds, but Kong refuses to list the helloworld plugin as available on the node. All the other built-in plugins are shown as available on the server.
I tried enabling the plugin anyway with a mock API. It doesn't work as expected, and trying to restart Kong gives this error:
nginx: [error] [lua] init_by_lua:5: Startup error: /usr/local/share/lua/5.1/kong.lua:82: You are using a plugin that has not been enabled in the configuration: helloworld
[INFO] dnsmasq stopped
[ERR] Could not start Kong
I know there were some breaking changes introduced in Kong version 0.5. I followed the changelog, but I found nothing that would help.
Am I missing a setting or configuration somewhere?
Any help would be appreciated.
Try the following in your kong.yml:
custom_plugins:
- helloworld
I fixed this issue by adding entries to custom_plugins and lua_package_path.
Here are the steps to enable and use a custom plugin in a Kong environment.
1 - Add the custom plugin name to the configuration: custom_plugins = hello-world
2 - Install the hello-world plugin using the following steps:
If you have the source code of your plugin, move into it and execute the command "luarocks make";
it will install your plugin.
Next, execute the command "make install-dev"; make sure your plugin has a Makefile.
Once you execute "make install-dev", it will create the lua files at a location something like this:
/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
Just copy this path and add it to the Kong configuration file under lua_package_path,
something like this:
lua_package_path=/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
Now you are done.
Just start Kong: kong start --vv
You will see that the plugin is loaded into the Kong plugin environment.
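Putting steps 1 and 2 together, the relevant kong.conf lines end up looking roughly like this (the path and plugin name are placeholders; the trailing ;; keeps Lua's default search path):
# kong.conf (illustrative values)
custom_plugins = hello-world
lua_package_path = /your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua;;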
Enjoy!