We are using the s2i build command in our Azure DevOps pipeline with the following command task:
`./s2i build http://azuredevopsrepos:8080/tfs/IT/_git/shoppingcart --ref=S2i registry.access.redhat.com/ubi8/dotnet-31 --copy shopping-service`
The above command asks for a username and password when the task is executed.
How can we provide the username and password of the Git repository as part of the command we are trying to execute?
Git credential information can be put in a .gitconfig file in your home directory.*1
Looking at the documentation for the s2i CLI*2, I couldn't find any information about secured Git repositories.
I realized that an OpenShift BuildConfig uses the .gitconfig file while building a container image*3, so that could work. A sketch of the credential setup is shown after the references below.
*1: https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage
*2: https://github.com/openshift/source-to-image/blob/master/docs/cli.md#s2i-build
*3: https://docs.openshift.com/container-platform/4.11/cicd/builds/creating-build-inputs.html#builds-gitconfig-file-secured-git_creating-build-inputs
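For example, if the build agent has a writable home directory, one option is Git's credential store (the host is taken from the question; the username and PAT values are placeholders you would replace):
# tell Git to read credentials from ~/.git-credentials
git config --global credential.helper store
# store the Azure DevOps credentials for the repository host (placeholder values)
echo "https://<username>:<PAT>@azuredevopsrepos:8080" >> ~/.git-credentials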
I must admit I am unfamiliar with Azure DevOps pipelines; however, if this is running a build on OpenShift, you can create a secret with your credentials using oc:
oc create secret generic azure-git-credentials --from-literal=username=<your-username> --from-literal=password=<PAT> --type=kubernetes.io/basic-auth
Link the secret created above to the builder service account; this is the account OpenShift uses by default behind the scenes when running a new build:
oc secrets link builder azure-git-credentials
Lastly, you will want to link this source-secret to the build config.
oc set build-secret --source bc/<your-build-config> azure-git-credentials
Next time you run your build the credentials should be picked up from the source-secret in the build config.
You can also do this from the UI on OpenShift; the steps below mirror what was done above, so choose one approach but not both.
Create a secret from YAML, modify the below where indicated:
kind: Secret
apiVersion: v1
metadata:
  name: azure-git-credentials
  namespace: <your-namespace>
data:
  password: <base64-encoded-password-or-PAT>
  username: <base64-encoded-username>
type: kubernetes.io/basic-auth
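If needed, the base64-encoded values can be produced on the command line (the username and PAT below are placeholders):
echo -n '<your-username>' | base64
echo -n '<your-PAT>' | base64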
Then under the ServiceAccounts section on OpenShift, find and edit the 'builder' service account.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: builder
  namespace: xxxxxx
secrets:
  - name: azure-git-credentials ### only add this line, do not edit anything else.
And finally, edit the build config for the build, find the git entry, and add the sourceSecret entry:
source:
  git:
    uri: "https://github.com/user/app.git"
  ### Add the entries below ###
  sourceSecret:
    name: "azure-git-credentials"
I'm trying to run Dagster using celery-k8s, starting from examples/celery-k8s. Upon running the pipeline from Playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured the AWS credentials in environment variables as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can also see these values set in the environment variables of the pod, and after pip install awscli I can access the S3 location with aws s3 ls. The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers. Meanwhile the celery executor runs your pipeline code in separate kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
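As a rough sketch (not taken from the linked docs verbatim), assuming you first create a Kubernetes secret named aws-credentials holding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the run config could reference it like this:
execution:
  celery-k8s:
    config:
      env_secrets:
        - aws-credentials  # placeholder: name of a k8s Secret you create with the AWS keys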
I have a working ShinyProxy app with LDAP authentication. However, for retrieving data from the SQL database I currently use a (not recommended) hardcoded connection string in my R code, with the credentials included in it (I use a service user because my end users don't have permission to query the database):
con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = 'Driver={Driver};Server=Server;Database=dbb;UID=UID;PWD=PWD')
I tried to replace the connection string with an environmental variable, that I pass from my Linux host to the container. This works when running the container outside ShinyProxy, and thus by passing the environmental variables at runtime with the following Docker command:
docker run -it --env-file env.list app123
However, when using ShinyProxy, it is not clear to me how to configure this in the yaml config file. How do I pass the statement --env-file env.list at this level so that it is picked up in the linked containers?
Any help kindly appreciated!
From this closed issue: https://github.com/openanalytics/shinyproxy/issues/99
Your application.yaml could look something like this:
proxy:
  title: Open Analytics Shiny Proxy
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  authentication: simple
  admin-groups: admin
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admin
  # Docker configuration
  docker:
    internal-networking: true
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-env-file: /app/shinyproxy/test.env
      container-env:
        bar: baz
      access-groups: admin
      container-network: shinyproxy_reprex_default

logging:
  file:
    shinyproxy.log
Specifically, it seems you can set environment variables from a file using container-env-file.
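As a minimal sketch, the file referenced by container-env-file would hold plain KEY=VALUE lines, the same format as Docker's --env-file; the variable names here are placeholders that your R code would read with Sys.getenv():
# /app/shinyproxy/test.env  (placeholder variable names)
DB_UID=service_user
DB_PWD=secret_password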
I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation already creates a topology, so there is nowhere I can plug in the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to let me define what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is missing an example from which I could work out how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for that problem. I think my best option currently is to fork jelastic-jps/kubernetes and adapt the beforeinstall for my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it's created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer

actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all fields that can be used.
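For illustration only, the addNodes step inside the addBalancer action could mirror the topology snippet from the question; the exact set of supported fields is an assumption here, so verify it against the cloudscripting reference above:
onInstall:
  - addNodes:
      - nodeGroup: bl                # balancer node group from the question
        nodeType: nginx-dockerized   # same node type as the question's env snippet
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4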
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this Balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that public IP instead of via the Shared Balancers as before.
In a nutshell, the Jelastic Balancer instance currently doesn't provide Kubernetes Service LoadBalancer functionality, if that is what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" workers can be automatically used for LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
Please let us know if you have any further questions.
I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
  - name: tls-dhparam-vol
    secret:
      secretName: tls-dhparam
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found in sources like this one, but they don't seem to work (they always fail with invalid NGINX template errors). Example log:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods, then run kubectl exec [POD_NAME] -it -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl.
This will leave you with an nginx.tmpl file that you can edit and push back up as a configmap; a sketch of the full sequence is shown below. I would recommend keeping custom changes to the template to a minimum, though, as they can make it hard to update the controller in the future.
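Roughly, the sequence could look like this (pod name and namespace are placeholders; the template path is the one mentioned above):
# find the ingress controller pod
kubectl get pods -n <namespace>
# dump the template shipped inside the running pod to a local file
kubectl exec <ingress-controller-pod> -n <namespace> -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl
# after editing, publish it as the configmap the volume mount expects
kubectl create configmap nginx-template -n <namespace> --from-file=nginx.tmpl=nginx.tmpl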
Hope this helps!
I'm trying to do a deploy of my project with DeployBundle and made the following settings:
parameter.yml
jordi_llonch_deploy:
  config:
    project: delivve
    vcs: git
    servers_parameter_file: app/config/parameters_deployer_servers.yml
    local_repository_dir: /home/deploy/local_repository
    clean_max_deploys: 7
    ssh:
      proxy: cli
      user: user
      password: 'password'
      public_key_file: '/home/user/.ssh/id_rsa.pub'
      private_key_file: '/home/user/.ssh/id_rsa'
      private_key_file_pwd: 'password'
  zones:
    prod_myproj:
      deployer: delivve
      environment: prod
      checkout_url: 'https://user#bitbucket.org/user/project-webservice.git'
      checkout_branch: master
      repository_dir: /var/www/production/delivve/deploy
      production_dir: /var/www/production/delivve/code
parameters_deployer_servers.yml
prod_myproj:
  urls:
    - user#localhost:22
The service and its settings are also configured, and that part seems to be working.
My problem is that when I run the command:
sudo php app/console deployer:initialize --zones=prod_myproj
I get the following error:
[prod_myproj]
[2016-01-04 18:25:55] app.CRITICAL: Not implemented
ROLLBACK [prod_myproj]
[2016-01-04 18:25:55] app.CRITICAL: Not implemented
Does anyone know why this is happening and how I could solve it, or how to deploy with this bundle?
This looks like it is coming from the password authentication (https://github.com/jordillonch/DeployBundle/blob/3f8e679eb2ac87d0cef9ea9dd4765afd24c6a266/SSH/CLISshProxy.php#L60).
Try removing jordi_llonch_deploy.config.ssh.password from your config.yml (https://github.com/jordillonch/DeployBundle/blob/3f8e679eb2ac87d0cef9ea9dd4765afd24c6a266/SSH/SshClient.php#L76), for example as sketched below.
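For example, the ssh block from the question would then look roughly like this (the key paths are kept from the question; whether key-based authentication fits your servers is an assumption to verify):
jordi_llonch_deploy:
  config:
    # ...other config keys unchanged...
    ssh:
      proxy: cli
      user: user
      # password removed so the key-based code path is used
      public_key_file: '/home/user/.ssh/id_rsa.pub'
      private_key_file: '/home/user/.ssh/id_rsa'
      private_key_file_pwd: 'password'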