Grant permission to use the key in GKE - encryption

I'm trying to dynamically provision an encrypted disk for GKE, but I really don't understand the part below.
Grant permission to use the key
You must assign the Compute Engine service account used by nodes in your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. This is required for GKE Persistent Disks to access and use your encryption key.
The Compute Engine service account's name has the following format:
service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com
Is it really necessary to grant "Cloud KMS CryptoKey Encrypter/Decrypter" to the Compute Engine service account? Can I create a new SA and grant this role to it instead? The description says the SA used by nodes, so I'm wondering if I can create a new SA, grant it the Cloud KMS role, and then use that SA to spin up the GKE cluster. Then, I think, it should be possible to provision encrypted disks for GKE.
The official document is below:
dynamically_provision_an_encrypted

I tried to follow this documentation step by step:
create the GKE cluster (checking Kubernetes compatibility, I decided to stick with 1.14 this time), the key ring and the key
deploy CSI driver to the cluster
2.1. download driver $git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver /PATH/gcp-compute-persistent-disk-csi-driver
2.2. configure variables for your project in /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh
2.3. create service account with /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh
2.4. configure variables for the driver deployment in /PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/deploy-driver.sh and /PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/install-kustomize.sh
2.5. deploy the CSI driver (I stuck with the stable version)
$./deploy-driver.sh
enable the Cloud KMS API
assign the Cloud KMS CryptoKey Encrypter/Decrypter role (roles/cloudkms.cryptoKeyEncrypterDecrypter) to the Compute Engine Service Agent (service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com)
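For illustration (this command is not in the original steps), the grant could be done with gcloud, reusing the example project and key names from the StorageClass below; the project number placeholder is yours to fill in:
$gcloud kms keys add-iam-policy-binding TEST-KEY \
    --keyring TEST-KEY-RING \
    --location europe-west3 \
    --project test-prj \
    --member "serviceAccount:service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter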
create StorageClass
$cat storage.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY
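The class then has to be applied; this step is implied by the describe output that follows:
$kubectl apply -f storage.yaml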
$kubectl describe storageclass csi-gce-pd
Name: csi-gce-pd
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"name":"csi-gce-pd"},"parameters":{"disk-encryption-kms-key":"projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY","type":"pd-standard"},"provisioner":"pd.csi.storage.gke.io"}
Provisioner: pd.csi.storage.gke.io
Parameters: disk-encryption-kms-key=projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY,type=pd-standard
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
create the persistent volume (via a PersistentVolumeClaim)
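pvc.yaml itself is not shown in the excerpt above; a minimal claim consistent with the describe output below (name podpvc, 6Gi, ReadWriteOnce, StorageClass csi-gce-pd) would look roughly like this:
$cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 6Gi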
$kubectl apply -f pvc.yaml
persistentvolumeclaim/podpvc created
$kubectl describe pvc podpvc
Name: podpvc
Namespace: default
StorageClass: csi-gce-pd
Status: Bound
Volume: pvc-b383584a-32c5-11ea-ad6e-42010a9c007d
Labels:
Annotations:
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 6Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc"
Normal ExternalProvisioning 31m (x2 over 31m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d Successfully provisioned volume pvc-b383584a-32c5-11ea-ad6e-42010a9c007d
And it was successfully provisioned.
Then I removed the Cloud KMS CryptoKey Encrypter/Decrypter role from the Compute Engine Service Agent, deleted the persistent volume created at step 6, and tried again:
$kubectl apply -f pvc.yaml
persistentvolumeclaim/podpvc created
$kubectl describe pvc podpvc
Name: podpvc
Namespace: default
StorageClass: csi-gce-pd
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes...
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 2m15s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc"
Warning ProvisioningFailed 2m11s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d failed to provision volume with StorageClass "csi-gce-pd": rpc error: code = Internal desc = CreateVolume failed to create single zonal disk "pvc-b1a238b5-35fa-11ea-bec8-42010a9c01e6": failed to insert zonal disk: unkown Insert disk error: googleapi: Error 400: Cloud KMS error when using key projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY: Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied on resource 'projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY' (or it may not exist)., kmsPermissionDenied
Normal ExternalProvisioning 78s (x43 over 11m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
and the persistent volume claim stayed in Pending status.
So, as you can see, what the documentation requires is indeed necessary:
Grant permission to use the key
You must assign the Compute Engine service account used by nodes in
your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. This is
required for GKE Persistent Disks to access and use your encryption
key.
and it is not enough to create a service account with /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh provided by the CSI driver.
EDIT Please notice that:
For CMEK-protected node boot disks, this Compute Engine service
account is the account which requires permissions to do encryption
using your Cloud KMS key. This is true even if you are using a custom
service account on your nodes.
So there is no way to use only a custom service account in this case, without the Compute Engine service account, because CMEK-protected persistent volumes are managed by GCE, not by GKE. Meanwhile, you can grant only the necessary permissions to your custom service account to improve the security of your project.
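As a quick sanity check (a sketch only, reusing the example key names from above), you can confirm whether the Compute Engine Service Agent still holds the role on the key:
$gcloud kms keys get-iam-policy TEST-KEY \
    --keyring TEST-KEY-RING \
    --location europe-west3 \
    --project test-prj \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/cloudkms.cryptoKeyEncrypterDecrypter"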

Related

S2i build command pass user name password in Azure Devops pipeline

We are using the S2i build command in our Azure DevOps pipeline with the command task below.
`./s2i build http://azuredevopsrepos:8080/tfs/IT/_git/shoppingcart --ref=S2i registry.access.redhat.com/ubi8/dotnet-31 --copy shopping-service`
The above command asks for a user name and password when the task is executed.
How can we provide the username and password of the git repository to the command we are trying to execute?
Git credential information can be put in a .gitconfig file in your home directory (*1).
Looking at the documentation for the s2i CLI (*2), I couldn't find any information about secured Git.
I realized that OpenShift BuildConfig uses the .gitconfig file while building a container image (*3), so it could work.
*1: https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage
*2: https://github.com/openshift/source-to-image/blob/master/docs/cli.md#s2i-build
*3: https://docs.openshift.com/container-platform/4.11/cicd/builds/creating-build-inputs.html#builds-gitconfig-file-secured-git_creating-build-inputs
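A minimal sketch of that .gitconfig approach, assuming the "store" credential helper and a plain-text credentials file (the host and token values are placeholders):
# ~/.gitconfig
[credential]
    helper = store
# ~/.git-credentials (read by the store helper)
http://<username>:<PAT>@azuredevopsrepos:8080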
I must admit I am unfamiliar with Azure DevOps pipelines; however, if this is running a build on OpenShift, you can create a secret with your credentials using oc.
oc create secret generic azure-git-credentials --from-literal=username=<your-username> --from-literal=password=<PAT> --type=kubernetes.io/basic-auth
Link the secret created above to the builder service account; this is the account OpenShift uses by default behind the scenes when running a new build.
oc secrets link builder azure-git-credentials
Lastly, you will want to link this source-secret to the build config.
oc set build-secret --source bc/<your-build-config> azure-git-credentials
Next time you run your build, the credentials should be picked up from the source secret in the build config.
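To trigger a build and check that the credentials are picked up, for example (the build config name is yours):
oc start-build <your-build-config> --follow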
You can also do this from the UI on OpenShift. The steps below are a copy of what is done above; choose one approach, not both.
Create a secret from YAML, modifying it where indicated:
kind: Secret
apiVersion: v1
metadata:
  name: azure-git-credentials
  namespace: <your-namespace>
data:
  password: <base64-encoded-password-or-PAT>
  username: <base64-encoded-username>
type: kubernetes.io/basic-auth
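The data values must be base64 encoded, for example:
echo -n '<your-username>' | base64
echo -n '<PAT>' | base64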
Then under the ServiceAccounts section on OpenShift, find and edit the 'builder' service account.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: builder
  namespace: xxxxxx
secrets:
  - name: azure-git-credentials   ### only add this line, do not edit anything else.
And finally, edit the build config for your build, find the git entry, and add the sourceSecret entry:
source:
  git:
    uri: "https://github.com/user/app.git"
  ### Add the entries below ###
  sourceSecret:
    name: "azure-git-credentials"

Tables not created after running worker and dashboard in WSO2 API Manager 3.2.0?

The following tables are not created after running the worker and dashboard with the WSO2 API Manager 3.2.0 Oracle config:
a. WSO2_DASHBOARD_DB
b. BUSINESS_RULES_DB
c. WSO2_PERMISSIONS_DB
d. WSO2_METRICS_DB
What is the problem?
name: WSO2_PERMISSIONS_DB
description: The datasource used for permission feature
jndiConfig:
  name: jdbc/PERMISSION_DB
  useJndiReference: true
definition:
  type: RDBMS
  configuration:
    jdbcUrl: 'jdbc:oracle:thin:@apigwdb-scan.shoperation.net:1521/APIGWDB'
    username: 'WSO2_PERMISSIONS_DB'
    password: 'apigw14'
    driverClassName: oracle.jdbc.driver.OracleDriver
    maxPoolSize: 10
    idleTimeout: 60000
    connectionTestQuery: SELECT 1 FROM DUAL
    validationTimeout: 30000
    isAutoCommit: false
    connectionInitSql: alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
By default, all these DBs have H2 configs in the deployment.yaml. You have to create the relevant DBs in the Oracle server and change each config for the tables to be created in those DBs.
Also, please check whether the user you have used has sufficient permissions.
For more information, please check https://apim.docs.wso2.com/en/3.2.0/learn/analytics/configuring-apim-analytics/#step-42-configure-the-analytics-dashboard
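As a rough illustration of the "sufficient permission" point (a sketch only; the schema name, password and tablespace are placeholders, and your DBA policies may differ), creating one Oracle user per datasource could look like:
CREATE USER WSO2_DASHBOARD_DB IDENTIFIED BY "ChangeMe123";
GRANT CONNECT, RESOURCE TO WSO2_DASHBOARD_DB;
ALTER USER WSO2_DASHBOARD_DB QUOTA UNLIMITED ON USERS;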

How do I add an nginx load balancer to a kubernetes cluster on Jelastic?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above Kubernetes jps installation creates its own topology, so there is no way I can call the above env section. How can I add a new node to the topology created by the Jelastic Kubernetes jps? I found addNodes, but it does not seem to allow defining what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is missing an example from which I could guess how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for this problem. I think my best option at the moment is to fork jelastic-jps/kubernetes and adapt the beforeinstall section to my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it is created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer
actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all fields that can be used.
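Putting the two together, a hedged sketch of the addBalancer action, reusing the bl node definition from the question (field names such as count and cloudlets are taken verbatim from the question's topology block, not from an addNodes example, so check them against the cloudscripting reference above):
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                - nodeGroup: bl
                  nodeType: nginx-dockerized
                  tag: 1.16.1
                  displayName: Node balancing
                  count: 1
                  fixedCloudlets: 1
                  cloudlets: 4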
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality via that public IP instead of the Shared Balancers used before.
In a nutshell, the Jelastic balancer instance currently doesn't provide the Kubernetes LoadBalancer service functionality, if that is exactly what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" worker can be automatically used for LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
Please let us know if you have any further questions.

How to create a user with a random password?

I installed cloud-init in an OpenStack image (CentOS 7). How can I create a user with a random password after the instance is launched (a public key will also be injected for this user)?
I would prefer not to copy a script into the instance launch panel. Thank you all.
There are options to generate the password for the in-built users using cloud-init:
Option-1: Using OpenStack horizon
If you are using Horizon to launch the instance, provide the post-configuration config as:
#cloud-config
chpasswd:
  list: |
    cloud-user:rhel
    root:rheladmin
  expire: False
Here the passwords are set for the cloud-user and root users of the RHEL image. The same works for any user of any image; simply replace the user.
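Since the question asks for a random password: cloud-init's chpasswd module also accepts the special value RANDOM (or R) in place of a literal password, in which case a random password is generated and written to the instance console log. A minimal sketch:
#cloud-config
chpasswd:
  list: |
    cloud-user:RANDOM
  expire: False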
Option-2: Using OpenStack heat template
Use an OpenStack Heat template, providing the user_data as below:
heat_template_version: 2015-04-30
description: Launch the RHEL VM with a new password for cloud-user and root user
resources:
  rhel_instance:
    type: OS::Nova::Server
    properties:
      name: 'demo_instance'
      image: '15548f32-fe27-449b-9c7d-9a113ad33778'
      flavor: 'm1.medium'
      availability_zone: zone1
      key_name: 'key1'
      networks:
        - network: '731ba722-68ba-4423-9e5a-a7677d5bdd2d'
      user_data_format: RAW
      user_data: |
        #cloud-config
        chpasswd:
          list: |
            cloud-user:rhel
            root:rheladmin
          expire: False
Here the passwords are set for the cloud-user and root users of the RHEL image. The same works for any user of any image.
You can replace rhel and rheladmin with your desired passwords.
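If you prefer the CLI over Horizon or Heat, the same cloud-config can be passed from a file (a sketch; the image, network and file names are placeholders):
openstack server create \
  --image <image-id-or-name> \
  --flavor m1.medium \
  --key-name key1 \
  --network <network-id-or-name> \
  --user-data user_data.yaml \
  demo_instance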

Login password for ubuntu, RHEL, any cloud image

OpenStack cloud Images:
There are multiple cloud images which are available at https://docs.openstack.org/image-guide/obtain-images.html. In order to login to the VMs once those are deployed is either by using ssh key pair or password. But there are images where the sshkeypairlogin is disabled and there is no in-built password by default, then how to login to these VMs where the user have only information on the user-name
The cloud-init chpasswd approach described in the previous answer applies here as well: set the password for the image's built-in user via #cloud-config, either through the Horizon post-creation config or through a Heat template's user_data, replacing the example passwords with your own.
