OpenStack cloud images:
There are multiple cloud images available at https://docs.openstack.org/image-guide/obtain-images.html. Once a VM is deployed, you normally log in either with an SSH key pair or with a password. Some images, however, have SSH key-pair login disabled and no built-in password by default, so how do you log in to such a VM when all you know is the user name?
There are options to set passwords for the built-in users using cloud-init:
Option-1: Using OpenStack horizon
If you are using Horizon to launch the instance, provide the following configuration as the post-creation customization script:
#cloud-config
chpasswd:
  list: |
    cloud-user:rhel
    root:rheladmin
  expire: False
Here passwords are set for the cloud-user and root users of the RHEL image. The same approach works for any user of any image; simply replace the user name.
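For example, on an Ubuntu cloud image, whose default user is ubuntu, the same block would look like this (the password here is just a placeholder):
#cloud-config
chpasswd:
  list: |
    ubuntu:mypassword
  expire: False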
Option-2: Using OpenStack heat template
Use an OpenStack Heat template and provide the user_data as below:
heat_template_version: 2015-04-30
description: Launch the RHEL VM with a new password for cloud-user and root user
resources:
  rhel_instance:
    type: OS::Nova::Server
    properties:
      name: 'demo_instance'
      image: '15548f32-fe27-449b-9c7d-9a113ad33778'
      flavor: 'm1.medium'
      availability_zone: zone1
      key_name: 'key1'
      networks:
        - network: '731ba722-68ba-4423-9e5a-a7677d5bdd2d'
      user_data_format: RAW
      user_data: |
        #cloud-config
        chpasswd:
          list: |
            cloud-user:rhel
            root:rheladmin
          expire: False
Here passwords are set for the cloud-user and root users of the RHEL image. The same approach works for any user of any image.
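If you prefer not to hard-code the passwords in the template, one possible variation (a sketch, not part of the original answer) is to take them as hidden parameters and substitute them into the user_data with Heat's str_replace intrinsic function:
heat_template_version: 2015-04-30
description: Launch the RHEL VM, taking the passwords as template parameters
parameters:
  cloud_user_password:
    type: string
    hidden: true
  root_password:
    type: string
    hidden: true
resources:
  rhel_instance:
    type: OS::Nova::Server
    properties:
      name: 'demo_instance'
      image: '15548f32-fe27-449b-9c7d-9a113ad33778'
      flavor: 'm1.medium'
      availability_zone: zone1
      key_name: 'key1'
      networks:
        - network: '731ba722-68ba-4423-9e5a-a7677d5bdd2d'
      user_data_format: RAW
      user_data:
        str_replace:
          # $CLOUD_USER_PW and $ROOT_PW are replaced by Heat before boot
          template: |
            #cloud-config
            chpasswd:
              list: |
                cloud-user:$CLOUD_USER_PW
                root:$ROOT_PW
              expire: False
          params:
            $CLOUD_USER_PW: { get_param: cloud_user_password }
            $ROOT_PW: { get_param: root_password }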
Related
I have a working ShinyProxy app with LDAP authentication. However, for retrieving data from the SQL database I currently use a (not recommended) hardcoded connection string in my R code, with the credentials included in it (I use a service user because my end users don't have permission to query the database):
con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = 'Driver={Driver};Server=Server;Database=dbb;UID=UID;PWD=PWD')
I tried to replace the connection string with an environment variable that I pass from my Linux host to the container. This works when running the container outside ShinyProxy, by passing the environment variables at runtime with the following Docker command:
docker run -it --env-file env.list app123
However, when using ShinyProxy, it is not clear to me how to configure this in the YAML config file. How do I pass the --env-file env.list option at this level so that it is picked up in the linked containers?
Any help kindly appreciated!
From this closed issue: https://github.com/openanalytics/shinyproxy/issues/99
Your application.yaml could look something like this:
proxy:
  title: Open Analytics Shiny Proxy
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  authentication: simple
  admin-groups: admin
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admin
  # Docker configuration
  docker:
    internal-networking: true
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-env-file: /app/shinyproxy/test.env
      container-env:
        bar: baz
      access-groups: admin
      container-network: shinyproxy_reprex_default
logging:
  file:
    shinyproxy.log
Specifically it seems you could set environment variables with a file using container-env-file.
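For illustration only (the file path and variable names below are made up), the relevant part of a proxy spec could either point at a KEY=VALUE file on the ShinyProxy host or set the variables inline, and the Shiny app can then read them with Sys.getenv():
specs:
  - id: app123
    container-image: app123
    # variables read from a KEY=VALUE file on the ShinyProxy host
    container-env-file: /opt/shinyproxy/db-credentials.env
    # or set individual variables inline
    container-env:
      DB_UID: service_user
      DB_PWD: change_me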
I've configured opendistro_security for OpenID. When I attempt to authenticate a user, it fails, presumably because that user has no permissions. How do I give permissions to an OpenID user? I can't seem to find an obvious way to do so with internal_users.yml.
I solved it. For posterity, here's what I needed to do in addition to the OpenID settings in the kibana.yml file.
1: In the config.yml file on each of my Elasticsearch nodes I needed to add the following:
authc:
  openid_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: email
        roles_key: roles
        openid_connect_url: https://accounts.google.com/.well-known/openid-configuration
    authentication_backend:
      type: noop
Since I'm using Google as my identity provider, I needed to make sure my subject_key was "email".
2: I needed to run the security config script on each node:
docker exec -it elasticsearch-node1 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl && docker exec -it elasticsearch-node2 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl
3: I needed to map the users that I want to have admin access to a role:
all_access:
  reserved: false
  backend_roles:
    - "admin"
  users:
    - "name@email.com"
  description: "Maps an openid user to all_access"
Now I can assign other users from Kibana
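For completeness, a rough sketch of the corresponding OpenID settings in kibana.yml (the client id, secret, and host are placeholders, and the exact keys can vary between Open Distro versions):
opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "https://accounts.google.com/.well-known/openid-configuration"
opendistro_security.openid.client_id: "<your-client-id>"
opendistro_security.openid.client_secret: "<your-client-secret>"
opendistro_security.openid.base_redirect_url: "https://<your-kibana-host>:5601"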
I'm trying to provision an encrypted disk for GKE dynamically.
But I really don't understand the part below.
Grant permission to use the key
You must assign the Compute Engine service account used by nodes in your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. This is required for GKE Persistent Disks to access and use your encryption key.
The Compute Engine service account's name has the following format:
service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com
Is it really necessary to grant "Cloud KMS CryptoKey Encrypter/Decrypter" to the Compute Engine service account? Can I create a new SA and grant this role to it instead? The description says it is the SA used by the nodes, so I'm wondering whether I can create a new SA, grant it the Cloud KMS role, and then use it to spin up the GKE cluster. Then I would expect it to be able to provision encrypted disks for GKE.
The official documentation is below:
dynamically_provision_an_encrypted
I tried to follow this documentation step by step:
create a GKE cluster (check the Kubernetes compatibility; I decided to stick with 1.14 this time), a key ring, and a key
deploy CSI driver to the cluster
2.1. download driver $git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver /PATH/gcp-compute-persistent-disk-csi-driver
2.2. configure variables for your project in /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh
2.3. create service account with /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh
2.4. configure variables for the driver deployment in /PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/deploy-driver.sh and /PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/install-kustomize.sh
2.5. deploy the CSI driver (I stuck with the stable version)
$./deploy-driver.sh
enable the Cloud KMS API
assign the Cloud KMS CryptoKey Encrypter/Decrypter role (roles/cloudkms.cryptoKeyEncrypterDecrypter) to the Compute Engine Service Agent (service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com)
create StorageClass
$cat storage.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY
$kubectl describe storageclass csi-gce-pd
Name: csi-gce-pd
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"name":"csi-gce-pd"},"parameters":{"disk-encryption-kms-key":"projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY","type":"pd-standard"},"provisioner":"pd.csi.storage.gke.io"}
Provisioner: pd.csi.storage.gke.io
Parameters: disk-encryption-kms-key=projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY,type=pd-standard
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
create the persistent volume (by applying a PVC)
$kubectl apply -f pvc.yaml
persistentvolumeclaim/podpvc created
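The pvc.yaml itself is not shown in the original; a claim consistent with the describe output below would look roughly like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 6Gi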
$kubectl describe pvc podpvc
Name: podpvc
Namespace: default
StorageClass: csi-gce-pd
Status: Bound
Volume: pvc-b383584a-32c5-11ea-ad6e-42010a9c007d
Labels:
Annotations:
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 6Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc"
Normal ExternalProvisioning 31m (x2 over 31m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d Successfully provisioned volume pvc-b383584a-32c5-11ea-ad6e-42010a9c007d
And it's successfully provisioned.
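(For completeness, a pod that actually mounts the claim, which the describe output above does not yet show, could look like the following sketch; the pod and image names are arbitrary.)
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: encrypted-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: encrypted-volume
      persistentVolumeClaim:
        claimName: podpvc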
Then I removed the Cloud KMS CryptoKey Encrypter/Decrypter role from the Compute Engine Service Agent, deleted the persistent volume created at step 6, and tried again:
$kubectl apply -f pvc.yaml
persistentvolumeclaim/podpvc created
$kubectl describe pvc podpvc
Name: podpvc
Namespace: default
StorageClass: csi-gce-pd
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes...
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 2m15s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc"
Warning ProvisioningFailed 2m11s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d failed to provision volume with StorageClass "csi-gce-pd": rpc error: code = Internal desc = CreateVolume failed to create single zonal disk "pvc-b1a238b5-35fa-11ea-bec8-42010a9c01e6": failed to insert zonal disk: unkown Insert disk error: googleapi: Error 400: Cloud KMS error when using key projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY: Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied on resource 'projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY' (or it may not exist)., kmsPermissionDenied
Normal ExternalProvisioning 78s (x43 over 11m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
and the persistent volume claim stayed in Pending status.
And, as you can see, the documentation says this is necessary:
Grant permission to use the key
You must assign the Compute Engine service account used by nodes in your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. This is required for GKE Persistent Disks to access and use your encryption key.
and it's not enough to create a service account with /PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh as provided by the CSI driver.
EDIT: Please notice that:
For CMEK-protected node boot disks, this Compute Engine service account is the account which requires permissions to do encryption using your Cloud KMS key. This is true even if you are using a custom service account on your nodes.
So there's no way in this case to use only a custom service account without the Compute Engine service account, because CMEK-protected persistent volumes are managed by GCE, not by GKE. Meanwhile, you can grant only the necessary permissions to your custom service account to improve the security of your project.
I installed cloud-init in an OpenStack image (CentOS 7), so how can I create a user with a random password after the instance is launched (a public key will also be injected for this user)?
I would prefer not to copy a script into the instance launch panel. Thank you all...
You can reuse the cloud-config from Option 1 or Option 2 above, replacing rhel and rheladmin with your desired passwords.
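Since the question asks for a new user with a random password and an injected public key, a cloud-config along these lines should cover it; the user name and key below are placeholders, and cloud-init generates the password itself when the value is RANDOM (it is written to the cloud-init output/console log):
#cloud-config
users:
  - name: myuser
    groups: wheel
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAAB3... replace-with-your-public-key
ssh_pwauth: true
chpasswd:
  list: |
    myuser:RANDOM
  expire: False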
I am using Cloudify 3.3 and OpenStack Kilo.
After I have successfully installed a blueprint, I tried to scale out the host VM (associated with a floating IP W.X.Y.Z) using the default scale workflow. My expected result is that a new VM will be created with a new floating IP, say A.B.C.D, associated to it.
However, after the scale workflow has been completed, I found that the floating IP W.X.Y.Z has been disassociated from the original host VM while this floating IP has been associated to the newly created VM.
My testing "blueprint.yaml":
tosca_definitions_version: cloudify_dsl_1_2
imports:
  - http://www.getcloudify.org/spec/cloudify/3.3/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.3/plugin.yaml
inputs:
  image:
    description: Openstack image ID
  flavor:
    description: Openstack flavor ID
  agent_user:
    description: agent username for connecting to the OS
    default: centos
node_templates:
  web_server_floating_ip:
    type: cloudify.openstack.nodes.FloatingIP
  web_server_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          port: 8080
  web_server:
    type: cloudify.openstack.nodes.Server
    properties:
      cloudify_agent:
        user: { get_input: agent_user }
      image: { get_input: image }
      flavor: { get_input: flavor }
    relationships:
      - type: cloudify.openstack.server_connected_to_floating_ip
        target: web_server_floating_ip
      - type: cloudify.openstack.server_connected_to_security_group
        target: web_server_security_group
I have tried creating a node_template of type cloudify.nodes.Tier and putting everything inside this container. However, the scale workflow could not be executed normally in that case.
I wonder what I should do so that the newly created VM gets associated with a new floating IP?
Thanks, Sam
What you are describing is a "one to one" relationship between the node and the resources related to it.
Currently Cloudify does not support this kind of relationship and your blueprint is working just as it should.
This feature will be available as of Cloudify 3.4, which will be released in a few months.
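For reference, Cloudify 3.4 (DSL 1.3) is expected to express this kind of one-to-one scaling with scaling groups, roughly along these lines (a sketch, not an official example):
groups:
  web_server_group:
    members: [web_server, web_server_floating_ip]
policies:
  web_server_scaling_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [web_server_group]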