Can we rename ports in a heat template? Do they appear in the VM in the same order as listed in the heat template? - openstack

I want to create a VM with 3 SR-IOV ports (heat template pasted below). I would like each port to appear with a specific name inside the VM; is that possible? And is there a guarantee that the ports will appear in the VM in the order specified in the heat template?
For example:
resources:
  vm1_server_0:
    type: OS::Nova::Server
    properties:
      name: {get_param: [vm1_names, 0]}
      image: {get_param: vm1_image_name}
      flavor: {get_param: vm1_flavor_name}
      availability_zone: {get_param: availability_zone_0}
      networks:
        - port: {get_resource: vm1_0_direct_port_0}
        - port: {get_resource: vm1_0_direct_port_1}
        - port: {get_resource: vm1_0_direct_port_2}
Can I rename vm1_0_direct_port_0 to "eth0", vm1_0_direct_port_1 to "10Geth0" and vm1_0_direct_port_2 to "10Geth1" in the heat template itself?
If the above is not possible, I need to be sure of the order in which they appear in lspci | grep "Virtual Function" (since these are SR-IOV ports), i.e. vm1_0_direct_port_0 appearing as 0000:04:01.00, the next one vm1_0_direct_port_1 as 0000:04:01.01 and vm1_0_direct_port_2 as 0000:04:01.02, so that I can rename them using udev rules in the VM.
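One possible approach (not from an answer, only a rough sketch under assumptions): Heat itself cannot set the guest-visible interface names, but OS::Nova::Server accepts user_data, so cloud-init's write_files module can drop the udev rules into the guest at boot. The PCI paths and file name below are illustrative assumptions and would need to be verified against the actual VF addresses:
  vm1_server_0:
    type: OS::Nova::Server
    properties:
      # ... name/image/flavor/networks as in the template above ...
      user_data_format: RAW
      user_data: |
        #cloud-config
        write_files:
          - path: /etc/udev/rules.d/70-sriov-names.rules
            content: |
              # PCI paths are assumptions; confirm them with lspci inside the guest
              SUBSYSTEM=="net", KERNELS=="0000:04:01.0", NAME="eth0"
              SUBSYSTEM=="net", KERNELS=="0000:04:01.1", NAME="10Geth0"
              SUBSYSTEM=="net", KERNELS=="0000:04:01.2", NAME="10Geth1"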

Related

Azure: Unable to use volumeMount with MariaDB container instance

I'm trying to store my MariaDB data in an Azure Storage Account.
In my YAML I've got this to define the MariaDB image:
- name: mariadb
  properties:
    image: mariadb:latest
    environmentVariables:
      - name: "MYSQL_INITDB_SKIP_TZINFO"
        value: "1"
      - name: "MYSQL_DATABASE"
        value: "metrics"
      - name: "MYSQL_USER"
        value: "user"
      - name: "MYSQL_PASSWORD"
        value: "password"
      - name: "MYSQL_ROOT_PASSWORD"
        value: "root_password"
    ports:
      - port: 3306
        protocol: TCP
    resources:
      requests:
        cpu: 1.0
        memoryInGB: 1.5
    volumeMounts:
      - mountPath: /var/lib/mysql
        name: filesharevolume
My volume definition looks like this:
volumes:
  - name: filesharevolume
    azureFile:
      sharename: <share-name>
      storageAccountName: <name>
      storageAccountKey: <key>
When this image starts, however, it gets terminated with an error explaining that the ibdata1 file size doesn't match what's in the config file.
If I remove the volumeMount, the database image works fine.
Is there something I'm missing?
The reason for this issue is explained in the Note:
Mounting an Azure Files share to a container instance is similar to a Docker bind mount. Be aware that if you mount a share into a container directory in which files or directories exist, these files or directories are obscured by the mount and are not accessible while the container runs.
The file share is mounted over the existing directory and hides its contents. MariaDB then rebuilds the ibdata1 file as required, but the share is empty, so the new file does not match the previous one.
When using an Azure File Share, I recommend mounting it only on a directory that does not already exist, in order to persist the data, or on a directory whose files do not affect the normal running of the application.
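A minimal sketch of that suggestion (the mount path is an assumption), keeping the share away from the pre-populated /var/lib/mysql directory:
    volumeMounts:
      - mountPath: /mnt/filesharevolume   # a path that does not already exist in the image
        name: filesharevolume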

Error when accessing Nextcloud in Kubernetes

My goals are:
- create a pod with Nextcloud
- create a service to access this pod
- from another machine with nginx, route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but I actually can't access it. I get the error ERR_SSL_PROTOCOL_ERROR.
I just followed a tutorial at the beginning, but I didn't want to use nginx as explained there because I already have it on another machine.
When I look at the pods (nextcloud + db) and the services, they look OK, but I get no response when I try to access nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
To create the service I used this command line:
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
And to access my nextcloud I tried the address #IP_MASTER:32500
My questions are:
How do I check whether a pod is working well, to know if the problem is coming from the service or from the pod?
What should I do to get access to my nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud helm chart.
2. This tutorial is a little outdated; it can also be found here.
As of the Kubernetes 1.16 release you should change the apiVersion in all your deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals.
In addition you would otherwise get the error ValidationError(Deployment.spec): missing required field "selector", so please add a selector in your deployment under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, "Create self-signed certificates": this repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information (server name, the path to your local hostPath, and names for your SSL certificates), the certificates are created automatically after one pod run under the specified hostPath:
volumes:
  - name: certs
    hostPath:
      path: "/home/<someFolderLocation>/certs-pv"
This information should be re-used in the Nginx reverse proxy section for nginx.conf.
4. In your nc-svc.yaml you can change the service type to type: NodePort.
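For illustration, a minimal sketch of such a service (the names and the nodePort value are assumptions; the kubectl expose command from the question produces an equivalent object):
apiVersion: v1
kind: Service
metadata:
  name: svc-nc
spec:
  type: NodePort
  selector:
    app: nc            # must match the pod labels from the deployment
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32500  # assumption; omit to let Kubernetes pick a port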
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu):
curl your_svc_name
You can verify whether service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
From outside the cluster, using the NodePort:
curl NODE_IP:NODE_PORT (if this does not respond, please verify your firewall rules)
Once you have provided a hostname for your nextcloud service, you should use:
curl -vH 'Host: specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition you can exec directly into your db pod with
kubectl exec -it db_pod -- /bin/bash and run:
mysqladmin status -uroot -p$MYSQL_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. What should I do to have access to my nextcloud? ("I didn't do the tutorial part 'Create self-signed certificates' because I don't know how to manage it.") As described under point 3.
7. This part is not clear to me: "from another machine with nginx route a CNAME to the service".
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
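For illustration, a minimal ExternalName sketch (the service name and the external DNS name are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: nc-external
spec:
  type: ExternalName
  externalName: nextcloud.example.com   # assumption: the external DNS name to map to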
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.

Can't access a VM in a different zone

I have 2 VM instances using the same network (default) and the same subnet (default), but in 2 different zones. I logged into one VM and then pinged the other VM, but the ping did not succeed. What do I have to do to make them communicate? Below is the information about the system:
Network:
- Name: default
Subnet:
- Name: default
- Network: default
- Ip range: 10.148.0.0/20
- Region: asia-southeast1
VM1:
- Subnet: default
- IP: 10.148.0.54
- Zone: asia-southeast1-c
VM2:
- Subnet: default
- IP: 10.148.0.56
- Zone: asia-southeast1-b
Please help me! thank you!
First, check whether ARP is resolved for the remote VM you want to ping.
Also check whether there is a firewall rule for the default network blocking the communication between the VMs.

How to get SFTP access to each PV in Kubernetes

I'm hosting multiple sites in a Kubernetes cluster, one for each client. The WP sites each have their own persistent disk, using an NFS server with ReadWriteMany mode. Each customer needs SFTP/FTP login details.
I managed to run SFTP in K8s using https://github.com/atmoz/sftp and get credentials. It works, but I'm not able to edit/delete files. Also, after creating this, WP now asks for FTP credentials for doing anything; it looks like it lost its permissions.
Here is what my spec looks like:
spec:
  #secrets and config
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: nfs
  containers:
    #the sftp server itself
    - name: sftp
      image: atmoz/sftp:latest
      imagePullPolicy: Always
      args: ["admin:admin:1010:1013"]
      ports:
        - containerPort: 22
      volumeMounts:
        - mountPath: /var/www/html
          name: nfs
      securityContext:
        capabilities:
          add: ["SYS_ADMIN"]
      resources: {}
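For reference, a hedged sketch of one common adjustment: atmoz/sftp takes the user as user:password:uid:gid, so mapping the SFTP user to the same UID/GID that owns the WordPress files on the NFS share (33/33 for www-data in the official Debian-based WP image) would let it edit and delete them. The IDs and mount path here are assumptions to verify against your volumes:
containers:
  - name: sftp
    image: atmoz/sftp:latest
    args: ["admin:admin:33:33"]        # uid/gid assumed to match the owner of the WP files
    ports:
      - containerPort: 22
    volumeMounts:
      - mountPath: /home/admin/html    # atmoz/sftp chroots each user under /home/<user>
        name: nfs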

Ping fails to the second IP of an openstack instance

I have an RDO openstack environment on a machine for testing. RDO was installed with the packstack --allinone command. Using HOT I have created two instances, one with a cirros image and another with Fedora. The Fedora instance has two interfaces that are connected to the same network, while cirros has only one interface, connected to that same network. The template looks like this:
heat_template_version: 2015-10-15

description: Simple template to deploy two compute instances

resources:
  local_net:
    type: OS::Neutron::Net

  local_signalling_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: local_net }
      cidr: "50.0.0.0/24"
      ip_version: 4

  fed:
    type: OS::Nova::Server
    properties:
      image: fedora
      flavor: m1.small
      key_name: heat_key
      networks:
        - network: local_net
      networks:
        - port: { get_resource: fed_port1 }
        - port: { get_resource: fed_port2 }

  fed_port1:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }

  fed_port2:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }

  cirr:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: heat_key
      networks:
        - network: local_net
      networks:
        - port: { get_resource: cirr_port }

  cirr_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
The Fedora instance got two IPs (50.0.0.3 and 50.0.0.4). Cirros got the IP 50.0.0.5. I can ping 50.0.0.3 from the cirros instance but not 50.0.0.4. Only if I manually bring down the interface with IP 50.0.0.3 in the Fedora instance can I ping 50.0.0.4 from the cirros instance. Is there a restriction in the configuration of neutron that prohibits pinging both IPs of the Fedora instance at the same time? Please help.
This happens because of the default firewalling done by OpenStack networking (neutron): it simply drops any packet received on a port if the source address of the packet does not match the IP address assigned to the port.
When the cirros instance sends a ping packet to 50.0.0.4, the fedora instance receives it on the interface with IP address 50.0.0.4. However, when it responds to cirros's IP address 50.0.0.5, the Linux networking stack on your fedora machine has two interfaces to choose from to send out the response (because both interfaces are connected to the same network). In your case, fedora chose to respond on 50.0.0.3. The source IP address in the packet is still 50.0.0.4, though, and thus the OpenStack networking layer simply drops it.
The general recommendation is to not have multiple interfaces on the same network. If you want multiple IP addresses from the same network for your VM, you can use the "fixed_ips" option in your heat template:
fed_port1:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: local_net }
    fixed_ips:
      - ip_address: "50.0.0.4"
      - ip_address: "50.0.0.3"
Since the DHCP server would offer only one IP address, fedora would be configured with only one IP. You can add the other IP to your interface using the "ip addr add" command (see http://www.unixwerk.eu/linux/redhat/ipalias.html):
ip addr add 50.0.0.3/24 brd + dev eth0 label eth0:0
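As an alternative not mentioned in the answer above, neutron's anti-spoofing check can also be relaxed per port with the allowed_address_pairs property of OS::Neutron::Port, so that a reply sourced from the other interface's IP is not dropped. A sketch, assuming the addresses from the question (you may need the mirror entry on the other port as well, since DHCP decides which port gets which address):
fed_port1:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: local_net }
    allowed_address_pairs:
      - ip_address: "50.0.0.4"   # allow packets sourced with the other port's IP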
