Conditions for Stack Update Restrictions in OpenStack

I am trying to update a stack using OpenStack Heat templates to bring up and configure new Nova servers as part of an existing server group. When I add configuration information (for the new Nova servers) to the templates that are shared by existing and new VMs, the existing Nova servers are also affected and re-spawned.
Is there any way to restrict OpenStack from re-spawning or manipulating the existing Nova servers in the stack?

Adding the following property to the OS::Nova::Server resource prevented the re-spawning of the existing instances:
user_data_update_policy: IGNORE
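In context, the property sits under the server resource's properties. A minimal Heat template sketch (resource name, image, flavor, and script file are placeholder assumptions, not from the question):

```yaml
heat_template_version: 2015-04-30

resources:
  app_server:                      # hypothetical resource name
    type: OS::Nova::Server
    properties:
      image: ubuntu-20.04          # placeholder
      flavor: m1.small             # placeholder
      user_data: { get_file: configure.sh }
      # Do not rebuild/replace existing servers when user_data changes:
      user_data_update_policy: IGNORE
```

With IGNORE, stack updates that change the shared user_data leave already-running servers untouched; only newly created servers pick up the new configuration.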


What is the URL for wp core install --url in Kubernetes?

In Kubernetes, I have a WordPress container and a wp-cli Job.
To create the WordPress tables in the database using the URL, title, and default admin user details, I run this command in the wp-cli Job:
wp core install --url=http://localhost:8087 --title=title --admin_user=user --admin_password=pass --admin_email=someone@email.com --skip-email
The --url parameter prevents Minikube from serving the WordPress site.
You should put the IP address of your service there in place of "localhost".
By "service" I mean the Service that exposes your deployment/pods (it's another Kubernetes object you have to create).
You can pass the IP address through an environment variable. Once a Service exists, pods started afterwards inherit extra environment variables that Kubernetes injects, through which you can access the Service's IP address, port, and so on; check the documentation.
The second option is to use the name of your Service (still the Kubernetes object you created to expose your deployment). It is eventually resolved to the IP address by the cluster's DNS (CoreDNS, which today is started along with Minikube).
Both options are covered in the documentation in the same section, called "discovering services".
It took me a while to understand that names like service-name.name-space.svc.cluster.local are URLs like any other (like subsubdomain.subdomain.stackoverflow.com), just resolved within the cluster.
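A sketch of what such a Service might look like; the names "wordpress" and the label selector are assumptions, not taken from the question:

```yaml
# Hypothetical Service exposing the WordPress pods.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: default
spec:
  selector:
    app: wordpress      # must match the labels on the WordPress pods
  ports:
    - port: 80
      targetPort: 80
```

With this in place, the wp-cli Job could use --url=http://wordpress.default.svc.cluster.local (or just http://wordpress from the same namespace) instead of localhost.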

Kubernetes: using OpenStack Cinder from one cloud provider while nodes are on another

Maybe my question does not make sense, but this is what I'm trying to do:
I have a Kubernetes cluster running on CoreOS on bare metal.
I am trying to mount block storage from an OpenStack cloud provider with Cinder.
From my readings, to be able to connect to the block storage provider, I need kubelet to be configured with cloud-provider=openstack, and use a cloud.conf file for the configuration of credentials.
I did that, and the auth part seems to work fine (i.e. I successfully connect to the cloud provider); however, kubelet then complains that it cannot find my node on the OpenStack provider.
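For reference, the setup described above looks roughly like this (a sketch; the Keystone endpoint, credentials, and file path are placeholders):

```ini
# /etc/kubernetes/cloud.conf (placeholder values)
[Global]
auth-url = https://keystone.example.com:5000/v3
username = kube
password = secret
tenant-name = kubernetes
region = RegionOne
```

with kubelet started with --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf.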
I get:
Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
This is similar to this question:
Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
However, I know kubelet will not find my node at the OpenStack provider since it is not hosted there! The error makes sense, but how do I avoid it?
In short, how do I tell kubelet not to look for my node there, as I only need it to look up the storage block to mount it?
Is it even possible to mount block storage this way? Am I misunderstanding how this works?
There seem to be newer ways to attach Cinder storage to bare metal, but they are apparently just a PoC:
http://blog.e0ne.info/post/Attach-Cinder-Volume-to-the-Ironic-Instance-without-Nova.aspx
Unfortunately, I don't think you can decouple the cloud provider for the node from the one for the volume, at least not in vanilla Kubernetes.

Sharing resources between two independent OpenStack cloud setups

Is there any way to share resources from one OpenStack cloud with another, similar one that has different resource pools? Thanks in advance.
You can try CloudFerry (GitHub link):
CloudFerry is a tool for migrating resources and workloads between two OpenStack clouds.
Another tool is stack2stack (GitHub link):
stack2stack is a simple Python script to aid data migration from one OpenStack cloud to another through use of the APIs. It aims to migrate the data cleanly, keeping as much in sync as possible, up to the limitations of the OpenStack APIs themselves. Currently the script migrates:
Keystone Users
Keystone Tenants
Keystone roles
Keystone Tenant Memberships
Glance Images
Networks from nova networking to neutron
Security groups from nova networking to neutron
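For a one-off case, images can also be moved by hand with the OpenStack CLI. A sketch, assuming "source" and "dest" are entries you have defined in clouds.yaml and IMAGE_ID is the image to copy:

```shell
# Download the image from the source cloud to a local file...
openstack --os-cloud source image save --file image.qcow2 IMAGE_ID

# ...then upload it to the destination cloud under a new name.
openstack --os-cloud dest image create \
  --disk-format qcow2 --container-format bare \
  --file image.qcow2 migrated-image
```

The dedicated tools above are preferable when you need to keep many resources (users, tenants, networks) in sync rather than copy a single image.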

Configure OpenStack nova with remote Bind Server

How can we configure OpenStack to use and dynamically update a remote BIND DNS server?
This is not currently supported. There is a DNS driver layer, but the only driver at the moment is for LDAP-backed PowerDNS. I have code for dynamic DNS updates (https://review.openstack.org/#/c/25194/), but have had trouble getting it landed because we need to fix eventlet monkey patching first.
So it's in progress, but you probably won't see it until Havana is released.
OpenStack relies on dnsmasq internally.
I am not aware of any way to integrate an external BIND server, of plans to do that, or even of a reason to do that.
Check out Designate (https://docs.openstack.org/developer/designate/).
This could be what you are looking for:
Designate provides DNSaaS services for OpenStack:
- REST API for domain & record management
- Multi-tenant support
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box
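With Designate deployed in front of a BIND9 backend, zones and records are managed through the OpenStack CLI. A sketch; the zone name, email, and record values are placeholders:

```shell
# Create a zone (the trailing dot is required in zone names).
openstack zone create --email admin@example.com example.com.

# Add an A record to it.
openstack recordset create example.com. www --type A --record 203.0.113.10
```

Designate then pushes these changes out to the configured Bind9 backend, which is the "dynamically updated remote BIND" behaviour the question asks about.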

How to set up a WordPress site on multiple Amazon EC2 micro instances

I'm new to AWS and cloud computing in general. For a personal project I've created a micro instance on Amazon EC2 and installed and configured a WordPress multisite. For the database, I use an RDS instance.
My question is, how can I create a second micro instance that serves the same content and use a load balancer to spread the traffic across these two instances? I want to do this so that if the first EC2 instance crashes, the site is served from the second instance and doesn't go down.
Thanks for your help, and sorry for any English-related errors.
As far as the WordPress installation is concerned, there are 2 main components:
WordPress database
WordPress files (application files, including plugins, themes etc.)
The Database
To enable auto scaling and ensure consistency, you will need to keep the database outside the auto-scaled EC2 instances. As mentioned in your question, yours is already in RDS, so this won't be a problem.
The second EC2 instance
Step 1: First create an AMI of your Wordpress Instance from the existing one.
Step 2: Launch a new EC2 instance from this AMI which you created from the first one. This will result in 2 EC2 instances. Instance 1 (the original one with Database) and Instance 2 (The copy of Instance 1)
However, any changes that you make on Instance 1 won't be reflected on Instance 2.
If you want to get rid of this problem, consider using EFS service to create a shared volume across 2 EC2 instances and configure the wordpress installation to work from that EFS volume. This way, your installation files and other content will be in shared EFS volume commonly accessed by both EC2 instances.
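Mounting the shared EFS volume on each instance is a standard NFSv4 mount. A sketch; the file-system ID, region, and mount point are placeholders, and in practice you would also add this to /etc/fstab so it survives reboots:

```shell
# Mount a shared EFS volume over the WordPress content directory
# (run on each EC2 instance; security groups must allow NFS, port 2049).
sudo mkdir -p /var/www/html/wp-content
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html/wp-content
```

Both instances then read and write the same plugins, themes, and uploads, so changes made through either instance are visible on both.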
You will have to move your database off localhost (I guess you have it on the same micro instance), either to another EC2 instance or, preferably, to an RDS instance.
Afterwards, create a copy of your EC2 instance on another EC2 micro instance and put both behind a load balancer.
First, create an image (AMI) of your existing micro EC2 instance on which you have configured WordPress.
Second, create a Classic Load Balancer.
Third, create a launch configuration (LC) with the AMI you created above.
Fourth, create an auto scaling group (ASG) with the above LC and ELB, and keep the group size at 2.
This will make sure you have 2 instances running at all times, and if any instance goes down, the ASG will create a new instance from the AMI and terminate the failed one. Reference:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-register-lbs-with-asg.html
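The four steps above can be sketched with the AWS CLI; every name, ID, AZ, and instance type here is a placeholder, not taken from the question:

```shell
# 1. Create an AMI from the existing WordPress instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name wordpress-ami

# 2./3. Create a launch configuration from that AMI
#       (the Classic Load Balancer "wp-elb" is assumed to exist already).
aws autoscaling create-launch-configuration \
  --launch-configuration-name wp-lc \
  --image-id ami-12345678 --instance-type t2.micro

# 4. Create the auto scaling group pinned at 2 instances behind the ELB.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name wp-asg \
  --launch-configuration-name wp-lc \
  --min-size 2 --max-size 2 --desired-capacity 2 \
  --load-balancer-names wp-elb \
  --availability-zones us-east-1a us-east-1b
```

Keeping min and max both at 2 gives pure failover rather than scaling; raise max-size later if you also want to scale with traffic.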
Alternatively, you can use Elastic Beanstalk:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html
Thanks
