Does using the OpenStack client glance image-create command require that it be repeated to all controllers in the environment?

I'm working on creating an OpenStack environment under Fuel for 18 XenServer-based physical servers, building on prior development work that started with 4 physical servers. In the original configuration there was only one OpenStack controller. Because of the XenServer hypervisor requirement, the images have to be custom-massaged, which led the developers to use the OpenStack CLI client and the glance image-create command for image installation rather than the Fuel Horizon dashboard GUI.
I'm adding at least two other controllers as recommended by best practices.
The question is pretty generic, I hope.
When using the glance command from the OpenStack CLI client, you first set up the environment for authentication. Does the glance command in this configuration create the image on ALL controllers or just one?
When I look at the available images via the Fuel Horizon dashboard, the newly created image IS available. My concern is whether it is present on all of the controllers, and if not, whether there is a way to bring all of the controllers to the same set of images.
Thank you for your time,
Ed Kiefer

If you have two controllers (and one OpenStack region) configured properly, you should only ever need to upload the image once. If it is only available from one controller, something in your setup is wrong.
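For reference, a minimal sketch of that workflow, assuming a standard openrc credentials file; the image name, file path, and disk/container formats are placeholders and depend on how the XenServer images were prepared:
source openrc                                   # sets OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME
glance image-create --name custom-xs-image \
    --disk-format vhd --container-format ovf \
    --file ./custom-xs-image.vhd
glance image-list                               # point the client at each controller's endpoint in turn; the same image should appear
The image is registered once in the Glance database and backend store that the controllers share, which is why, in a properly configured environment, a single upload should be enough.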

How to manage multiple symfony projects in a development computer

I've seen some posts, including How to manage multiple backend stacks for development?, but nothing related to using LXC for a stable, safe and separate development environment that matches the production environment regardless of the desktop and/or Linux distribution.
Before the symfony CLI was released, there was a feature that allowed specifying a socket via ip:port, which made it possible to use different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that by using http://myproject:8000 (specified in /etc/hosts) I could access my project and keep the sessions, etc.
The symfony CLI, as far as I've tried, doesn't allow this. Reading the docs, there's a built-in proxy in the symfony CLI, but although I've set a couple of projects to use it in the container, clicking on the project in the list (with the .wip suffix) doesn't open it and issues an error about proxy redirections. If I browse to the container's IP and port, it works perfectly, but the port can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the socket feature that existed previously to be brought back, so I can manage this situation the way I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue to improve the part that didn't seem to work, so I'll try to explain for whoever might be interested.
I've set up the proxy server built into the symfony CLI, but instead of letting it run with the defaults I had to specify --host=proxyhost (resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (issuing symfony proxy:domain:attach myproject from inside the project dir), I can go to http://myproject.wip just like http://proxyhost:portX, no matter which port portX is.
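A hedged sketch of the steps described above; proxyhost, the project path and myproject are placeholders:
symfony proxy:start --host=proxyhost     # start the built-in proxy under a name resolvable from the host
cd ~/projects/myproject                  # hypothetical project directory
symfony proxy:domain:attach myproject    # after this, http://myproject.wip routes to this project
symfony server:start -d                  # run the project's local web server in the background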

Single install Apache Karaf with failover configuration using shared disk

I'm looking to implement failover (master/slave) for Karaf. Our current server setup has two application servers with a shared SAN disk, where our current Java applications are installed in a single location and can be started on either machine or on both machines at the same time.
I was looking to implement Karaf master/slave failover in a similar way (one install shared by both app servers), however I'm not sure this is a well-beaten path and would appreciate some advice on whether the alternatives (mentioned below) are significantly better.
Current idea for failover:
Install Karaf once on the shared SAN and set up basic file locking on this shared disk.
Both application servers will effectively initiate the Karaf start script, however only one (the first) will fully start (grabbing the lock) and the second will remain in standby until it grabs the lock (if the master falls over).
The main benefit I can see from this is that I only have to deploy components to one Karaf installation and I only need to manage one Karaf installation.
Alternatives:
We install Karaf in two separate locations on the shared SAN and set both up to lock on the same lock file.
Each application server would have its own Karaf instance, and thus its own start script to run.
This would make our deployment slightly more complicated (2 Karaf installations to manage and deploy to).
I'd be interested if anyone can point out specific concerns they have with the current idea.
Note: I understand that Karaf Cellar can simplify my Karaf instance management, however we would need to undertake another round of PoCs etc. to approve our company's use of Cellar (as a separate product). It's something I'd like to migrate to in the future.
Take a look at the documentation
This is from the documentation on how to set a lockfile for HA:
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.SimpleFileLock
karaf.lock.dir=<PathToLockFileDirectory>
karaf.lock.delay=10000
As can be seen there, you can also set a start level, so that bundles above that level are only started once the lock is acquired:
karaf.lock.level=50
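A hedged sketch of how that could look for the shared-SAN idea; the mount point /mnt/san/karaf is a placeholder. Both servers run the start script of the same installation, and only the node that grabs the lock boots fully:
# etc/system.properties of the shared install
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.SimpleFileLock
karaf.lock.dir=/mnt/san/karaf/lock
karaf.lock.delay=10000

# on each application server
/mnt/san/karaf/bin/start    # the first node to grab the lock becomes master; the other waits as a slave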

bosh-lite installation on OpenStack

I have already installed BOSH-Lite and Cloud Foundry on a single VM using the tutorial at https://docs.cloudfoundry.org/deploying/run-local.html. Is there a way to install BOSH-Lite and Cloud Foundry on OpenStack?
I searched a lot but could not find a proper answer; what I found was somewhat disconnected, such as installing BOSH and OpenStack on a single VM, and I don't know whether that is useful to me.
I am pretty new to Cloud Foundry and OpenStack, so things are pretty confusing for me. My ultimate goal is to deploy and test Docker with Cloud Foundry, which means installing Diego. I could have used cf_nise_installer, but I am not sure if it supports Diego.
Thanks.
I'm not sure why you want to deploy CF and Diego on a single VM on OpenStack.
Why a single VM, could it be 2 or 3?
Why OpenStack, why not AWS, or DigitalOcean, or something else?
Do you need all the features of CF (multi-tenancy, service integration, buildpacks) or is Docker + Diego + Routing + Logging + a nice CLI sufficient, etc?
At any rate, there is no out-of-the-box solution for your exact set of requirements, but you have several options, with tradeoffs:
Use local BOSH-Lite instead of OpenStack. You can deploy Diego to your BOSH-Lite alongside your CF, and play around with the Docker support there (a short command sketch follows this list). See instructions here: https://github.com/cloudfoundry-incubator/diego-release/#deploying-diego-to-bosh-lite
Use Lattice. Lattice is basically Diego, along with routing, log aggregation, and a CLI to make it easy to push Docker-based apps, scale them up and down, get logs, etc. You will not get the full CF feature set; for instance, there is no UAA, which CF uses for user authentication, managing multi-tenancy, scopes and permissions, etc. You can check out the Lattice website: http://lattice.cf/. Docs on deploying Lattice are here: http://lattice.cf/docs/terraform/. You can see several deployment options there, including OpenStack if you search the page.
If you're willing to do a lot more work, you could either figure out how to make BOSH-Lite work against the OpenStack provider, or you could figure out enough about how BOSH manifests are structured and then use bosh-init to deploy a single-VM deployment to OpenStack that colocates all of CF and Diego into a single "job".
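For the first option, a hedged sketch of the Vagrant-based BOSH-Lite workflow from the linked docs, using the old v1 BOSH CLI; the manifest name is a placeholder, and the diego-release README explains how to generate the real manifests:
vagrant up --provider=virtualbox        # boots the BOSH-Lite VM
bosh target 192.168.50.4 lite           # the default BOSH-Lite director address at the time
bosh deployment my-deployment.yml       # hypothetical manifest generated per the linked README
bosh deploy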

Live migration on OpenStack

I'm working on a project on OpenStack. I have installed OpenStack by creating two virtual machines, one for the controller node and the other for the compute node.
I want to test an example of live migration on OpenStack, and I have found a video which describes the approach. As the video shows, I need to have 2 compute nodes, and I want to know whether I can simply create a second compute node now or whether the second compute node should have been created during the installation of OpenStack.
This is the link of the video that I have watched: https://www.youtube.com/watch?v=_4vJUYFGbEM
Thank you
It doesn't matter when you add the compute nodes (during the install or later on). Please also remember that live migration piggybacks on the hypervisor, so depending on which hypervisor you use, this may or may not be possible.
Please look at http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations to ensure that the migration capability exists.
It simply boils down to a few things:
The storage is not moved in a live migration, so if you have a VM on local instance storage you will need a shared file system such as NFS. If the instance is backed by a Cinder volume, you can do the migration without shared storage.
The nova-compute service needs to be installed on the destination.
The hypervisor version should be the same.
I hope this clarifies.
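For illustration, a hedged sketch of checking the nodes and triggering the migration with the legacy nova CLI; the instance ID and host name are placeholders:
nova hypervisor-list                                  # confirm both compute nodes are registered
nova live-migration <instance-uuid> <target-compute-host>
nova live-migration --block-migrate <instance-uuid>   # for instances on local (non-shared) instance storage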
Either works. OpenStack allows you to dynamically add and remove compute nodes from a cloud environment.
Please refer to http://docs.openstack.org/admin-guide/compute-configuring-migrations.html for extra details.
Live migration for light instances can be done over the network without shared storage, but for heavy instances shared storage or a shared volume is preferred. Since you have two compute nodes, their Nova instance storage should be shared storage.
Long answer short, from my perspective:
You can add or remove a compute node at any time from an OpenStack installation.
To add a compute node, follow the installation guide for adding a new compute node, just as during the initial environment setup (a hedged sketch follows below).
Also, don't forget to install the networking components on your new compute node.
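A hedged sketch of what that looks like on an Ubuntu-based deployment; package names follow the standard install guide, and the configuration details depend on your release and networking choice:
# on the new compute node
apt-get install nova-compute nova-compute-kvm    # plus the neutron agent matching your networking setup
# copy and adjust /etc/nova/nova.conf from an existing compute node, then:
service nova-compute restart
# from the controller, confirm the new node registered
nova service-list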

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress using EC2. I created a Load Balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the Load Balancer. But I'm still getting database errors and have to restart the instances. If I'd like to put 4 instances behind the Load Balancer, are the steps the same? I saw a "Number of Instances" option when I launched an AMI, with a default value of 1, and I'm not sure whether I should enter 3 or 4 to create multiple instances in one click.
Also, if I make an update on the Wordpress1 instance, will the update show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances and a database etc., you should consider using AWS CloudFormation. A CloudFormation template is essentially a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress including a database and autoscaling groups (example wordpress template).
However, like datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file or, in your case, upgrade WordPress, it will only happen on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build (in your case the frontend web servers and the database) should be allowed to fail without interrupting your service.
Another point is that you should avoid doing things by hand; automation is the key.
An environment where you need to attach your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and replaced.
For your web servers you can use "autoscaling groups" to get this behavior.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
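A hedged sketch of what that looks like with the AWS CLI; the names, AMI ID, instance type and availability zones are placeholders:
aws autoscaling create-launch-configuration \
    --launch-configuration-name wordpress-lc \
    --image-id ami-12345678 --instance-type t2.micro
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wordpress-asg \
    --launch-configuration-name wordpress-lc \
    --min-size 2 --max-size 4 \
    --load-balancer-names wordpress-elb \
    --availability-zones us-east-1a us-east-1b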
For your database, Amazon offers RDS multi-AZ environments, which provide automatic failover.
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example using a shared NFS mount for the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud may not be the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common requirement is the database, which will need to exist on its own instance outside of the autoscaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins and themes in the wp-content folder inside the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
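As a rough sketch of moving the database to RDS (the identifier, credentials and instance class are placeholders), each WordPress instance's wp-config.php would then point DB_HOST at the RDS endpoint instead of a local MySQL:
aws rds create-db-instance \
    --db-instance-identifier wordpress-db \
    --engine mysql \
    --db-instance-class db.t2.micro \
    --allocated-storage 20 \
    --master-username wpadmin \
    --master-user-password 'change-me'
aws rds describe-db-instances \
    --db-instance-identifier wordpress-db    # read Endpoint.Address and use it as DB_HOST in wp-config.php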
