bosh-lite installation on OpenStack

I have already installed bosh-lite and Cloud Foundry on a single VM using the tutorial at https://docs.cloudfoundry.org/deploying/run-local.html. Is there a way to install bosh-lite and Cloud Foundry on OpenStack?
I searched a lot but could not find a proper answer. What I found were disconnected pieces, like installing BOSH and OpenStack on a single VM, but I don't know whether that is useful to me.
I am pretty new to Cloud Foundry and OpenStack, so things are pretty confusing for me. My ultimate goal is to deploy and test Docker with Cloud Foundry, which means installing Diego. I could have used cf_nise_installer, but I am not sure whether it supports Diego.
Thanks.

I'm not sure why you want to deploy CF and Diego on a single VM on OpenStack.
Why a single VM? Could it be 2 or 3?
Why OpenStack, why not AWS, or DigitalOcean, or something else?
Do you need all the features of CF (multi-tenancy, service integration, buildpacks) or is Docker + Diego + Routing + Logging + a nice CLI sufficient, etc?
At any rate, there is no out-of-the-box solution for your exact set of requirements, but you have several options, with tradeoffs:
Use local BOSH-Lite instead of OpenStack. You can deploy Diego to your BOSH-Lite alongside your CF, and play around with the Docker support there. See instructions here: https://github.com/cloudfoundry-incubator/diego-release/#deploying-diego-to-bosh-lite
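The BOSH-Lite route boils down to a handful of BOSH CLI commands. This is only a rough transcript, assuming the (v1) BOSH CLI and the default Vagrant BOSH-Lite director address; the release and manifest file names below are placeholders, so follow the diego-release README linked above for the exact steps:

```shell
# Target the local BOSH-Lite director (default Vagrant address).
$ bosh target 192.168.50.4 lite
# Upload a Diego release alongside your existing CF deployment.
$ bosh upload release diego-release.tgz   # placeholder file name
# Point BOSH at a Diego manifest generated per the README, then deploy.
$ bosh deployment diego.yml               # placeholder manifest name
$ bosh deploy
```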
Use Lattice. Lattice is basically Diego, along with routing, log aggregation, and a CLI to make it easy to push Docker-based apps, scale them up and down, get logs, etc. You will not get the full CF feature set, for instance there is no UAA which CF uses for user authentication, managing multi-tenancy, scopes and permissions, etc. You can check out the Lattice website: http://lattice.cf/. Docs on deploying Lattice are here: http://lattice.cf/docs/terraform/. You can see several deployment options there, including OpenStack if you search the page.
If you're willing to do a lot more work, you could either figure out how to make BOSH-Lite work against the OpenStack provider, or you could figure out enough about how BOSH manifests are structured and then use bosh-init to deploy a single-VM deployment to OpenStack that colocates all of CF and Diego into a single "job".
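For that last option, a colocated single-VM deployment manifest has roughly the following shape. This is a structural sketch only, not a working manifest: the release versions, job templates, and network names are placeholders you would have to fill in from the real CF and Diego release documentation:

```yaml
# Structural sketch only -- all template names below are placeholders.
name: cf-diego-single-vm
releases:
  - {name: cf, version: latest}
  - {name: diego, version: latest}
jobs:
  - name: everything          # one colocated "job" (VM)
    instances: 1
    templates:                # colocate CF and Diego job templates here
      - {name: nats, release: cf}
      - {name: postgres, release: cf}
      - {name: cloud_controller_ng, release: cf}
      - {name: rep, release: diego}
      # ...remaining CF/Diego templates
    networks:
      - name: default
```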


How to keep persistent volumes in sync between clusters?

I'm trying to get an installation of Wordpress running in Kubernetes, as well as have an option of running the same configuration locally in minikube. I want to use the standard Docker image of Wordpress: https://hub.docker.com/_/wordpress/.
I'm having trouble with making sure that the plugins and templates are in sync though. The Docker container exposes a Volume at /var/www/html. Wordpress installation, as well as my plugins will live there.
Assuming I do the development on Minikube, along with the installation of plugins etc., how do I handle moving Persistent Volumes between my local cluster and the target cluster? Should I just reinstall Wordpress every time the Pod is scaled?
You can follow Writing Portable Configuration (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration) guide for persistent volume if you are planning to migrate it to different cluster.
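Following that guide, the portable part of the config is a PersistentVolumeClaim only: you leave PersistentVolumes (and, ideally, storageClassName) out of the config so each cluster can bind the claim with its own storage. A minimal claim for the Wordpress volume might look like this (the name and size are just examples):

```yaml
# Example claim -- the name and requested size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-html
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # No storageClassName: let each cluster use its default StorageClass.
```

The Wordpress Pod then mounts this claim at /var/www/html, and the same manifest works on both Minikube and the target cluster.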
In a real production scenario you would want to use a standard tool to back up and migrate persistent volumes between clusters. Velero is such a tool that enables you to do exactly that.
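With Velero the workflow is roughly: back up the namespace in one cluster, then restore it in the other. A hedged sketch, assuming a namespace called wordpress and Velero already installed in both clusters against the same object storage:

```shell
# In the source cluster: back up the namespace, including PV data.
$ velero backup create wp-backup --include-namespaces wordpress
# In the target cluster (same backup storage location configured):
$ velero restore create --from-backup wp-backup
```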

Does using the Openstack client glance image-create command require that it be repeated to all controllers in the environment?

Working on creating an OpenStack environment under Fuel for 18 XenServer-based physical servers, based on some prior development work that started with 4 physical servers. In the original configuration, there was only one OpenStack controller. Due to the XenServer hypervisor requirement, the images have to be custom-massaged. This led the developers to use the OpenStack CLI client and the glance image-create command for image installation, rather than the Fuel Horizon dashboard GUI.
I'm adding at least two other controllers as recommended by best practices.
The question is pretty generic, I hope.
When using the OpenStack client CLI glance command, you set up the environment for communication purposes. Does the glance command in this configuration create the image on ALL controllers or just one?
When I look at the available images via the Fuel Horizon dashboard, the newly created image IS available. My concern is whether it is on all of the controllers, and if not, whether there is a way to synchronize all of the controllers with respect to images.
Thank you for your time,
Ed Kiefer
If you have two controllers (and one OpenStack region) configured properly, you should only ever need to upload the image once. Glance is a region-wide service backed by shared storage and a shared database, so an image uploaded once is visible from every controller in the region. If it is only available from one controller, something in your setup is wrong.
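In other words, you upload once against the region's image service endpoint, and the image is visible regardless of which controller serves the request. A rough transcript (the image name and file are examples; newer clients use `openstack image create` in place of the legacy `glance image-create`):

```shell
# Upload once, against the region's Glance endpoint.
$ glance image-create --name custom-xen-image \
    --disk-format vhd --container-format ovf \
    --file ./custom-xen-image.vhd
# A session pointed at any controller lists the same image.
$ glance image-list | grep custom-xen-image
```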

Single install Apache Karaf with failover configuration using shared disk

I'm looking to implement failover (master/slave) for Karaf. Our current server setup has two application servers with a shared SAN disk, where our current Java applications are installed in a single location and can be started on either machine or on both machines at the same time.
I was looking to implement Karaf master/slave failover in a similar way (one install shared by both app servers), however I'm not sure that this is really a well-beaten path and would appreciate some advice on whether the alternatives (mentioned below) are significantly better.
Current idea for failover:
Install Karaf once on the shared SAN and set up basic file locking on this shared disk. Both application servers will effectively initiate the Karaf start script, however only one (the first) will fully start (grabbing the lock) and the second will remain in standby until it grabs the lock (if the master falls over).
The main benefit I can see from this is that I only have to deploy components to one Karaf installation and I only need to manage one Karaf installation.
Alternatives:
We install Karaf in two separate locations on the shared SAN and set them up to lock on the same lock file. Each application server will have its own Karaf instance, and thus its own start script to run. This makes our deployment slightly more complicated (2 Karaf installations to manage and deploy to).
I'd be interested if anyone can point out specific concerns they have with the current idea.
Note: I understand that Karaf Cellar can simplify my Karaf instance management, however we would need to undertake another round of PoCs etc. to approve our company's use of Cellar (as a separate product). It is something I'd like to migrate to in the future.
Take a look at the Karaf documentation. This is from the docs on how to set a lock file for HA (these properties go in etc/system.properties):
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.SimpleFileLock
karaf.lock.dir=<PathToLockFileDirectory>
karaf.lock.delay=10000
As is also shown there, you can set a lock level, so that the standby instance only starts bundles up to that start level before it acquires the lock:
karaf.lock.level=50

How do you push updates to a deployed meteor app that has a filesystem?

I have an app running on my own DigitalOcean VM that I'm trying to play around with to figure out how to run a Meteor production server. I deployed it with meteor build, but now I'm a bit unsure about how to push updates. If I build a new tarball on my own machine, I will lose file references that my users have made to files in bundle/uploads, because the remote filesystem isn't incorporated into my local project. I can imagine some hacky ways to work around this, but besides hosting the files on S3 or another third-party server, is there any way to "hot code push" into the deployed app without needing to move files around on my server?
Am I crazy for wondering what the Meteor equivalent of git push/pull is in production, or just ignorant?
You can use dokku (https://github.com/progrium/dokku). DigitalOcean allows you to create an instance pre-installed with dokku too.
Once you've set up your ssh keys, set the environment variables, ROOT_URL, PORT and MONGO_URL you can add that server as a git remote and simply git push to it.
Dokku will automatically build up the Meteor app and have it running, and keep it up to date whenever you git push.
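Concretely, once dokku is installed on the droplet, the push workflow looks something like this (the host and app names are examples):

```shell
# One-time: set the app's environment on the dokku host.
$ ssh dokku@my-droplet config:set myapp \
    ROOT_URL=https://example.com PORT=80 MONGO_URL=mongodb://...
# Add the dokku host as a git remote, then push to deploy.
$ git remote add dokku dokku@my-droplet:myapp
$ git push dokku master
```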
I find Dokku very convenient. There are also Flynn and Deis, which can do the same in a multi-tenant environment with many more options.
Just one thing to keep in mind with this: push the maintainers of the buildpack repo to keep its Node version up to date. Meteor is a bit overzealous when it comes to requiring the latest version of Node and refusing older versions.
Meteor does lack a bit in this department. I can't remember where I may have heard this, but I believe they intend on adding this very popular Meteor deployment package to their library. Short of switching to a more compatible host, I'm not aware of any better solutions.

Railo on AWS Opsworks

Does anyone have any information or experience deploying Railo (cfml) apps on AWS OpsWorks? It seems like it should be possible (similar to cloudbees or heroku) since Opsworks now supports java apps. I'm just having a hard time getting started.
The official and active cookbook for this seems to be https://github.com/ringgi/railo-cookbook. You haven't specified what particular issue you're having. You would need to modify any Chef community cookbook to implement it on OpsWorks: replace any mention of a role with the layer's short name. That is usually enough to get most simple cookbooks to behave with the Chef 11.10 version of the stack.
Most likely you would need to create a new cookbook from the community cookbook specified above plus the additional cookbooks mentioned in its metadata.rb file.
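A minimal wrapper cookbook's metadata.rb might look like the following. The cookbook and dependency names here are illustrative; pull the real dependency list from the railo cookbook's own metadata.rb:

```ruby
# metadata.rb for a hypothetical wrapper cookbook.
name             'railo-opsworks'
maintainer       'you'
version          '0.1.0'
description      'Wraps the community railo cookbook for OpsWorks'
# Dependencies: the railo cookbook plus whatever its metadata.rb lists.
depends 'railo'
depends 'java'
depends 'tomcat'
```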
