How to check if there are sufficient resources to create an instance? - openstack

I want to check whether there are sufficient resources available to create an instance of a specific flavor, without actually creating the instance. I tried stack --dry-run, but it does not check whether the resources are available.
I also went through the CLI and REST API docs, but did not find any solution other than checking the available resources on each hypervisor and comparing them manually with the resources required by the flavor. Isn't there an easier solution, like a CLI command that would give me a yes/no answer?
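For reference, the manual check I mean looks roughly like this (admin-only CLI commands; the flavor name is just an example, and column names may vary slightly between releases):

    # Per-hypervisor capacity and usage
    openstack hypervisor list
    openstack hypervisor show <hypervisor-hostname> \
        -c vcpus -c vcpus_used -c memory_mb -c memory_mb_used -c free_disk_gb

    # Aggregated usage across all hypervisors
    openstack hypervisor stats show

    # Requirements of the flavor to compare against
    openstack flavor show m1.small -c vcpus -c ram -c disk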
Thank you.

Related

When migrating from an old Artifactory instance to a new one, what is the point of copying $ARTIFACTORY_HOME/data/filestore?

Artifactory recommends the steps outlined here when moving from an old Artifactory server to a new one: https://jfrog.com/knowledge-base/what-is-the-best-way-to-migrate-a-large-artifactory-instance-with-minimal-downtime/
Under both methods it says that you're supposed to copy over $ARTIFACTORY_HOME/data/filestore, but then you just go ahead and export the old data and import it into the new instance, and in the first method you also rsync the files. This seems like you're just doing the exact same thing three times in a row. JFrog really doesn't explain why each of these steps is necessary, and I don't understand what each does differently that cannot be done by the others.
When migrating an Artifactory instance, we need to take two things into consideration:
Artifactory Database - Contains the information about the binaries, configurations, security information (users, groups, permission targets, etc)
Artifactory Filestore - Contains all the binaries
Apart from your questions, I would like to add that from my experience, in the case of a big filestore (500GB+) it is recommended to use a skeleton export (export the database only, without the filestore; this can be done by marking "Exclude Content" in Export System) and to copy the filestore with the help of a 3rd-party tool such as rsync.
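A minimal sketch of that rsync step, assuming the default filestore location under $ARTIFACTORY_HOME and a placeholder hostname for the new server:

    # Run while the old Artifactory is still up, then re-run just before
    # cutover so only the diffs are transferred.
    rsync -avP "$ARTIFACTORY_HOME/data/filestore/" \
          newserver:"$ARTIFACTORY_HOME/data/filestore/"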
I hope this clarifies further.
The main purpose of this article is to provide a somewhat faster migration compared to a simple full export and import.
The idea behind both methods is to select "Exclude Content". The content we choose to exclude is exactly what is stored in $ARTIFACTORY_HOME/data/filestore/.
The difference between the methods is that Method #1 involves some downtime, as you will have to shut down Artifactory at a certain point, sync the diffs, and start the new instance.
Method #2, on the other hand, involves a somewhat more complex process that uses in-app replication to sync the diffs.
Hope that makes more sense.

Writing an appspec.yml File for Deployment from S3 (and/or Bitbucket) to AWS CodeDeploy

I'd like to make it so that a commit to our Bitbucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section in the appspec.yml file, and I am also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but I am confused about what to put in those fields, and the more I explore, the more confused I get.
Please consider that I am new to AWS CodeDeploy.
It would also be very helpful if someone could point me to a step-by-step guide on how to configure and automate CodeDeploy.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd recommend the following:
Try running the First Run Wizard in the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle with an appspec file included.
Once you want to try a deployment yourself, the Get Started doc is a great place for help with some prerequisite settings like the IAM role.
Then try some tutorials for a sample app too, which will give you an idea about deployment groups, deployment configurations, revisions, and so on.
The next step should be creating a bundle for your own use case; the AppSpec file doc is a great reference here. As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything at those points, the lifecycle events can be left empty. BeforeInstall can be used for pre-install tasks, such as decrypting files and creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions. A minimal example is sketched after this list.
Now comes the fun part! This blog post talks about the details of how to integrate with GitHub (similar for Bitbucket). It's a little long but really useful, and it also covers how to deploy automatically whenever a new commit is pushed. Currently Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are plenty of other ways to achieve the same purpose, such as Lambda and so on.
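For reference, a minimal appspec.yml along those lines might look like this (the destination directory and script names are placeholders, not something prescribed by CodeDeploy):

    version: 0.0
    os: linux
    files:
      - source: /                      # everything in the revision bundle...
        destination: /var/www/myapp    # ...is copied to this path on the instance
    hooks:
      BeforeInstall:
        - location: scripts/backup_current_version.sh   # e.g. back up the running version
          timeout: 300
          runas: root
      AfterInstall:
        - location: scripts/configure_app.sh            # e.g. set config or permissions
          timeout: 300
          runas: root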

How to export and sync an Artifactory repository to the filesystem?

I am looking for a solution that would allow me to have a network share where people can access (read-only) the artifacts from an Artifactory repository.
Why? We also use Artifactory to keep track of big binaries like installation kits, ISO images and so on, and it takes a lot of time to download all of them (sometimes as zips), unpack them and run them. If these were exported to an NFS/SMB share, people would only need to mount the share in order to use them.
How can we achieve this? Please keep in mind that we also want to automate it, so that the files are updated by Artifactory when needed.
Artifactory supports WebDAV out of the box.
It seems that's not possible at this moment and there is a feature request for enabling it:
https://www.jfrog.com/jira/browse/RTFACT-8302
Feel free to vote and comment on it, so that JFrog realises how important this use case is.
I guess they should be able to provide a script that mirrors/syncs a repository to an NFS share, but that would almost double the storage space needed.
If they instead used hardlinks or symlinks to create a browsable tree of the repository inside the storage directory, this would be solved and no sync would be needed.
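If the duplicated storage is acceptable, a crude mirror can already be scripted today, for example with the JFrog CLI (a hypothetical sketch; the repository name and mount point are placeholders):

    # Crontab entry: mirror the repository to the NFS/SMB mount every hour
    0 * * * * jfrog rt download "installers-local/*" /mnt/artifactory-share/installers-local/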

Live migration on Openstack

I'm working on a project on OpenStack. I have installed OpenStack by creating two virtual machines, one for the controller node and the other for the compute node.
Now I want to test an example of live migration on OpenStack, and I have found a video which describes the approach. As the video shows, I need to have 2 compute nodes, and I want to know whether I can simply create a second compute node now, or whether this second compute node should have been created during the installation of OpenStack.
This is the link of the video that I have watched: https://www.youtube.com/watch?v=_4vJUYFGbEM
Thank you
It doesn't matter when you add the compute nodes (during the install or later on). Please also remember that live migration piggybacks on the hypervisor, so depending on which hypervisor you use, this may or may not be possible.
Please look at http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations to ensure that the migration capability exists.
It simply boils down to a few things:
The storage is not moved during a live migration, so if you have a VM with local instance storage, you will need a shared file system like NFS. If the instance is backed by a Cinder volume, you will be able to do the migration without shared storage (see the command sketch after this list).
The nova-compute service needs to be installed on the destination node.
The hypervisor version should be the same.
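A rough sketch of the commands involved, using the nova CLI (server and host names are placeholders):

    # Volume-backed instance, or ephemeral storage on a shared filesystem:
    nova live-migration <server-id> <target-compute-host>

    # Local ephemeral storage and no shared filesystem: block migration
    # copies the disks over the network instead
    nova live-migration --block-migrate <server-id> <target-compute-host>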
I hope this clarifies.
Either works. OpenStack allows you to dynamically add and remove compute nodes from a cloud environment.
Please refer to http://docs.openstack.org/admin-guide/compute-configuring-migrations.html for extra details.
Live migration of light instances can be done over the network without shared storage, but for heavy instances, shared storage or a shared volume is preferred. Since you mentioned you have two compute nodes, their Nova storage should be shared storage.
Long answer short, from my perspective:
You can add/remove compute nodes at any time in an OpenStack installation.
To add a compute node, follow the installation guide for setting up a new compute node, right from the environment setup.
Also, don't forget to install the networking components on your new compute node.

git build number c#

I'm trying to embed git describe-generated version info into AssemblyInfo.cs, plus a label within an ASP.NET website.
I already tried using git-vs-versionino, but this assumes the Git executable is on PATH. However, the default install of msysgit on Windows does not set this up; it uses Git Bash. This caused problems.
Now I am looking for a way to use the libgit2sharp library (for zero external dependencies) as a build number generator. However, this library has no describe command...
Thanks!
git-describe is a UI feature that nobody has implemented in the library or bindings yet (or at least nobody's contributed it), but you can do it yourself fairly easily.
You get a list of the tags and the commits they point to, then walk down the commits from HEAD and count how many steps it takes to reach a commit that is in the list you built. This already gives you the information you need: if the number of steps is zero, your description is the tag name only; otherwise you append the number of steps and the current commit's abbreviated id to it.
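A rough LibGit2Sharp sketch of that walk might look like the following (the method and variable names are mine, and corner cases such as several tags on the same commit or tag-name patterns are glossed over):

    using System;
    using System.Collections.Generic;
    using LibGit2Sharp;

    class GitDescribeSketch
    {
        // Approximate "git describe": find the nearest tagged ancestor of HEAD
        // and return "<tag>" or "<tag>-<steps>-g<short sha>".
        static string Describe(Repository repo)
        {
            // Map commit sha -> tag name, peeling annotated tags to the commit they point to.
            var taggedCommits = new Dictionary<string, string>();
            foreach (Tag tag in repo.Tags)
            {
                var target = tag.IsAnnotated ? tag.Annotation.Target : tag.Target;
                taggedCommits[target.Sha] = tag.FriendlyName;
            }

            // Walk from HEAD backwards, counting the steps to the first tagged commit.
            int steps = 0;
            foreach (Commit commit in repo.Commits)
            {
                string tagName;
                if (taggedCommits.TryGetValue(commit.Sha, out tagName))
                {
                    return steps == 0
                        ? tagName
                        : string.Format("{0}-{1}-g{2}", tagName, steps, commit.Sha.Substring(0, 7));
                }
                steps++;
            }

            // No tag reachable from HEAD: fall back to the abbreviated commit id.
            return repo.Head.Tip.Sha.Substring(0, 7);
        }

        static void Main()
        {
            using (var repo = new Repository("."))
            {
                Console.WriteLine(Describe(repo));
            }
        }
    }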
There's a work-in-progress libgit2 pull request that proposes an implementation of the git-describe functionality.
See #1066 for more information.
It's not finished yet. Make sure to subscribe to it in order to be notified of its future progress.
Once it's done, it should be quite easy to bind it and make it available through LibGit2Sharp.
