Railo on AWS OpsWorks

Does anyone have any information or experience deploying Railo (CFML) apps on AWS OpsWorks? It seems like it should be possible (similar to CloudBees or Heroku) since OpsWorks now supports Java apps. I'm just having a hard time getting started.

The official and active cookbook for this seems to be: https://github.com/ringgi/railo-cookbook. You haven't specified what specific issue you're having, but in general you would need to modify any Chef community cookbook to run on OpsWorks. You would need to replace any mention of a role with the layer's short name. That is usually enough to get most simple cookbooks to behave with the Chef 11.10 version of the stack.
Most likely you would need to create a new wrapper cookbook that pulls in the community cookbook above plus the additional cookbooks mentioned in its metadata.rb file.
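As an illustration, a minimal wrapper cookbook's metadata.rb might look like the sketch below; the cookbook name and version here are hypothetical, and the real dependency list should be copied from the community cookbook's own metadata.rb:

    # metadata.rb of a hypothetical wrapper cookbook
    name        'railo-opsworks'
    version     '0.1.0'
    description 'Wraps the community railo cookbook for use on OpsWorks'
    depends     'railo'   # the community cookbook linked above
    # ...plus whatever the railo cookbook's own metadata.rb declares, e.g.:
    # depends 'java'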

Related

Can you just store binaries?

We are using Artifactory Enterprise and, in addition to "normal" usage, we would like to just store some binaries in Artifactory. This is so we can limit egress and pull the binaries from Artifactory instead of the general Internet. Is this possible? Is there a documentation link that will help explain the process?
Yes, this can be done by creating a generic local repository and deploying the binaries through the UI or the REST API; you can then pull the binaries from that generic local repository. Refer to this blog as well.
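For example, with curl against Artifactory's standard deploy endpoint (a sketch only; the host, the repository name generic-local, the path, and the credentials are all placeholders):

    # upload (deploy) a binary into the generic local repository
    curl -u myuser:mypassword -T my-tool.bin \
      "https://artifactory.example.com/artifactory/generic-local/tools/my-tool.bin"

    # later, download it from Artifactory instead of the general Internet
    curl -u myuser:mypassword -O \
      "https://artifactory.example.com/artifactory/generic-local/tools/my-tool.bin"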

Writing an appspec.yml File for Deployment from S3 (and/or Bitbucket) to AWS CodeDeploy

I'd like to make it so that a commit to our Bitbucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section of the appspec.yml file, and I'm also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but the more I explore, the more confused I get.
Consider that I am new to AWS CodeDeploy.
It would also be very helpful if someone could point me to a step-by-step guide on how to configure and automate CodeDeploy.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd recommend the following:
Try running the First Run Wizard in the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle, with an appspec file included.
Once you want to try a deployment yourself, the Get Started doc is a great place for help with prerequisite settings like the IAM roles.
Then try some of the tutorials with a sample app too, which will give you some idea about deployment groups, deployment configurations, revisions and so on.
The next step should be to create a bundle for your own use case; the AppSpec file doc is a great reference. As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything in them, those lifecycle events can be left empty. Otherwise, BeforeInstall can be used for pre-install tasks, such as decrypting files and creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions. A minimal sketch follows below.
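Here is a minimal appspec.yml sketch, assuming the whole bundle is deployed to /var/www/myapp; the script paths under hooks are hypothetical placeholders for scripts you would ship in your bundle:

    version: 0.0
    os: linux
    files:
      - source: /                    # everything in the bundle root...
        destination: /var/www/myapp  # ...is copied here on the instance
    hooks:
      BeforeInstall:
        - location: scripts/backup_current_version.sh  # hypothetical script in your bundle
          timeout: 300
          runas: root
      AfterInstall:
        - location: scripts/configure_app.sh           # hypothetical script in your bundle
          timeout: 300
          runas: root

Note that 'source' is relative to the root of the bundle, while 'destination' is an absolute path on the instance.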
Now it comes to the fun part! This blog talks about the details of how to integrate with GitHub (it's similar for Bitbucket). It's a little long but really useful, and it also covers how to deploy automatically whenever a new commit is pushed. Currently Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are plenty of other ways to achieve the same goal, such as Lambda.

Chef like version management in salt stack?

I really like the way you can upload multiple versions of the same cookbook to the Chef server, and that you can specify the cookbook version in the metadata file, e.g.
depends 'base-config', '= 1.2.1'
I like Salt. However, I couldn't find any comparable version management or version requirements for Salt states/formulas. I'm really surprised, since I think this is a fundamental requirement for a configuration management tool. Did I miss anything? How does Salt handle state/formula file versions?
Some of this will be added in the Nitrogen release using SPM (the Salt Package Manager).
There are several more things that can be added to this; we are still working on getting an environment set up for uploading SPMs too, so right now you would need to manage them yourself.
Instructions to build SPMs are here: https://docs.saltstack.com/en/develop/topics/spm/spm_formula.html
Daniel
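For reference, the FORMULA file described in those docs (roughly SPM's counterpart to a cookbook's metadata.rb) looks something like this; the name and version values below are made up:

    name: apache
    os: RedHat, Debian, Ubuntu
    os_family: RedHat, Debian
    version: 201506
    release: 2
    summary: Formula for installing Apache
    description: Formula for installing the Apache web server

The package itself is then built by pointing spm build at the formula directory.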

bosh-lite installation on OpenStack

I have already installed BOSH-Lite and Cloud Foundry on a single VM using the tutorial at https://docs.cloudfoundry.org/deploying/run-local.html. Is there a way to install BOSH-Lite and Cloud Foundry on OpenStack?
I searched a lot but could not find a proper answer; what I found was somewhat disconnected, like installing BOSH and OpenStack on a single VM, and I don't know whether that is useful to me.
I am pretty new to Cloud Foundry and OpenStack, so things are pretty confusing for me. My ultimate goal is to deploy and test Docker with Cloud Foundry, which means installing Diego. I could have used cf_nise_installer, but I am not sure whether it supports Diego.
Thanks.
I'm not sure why you want to deploy CF and Diego on a single VM on OpenStack.
Why a single VM, could it be 2 or 3?
Why OpenStack, why not AWS, or DigitalOcean, or something else?
Do you need all the features of CF (multi-tenancy, service integration, buildpacks) or is Docker + Diego + Routing + Logging + a nice CLI sufficient, etc?
At any rate, there is no out-of-the-box solution for your exact set of requirements, but you have several options, with tradeoffs:
Use local BOSH-Lite instead of OpenStack. You can deploy Diego to your BOSH-Lite alongside your CF and play around with the Docker support there (a rough command sketch follows this list). See instructions here: https://github.com/cloudfoundry-incubator/diego-release/#deploying-diego-to-bosh-lite
Use Lattice. Lattice is basically Diego, along with routing, log aggregation, and a CLI, to make it easy to push Docker-based apps, scale them up and down, get logs, etc. You will not get the full CF feature set; for instance, there is no UAA, which CF uses for user authentication, multi-tenancy management, scopes and permissions, etc. You can check out the Lattice website: http://lattice.cf/. Docs on deploying Lattice are here: http://lattice.cf/docs/terraform/. You can see several deployment options there, including OpenStack if you search the page.
If you're willing to do a lot more work, you could either figure out how to make BOSH-Lite work against the OpenStack provider, or you could figure out enough about how BOSH manifests are structured and then use bosh-init to deploy a single-VM deployment to OpenStack that colocates all of CF and Diego into a single "job".
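For the first option, the workflow with the (v1) BOSH CLI is roughly the following, going by the linked diego-release instructions; the release tarball and manifest names are placeholders:

    # target the BOSH-Lite director at its conventional address
    bosh target 192.168.50.4 lite
    # upload the Diego release (plus its dependencies, per the README)
    bosh upload release diego-release.tgz
    # select your generated Diego manifest and deploy
    bosh deployment diego.yml
    bosh deploy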

How to use 51Degrees via NuGet with Azure?

I'm trying to use 51Degrees in a .NET project that I deploy to Azure. In August 2011, they released v1.2.1.3, marked as "Azure Compatible":
Foundation can now be deployed on to the Windows Azure Cloud service. See the release note for full details on requirements and how to set up. Azure-related changes include:
Instead of a log file, log entries are written to a log table.
Instead of a devices file, previous device requests are written to a device table.
A new conditional compilation symbol, 'AZURE'. AZURE-enabled builds will not work in traditional ASP.NET.
Since then there have been a dozen releases and they are up to v2.1.4.9. However, their documentation is super light on how to use it with Azure. In fact, there was originally a bug, because v1.2.1.3 stated:
To make use of the changes you must create a storage account called 'fiftyonedegrees'. The foundation will then create two tables, one for previous devices, and another for logs.
This isn't possible, because Azure storage account names need to be unique across all of Azure, so everyone can't create one named 'fiftyonedegrees'.
Their response was:
After rereading the blog it seems I've made an oversight in this regard, and will update shortly.
The storage account that the Foundation looks for can be changed in the Foundation source code. Go to Foundation/Properties/Constants.cs and change the string 'AZURE_STORAGE_NAME' to the name of your storage account.
However, I'm still at a loss as to how to utilize it within my project. Here are my issues:
I'm not clear whether v1.2.1.3 is the only Azure compatible release, or every release after is Azure compatible. Their documentation doesn't say.
When I install 51Degrees via NuGet, my project doesn't get an App_Data folder created which contradicts their documentation. The web.config file even has entries in it that reference the App_Data folder such as <log logFile="~/App_Data/Log.txt" logLevel="Info"/>.
Based on the response to the Azure storage account bug I quoted earlier, they are saying I need to edit the file Foundation/Properties/Constants.cs. However, since I'm installing via NuGet and it's a DLL, NuGet is presumably the wrong approach? Do I need to download the source, compile it myself, and wire it up to my project manually?
I'm generally new to .NET, NuGet, VS, etc so appreciate the help.
All versions are Azure compatible from 1.2.1.3 onwards. I'm assuming this is the blog post you were talking about. After you've created your Azure storage account, you'll have to edit the Constants.cs file in the source code and add in your account name. My understanding is that this means you'll have to get access to the source code and edit it directly. Once you have done this you'll need to recompile for the software to work correctly. I'm not sure whether there is a way to perform the same task using NuGet, but I'll look into it. Hope this helps.
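To illustrate the edit being described, the change in Foundation/Properties/Constants.cs would be along these lines (a sketch only; the real file's contents will differ):

    // Foundation/Properties/Constants.cs (sketch; the real file will differ)
    internal static class Constants
    {
        // Originally "fiftyonedegrees". Change this to your own storage account
        // name, since storage account names are globally unique in Azure.
        public const string AZURE_STORAGE_NAME = "yourstorageaccount";
    }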
