Cloudify and High Availability

I'm using Cloudify 2.7 with OpenStack Icehouse, for a production environment.
I would like to have High Availability of Cloudify Manager, so that the state of the deployed applications can be saved and restored in case of problems.
Which of the following solutions is correct?
1) Create 2 Cloudify Managers, in each of them set a persistencePath property in the cloud configuration file;
2) Create 2 Cloudify Managers, without setting a persistencePath property in the cloud configuration file;
3) Create 1 Cloudify Manager, with a persistencePath property in the cloud configuration file
Could I run, via a bash script on the Cloudify Manager, the Cloudify command that saves the state of the deployed applications (I mean, the command that corresponds to shutdown-managers) as a cron task?
Thanks

Option 1 is a bad one and not supported (note that Cloudify 2.7 is not supported AT ALL - it has reached end of life).
Options 2 and 3 are both good: option 2 gives you high availability, option 3 gives you manager recovery.
There is no command that saves the state of the manager - state is saved every time there is a change (and persisted to the persistencePath directory).
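As for the cron question: since there is no explicit save command, a scheduled job only makes sense for copying the persistence directory somewhere safe. A minimal sketch, assuming a hypothetical persistence path of /opt/cloudify/persistence and a backup target of /mnt/backup:

    # crontab entry (hypothetical paths): archive the persistence directory nightly at 02:00
    # note that % must be escaped as \% inside a crontab line
    0 2 * * * tar -czf /mnt/backup/cloudify-state-$(date +\%F).tar.gz /opt/cloudify/persistence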

CI / CD and repository integration for Azure ML Workspace

I am interested in knowing how I can integrate a repository with an Azure Machine Learning Workspace.
What have I tried?
I have some experience with Azure Data Factory, where I usually set up workflows as follows:
I have a dev Azure Data Factory instance that is linked to an Azure repository.
Changes are made to the repository using the code editor.
These changes are published via the adf_publish branch to the live dev instance.
I use a CI/CD pipeline with the AzureRMTemplate task to deploy the templates in the publish branch and release the changes to the production environment.
Question:
How can I achieve the same or a similar workflow with an Azure Machine Learning Workspace?
How is CI/CD done with an Azure ML Workspace?
The following workflow is the officially recommended practice for this task.
Starting with the architecture mentioned below:
We need a dedicated data store to handle the dataset.
Perform the regular code modifications using an IDE such as Jupyter Notebook or VS Code.
Train and test the model.
To register and operate on the model, deploy the model image as a web service and handle the rest of the operations from there.
Configure the CI Pipeline:
Follow the steps below to complete the procedure.
Before implementation:
- We need an account with an enabled Azure subscription
- Azure DevOps must be activated
Open the DevOps portal with SSO enabled.
Navigate to Pipelines -> Builds -> choose the model that was created -> click EDIT.
The build pipeline will look like the screen below.
We need to use the Anaconda distribution for this example to get all the dependencies.
To install the environment dependencies, check the link.
Use the Python environment, under Install Requirements, in the user setup.
In the create or get workspace task, select your account subscription, as shown in the screen below.
Save the changes made in the other tasks; all of them must be in the same subscription.
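For illustration, the script steps that such a build pipeline typically runs could look like the sketch below; the environment and script names (environment.yml, requirements.txt, train.py, register_model.py) are assumptions, not part of the official template:

    # hypothetical commands run by the CI build agent
    conda env create -f environment.yml        # create the Anaconda environment with the dependencies
    source activate model_ci_env               # environment name is an assumption
    pip install -r requirements.txt            # install any extra Python requirements
    python train.py                            # train and test the model
    python register_model.py                   # register the trained model in the workspace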
The entire CI/CD procedure and solution is documented in the link.
Document Credit: Praneet Singh Solanki

Solrcloud service keeps on restarting when created using NSSM

I'm trying to set up SolrCloud to work with my Sitecore 9.0 (Update-1) instance. I'm using 3 different VMs, each configured with 1 ZooKeeper and 1 Solr 6.6.2 instance.
We use the following parameters when creating the SolrCloud service using NSSM:
"start -cloud -p 8983 -z (servername):2181 -noprompt"
In services.msc the service appears to be running all the time.
But the event logs show the service being stopped and started every minute.
The same command works fine when executed from CMD.
Is the command I use correct for creating the SolrCloud NSSM service?
One of my workmates had this issue. The solution is to add the -f parameter, which keeps Solr running in the foreground so that NSSM can supervise the process (without it, solr.cmd launches Solr in the background and exits, and NSSM keeps restarting what it thinks is a dead service). This page describes its purpose.
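A sketch of the corrected service registration (install path, service name, and ZooKeeper host are placeholders):

    nssm install solrcloud "C:\solr-6.6.2\bin\solr.cmd" "start -f -cloud -p 8983 -z zk1:2181 -noprompt"
    nssm start solrcloud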

Cloudify Manager: how to associate FloatingIp On Bootstrap

I'm using Cloudify 2.7 with OpenStack Icehouse.
I would like to bootstrap the Cloudify Manager attached to an existing private network, and assign a public floating IP only to the Cloudify Manager at bootstrap, so that the Cloudify shell can SSH into the Cloudify Manager.
How can I do it?
The particular combination you are asking for is not possible with the OpenStack cloud driver that is packaged with Cloudify. You can see the possible networking options here:
http://getcloudify.org/guide/2.7/clouddrivers/network.html
For what you are asking for, you will need to create a custom cloud driver that uses an existing network (referred to as static network bootstrapping) and then allocate a floating IP to the manager.
This should be a fairly straight-forward change in the cloud driver's startManagementMachines() method (https://github.com/CloudifySource/cloudify/blob/master/esc/src/main/java/org/cloudifysource/esc/driver/provisioning/openstack/OpenStackCloudifyDriver.java#L440)
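As a rough illustration of what that driver change needs to accomplish, the equivalent manual operation with the Icehouse-era CLI tools would be something like the following (network name, server name, and address are placeholders):

    # allocate a floating IP from the external pool and attach it to the manager instance
    nova floating-ip-create public
    nova add-floating-ip cloudify-manager 203.0.113.10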
Please note that Cloudify 2.X has reached end-of-life and is not supported. You should check out Cloudify 3.2.

How to use Cloudera Manager to monitor the components of CDH4

I have already installed CDH4 without using Cloudera Manager. I want to use Cloudera Manager so that I can monitor the different components of CDH4. Please suggest how to use the manager now.
I have recently had to undertake the same task of importing already installed and running clusters into new Cloudera Manager instances.
I would firstly suggest taking your time to read through as much documentation as possible to fully understand the processes and key components.
As a short answer, you need to manually import all your cluster configurations and assignments into Cloudera Manager so that they can be managed. A rough outline of the plan I used is below:
Set up a MySQL instance on NEW hardware (PostgreSQL can also be used)
Create a Cloudera Manager user on all servers (must be sudo-enabled)
Set up SSH key access between the Cloudera Manager server and all other hosts (see the sketch after the links below)
Useful Docs below:
- http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_install_mysql.html
- http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_install_path_B.html
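A rough sketch of the user and SSH key preparation (host list, user name, temporary password, and sudoers policy are assumptions; adapt them to your own standards):

    # on the Cloudera Manager server: create the management user on every host and push an SSH key
    for host in $(cat cluster-hosts.txt); do
        ssh root@"$host" 'useradd -m cm-admin && echo "cm-admin:ChangeMe123" | chpasswd && echo "cm-admin ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/cm-admin && chmod 440 /etc/sudoers.d/cm-admin'
        ssh-copy-id cm-admin@"$host"
    done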
Install Cloudera Manager and agent/daemon packages on Cloudera Manager server
Shut down all services that use the cluster, and then the cluster services themselves
Save the namespace
Back up metadata and configuration files to MULTIPLE LOCATIONS (a command sketch for these steps follows this list)
Ensure the backup can be loaded by starting a single-instance NameNode
Install Cloudera Manager agent and daemon on all production servers
Start the services on the Cloudera Manager server
Access the Cloudera Manager interface
Skip Setup Wizard
Add all hosts to Cloudera Manager
Create HDFS service - DO NOT start the service
Check hosts assignments are correct
Input all configuration file parameters and verify them (this means each server's conf files need to be entered manually)
Run host inspector and configuration check
Perform the above process for the remaining services
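For the namespace and metadata steps above, the commands look roughly like this (the NameNode data directory and backup target are assumptions for your cluster):

    # enter safe mode, persist the namespace, then archive NameNode metadata and configs
    sudo -u hdfs hdfs dfsadmin -safemode enter
    sudo -u hdfs hdfs dfsadmin -saveNamespace
    tar -czf /backups/nn-metadata-$(date +%F).tar.gz /dfs/nn /etc/hadoop/conf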
I hope this provides some assistance. If you have any other questions, I will be happy to help as much as I can.
Regards,
James
I just recorded a webinar titled "Installing Cloudera Manager in < 30 mins" for Global Knowledge. Available at: http://www.globalknowledge.com/training/coursewebsem.asp?pageid=9&courseid=20221&catid=248&country=United+States (register in the upper right of page). In the video, I install CM on Ubuntu, set up the core components (Hadoop only), and then browse through some of the graphs for monitoring.

Best way to install web applications (e.g. Jira) on Unixes?

Can you share some pointers on the best way, the best practices,
to install web applications on Unixes?
Such as:
where to place the app and its databases, and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, I know one such suggestion: set up a unique user for each app.
The app in question is Jira on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone); see the sketch at the end of this answer
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
To avoid having to open extra firewall ports, I set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL.
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
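A sketch of the symlink and service registration steps on Fedora (the installation path and init script name are placeholders; the actual script comes from the instructions linked above):

    # point a stable path at the versioned installation directory
    ln -s /home/jira/atlassian-jira-enterprise-4.0-standalone /home/jira/jira
    # install the init script and register it to start at boot
    cp jira.init /etc/init.d/jira
    chmod 755 /etc/init.d/jira
    chkconfig --add jira
    chkconfig jira on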
