Cluster WSO2 AM All-in-One Active-Active Deployment

I have deployed API Manager 4.0.0 All-in-One on 2 VMs. I am using MySQL as the DB, which is on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as mentioned in this document. A few things are not clear to me from it.
Which node is the manager and which is the worker, or are they both managers or workers? What is the basic difference between a manager and a worker?
If I use NFS to share resources between the nodes, on which node do we set up NFS?
(I set up NFS on a separate VM, and both nodes mount the NFS server; is that right?)
What happens under the hood when you publish an API in version 4.0.0? I understand that when an API is published, it gets deployed on the API Gateway and the API lifecycle state changes to PUBLISHED. Which artifacts are persisted in the DB (and where), and which are persisted to the filesystem? My understanding is that they live in the <APIM_HOME>/repository/deployment/server/synapse-configs/default directory as XML files, but I don't see anything change in that directory. Where are they?
What do step 1 and step 9 mean? Why do we need them?

The manager/worker concept existed in older versions, but in APIM v4 we don't have that concept anymore. Both nodes accept requests.
APIM v4 has an inbuilt artifact synchronizer by default, so you don't need NFS for API artifacts and rate-limiting policies. But if you are using tenants and user stores, then you do need NFS; in that case, you can mount both nodes to the NFS server.
Before APIM v4, we had a file-system-based artifact approach. In the latest version, artifacts are loaded into memory instead: when you create or publish an API, the node publishes an event to itself and to the other node, and both nodes then load the API from the database into memory. That is why you no longer see new XML files appearing under synapse-configs/default.
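As a rough illustration of that flow (a conceptual sketch only, not WSO2 source code; all names here are made up):

    # Conceptual sketch of APIM v4's event-based artifact sync.
    # Both nodes share one database; artifacts live in memory, not on disk.
    class GatewayNode:
        def __init__(self, name, database, peers=None):
            self.name = name
            self.database = database      # the shared APIM database
            self.peers = peers or []      # the other node(s) in the cluster
            self.apis_in_memory = {}      # replaces synapse-configs XML files

        def publish_api(self, api_id):
            # Publishing notifies this node and every peer; nothing is
            # written under <APIM_HOME>/repository/deployment/server/.
            for node in [self] + self.peers:
                node.on_deploy_event(api_id)

        def on_deploy_event(self, api_id):
            # Each node pulls the artifact from the shared DB into memory.
            self.apis_in_memory[api_id] = self.database[api_id]
            print(f"{self.name}: loaded {api_id} from DB into memory")

    shared_db = {"PizzaAPI-1.0.0": {"context": "/pizza", "version": "1.0.0"}}
    node2 = GatewayNode("node2", shared_db)
    node1 = GatewayNode("node1", shared_db, peers=[node2])
    node1.publish_api("PizzaAPI-1.0.0")   # both nodes load the API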
Step 1: You can't use the default keystores shipped with the product in production. You have to replace them.
Step 9: This is for distributed caching.

How to deploy a Corda project to the server

I have made a simple project in Corda. My project has 4 nodes, including a notary, and also Spring Boot APIs in the clients folder. I don't know how to deploy my project to a server. I looked at the Corda docs, but that tutorial was for a single node. So my question is: how do I deploy a Corda project with multiple nodes to a server, along with the Spring Boot APIs? Can anyone help me with this?
There are actually some good YouTube videos on this (from me!).
You can find one here: https://www.youtube.com/watch?v=NtVbkUCSt7s
There are other videos there for GCP and Azure as well.
Essentially, you need to make sure that the p2pAddress in your Corda node's config specifies the IP address of the machine at your cloud provider of choice.
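For example, the relevant fragment of a node's node.conf would look something like this (the address below is a placeholder; substitute your VM's public IP and an open firewall port):

    // node.conf fragment (HOCON) -- placeholder address, use your own
    p2pAddress = "34.123.45.67:10002"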

How to self-deploy web applications running on AWS EC2 Spot Windows instances?

My ASP.NET site runs on a farm of Windows EC2 web servers. Due to a recent traffic surge, I switched to Spot instances to control costs. Spot instances are created from an AMI whenever the hourly rate is below a set price. The web servers do not store any data, so creating and terminating them on the fly is not an issue. So far the website has been running fine.
The problem is deploying updates. The application is updated most days.
Before the switch to a Spot fleet, updates were deployed as follows: (1) a CI server would build and deploy the site to a staging server; (2) I would do a staggered deployment to the web farm using a simple xcopy of files to mapped drives.
After switching to Spot instances, the process is: (1) {no change}; (2) deploy the update to one of the Spot instances; (3) create a new AMI from that deployment; (4) request a new Spot fleet using the new AMI; (5) terminate the old Spot fleet. (The AMI used for a Spot request cannot be changed.)
Is there a way to simplify this process by enabling the nodes to either self-configure or use a shared drive (as Microsoft Azure does)? The site runs the Umbraco CMS, which supports multiple instances running from the same physical location, but I ran into security errors trying to run a .NET application from a network share.
Bonus question: how can I auto-add new Spot instances to the load balancer? Presumably, if there were a script that fetched the latest version of the application, it could add the instance to the load balancer once it was done.
I have a somewhat similar setup (except I don't use Spot instances and I have Linux machines); here is the general idea:
CI creates latest.package.zip and uploads it to a designated S3 bucket.
CI sequentially triggers the update script on the current live instances, which downloads the latest package from S3 and installs/restarts the service.
New instances are launched in an Auto Scaling group attached to the load balancer, with an IAM role that allows access to the S3 bucket and a user data script that triggers the update script on first boot.
This should all be doable with Windows Spot instances, I think; see the sketch below.
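A minimal sketch of such an update/bootstrap script in Python with boto3 (the bucket, key, paths, and load balancer name below are all hypothetical; a real script would also stop/start the IIS site around the copy). Note that an Auto Scaling group attached to a load balancer registers instances automatically; the manual registration at the end is only needed for standalone Spot instances, which also answers the bonus question:

    import io
    import zipfile
    from urllib.request import urlopen

    import boto3  # assumes an instance role granting S3 read and ELB access

    BUCKET = "my-deploy-bucket"          # hypothetical bucket name
    PACKAGE_KEY = "latest.package.zip"   # key the CI server uploads to
    DEPLOY_DIR = r"C:\inetpub\mysite"    # hypothetical IIS site folder
    ELB_NAME = "my-web-farm"             # hypothetical classic ELB name

    def deploy_latest_package():
        # Download the package the CI server uploaded and unzip it in place.
        body = boto3.client("s3").get_object(
            Bucket=BUCKET, Key=PACKAGE_KEY
        )["Body"].read()
        with zipfile.ZipFile(io.BytesIO(body)) as zf:
            zf.extractall(DEPLOY_DIR)

    def register_with_load_balancer():
        # The instance discovers its own ID from the metadata endpoint,
        # then adds itself to the (classic) load balancer.
        instance_id = urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ).read().decode()
        boto3.client("elb").register_instances_with_load_balancer(
            LoadBalancerName=ELB_NAME,
            Instances=[{"InstanceId": instance_id}],
        )

    if __name__ == "__main__":
        deploy_latest_package()
        register_with_load_balancer()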

Azure Web Sites - multiple versioned deployments

We have multiple clients and we use Azure Web Sites to host our web application. Currently, when we release a new version of our software, we have to upgrade all of our clients to it at once.
We would like to be able to upgrade only a subset of clients when we release a new version. This would let us verify that the new release works properly before we move all of our clients onto it. We would also like to offer a beta option to selected clients, so that they can access new features of our software while being aware that the version they are using is still in 'beta'.
When we deploy a new version, we would like to create a web site just for that version while leaving the other clients on the more stable previous version. To do this, we are thinking of writing a reverse proxy that directs traffic to the different versioned web sites depending on the client; a rough sketch of the idea follows.
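Something along these lines (an illustrative sketch only; the hostnames, client header, and routing table are all made up):

    # Toy reverse proxy that pins each client to a versioned site.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    CLIENT_SITES = {                      # client id -> versioned site
        "acme":   "https://myapp-v2-beta.azurewebsites.net",
        "globex": "https://myapp-v1.azurewebsites.net",
    }
    DEFAULT_SITE = "https://myapp-v1.azurewebsites.net"

    class VersionRouter(BaseHTTPRequestHandler):
        def do_GET(self):
            client = self.headers.get("X-Client-Id", "")
            upstream = CLIENT_SITES.get(client, DEFAULT_SITE)
            with urlopen(upstream + self.path) as resp:  # forward upstream
                body = resp.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), VersionRouter).serve_forever()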
Can we host multiple versions of the web site under the same Azure web site (as IIS directories)? The documentation I have read does not mention being able to build multiple versions of the web site from different code bases.
Is there a way to set up the build so that each new version is deployed to a directory on the same Azure web site, so we can effectively host multiple versions of our app under one Azure web site?
We could deploy every versioned build to a new Azure web site, but this could get quite expensive, as we run two instances of each to maintain a good SLA. It is feasible that we could end up with ten versions in the wild at once; running 20 Azure web sites to support them would be costly. How can we save on costs and still give our clients a good experience?
You can have up to 5 deployment slots, including production, on Azure Web Apps. Each slot can use a different branch of your source control system, such as Git or TFS. If you use either of these two, deployment is also automatic (continuous deployment), and you can swap slots at any time, very quickly, with minimal to no downtime. Each slot has its own URL for external access.
To save costs, you can run multiple web apps on the same hosting plan; there is no limit on the number of web apps running on the same hosting plan. Each hosting plan can scale up to 10 small/medium/large instances.
Set up staging environments for web apps in Azure App Service
https://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/
Azure App Service plans in-depth overview
https://azure.microsoft.com/en-us/documentation/articles/azure-web-sites-web-hosting-plans-in-depth-overview/
Yes, this is possible. In the management portal, you need to configure the details for the IIS virtual directory or application in the web site's configuration.
Ref - http://blogs.msdn.com/b/tomholl/archive/2014/09/22/deploying-multiple-virtual-directories-to-a-single-azure-website.aspx

How does Artifactory interact with multiple web containers

I am new to Artifactory. I have it installed on my local machine, deployed on both a standard Tomcat web container and a WebLogic web container. I want to know how Artifactory stores artifacts: would they live inside the web container, or be stored on my local machine?
Also, is it possible to share the storage? That is, if I deploy an artifact while running Artifactory on the WebLogic server, can I configure Artifactory so that the artifact is still accessible when I run it on the Tomcat container?
Artifactory stores the actual binaries on disk (the recommended default) and metadata about the binaries in a JDBC-compliant database (Derby by default, but other databases are supported: http://wiki.jfrog.org/confluence/display/RTF/Changing+the+Default+Storage).
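With the defaults, that means roughly the following layout on disk (paths shown for illustration only; they can vary by version and configuration):

    <ARTIFACTORY_HOME>/data/filestore   # checksum-named binary files
    <ARTIFACTORY_HOME>/data/derby       # embedded Derby metadata database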
Usually, you need only one Artifactory instance. Even though technically you could configure multiple instances of Artifactory to use the same artifact directory and the same metadata database connection, this setup will almost certainly corrupt both the artifact storage and the metadata database through concurrent writes.
DO NOT DO IT.
Artifactory stores data in a JDBC-compliant database; I believe it's Derby by default, but you can use MySQL, etc.: http://wiki.jfrog.org/confluence/display/RTF20/Running+Artifactory+on+MySQL
Usually, you need only one Artifactory instance. Even though it should work across multiple containers if you share the data through the same database, I would advise you to use a single instance.

Deploy web site to Azure and traditional IIS

I currently work with a legacy ASP.NET web application, and one of the requirements going forward is that it be deployable to Windows Azure.
I would like to know how difficult it will be to manage deployment to both Azure and a traditional IIS web server.
Azure seems to require a specific, customized version of a web application project. Is it possible to deploy the customized web application to a standard IIS instance once it has been converted?
EDIT:
It is an ASP.NET Web Application rather than a Web Site (it compiles everything into one DLL).
UPDATE:
In the end, due to the amount of work involved in converting the application to work in Azure, and the cost of Azure compared with other cloud solutions, it was decided to go with a traditional cloud-hosted virtual server.
And thank you for the really good answers.
Whether or not you can deploy your application to Azure almost as-is depends a lot on how your application works. Azure pretty much requires that your application be stateless. If it's a plain vanilla web application that keeps data only in the session or application cache and saves data only to a database, then you can deploy it to Azure.
If you have stateful services running, like background threads (which is bad practice anyway), or if you save data to the file system (beyond temporary caching), then you may have issues. Really, the issues in moving to Azure are the same as in moving to any multi-server, load-balanced solution. One caveat is permanent storage.
If you need to store data in a place other than the database, then you're best off working with Azure's storage solution, which has an API and client library for storing binary data (blobs), key/value data (they call it tables, but really, it's not tables), and queues. They also have a transparent blob-as-file-system option for compatibility. If you want to use these in an app that is also used outside of Azure, you need to write an extra layer between your code and the Azure client library that supports both the Azure services and standard local services. The Azure SDK does include emulators for the Azure services, but they're definitely not meant for production use.
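A minimal sketch of that extra layer in Python (this answer predates the current SDKs; the azure-storage-blob package and all names below are used purely for illustration):

    # Storage abstraction that works both inside and outside Azure.
    # LocalDiskStore serves a traditional IIS deployment; AzureBlobStore
    # delegates to the Azure SDK. Paths/names are illustrative.
    from pathlib import Path

    class LocalDiskStore:
        def __init__(self, root):
            self.root = Path(root)

        def put(self, name, data: bytes):
            path = self.root / name
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

        def get(self, name) -> bytes:
            return (self.root / name).read_bytes()

    class AzureBlobStore:
        def __init__(self, connection_string, container):
            # requires the azure-storage-blob package
            from azure.storage.blob import BlobServiceClient
            service = BlobServiceClient.from_connection_string(connection_string)
            self.container = service.get_container_client(container)

        def put(self, name, data: bytes):
            self.container.upload_blob(name, data, overwrite=True)

        def get(self, name) -> bytes:
            return self.container.download_blob(name).readall()

    # Application code depends only on put()/get(), so the same code runs
    # on a plain IIS box (LocalDiskStore) or in Azure (AzureBlobStore).
    store = LocalDiskStore(r"C:\app-data")
    store.put("uploads/report.txt", b"hello")
    print(store.get("uploads/report.txt"))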
As far as the mechanics of Azure-specific projects go, that is actually not that difficult. Yes, you need to create an Azure-specific project in your solution that defines the Web Role and what gets deployed, but it will reference your existing Web Application, not the other way around. You can deploy the Azure Web Role to Azure, or you can continue to deploy the existing application to IIS normally and concurrently.
Web Site, Web Application, MVC: it really doesn't make much of a difference. It doesn't even have to be .NET; it can be PHP or Java or whatever you want to put on your VM. It'll all work the same as far as Azure is concerned.
MS likes to push Azure as a Platform-as-a-Service (PaaS) solution, where they offer a ton of services and you run apps on their standard platform, and contrasts that with Amazon AWS, which they call Infrastructure-as-a-Service (IaaS), i.e. "just" a virtual machine. However, MS is really just as much an IaaS solution as AWS, perhaps even more so. The main difference between AWS and Azure is that AWS lets you choose what to install on your VM, while with Azure you have to use Windows Server 2008 R2 as the basis for your VM (though you can customize the VM image to install custom software on top of Windows). With both Azure and AWS, the hosts offer additional PaaS services you can take advantage of for data storage and message routing. AWS also offers tons of extra services, like video streaming.
Also note that with Azure (and AWS, I think) you can use the services they offer even from a non-hosted application. If you want to use Azure's data storage from a non-Azure application, you can do that; it's just HTTP REST calls to get/put data. The only difference is that you pay for data transfer in/out between the datacenter and your non-datacenter-hosted application, which would be free if the app were also inside the datacenter (only the data in/out is free in-datacenter; you still pay storage and transaction fees).
A few things:
Samuel Neff's answer mentioned mounting a file system in a blob (a Cloud Drive). Only one instance may lock this cloud drive for writing, so it does not behave like a network file share. You'll need to plan for this.
You'll need to integrate with the Windows Azure diagnostics subsystem, to gain visibility into your app's run state (e.g. performance counters, trace logs, etc.).
If there are 3rd-party apps that your web app depends on, you'll need to install these. These actually get installed as part of the role instance's boot process, either via your OnStart() event handler or as a startup task. The latter allows for admin-level installs (including registry changes, COM component installations, etc.). You'll need to carefully manage these installations, as they impact the boot time of the instance.
For an ASP.NET app, you'll need to think about session state. In-proc session state won't work, because each instance will have its own state store in memory. The SQL Azure session state provider doesn't have background cleanup agents, so you'd need to build that into your web or worker role instance (see this blog post by the SQL Azure team for the implementation). The best option is to use the AppFabric Cache, a new service that just went into production; this cache-as-a-service also provides a custom session state provider for ASP.NET. Note: as of today, the AppFabric Cache service is only accessible via a .NET interface; there's no REST interface for it (all the other storage services, i.e. tables, blobs, and queues, have a REST interface). .NET, Java, and PHP all have storage client libraries; Ruby has one from the open source community.
You'll have to manage scaling out to more than one instance, when the need arises. This is not a built-in service today, but there are 3rd-party services such as ParaLeap's AzureWatch. There's also Microsoft's System Center Operations Manager, which now has Windows Azure monitoring support. You'll also need to handle scale-back situations, where you reduce the number of server instances.
I have some additional details in an answer for a similar StackOverflow question, here.
I have not tried Windows Azure Migration Scanner personally, but if it works as advertised, this would really come in handy.
