I am trying to do an active-active deployment and went through this guide: https://docs.wso2.com/display/AM210/Configuring+an+Active-Active+Deployment
My question is: if I want to maintain two manager nodes, how do I sync the server directory? In the tutorial mentioned above, one node is a manager and the other is a worker, so the directory can be synced with rsync.
But how do I sync up the directories between two manager instances?
Thanks
You can't have active-active manager nodes. They should be active-passive.
I have already provisioned a Databricks workspace, and now I need to set the "Deploy Azure Databricks workspace in your own Virtual Network (VNet)" option to Yes, because I need to put my Databricks instance behind a VNet.
How can I change this on the already configured instance? Is there any way to do it? The Networking section is greyed out because I previously selected No for the VNet option.
You can't change the networking configuration of an already deployed workspace. You need to create a new one with the correct configuration and then migrate from your existing workspace.
I have deployed API Manager 4.0.0 All-in-one on 2 VMs. I am using MySQL as the DB, which is on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as mentioned in this document. There are a few things which are not clear to me from this document.
Which node is the manager and which is the worker, or are they both managers or workers? What is the basic difference between a manager and a worker?
If I use NFS to share the resources between all the nodes, on which node do we set up NFS?
(I set up NFS on a different VM, and both nodes are mounted to the NFS server; is that right?)
What happens under the hood when you publish an API in version 4.0.0? I understand that when an API is published, it gets deployed on the API Gateway and the API lifecycle state is changed to PUBLISHED. Which artifacts are persisted in the DB (and where), and which artifacts are persisted to the filesystem? (My understanding is that they are located in the <APIM_HOME>/repository/deployment/server/synapse-configs/default directory as XMLs, but I don't notice anything changing in that directory; where are they?)
What do step 1 and step 9 mean? Why do we need them?
We had this manager-worker concept in the older versions, but in APIM v4 we don't have that concept. Both nodes accept requests.
In APIM v4 there is a built-in artifact synchronizer by default, so you don't need NFS for API artifacts and rate-limiting policies. But if you are using tenants and user stores, then you do need NFS. In that case, you can mount both nodes to the NFS server.
Before APIM v4, we had a file-system-based artifact approach, but in the latest version artifacts are loaded into memory. When you create an API, the node publishes an event to itself and to the other node, and then both nodes load the API from the database into memory.
Step 1: You can't use the default keystores shipped with the product in production. You have to replace them (see the sketch after these notes).
Step 9: This is for distributed caching.
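For Step 1, replacing the keystores usually comes down to generating your own keystore, placing it under repository/resources/security, and pointing deployment.toml at it. A minimal sketch, assuming a custom keystore named mykeystore.jks; the file name, alias, and passwords are placeholders, not values from the question above:

```toml
# <APIM_HOME>/repository/conf/deployment.toml (placeholder values)
[keystore.primary]
file_name = "mykeystore.jks"   # your own keystore under repository/resources/security
type = "JKS"
alias = "mykey"
password = "changeit"
key_password = "changeit"
```

The same change should be made consistently on both nodes so that signed artifacts and tokens are accepted by either one.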
I have made a simple project in Corda. My project has 4 nodes, including the notary, and also Spring Boot APIs in the clients folder. I don't know how to deploy my project to a server. I looked at the Corda docs, but that tutorial was for a single node. So my question is: how do I deploy a Corda project with multiple nodes to a server, along with the Spring Boot APIs? Can anyone help me with this?
There are actually some good YouTube videos on this (from me!).
You can find one here: https://www.youtube.com/watch?v=NtVbkUCSt7s
There are other videos there for GCP and Azure as well.
Essentially, you need to make sure that the p2pAddress in your Corda node config specifies the IP address of the machine at your cloud provider of choice.
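For example, a minimal node.conf fragment might look like this; the IP address and ports are placeholders for your own cloud VM and firewall rules, not values from the question above:

```
// node.conf (illustrative values only)
p2pAddress = "203.0.113.10:10005"   // public IP of the cloud VM, reachable by the other nodes
rpcSettings {
    address = "0.0.0.0:10006"       // RPC endpoint your Spring Boot client connects to
    adminAddress = "0.0.0.0:10046"
}
```

You would also need to open the P2P and RPC ports in the VM's firewall or security group so the other nodes and the Spring Boot clients can reach them.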
We occasionally have the need to restart services that are deployed with AWS CodeDeploy. Is it possible to have the CodeDeploy agent do this directly, without having to create a new deployment?
The AWS service you're looking for is AWS Systems Manager. You can run arbitrary commands or scripts on instances with it. All recent Ubuntu and Amazon Linux instances have the AWS SSM agent installed, but if you have an older instance, you'll have to install the SSM agent manually or through your configuration manager.
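A minimal boto3 sketch of that approach; the region, instance ID, document choice, and service name are placeholders you would replace with your own values:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")   # region is a placeholder

# Restart a service on a managed instance via SSM Run Command,
# without creating a new CodeDeploy deployment.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],              # placeholder instance ID
    DocumentName="AWS-RunShellScript",                # use AWS-RunPowerShellScript on Windows
    Parameters={"commands": ["sudo systemctl restart my-service"]},  # placeholder service
)
print(response["Command"]["CommandId"])
```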
No, you need to have a deployment to restart. The agent does not take actions on its own. It receives commands from the CodeDeploy service.
Depending on your use case, you can have your application emit a CloudWatch event and have that trigger a deployment in the deployment group. Note that it will create a deployment that deploys to the entire fleet.
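One common way to wire this up (an assumption on my part, not something prescribed above) is to point the CloudWatch/EventBridge rule at a small Lambda function that redeploys the deployment group's current revision. A sketch with boto3; the application and group names are placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def handler(event, context):
    """Invoked by the CloudWatch (EventBridge) rule; redeploys the group's current revision."""
    app = "my-application"             # placeholder CodeDeploy application name
    group = "my-deployment-group"      # placeholder deployment group name

    # Reuse whatever revision the deployment group currently targets.
    info = codedeploy.get_deployment_group(
        applicationName=app, deploymentGroupName=group
    )["deploymentGroupInfo"]

    deployment = codedeploy.create_deployment(
        applicationName=app,
        deploymentGroupName=group,
        revision=info["targetRevision"],   # same revision; the deployment re-runs the hooks
        description="Restart triggered by an application event",
    )
    return deployment["deploymentId"]
```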
To expand on eternaltyro's answer, you could leverage CodeDeploy's CLI tool via SSM to run the same CodeDeploy event hooks that were/are used to start and stop your application.
My ASP.NET site runs as a farm of Windows EC2 web servers. Due to a recent traffic surge, I switched to Spot instances to control costs. Spot instances are created from an AMI when hourly rates are below a set price. The web servers do not store any data, so creating and terminating them on the fly is not an issue. So far the website has been running fine.
The problem is deploying updates. The application is updated most days.
Before the switch to a Spot fleet, updates were deployed as follows: (1) a CI server would build and deploy the site to a staging server; (2) I would do a staggered deployment to the web farm using a simple xcopy of files to mapped drives.
After switching to Spot instances, the process is: (1) {no change}; (2) deploy the update to one of the Spot instances; (3) create a new AMI from that deployment; (4) request a new Spot fleet using the new AMI; (5) terminate the old Spot fleet. (The AMI used for a Spot request cannot be changed.)
Is there a way to simplify this process by enabling the nodes to either self-configure or use a shared drive (as Microsoft Azure does)? The site runs the Umbraco CMS, which supports multiple instances running from the same physical location, but I ran into security errors trying to run a .NET application from a network share.
Bonus question: how can I auto-add new Spot instances to the load balancer? Presumably, if there were a script that fetched the latest version of the application, it could add the instance to the load balancer when it is done.
I have a somewhat similar setup (except I don't use Spot instances and I have Linux machines); here is the general idea:
The CI server creates latest.package.zip and uploads it to a designated S3 bucket.
The CI server sequentially triggers the update script on the current live instances, which downloads the latest package from S3 and installs/restarts the service (see the sketch below).
New instances are launched in an Auto Scaling group attached to the load balancer, with an IAM role that allows access to the S3 bucket and a user data script that triggers the update script on initial boot.
This should all be doable with Windows Spot instances, I think.
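If you adapt this to Windows, the per-instance update script could be roughly the sketch below (boto3 plus IIS's appcmd; the bucket, package name, paths, and site name are all placeholders for your own setup):

```python
import subprocess
import zipfile

import boto3

BUCKET = "my-deploy-bucket"                  # placeholder: bucket the CI server uploads to
PACKAGE = "latest.package.zip"               # package name from the CI step above
LOCAL_ZIP = r"C:\deploy\latest.package.zip"  # placeholder: local download path
SITE_ROOT = r"C:\inetpub\wwwroot\mysite"     # placeholder: IIS site root
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# 1. Pull the latest build from S3 (the instance's IAM role grants read access).
s3 = boto3.client("s3")
s3.download_file(BUCKET, PACKAGE, LOCAL_ZIP)

# 2. Stop the IIS site, unpack the new build over the old one, start the site again.
subprocess.run([APPCMD, "stop", "site", "/site.name:MySite"], check=True)
with zipfile.ZipFile(LOCAL_ZIP) as archive:
    archive.extractall(SITE_ROOT)
subprocess.run([APPCMD, "start", "site", "/site.name:MySite"], check=True)
```

For the bonus question: if the instances are launched through an Auto Scaling group that is attached to the load balancer, registration happens automatically, so no extra script is needed for that part.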