Migrate existing instance to new network

In one of my GCP projects, I have created a new network with heightened security settings compared to the default network created for any project. I would like to migrate my existing instances from the default network to this new network without deleting them.
I'm under the impression that this can be accomplished by removing the existing access config, and then adding a new one that will be associated with the new network. So:
gcloud compute delete-access-config <name of instance> access-config-name=<name of config>
gcloud compute add-access-config <name of instance> access-config-name=<name of new config>
However, this only seems to affect the external IP of the instance and not the network itself. How would I go about removing the instance from the default network and moving it to the new one?

The help text for these commands does not have a "name" flag. Did you mean "--access-config-name"?
https://cloud.google.com/sdk/gcloud/reference/compute/instances/add-access-config
https://cloud.google.com/sdk/gcloud/reference/compute/instances/delete-access-config
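With the corrected flag, the pair of commands looks like this (the instance, zone and config names are placeholders, and the subcommands live under gcloud compute instances, as the pages above show). Note that, as observed in the question, this only swaps the external IP; it does not move the instance to a different network:
gcloud compute instances delete-access-config my-instance --access-config-name="External NAT" --zone=us-central1-a
gcloud compute instances add-access-config my-instance --access-config-name="External NAT" --zone=us-central1-a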

Related

Corda Accounts - Ability to move an account to a different host node

In the Corda accounts library, in order to change the host "ownership" of an account from one node to another, one would need to change the Host in the AccountInfo state to the new host (node), and share all vault states relevant to this account.
AccountInfo doesn't have an Update command (AccountInfo commands), meaning you cannot change the host once the account is created.
Has this feature been excluded for any reason? Are there any plans to introduce an Update command (with supporting flows)?
What steps would be involved in a move/transfer (host ownership)? And what are the potential caveats around this implementation?
Amol, there will be work done on this in the future, but as of now there are two options which could help you resolve your issue.
Set up a new account on the new node. Generate new key(s) for the new account. Spend all the states from the old account to this new account.
If you control all the keys used to participate in your states and can migrate them to the new node, then you just need to import the key pairs somehow and then copy all the states across from the old node to the new node.
Hope that helps.
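As a very rough sketch of option 1 from the nodes' interactive shell: CreateAccount comes from the accounts workflows CorDapp, while the move flow below is purely hypothetical and stands in for whatever CorDapp-specific flow actually spends your states:
# On the new host node: create the replacement account
flow start CreateAccount name: "trader-42"
# On the old host node: spend each relevant state to the new account
# (MoveAssetToAccount is a hypothetical name; use your CorDapp's own move/spend flow)
flow start MoveAssetToAccount assetId: "TICKET-001", recipientAccount: "trader-42"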

Web.Config transforms for Multi-Tenant deployment of WebForms app in docker over AWS ECS

Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS, and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy goes over as a central root application, so the route becomes / there.
Challenge
Since the root image is going to be the same, how can the web.config be overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage at the ECS level that can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar cases.
With the problem statement tied to having a single root image and any customization applied at runtime, we knew there needed to be a transformation of the web.config at the time the corresponding container loads.
The solution was to use a PowerShell script that reads the web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in via custom environment variables in ECS, and the web.config was updated so that its keys include the prefix.
Now, since a docker container can have only a single entry point, a new base image was created which instantiates an IIS server and calls a PowerShell script at startup. That script invokes this transformation script and then sets the ServiceMonitor on w3wp.
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
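A minimal sketch of this kind of startup transform, assuming an APPSETTING_ prefix on the environment variables and the default IIS site path (both are illustrative choices, not the exact script described above):
$configPath = 'C:\inetpub\wwwroot\Web.config'
[xml]$config = Get-Content -Raw $configPath
# Copy every APPSETTING_-prefixed environment variable over the matching appSettings key
Get-ChildItem env: | Where-Object { $_.Name -like 'APPSETTING_*' } | ForEach-Object {
    $key = $_.Name.Substring('APPSETTING_'.Length)
    $value = $_.Value
    $node = $config.configuration.appSettings.add | Where-Object { $_.key -eq $key }
    if ($node) { $node.value = $value }
}
$config.Save($configPath)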
I would use environment variables, as the OP suggests, with a startup transform; however, I want to make the point that you do not want sensitive information like DB passwords in ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
To simplify matters, I simply use secrets for everything, although you can choose to encrypt only the sensitive information and leave the rest in the clear.
I dynamically add the secrets for a given application into my task definitions at deploy time by looking up the 'secrets' for that app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will automatically pick up any newly defined secrets and inject them into the task definition (or remove ones that have been retired).
Sample ruby code for creating task definition:
# Fetch all parameters under the app's namespace in Parameter Store
# (get_parameters_by_path returns at most 10 per page; follow next_token for more)
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# Map each parameter to an ECS secret: the last path segment becomes the ENV variable name
secrets = params.map{ |p| { name: p.name.split("/")[-1], value_from: p.arn } }
# Attach the secrets to the first container in the task definition
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
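For reference, parameters like those can be created ahead of time with the AWS CLI; a sketch, assuming the same /production/my_app namespace (add --key-id to encrypt with a specific KMS key instead of the account default):
aws ssm put-parameter --name /production/my_app/DB_HOSTNAME --value db.example.com --type String
aws ssm put-parameter --name /production/my_app/DB_PASSWORD --value 'super-secret' --type SecureString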
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

Adding new party to the existing network

I have a Corda network (Corda running as a system service, v3.1) in devMode; the network structure goes like this:
Party A
Party B
Party C
NotaryA (simple notary)
NotaryB (validating notary)
Oracle
The network runs perfectly fine until I try to add one more party to it (Party D). These are the steps I took to add the new party:
Generate nodeInfo, certificates etc. using the network-bootstrapper tool for the new party
Place the node folder parallel to the other node folders and add the required CorDapps to the cordapps folder inside the newly added party's folder
Share the new node's nodeInfo with all the other nodes, and vice versa
This didn't work, probably because the newly added node has a different network-parameters file than the other nodes and has no information about the notary nodes.
So I tried another way:
Keep the node.conf files of all the nodes together with the node.conf of the new node, and generate nodeInfo, network-parameters etc. for all nodes.
Place the folder of the new node parallel to the other nodes, and replace the network-parameters, additional-node-infos folders and nodeInfo files of the old nodes with the newly created files and folders.
Add the required CorDapps to each node's cordapps folder
But this didn't work either.
Can you help me with the correct steps for adding a new node to the existing network?
The bootstrapper can only generate information for a set of nodes that are on the same machine. If a node needs to be added to a bootstrapped network, all the nodes need to be collected back together on the same machine.
The instructions for adding a node to a bootstrapped network are available here: https://docs.corda.net/head/network-bootstrapper.html#adding-a-new-node-to-the-network.
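In practice that means collecting every node directory (Party A through D, both notaries and the oracle) under one folder on a single machine and re-running the bootstrapper over it. A sketch; the jar name, and whether the directory is passed as a flag or a positional argument, vary by Corda version:
# all node directories collected under ./nodes on one machine
java -jar network-bootstrapper.jar --dir ./nodes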

Can you FTP into an EC2 instance if you opted out of creating a key pair when you generated it?

I followed this AWS tutorial to get a Wordpress site up and running, but it instructed me not to use the key pair option, so now I cannot follow those instructions to FTP in and make simple CSS etc. changes.
Before I blow up the whole instance, am I missing an approach that can make FTP possible?
If you skipped creating a key pair during instance launch, you can't connect to it. The only way to connect to that instance with (S)FTP now is to put a working key on the disk:
Stop the instance.
Detach the EBS volume and attach it to the instance that you can connect to.
Mount the volume and put a public key in the .ssh folder (in authorized_keys) in the home directory of the user named bitnami.
Unmount the volume, detach it and attach it back to the original instance.
It seems easier to just recreate the instance, this time with a key pair.
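If you do go the volume route, step 3 might look roughly like this on the helper instance (the device name and mount point are assumptions that depend on how you attach the volume):
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
cat my-key.pub | sudo tee -a /mnt/rescue/home/bitnami/.ssh/authorized_keys
sudo umount /mnt/rescue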

How to migrate Wordpress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't allow me to change the instance type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my Wordpress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySql data off the first instance and re-import into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: By the way, should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
Scroll down to the "Disks" section and un-check "Delete boot disk when instance is deleted".
Delete the VM in question. Note that the disk, named after the instance, will remain.
Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk kept in step 3, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware / performance.
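If you prefer the command line, the same sequence can be done with gcloud; instance, disk and zone names below are placeholders:
# keep the boot disk when the instance is deleted
gcloud compute instances set-disk-auto-delete my-wp-vm --disk=my-wp-vm --no-auto-delete --zone=us-central1-a
gcloud compute instances delete my-wp-vm --keep-disks=boot --zone=us-central1-a
# recreate on the existing disk with a bigger machine type
gcloud compute instances create my-wp-vm --machine-type=n1-standard-2 --disk=name=my-wp-vm,boot=yes --zone=us-central1-a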
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual usage. A few considerations when setting up this kind of instance:
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database.
configure Cloud SQL to use SSL certificates.
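Both points can also be applied from the command line; the instance name and IP below are placeholders:
gcloud sql instances patch my-sql-instance --authorized-networks=203.0.113.10/32
gcloud sql instances patch my-sql-instance --require-ssl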
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/ which is all of the wordpress files. It's mounted on the instance at /wordpress
The boot disk contains everything else, including the MySQL database that was created for the Wordpress installation.
