Can I set up an Indy node to read from the Sovrin Mainnet? - hyperledger-indy

Is it possible to set up an Indy node using the Sovrin pool's genesis files in order to read from the Sovrin mainnet?
I would like to sync a node to the Sovrin mainnet in order to resolve Sovrin DIDs, e.g. "did:sov:1a2b3c4d5e6f7g". Is it possible for anyone to do this? Or do I need special permission to connect to other nodes on the network?
If it's not possible, how would I go about resolving arbitrary DIDs?

You can talk to the Indy mainnet nodes as a client; you do not need to set up your own node.
You can use the indy-cli program to talk to the validator nodes in the ledger. You need the pool genesis transactions to connect to the pool, and then you can issue get-nym commands.
You can get the genesis transactions from this directory. Then in indy-cli you will use commands like this:
pool create builder gen_txn_file=pool_transactions_builder_genesis
pool connect builder
ledger get-nym did=did:sov:3WzFTEzmU95mAF4CHHi9np
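For the Sovrin mainnet specifically, the flow looks the same with the live genesis file. A sketch follows; the genesis file URL is an assumption based on the sovrin-foundation GitHub repository, so verify it before relying on it:
# Fetch the Sovrin mainnet pool genesis transactions (URL assumed;
# check the sovrin-foundation/sovrin repository for the current location).
curl -O https://raw.githubusercontent.com/sovrin-foundation/sovrin/master/sovrin/pool_transactions_live_genesis

# Then, inside indy-cli (ledger reads are public, so no wallet or DID
# of your own should be needed):
pool create mainnet gen_txn_file=pool_transactions_live_genesis
pool connect mainnet
ledger get-nym did=did:sov:3WzFTEzmU95mAF4CHHi9np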

Related

Is it possible to set up MySQL replication with binlog files generated from server A (which we are considering as the master) to server B

We are migrating from Magento Community to Magento Cloud for one of our projects, and we need to access the DB from our custom-developed CRM.
Unfortunately, Magento Cloud does not support DB replication: binlogs are enabled, but they will not create a replication user or set up a server ID. The binlog files can, however, be synced to our CRM server periodically.
Can we use these binlog files to replicate the database, or is there a workaround that achieves the same thing?
We have tried a tunnel setup, but query execution time over the tunnel is too high, which would affect our CRM performance badly.
We would also like to know whether there are any other ways to access the Magento Cloud DB from our CRM without a performance lag.
Thanks in advance for your suggestions.
Yes, it is possible, but it may be a little fiddly in the setup you are describing. You can replay the binlogs as relay logs. Have a look at this article for more details:
https://lefred.be/content/howto-make-mysql-point-in-time-recovery-faster/
Specifically, these parts are relevant (you'll need to edit them appropriately):
# Copy each synced binlog into place, renamed as a relay log,
# keeping the numeric extension (.000001, .000002, ...).
[root@mysql1 mysql]# for i in $(ls /tmp/binlogs/*.0*)
do
  ext=$(echo $i | cut -d'.' -f2);
  cp $i mysql1-relay-bin.$ext;
done
# Rebuild the relay log index so the SQL thread can find the files.
[root@mysql1 mysql]# ls ./mysql1-relay-bin.0* > mysql1-relay-bin.index
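Once the relay logs are in place, the article points the server's SQL thread at them, roughly like this (the file name is illustrative and MASTER_HOST is a dummy value; check the article for the exact steps):
-- Tell the server to treat the copied files as its relay logs, then
-- apply them with the SQL thread only (no I/O thread connects anywhere).
CHANGE MASTER TO RELAY_LOG_FILE='mysql1-relay-bin.000001',
                 RELAY_LOG_POS=4,
                 MASTER_HOST='dummy';
START SLAVE SQL_THREAD;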

Multiple Config Files for Flyway

Let's say we have 10-15 microservices running, each with its own database. How do we maintain all 15 config files on a single deployment server, given that every DB server has different credentials, IP addresses and URLs? Can we manage all of these in a single file, or will we have to create a separate DB config file per microservice?
Each configuration refers to a single database, so you need one file per DB for properties such as the URL and credentials. Probably the best way to handle this is to have a repository of configurations named after their DBs, with your deployment mechanism picking the appropriate one. However, you can keep a base configuration for common settings and have only the connection details per database, using the configFiles parameter, e.g.:
flyway migrate -configFiles=/usr/configurations/base.conf,/usr/configurations/db1.conf
will start from the base configuration and override any settings that also appear in the db1-specific configuration.
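As a sketch, the split could look like this; the property names are standard Flyway settings, but the URL, user, and password values are placeholders:
# base.conf - settings shared by every service
flyway.locations=filesystem:/usr/migrations
flyway.connectRetries=3

# db1.conf - connection details for one service's database
flyway.url=jdbc:mysql://10.0.0.5:3306/db1
flyway.user=db1_user
flyway.password=db1_secret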

How to deploy multiple nodes in Corda Testnet network?

We are building a POC using Corda and a Spring Boot web server.
Following are the versions of the Corda platform, Spring Boot server, and other essential dependencies used for building the POC:
cordaReleaseGroup=net.corda
cordaVersion=4.0
gradlePluginsVersion=4.0.45
kotlinVersion=1.2.71
junitVersion=4.12
quasarVersion=0.7.10
spring_version = '4.3.11.RELEASE'
spring_boot_version = '2.0.2.RELEASE'
spring_boot_gradle_plugin_version = '2.1.1.RELEASE'
jvmTarget = "1.8"
log4jVersion =2.11.2
platformVersion=4
slf4jVersion=1.7.25
nettyVersion=4.1.22.Final
The CorDapp developed for the POC has four nodes:
Notary Node
Provider Company Node
Consumer Company 1 Node
Consumer Company 1 Sub Contact Node
The POC is running in dev mode on our local network.
We now need to test the POC on the Corda Testnet.
We went through the following documentation on Corda Testnet -
1: Join Corda TestNet
URL : https://docs.corda.net/releases/release-V4.0/corda-testnet-intro.html?highlight=joining%20corda%20testnet
2: Deploying Corda to Corda Testnet from your local environment
https://docs.corda.net/releases/release-V4.0/deploy-locally.html?highlight=deploying%20corda%20corda%20testnet%20from%20your%20local%20environment
We understood from the above documentation that we can download only one "Corda node" per registration. Will we have to join the Corda Testnet with four different accounts in order to download four Corda nodes?
Is our understanding correct?
As per my understanding, you can spin up multiple nodes by refreshing the Testnet node installation page, or by creating a node and then clicking 'next', which takes you back to the node list and presents a button for creating another node.
It should provide you with a unique ONE_TIME_DOWNLOAD_KEY each time, and automatically assign a randomised alphanumeric O (organisation) value to use within each node's configuration file, helping the network map (and thus other nodes on the network) distinguish your nodes individually.
Make sure you don't unintentionally run copies of the same node with the same identity; otherwise the network map will assume the original node's address has changed and route P2P traffic to the newest instance.
Take a look at the node.conf docs to understand node configuration further:
https://docs.corda.net/corda-configuration-file.html
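For reference, a minimal Testnet-style node.conf might look something like this; the O value, host name, and ports are placeholders, while the keys themselves are standard node.conf settings:
// Identity: the randomised O value assigned by Testnet goes here.
myLegalName = "O=ALPHA1234XYZ, L=London, C=GB"
// Address other nodes use for P2P traffic; must be unique per node.
p2pAddress = "node1.example.com:10002"
rpcSettings {
    address = "localhost:10003"
    adminAddress = "localhost:10004"
}
devMode = false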

Provision 2 node-type Service Fabric ARM

I've been trying to provision a two-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet, so I've created a load balancer with an internal IP address.
Everything gets provisioned correctly, but I cannot get the nodes added to the cluster. When I open the cluster in the Azure portal, it says it has no nodes, even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster, and I have also executed one of the templates provided on GitHub, but I still cannot see any nodes in the cluster.
Any suggestions as to what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question on deploying to the backend node type, that's where you'd use placement constraints. When you create clusters in Azure through ARM, a placement property is automatically set up on each node using the node type name you defined. So on your back-end nodes, assuming your node type is called "backendnode", you'll have the following placement property defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" -ServiceName "fabric:/myapp/myservice" -ServiceTypeName "myservicetype" -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -PlacementConstraint "NodeTypeName == backendnode"
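The same constraint can also be declared in the service manifest so it applies by default; a sketch using the standard ServiceManifest schema, with the service type name taken from the example above:
<ServiceTypes>
  <StatefulServiceType ServiceTypeName="myservicetype" HasPersistedState="true">
    <!-- Keep this service's replicas on the internal-only backend nodes. -->
    <PlacementConstraints>(NodeTypeName == backendnode)</PlacementConstraints>
  </StatefulServiceType>
</ServiceTypes>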

How to migrate Wordpress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (it won't let me change the machine type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my Wordpress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySql data off the first instance and re-import into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: Btw, should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
1. Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2. Scroll down to the "Disks" section and un-check "Delete boot disk when instance is deleted".
3. Delete the VM in question. Take note that the disk, named after the instance, will remain.
4. Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from point 3 above, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware/performance.
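The same steps can also be scripted with the gcloud CLI; a sketch, assuming a hypothetical instance named wordpress-vm in us-central1-a (names, zone, and machine type are placeholders):
# Keep the boot disk around when the instance is deleted.
gcloud compute instances set-disk-auto-delete wordpress-vm \
    --zone us-central1-a --disk wordpress-vm --no-auto-delete

# Delete the old, undersized instance (the disk survives).
gcloud compute instances delete wordpress-vm --zone us-central1-a

# Recreate the VM on a bigger machine type, booting from the existing disk.
gcloud compute instances create wordpress-vm --zone us-central1-a \
    --machine-type n1-standard-2 \
    --disk name=wordpress-vm,boot=yes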
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual usage. A few considerations when setting up this kind of instance:
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database.
configure Cloud SQL to use SSL certificates.
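Both of those settings can be applied from the CLI as well; a sketch with a placeholder instance name and IP:
# Restrict connections to the frontend's public IP (placeholder shown).
gcloud sql instances patch wordpress-db --authorized-networks=203.0.113.10

# Require SSL for all connections to the instance.
gcloud sql instances patch wordpress-db --require-ssl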
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks attached to your instance:
The data disk contains /var/www/, which is all of the Wordpress files. It's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the Wordpress installation.
