Hyperledger Indy: Create genesis transaction file - hyperledger-indy

I have 4 nodes set up on 4 Vagrant Ubuntu-based machines. I have generated the keys required for these nodes using the command: init_indy_node VAL1 0.0.0.0 9701 0.0.0.0 9702 111111111111111111111111111N1. According to the documentation, there is a script named generate_indy_pool_transactions which uses predefined node names (i.e. Node1, Node2) and therefore generates keys that are always the same.
I want to create my own custom network with my generated keys, but I could not find any documentation on generating the genesis transaction file.
Is there any way to generate this file so that I can bootstrap my network?
Any suggestions/comments are welcome.

There is a genesis_from_files.py script that you are welcome to try: https://github.com/sovrin-foundation/steward-tools/tree/master/create_genesis
As described in start-nodes.md, in order to set up a pool the following actions are needed:
set Network name in config file
generate keys (init_indy_node script can be used for this)
provide genesis transaction files which will be the basis of the initial Pool
Indy doesn't ship with any genesis files, since these are provided by Indy-based networks (such as the Sovrin genesis files).
What Indy has is a generate_indy_pool_transactions script which should be used for test purposes only. It generates keys based on the node names (so if the same node names are passed, then the keys will be the same every time).
So, there are the following options on how to create genesis files in Indy:
Create them manually (a sketch of a hand-written entry follows this list).
Contribute a generation script to Indy (I think the logic from generate_indy_pool_transactions can be reused for this).
Run generate_indy_pool_transactions (which will generate keys and genesis files), then re-initialize the keys correctly and modify the generated genesis files to point to the correct keys.
Use other helper scripts, such as the Sovrin Foundation's: https://github.com/sovrin-foundation/steward-tools/tree/master/create_genesis (Sovrin is the main Indy deployment now).
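For the "create them manually" option, the sketch below shows roughly what a single node entry in a pool_transactions_genesis file looks like (the file contains one JSON object per line, one per validator node). The field names follow the genesis files distributed with Sovrin/indy-node, so verify them against your indy-node version; all values (alias, IPs, DIDs, verkeys, BLS keys) are placeholders for the output of init_indy_node.

# Sketch only: append one NODE transaction line to a pool_transactions_genesis file.
# Field names follow the Sovrin/indy-node genesis format; verify against your version.
import json

node_txn = {
    "reqSignature": {},
    "txn": {
        "data": {
            "data": {
                "alias": "VAL1",                  # node alias passed to init_indy_node
                "client_ip": "10.0.0.1",          # placeholder client-facing IP
                "client_port": 9702,
                "node_ip": "10.0.0.1",            # placeholder node-to-node IP
                "node_port": 9701,
                "services": ["VALIDATOR"],
                "blskey": "<BLS key printed by init_indy_node>",
            },
            "dest": "<node verkey printed by init_indy_node>",
        },
        "metadata": {"from": "<DID of the steward that owns this node>"},
        "type": "0",                              # "0" is the NODE transaction type
    },
    "txnMetadata": {"seqNo": 1, "txnId": "<optional transaction id>"},
    "ver": "1",
}

with open("pool_transactions_genesis", "a") as fh:
    fh.write(json.dumps(node_txn) + "\n")         # one such line per validator node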

To create a custom network with your own generated keys and to generate the pool_transactions_genesis and domain_transactions_genesis files, you have to use indy-plenum.
You can find details in the following tutorial:
https://taseen-junaid.medium.com/hyperledger-indy-custom-network-with-indy-node-plenum-protocol-ledger-85fd10eb5bf5
You can find the code base of that tutorial at the following link:
https://github.com/Ta-SeenJunaid/Hyperledger-Indy-Tutorial

Related

How to keep generation number when copy file from gcs bucket to another bucket

I'm using a GCS bucket for WordPress (wp-stateless plugin).
After creating and uploading a media file to a bucket, I copy it to another bucket (a duplicate). But the generation number of each object changes (seemingly at random).
My question is: how do I keep the generation number in the destination bucket the same as in the source bucket?
Thanks in advance.
Basically, there's no official way of keeping the same version and generation numbers when copying files from one bucket to another. This is working as intended (WAI), and intuitively so: the generation number refers to this particular object (which resides in this bucket); when you copy it to another bucket, it's not the same object (it's a copy), so it cannot keep the same generation number.
One workaround I can think of is keeping your own record of the objects' versions somewhere and then making an organized copy through the API. This would mean dumping the bucket, and you would need a list of all the objects and their versions and then add them in sequential order (which sounds like a lot of work). You could keep your own versioning (or the same versioning) in the metadata of each object.
If your application depends on the objects' versioning, I would recommend using custom metadata. Basically, if you do your own versioning using custom metadata, the objects keep that metadata when copied to a new bucket.
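As a sketch of that idea, with the google-cloud-storage Python client you could stamp your own version into each object's custom metadata before copying, so the copy carries the same value even though GCS assigns it a new generation. The bucket names, object name, and the my-version key below are made up.

# Sketch: carry your own version in custom metadata when copying between buckets.
# Bucket/object names and the "my-version" metadata key are placeholders.
from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket("source-bucket")
dst_bucket = client.bucket("destination-bucket")

blob = src_bucket.blob("uploads/image.png")
blob.reload()                                          # fetch current generation/metadata
blob.metadata = {"my-version": str(blob.generation)}   # record your own version marker
blob.patch()                                           # persist the custom metadata

copy = src_bucket.copy_blob(blob, dst_bucket, "uploads/image.png")
print(copy.generation)                  # new generation, assigned by GCS
print(copy.metadata.get("my-version"))  # your own marker, preserved on the copy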
There is already a feature request about this, but it mentions that the request is currently infeasible.
However, you can raise a new feature request here

How to create DataBase Configuration as library in Mule 4?

I have a big monolithic Oracle DB. I may end up creating around 20 system APIs to get various data from this DB. So, instead of configuring the DB connection in all 20 system APIs, I'd like to create a DB connector and package it as a jar file, so that every system API can add it in their POM and use it for the connection.
Is that something possible or is there any better approach to handle it?
One method, if all applications are deployed to the same server, is to create a domain and share the configuration by placing it in the domain. This is usually the recommended approach. This method is documented at https://docs.mulesoft.com/mule-runtime/4.3/shared-resources
If that's not possible (for example, CloudHub doesn't support domains) or not desired, then you have to package the flow in a jar by following the instructions in this KB article: https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that while the article title mentions flows, the method works with both configurations and flows.

How to lock a python variable file in robot framework?

I need to store my user ID and password in a Python variable file in Robot Framework. These credentials will be used to log in to a website to test it. No other person should be able to view my credentials (not even in git). Hence, I have to lock this variable file. Is there any way to lock this Python variable file?
Source code repository systems are public in nature: either you lock the whole repository or it's open to everyone. This makes storing any type of sensitive data in such a system a bad idea.
For this type of information it is typically best to have a separate file and refer to that file when executing the run. In Robot Framework this can be done using variable files. These can be referred to using the Variables setting, e.g. Variables    myvariables.<ext>. There is support for Python and YAML files.
Securing these files can range from placing them in a location that only a few people can access, to setting up tools that store them encrypted and only make them available when you have the right key. This is a separate topic on its own, with its own challenges.
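For instance, one common pattern is a Python variable file that only reads the credentials from environment variables (or a secrets manager) at run time, so nothing sensitive ever lands in git. The variable and environment names below are only examples.

# credentials.py - example Robot Framework variable file (names are illustrative).
# The actual secrets live in environment variables, not in the repository.
import os

USER_ID = os.environ["TEST_USER_ID"]      # fails fast if the variable is missing
PASSWORD = os.environ["TEST_PASSWORD"]

It can then be referenced like any other variable file, for example with Variables    credentials.py in the Settings section or --variablefile credentials.py on the command line.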

Flyway support to re-run SQL file multiple times using placeholder params

Does FlywayDB support the use case where a script can be re-run multiple times using different parameter sets through "placeholders" and be treated either as separate versions or as a repeatable migration (though with different SQL files)? I have a requirement where we'd want to run the same set of scripts to organize data according to "regions" (US, UK, CA, etc.)
e.g...
Files:
sql/V1__customer_info.sql
sql/V2__customer_address.sql
Commands:
# Migrate US customers
mvn -Dflyway.placeholders.region_id=us flyway:migrate
# Migrate UK customers
mvn -Dflyway.placeholders.region_id=uk flyway:migrate
# Migrate Australian customers
mvn -Dflyway.placeholders.region_id=au flyway:migrate
No is the short answer. I have had a couple of ideas you might like to explore further:
Implement it in a callback, specifically afterMigrate.sql, and then call it as per your example; afterMigrate is invoked even if there are no pending migrations to apply. This is "extending" the callback feature, and you would be constrained to a single SQL file, so you would need to combine the info and address scripts into one file. Java callbacks are more flexible, however I have not used them. (A small wrapper sketch for driving this per region follows these options.)
Pass a list into the placeholder and have your database split it and loop over it. This would be achievable with Oracle and PL/SQL, but may be tricky with other databases or if you need to support multiple database types.
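If you go the per-region route from the question's commands (for example combined with the afterMigrate.sql idea above), a tiny wrapper can keep the region list in one place. This is only a sketch that shells out to the same mvn command shown in the question; the region list and the Maven setup are assumptions.

# Sketch: run the same Flyway migration once per region via the Maven plugin.
# Assumes mvn is on the PATH and the pom is configured as in the question.
import subprocess

REGIONS = ["us", "uk", "au"]   # example region list

for region in REGIONS:
    subprocess.run(
        ["mvn", f"-Dflyway.placeholders.region_id={region}", "flyway:migrate"],
        check=True,            # stop on the first failed migration
    )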

How do permissions on a PlasticSCM repository work in a DVCS scenario

So I've been working on a rather large project and using PlasticSCM as my VCS. I use it with a DVCS model, but so far it's pretty much just been me syncing between my office machine and home.
Now we're getting other people involved in the project, and what I would like to do is restrict the other developers to specific branches so that only I can merge branches into /main.
So I went to my local repository and made the permissions changes (that part's pretty straightforward). But how does that work with the other developers? When they sync up, are the permissions replicated on their local repositories? If they attempt to merge into /main on their local repository, does it allow that, and do they then get an error when they attempt to push the changes to my repository?
This is my first foray into DVCS so I'm not quite sure how this kind of thing works.
Classic DVCSs (Mercurial, Git) don't include ACLs, meaning a clone wouldn't keep any ACL restrictions.
Such restrictions are usually enforced through hooks on the original repo (meaning you might be able to modify the wrong branch on a cloned repo, but you wouldn't be able to push back to the original repo).
As the security page mentions, this isn't the case for PlasticSCM: a clone should retain the ACLs (caveat below) set on an object, which inherits those ACLs through two realms: the file system hierarchy (directory, subdirectories, files) and the repository object hierarchy.
The caveat in a DVCS setting is that there must be a mechanism in place to translate users and groups from one site to another.
The Plastic replication system supports three different translation modes:
Copy mode: it is the default behaviour. The security IDs are just copied between repositories on replication. It is only valid when the servers hosting the different repositories involved work in the same authentication mode.
Name mode: translation between security identifiers is done based on name. Suppose, for example, that user daniel has to be translated by name from repA to repB. At repB the Plastic server will try to locate a user named daniel and will introduce its LDAP SID into the table if required.
Translation table: it also performs a translation based on name, but driven by a table. The table, specified by the user, tells the destination server how to match names: how a source user or group name has to be converted into a destination name, even across different authentication modes.
Note: a translation table is just a plain text file with two names per line separated by a semi-colon “;”. The first name indicates the user or group to be translated (source) and the one on the right the destination one.
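As a small illustration of that format, the sketch below writes such a table from a Python dict; the user and group names are made up.

# Sketch: write a Plastic replication translation table ("source;destination" per line).
# The mappings below are invented examples; adjust them to your own users and groups.
mappings = {
    "daniel": "MYDOMAIN\\daniel",     # translate a user between authentication modes
    "developers": "MYDOMAIN\\devs",   # translate a group
}

with open("translation_table.txt", "w", encoding="utf-8") as fh:
    for source, destination in mappings.items():
        fh.write(f"{source};{destination}\n")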
