How Hyperledger Fabric Java chaincode sets peerAddresses and tlsRootCertFiles - hyperledger-fabric-sdk-java

I have a transaction that involves reading the implicit private data collection of Org1. I can use the following command in the CLI, where I specify peerAddresses and tlsRootCertFiles:
peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile "${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem" -C mychannel -n hyperledger-fabric-contract-java-demo --peerAddresses localhost:7051 --tlsRootCertFiles "${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt" -c '{"function":"queryPrivateFileinfo","Args":["_implicit_org_Org1MSP","key"]}'
But how do I specify these in the chaincode itself?
I checked the official documentation but found nothing.
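For what it's worth, --peerAddresses and --tlsRootCertFiles are client-side options: they control which peers the client sends the proposal to for endorsement, so they are never expressed in chaincode at all. Inside the contract you only name the implicit collection. A minimal sketch using fabric-chaincode-java 2.x (the class name and error message are illustrative, not from the original post):

import java.nio.charset.StandardCharsets;

import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.shim.ChaincodeException;

@Contract(name = "hyperledger-fabric-contract-java-demo")
public final class PrivateFileContract implements ContractInterface {

    // Reads a value from an implicit private data collection.
    // The collection name mirrors the CLI argument "_implicit_org_Org1MSP";
    // which peer endorses (and therefore must hold the data) is decided
    // by the client that submits the proposal, not by this code.
    @Transaction(intent = Transaction.TYPE.EVALUATE)
    public String queryPrivateFileinfo(final Context ctx, final String collection, final String key) {
        byte[] data = ctx.getStub().getPrivateData(collection, key);
        if (data == null || data.length == 0) {
            throw new ChaincodeException("No private data found for key " + key);
        }
        return new String(data, StandardCharsets.UTF_8);
    }
}

The client (for example via hyperledger-fabric-sdk-java, or the CLI command above) then targets a peer that is a member of the collection when sending the proposal.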

Related

How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that publishes the rules to Firebase in the cloud every time I merge a PR into develop. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says
@firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!
I found a way to do that.
By using firebase-tools-docker I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'

stage('Pull docker image') {
    sh "docker pull $firebaseDocker"
}

stage('Unit tests') {
    sh "docker run -d --rm \
        --user $jenkinsUser:$jenkinsUser \
        -p 8080:8080 \
        -v ${pwd()}:/home/node \
        --name firebase-emulators \
        $firebaseDocker \
        firebase emulators:start"
    sleep(5)
    sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
    sh "docker stop firebase-emulators"
}
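One caveat: the fixed sleep(5) can race the emulator startup on a slow agent. A hedged alternative, assuming the firebase-tools version in the image supports it, is emulators:exec, which starts the emulators, runs the given command, and tears everything down in one shot:

sh "docker run --rm \
    --user $jenkinsUser:$jenkinsUser \
    -v ${pwd()}:/home/node \
    $firebaseDocker \
    firebase emulators:exec --only firestore 'cd tests && yarn test'"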
Hope this helps 😉

Conclave does not start in release mode after passing -PenclaveMode=release

I am trying to start Conclave in release mode and followed the instructions below:
// First, built the signing material:
./gradlew prepareForSigning -PenclaveMode=release
// Generated a signature from the signing material. The password for the sample external key is '12345':
openssl dgst -sha256 -out signing/signature.bin -sign signing/external_signing_private.pem -keyform PEM enclave/build/enclave/Release/signing_material.bin
// Finally, built the signed enclave:
./gradlew build -PenclaveMode=release -x test
./gradlew host:installDist
cd host/build/install
./host/bin/host
After invoking a request from the client, the attestation still prints:
Mode: SIMULATION
Is there any flag or step I'm missing?
You need to include -PenclaveMode=release when building the host:installDist target; otherwise it will build the default Simulation version and package that, even if you previously built the release enclave.
Just run this command and it will use the release enclave instead:
./gradlew host:installDist -PenclaveMode=release
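Once rebuilt this way, the attestation printed on the client should report Mode: RELEASE instead of Mode: SIMULATION, which is a quick way to confirm that the release enclave was actually packaged.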

How to set --enable-post-process-file in openapi-generator CLI for c++ CPP_POST_PROCESS_FILE

I get the following messages every time I generate a C++ client from openapi-generator:
[main] INFO o.o.c.languages.AbstractCppCodegen - Environment variable CPP_POST_PROCESS_FILE not defined so the C++ code may not be properly formatted. To define it, try 'export CPP_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"' (Linux/Mac)
[main] INFO o.o.c.languages.AbstractCppCodegen - NOTE: To enable file post-processing, 'enablePostProcessFile' must be set to `true` (--enable-post-process-file for CLI).
[main] WARN o.o.codegen.DefaultCodegen - The value (generator's option) must be either boolean or string. Default to `false`.
I used the following command to run the generator:
npx openapi-generator generate -i api.yaml -g cpp-restsdk -o %CD%
How can I fix these messages?
Please use npx @openapitools/openapi-generator-cli instead, as https://www.npmjs.com/package/@openapitools/openapi-generator-cli is the official repo of the npm wrapper for openapi-generator.
To enable post file processing, please add --enable-post-process-file to the command, e.g.
export CPP_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"
npx @openapitools/openapi-generator-cli generate -i api.yaml -g cpp-restsdk -o %CD% --enable-post-process-file
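Note that export is Linux/macOS shell syntax, while the %CD% in your command suggests a Windows cmd prompt; there the equivalent would be (the clang-format location is an assumption, adjust to your install):

set CPP_POST_PROCESS_FILE=clang-format -i
npx @openapitools/openapi-generator-cli generate -i api.yaml -g cpp-restsdk -o %CD% --enable-post-process-file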

How to Handle Runtime Configuration of Symfony2 Using Consul Service Discovery

Our team is presently exploring the idea of service discovery for a Symfony2 application using Consul. Being in the relative frontier, there's very little out there in the way of discussion. So far we've discovered:
Runtime configuration has previously been shot down.
A bundle exists to handle such a use case, but it also hasn't seen a lot of activity of late.
One of the contributors of said bundle suggested that External Parameters might be the answer to the problem.
Sensio has created its own Consul SDK. However, there seems to be little in the way of documentation or official blog articles re: Symfony2 integration.
Consul provides watches, which can be triggered on various changes.
Current thoughts are to explore utilizing the Consul watchers to re-trigger a cache build along with external parameters. That said, there is some concern on the overhead of such an operation if services change semi-frequently.
Based on the above, and knowledge of Consul/Symfony internals, would that be an advisable approach? If not, why, and what alternatives are available?
In the company I work, we took a different route.
Instead of fighting against Symfony to accept runtime configuration (something it should support, as Spring Data Consul does, for example), we decided to make Consul update the Symfony configuration, similar in concept to, though different in implementation from, what Frank did.
We installed Consul and Consul Template. We created a K/V pair that contains the entire parameters.yml file. Example:
Key: eblock/config/parameters.yml
parameters:
    router.request_context.host: dev.eblock.ca
    router.request_context.scheme: http
    router.request_context.base_url: /
Then a consul template configuration file was added at location /opt/consul-template/config/eblock.cfg:
template {
    source = "/opt/consul-template/templates/eblock-parameters.yml.ctmpl"
    destination = "/var/www/eblock/app/config/parameters.yml"
    command = "/opt/eblock/scripts/parameters_updated.sh"
}
The contents of ctmpl file are:
{{key "eblock/config/parameters.yml"}}
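For reference, the K/V entry itself can be loaded from a local file with the Consul CLI (assuming a Consul version that ships the kv subcommand; the @file syntax reads the value from disk):

consul kv put eblock/config/parameters.yml @parameters.yml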
Finally, our parameters_updated.sh script does:
#!/bin/bash
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=201

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

lock $PROGNAME || exit 0

export HOME=/root

logger "Starting composer install" && \
/usr/local/bin/composer install -d=/var/www/eblock/ --no-interaction && \
logger "Running composer dump-autoload" && \
/usr/local/bin/composer dump-autoload -d=/var/www/eblock/ --optimize && \
logger "Running app/console c:c/c:w" && \
/usr/bin/php /var/www/eblock/app/console c:c -e=prod --no-warmup && \
/usr/bin/php /var/www/eblock/app/console c:w -e=prod && \
logger "Running doctrine commands" && \
/usr/bin/php /var/www/eblock/app/console doctrine:database:create --env=prod --if-not-exists && \
/usr/bin/php /var/www/eblock/app/console doctrine:migrations:migrate -n --env=prod && \
logger "Restarting php-fpm" && \
/bin/systemctl restart php-fpm &
With both the consul and consul-template services up, as soon as the value under the watched key changes, consul-template dumps the file to the configured destination and runs the parameters_updated.sh command.
It works like a charm. =)
A simple KV watcher that puts the value into parameters.yml and triggers a cache:clear is the simplest option in my opinion, and it also provides the benefit of compilation, so it doesn't have to go to Consul each time to check whether values have been updated. Like you said, there is some overhead, but it seems fine if you don't change your parameters every 5 minutes.
We're exploring that option now but if you made any progress on this, an update would be appreciated.
[Update 2016-02-23] We've implemented the idea I mentioned above and it works as expected: well. Mind you, we change our parameters only on deploy of a new version (because we also use service discovery by Consul so no need to update service lists in parameters). We mostly did it because it saves us the boring job of changing parameters on several servers. As usual: this might not work for you but I think you would be safe if, like I said before, you don't change your parameters every 5 mins :)
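For anyone going the watcher route described above, a minimal sketch of wiring it up with the Consul CLI (the handler path is reused from the earlier answer; adapt it to your own script):

consul watch -type=key -key=eblock/config/parameters.yml /opt/eblock/scripts/parameters_updated.sh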

How to re-run cloud-init without reboot

I am using OpenStack to create a VM using the 'nova boot' command. My image is cloud-init enabled. I pass a --user-data script, which is in bash shell format, for cloud-init to run during VM boot. All this happens successfully.
Now my use case is to re-run cloud-init to execute the same user-data script without rebooting the VM. I looked at the /usr/bin/cloud-init options, and they do talk about running specific modules, but nothing makes it execute the same user-data script. How can this be achieved? Any help would be appreciated.
While re-running all of cloud-init without a reboot isn't a recommended approach, the following commands will allow you to accomplish this on a system.
The commands have been updated, so to re-run you first need to clean out the existing config:
sudo cloud-init clean --logs
cloud-init typically runs multiple boot stages in order due to systemd service dependencies. If you want to repeat that process without a reboot you can run the following 4 commands:
Detect local datasource (cloud platform):
sudo cloud-init init --local
Detect any datasources which require network up and run "cloud_init_modules" defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final
Beware: things like SSH host keys may be regenerated.
In order for cloud-init to reset, you need to execute rm -rf /var/lib/cloud/instances.
Then re-run cloud-init start and it will run the full boot script process again.
Since this keeps popping up in search results, what works for me:
1. Delete the semaphores in /var/lib/cloud/instances/i-xxxxxxx/sem. Cloud-init will not re-run if these files are present.
2. Edit /var/lib/cloud/instances/i-xxxxxxxx/scripts/part-001. This is your user-data script.
3. Execute only the user scripts module of cloud-init. This will not re-download user data, but will execute the already downloaded (and now modified) script from step 2.
sudo /usr/bin/cloud-init single -n cc_scripts_user
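Putting those three steps together as one sequence (the instance ID is a placeholder as in the steps above, and the semaphore file name assumes cloud-init's usual config_<module> naming):

sudo rm /var/lib/cloud/instances/i-xxxxxxxx/sem/config_scripts_user
sudo vi /var/lib/cloud/instances/i-xxxxxxxx/scripts/part-001
sudo /usr/bin/cloud-init single -n cc_scripts_user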
Given that this post was actively touched 6 months ago, I wanted to provide a more complete answer here from cloud-init upstream.
The original question: "how to re-run a user-data script again at a later time with cloud-init"
Generally, user scripts are only run once per instance by the config module config-user-scripts. If the instance-id in the metadata doesn't change, it won't re-run.
The per-instance semaphores can be bypassed with the following command line by telling it to run the user-scripts module regardless of instance-id:
sudo cloud-init single --name scripts-user --frequency always
Per the other suggestion of re-running all of cloud-init without a system reboot: it isn't a recommended approach because some parts of cloud-init are run at systemd generator timeframe to detect new datasource types. That said, the following commands will allow you to accomplish this without a reboot on a system.
cloud-init supports a clean subcommand to remove all semaphore files and allow cloud-init to re-run all config modules again. Beware that this will mean SSH host-keys are regenerated and .ssh config files re-written so it could impact your ability to get back into the VM.
To clean all semaphores so cloud-init modules will all re-run on next boot:
sudo cloud-init clean --logs
cloud-init typically runs multiple boot stages in sequence due to systemd service dependencies. If you want to repeat that process without a reboot you can run the following 4 commands:
Detect local datasource (cloud platform):
sudo cloud-init init --local
Detect any datasources which require network up and run "cloud_init_modules" defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final
To run the packages module of cloud-config part of cloud-init, you can run
# cloud-init-cfg all config
To run the runcmd module of cloud-config part of cloud-init, you can run
# cloud-init-cfg all final
