Is there any way to create a default account whenever a node is created? - corda

I am using Corda version 4.3 and doing all transactions at the account level by creating accounts for each node. However, I want a default account to be created whenever I create a node, so that no node exists without an account.
I wonder if I can do that in the RPC settings or in the main build.gradle file where I initialize a node like this:
node {
    name "O=Node1,L=London,C=GB"
    p2pPort 10005
    rpcSettings {
        address("localhost:XXXXX")
        adminAddress("localhost:XXXXX")
    }
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
}

Try the following:
Create a class and annotate it with @CordaService, which means the class gets loaded as soon as the node starts (https://docs.corda.net/api/kotlin/corda/net.corda.core.node.services/-corda-service/index.html).
Inside your service class:
Fetch the default account (AccountService class from the Accounts library has methods to fetch and create accounts; it's inside com.r3.corda.lib.accounts.workflows.services).
If the default account is not found, create it.
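A minimal sketch of such a service in Kotlin, assuming the Accounts library (accounts-workflows) is on the node's classpath; the class name and the account name "default" are illustrative choices, not fixed by the library:

```kotlin
import net.corda.core.node.AppServiceHub
import net.corda.core.node.services.CordaService
import net.corda.core.serialization.SingletonSerializeAsToken
import com.r3.corda.lib.accounts.workflows.services.KeyManagementBackedAccountService

@CordaService
class DefaultAccountService(private val serviceHub: AppServiceHub) : SingletonSerializeAsToken() {
    init {
        // Runs once when the node starts.
        val accountService = serviceHub.cordaService(KeyManagementBackedAccountService::class.java)
        if (accountService.accountInfo("default").isEmpty()) {
            // No default account yet on this node: create it.
            accountService.createAccount("default")
        }
    }
}
```

This lives in CorDapp code rather than in build.gradle; the node block there only configures ports, RPC users and CorDapps.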


Bicep roleAssignments/write permission error when assigning a role to Keyvault

I am using GitHub Actions to deploy via Bicep:
- name: Login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Deploy Bicep file
  uses: azure/arm-deploy@v1
  with:
    scope: subscription
    subscriptionId: ${{ secrets.AZURE_CREDENTIALS_subscriptionId }}
    region: ${{ env.DEPLOY_REGION }}
    template: ${{ env.BICEP_ENTRY_FILE }}
    parameters: parameters.${{ inputs.selectedEnvironment }}.json
I have used Contributor access for my AZURE_CREDENTIALS, based on the output of the following command:
az ad sp create-for-rbac --name infra-bicep --role contributor --scopes /subscriptions/my-subscription-guid --sdk-auth
I am using Azure Keyvault with RBAC. This Bicep has worked fine until I tried to give an Azure Web App a keyvault read access to the Keyvault as such:
var kvSecretsUser = '4633458b-17de-408a-b874-0445c86b69e6'
var kvSecretsUserRole = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', kvSecretsUser)
resource kx_webapp_roleAssignments 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: 'kv-webapp-roleAssignments'
  scope: kv
  properties: {
    principalId: webappPrincipleId
    principalType: 'ServicePrincipal'
    roleDefinitionId: kvSecretsUserRole
  }
}
Then I was hit with the following error:
'Authorization failed for template resource 'kv-webapp-roleAssignments'
of type 'Microsoft.Authorization/roleAssignments'.
The client 'guid-value' with object id 'guid-value' does not have permission to perform action
'Microsoft.Authorization/roleAssignments/write' at scope
'/subscriptions/***/resourceGroups/rg-x/providers/Microsoft.KeyVault/vaults/
kv-x/providers/Microsoft.Authorization/roleAssignments/kv-webapp-roleAssignments'.'
What are the total minimal needed permissions and what should my az ad sp create-for-rbac statement(s) be and are there any other steps I need to do to assign role permissions?
To assign RBAC roles, you need either the User Access Administrator or Owner role, both of which include the permission below:
Microsoft.Authorization/roleAssignments/write
With the Contributor role, you cannot assign RBAC roles to Azure resources. To confirm that, you can check this MS Doc.
I tried to reproduce the same in my environment and got the results below.
I used the same command and created one service principal with the Contributor role:
az ad sp create-for-rbac --name infra-bicep --role contributor --scopes /subscriptions/my-subscription-guid --sdk-auth
I generated an access token via Postman with the parameters below:
POST https://login.microsoftonline.com/<tenantID>/oauth2/v2.0/token
grant_type: client_credentials
client_id: <clientID from above response>
client_secret: <clientSecret from above response>
scope: https://management.azure.com/.default
When I used this token to assign the Key Vault Secrets User role with the API call below, I got the same error as you:
PUT https://management.azure.com/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxxx/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxxxx?api-version=2022-04-01
{
  "properties": {
    "roleDefinitionId": "/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6",
    "principalId": "456c2d5f-12e7-4448-88ba-xxxxxxxxx",
    "principalType": "ServicePrincipal"
  }
}
To resolve the error, create a service principal with either the User Access Administrator or Owner role.
In my case, I created a service principal with the Owner role like below:
az ad sp create-for-rbac --name infra-bicep-owner --role owner --scopes /subscriptions/my-subscription-guid --sdk-auth
Now, I generated an access token again via Postman, replacing the clientId and clientSecret values:
POST https://login.microsoftonline.com/<tenantID>/oauth2/v2.0/token
grant_type: client_credentials
client_id: <clientID from above response>
client_secret: <clientSecret from above response>
scope: https://management.azure.com/.default
When I used this token to assign the Key Vault Secrets User role with the API call below, the request succeeded:
PUT https://management.azure.com/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxxx/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxxxx?api-version=2022-04-01
{
  "properties": {
    "roleDefinitionId": "/subscriptions/d689e7fb-47d7-4fc3-b0db-xxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6",
    "principalId": "456c2d5f-12e7-4448-88ba-xxxxxxxxx",
    "principalType": "ServicePrincipal"
  }
}
UPDATE:
Following the principle of least privilege, you should create a custom RBAC role instead of assigning the Owner role.
To create a custom RBAC role, follow these steps:
Go to Azure Portal -> Subscriptions -> Your Subscription -> Access control (IAM) -> Add -> Add custom role
Fill in the name and description, and make sure to select the Contributor role after choosing Clone a role.
Now, remove the Microsoft.Authorization/roleAssignments/write permission from NotActions.
Now, add the Microsoft.Authorization/roleAssignments/write permission to Actions.
Now, click on Create.
You can create a service principal with the above custom role using this command:
az ad sp create-for-rbac --name infra_bicep_custom_role --role 'Custom Contributor' --scopes /subscriptions/my-subscription-guid --sdk-auth
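The custom role described above can also be created from the CLI instead of the portal. A minimal sketch of the role definition file (the role name, description, and subscription GUID are placeholders, and a real clone of Contributor carries its full NotActions list, not just the entries shown):

```json
{
  "Name": "Custom Contributor",
  "IsCustom": true,
  "Description": "Contributor, plus permission to write role assignments",
  "Actions": [
    "*",
    "Microsoft.Authorization/roleAssignments/write"
  ],
  "NotActions": [
    "Microsoft.Authorization/roleAssignments/delete",
    "Microsoft.Authorization/elevateAccess/Action"
  ],
  "AssignableScopes": [
    "/subscriptions/my-subscription-guid"
  ]
}
```

Save it as custom-contributor.json and create the role with: az role definition create --role-definition custom-contributor.json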

Restarting nodes in a flow test throws an error

My nodes have a custom configuration file, and the flow of events is as follows:
1. Start my network
2. Run the flow that creates my TokenType
3. Stop the nodes
4. Add the token type identifier to the custom config
5. Start the nodes
6. Now my other flows can read that value from the custom config and do their job
// Custom config map
Map<String, String> customConfig = new LinkedHashMap<>();
// Assign custom config to nodes
network = new MockNetwork(new MockNetworkParameters().withCordappsForAllNodes(ImmutableList.of(
        TestCordapp.findCordapp("xxx").withConfig(customConfig))));
// Run the network and my flow that creates some value to be stored in the config
// ...
// Stop the nodes
network.stopNodes();
// Add new value to custom config
customConfig.put("new_value", someNewValue);
// Start the nodes
network.startNodes();
But I get this error when starting the network the second time:
java.lang.IllegalStateException: Unable to determine which flow to use when responding to:
com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokens.
[com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokensHandler,
com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokensHandler] are all registered
with equal weight.
Do you have multiple responder flows present in your CorDapp? I got a similar error while trying to override an existing flow. After adding flowOverride to the node definition under the deployNodes Gradle task, the issue was gone.
Example:
node {
    name "O=PartyA,L=London,C=GB"
    p2pPort 10004
    rpcSettings {
        address("localhost:10005")
        adminAddress("localhost:10006")
    }
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
    flowOverride("com.example.flow.ExampleFlow.Initiator",
            "com.example.flow.OverrideAcceptor")
}
More information on this is available in the links below:
https://docs.corda.net/head/flow-overriding.html#configuring-responder-flows
https://lankydan.dev/2019/03/02/extending-and-overriding-flows-from-external-cordapps

Corda: Trying to put the RPC Permissions on an external database

I'm trying to put the RPC Permissions, along with the users and their password on an external database. I've followed the documentation for Corda v. 3.3 (https://docs.corda.net/clientrpc.html#rpc-security-management).
It says that I need to create a "security" field for the node in question and fill out all the necessary information. I've done it, but as soon as I try to deploy the node, it gives me this error:
"Could not set unknown property 'security' for object of type net.corda.plugins.Node."
The node's information looks like this in the build.gradle document:
node {
    name "O=myOrganisation,L=Lisbon,C=PT"
    p2pPort 10024
    rpcSettings {
        address("localhost:10025")
        adminAddress("localhost:10026")
    }
    security = {
        authService = {
            dataSource = {
                type = "DB"
                passwordEncryption = "SHIRO_1_CRYPT"
                connection = {
                    jdbcUrl = "localhost:3306"
                    username = "*******"
                    password = "*******"
                    driverClassName = "com.mysql.jdbc.Driver"
                }
            }
        }
    }
    cordapps = [
        "$project.group:cordapp:$project.version"
    ]
}
You are confusing two syntaxes:
The syntax for configuring a node block inside a Cordform task such as deployNodes
The syntax for configuring a node directly via node.conf
The security settings belong inside node.conf. You have to create the node first, then modify the node's node.conf with these settings once it has been created.
Corda 4 will introduce an extraConfig option for use inside Cordform node blocks, as described here.
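For illustration, the corresponding block in the generated node's node.conf might look like this (a sketch following the RPC security management docs; the database name and credentials are placeholders):

```
security = {
    authService = {
        dataSource = {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection = {
                jdbcUrl = "jdbc:mysql://localhost:3306/rpc_security"
                username = "node_rpc"
                password = "secret"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
    }
}
```

Note that jdbcUrl needs a full JDBC connection string (jdbc:mysql://host:port/db), not just host:port.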

Terraform: how to support different providers

I have a set of terraform codes in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
The provider.tf shows:
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}
So, if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How then do I lay out the glue.tf file?
Currently I store it in a sub-directory under myproject and run terraform apply in that sub-directory, i.e.
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
  region  = "us-west-2"
  profile = "default"
}
Is this the only way to store a file launching resources in different regions? Is there a better way?
If there is no better way, then I need to have another backend file in the glue sub-folder as well; besides, some common variables in the myproject directory cannot be shared.
--------- update:
I followed the link posted by Phuong Nguyen,
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}
provider "aws" {
  alias   = "oregon"
  region  = "us-west-2"
  profile = "default"
}
resource "aws_glue_connection" "example" {
  provider = "aws.oregon"
  ....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use a provider alias to define multiple providers, e.g.:
# this is the default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
and then in your glue.tf, you can refer to alias provider as:
resource "aws_glue_job" "example" {
  provider = "aws.west-2"
  # ...
}
More details in the Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
You should keep AWS profiles, regions, and similar settings out of your Terraform code as much as possible and treat them as configuration, as follows:
terraform {
  required_version = "1.0.1"
  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }
  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you then pass to the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=<<line-one-^^^>>.tfvars
terraform -chdir=$tf_code_path apply -var-file=<<like-the-one-^^^>>.tfvars -auto-approve
As a rule of thumb you SHOULD always separate your code and configuration: the more mixed they are, the deeper you will get into trouble. This applies to any programming language or project. Some wise heads will argue that Terraform code is in itself configuration, but no, it is not. The Terraform code in your project is the declarative source code, which is used to provision the infrastructure that your application source code runs on.

Can't create cloudsql role for Service Account via api

I have been trying to use the api to create service accounts in GCP.
To create a service account I send the following post request:
base_url = f"https://iam.googleapis.com/v1/projects/{project}/serviceAccounts"
auth = f"?access_token={access_token}"
data = {"accountId": name}
# Create a service Account
r = requests.post(base_url + auth, json=data)
this returns a 200 and creates a service account.
Then, this is the code that I use to create the specific roles:
sa = f"{name}@dotmudus-service.iam.gserviceaccount.com"
sa_url = base_url + f'/{sa}:setIamPolicy' + auth
data = {"policy":
    {"bindings": [
        {
            "role": roles,
            "members": [f"serviceAccount:{sa}"]
        }
    ]}
}
If roles is set to one of roles/viewer, roles/editor or roles/owner, this approach does work.
However, if I want to use a specific role such as roles/cloudsql.viewer, the API tells me that this option is not supported.
Here are the roles.
https://cloud.google.com/iam/docs/understanding-roles
I don't want to give this service account full viewer rights to my project, it's against the principle of least privilege.
How can I set specific roles from the api?
EDIT:
here is the response using the resource manager api: with roles/cloudsql.admin as the role
POST https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy?key={YOUR_API_KEY}
{
  "policy": {
    "bindings": [
      {
        "members": [
          "serviceAccount:sa@{project}.iam.gserviceaccount.com"
        ],
        "role": "roles/cloudsql.viewer"
      }
    ]
  }
}
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.cloudresourcemanager.projects.v1beta1.ProjectIamPolicyError",
        "type": "SOLO_REQUIRE_TOS_ACCEPTOR",
        "role": "roles/owner"
      }
    ]
  }
}
With the code provided, it appears that you are appending to the first base_url, which is not the correct context for modifying project roles.
This places the appended path under: https://iam.googleapis.com/v1/projects/{project}/serviceAccounts
The POST path for adding roles needs to be: https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy
If you switch the base URL to the Cloud Resource Manager endpoint, it should work.
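In other words, the two endpoints are built differently (a sketch; the project id is a placeholder):

```python
project = "my-project"  # placeholder project id

# IAM endpoint: used for creating and managing service accounts themselves.
sa_base_url = f"https://iam.googleapis.com/v1/projects/{project}/serviceAccounts"

# Cloud Resource Manager endpoint: used for project-level role bindings.
policy_url = f"https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy"

print(sa_base_url)
print(policy_url)
```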
Edited response to add more information due to your edit
OK, I see the issue here; sorry, but I had to set up a new project to test this.
cloudresourcemanager.projects.setIamPolicy replaces the entire policy. It appears that you can add constraints to what you change, but you have to submit a complete policy in JSON for the project.
Note that gcloud has a --log-http option that will help you dig through some of these issues. If you run
gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NAME --role roles/cloudsql.viewer --log-http
it will show you how gcloud pulls the existing policy, appends the new role, and submits the result.
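That read-modify-write pattern can be sketched as a pure merge step (the role and member values are illustrative; actually fetching the policy via projects.getIamPolicy and submitting it via projects.setIamPolicy is omitted):

```python
import copy

def add_binding(policy, role, member):
    """Return a copy of an IAM policy dict with `member` added under `role`."""
    new_policy = copy.deepcopy(policy)
    bindings = new_policy.setdefault("bindings", [])
    for binding in bindings:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            break
    else:
        # No existing binding for this role: create one.
        bindings.append({"role": role, "members": [member]})
    return new_policy

# Existing policy as returned by projects.getIamPolicy (illustrative values):
current = {
    "etag": "BwWW...",
    "bindings": [{"role": "roles/owner", "members": ["user:admin@example.com"]}],
}
updated = add_binding(current, "roles/cloudsql.viewer",
                      "serviceAccount:sa@my-project.iam.gserviceaccount.com")
# `updated` (including the unchanged etag) is the complete policy you would
# then POST back to projects.setIamPolicy.
```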
I would recommend using the example code provided here to make these changes if you don't want to use gcloud or the console to add the role, as this could impact the entire project.
Hopefully they improve the API for this need.
