I want to make my CorDapp accessible through the web; currently I am using it through the terminal. What is the best way to achieve this?
Will using corda-webserver help me? How do I use corda-webserver.jar? Do we create it ourselves, or is there a template?
Please follow this example, which creates a SpringBoot webserver. You can call your flows or query the vault through the RPC connection that you define.
The example comes in Java and Kotlin.
1. Define the RPC connection that your server will use.
2. Define the SpringBoot app (i.e. your webserver).
3. Define a controller that will expose your APIs.
4. Define a Gradle task to start your webserver.
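Steps 1-3 might look roughly like the following Kotlin sketch. Class, package, and endpoint names (NodeRPCConnection, Server, Controller, /api/me) are placeholders, and it assumes the corda-rpc and spring-boot-starter-web dependencies are on the classpath:

import net.corda.client.rpc.CordaRPCClient
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.NetworkHostAndPort
import org.springframework.beans.factory.annotation.Value
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.stereotype.Component
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

// 1. The RPC connection; the --config.rpc.* arguments passed by the Gradle
// task below feed these @Value properties.
@Component
class NodeRPCConnection(
        @Value("\${config.rpc.host}") host: String,
        @Value("\${config.rpc.port}") rpcPort: Int,
        @Value("\${config.rpc.username}") username: String,
        @Value("\${config.rpc.password}") password: String
) {
    val proxy: CordaRPCOps =
            CordaRPCClient(NetworkHostAndPort(host, rpcPort)).start(username, password).proxy
}

// 2. The SpringBoot app, i.e. the webserver itself.
@SpringBootApplication
class Server

fun main(args: Array<String>) {
    runApplication<Server>(*args)
}

// 3. A controller exposing an API that calls the node over RPC.
@RestController
@RequestMapping("/api")
class Controller(private val rpc: NodeRPCConnection) {
    // e.g. GET /api/me returns the node's legal identity.
    @GetMapping("/me")
    fun me(): String = rpc.proxy.nodeInfo().legalIdentities.first().name.toString()
}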
For step 4, below are example Gradle tasks to start the webservers (read the comments in the code):
task runPartyAServer(type: JavaExec, dependsOn: assemble) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.webserver.Server'
    // The server port for PartyA is 10070, while for PartyB it's 10080,
    // and so on... (you can basically choose any port number; usually it's 8080).
    // Also, I only wrote one parameter, config.rpc.host, but you need to
    // supply all the parameters that your RPCConnection class is expecting (i.e.
    // config.rpc.username, config.rpc.password, config.rpc.port).
    args '--server.port=10070', '--config.rpc.host=localhost' // plus the remaining --config.rpc.* arguments
}

task runPartyBServer(type: JavaExec, dependsOn: assemble) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.webserver.Server'
    args '--server.port=10080', '--config.rpc.host=localhost' // plus the remaining --config.rpc.* arguments
}
Is it possible to replace a private service in the DI container? I need to do this in my test environment, so that I can run integration tests but still mock HTTP calls to external APIs.
For example, I have this code that sets the mock for the HttpClientInterface:
$response = new MockResponse('"some json body"');
$client = new MockHttpClient([$response]);
self::$container->set(HttpClientInterface::class, $client);
// Execute controller / command and perform assertions
// ...
I have already tried to define the HttpClientInterface as a public service for my test environment with the config below, but this does not work as it isn't instantiable (it's an interface).
services:
    Symfony\Contracts\HttpClient\HttpClientInterface:
        public: true
I resolved this issue myself for my specific case by creating my own MockHttpClient class that doesn't require setting the responses in the constructor. Replacing a service in the container at runtime seemed to be the wrong way to go, given that Symfony intends the container to be immutable.
Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS, and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy runs as the root application, so the route becomes / here.
Challenge
Since the root image is going to be the same, how do we get the web.config overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean having extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage applicable at the ECS level, which can provide client-specific values upon loading of the corresponding containers.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar cases.
With the problem statement tied to having a single root image and any customization being applied at runtime, we knew that there needed to be a transformation of web.config at the time of loading of the corresponding containers.
The solution was to use a PowerShell script that reads the web.config and replaces the specific values whose keys carry a custom prefix. The values were passed in as custom environment variables within ECS, and the web.config was also updated so that its keys had the prefix added.
Now, since the docker container can have only a single entry point, a new base image was created which instantiated an IIS server and called a PowerShell script at startup. That script invoked this transformation script and then set the ServiceMonitor on the w3wp process.
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
I would use environment variables with a startup transform for this, as the OP suggests; however, I want to make the point that you do not want sensitive information, like DB passwords, in ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything, although you can choose to encrypt only the sensitive information and leave the rest clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample Ruby code for creating the task definition:
# Look up all parameters stored under this app's namespace in Parameter Store.
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# Map each parameter to an ECS secret: the ENV name is the last path segment,
# and the value is resolved from the parameter's ARN at task startup.
secrets = params.map { |p| { name: p.name.split("/")[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
I am going through this documentation and I have several uncertainties.
Performing explicit contract and state upgrades
1. Preserve the existing state and contract definitions
2. Write the new state and contract definitions
3. Create the new CorDapp JAR
4. Distribute the new CorDapp JAR
5. Stop the nodes
6. Re-run the network bootstrapper (only if you want to whitelist the new contract)
7. Restart the nodes
8. Authorise the upgrade
9. Perform the upgrade
10. Migrate the new upgraded state to the Signature Constraint from the zone constraint
Questions:
1. Preserve the existing state and contract definitions
2. Write the new state and contract definitions
3. Create the new CorDapp JAR
How do I do that? Is it meant only to preserve the JARs with contracts and states on the nodes, not to preserve them in the source code? If I do not preserve them in the source code, then how can I create the upgrade method?
interface UpgradedContract<in OldState : ContractState, out NewState : ContractState> : Contract {
    val legacyContract: ContractClassName
    fun upgrade(state: OldState): NewState
}
If I do not preserve the old state in the source code, should I name the JAR differently each time I need to do an upgrade?
Can old JARs be removed from the node once the upgrade is completed?
6. Re-run the network bootstrapper (only if you want to whitelist the new contract)
8. Authorise the upgrade
Am I right that only those two steps are related to explicit contract upgrades? And if I use the implicit flow with a signature constraint, do I need to skip only those two steps, while the others are still applicable and must be performed?
9. Perform the upgrade
Should this be done for each state separately by the owner of the state? In that case, should I run it on each node for the specific contracts where the node is a participant of the state? (In the doc it is mentioned to be run on a single node - but what if a single node is not a participant of some state?)
Other questions
This section describes the explicit contract and state upgrade process: https://docs.corda.net/upgrading-cordapps.html#performing-explicit-contract-and-state-upgrades, while the signature constraints section (https://docs.corda.net/api-contract-constraints.html#signature-constraints) does not describe an update process for states.
Is it the same as for explicit upgrades, with the difference only in steps 6 and 8, or is it completely different?
Do I need to create the function transforming old states to new states in that case? If not, how will the old states be handled by the new flows?
I see you have some great questions about contract upgrades. Here is an article written by one of our developer-relations engineers: https://medium.com/corda/contract-upgrades-and-constraints-in-corda-425055a9a47f
Feel free to follow up with any additional questions that you have.
If you are new to Corda, feel free to join the Corda community Slack: http://slack.corda.net/
While performing legacy contract upgrades, you need both the old and new contract JARs installed on your node (present in the cordapps folder).
You can create a new Gradle module, say v2-contract, and write the new contract in it. This is where you will write your UpgradedContract. You will also need to refer to the old v1-contract JAR, as the new contract needs to know what the old state was. To do this, add a Gradle dependency in v2-contract like below.
dependencies {
    // Corda dependencies.
    cordapp project(v1_contract)
}
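For illustration, the v2 contract in that module might look like this sketch (IOUState, IOUStateV2, and the package name are placeholders; the old state class comes from the v1-contract dependency above):

import net.corda.core.contracts.ContractClassName
import net.corda.core.contracts.UpgradedContract
import net.corda.core.transactions.LedgerTransaction

// The v2 contract declares which legacy contract it replaces and how to map
// an old state onto the new definition.
class IOUContractV2 : UpgradedContract<IOUState, IOUStateV2> {
    // Fully qualified class name of the v1 contract being upgraded.
    override val legacyContract: ContractClassName = "com.example.v1.IOUContract"

    // Transform each old state into its v2 equivalent.
    override fun upgrade(state: IOUState): IOUStateV2 =
            IOUStateV2(state.value, state.lender, state.borrower)

    override fun verify(tx: LedgerTransaction) {
        // v2 verification logic goes here.
    }
}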
The old JAR can be removed from the cordapps folder once all the states have been upgraded to the new v2-contract.
a. For HashConstraints there is no need to run the bootstrapper again. You will write the new contract by implementing UpgradedContractWithLegacyConstraint, run the jar task to build the new JAR, add it to the cordapps folder, run the Authorise flow from all nodes, and run the Initiate flow from one node's terminal (see the sketch after this list). This is the explicit way of upgrading.
b. However, if you are using the WhitelistZoneConstraint, you want to make sure to add the new v2-contract JAR's hash to the whitelist param in the network parameters. You will need to run the network bootstrapper to whitelist this new v2-contract JAR hash: https://docs.corda.net/network-bootstrapper.html#whitelisting-contracts. Once you do that, you can either go for an explicit upgrade by implementing UpgradedContract, or you can use an implicit upgrade.
c. If you are using Signature Constraints, there is no need to run the network bootstrapper for the new JAR: write the new v2-contract, build it using the Gradle jar task, and replace the old JAR with the new one. This is the implicit way of upgrading.
You should run the Authorise flow from all the participant nodes only.
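As a rough sketch, the Authorise and Initiate steps use the built-in ContractUpgradeFlow. Wrapped in minimal RPC-startable flows (IOUState and IOUContractV2 are the placeholder names from the sketch above), they might look like:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.ContractUpgradeFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

// Run on every node that is a participant of the state, to authorise the upgrade.
@StartableByRPC
class AuthoriseUpgrade(private val stateAndRef: StateAndRef<IOUState>) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        subFlow(ContractUpgradeFlow.Authorise(stateAndRef, IOUContractV2::class.java))
    }
}

// Run on one node to actually perform the upgrade of that state.
@StartableByRPC
class InitiateUpgrade(private val stateAndRef: StateAndRef<IOUState>) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        subFlow(ContractUpgradeFlow.Initiate(stateAndRef, IOUContractV2::class.java))
    }
}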
Other questions
There is no explicit upgrade step with Signature Constraints. You need to make sure you write your state in a backward-compatible way, build the new JAR, and replace the old JAR with the new one. The states will then refer to the new contract.
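For example (a hedged sketch; the names are placeholders), a backward-compatible v2 of a state adds new properties as nullable with defaults, so that states serialized by the old JAR still deserialize:

import net.corda.core.contracts.BelongsToContract
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party

@BelongsToContract(IOUContract::class)
data class IOUState(
        val value: Int,
        val lender: Party,
        val borrower: Party,
        val memo: String? = null  // new in v2: nullable with a default, so old states still load
) : ContractState {
    override val participants: List<AbstractParty> get() = listOf(lender, borrower)
}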
Hope that helps. Feel free to post more questions on the above answer or ping on Slack.
I have a nameko service that deals with lots of entities, and having the entrypoints in a single service.py module would render the module highly unreadable and harder to maintain.
So I've decided to split up the module into multiple services, which are then used to extend the main service. I am a bit worried about dependency injection and thought a dependency like the db might end up with multiple instances due to this approach. This is what I have so far:
the customer service module with all customer-related endpoints
# app/customer/service.py
from nameko.web.handlers import http

class HTTPCustomerService:
    """HTTP endpoints for customer module"""

    name = "http_customer_service"

    # Dependency providers are declared on the concrete subclass below.
    db = None
    context = None
    dispatch = None

    @http("GET,POST", "/customers")
    def customer_listing(self, request):
        session = self.db.get_session()
        return CustomerListController.as_view(session, request)

    @http("GET,PUT,DELETE", "/customers/<uuid:pk>")
    def customer_detail(self, request, pk):
        session = self.db.get_session()
        return CustomerDetailController.as_view(session, request, pk)
and the main service module, which inherits from the customer service and possibly other abstract services:
# app/service.py
class HTTPSalesService(HTTPCustomerService):
    """Nameko http service."""

    name = "http_sales_service"

    db = Database(Base)
    context = ContextData()
    dispatch = EventDispatcher()
and finally I run it with:
nameko run app.service
So this works well, but is the approach right, especially with regard to dependency injection?
Yep, this approach works well.
Nameko doesn't introspect the service class until run-time, so it sees whatever standard Python class inheritance produces.
One thing to note is that your base class is not "abstract" -- if you point nameko run at app/customer/service.py, it will attempt to run it. Relatedly, if you put your "concrete" subclass in the same module, nameko run will try to run both of them. You can mitigate this by specifying the service class, i.e. nameko run app.service:HTTPSalesService
I have a scenario where I want to use the Agent and the USM component to implement custom commands to control a computer, but I do not want to install an application or a service.
How can I reach this goal?
Thank you very much!
Steve
You can install an empty service, one that does not actually run anything. This is often called a 'recipe about nothing'. It should look something like this:
service {
    name "empty"
    type "WEB_SERVER"

    lifecycle {
        locator {
            NO_PROCESS_LOCATORS
        }
    }
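    // Hypothetical example of a custom command; the command name and closure
    // body are placeholders, and the exact DSL syntax should be checked
    // against the Cloudify docs for your version.
    customCommands ([
            "sayHello" : { println "hello from the empty recipe" }
    ])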
}
Add your custom commands here and you can run them from the Cloudify CLI/REST API as usual.
Note the use of the NO_PROCESS_LOCATORS directive - this tells the USM not to perform any process liveness scans.