I just started using Corda. I know that in unit testing, I'm supposed to add the line
setCordappPackages("net.corda.finance")
But when I debug using NodeDriver, I just receive this message:
net.corda.core.transactions.MissingContractAttachments: Cannot find contract attachments for [net.corda.finance.contracts.asset.Cash]
What's missing?
You set the packages the driver scans as follows:
driver(
    startNodesInProcess = true,
    extraCordappPackagesToScan = listOf("net.corda.examples.attachments"),
    isDebug = true
) {
    ...
}
In your case, pass listOf("net.corda.finance") so the driver nodes can find the CorDapp containing the Cash contract.
I have been using .NET Core and the RabbitMQ client to publish messages to a queue. I am using the official RabbitMQ Docker image, and I can validate that the BasicPublish method is working fine and that I can get the message as well. However, when I open the admin dashboard, I do not see any queues or messages there.
This is the first time I am using RabbitMQ, so can someone please highlight where I am making a mistake? I am also pasting the code for reference.
channel.QueueDeclare(queue: queueName, durable: false, exclusive: false, autoDelete: false, arguments: null);
var message = JsonConvert.SerializeObject(publishModel);
var body = Encoding.UTF8.GetBytes(message);
IBasicProperties properties = channel.CreateBasicProperties();
properties.Persistent = true;
properties.DeliveryMode = 2;
channel.ConfirmSelect();
channel.BasicPublish(exchange: "", routingKey: queueName, mandatory: true, basicProperties: properties, body: body);
channel.WaitForConfirmsOrDie();
RabbitMQ Admin Dashboard
Let me know if more details are required.
I created the following resource to encrypt all disks of a VM (VolumeType 'All'), and it worked fine at first:
resource "azurerm_virtual_machine_extension" "vm_encry_win" {
  count                      = "${var.vm_encry_os_type == "Windows" ? 1 : 0}"
  name                       = "${var.vm_encry_name}"
  location                   = "${var.vm_encry_location}"
  resource_group_name        = "${var.vm_encry_rg_name}"
  virtual_machine_name       = "${var.vm_encry_vm_name}"
  publisher                  = "${var.vm_encry_publisher}"
  type                       = "${var.vm_encry_type}"
  type_handler_version       = "${var.vm_encry_type_handler_version == "" ? "2.2" : var.vm_encry_type_handler_version}"
  auto_upgrade_minor_version = "${var.vm_encry_auto_upgrade_minor_version}"
  tags                       = "${var.vm_encry_tags}"

  settings = <<SETTINGS
{
  "EncryptionOperation": "${var.vm_encry_operation}",
  "KeyVaultURL": "${var.vm_encry_kv_vault_uri}",
  "KeyVaultResourceId": "${var.vm_encry_kv_vault_id}",
  "KeyEncryptionKeyURL": "${var.vm_encry_kv_key_url}",
  "KekVaultResourceId": "${var.vm_encry_kv_vault_id}",
  "KeyEncryptionAlgorithm": "${var.vm_encry_key_algorithm}",
  "VolumeType": "${var.vm_encry_volume_type}"
}
SETTINGS
}
When I ran it the first time, ADE encryption was applied to both the OS and data disks.
However, when I re-run terraform plan or terraform apply, it wants to replace all the data disks I have already created, as the following screenshot illustrates.
I do not know how to solve this, and my already created disks should not be replaced.
I checked suggestions along the lines of ignore_changes:
lifecycle {
ignore_changes = [encryption_settings]
}
I am not sure where to add it, or whether this actually solves the problem.
Which resource block should I add it to?
Or is there another way?
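If the plan shows the replacement being driven by changed encryption settings on the disks, one common approach is to put the lifecycle block inside the resource that Terraform wants to replace. A minimal sketch, assuming the data disks are azurerm_managed_disk resources (the resource name "data_disk" here is hypothetical):

```hcl
resource "azurerm_managed_disk" "data_disk" {
  # ... your existing disk arguments ...

  lifecycle {
    # Ignore the encryption settings that Azure Disk Encryption writes
    # back to the disk, so re-running terraform does not force replacement.
    ignore_changes = [encryption_settings]
  }
}
```

Note that on Terraform 0.11 and earlier (the interpolation syntax in your snippet suggests this) the attribute names must be quoted strings: ignore_changes = ["encryption_settings"].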
After updating the HERE SDK to version 3.12, we started getting a GRAPH_DISCONNECTED error when calling the calculateRoute method of the CoreRouter class (Wi-Fi and mobile data turned off).
Update
This is how we are creating and using CoreRouter:
val routeOptions = RouteOptions().apply {
    transportMode = TransportMode.SCOOTER
    routeType = RouteOptions.Type.FASTEST
    routeCount = 1
}

val routePlan = RoutePlan()
routePlan.routeOptions = routeOptions

val fromGeoCoordinate = GeoCoordinate(from.latitude, from.longitude)
val destinationGeoCoordinate = GeoCoordinate(destination.latitude, destination.longitude)
routePlan.addWaypoint(RouteWaypoint(fromGeoCoordinate))
routePlan.addWaypoint(RouteWaypoint(destinationGeoCoordinate))

val coreRouter = CoreRouter()
coreRouter.connectivity = CoreRouter.Connectivity.DEFAULT
coreRouter.calculateRoute(
    routePlan,
    object : Router.Listener<List<RouteResult>, RoutingError> {
        override fun onCalculateRouteFinished(routes: List<RouteResult>?, error: RoutingError?) {
            Log.d(TAG, "onCalculateRouteFinished")
        }

        override fun onProgress(p0: Int) {
            Log.d(TAG, "onProgress")
        }
    })
Android version: 8.1.0
Currently, we are using version 3.9.0, which works fine in the same scenario.
Is there something else we need to do on our side to get it working with the new version?
In our latest release (3.16), you have to follow these steps to download and preload maps for offline use:
Map Package Download
The second method of getting offline maps capabilities is enabled through the use of MapLoader and its associated objects. The MapLoader class provides a set of APIs that allow manipulation of the map data stored on the device. Operations include:
getMapPackages() - to retrieve the map data state on the device
installMapPackages(List packageIdList) - to download and install new country or region data
checkForMapDataUpdate() - to check whether a new map data version is available
performMapDataUpdate() - to perform a map data version update if available
The complete documentation with snippets is at:
https://developer.here.com/documentation/android-premium/3.16/dev_guide/topics/routing-offline.html
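The flow above can be sketched as follows. This is only an outline using the MapLoader operations listed above; it assumes MapLoader.getInstance() from the premium SDK, results are delivered asynchronously to a registered MapLoader.Listener, and packageId is a placeholder for an id taken from the package tree (see the linked guide for the exact callback signatures):

```kotlin
// Sketch only: preload a region so routing works offline.
val mapLoader = MapLoader.getInstance()

// 1. Retrieve the current map-data state on the device.
//    The package tree arrives via a MapLoader.Listener callback.
mapLoader.getMapPackages()

// 2. From the callback, pick the ids of the countries/regions you need
//    and download them. "packageId" is a hypothetical id from step 1.
mapLoader.installMapPackages(listOf(packageId))

// 3. Later, check whether a new map data version is available and apply it.
mapLoader.checkForMapDataUpdate()
mapLoader.performMapDataUpdate()
```

With the packages installed, CoreRouter.Connectivity.OFFLINE can then route against the on-device data instead of failing with GRAPH_DISCONNECTED.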
I have defined my Driver-based tests under the src/integrationTest/kotlin/com.example.IntegrationTest directory of my CorDapp project:
class IntegrationTest {
    private val nodeAName = CordaX500Name("NodeA", "", "GB")
    private val nodeBName = CordaX500Name("NodeB", "", "US")

    @Test
    fun `run driver test`() {
        driver(DriverParameters(isDebug = true, startNodesInProcess = true)) {
            // This starts both nodes simultaneously with startNode, which returns a future that completes when
            // the node has completed startup. These are then all resolved with getOrThrow, which returns the
            // NodeHandle list.
            val (nodeAHandle, nodeBHandle) = listOf(
                startNode(providedName = nodeAName),
                startNode(providedName = nodeBName)
            ).map { it.getOrThrow() }

            // This test calls via the RPC proxy to find a party of another node, to verify that the nodes have
            // started and can communicate. This is a very basic test; in practice tests would be starting flows
            // and verifying the states in the vault and other important metrics to ensure that your CorDapp is
            // working as intended.
            Assert.assertEquals(nodeAHandle.rpc.wellKnownPartyFromX500Name(nodeBName)!!.name, nodeBName)
            Assert.assertEquals(nodeBHandle.rpc.wellKnownPartyFromX500Name(nodeAName)!!.name, nodeAName)
        }
    }
}
If we execute the tests using gradle integrationTest from the command line, how can we ensure that the integration tests ran successfully?
When run from the IntelliJ IDE, the JUnit tests work as expected, with appropriate test reports/logs.
To ensure the integration tests are actually run, you need to use the clean argument:
./gradlew clean integrationTest
The output of this command doesn't always make it clear which tests have been run. You can make it display more information using the --info flag:
./gradlew clean integrationTest --info
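If you want the outcome of each test printed without resorting to --info, another option is to enable test logging on the task. A sketch, assuming your build.gradle already defines the standard integrationTest source set and task (as the Corda templates do):

```groovy
task integrationTest(type: Test) {
    // Assumes the integrationTest source set already exists.
    testClassesDirs = sourceSets.integrationTest.output.classesDirs
    classpath = sourceSets.integrationTest.runtimeClasspath

    testLogging {
        // Print each test's outcome and full stack traces to the console.
        events "passed", "failed", "skipped"
        exceptionFormat "full"
    }
}
```

Gradle also writes an HTML report to build/reports/tests/integrationTest/ after each run, which lists exactly which tests executed and their results.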
First, let me explain why I need to do this.
I have an inbound port with an EDIReceive pipeline configuration. It receives EDI X12 837I files and disassembles them into 837I messages.
One file failed with the error description below:
The following elements are not closed: ns0:X12_00501_837_I. Line 1, position 829925.
It looks like the incoming file has a structural issue that prevents the disassembler from producing the message correctly. But the error itself doesn't help locate the issue, and no TA1 or 999 was generated to help us locate it either.
So I created a little console application using the Pipeline Component Test Library to run this file through the EdiDisassembler pipeline component and see what causes the error.
The code is pretty straightforward:
namespace TestEDIDasm
{
    using System;
    using System.IO;
    using Microsoft.BizTalk.Edi.Pipelines;
    using Microsoft.BizTalk.Message.Interop;
    using Winterdom.BizTalk.PipelineTesting;
    using Microsoft.BizTalk.Edi.BatchMarker;

    class Program
    {
        static void Main(string[] args)
        {
            var ediDasmComp = new EdiDisassembler();
            ediDasmComp.UseIsa11AsRepetitionSeparator = true;
            ediDasmComp.XmlSchemaValidation = true;

            var batchMaker = new PartyBatchMarker();

            IBaseMessage testingMessage = MessageHelper.LoadMessage(@"c:\temp\{1C9420EB-5C54-43E5-9D9D-7297DE65B36C}_context.xml");

            ReceivePipelineWrapper testPipelineWrapper = PipelineFactory.CreateEmptyReceivePipeline();
            testPipelineWrapper.AddComponent(ediDasmComp, PipelineStage.Disassemble);
            testPipelineWrapper.AddComponent(batchMaker, PipelineStage.ResolveParty);

            var outputMessages = testPipelineWrapper.Execute(testingMessage);
            if (outputMessages.Count <= 0)
            {
                Console.WriteLine("No output message");
                Console.ReadKey();
                return;
            }

            var msg = outputMessages[0];
            StreamReader sr = new StreamReader(msg.BodyPart.Data);
            Console.WriteLine(sr.ReadToEnd());
            Console.ReadKey();
        }
    }
}
I added some breakpoints but ended up with the following error in the message context:
"X12 service schema not found"
Clearly, the EdiDisassembler component relies on some other infrastructure to do its job.
Now to my questions:
Is there any way to make EdiDisassembler work in a testing environment?
Is there any other way to debug/trace the disassembler component processing a file, other than the Pipeline Component Test Library?
Theoretically, sure, but you would have to replicate a lot of engine context that exists during pipeline execution. The EDI components have issues running even inside Orchestrations, so it's likely a pretty tall order.
Have you tried a Preserve Interchange Pipeline with the Fallback Settings? That's about as simple as you can get with the EDI Disassembler.