We're starting to see issues occurring when sending transactions to counter-parties over the Corda network, but it's very intermittent, so I'm wondering whether it's a transaction size issue.
What is the default Corda transaction size?
Can the default be changed, or is it set by the network?
How can we obtain the size of a serialised transaction?
The default Corda transaction size is 524 MB (the default value of the maxTransactionSize network parameter).
The transaction size limit is set at the network level, by the network operator.
For development use cases, you can use the network bootstrapper to update the transaction size from the command line. Run the command below:
java -jar network-bootstrapper-4.1.jar --max-transaction-size new_value_in_bytes
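For example, to raise the limit to 20 MB you would pass the new size in bytes (the jar name matches the command above; adjust it to the bootstrapper version you actually use):
java -jar network-bootstrapper-4.1.jar --max-transaction-size 20971520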
We are using the Cosmos DB SDK, version 2.9.2, to perform document CRUD operations. Usually the end-to-end P95 latency is 20 ms, but sometimes the latency is over 1000 ms. The high-latency period lasts from 10 hours to 1 day. The collection is not throttling.
We have gathered some background information from:
https://icm.ad.msft.net/imp/v3/incidents/details/171243015/home
https://icm.ad.msft.net/imp/v3/incidents/details/168242283/home
There are some diagnostics strings in the tickets.
We know that the client maintains a cache of the mapping from logical partitions to physical replica addresses. This mapping may become outdated because of replica movement or outages, so the client tries to read from the second/third replica. However, this retry has a significant impact on end-to-end latency. We also observe that the high latency/timeouts can last for several hours, even days. I expect there is some mechanism in the client for refreshing the mapping cache, but it seems the client only stops visiting more than one replica after we redeploy our service.
Here are my questions:
How can the client tell whether it is unable to connect to a certain replica? Will the client wait until timeout, or will the server tell the client that the replica is unavailable?
Under what conditions will the mapping cache be refreshed? We are using Session consistency and TCP mode (see the configuration sketch after these questions).
Will restarting our service force the cache to be refreshed, or does refreshing only happen when the machine restarts?
When we find there is a replica outage, is there any way to mitigate it quickly?
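For reference, here is roughly how the Session consistency and TCP (Direct) mode mentioned above are configured. This is only a minimal sketch, assuming the Java async SDK 2.x (com.microsoft.azure:azure-cosmosdb); builder method names differ between SDK flavors and versions, and the class name, endpoint, and key are placeholders.

import com.microsoft.azure.cosmosdb.ConnectionMode;
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;

// Illustrative helper showing Direct (TCP) mode plus Session consistency.
public class ClientConfigSketch {
    public static AsyncDocumentClient buildClient() {
        // Direct mode talks to the replicas over TCP instead of going through the gateway.
        ConnectionPolicy policy = new ConnectionPolicy();
        policy.setConnectionMode(ConnectionMode.Direct);

        // Placeholder endpoint and key; use your account's actual values.
        return new AsyncDocumentClient.Builder()
                .withServiceEndpoint("https://<account>.documents.azure.com:443/")
                .withMasterKeyOrResourceToken("<primary-key>")
                .withConnectionPolicy(policy)
                .withConsistencyLevel(ConsistencyLevel.Session)
                .build();
    }
}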
What operations are performed (Document CRUD or query)?
And what are the observed latencies and frequencies? Also, please check whether the collection is throttling (with a custom throttling policy).
The client does manage some metadata and handles its staleness efficiently, within SLA bounds.
Can you please create a support ticket with the account details and the 'RequestDiagnostics', and we shall look into it.
I tried sending almost 5000 output states in a single transaction and ran out of memory. I am trying to figure out how to increase the memory. I tried increasing it in the runnodes.bat file by tweaking the command:
java -Xmx1g -jar runnodes.jar %*
But this doesn't seem to increase the heap size. So I tried running the following command manually for each node, with the memory option -Xmx1g added:
bash -c 'cd "/build/nodes/Notary" ; "/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home/jre/bin/java" "-Dname=Notary-corda.jar" "-Dcapsule.jvm.args=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7005,logHandlerClass=net.corda.node.JolokiaSlf4Adapter" "-Xmx1g" "-jar" "corda.jar" && exit'
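A less manual alternative (assuming a Corda release whose Capsule launcher reads custom.jvmArgs from node.conf; check the documentation for your version) is to set the heap per node in that node's node.conf, which runnodes should then pick up:
custom = {
    jvmArgs = [ "-Xmx1g" ]
}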
Running the command manually with -Xmx1g solved the out-of-memory issue, but now I have started seeing an ActiveMQ large-message-size issue:
E 10:57:31-0600 [Thread-1 (ActiveMQ-IO-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$4#2cfd9b0a)] impl.JournalImpl.run - appendAddRecord::java.lang.IllegalArgumentException: Record is too large to store 22545951 {}
java.lang.IllegalArgumentException: Record is too large to store 22545951
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.switchFileIfNecessary(JournalImpl.java:2915) ~[artemis-journal-2.2.0.jar:2.2.0]
Any idea?
This is because you are trying to send a transaction that is over 20MB in size. In Corda 3 and earlier, the limit on transaction size is 10MB, and this amount is not configurable.
In Corda 4, the limit on transaction size can be configured by the network operator as one of the network parameters (see https://docs.corda.net/head/network-map.html#network-parameters). The reasoning behind having a limit at all is that otherwise larger nodes could bully smaller nodes off the network by sending extremely large transactions that would be infeasible for the smaller nodes to process.
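If you want to confirm the limit your own node is operating under, the Corda 4 API exposes it through the network parameters. Below is a minimal sketch; the flow class name is purely illustrative:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;

// Illustrative flow that simply returns the network-wide maximum transaction size, in bytes.
@StartableByRPC
public class ReadMaxTransactionSize extends FlowLogic<Integer> {
    @Suspendable
    @Override
    public Integer call() {
        return getServiceHub().getNetworkParameters().getMaxTransactionSize();
    }
}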
If I build a CorDapp in open-source Corda that walks through the whole transaction chain to collect some data, I suppose that won't be possible once SGX is enabled in the Enterprise version, right?
That's correct. When SGX is enabled, the transaction chain will be stored in encrypted form, readable only by the enclave. You thus won't be able to walk through and read the contents of the chain.
Admission control is embedded within each impalad daemon and communicates through the statestore service. The impalad daemon determines if a query runs immediately or if the query is queued.
However, if a sudden flow of requests causes more queries to run concurrently than expected, the overall Impala memory limit and the Linux cgroups mechanism at the cluster level serve as hard limits to prevent over-allocation of memory. When queries hit these limits, Impala cancels them.
Does this mean Impala resource limits are enforced at the individual Impala daemon level or at the cluster level?
The answer is both. Each impalad daemon has its own MEM_LIMIT; exceeding it will cause the query to be cancelled. The admission control pool works at the cluster level, but the gatekeeping (deciding whether a query runs or is queued) happens at each impalad, and each impalad makes that admission decision based on the pool's cluster-level resource usage. That is why, when a flood of queries is sent to different impalad instances, the daemons might admit more queries than they should: they cannot get the most current cluster resource usage information at the time. The cgroup limit does not cause queries to be cancelled; it determines the percentage of CPU the impalad gets when there is CPU contention.
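For example, the per-daemon cap can be adjusted per query (or per session) with the MEM_LIMIT query option in impala-shell; the value below is only illustrative:
SET MEM_LIMIT=2gb;
Queries that then exceed roughly 2 GB of memory on any single host spill to disk where the operators support it, or are cancelled.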
Because a required pipeline component seems to have trouble hitting a database to retrieve details of messages, I am planning to use host throttling to limit the number of files BizTalk is processing at the receive location. I want to be able to indicate that X messages should be processed within Y seconds (or any other feasible timespan). Does anyone know which throttling settings can be used to force this behavior?
I know how to set the values; however, I cannot find the best configuration.
(Note: one possible solution would be to adjust the pipeline, but it contains third-party components which cannot be modified.)
From How BizTalk Server Implements Host Throttling, BizTalk looks at:
Amount of memory in use (both system-wide and host process memory).
Number of in-process messages being delivered or processed (threshold for outbound throttling).
Number of threads in use.
Database size, measured by the number of items in the queue tables for all hosts and the number of items in the spool and tracking tables.
Number of concurrent database connections.
Rate of message publishing (inbound) and delivery or processing (outbound).
The only one that throttles inbound is the rate of message publishing, but that is applied after the pipeline/port has processed the message, so it may not be of any use in this scenario; you would have to test that.
You will probably want to set up that process under its own host, so that if it hits throttling thresholds it does not throttle everything else as well.
If possible, you should move the component to a send port pipeline, as throttling send ports is much more controllable. One way is to set the send port to ordered delivery, although that can cause a backlog, especially if you get a suspended message.
I think your most straightforward approach here would be to write a custom adapter. Unfortunately, the out-of-the-box File adapter does not directly support throttling/polling intervals, and I don't think the suggestions given already would directly impact the custom pipeline processing if it's hitting the DB directly through ADO.NET (but it couldn't hurt to try). You can set the BatchSize property on the file adapter settings, but even then there's nothing stopping it from submitting that batch size as fast as it possibly can, over and over again.
A custom adapter could be created to wait for a period before submitting additional files for processing. You could base it on the SDK File Adapter sample.