Why does `app.sdb.load('Account', senderId)` return an object? - asch-development

When I call app.sdb.load('Account', senderId) with senderId set to a valid address, the result is (unexpectedly) an object containing the account info, like the one shown below:
{
"address":"AEc252iX7f75DzEYybe5EtfjwX8GEBsdxB",
"name":null,
"xas":100000000000,
"publicKey":null,
"secondPublicKey":null,
"isLocked":0,
"isAgent":0,
"isDelegate":0,
"role":0,
"lockHeight":0,
"agent":null,
"weight":0,
"agentWeight":0,
"_version_":1
}
According to the docs at https://github.com/AschPlatform/asch-docs/blob/master/sdk_api/en.md#11-aync-appsdbloadmodel-fields-indices:
The operation has no return value; it throws an Exception when an error occurs. Load the data for the specified model into memory and index the table, which can further improve the efficiency of the query. When a data model needs frequent updates and inquiries, it is recommended to use this interface, such as the system's built-in account balance; the increment ID uses this operation.
So did it change since version 1.4?

The function call app.sdb.load('Account', senderId) can only find an address if an address entry was created in the Accounts blockchain database table.
An entry is created when XAS is sent to this specific address. Because there can be hundreds of millions of possible addresses, we don't want all of them in the blockchain database from the beginning. That would only bloat the database and have no practical use.

Related

How to set a field for every document in a Cosmos db?

What would a Cosmos stored procedure look like that would set the PumperID field for every record to a default value?
We need to do this to repair some data, so the procedure would visit every record that has a PumperID field (not all docs have this) and set it to a default value.
Assuming a one-time data maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. This works for any Cosmos database and doesn't require deploying stored procedures or anything else.
Ideally, it is designed to be idempotent so it can be run multiple times if several passes are required to catch new data coming in. If the item count is large, one could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint on an Azure Virtual Machine or Function.
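If you happen to be on the JVM rather than .NET, the same one-off pattern is available in the Azure Cosmos Java SDK (v4). The sketch below is only illustrative: the endpoint, database/container names, the partition-key property, and the "needs fixing" filter are all assumptions to replace with your own.

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class PumperIdBackfill {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>")                                           // placeholder
                .buildClient();
        CosmosContainer container = client.getDatabase("<db>").getContainer("<container>");

        // Query only the items that still need fixing, so re-runs stay idempotent.
        // Adjust the filter to whatever "needs the default value" means in your data.
        String query = "SELECT * FROM c WHERE IS_DEFINED(c.PumperID) AND IS_NULL(c.PumperID)";
        for (ObjectNode item : container.queryItems(query, new CosmosQueryRequestOptions(), ObjectNode.class)) {
            item.put("PumperID", "some default value");
            container.replaceItem(
                    item,
                    item.get("id").asText(),
                    new PartitionKey(item.get("<partitionKeyProperty>").asText()), // placeholder
                    new CosmosItemRequestOptions());
        }
        client.close();
    }
}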
For scenarios where you want to iterate through every item in a container and update a property, the best means to accomplish this is to use the Change Feed Processor and run the operation in an Azure function or VM. See Change Feed Processor to learn more and examples to start with.
With Change Feed you will want to start it to read from the beginning of the container. To do this see Reading Change Feed from the beginning.
Then within your delegate you will read each item off the change feed, check its value, and call ReplaceItemAsync() to write it back if it needed to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");
    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            // call ReplaceItemAsync() to write the updated item back, etc.
        }
    }
    Console.WriteLine("Finished handling changes.");
}
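The same Change Feed Processor wiring also exists in the Java SDK (v4) if that's your stack. Here is a rough sketch of starting it from the beginning of the container; the host name, the container references, and the update check inside the handler are placeholders, not values required by the library.

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.ChangeFeedProcessorOptions;
import com.fasterxml.jackson.databind.JsonNode;

import java.util.List;

public class PumperIdChangeFeed {
    public static ChangeFeedProcessor build(CosmosAsyncContainer feedContainer,
                                            CosmosAsyncContainer leaseContainer) {
        ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();
        options.setStartFromBeginning(true); // read the whole container, not just new changes

        return new ChangeFeedProcessorBuilder()
                .hostName("pumperid-fixer")        // placeholder host name
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .options(options)
                .handleChanges((List<JsonNode> docs) -> {
                    for (JsonNode doc : docs) {
                        // Check the document's PumperID here and call replaceItem(...)
                        // on the feed container when it needs the default value,
                        // mirroring the C# delegate above.
                    }
                })
                .buildChangeFeedProcessor();
    }
}

As with the .NET version, it needs a separate lease container, and you run it by calling start() on the built processor (and stop() when you are done).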

Accounts Library: AccountInfoCommand doesn't have an Update command

The AccountInfo state has a field called status which is initialized with the value ACTIVE, but currently the AccountInfoCommand class has only one command, which is Create. Should we use that if we want to write a flow that deactivates an account (i.e. updates it, not creates it)? That doesn't feel right to me, since there should be certain checks associated with an update command (for example, that there is one input and one output with the same linearId, etc.).
Is there a reason why RequestAccountFlow was designed to return an AccountInfo as opposed to a StateAndRef? The latter would make it easier to request an AccountInfo and then use it as an input for a transaction (in my case, I would request an account, get its StateAndRef, clone it with the new status, use the StateAndRef as the input, and the clone with the new status as the output).
With the current Accounts implementation, the AccountInfo state no longer has the status field. https://github.com/corda/accounts/blob/master/contracts/src/main/kotlin/com/r3/corda/lib/accounts/contracts/states/AccountInfo.kt
And RequestAccountFlow was written to utilize ShareAccountInfoFlow (which returns an AccountInfo).
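If what you really want is the StateAndRef<AccountInfo> (for example, to consume it in your own "deactivate" transaction), the accounts library's AccountService can look it up by the account's UUID once the AccountInfo has been shared with your node. Below is a minimal sketch using the Java API; it assumes KeyManagementBackedAccountService is the service available on your node and that you already know the account id, and the flow name itself is made up.

import co.paralleluniverse.fibers.Suspendable;
import com.r3.corda.lib.accounts.contracts.states.AccountInfo;
import com.r3.corda.lib.accounts.workflows.services.KeyManagementBackedAccountService;
import net.corda.core.contracts.StateAndRef;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;

import java.util.UUID;

public class LookUpAccountFlow extends FlowLogic<StateAndRef<AccountInfo>> {
    private final UUID accountId;

    public LookUpAccountFlow(UUID accountId) {
        this.accountId = accountId;
    }

    @Suspendable
    @Override
    public StateAndRef<AccountInfo> call() throws FlowException {
        KeyManagementBackedAccountService accountService =
                getServiceHub().cordaService(KeyManagementBackedAccountService.class);

        // Returns null if the AccountInfo has never been shared with (or created on) this node.
        StateAndRef<AccountInfo> accountRef = accountService.accountInfo(accountId);
        if (accountRef == null) {
            throw new FlowException("Account " + accountId + " is not known to this node.");
        }

        // accountRef can now be used as a transaction input. Since the shipped AccountInfo
        // no longer carries a status field, an active/inactive flag would live in your own
        // state that references this account.
        return accountRef;
    }
}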

Corda oracles verification

I'm trying to understand how Corda oracles work from an example on GitHub. It seems like in every example the oracle verification function checks that the data in the command matches the data in the output state. I don't understand why that should work, because we (the issuer node) manage that data and put it in the command/output state.
// Our contract does not check that the Nth prime is correct. Instead, it checks that the
// information in the command and state match.
override fun verify(tx: LedgerTransaction) = requireThat {
    "There are no inputs" using (tx.inputs.isEmpty())
    val output = tx.outputsOfType<PrimeState>().single()
    val command = tx.commands.requireSingleCommand<Create>().value
    "The prime in the output does not match the prime in the command." using
            (command.n == output.n && command.nthPrime == output.nthPrime)
}
In this example the state gets the Nth prime from the oracle, but after it's issued the verification function doesn't rerun the generate-Nth-prime function to make sure that this number is really the one we asked for. I understand that the data in this example is deterministic, since the Nth prime cannot change, but what about cases where we have dynamic data like stock values? Shouldn't the oracle verification function also send another HTTP request and get the current values to check them?
Firstly, note that contracts in Corda are not able to access the outside world in any way (DB reads, HTTP requests, etc.). If they could, transaction validity would be non-deterministic. A transaction that is found to be valid on day n may become invalid on day n+1 (because a database row changed, or a website went down, etc.). This would cause disagreements about whether a given transaction was a valid ledger update.
However, we sometimes need a transaction to include external data for verification (whether a company is bankrupt, whether a natural catastrophe happened, etc.). To do this, we use a trusted oracle that only signs the transaction if a given piece of data is valid.
We could embed the information in the input or output states. However, this would require us to reveal the entire input or output state to the oracle for signing. For privacy reasons, it is therefore preferable to embed the data in a command that only contains the data of interest to the oracle, so that we can filter out all the other parts of the transaction and only present this command to the oracle for signing.
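In practice, the requesting node builds a FilteredTransaction that reveals only that command before sending it to the oracle. Here is a minimal sketch of that tear-off using Corda's Java API; the oracle's key and the command class (the Create command from the contract above) are passed in, since their concrete names depend on your CorDapp.

import net.corda.core.contracts.Command;
import net.corda.core.transactions.FilteredTransaction;
import net.corda.core.transactions.SignedTransaction;

import java.security.PublicKey;
import java.util.function.Predicate;

public final class OracleFiltering {
    // Reveal only commands that the oracle must sign and that carry its data
    // (e.g. the Create command holding n and nthPrime); everything else stays hidden.
    public static FilteredTransaction onlyOracleCommand(SignedTransaction stx,
                                                        PublicKey oracleKey,
                                                        Class<?> oracleCommandClass) {
        Predicate<Object> filtering = element -> {
            if (element instanceof Command) {
                Command<?> command = (Command<?>) element;
                return command.getSigners().contains(oracleKey)
                        && oracleCommandClass.isInstance(command.getValue());
            }
            return false;
        };
        return stx.buildFilteredTransaction(filtering);
    }
}

The oracle then recomputes or looks up the value for the one command it can see, and only returns its signature over the filtered transaction if the data matches.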
The oracle will usually perform a DB read or make an HTTP request to check the validity of the data before signing.

Oracle Coherence: is there a way to force the invocation of an agent on a specific node?

I have a replicated cluster composed of several nodes (up to 30) on which a single Java process accesses the Coherence cache, and I use the map.invoke(key, agent) method for both creation and update. The creation and the update are performed by setting the value in the process method.
Example (agent is an instance of a ConcreteEntryProcessor implementing the EntryProcessor interface):
map.invoke(key, agent);
Which invoke the following code of agent object:
public Object process(Entry entry) {
    if (entry.isPresent())
    {
        // UPDATE
        // ... some stuff which computes the new entry value ...
        entry.setValue(newValue, true);
        return newValue;
    }
    else
    {
        // CREATION
        // ... other stuff to determine the value ...
        entry.setValue(value, true);
        return value;
    }
}
I noticed that if the update is made by the node that created the agent I get good performance; otherwise there is a performance decrease when the update is made from a different node. It seems that there is a kind of ownership of the data.
Is there a way to force the execution of an agent on the local node or change the ownership of data?
It all depends on the cache configuration. If you use a distributed (partitioned) cache, then indeed there is a kind of data ownership. In that case the entry processor is invoked on the node that owns the given key.
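To see that ownership directly, you can ask a partitioned cache's service which member currently owns a given key. A rough sketch (the cache name and key are placeholders, and this only applies to distributed/partitioned caches):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.DistributedCacheService;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;

public class KeyOwnership {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("my-cache"); // placeholder cache name
        DistributedCacheService service = (DistributedCacheService) cache.getCacheService();

        Object key = "some-key"; // placeholder key
        Member owner = service.getKeyOwner(key);
        Member local = CacheFactory.getCluster().getLocalMember();

        // invoke(key, agent) runs the EntryProcessor on "owner", not necessarily on this member.
        System.out.println("Key is owned by: " + owner);
        System.out.println("Running locally on: " + local);
    }
}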
As for your performance issues, I see two possibilities:
Performance of map.invoke(key, agent) decreases, but performance of EntryProcessor.process(entry) is stable. In that case your performance issues are probably caused by the serialization and network traffic needed to send the result of the processing back to the node that called map.invoke(key, agent). If you don't need this value on that node, then simply return null from your entry processor (see the sketch below).
Performance of EntryProcessor.process(entry) decreases. In that case maybe your create/update logic needs some data from the node that called map.invoke(key, agent). So it is again a serialization/network traffic issue, but without knowing the details of your particular logic it is hard to find a solution to your issue.
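For the first possibility, a sketch of what "return null" looks like in the entry processor; computeValue() is a stand-in for your existing create/update logic.

import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// The processor still creates/updates the entry, but returns null so no result
// has to be serialized and shipped back to the node that called map.invoke(key, agent).
public class QuietUpdateProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        Object newValue = computeValue(entry);
        entry.setValue(newValue, true);
        return null; // nothing to send back over the network
    }

    private Object computeValue(InvocableMap.Entry entry) {
        // ... your existing "compute the new entry value" / "determine the value" logic ...
        return entry.isPresent() ? entry.getValue() : "initial value";
    }
}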

Salesforce Batch Apex Class - Querying Against Large Data Sets

I have a batch Apex class where I'm building collections of websites and emails, so that I can use those collections to filter other queries, which will also be made into collections. With all collections set, I want to run through a final loop of the scope to perform business processes.
Mockup:
for (Object o : scope)
{
    listEmails.add(o.Email);
    listWebsites.add(o.Websites);
}

Map<String, Account> accounts = Gather all accounts where website not in :listWebsites; //Website is key
Map<String, Contact> contacts = Gather all contacts where email not in :listEmails; //Email is key

for (Object o : scope)
{
    Account account = accounts.get(o.Website);
    Contact contact = contacts.get(o.Email);
    // Perform business logic here
}
The problem is that when I run this batch it stays processing for hours. With a rather small database this works fine, but in a larger environment perhaps this is not the best solution.
Can anyone help me speed up the batch process with a more effective approach?
Is there any way to post the entire batch Apex class, or to help us understand the data more?
From your map it looks like all of your accounts (in theory) have unique websites and all of your contacts have unique emails?
I assume you build those maps by hand? That is, you loop over the accounts and do a
map.put(account.website, account)?
Do you have any system debug statements to confirm your map sizes?
What happens if there is no account or no contact when you call accounts.get()?
And the business logic - is it more looping?
And are you using batch variables in a static manner, i.e. do you have a counter to count the total number of records processed? If so, is your variable a list? That can be dangerous, of course.
Also, what object is your scope object? Not that it matters, but I'd think you'd want your scope to be the Accounts themselves or the Contacts themselves.
I'd try adding system.debug statements to your batch to verify it's running and to see where the infinite loop may be occurring.
