I am implementing a custom currency-like issued FungibleAsset in Corda 3.4; the token is a simple enumeration.
I am stuck with the generateSpend(...) method.
net.corda.finance.contracts.asset.Cash.generateSpend(...) uses AbstractCashSelection.unconsumedCashStatesForSpending(...), which under the hood calls VaultService.softLockReserve(...).
Questions:
1. I've never found a usage of VaultService.softLockRelease(...) in the cash flow; is the lock released implicitly?
2. Should we implement an AbstractCashSelection-like CustomTokenSelection and create a copy of the cash flow?
3. Is the current cash flow production-ready?
Please consider using/contributing to the new Tokens SDK (https://github.com/corda/token-sdk), which will supersede the experimental Finance module (and its current Cash contract).
I haven't worked with the Token SDK, but from Corda's history it looks something like this:
override val amount: Amount<Issued<Equity>>
In which Equity would look like:
@CordaSerializable
data class Equity(
    val isin: String,
    val defaultFractionDigits: Int = 0
) : TokenizableAssetInfo {
    override val displayTokenSize: BigDecimal
        get() = BigDecimal.ONE.scaleByPowerOfTen(-defaultFractionDigits)
}
Again, some of the code above may be slightly out of date, as I haven't worked with token-sdk, but it is a useful reference for how fungible states are used with Amount; something similar is probably going on underneath within the SDK.
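To make displayTokenSize concrete, here is a self-contained Kotlin sketch. The TokenizableAssetInfo interface below is a hypothetical minimal stand-in for the real Corda interface (reproduced only so the snippet compiles on its own), and the ISIN value is just an illustrative placeholder:

```kotlin
import java.math.BigDecimal

// Minimal stand-in for Corda's TokenizableAssetInfo, so this compiles alone.
interface TokenizableAssetInfo {
    val displayTokenSize: BigDecimal
}

data class Equity(
    val isin: String,
    val defaultFractionDigits: Int = 0
) : TokenizableAssetInfo {
    // The value of one token unit: 10^-defaultFractionDigits.
    override val displayTokenSize: BigDecimal
        get() = BigDecimal.ONE.scaleByPowerOfTen(-defaultFractionDigits)
}

fun main() {
    // With two fraction digits the smallest displayable unit is 0.01,
    // so a raw token quantity of 150 displays as 1.50.
    val equity = Equity(isin = "US0378331005", defaultFractionDigits = 2)
    println(equity.displayTokenSize)                   // 0.01
    println(BigDecimal(150) * equity.displayTokenSize) // 1.50
}
```

This mirrors how Amount pairs an integer quantity with a token's display scale: the raw Long quantity is multiplied by displayTokenSize to get the human-readable amount.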
Related
Question on reference states in Corda 4: if a state has two fields of type LinearPointer, does Corda automatically resolve those two pointers and add them to the transaction's reference states even if they are not added in the flow code? If yes, is there any reason why Corda does that? I am referring to the function below:
https://github.com/corda/corda/blob/6769b00ed5249e2eb798428a35e54ab740cf3bee/core/src/main/kotlin/net/corda/core/transactions/TransactionBuilder.kt#L540
It is called every time we call addInput, addOutput, etc.
For example:
data class IOUState(val value: Int,
                    val lenderParty: Party,
                    val borrowerParty: Party,
                    val lender: LinearPointer<IDState>,
                    val borrower: LinearPointer<IDState>,
                    override val linearId: UniqueIdentifier = UniqueIdentifier()):
I only wish to add lender to the reference states, but I noticed that Corda internally adds borrower to tx.referenceStates as well.
Yes! It does get added automatically.
If you do not want the states auto-added as reference states, then store a linear ID in the state instead of a linear pointer: keep lender as a LinearPointer and make borrower a plain linearId. That should solve the problem!
The reason reference states are added automatically is to give you the extra assurance that they are 1) current and 2) have verified chains of provenance.
Also, this way, they can be easily resolved by counter-parties.
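The behaviour can be illustrated with a stand-alone Kotlin sketch. LinearPointer and UniqueIdentifier below are simplified hypothetical stand-ins for the real Corda types, and pointerFields only roughly mimics how TransactionBuilder scans a state's fields for pointers:

```kotlin
import java.util.UUID

// Simplified stand-ins for Corda's types (not the real API).
data class UniqueIdentifier(val id: UUID = UUID.randomUUID())
data class LinearPointer(val pointer: UniqueIdentifier)

// `lender` is a pointer and would be auto-resolved into reference states;
// `borrower` is a plain identifier and would be left alone.
data class IOUState(
    val value: Int,
    val lender: LinearPointer,
    val borrower: UniqueIdentifier
)

// Rough imitation of TransactionBuilder's pointer scan: reflect over the
// state's declared fields and collect every LinearPointer value found.
fun pointerFields(state: Any): List<LinearPointer> =
    state.javaClass.declaredFields
        .onEach { it.isAccessible = true }
        .mapNotNull { it.get(state) as? LinearPointer }

fun main() {
    val state = IOUState(10, LinearPointer(UniqueIdentifier()), UniqueIdentifier())
    println(pointerFields(state).size) // 1: only `lender` is picked up
}
```

The point of the sketch: the scan is driven purely by field types, so swapping a field's type from a pointer to a plain identifier is what removes it from automatic resolution.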
I have created the MockNetwork and MockNodes for testing the CorDapp.
Then I successfully executed the flows with a state; this stored the states on the ledger.
I'm able to fetch previously stored states using:
    mockNode1.rpcOps.vaultAndUpdates().first
        .filterStatesOfType<SsiState>()
But I am unable to fetch the same states using the vaultService of mockNode1:
    mockNode1.services.vaultService.track().first.states
or
    mockNode1.vault.track().first.states
What could be the cause?
The solution would be to rebase to Corda M13. In M12.1, the new vault query interface (query(), track()) was only partially implemented, which is why it is not behaving as expected.
Alternatively, if you wish to remain on M12.1 you can use mockNode1.services.vaultService.states() instead. It is worth noting that this method will be deprecated going forward in favour of the new interface which you initially tried to use, and which is defined here: https://docs.corda.net/api-vault.html
From release M13 of Corda, in the cordapp-tutorial example, there are some constraint checks made within the flow itself (ExampleFlow.Acceptor). My question is: which constraints should I check in the flow, and which in the contract? Or is it just a matter of organisation?
This is a great question. I believe you are referring to:
@InitiatedBy(Initiator::class)
class Acceptor(val otherParty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val signTransactionFlow = object : SignTransactionFlow(otherParty) {
            override fun checkTransaction(stx: SignedTransaction) = requireThat {
                val output = stx.tx.outputs.single().data
                "This must be an IOU transaction." using (output is IOUState)
                val iou = output as IOUState
                "The IOU's value can't be too high." using (iou.iou.value < 100)
            }
        }
        return subFlow(signTransactionFlow)
    }
}
The CollectSignaturesFlow and its counterpart, the SignTransactionFlow, automate the collection of signatures for any type of transaction. This automation is super useful because developers don't have to manually write flows for signature collection anymore! However, developers have to be aware that, given any valid transaction (as per the contract code referenced in the transaction), the counter-party will always sign! This is because transactions are validated in isolation, not relative to some expected external values.
Let me provide two examples:
If I have access to one of your unspent cash states from a previous transaction, then perhaps I could create a cash spend transaction (from you to me) and ask you to sign it via the CollectSignaturesFlow. If the transaction is valid then, without any additional checking, you'll sign it which will result in you sending me the cash. Clearly you don't want this!
The contract code can only go part of the way to verifying a transaction. If you want to check that the transaction represents a deal you want to enter into, e.g. price < some amount, then you'll have to do some additional checking. The contract code cannot opine on what constitutes a commercially viable deal for you. This checking has to be done as part of the SignTransactionFlow, by overriding checkTransaction as in the code above.
In a production CorDapp, one may wish to defer to human judgement on whether to sign a transaction and enter into a deal. Or alternatively, this process could be automated by reaching out to some external reference data system via HTTP API or MQ to determine if the deal is one that should be entered into.
In the code example above, we added two simple constraints:
One which prevents a borrower from creating an overly large (greater than 100) IOU state
One which ensures the transaction does indeed contain an IOU state and not some other state we are not expecting
Note that these two constraints cannot be placed inside the contract code. Contract code is more appropriate for defining the constraints that govern how an asset or agreement should evolve over time. For example, with regards to an IOU:
Issuances must be signed by lender and borrower
Issuances must be for a value greater than zero
Redemptions must involve a cash payment of the correct currency
The IOU must be redeemed before the expiry date
More examples available here: https://github.com/roger3cev/iou-cordapp-v2/blob/master/src/main/kotlin/net/corda/iou/contract/IOUContract.kt
Remember, Corda is designed for potentially mutually distrusting parties to enter into consensus about shared facts. As such, nodes cannot implicitly trust what they receive over the wire from their counter-parties, therefore we always have to check what we receive is what we expect to receive.
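To illustrate the split, here is a self-contained Kotlin sketch of the flow-side check. The requireThat/using DSL below is a hypothetical minimal re-implementation of Corda's (the real one lives in net.corda.core.contracts), and IOUState is reduced to the one field being checked:

```kotlin
// Hypothetical minimal re-implementation of Corda's requireThat DSL.
class Requirements {
    // Fails with the given message when the condition does not hold.
    infix fun String.using(condition: Boolean) {
        if (!condition) throw IllegalArgumentException("Failed requirement: $this")
    }
}

fun requireThat(checks: Requirements.() -> Unit) = Requirements().checks()

// Reduced stand-in for the tutorial's IOU state.
data class IOUState(val value: Int)

// Flow-side check: a commercial rule (value < 100) that the contract
// deliberately does not encode, plus a structural sanity check.
fun checkCommercialTerms(output: Any) = requireThat {
    "This must be an IOU transaction." using (output is IOUState)
    "The IOU's value can't be too high." using ((output as IOUState).value < 100)
}

fun main() {
    checkCommercialTerms(IOUState(50))   // passes silently
    runCatching { checkCommercialTerms(IOUState(150)) }
        .onFailure { println(it.message) }
}
```

The contract-level rules (signatures, non-zero value, redemption conditions) would live in the contract's verify function instead, because they must hold for every party on every evolution of the state, not just for this counter-party's commercial taste.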
Hope that makes sense!
I have run into a problem working with Realm migration blocks and the strategy for migrating realms.
Given an object MyObject with a number of properties:
In version 1 we have the property myProperty
In version 2 we change the property to myPropertyMk2
In version 3 we change the property to myPropertyMk3
Given following migration block:
private class func getMigrationBlock(realmPath: String) -> RLMMigrationBlock {
    return { migration, oldSchemaVersion in
        if (oldSchemaVersion == RLMNotVersioned) {
            NSLog("No database found when migrating.")
            return
        } else {
            NSLog("Migrating \(realmPath) from version \(oldSchemaVersion) to \(RealmMigrationHelper.CURRENT_DATABASE_VERSION)")
        }
        NSLog("Upgrading MyObject from version %d to %d", oldSchemaVersion, CURRENT_DATABASE_VERSION)
        if (oldSchemaVersion < 2) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk2"] = oldObject["myProperty"]
            })
        }
        if (oldSchemaVersion < 3) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk3"] = oldObject["myPropertyMk2"]
            })
        }
        NSLog("Migration complete.")
    }
}
When I was on version 2 of the DB this worked just fine (obviously without the oldSchemaVersion < 3 block), but when I introduced version 3 I started getting problems because it does not recognise newObject["myPropertyMk2"] in the oldSchemaVersion < 2 block. If I change it to newObject["myPropertyMk3"] it works just fine.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I do not think it makes sense. I would have expected it to be schema-less.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on the migration strategy of Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Thank you :)
Thanks for your question and the report of your issue.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I do not think it makes sense. I would have expected it to be schema-less.
As you correctly recognized from the code in RLMMigration, migrations are not schema-free. The migration closure which you provide should handle migrations from any version in the past to the current version. If your user didn't update your app in between and so skipped a version, there is no chance that Realm could have been aware of your intermediate schema version, as the schema is reflected at runtime. You're generally free to break backwards compatibility with existing old versions deliberately, but you would need to take care to reset the configuration to a defined state.
You're certainly right that this could be better documented. I have created an internal ticket about that.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on the migration strategy of Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Depending on your schema and the amount of data you have, you can reorganize it object-wise in memory via a dictionary and then apply it to newObject as you describe. The current API makes relatively few assumptions and allows an approach like this. But it wouldn't work equally well for everyone, e.g. if you have large lists of related objects.
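The dictionary idea can be sketched generically (here in Kotlin for brevity, with a plain map standing in for Realm's oldObject/newObject and the property names taken from the example above):

```kotlin
// Schema-free migration sketch: copy the old object's values into a plain
// map, rename keys step by step for each version boundary that was crossed,
// then apply the final map to the new object in one go. No intermediate key
// (like myPropertyMk2) ever has to exist in either the old or new schema.
fun migrateMyObject(oldSchemaVersion: Int, oldObject: Map<String, Any?>): Map<String, Any?> {
    val values = oldObject.toMutableMap()
    if (oldSchemaVersion < 2) {
        values["myPropertyMk2"] = values.remove("myProperty")
    }
    if (oldSchemaVersion < 3) {
        values["myPropertyMk3"] = values.remove("myPropertyMk2")
    }
    // In a real migration block, this map would now be written to newObject.
    return values
}

fun main() {
    println(migrateMyObject(1, mapOf("myProperty" to "hello")))  // {myPropertyMk3=hello}
    println(migrateMyObject(2, mapOf("myPropertyMk2" to "hi")))  // {myPropertyMk3=hi}
}
```

Because only the final write touches newObject, each version step can chain freely through keys that no longer exist in the current schema, which is exactly the problem the question ran into.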
I want to inject a BroadcasterFactory into a Publisher-style class before I have constructed my Nettosphere via its builders. But a call to BroadcasterFactory.getDefault() returns null before it is initialized via the construction of my Nettosphere. I guess I could build a BroadcasterFactory myself first, but that seems to be messing with the Nettosphere construction process.
At a high level I want access to Broadcasters (1 per connection) from the backend in order to push individual messages to clients.
I could do my own map of broadcasters, but broadcasterfactory already does this and I don't really want to have to manage 2 stores of broadcasters.
Thanks :)
Which version of Atmosphere are you using?
Just tested with 2.0.0-SNAPSHOT and it worked.
I suspect there was a regression in an earlier version.