How to get transaction history without certain states - Corda

I am trying to get the transaction history on Corda.
I need to get the number of transactions for a certain period.
My API for this:
@GET
@Path("transactions")
@Produces(MediaType.APPLICATION_JSON)
fun getTransactions(): List<StateAndRef<ContractState>> {
    val today = Instant.now()
    val pagingSpec = PageSpecification(DEFAULT_PAGE_NUM, 100)
    val start = today.minus(1, ChronoUnit.HOURS)
    val end = today.plus(1, ChronoUnit.HOURS)
    // Match states recorded within the time window.
    val recordedBetweenExpression = QueryCriteria.TimeCondition(
            QueryCriteria.TimeInstantType.RECORDED,
            ColumnPredicate.Between(start, end))
    val criteria = QueryCriteria.VaultQueryCriteria(timeCondition = recordedBetweenExpression, status = Vault.StateStatus.ALL)
    val results = rpcOps.vaultQueryBy<ContractState>(criteria, paging = pagingSpec)
    // Number of states recorded in the period (current page only).
    val size = results.states.count()
    return results.states
}
where:
val rpcOps: CordaRPCOps
I can explicitly specify the states for which to receive transactions, like this:
val criteria = VaultQueryCriteria(contractStateTypes = setOf(Cash.State::class.java, DealState::class.java))
but I need to get transactions across all states except for certain ones.
Does Corda have any mechanism for this?

There is no type of query criteria that specifically excludes certain states. However, you can define a query criteria that specifically includes certain states, then combine that with your existing criteria using an AND composition:
val today = Instant.now()
val pagingSpec = PageSpecification(DEFAULT_PAGE_NUM, 100)
val start = today.minus(1, ChronoUnit.HOURS)
val end = today.plus(1, ChronoUnit.HOURS)
val recordedBetweenExpression = QueryCriteria.TimeCondition(
        QueryCriteria.TimeInstantType.RECORDED,
        ColumnPredicate.Between(start, end))
val timeCriteria = QueryCriteria.VaultQueryCriteria(timeCondition = recordedBetweenExpression, status = Vault.StateStatus.ALL)
val typeCriteria = QueryCriteria.VaultQueryCriteria(contractStateTypes = setOf(State1::class.java, State2::class.java), status = Vault.StateStatus.ALL)
val combinedCriteria = timeCriteria.and(typeCriteria)
val results = rpcOps.vaultQueryBy<ContractState>(combinedCriteria, paging = pagingSpec)
This will retrieve all the states that meet both your time criteria and your type criteria.
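If you truly need "all states except certain types", one further option (not part of the vault query API, just a client-side sketch) is to filter the returned page yourself. ExcludedState below is a hypothetical placeholder for whichever state type you want to drop:
// Drop a particular state type from the query results on the client side.
// ExcludedState is a placeholder, not a real Corda type.
val filteredStates = results.states.filter { it.state.data !is ExcludedState }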

Related

How to get/build a JavaRDD[DataSet]?

When I use deeplearning4j and try to train a model in Spark:
public MultiLayerNetwork fit(JavaRDD<DataSet> trainingData)
fit() needs a JavaRDD<DataSet> parameter. I tried to build one like this:
val totalDataset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})
but IDEA reports: No implicit arguments of type: Encoder[DataSet]
It's an error and I don't know how to solve it. I know a Spark RDD can be transformed to a JavaRDD, but I don't know how to build a Spark RDD[DataSet].
DataSet comes from import org.nd4j.linalg.dataset.DataSet. Its constructor is:
public DataSet(INDArray first, INDArray second) {
    this(first, second, (INDArray)null, (INDArray)null);
}
This is my code:
val spark: SparkSession = SparkSession
  .builder()
  .master("local")
  .appName("Spark LSTM Emotion Analysis")
  .getOrCreate()
import spark.implicits._
val JavaSC = JavaSparkContext.fromSparkContext(spark.sparkContext)
val csv = spark.read.format("csv")
  .option("header", "true")
  .option("sep", ",")
  .load("/home/hadoop/sparkjobs/LReg/data.csv")
val totalDataset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})
val data = totalDataset.toJavaRDD
Creating a JavaRDD<DataSet> in Java, from the deeplearning4j official guide:
String filePath = "hdfs:///your/path/some_csv_file.csv";
JavaSparkContext sc = new JavaSparkContext();
JavaRDD<String> rddString = sc.textFile(filePath);
RecordReader recordReader = new CSVRecordReader(',');
JavaRDD<List<Writable>> rddWritables = rddString.map(new StringToWritablesFunction(recordReader));
int labelIndex = 5; //Labels: a single integer representing the class index in column number 5
int numLabelClasses = 10; //10 classes for the label
JavaRDD<DataSet> rddDataSetClassification = rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));
I tried to create it in Scala:
val JavaSC: JavaSparkContext = new JavaSparkContext()
val rddString: JavaRDD[String] = JavaSC.textFile("/home/hadoop/sparkjobs/LReg/hf-data.csv")
val recordReader: CSVRecordReader = new CSVRecordReader(',')
val rddWritables: JavaRDD[List[Writable]] = rddString.map(new StringToWritablesFunction(recordReader))
val featureColnum = 3
val labelColnum = 1
val d = new DataVecDataSetFunction(featureColnum,labelColnum,true,null,null)
// val rddDataSet: JavaRDD[DataSet] = rddWritables.map(new DataVecDataSetFunction(featureColnum,labelColnum, true,null,null))
// Cannot resolve overloaded method 'map'
Debug error information:
A DataSet is just a pair of INDArrays (inputs and labels).
Our docs cover this in depth:
https://deeplearning4j.konduit.ai/distributed-deep-learning/data-howto
For Stack Overflow's sake, I'll summarize what's there, since there's no one way to create a data pipeline; it's relative to your problem. It's very similar to how you would create a dataset locally: generally you take whatever you do locally and put that into Spark in a function.
CSVs and images, for example, are going to be very different. But generally you use the DataVec library to do that. The docs summarize the approach for each kind.
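To make that summary concrete, here is a minimal sketch of the DataVec CSV approach from the guide quoted above, written in Kotlin (the language used elsewhere on this page); buildDataSetRdd, the label column index and the class count are assumptions to adapt to your own CSV:
import org.apache.spark.api.java.JavaRDD
import org.apache.spark.api.java.JavaSparkContext
import org.datavec.api.records.reader.impl.csv.CSVRecordReader
import org.datavec.spark.transform.misc.StringToWritablesFunction
import org.deeplearning4j.spark.datavec.DataVecDataSetFunction
import org.nd4j.linalg.dataset.DataSet

fun buildDataSetRdd(sc: JavaSparkContext, path: String): JavaRDD<DataSet> {
    // Read the raw CSV lines, then let DataVec parse each line into a list of Writables.
    val rddString = sc.textFile(path)
    val rddWritables = rddString.map(StringToWritablesFunction(CSVRecordReader()))
    // Placeholder column layout: label in column 21, two label classes, not a regression task.
    val labelIndex = 21
    val numLabelClasses = 2
    return rddWritables.map(DataVecDataSetFunction(labelIndex, numLabelClasses, false))
}
The resulting JavaRDD<DataSet> can then be handed to the fit(JavaRDD<DataSet>) overload from the question.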

How Do I Get The History Of A LinearState

The output of one of my flows is a LinearState that has been "modified" over time by various flows.
Is there an API to get the previous versions of this state in the order that they were created/modified?
I can query the vault like this:
val linearIdCriteria = QueryCriteria.LinearStateQueryCriteria(linearId = listOf(outputState.linearId), status = ALL)
val states = myNode.services.vaultService.queryBy<MyState>(linearIdCriteria).states
However, the SQL generated by Hibernate doesn't have an ORDER BY clause, so the order of the states in the list cannot be guaranteed.
The states returned don't have a timestamp on them, so I can't see how to order the list.
The time that the state was saved is available as a sorting option (Sort.VaultStateAttribute.RECORDED_TIME) which should guarantee the order of the vault query.
//Get all versions of the State by its LinearId
val linearIdCriteria = QueryCriteria.LinearStateQueryCriteria(linearId = listOf(outputState.linearId), status = ALL)
//Add sorting by recorded date
val sortByRecordedDate = SortColumn(SortAttribute.Standard(Sort.VaultStateAttribute.RECORDED_TIME), Sort.Direction.ASC)
val sorting = Sort(listOf(sortByRecordedDate))
val states = myNode.services.vaultService.queryBy<MyState>(linearIdCriteria, sorting).states
Unconsumed and consumed states can be sorted in ascending/descending alphabetical order using the following code -
Vault.StateStatus status = Vault.StateStatus.CONSUMED;
@SuppressWarnings("unchecked")
Set<Class<LinearState>> contractStateTypes = new HashSet(singletonList(LinearState.class));
QueryCriteria vaultCriteria = new VaultQueryCriteria(status, contractStateTypes);
List<UniqueIdentifier> linearIds = singletonList(ids.getSecond());
QueryCriteria linearCriteriaAll = new LinearStateQueryCriteria(null, linearIds, Vault.StateStatus.UNCONSUMED, null);
QueryCriteria dealCriteriaAll = new LinearStateQueryCriteria(null, null, dealIds);
QueryCriteria compositeCriteria1 = dealCriteriaAll.or(linearCriteriaAll);
QueryCriteria compositeCriteria2 = compositeCriteria1.and(vaultCriteria);
PageSpecification pageSpec = new PageSpecification(DEFAULT_PAGE_NUM, MAX_PAGE_SIZE);
Sort.SortColumn sortByUid = new Sort.SortColumn(new SortAttribute.Standard(Sort.LinearStateAttribute.UUID), Sort.Direction.DESC);
Sort sorting = new Sort(ImmutableSet.of(sortByUid));
Vault.Page<LinearState> results = vaultService.queryBy(LinearState.class, compositeCriteria2, pageSpec, sorting);
OR
val sortAttribute = SortAttribute.Custom(CustomerSchema.CustomerEntity::class.java, "changeDate")
val sorter = Sort(setOf(Sort.SortColumn(sortAttribute, Sort.Direction.DESC)))
val constraintResults = vaultService.queryBy<LinearState>(constraintTypeCriteria, sorter)
Following are the URLs for reference:
URL1: https://docs.corda.net/api-vault-query.html
URL2: https://github.com/corda/corda/issues/5060
URL3: https://github.com/manosbatsis/vaultaire
URL4: https://r3-cev.atlassian.net/browse/CORDA-2247

Writing a Corda custom query by concatenating two schema column values and comparing?

We have a Name schema that contains:
FirstName : Rock
LastName : John
Prefix : Mr
MiddleName : ""
Suffix: "Jr"
We are creating some states and a schema with that definition.
Now we want to filter the states by their values, like this:
(FirstName + LastName).equals("RockJohn")
We are trying to write a custom vault query.
Is there any way to achieve this?
In Java, you'd write something like:
FieldInfo firstNameField = getField("firstName", NameSchemaV1.PersistentName.class);
FieldInfo lastNameField = getField("lastName", NameSchemaV1.PersistentName.class);
CriteriaExpression firstNameIndex = Builder.equal(firstNameField, "Rock");
CriteriaExpression lastNameIndex = Builder.equal(lastNameField, "John");
QueryCriteria firstNameCriteria = new QueryCriteria.VaultCustomQueryCriteria(firstNameIndex);
QueryCriteria lastNameCriteria = new QueryCriteria.VaultCustomQueryCriteria(lastNameIndex);
QueryCriteria criteria = firstNameCriteria.and(lastNameCriteria);
Vault.Page<ContractState> results = getServiceHub().getVaultService().queryBy(NameState.class, criteria);
In Kotlin, you'd write something like:
val results = builder {
    val firstNameIndex = NameSchemaV1.PersistentName::firstName.equal("Rock")
    val lastNameIndex = NameSchemaV1.PersistentName::lastName.equal("John")
    val firstNameCriteria = QueryCriteria.VaultCustomQueryCriteria(firstNameIndex)
    val lastNameCriteria = QueryCriteria.VaultCustomQueryCriteria(lastNameIndex)
    val criteria = firstNameCriteria.and(lastNameCriteria)
    serviceHub.vaultService.queryBy(NameState::class.java, criteria)
}
You can use a Hibernate formula to create a dynamic/calculated property:
@Formula(value = "concat(first_name, last_name)")
String fullName;
Then treat it as a regular property/field in your queries.
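As an untested Kotlin sketch of how those two pieces might fit together, assuming a persistent entity along the lines of the question's NameSchemaV1 (the entity, table and column names here are illustrative, and the query mirrors the Kotlin snippet above, so it assumes flow scope with serviceHub available):
import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.Table
import net.corda.core.schemas.PersistentState
import org.hibernate.annotations.Formula

@Entity
@Table(name = "name_states") // Illustrative table name, not the asker's actual schema.
class PersistentName(
        @Column(name = "first_name") var firstName: String = "",
        @Column(name = "last_name") var lastName: String = ""
) : PersistentState() {
    // Computed by the database at query time; not stored as a real column.
    @Formula("concat(first_name, last_name)")
    var fullName: String? = null
}

// The calculated property can then be used like any other field in a custom query.
val results = builder {
    val fullNameIndex = PersistentName::fullName.equal("RockJohn")
    val fullNameCriteria = QueryCriteria.VaultCustomQueryCriteria(fullNameIndex)
    serviceHub.vaultService.queryBy(NameState::class.java, fullNameCriteria)
}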

In Corda, how can I query the vault for all states recorded after a specific state?

I have the StateRef for a state that was recorded by my node. How can I get a stream of all the states recorded by my node since that StateRef was recorded?
You need to do two things:
Identify when the StateRef you have was recorded
Start streaming updates from after that time
Here's an example RPC client that would do this:
fun main(args: Array<String>) {
    // Getting an RPC connection to the node.
    require(args.size == 1) { "Usage: ExampleClientRPC <node address>" }
    val nodeAddress = NetworkHostAndPort.parse(args[0])
    val client = CordaRPCClient(nodeAddress)
    val rpcOps = client.start("user1", "test").proxy

    // Change this to an actual StateRef.
    val dummyStateRef = StateRef(SecureHash.zeroHash, 0)

    // Getting the time the state was recorded.
    val queryByStateRefCriteria = VaultQueryCriteria(stateRefs = listOf(dummyStateRef))
    val queryByStateRefResults = rpcOps.vaultQueryBy<ContractState>(queryByStateRefCriteria)
    val queryByStateRefMetadata = queryByStateRefResults.statesMetadata
    val dummyStateRefRecordedTime = queryByStateRefMetadata.single().recordedTime

    // Getting the states recorded after that time.
    val queryAfterTimeExpression = TimeCondition(
            RECORDED, BinaryComparison(BinaryComparisonOperator.GREATER_THAN_OR_EQUAL, dummyStateRefRecordedTime))
    val queryAfterTimeCriteria = VaultQueryCriteria(
            status = ALL,
            timeCondition = queryAfterTimeExpression)

    // vaultTrackBy returns a DataFeed: a current snapshot plus an Observable of future updates.
    val queryAfterTimeResults = rpcOps.vaultTrackBy<ContractState>(queryAfterTimeCriteria)
    val afterTimeStates = queryAfterTimeResults.snapshot.states
    // queryAfterTimeResults.updates streams the states recorded from this point onwards.
}

In Corda, how to get the timestamp of when a transaction happened?

I am using Corda 3.2. Given a SignedTransaction, how can I establish when it was recorded?
There is no direct API for determining when a transaction was recorded. However, you can achieve this by checking either:
When one of the transaction's inputs was consumed:
val inputStateRef = signedTx.inputs[0]
val queryCriteria = QueryCriteria.VaultQueryCriteria(stateRefs = listOf(inputStateRef))
val results = serviceHub.vaultService.queryBy<ContractState>(queryCriteria)
val consumedTime = results.statesMetadata.single().consumedTime!!
When one of the transaction's outputs was recorded:
val ledgerTx = signedTx.toLedgerTransaction(serviceHub)
val outputStateRef = StateRef(signedTx.id, 0)
val queryCriteria = QueryCriteria.VaultQueryCriteria(stateRefs = listOf(outputStateRef))
val results = serviceHub.vaultService.queryBy<ContractState>(queryCriteria)
val recordedTime = results.statesMetadata.single().recordedTime
