Corda: Sending separate attachments in a single transaction

I have a three-node network, where Node 1 sends a document as an attachment on a transaction to Node 2 and Node 3. Node 2 should also send some attachments to Node 3. I have achieved the first part and published a state via a flow. What I intend is to see all of this in a single transaction. Can someone give me inputs on how to achieve this?
Some additional info: Node 1 is also supposed to access the attachment sent by Node 2 to Node 3.

You need to know which attachment hash to refer to in the first place. If the transaction is initiated by Node 1, it sounds like Node 2 must have pre-uploaded the attachment and must know its hash (unless Node 2 is able to work out what the attachment must be from the information contained in the state/transaction).
If you can guarantee that, you can simply create a sub-flow that sends the TransactionBuilder generated by Node 1 to Node 2, which calls addAttachment and returns the builder to Node 1 for the transaction signing steps.
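In rough terms, the responder side on Node 2 might look like the Java sketch below. The class names and the PRE_UPLOADED_HASH constant are placeholders, the initiating BuildTransactionFlow is assumed to exist on Node 1, and this also assumes your Corda version can serialize a TransactionBuilder across a flow session:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.crypto.SecureHash;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.FlowSession;
import net.corda.core.flows.InitiatedBy;
import net.corda.core.transactions.TransactionBuilder;

// Hypothetical responder running on Node 2.
@InitiatedBy(BuildTransactionFlow.class)
public class AddAttachmentResponder extends FlowLogic<Void> {
    // Node 2 must already know the hash of its pre-uploaded attachment.
    private static final String PRE_UPLOADED_HASH = "<known attachment hash>";

    private final FlowSession otherSide;

    public AddAttachmentResponder(FlowSession otherSide) {
        this.otherSide = otherSide;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Receive the partially built transaction from Node 1.
        TransactionBuilder builder = otherSide.receive(TransactionBuilder.class).unwrap(b -> b);
        // Attach Node 2's document by hash.
        builder.addAttachment(SecureHash.parse(PRE_UPLOADED_HASH));
        // Hand the builder back to Node 1 for the signing steps.
        otherSide.send(builder);
        return null;
    }
}

Node 1's flow would then receive the builder back, verify the added attachment, and continue with signing and finality as usual.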


How to track deleted documents with a filter?

I want to track changes to documents in a collection in Firestore. I use a lastModified property to filter results. The reason I use this “lastModified” filter is so that, each time the app starts, the initial snapshot in the listener does not return all documents in the collection.
// The app is only interested in changes occurring after a certain date
let date: Date = readDateFromDatabase()
// When the app starts, begin listening for city updates.
listener = db.collection("cities")
.whereField("lastModified", isGreaterThanOrEqualTo: date)
.addSnapshotListener { (snapshot, error) in
// Process added, modified, and removed documents.
// Keep a record of the last modified date of the updates.
// Store an updated last modified date in the database using
// the oldest last modified date of the documents in the
// snapshot.
writeDateToDatabase()
}
Each time documents are processed in the closure, a new “lastModified” value is stored in the database. The next time the app starts, the snapshot listener is created with a query using this new “lastModified” value.
When a new city is created, or one is updated, its “lastModified” property is updated to “now”. Since “now” should be greater than or equal to the filter date, all updates will be sent to the client.
However, if a really old city is deleted, then its “lastModified” property may be older than the filter date of a client that has received recent updates. The problem is that the deleted city’s “lastModified” property cannot be updated to “now” when it is being deleted.
Example
Client 1 listens for updates ≥ d_1.
Client 2 creates two cities at d_2, where d_1 < d_2.
Client 1 receives both updates because d_1 < d_2.
Client 1 stores d_2 as a future filter.
Client 2 updates city 1 at d_3, where d_2 < d_3.
Client 1 receives this update because d_1 < d_3.
Client 1 stores d_3 as a future filter.
...Some time has passed.
Client 1 app starts and listens for updates ≥ d_3.
Client 2 deletes city 2 (created at d_2).
Client 1 won’t receive this update because d_2 < d_3.
My best solution
Don’t delete cities; instead, add an isDeleted property. Then, when a city is marked as deleted, its “lastModified” property is updated to “now”. This update should be sent to all clients because the query filter date will always be before “now”. The main business logic of the app ignores cities where isDeleted is true.
I feel like I don’t fully understand this problem. Is there a better way to solve my problem?
The solution you've created is quite common and is known as a tombstone.
Since you no longer need the actual data of the document, you can delete its fields. But the document itself will need to remain to indicate that it's been deleted.
There may be other approaches, but they'll all end up similarly. As you have to somehow signal to each client (no matter when they connect/query) that the document is gone, keeping the document as a tombstone seems like a simple and good approach to me.
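For illustration, a tombstone write with the Firestore Android (Java) SDK could look like the sketch below. The CityRepository wrapper and tombstoneCity method are invented; the cities collection and the isDeleted/lastModified fields come from the question:

import com.google.firebase.firestore.FieldValue;
import com.google.firebase.firestore.FirebaseFirestore;

// Illustrative tombstone write: mark the city deleted instead of deleting it.
public class CityRepository {
    private final FirebaseFirestore db = FirebaseFirestore.getInstance();

    public void tombstoneCity(String cityId) {
        db.collection("cities").document(cityId)
          // Bump lastModified so the tombstone passes every client's date filter;
          // fields that are no longer needed could be cleared with FieldValue.delete().
          .update("isDeleted", true,
                  "lastModified", FieldValue.serverTimestamp());
    }
}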

Corda node roles or node categorization

I have a requirement to identify a pool/set of nodes in a flow on the basis of their assigned role, so that transactions can be sent to all nodes of a particular role/type.
For example:
A node (Type 1)
B node (Type 1)
C node (Type 2)
D node (Type 2)
Then, if I pass Type 2 into the flow, I should be able to get the list of parties/nodes with that type (i.e. C and D).
Is this possible? If yes, how? And where can I define the node's type, maybe as some suffix in the node name?
One possible hacky way I have in mind is to set a prefix in every node's name, then get the list of nodes, extract the names, and identify the role from them. But this would be required on every flow initiation.
Thanks for any help.
There is no better solution that serves your purpose with less overhead.
If you expect the number of nodes and roles to change dynamically, a proper approach would be to retrieve the roles (whether using an oracle or not) and cache them.
Then write a separate flow to trigger updates to the cache.
If not, you can just hardcode the mapping in the config file.
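For example, if you go with the name-prefix convention, a small helper along these lines could resolve the parties for a role. The NodeRoles class and the rolePrefix convention are assumptions from the question; the network map calls are standard Corda API:

import net.corda.core.identity.Party;
import net.corda.core.node.ServiceHub;

import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper that classifies nodes by a prefix in their X.500 organisation.
public class NodeRoles {
    public static List<Party> partiesWithRole(ServiceHub serviceHub, String rolePrefix) {
        return serviceHub.getNetworkMapCache().getAllNodes().stream()
                .map(nodeInfo -> nodeInfo.getLegalIdentities().get(0))
                // Keep only parties whose organisation carries the role prefix,
                // e.g. "Type2-NodeC" matches rolePrefix "Type2".
                .filter(party -> party.getName().getOrganisation().startsWith(rolePrefix))
                .collect(Collectors.toList());
    }
}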

Write conflict in Dynamo

Imagine that there are two clients, client1 and client2, both writing the same key. This key has three replicas, named A, B, and C. A first receives client1's request and then client2's, while B receives client2's request and then client1's. Now A and B must be inconsistent with each other, and they cannot resolve the conflict even using vector clocks. Am I right?
If so, it seems that write conflicts occur easily in Dynamo. Why are so many open source projects based on Dynamo's design?
If you're using DynamoDB and are worried about race conditions (which you should be if you're using Lambda), you can attach conditions to putItem or updateItem and handle the case where the condition fails.
For example: during getItem the timestamp was 12345, so you add a condition that the timestamp must still equal 12345. If another process updates the item in the meantime and changes the timestamp to 12346, your put/update will now fail. In Java, for example, you can catch ConditionalCheckFailedException, do another getItem, apply your changes on top, and resubmit the put/update.
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
Parameters: putItemRequest - Represents the input of a PutItem operation.
Returns: Result of the PutItem operation returned by the service.
Throws: ConditionalCheckFailedException - A condition specified in the operation could not be evaluated.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/AmazonDynamoDB.html#putItem-com.amazonaws.services.dynamodbv2.model.PutItemRequest-
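Putting that together, a conditional put with the AWS SDK for Java (v1) might look like the sketch below. The Items table and the id/version attributes are invented for the example:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

import java.util.HashMap;
import java.util.Map;

public class ConditionalWrite {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // The new state of the item, with the version we are writing.
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", new AttributeValue("item-1"));
        item.put("version", new AttributeValue().withN("12346"));

        // The version we read earlier; the write only succeeds if it still matches.
        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":expected", new AttributeValue().withN("12345"));

        PutItemRequest request = new PutItemRequest()
                .withTableName("Items")
                .withItem(item)
                .withConditionExpression("version = :expected")
                .withExpressionAttributeValues(values);

        try {
            client.putItem(request);
        } catch (ConditionalCheckFailedException e) {
            // Another writer won the race: re-read the item, reapply the
            // changes on top, and resubmit the put/update.
        }
    }
}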
I can't talk about HBase, but I can about Cassandra, which is inspired by Dynamo.
If that happens in Cassandra, the most recent write wins.
Cassandra uses coordinator nodes (which can be any node) that receive the client requests and resend them to all replica nodes, which means each request carries its own timestamp.
Imagine that Client2's request is the most recent, milliseconds after Client1's.
Replica A receives Client1's write, which is saved, and then Client2's, which overwrites Client1's since Client2's is the most recent information for that key.
Replica B receives Client2's write, which is saved, and then Client1's, which is rejected since it has an older timestamp.
Both replicas A and B end up holding Client2's value, the most recent information, and are therefore consistent.
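To make the mechanism concrete, here is a toy last-write-wins replica in Java. It is only a sketch of the idea described above, not Cassandra's actual implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A replica that keeps, per key, the value with the highest timestamp.
public class LwwReplica {
    private static final class Versioned {
        final String value;
        final long timestamp;

        Versioned(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    private final Map<String, Versioned> store = new ConcurrentHashMap<>();

    // Apply a write only if it is newer than what the replica already holds.
    public void write(String key, String value, long timestamp) {
        store.merge(key, new Versioned(value, timestamp),
                (current, incoming) -> incoming.timestamp > current.timestamp ? incoming : current);
    }

    public String read(String key) {
        Versioned v = store.get(key);
        return v == null ? null : v.value;
    }
}

Feeding Client1's and Client2's writes into two such replicas in either order leaves both holding Client2's value, which is why A and B converge.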

How to store 10 most recent objects in a Firebase?

I want a Firebase to hold the last 10 most recently added objects, but no more. I'll use a web server log as an example.
Say I have a program watching a web server log. Every time a new entry is made in the log, I want my Firebase to get the IP address from that entry. But I only need the Firebase to store the last 10 IP addresses sent, not every one it ever received.
I can imagine doing this by setting up 10 objects in Firebase, say:
app/slot0
app/slot1
app/slot2
app/slot3
etc
Then PATCH slot0 to add the IP and, when done, update the slot tracker:
currentSlot++
And when currentSlot reaches numSlots (10) it wraps around and points back to 0:
if (currentSlot >= numSlots) currentSlot = 0;
So that it's basically a list of 10 objects and I'm manually keeping track of which slot is the next one. This way I don't need to store an infinite number of items, but only the last 10. And clients listening to all of these slots will get updates every time one changes.
My question is whether this is an optimal way of doing this? I can't help thinking there is a more efficient way.
There are 100 different ways to do this, but here's a thought:
Assume that the app stores 10 IPs in an array (indexes 0-9) and the IP at index 0 is the latest connection.
When a new connection is made, the IP at index 9 is removed from the array and the IPs at indexes 0-8 have their indexes incremented (the IP at index 0 moves to index 1, the IP at index 1 moves to index 2, etc).
Then the newest IP is inserted at index 0 and the array data is written to Firebase.
Depending on your platform, this is as easy as inserting the IP into the array at index 0 and removing index 10 (the array momentarily holds 11 items), then writing to Firebase.
However, try to avoid writing arrays into Firebase. There are much better ways to do this - a node with IP and a timestamp would work well.
connection_events
  connection_id_0123
    ip: 192.168.1.1
    timestamp: 20151107133000
  connection_id_4566
    ip: 198.168.1.123
    timestamp: 20151107093000
The connection_ids are generated by childByAutoId or push, so they are 'random', but you always have the timestamp to order by.
Another thought, using the above structure, is to query Firebase for the oldest node and remove it, then add the newest one. This works since ordering is controlled by the timestamp.
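A sketch of that "push the newest, trim the oldest" idea with the Firebase Realtime Database Java API might look like the following. The RecentConnections class is illustrative; the cap of 10 and the field names follow the structure above:

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

import java.util.HashMap;
import java.util.Map;

public class RecentConnections {
    private static final int MAX_ENTRIES = 10;

    private final DatabaseReference events =
            FirebaseDatabase.getInstance().getReference("connection_events");

    public void addConnection(String ip, long timestamp) {
        // Push the new entry under an auto-generated ('random') id.
        Map<String, Object> entry = new HashMap<>();
        entry.put("ip", ip);
        entry.put("timestamp", timestamp);
        events.push().setValue(entry);

        // Read the entries ordered by timestamp and remove any beyond the newest 10.
        events.orderByChild("timestamp")
              .addListenerForSingleValueEvent(new ValueEventListener() {
                  @Override
                  public void onDataChange(DataSnapshot snapshot) {
                      long excess = snapshot.getChildrenCount() - MAX_ENTRIES;
                      // getChildren() yields the oldest entries first because of orderByChild.
                      for (DataSnapshot child : snapshot.getChildren()) {
                          if (excess-- <= 0) break;
                          child.getRef().removeValue();
                      }
                  }

                  @Override
                  public void onCancelled(DatabaseError error) {
                      // Ignored in this sketch.
                  }
              });
    }
}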

Filtering with multiple flags in a view

I'm building a Drupal site that has a database of local services. I'm using 2 vocabularies to categorise the services by:
a. Ward/Neighbourhood
b. Type of Service
Using the Views, Flag and Flag Terms modules, I'm trying to set up an interface that allows users to filter the records in 3 stages:
Flag the local wards/neighbourhoods they want to find services in.
Flag the types of service they are interested in
View a list of services filtered on the flagged terms set in steps 1 and 2. The list should only show services of the type selected in step 2 and only within the wards selected in Step 1.
Each of these stages is set up as a view. The first 2 views are working fine; users are able to Flag the terms for ward and service type.
The problem is the 3rd view which filters nodes based on the Flags. In the View, I've added Flag relationships for each vocabulary. But when I try to filter the nodes on Flag 1 AND Flag 2, no records are returned.
It seems like Flag 1 needs to be an argument for the second filter, but I'm not sure how to pass the flag IDs into the URL.
I'm struggling with the logic of this, any help would be much appreciated.
I've solved it by creating a custom module, as explained here:
http://sethsandler.com/code/drupal-6-creating-activity-stream-views-custom-sql-query-merging-multiple-views-part-1/
