What is a NYM and how does this relate to a VERINYM - hyperledger-indy

In the Hyperledger Indy docs and code I often see reference to a NYM but I cannot find a clear description of what this means including in the official glossary.
What is a NYM and how does this differ from a VERINYM?

DIDs are broadly classified as Verinyms or Pseudonyms.
A DID that is known to the ledger is a Verinym, and the transaction used for creating a Verinym is known as a NYM transaction.
Check the getting started tutorial for more details.

I had the same question when I first got into the code. Then I found this on an English dictionary website:
nym = nim = nom = name
for example: pseudonym :)

There is additional information about NYMs available from the documentation here (on hyperledger-indy.readthedocs.io) and here (github.com):
So from the second link:
NYM
Creates a new NYM record for a specific user, endorser, steward or trustee. Note that only trustees and stewards can create new endorsers and a trustee can be created only by other trustees (see roles).
The transaction can be used for creation of new DIDs, setting and rotation of verification key, setting and changing of roles.
dest (base58-encoded string):
Target DID as base58-encoded string for a 16 or 32 byte DID value. It may differ from the from metadata field, where from is the DID of the submitter. If they are equal (in the permissionless case), then the transaction must be signed by the newly created verkey.
Example: from is the DID of an Endorser creating a new DID, and dest is the newly created DID.
role (enum number as integer; optional):
Role of a user that the NYM record is being created for. One of the following values
None (common USER)
"0" (TRUSTEE)
"2" (STEWARD)
"101" (ENDORSER)
"201" (NETWORK_MONITOR)
A TRUSTEE can change any Nym's role to None, thus stopping it from making any further writes (see roles).
verkey (base58-encoded string, possibly starting with "~"; optional):
Target verification key as base58-encoded string. It can start with "~", which means that it's an abbreviated verkey and should be 16 bytes long when decoded; otherwise it's a full verkey, which should be 32 bytes long when decoded. If not set, then either the target identifier (did) is a 32-bit cryptonym CID (this is deprecated), or this is a user under guardianship (who doesn't own the identifier yet). The verkey can be changed to "None" by the owner, which means that this user goes back under guardianship.
alias (string; optional):
NYM's alias.
If there is no NYM transaction for the specified DID (did) yet, then this can be considered as the creation of a new DID.
If there is already a NYM transaction with the specified DID (did), then this is considered an update of that DID. In this case only the values that need to be updated should be specified, since any specified value is treated as an update even if it matches the current value in the ledger. All unspecified values remain unchanged.
So, if key rotation needs to be performed, the owner of the DID needs to send a NYM request with did and verkey only. role and alias will stay the same.
Example:
{
    "ver": 1,
    "txn": {
        "type": "1",
        "ver": 1,
        "protocolVersion": 2,
        "data": {
            "dest": "GEzcdDLhCpGCYRHW82kjHd",
            "verkey": "~HmUWn928bnFT6Ephf65YXv",
            "role": 101
        },
        "metadata": {
            "reqId": 1513945121191691,
            "from": "L5AD5g65TDQr1PPHHRoiGf",
            "digest": "4ba05d9b2c27e52aa8778708fb4b3e5d7001eecd02784d8e311d27b9090d9453",
            "payloadDigest": "21f0f5c158ed6ad49ff855baf09a2ef9b4ed1a8015ac24bccc2e0106cd905685",
            "taaAcceptance": {
                "taaDigest": "6sh15d9b2c27e52aa8778708fb4b3e5d7001eecd02784d8e311d27b9090d9453",
                "mechanism": "EULA",
                "time": 1513942017
            }
        }
    },
    "txnMetadata": {
        "txnTime": 1513945121,
        "seqNo": 10,
        "txnId": "N22KY2Dyvmuu2PyyqSFKue|01"
    },
    "reqSignature": {
        "type": "ED25519",
        "values": [{
            "from": "L5AD5g65TDQr1PPHHRoiGf",
            "value": "4X3skpoEK2DRgZxQ9PwuEvCJpL8JHdQ8X4HDDFyztgqE15DM2ZnkvrAh9bQY16egVinZTzwHqznmnkaFM4jjyDgd"
        }]
    }
}
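As a practical aside, the key-rotation case described above (send a NYM with just the DID and the new verkey) can be driven from the indy-sdk client library. Here is a minimal sketch with the Node.js indy-sdk wrapper; the pool handle, wallet handle and DID are placeholders and error handling is omitted:

    // Sketch only: rotate the verkey of an existing DID via a NYM request.
    // Assumes an already-open pool handle and wallet handle.
    const indy = require('indy-sdk');

    async function rotateKey(poolHandle, walletHandle, did) {
      // Generate new key material for the DID (staged, not yet active).
      const newVerkey = await indy.replaceKeysStart(walletHandle, did, {});

      // NYM request with only dest + verkey: role and alias stay unchanged.
      const nymRequest = await indy.buildNymRequest(did, did, newVerkey, null, null);

      // Sign with the current key and submit to the ledger.
      await indy.signAndSubmitRequest(poolHandle, walletHandle, did, nymRequest);

      // Activate the new key in the wallet once the ledger has accepted it.
      await indy.replaceKeysApply(walletHandle, did);
    }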

A NYM (short for “Verinym”) is associated with the Legal Identity of an Identity Owner and is a Hyperledger Indy specific term for a data object which holds the DID data of one concrete identity returned during DID resolution. While a NYM can be read from a Hyperledger Indy node by any client, a NYM can only be written to a Hyperledger Indy network as long as the writing entity possesses the proper permissions.
From the page: https://hyperledger.github.io/indy-did-method/

Related

findAll() returns empty with WHERE option

First question on StackOverflow, long time reader first time poster or whatever people say.
I'm developing a Discord bot in my free time using Discord.js, and I'm using Sequelize to interface with a local SQLite database. I can insert data into it just fine-- however, I can't seem to delete any of the records I add. Relevant piece of code is below, which I believe to be self-contradictory:
const query3 = await Towers.findAll({
    attributes: ['channelID']
});
console.log(JSON.stringify(query3)); // returns the one Tower
console.log(query3[0].channelID === channel); // returns true(!)

const query2 = await Towers.findAll({
    attributes: ['channelID'],
    where: { channelID: channel }
});
console.log(JSON.stringify(query2)); // returns empty

// DELETE FROM Towers WHERE channelID = channel;
const query = await Towers.destroy({
    where: { channelID: channel }
});
console.log(query); // returns 0, expected behavior given query2 returns empty
I'm attempting to delete a record from a table named Towers by passing a channel ID to it, which is expected to be unique. However, when I make any query on the database with a WHERE clause, the query returns an empty set-- even when, in this example, I sanity-checked and verified that the value I'm attempting to remove is present in the table. This occurs for both findAll() and findOne() as long as a WHERE clause is present.
(For posterity, I've double and triple checked that channelID was spelled correctly and with the correct capitalization in all instances.)
I'm happy to provide any more information if needed!
EDIT: As requested, the model definition...
const Towers = sequelize.define('Towers', {
    serverID: {
        type: Sequelize.INTEGER,
        allowNull: false,
    },
    channelID: {
        type: Sequelize.INTEGER,
        unique: true,
        allowNull: false,
    },
    pattern: Sequelize.STRING,
    height: Sequelize.INTEGER,
    delay: Sequelize.BOOLEAN,
});
channel in the snippet in the original post is defined as parseInt(interaction.options.getChannel('channel').id).
To anyone who happens to have the same issue I did, the answer is a doozy.
I wanted to store Discord server and channel IDs as integers, even though they're returned to you as strings when calling the API. As it turns out, Discord snowflakes are larger than what float64 precision (which JS numbers use) can represent exactly. When parsing the strings into integers to insert them into my table, the value changed from the intended number, and I was creating erroneous records.
In my case (with the actual numbers obfuscated) interaction.options.getChannel('channel').id returned "837512533934092340", while parseInt(interaction.options.getChannel('channel').id) returned 837512533934092300. The number I was adding to the table was somehow 40 less!
I'm not sure if this could be fixed by using BigInt, but since it's going into a different structure anyway, I just shrugged, changed the serverID and channelID types to Sequelize.STRING in the model definition, and removed the parseInt calls. Works like a charm now.
Good opportunity to shake my fist at JS though.
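To see the precision loss directly, here is a small illustration; the ID is the obfuscated one from the question, so treat the exact rounded value as indicative only:

    // Discord snowflakes exceed Number.MAX_SAFE_INTEGER, so parsing them as
    // regular JS numbers silently rounds them to the nearest float64.
    const id = "837512533934092340";                      // hypothetical snowflake
    console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
    console.log(Number.isSafeInteger(parseInt(id, 10)));  // false -> precision lost
    console.log(BigInt(id).toString());                   // "837512533934092340" (exact)
    // Storing the ID as a string (Sequelize.STRING) or as a BigInt avoids the issue.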

DynamoDB how to Update a Map if an attribute exists, else silently ignore

I have a table called Products whose key is composite: orgzviceid (hash) + productid (range). It has a map attribute called "checkout" and a quantity-storing attribute called "prod_stk_qty_i_i".
Say initially, for a product with product ID 34, the total available quantity is 10. As soon as a cart checkout happens, assuming the checkout ID is 5 and it has checked out 2 units of product ID 34, then the product's "checkout" map entry and "prod_stk_qty_i_i" in DynamoDB would be something like this:
"checkout" : { "5" : 2 },
"prod_stk_qty_i_i" : 8
If another checkout happens for the same product (say 1 quantity), and that checkout ID is 7, then the checkout map looks like this:
"checkout" : { "5" : 2, "7" : 1 },
"prod_stk_qty_i_i" : 7
If payment is made, the checkout entry is removed, and quantity is increased.
Now, my requirement is to periodically, after some timeout (30 minutes), release the product quantities which have been checked out but not released. I do this by
Increasing the quantity by the value of "checkout.<checkoutID>"
Removing the "checkout.<checkoutID>" map entry
It is important that this operation not fail even if it is attempted multiple times (idempotent), so it must only perform the update if the checkout.<checkoutID> field exists. If not, it should simply ignore it.
I tried the following:
[
    "UpdateItem",
    [
        {
            "TableName": "Products",
            "Key": {
                "orgzviceid": {
                    "N": "3000161710"
                },
                "productid": {
                    "N": "11"
                }
            },
            "UpdateExpression": "REMOVE #checkout.#checkoutID SET #prod_stk_qty_i_i = #prod_stk_qty_i_i + #checkout.#checkoutID",
            "ExpressionAttributeNames": {
                "#checkout": "checkout",
                "#checkoutID": "235",
                "#prod_stk_qty_i_i": "prod_stk_qty_i_i"
            },
            "ConditionExpression": "attribute_exists(#checkout.#checkoutID)",
            "ReturnValues": "ALL_NEW"
        }
    ]
]
However, it gives me an error in case the checkout entry is not found for checkout ID 235. Note that I've written the ConditionExpression to do the update only if the attribute "checkout.235" exists.
Error Logs:
com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException","message":"The
conditional request failed ..."
So, how do I write the update so that if the map entry exists the above operation is performed, and otherwise it does not fail?
Obviously, one bad hack is to first check with a GetItem query whether the checkout entry exists for the provided checkout ID, and only then do this, but that just does not seem right.
I believe you're using conditional expressions incorrectly. The point of the condition is to fail if certain criteria are not met. Why do you have the condition at all? Without it, the update expression would just execute, and if the item does not exist I would not expect you to get an error. It's like querying for an item that does not exist: you should simply get an empty set back, not an error.
Your approach will not work because you are mixing "Attribute" and "AttributeValue" together in your conditional expression. Let me explain:
"ConditionExpression": "attribute_exists(#checkout.#checkoutID)"
In your table, checkout is an attribute in DynamoDB, whereas checkoutID is in no way related to the table schema. For DynamoDB, checkoutID is part of the attribute's value and not an attribute itself.
Therefore, the condition you have will not work.
A conditional expression for your use case would be something that says "attribute checkout exists and its value is <the expected map>". However, in order to do that, you'd need to pass the expected map, which boils down to reading the record before updating.
I do think that reading the record, updating the value and persisting it should be the way to go in this case (and is not necessarily a bad idea).
Do consider using some kind of optimistic locking in this case to protect against dirty reads and writes.
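Here is a minimal sketch of that read-then-conditionally-update flow with a simple optimistic lock, using the AWS SDK for JavaScript DocumentClient. Table and attribute names follow the question; the function shape and error handling are illustrative assumptions, not a definitive implementation:

    // Sketch only: read the current checkout map, then update conditionally so a
    // concurrent or repeated release of the same checkout ID fails the condition
    // instead of double-crediting the stock quantity.
    const AWS = require('aws-sdk');
    const doc = new AWS.DynamoDB.DocumentClient();

    async function releaseCheckout(orgzviceid, productid, checkoutId) {
      const key = { orgzviceid, productid };
      const { Item } = await doc.get({ TableName: 'Products', Key: key }).promise();

      const qty = Item && Item.checkout && Item.checkout[checkoutId];
      if (qty === undefined) return; // nothing to release: already done, just ignore

      try {
        await doc.update({
          TableName: 'Products',
          Key: key,
          UpdateExpression: 'REMOVE #checkout.#cid SET #qty = #qty + :released',
          // Optimistic lock: only apply if the entry still holds the value we read.
          ConditionExpression: '#checkout.#cid = :released',
          ExpressionAttributeNames: {
            '#checkout': 'checkout',
            '#cid': String(checkoutId),
            '#qty': 'prod_stk_qty_i_i',
          },
          ExpressionAttributeValues: { ':released': qty },
        }).promise();
      } catch (err) {
        if (err.code !== 'ConditionalCheckFailedException') throw err;
        // Condition failed => another run already released this checkout; ignore.
      }
    }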

uppercase and lowercase not working in Contains in Aws dynamoDB?

I have a DynamoDB database with an attribute Event_Name which has uppercase values, for example KRISHNA. When I specify a Scan FilterExpression with the CONTAINS comparator and a lowercase value, for example krishna, the item with value KRISHNA is not returned. When I use the uppercase value it returns the item. Please help me.
For reference my code is:
var params = {
    TableName: "User",
    FilterExpression: "NOT userId in (:a) and contains(Event_Name, :name)",
    ExpressionAttributeValues: {
        ":a": {
            S: $scope.userid
        },
        ":name": {
            S: namekey
        }
    }
};
I am using the DynamoDB scan method.
Probably you have already figured this out, but since I stumbled upon this question and it's not closed, here is a link to an AWS forum thread addressing the issue:
https://forums.aws.amazon.com/thread.jspa?threadID=92159
DynamoDB is case sensitive. If your data is case insensitive, one solution is to lower case or upper case the data before storing it in DynamoDB. Then you can get around this by querying for all lower case or all upper case. You will need to take locale into account for locale-sensitive ordering.
So there is nothing wrong with what you are doing; you were just expecting something that is not available in DynamoDB.
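A common workaround along those lines is to store a lowercased copy of the value at write time and filter on that copy. Here is a minimal sketch with the DocumentClient; the extra attribute event_name_lower and the placeholder values are assumptions, not part of the question's schema:

    // Sketch only: keep a lowercased shadow attribute and scan against it,
    // lowercasing the search term too, so contains() is effectively case-insensitive.
    var AWS = require('aws-sdk');
    var docClient = new AWS.DynamoDB.DocumentClient();

    var currentUserId = "u123";   // placeholder values
    var namekey = "krishna";

    // At write time, duplicate the value in lowercase (hypothetical attribute name).
    docClient.put({
        TableName: "User",
        Item: {
            userId: "u456",
            Event_Name: "KRISHNA",
            event_name_lower: "KRISHNA".toLowerCase()
        }
    }, function (err) { if (err) console.error(err); });

    // At read time, filter on the lowercase copy.
    docClient.scan({
        TableName: "User",
        FilterExpression: "NOT userId in (:a) and contains(event_name_lower, :name)",
        ExpressionAttributeValues: {
            ":a": currentUserId,
            ":name": namekey.toLowerCase()
        }
    }, function (err, data) {
        if (err) console.error(err);
        else console.log(data.Items);
    });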

Firebase database: Referencing a dynamic value

Let's assume I'm trying to build a group messaging application, so I designed my database structure to look like this:
users: {
    uid1: { //A user id using push()
        username: "user1"
        email: "aaa@bbb.ccc"
        timestampJoined: 18594659346
        groups: {
            gid1: true,
            gid3: true
        }
    }
    uid2: {
        username: "user2"
        email: "ddd@eee.fff"
        timestampJoined: 34598263402
        groups: {
            gid1: true,
            gid5: true
        }
    }
    ....
}
groups: {
    gid1: { //A group id using push()
        name: "group1"
        users: {
            uid1: true,
            uid2: true
        }
    }
    gid2: {
        name: "group2"
        users: {
            uid5: true,
            uid7: true,
            uid80: true
        }
    }
    ...
}
messages: {
    gid1: {
        mid1: { //A message id using push()
            sender: uid1
            message: "hello"
            timestamp: 12839617675
        }
        mid2: {
            sender: uid2
            message: "welcome"
            timestamp: 39653027465
        }
        ...
    }
    ...
}
According to Firebase's docs this would scale great.
Now let's assume that inside my application, I want to display the sender's username on every message.
Querying the username for every single message is obviously bad, so one of the solutions that I found was to duplicate the username in every message.
The messages node will now look like so:
messages: {
    gid1: {
        mid1: { //A message id using push()
            sender: uid1
            username: "user1"
            message: "hello"
            timestamp: 12839617675
        }
        mid2: {
            sender: uid2
            username: "user2"
            message: "welcome"
            timestamp: 39653027465
        }
        ...
    }
    ...
}
Now I want to add the option for the user to change his username.
So if a user decides to change his username, it has to be updated in the users node, and in every single message that he ever sent.
If I would have gone with the "listener for every message" approach, then changing the username would have been easy, because I would have needed to change the name in a single location.
Now, I also have to update the name in every message he has sent in every group.
I assume that querying the entire messages node for the user id is a bad design, so I thought about creating another node that stores the locations of all the messages that a user has sent.
It will look something like this:
userMessages: {
    uid1: {
        gid1: {
            mid1: true
        }
        gid3: {
            mid6: true,
            mid12: true
        }
        ...
    }
    uid2: {
        gid1: {
            mid2: true
        }
        gid5: {
            mid13: true,
            mid25: true
        }
        ...
    }
    ...
}
So now I could quickly fetch the locations of all the messages for a specific user, and update the username with a single updateChildren() call.
Is this really the best approach? Do I really have to duplicate so much data (millions of messages) only because I'm referencing a dynamic value (the username)?
Or is there a better approach when dealing with dynamic data?
This is a perfect example of why, in general, parent node names (keys) should be disassociated from the values they contain or represent.
So some big picture thinking may help and considering the user experience may provide the answer.
Now lets assume that inside my application, I want to display the sender's username on every message.
But do you really want to do that? Does your user really want to scroll through a list of 10,000 messages? Probably not. Most likely, the app is going to display a subset of those messages and even at that probably 10 or 12 at a time.
Here's some thoughts:
Assume a users table:
users
    uid_0
        name: Charles
    uid_1
        name: Larry
    uid_2
        name: Debbie
and a messages table:
messages
    msg_1
        sender: uid_1
        message: "hello"
        timestamp: 12839617675
        observers:
            uid_0: true
            uid_1: true
            uid_2: true
Each user logs in and the app performs a query that observes the messages nodes they are part of. The app displays the message text of each message as well as the name of each user that's also observing that message (the 'group').
This could also be used to just display the user name of the user that posted it.
Solution 1: When the app starts, load in all of the users in the users node and store them in a dictionary with the uid_ as the key.
When the messages node is being observed, each message is loaded, and you will have the uids of the other users (or the poster) stored in the users_dict by key, so just pick their name:
let name = users_dict["uid_2"]
Solution 2:
Suppose you have a LOT of data stored in your users node (which is typical) and a thousand users. There's no point in loading all of that data when all you are interested in is their name, so you could either
a) Use solution #1 and just ignore all of the other data other than the uid and name, or
b) Create a separate 'names' node in Firebase which only keeps the user name, so you don't need to store it in the users node.
names:
    uid_0: Charles
    uid_1: Larry
    uid_2: Debbie
As you can see, even with a couple thousand users, that's a tiny bit of data to load in. And... the cool thing here is that if you add a listener to the names node, then whenever a user changes their name the app will be notified and can update your UI accordingly.
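As a minimal sketch of option b) with the legacy JavaScript SDK used elsewhere on this page (the database URL is a placeholder), a listener on the names node keeps a local cache up to date when someone renames themselves:

    // Sketch only: cache the lightweight /names node and react to renames.
    var namesRef = new Firebase("https://<your-app>.firebaseio.com/names");
    var namesCache = {};

    // Initial load + additions keep the cache populated.
    namesRef.on("child_added", function (snap) {
        // snap.key() in the 2.x SDK (snap.name() in older releases)
        namesCache[snap.key()] = snap.val();   // e.g. namesCache["uid_0"] = "Charles"
    });

    // A user changed their name: update the cache and refresh any visible messages.
    namesRef.on("child_changed", function (snap) {
        namesCache[snap.key()] = snap.val();
        // re-render message rows that reference snap.key() here
    });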
Solution 3:
Load your names on an as-needed basis. While technically you can do this, I don't recommend it:
Observe all of the message nodes the user is part of. As they are read in, build a dictionary of uids that you will need the names for. Then perform a query for each user name based on the uid. This can work, but you have to take the asynchronous nature of Firebase into account and allow time for the names to be loaded. Likewise, you could load a message, then load the user name for that message from the path users/uid_x/user_name. Again, though, this gets into an async timing issue where you are nesting async calls within async calls or a loop, and that should probably be avoided.
The important point with any solution is the user experience and keeping your Firebase structure as flat as possible.
For example, if you do in fact want to load 10,000 messages, consider breaking the message text or subject out into another node, and only load those nodes for your initial UI list. As the user drills down into the message, then load the rest of the data.
Steps to follow:
fetch usernames at every restart of the app
cache them locally
show the username from the cache based on the uid
done
Note: how you fetch the usernames depends on your implementation
You only need this structure
mid1: { //A message id using push()
    sender: uid1
    message: "hello"
    timestamp: 12839617675
}
The username can be read from the users node directly ("users/uid1/username") using a single value event listener after you read each child. Firebase is meant to be used with sequential calls like this, since you cannot create complex queries as in SQL.
And just to keep it efficient you could:
1) Create a reference dictionary to use as a cache handler, in which, after you read every message, you check whether you already have the value for each key:
[uid1: "John", uid2: "Peter", ... etc ...]
If a key doesn't exist, add it with a single value listener pointing to /users/$uid/username that handles the "add to cache" in its callback.
2) Use the limitTo, startAt and endAt queries to paginate the listener and avoid bringing in data the user won't see.
*There is no need to keep updating all the messages in all the nodes with every username change. Imagine a chat group with 100 users in which every user has 20 messages: that is 2000 updates with your single updateChildren() call, which would be extremely inefficient. It is not scalable, and you are updating data that almost no user will ever see again in a real-life scenario (like the first message of the 2000 chat messages).
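Here is a minimal sketch of that cache-plus-pagination idea with the legacy JavaScript SDK (paths follow the structures above; the URL, page size and callback wiring are illustrative assumptions):

    // Sketch only: resolve usernames lazily and cache them per uid.
    var rootRef = new Firebase("https://<example>.firebaseio.com");
    var usernameCache = {};   // e.g. { uid1: "John", uid2: "Peter" }

    function withUsername(uid, callback) {
        if (usernameCache[uid] !== undefined) {
            callback(usernameCache[uid]);
            return;
        }
        // Single read of /users/$uid/username; cached for later messages.
        rootRef.child("users").child(uid).child("username")
            .once("value", function (snap) {
                usernameCache[uid] = snap.val();
                callback(snap.val());
            });
    }

    // Paginate the message listener so only visible data is downloaded.
    rootRef.child("messages").child("gid1")
        .orderByChild("timestamp")
        .limitToLast(25)
        .on("child_added", function (snap) {
            var msg = snap.val();
            withUsername(msg.sender, function (name) {
                console.log(name + ": " + msg.message);
            });
        });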

Why use an object when denormalising data?

In the recent blog post on denormalising data, it is suggested that you store all of a user's comments beneath each user, like so:
comments: {
    comment1: true,
    comment2: true
}
Why is this not a list like so:
comments: [
    "comment1",
    "comment2",
]
What are the advantages? Is there any difference at all? While I'm at it, how would you go about generating unique references for these comments for a distributed app? I was imagining that with a list I'd just push them onto the end and let the array take care of the index.
Firebase only ever stores objects. The JS client converts arrays into objects, using the index as the key. So, for instance, if you store the following array using set:
comments: [
    "comment1",
    "comment2"
]
In Forge (the graphical debugger), it will show up as:
comments:
    0: comment1
    1: comment2
Given this, storing the ID of the comment directly as a key has the advantage that you can refer to it directly in the security rules, for example, with an expression like:
root.child('comments').hasChild($comment)
In order to generate unique references for these comments, please use push (https://www.firebase.com/docs/managing-lists.html):
var commentsRef = new Firebase("https://<example>.firebaseio.com/comments");
var id = commentsRef.push({content: "Hello world!", author: "Alice"});
console.log(id); // Unique identifier for the comment just added.
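To tie this back to the question, here is a hedged sketch of how the push key can then be used as the key of the denormalised comments map under a user (legacy JavaScript SDK; the URL and user path are placeholders, and the key accessor is .key() on the 2.x SDK, .name() on older releases):

    // Sketch only: create the comment, then index it under the user by its push key.
    var rootRef = new Firebase("https://<example>.firebaseio.com");
    var commentRef = rootRef.child("comments")
        .push({ content: "Hello world!", author: "Alice" });

    var commentId = commentRef.key();   // .name() on pre-2.x SDKs
    rootRef.child("users/alice/comments").child(commentId).set(true);

    // A security rule like root.child('comments').hasChild($comment) can now
    // check that every key stored under users/alice/comments is a real comment.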
