I have an online game, with an actor that represents a user's state. The state is updated via recursive become() calls:
private PartialFunction<Object, BoxedUnit> updatedUser(final User user) {
    return ReceiveBuilder.
        ...
        matchEquals("update", s -> {
            context().become(updatedUser(new User(...)));
        }).build();
}
Now when the user leaves the game (and the actor stops), I need to save its state to a database. I think the ideal place for this would be to send a message from postStop(), but the user's state is out of scope there:
public void postStop() throws Exception {
    // user state out of scope
    Database.tell(user, self());
}
I don't want to have state as an actor field. What would be the best way to solve this problem?
There's no way to access the user value outside of the updatedUser function.
Just make your user state an instance variable of the actor.
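A minimal sketch of that, assuming the Akka 2.5 AbstractActor API (User and the Database reference are the ones from the question, and the constructor arguments stay elided):

import akka.actor.AbstractActor;

public class UserActor extends AbstractActor {

    private User user; // current state lives in a field instead of a become() closure

    public UserActor(User initialUser) {
        this.user = initialUser;
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchEquals("update", s -> {
                    user = new User(...); // reassign the field rather than calling become()
                })
                .build();
    }

    @Override
    public void postStop() {
        Database.tell(user, self()); // the field is still in scope here
    }
}

The mutable field is safe because the actor processes one message at a time, so there is no concurrent access to worry about.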
I have a simple flow that creates a state between a buyer and seller and obviously each side can see everything on the state.
However, I now have a requirement that the buyer wants to store the user that processed the transaction for auditing and reporting purposes.
The user in this case is not a node or an account but a user that has logged in to the application and been authorised via Active Directory.
I could just add the user name to the state as a String but that would expose private data to the seller.
An alternative would be to obfuscate the name in some way, but I would rather store the information in a separate table outside the state and only in the buyer's vault.
How do I do this and is there a sample that demonstrates it?
You can create a second output state, which is used in the same transaction but has only the token issuer as a participant. It is then up to you to make the link between the "issued state" and the "recorder state"; that depends on what you store inside the latter.
Let's take the example of a fungible token issuance from Node1 to Node2. You could create an "issuance recorder state" that records something in Node1's vault only, like so (note the list of participants):
// the "recorder" state, visible only in Node1's vault
@BelongsToContract(IssuanceRecordContract::class)
class IssuanceRecord(
    val holder: AbstractParty,
    val amount: Amount<IssuedTokenType>
) : ContractState {
    override val participants: List<AbstractParty>
        get() = listOf(amount.token.issuer)
}
and then you could pass it to the same TransactionBuilder that you are using to issue the fungible token (which instead has both parties in the list of participants), like so:
// This is from the IssuanceFlow launched from Node1
@Suspendable
override fun call(): String {
    ...
    ...
    // Create the FungibleToken (issuedToken is an IssuedTokenType created before)
    val tokenCoin = FungibleToken(Amount(volume.toLong(), issuedToken), holderAnonymousParty)

    // Create the "recorder" output state, visible only to Node1
    val issuanceRecord = IssuanceRecord(holderAnonymousParty, tokenCoin.amount)

    // Create the TransactionBuilder, passing the "recorder" output state
    val transactionBuilder = TransactionBuilder(notary).apply {
        addOutputState(issuanceRecord, IssuanceRecordContract.ID)
        addCommand(Command(IssuanceRecordContract.Commands.Issue(), ourIdentity.owningKey))
    }

    // Issue the token, passing the transactionBuilder and the fungible token
    addIssueTokens(transactionBuilder, tokenCoin)

    // collect signatures
    // verify transaction
    // FinalityFlow (fundamental to make this work in Node1)
}
This way, I think, the recorder state will be stored atomically in Node1's vault: if anything goes wrong, the transaction fails for both output states.
I am looking for the correct contract upgrade process. Consider the following example:
class SimpleContract : Contract {
    data class State(override val owner: AbstractParty, val relevantParticipant: AbstractParty) : OwnableState {
        override val participants: List<AbstractParty> = listOf(owner, relevantParticipant)

        override fun withNewOwner(newOwner: AbstractParty): CommandAndState =
            CommandAndState(Commands.Move(), copy(owner = newOwner))
    }
}
As I understand it, this state is only stored in the owner's vault, but the relevantParticipant also has (in its transaction storage) the transaction in which the SimpleContract.State is one of the outputs. If the owner were to (authorise and) initiate the upgrade, the flow fails because the relevantParticipant has not authorised the contract upgrade for it. What is the right approach here?
One solution is for the owner to send the StateRef to the relevantParticipant. The relevantParticipant can then retrieve the StateAndRef using ServiceHub.loadState, and choose to authorise the contract upgrade using ContractUpgradeFlow.Authorise.
This is better than sending the StateAndRef directly, since the relevantParticipant can then verify that the state being sent hasn't been tampered with (since they retrieve the actual state from their storage, not the counterparty's).
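The flows for that exchange aren't shown here, but a rough sketch could look like the following, written against Corda's Java flow API. SimpleContractV2 is a placeholder for whatever UpgradedContract implementation you are upgrading to, the flow names are made up, and you should check the exact Authorise constructor for your Corda version:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.contracts.ContractState;
import net.corda.core.contracts.StateAndRef;
import net.corda.core.contracts.StateRef;
import net.corda.core.flows.*;
import net.corda.core.identity.Party;

@InitiatingFlow
public class ProposeUpgradeFlow extends FlowLogic<Void> {
    private final Party relevantParticipant;
    private final StateRef stateRef;

    public ProposeUpgradeFlow(Party relevantParticipant, StateRef stateRef) {
        this.relevantParticipant = relevantParticipant;
        this.stateRef = stateRef;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Owner side: send only the StateRef, not the StateAndRef itself
        initiateFlow(relevantParticipant).send(stateRef);
        return null;
    }
}

@InitiatedBy(ProposeUpgradeFlow.class)
public class AuthoriseUpgradeFlow extends FlowLogic<Void> {
    private final FlowSession ownerSession;

    public AuthoriseUpgradeFlow(FlowSession ownerSession) {
        this.ownerSession = ownerSession;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        StateRef stateRef = ownerSession.receive(StateRef.class).unwrap(ref -> ref);
        // Resolve the state from our own storage, so it cannot have been tampered with
        StateAndRef<ContractState> stateAndRef = getServiceHub().toStateAndRef(stateRef);
        subFlow(new ContractUpgradeFlow.Authorise(stateAndRef, SimpleContractV2.class));
        return null;
    }
}

Each flow would of course live in its own file; the point is just that only the StateRef crosses the wire and the responder resolves and authorises the state locally.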
Let's assume I'm trying to build a group messaging application, so I designed my database structure to look like so:
users: {
  uid1: { // A user id using push()
    username: "user1"
    email: "aaa@bbb.ccc"
    timestampJoined: 18594659346
    groups: {
      gid1: true,
      gid3: true
    }
  }
  uid2: {
    username: "user2"
    email: "ddd@eee.fff"
    timestampJoined: 34598263402
    groups: {
      gid1: true,
      gid5: true
    }
  }
  ....
}
groups: {
  gid1: { // A group id using push()
    name: "group1"
    users: {
      uid1: true,
      uid2: true
    }
  }
  gid2: {
    name: "group2"
    users: {
      uid5: true,
      uid7: true,
      uid80: true
    }
  }
  ...
}
messages: {
  gid1: {
    mid1: { // A message id using push()
      sender: uid1
      message: "hello"
      timestamp: 12839617675
    }
    mid2: {
      sender: uid2
      message: "welcome"
      timestamp: 39653027465
    }
    ...
  }
  ...
}
According to Firebase's docs this would scale great.
Now let's assume that inside my application, I want to display the sender's username on every message.
Querying the username for every single message is obviously bad, so one of the solutions that I found was to duplicate the username in every message.
The messages node will now look like so:
messages: {
  gid1: {
    mid1: { // A message id using push()
      sender: uid1
      username: "user1"
      message: "hello"
      timestamp: 12839617675
    }
    mid2: {
      sender: uid2
      username: "user2"
      message: "welcome"
      timestamp: 39653027465
    }
    ...
  }
  ...
}
Now I want to add the option for the user to change his username.
So if a user decides to change his username, it has to be updated in the users node, and in every single message that he ever sent.
If I had gone with the "listener for every message" approach, changing the username would have been easy, because I would only have needed to change the name in a single location.
Now I also have to update the name in every message he ever sent, in every group.
I assume that querying the entire messages node for the user id is a bad design, so I thought about creating another node that stores the locations of all the messages that a user has sent.
It will look something like this:
userMessages: {
  uid1: {
    gid1: {
      mid1: true
    }
    gid3: {
      mid6: true,
      mid12: true
    }
    ...
  }
  uid2: {
    gid1: {
      mid2: true
    }
    gid5: {
      mid13: true,
      mid25: true
    }
    ...
  }
  ...
}
So now I could quickly fetch the locations of all the messages for a specific user, and update the username with a single updateChildren() call.
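For reference, that single multi-location update could look something like this with the Firebase Android SDK (which updateChildren() suggests); uid and newUsername stand for the renamed user's data, and the paths follow the structure above:

import com.google.firebase.database.*;
import java.util.HashMap;
import java.util.Map;

void renameUser(final String uid, final String newUsername) {
    final DatabaseReference root = FirebaseDatabase.getInstance().getReference();
    // Read the userMessages index once to find every message this user has sent
    root.child("userMessages").child(uid).addListenerForSingleValueEvent(new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot snapshot) {
            Map<String, Object> updates = new HashMap<>();
            updates.put("/users/" + uid + "/username", newUsername);
            for (DataSnapshot group : snapshot.getChildren()) {
                for (DataSnapshot message : group.getChildren()) {
                    updates.put("/messages/" + group.getKey() + "/" + message.getKey() + "/username",
                            newUsername);
                }
            }
            root.updateChildren(updates); // one multi-location write covering every copy
        }

        @Override
        public void onCancelled(DatabaseError error) {
            // the read was cancelled; nothing was updated
        }
    });
}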
Is this really the best approach? Do I really have to duplicate so much data (millions of messages) only because I'm referencing a dynamic value (the username)?
Or is there a better approach when dealing with dynamic data?
This is a perfect example of why, in general, parent node names (keys) should be disassociated from the values they contain or represent.
So some big-picture thinking may help, and considering the user experience may provide the answer.
Now let's assume that inside my application, I want to display the sender's username on every message.
But do you really want to do that? Does your user really want to scroll through a list of 10,000 messages? Probably not. Most likely the app is going to display a subset of those messages, and even then probably 10 or 12 at a time.
Here are some thoughts:
Assume a users table:
users
  uid_0
    name: Charles
  uid_1
    name: Larry
  uid_2
    name: Debbie
and a messages table:
messages
  msg_1
    sender: uid_1
    message: "hello"
    timestamp: 12839617675
    observers:
      uid_0: true
      uid_1: true
      uid_2: true
Each user logs in and the app performs a query that observes the messages nodes they are part of; the app displays the text of each message as well as the name of every user that's also observing that message (the 'group').
This could also be used to just display the user name of the user that posted it.
Solution 1: When the app starts, load in all of the users in the users node and store them in a dictionary with the uid as the key.
When the messages node is being observed, each message is loaded and you will have the uids of the other users (or the poster) stored in users_dict by key, so just look up their name:
let name = users_dict["uid_2"]
Solution 2:
Suppose you have a LOT of data stored in your users node (which is typical) and a thousand users. There's no point in loading all of that data when all you are interested in is their name, so you could either
a) use Solution 1 and just ignore all of the data other than the uid and name, or
b) create a separate 'names' node in Firebase which keeps only the user name, so you don't need to store it in the users node:
names:
  uid_0: Charles
  uid_1: Larry
  uid_2: Debbie
As you can see, even with a couple thousand users, that's a tiny amount of data to load in. And... the cool thing here is that if you add a listener to the names node, the app will be notified whenever a user changes their name and can update your UI accordingly.
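A rough sketch of 2b with the Firebase Android SDK (same classes and imports as the sketch further up; the names node and the in-memory map mirror this answer's structure):

private final Map<String, String> namesCache = new HashMap<>();

void watchNames() {
    DatabaseReference namesRef = FirebaseDatabase.getInstance().getReference("names");
    // A long-lived listener: fires once with all names, then again whenever any name changes
    namesRef.addValueEventListener(new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot snapshot) {
            namesCache.clear();
            for (DataSnapshot child : snapshot.getChildren()) {
                namesCache.put(child.getKey(), child.getValue(String.class));
            }
            // refresh any visible messages here so renamed users show up immediately
        }

        @Override
        public void onCancelled(DataError error) { }
    });
}

Looking up a name when rendering a message is then just namesCache.get(senderUid).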
Solution 3:
Load your names on an as-needed basis. While technically you can do this, I don't recommend it:
Observe all of the messages nodes the user is part of. As those nodes are read in, build a dictionary of uids that you will need the names of, then perform a query for each user name based on the uid. This can work, but you have to take the asynchronous nature of Firebase into account and allow time for the names to be loaded in. Likewise, you could load in a message, then load in the user name for that message with the path users/uid_x/user_name. Again, though, this gets into an async timing issue where you are nesting async calls within async calls or a loop, and that should probably be avoided.
The important point with any solution is the user experience and keeping your Firebase structure as flat as possible.
For example, if you do in fact want to load 10,000 messages, consider breaking the message text or subject out into another node, and only load those nodes for your initial UI list. As the user drills down into the message, then load the rest of the data.
Steps to follow:
fetch the usernames at every restart of the app
cache them locally
show the username from the cache based on the uid
done
Note: how you fetch the usernames depends on your implementation; a minimal sketch follows below.
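For example, the fetch-and-cache steps could look roughly like this with the Firebase Android SDK (classes as in the earlier sketches; the cache here is just an in-memory map keyed by uid):

private final Map<String, String> usernameCache = new HashMap<>();

void preloadUsernames() {
    // one-shot read of the users node at app start
    FirebaseDatabase.getInstance().getReference("users")
            .addListenerForSingleValueEvent(new ValueEventListener() {
                @Override
                public void onDataChange(DataSnapshot snapshot) {
                    for (DataSnapshot user : snapshot.getChildren()) {
                        usernameCache.put(user.getKey(),
                                user.child("username").getValue(String.class));
                    }
                }

                @Override
                public void onCancelled(DatabaseError error) { }
            });
}

// later, when rendering a message:
// String name = usernameCache.get(senderUid);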
You only need this structure:
mid1: { // A message id using push()
  sender: uid1
  message: "hello"
  timestamp: 12839617675
}
The username can be read directly from the users node ("users/uid1/username") using a single value event listener after you read each child. Firebase is meant to be used with sequential calls like this, since you cannot create complex queries as in SQL.
And just to keep it efficient you could:
1) Create a reference dictionary to use as a cache, in which, after you read each message, you check whether you already have the value for its key:
[uid1: "John", uid2: "Peter", ...etc...]
If the key doesn't exist, add it with a single value listener pointing to /users/$uid/username that handles the "add to cache" in its callback.
2) Use the limitToFirst/limitToLast, startAt and endAt queries to paginate the listener and avoid bringing in data the user won't see (a sketch of both points follows below).
*There is no need to keep updating all the messages in all the nodes on every username change. Imagine a chat group with 100 users where every user has 20 messages: that is 2,000 updates with your single updateChildren() call, which would be extremely inefficient. It is not scalable, and you are updating data that in a real-life scenario no user will ever see again (like the first of those 2,000 chat messages).
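A rough sketch of points 1) and 2) together, again with the Firebase Android SDK (the page size and the cache handling are illustrative):

private final Map<String, String> nameCache = new HashMap<>();

void loadLatestMessages(String gid) {
    final DatabaseReference root = FirebaseDatabase.getInstance().getReference();
    root.child("messages").child(gid)
            .orderByChild("timestamp")
            .limitToLast(20) // 2) only bring in what the user will actually see
            .addChildEventListener(new ChildEventListener() {
                @Override
                public void onChildAdded(DataSnapshot message, String previousKey) {
                    final String uid = message.child("sender").getValue(String.class);
                    if (nameCache.containsKey(uid)) {
                        // render the message with nameCache.get(uid)
                    } else {
                        // 1) resolve the name once and add it to the cache in the callback
                        root.child("users").child(uid).child("username")
                                .addListenerForSingleValueEvent(new ValueEventListener() {
                                    @Override
                                    public void onDataChange(DataSnapshot name) {
                                        nameCache.put(uid, name.getValue(String.class));
                                        // render the message with the freshly cached name
                                    }

                                    @Override
                                    public void onCancelled(DatabaseError error) { }
                                });
                    }
                }

                @Override public void onChildChanged(DataSnapshot s, String p) { }
                @Override public void onChildRemoved(DataSnapshot s) { }
                @Override public void onChildMoved(DataSnapshot s, String p) { }
                @Override public void onCancelled(DatabaseError error) { }
            });
}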
I'm using Firebase in a website (and it's awesome).
Via several .on('child_added') and .on('value') callbacks I'm populating a local store that is bound to my UI.
When a user signs out, I want to clean their data out of my local store. What is the recommended way to react to a user signing out with Firebase? Ideally I would like to pass a callback to .unauth() to do a cleanup. But there's no such callback.
My current solution is a bit messy...
I listen with onAuth().
When onAuth is triggered with a non-null value I set a variable isLoggedIn = true.
When onAuth is triggered with null and isLoggedIn === true, then I perform a cleanup.
I don't want to do a clean up every time onAuth is called with null because it does this on page load.
If the user triggers the logout, that means your code is calling unauth(). That would also be the moment to clean up:
ref.unauth();
cleanupData();
But if the user gets signed out for another reason (i.e. their session expiring), then cleaning up in the null part of onAuth() makes sense. If you're worried about the initial null, you could wrap this in a check:
var previousAuthData = null;
ref.onAuth(function(authData) {
  if (authData) {
    ...
  }
  else if (previousAuthData != null) {
    cleanupData();
  }
  previousAuthData = authData;
});
There are multiple examples of publish/subscribe, but it is not clear what the best practice is for storing custom data in the built-in "users" collection in Meteor (especially with the new possibility of template-specific collections).
For example, I need to store user browse history - something that is accessible through Meteor.user().settings.history.lastvisited[]
The challenge is:
Is any special publish / subscribe required for the above? (The reason being, I am assuming the users collection is already published and available on the client side - so do we need another?)
How do I take care of edge cases where the user is new and hence the settings.history object may not be defined? Can we have a special publish that automatically takes care of creating an empty object if settings is undefined? How do I do that?
I did this:
// server side
Meteor.publish('userSettings', function (maxRows) {
  if (this.userId) {
    return Meteor.users.find({ _id: this.userId }, { fields: { 'settings': 1 } });
  }
  this.ready();
});

// client side
Meteor.subscribe('userSettings');
But I do not see how I can access the published "userSettings" data on the client side - what is missing?
You can create the field and set it to false/'' on each user you create, using the Accounts.onCreateUser method.
Accounts.onCreateUser(function(options, user) {
  // this function gets called each time a user is created in the Meteor.users collection
  if (options.profile)
    user.settings = ''; // this is just an example
  return user;
});
Now the publish looks OK, but in order to get it to work I always use a Tracker.autorun function:
Tracker.autorun(function() {
  Meteor.subscribe('userSettings');
});
Why the autorun? Well, if you don't use the autorun here, the subscription only gets called once, when the app loads, and not when the user document changes.
Take care with your deny/allow permissions; check the Meteor "common mistakes" post, in particular the profile editing section.
Also, the subscribe function accepts callbacks (Meteor.subscribe(name, [arg1, arg2...], [callbacks])), so you can do something like this:
var myUserSubscription = Meteor.subscribe('userSettings', function() {
  console.log("ok im here on the client side");
  console.log("this user subscription is ready " + myUserSubscription.ready());
});
console.log("outside the subscription why not? " + myUserSubscription.ready());
About ready(): "True if the server has marked the subscription as ready. A reactive data source."