I need some help understanding something I noticed with Cash.Commands.Issue. During some testing I discovered that by using Cash.Commands.Issue an issuer can take cash away (consume a Cash.State) from the party that owns that Cash.State without their consent.
I was rewriting CashTests.kt in Java for practice:
https://github.com/corda/corda/blob/master/finance/contracts/src/test/kotlin/net/corda/finance/contracts/asset/CashTests.kt
...when I came across this behavior. The "extended issue examples" test demonstrates that, using an Issue command, an issuer can consume an existing Cash.State and create a new one with a larger Amount. This makes sense as long as the only thing that changes is the increased Amount.
For fun I tried changing the Owner of the new Cash.State, expecting it to fail - it didn't. It creates the new Cash.State with the new Owner and the higher Amount while consuming the original Cash.State, leaving the original Owner with nothing. Is this correct? My interpretation of this transaction is that the issuer of a Cash.State can take cash away from the owning party without their permission.
Amount<Currency> amountCurrency = DOLLARS(1000);
Amount<Issued<Currency>> amountIssuedCurrency = new Amount<Issued<Currency>>(
        amountCurrency.getQuantity(),
        new Issued<Currency>(megaCorp.ref("123".getBytes()), amountCurrency.getToken())
);
Cash.State initialCashState = new Cash.State(amountIssuedCurrency, alice.getParty());
Cash.State doubleInitialCashStateAndNewOwner =
        initialCashState.copy(initialCashState.getAmount().times(2), charlie.getParty());
NodeTestUtils.ledger(megaCorpServices, dummyNotary.getParty(), l -> {
    l.transaction("megaCorp issues alice money", tx -> {
        tx.attachment(Cash.PROGRAM_ID);
        tx.output(Cash.PROGRAM_ID, "alice money", initialCashState);
        tx.command(megaCorp.getPublicKey(), new Cash.Commands.Issue());
        return tx.verifies();
    });
    // Here we will extend "alice money" to double the amount but switch the
    // owner to charlie (this is the part that confuses me)
    l.transaction("megaCorp extends issue and changes owner", tx -> {
        tx.attachment(Cash.PROGRAM_ID);
        tx.input("alice money");
        tx.output(Cash.PROGRAM_ID, "charlie money", doubleInitialCashStateAndNewOwner);
        tx.command(megaCorp.getPublicKey(), new Cash.Commands.Issue());
        return tx.verifies();
    });
    l.verifies();
    return Unit.INSTANCE;
});
I believe the finance module is a bit out of date. Take a look at the Tokens SDK, which is actively being developed and should be supported moving forward.
Related
This is my very first time building and deploying a website, so bear with me if this is a dumb question. I'm building a Heardle-style clone. The idea is that every day there's a new song, and people have six guesses to figure out which song it is from short clips of the song. Every part of this seems to work, with one major exception: I can't seem to reload today's song dynamically.
I have a function:
export async function getStaticProps() {
    const allSearchableSongs = songHelper.getSongData()
    const todaysSong = await songHelper.getTodaysSong()
    const songURLs: string[] = todaysSong ? await songHelper.getTodaysSongClips(todaysSong) : []
    // Note: an array is always truthy, so check its length instead
    const valid = songURLs.length > 0
    return {
        props: {
            allSearchableSongs,
            todaysSong,
            songURLs,
            valid
        },
        revalidate: 10
    };
}
Note that getTodaysSong() and getTodaysSongClips() both make calls to AWS to get data from S3. Whenever I rebuild the website this works well. However, I would like this to refresh after 60 seconds so that nobody is ever looking at a stale website. But it never changes: the song is always out of date until I redeploy. I've checked to make sure that the data is changing daily, and that's all well and good, but the website never reloads.
I'm currently hosting this on Vercel.
What am I doing wrong? How can I ensure that this reloads after 60 seconds?
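For reference, here is a minimal sketch of the ISR contract, restating documented Next.js behavior rather than diagnosing this particular deployment. revalidate does not refresh the page on a timer; it marks the cached page as stale after the window and regenerates it in the background on the next request (songHelper is the question's own helper):
export async function getStaticProps() {
    const todaysSong = await songHelper.getTodaysSong();
    return {
        props: { todaysSong },
        // After 60 seconds the cached page counts as stale. The next request
        // still receives the stale page but triggers a background rebuild;
        // requests after that see the fresh page. Nothing regenerates
        // without incoming traffic.
        revalidate: 60,
    };
}
One consequence worth checking while testing: the first visitor after the window still sees the old song once before the regenerated page is served.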
I am running into an issue with cash states. Basically, I have a node that issues itself money and is unable to access an existing state to use for payment or anything else. Say this state is 5 dollars: if I issue 10 more, both rpcOps and the service hub's getCashBalances will say that I have 15 dollars. However, any cash flow that tries to use more than 10 dollars will tell me I don't have sufficient balance.
I've set up API endpoints for the node to even just exit the cash, but it says that I'm exiting more than I have. When I query the vault with QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED), I can see the state is there, and there doesn't seem to be anything that differentiates the inaccessible state from any subsequent accessible states.
Could there be anything I'm overlooking here? The issuers are the same, and the owners are hashed but should be the same as well.
Updated with command / code:
fun selfIssueTime(@QueryParam(value = "amount") amount: Long,
                  @QueryParam(value = "currency") currency: String): Response {
    // 1. Prepare issue request.
    val issueAmount = Amount(amount * 100, Currency.getInstance(currency))
    val notary = rpcOps.notaryIdentities().firstOrNull() ?: throw IllegalStateException("Could not find a notary.")
    val issueRef = OpaqueBytes.of(0)
    val issueRequest = CashIssueFlow.IssueRequest(issueAmount, issueRef, notary)
    val self = myIdentity
    // 2. Start flow and wait for response.
    val (status, message) = try {
        val flowHandle = rpcOps.startFlowDynamic(
                CashIssueFlow::class.java,
                issueRequest
        )
        flowHandle.use { it.returnValue.getOrThrow() }
        CREATED to "$issueAmount issued to $self."
    } catch (e: Exception) {
        BAD_REQUEST to e.message
    }
    // 3. Return the response.
    return Response.status(status).entity(message).build()
}
I believe this is fixed in later versions of the Corda finance JARs. We have developed a couple more CorDapp samples using the currency class, and we did not run into any issues. For example: https://github.com/corda/samples-java/blob/master/Tokens/dollartohousetoken/workflows/src/main/java/net/corda/examples/dollartohousetoken/flows/FiatCurrencyIssueFlow.java#L39
Furthermore, with the release of the Corda Tokens SDK, currency on Corda has a new way to be issued, transferred, and redeemed. This is done by:
/* Create an instance of the fiat currency token */
TokenType token = FiatCurrency.Companion.getInstance(currency);
/* Create an instance of IssuedTokenType for the fiat currency */
IssuedTokenType issuedTokenType = new IssuedTokenType(getOurIdentity(), token);
/* Create an instance of FungibleToken for the fiat currency to be issued */
FungibleToken fungibleToken = new FungibleToken(new Amount<>(amount, issuedTokenType), recipient, null);
/* Issue the required amount of the token to the recipient */
return subFlow(new IssueTokens(ImmutableList.of(fungibleToken), ImmutableList.of(recipient)));
I'm creating a simple 2D multiplayer game with Unity, and I chose Firebase as the backend. I'm facing some issues when trying to fill up rooms using Firebase Cloud Functions. Here is how I planned this to work:
Player clicks the "Join Room" button
The unique device ID is sent to the Realtime Database under "Players Searching For Room", and an event listener is added to that ID
The Cloud Function is triggered when the onWrite event happens. The function then checks whether the room array is empty. If the room array is empty, the Cloud Function pushes a new room to the Realtime Database.
The Cloud Function pushes the room ID under the player ID in "Players Searching For Room"
Because the player is already listening to his own ID under "Players Searching For Room", a function runs when the room ID is pushed under his own ID. This tells the player that the Cloud Function successfully found a room for him (a sketch of this step follows below).
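The game client is Unity, but a web-flavored TypeScript sketch of that last step, with hypothetical deviceId and joinRoom names and assuming inRoom starts out as "none" as in the function below, could look like:
// Watch our own entry until the Cloud Function writes the assigned room ID.
const myRef = firebase.database().ref('Players Searching For Room/' + deviceId);
myRef.on('value', (snapshot) => {
    const data = snapshot.val();
    if (data && data.inRoom && data.inRoom !== 'none') {
        myRef.off();           // stop listening once a room has been assigned
        joinRoom(data.inRoom); // hypothetical handler that enters the room
    }
});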
Below is the Cloud function:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

// Create and Deploy Your First Cloud Functions
// https://firebase.google.com/docs/functions/write-firebase-functions

var room = [];
// ID for the room we are filling at the moment
var room_currentID;

// This function triggers each time something is added to or deleted from "Players Searching For Room"
exports.findRoom = functions.database
    .ref('/Players Searching For Room/{pushId}')
    .onWrite(event => {
        // Check if data exists (if not, this was triggered by a delete -> return)
        if (!event.data.exists())
        {
            return;
        }
        // If this player already has a room, we want to return
        if (event.data.val().inRoom != "none")
            return;
        // Store data under the changed pushId (if the player was added to waiting players the data is -> "size" : 4)
        const data = event.data.val();
        // Name of the player. We get this from the pushId of the item we pushed ("container" for the pushed data).
        var name = event.params.pushId;
        // Size of the room the player wants to join
        var size = data.size;
        // With IF statements, check which room_size_n object array we want to loop through
        console.log("Player called " + name + " is waiting for a room that has a maximum of " + size + " players");
        // We can push the user to the room array since it can never be full
        // (we clear the array before allowing the next guy to join)
        room.push(name);
        // If this was the first guy, we need to create a new room
        if (room.length == 1)
        {
            admin.database().ref('/Rooms').push({
                onGoing: false // We need to set something, might as well set something useful
            }).then(snapshot => {
                // Because this function is triggered by changes in the Firebase Realtime Database,
                // we can't return anything to the player. BUT we can inform the player about the room
                // he's been attached to by adding the roomID to the player name in "Players Searching For Room";
                // the player's device will then handle removing it.
                // Store the ID of the room so that we can send it to later joiners of this room
                room_currentID = snapshot.key;
                data.inRoom = room_currentID;
                return event.data.ref.set(data);
            });
        }
        // If there already exists a suitable room with space in it
        else if (room.length > 1)
        {
            // We can attach the stored roomID to the player so he knows which room's onGoing flag to watch for.
            data.inRoom = room_currentID;
            // Attach the roomID to the player and then check if the room is full.
            // Waiting for the roomID to attach to the player before setting onGoing to TRUE
            // prevents other players from getting a head start.
            event.data.ref.set(data).then(snapshot => {
                // If the roomId was attached to the player, we can check the room size
                if (room.length == size)
                {
                    // ...and if the room became full we need to set onGoing to true
                    admin.database().ref('/Rooms/' + room_currentID).set({
                        onGoing: true
                    }).then(snapshot => {
                        room = [];
                    });
                }
            });
        }
    });
The problem is that if multiple users click the Join Game button within a short period of time, it messes up the system. Adding the player ID under "Players Searching For Room" seems to work every time, but sometimes the Cloud Function never attaches a room ID to the player ID, and sometimes the Cloud Function creates more rooms than it should. I tested this simply by having a button that pushed a random ID under "Players Searching For Room" each time it was clicked. Then I rapidly clicked that button 10 times. That should have attached 5 different room IDs to those 10 random IDs and also generated 5 new rooms. Instead it generated 7 rooms and added room IDs to only 8 of the 10 random IDs.
The problem is, I think:
Abe clicks the Join Game button at 12.00.00
The Cloud Function starts (execution takes 5 seconds)
Rob clicks the Join Game button at 12.00.02
The Cloud Function triggers again before finishing Abe's request
Everything gets messed up
Is it possible with Firebase to arrange this so that if Rob triggers the Cloud Function before Abe's request is done, Rob is put on hold while Abe finishes, and when Abe is finished it's Rob's turn? Ugh, awfully long explanation; hopefully somebody will read this :)
At Google I/O 2017, I gave a talk on building a multiplayer game using only Firebase on the backend. Cloud Functions implements pretty much all the logic of the game. It also has a simple matching feature, and you could extend that scheme to do more complicated things. You can watch the talk here, and source code for the project will be coming soon.
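To address the race directly: the root cause is the room and room_currentID globals, which may or may not be shared across concurrent function invocations. A Realtime Database transaction moves that state into the database and serializes concurrent updates. Below is a minimal TypeScript sketch of the idea (my own, not code from the talk); the /matchmaking/waiting node and the ROOM_SIZE constant are hypothetical:
import * as admin from "firebase-admin";

const ROOM_SIZE = 2; // hypothetical fixed room size
const waitingRef = admin.database().ref("/matchmaking/waiting");

// transaction() re-runs the update function until it commits against the
// current server value, so two concurrent invocations cannot both observe
// an empty lobby; the loser is retried and sees the winner's write.
async function assignRoom(playerId: string): Promise<string> {
    let roomId = "";
    await waitingRef.transaction((waiting) => {
        if (waiting === null) {
            // First player of a new group: reserve a room key locally
            // (push() without arguments generates a key, no network call).
            waiting = { roomId: admin.database().ref("/Rooms").push().key, count: 0 };
        }
        roomId = waiting.roomId; // the last run of this callback is the one that commits
        waiting.count += 1;
        // Returning null clears the lobby so the next player starts a new room.
        return waiting.count >= ROOM_SIZE ? null : waiting;
    });
    // Record the assignment where the client is already listening.
    await admin.database().ref("/Players Searching For Room/" + playerId).update({ inRoom: roomId });
    return roomId;
}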
I am building a recommender system where I use Firebase to store and retrieve data about movies and user preferences.
Each movie can have several attributes, and the data looks as follows:
{
    "titanic":
        { "1997": 1, "english": 1, "dicaprio": 1, "romance": 1, "drama": 1 },
    "inception":
        { "2010": 1, "english": 1, "dicaprio": 1, "adventure": 1, "scifi": 1 }
    ...
}
To make the recommendations, my algorithm requires all the data (movies) as input, which is matched against a user profile.
However, in production I need to retrieve over 10,000 movies. While the algorithm can handle this relatively fast, it takes a lot of time to load this data from Firebase.
I retrieve the data as follows:
firebase.database().ref(moviesRef).on('value', function(snapshot) {
    // snapshot.val();
}, function(error) {
    console.log(error);
});
I am therefore wondering if you have any thoughts on how to speed things up? Are there any plugins or techniques known to solve this?
I am aware that denormalization could help split the data up, but the problem is that I really need ALL the movies and ALL the corresponding attributes.
My suggestion would be to use Cloud Functions to handle this.
Solution 1 (Ideally)
If you can calculate suggestions every hour / day / week
You can use a Cloud Functions cron to fire daily / weekly and calculate recommendations per user every week / day. This way you can achieve a result more or less similar to what Spotify does with their weekly playlists / recommendations.
The main advantage of this is that your users wouldn't have to wait for all 10,000 movies to be downloaded. The download would happen in a Cloud Function, say every Sunday night, which would compile a list of 25 recommendations and save it into your user's data node, ready to download when the user accesses their profile.
Your Cloud Functions code would look like this:
var movies, allUsers;

exports.weekly_job = functions.pubsub.topic('weekly-tick').onPublish((event) => {
    getMoviesAndUsers();
});

function getMoviesAndUsers () {
    firebase.database().ref(moviesRef).on('value', function(snapshot) {
        movies = snapshot.val();
        firebase.database().ref(allUsersRef).on('value', function(snapshot) {
            allUsers = snapshot.val();
            createRecommendations();
        });
    });
}

function createRecommendations () {
    // do something magical with movies and allUsers here.
    // then write the recommendations to each user's profile, kind of like:
    userRef.update({"userRecommendations" : {"reco1" : "Her", "reco2" : "Black Mirror"}});
    // etc.
}
Forgive the pseudo-code; I hope it gives an idea though.
Then on your frontend you would only have to fetch the userRecommendations node for each user. This way you shift the bandwidth and computing from the user's device to a Cloud Function. In terms of efficiency, without knowing how you calculate recommendations, I can't make any suggestions.
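For completeness, that frontend read can be as small as this sketch (using the same namespaced API as the code above; uid and gotUserRecos are placeholder names):
// Listen to the precomputed recommendations node written by the weekly job.
firebase.database().ref('users/' + uid + '/userRecommendations')
    .on('value', function(snapshot) {
        gotUserRecos(snapshot.val()); // e.g. { reco1: "Her", reco2: "Black Mirror" }
    });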
Solution 2
If you can't calculate suggestions every hour / day / week, and you have to do it each time the user accesses their recommendations panel
Then you can trigger a Cloud Function every time the user visits their recommendations page. A quick cheat I use for this is to write a value into the user's profile, e.g. {getRecommendations: true}, once on page load, and then in Cloud Functions listen for changes to getRecommendations. As long as you have a structure like this:
userID > getRecommendations : true
And if you have proper security rules so that each user can only write to their own path, this method also gets you the correct userID making the request, so you know which user to calculate recommendations for. A Cloud Function can most likely pull 10,000 records faster than the client and save the user bandwidth, and it finally writes only the recommendations to the user's profile (similar to Solution 1 above). Your setup would look like this:
[Frontend Code]
// on page load
userProfileRef.update({"getRecommendations" : true});
userRecommendationsRef.on('value', function(snapshot) { gotUserRecos(snapshot.val()); });
[Cloud Functions (Backend Code)]
exports.userRequestedRecommendations = functions.database.ref('/users/{uid}/getRecommendations').onWrite(event => {
    const uid = event.params.uid;
    firebase.database().ref(moviesRef).on('value', function(snapshot) {
        movies = snapshot.val();
        firebase.database().ref(userRefFromUID).on('value', function(snapshot) {
            usersMovieTasteInformation = snapshot.val();
            // do something magical with movies and the user's preferences here.
            // then
            return userRecommendationsRef.update({"getRecommendations" : {"reco1" : "Her", "reco2" : "Black Mirror"}});
        });
    });
});
Since your frontend will be listening for changes at userRecommendationsRef, as soon as your cloud function is done, your user will see the results. This might take a few seconds, so consider using a loading indicator.
P.S. 1: I ended up using more pseudo-code than originally intended and removed error handling etc., hoping that this generally gets the point across. If there's anything unclear, comment and I'll be happy to clarify.
P.S. 2: I'm using a very similar flow for a mini internal service I built for one of my clients, and it's been happily operating for longer than a month now.
Firebase NoSQL JSON structure best practice is to "Avoid nesting data", but you said you don't want to change your data. So, given your constraints, you can make REST calls to any particular node (the node of each movie) of Firebase.
Solution 1) You can create a fixed number of threads via a ThreadPoolExecutor. From each worker thread you can make an HTTP (REST) request as below. Based on your device's performance and memory, you can decide how many worker threads to run via the ThreadPoolExecutor. The code snippet could look something like this:
/* creates threads on demand */
ThreadFactory threadFactory = Executors.defaultThreadFactory();
/* Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available */
ExecutorService threadPoolExecutor = Executors.newFixedThreadPool(10, threadFactory); /* you have 10 different worker threads */
for (int i = 0; i < 100; i++) { /* you can load the first 100 movies */
    final int index = i; /* the lambda below needs an effectively final copy of i */
    /* your 10 different worker threads take turns reading the movies */
    threadPoolExecutor.execute(() -> {
        /* OkHttp request */
        /* urlStr can be something like "https://earthquakesenotifications.firebaseio.com/movies?print=pretty" */
        Request request = new Request.Builder().url(urlStr + "/" + index).build();
        /* Note: Firebase, by default, stores an index for every array element.
           Since you are storing all your movies in a movies JSON array,
           it is easiest to read the first movie (0) from the first worker thread,
           the second (1) from the second worker thread, and so on. */
        try {
            /* OkHttpClient is the HTTP client used to execute the request */
            Response response = new OkHttpClient().newCall(request).execute();
            String str = response.body().string();
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
}
threadPoolExecutor.shutdown();
Solution 2) Solution 1 was not based on the listener/observer pattern, but Firebase actually has PUSH technology: whenever a particular node changes in the Firebase NoSQL JSON, every client that has a listener attached to that node of the JSON gets the new data via onDataChange(DataSnapshot dataSnapshot). For this you can attach a listener to the movies node and iterate its children, as below:
DatabaseReference moviesRef = FirebaseDatabase.getInstance().getReference().child("movies");
moviesRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        for (DataSnapshot movie : snapshot.getChildren()) {
            /* show the ith movie in your ListView or RecyclerView item, */
            /* or store each movie in a Movies ArrayList; when everything completes, update the RecyclerView */
        }
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
    }
});
Although you stated that your algorithm needs all the movies and all the attributes, that does not mean it processes them all at once. Any computation unit has its limits, and within your algorithm you probably chunk the data into smaller parts that your computation unit can handle.
Having said that, if you want to speed things up, you can modify your algorithm to parallelize the fetching and processing of the data/movies:
| fetch  | -> | process | -> | fetch  | ...
|chunk(1)|    |chunk(1) |    |chunk(3)|

(in parallel)  | fetch  | -> | process | ...
               |chunk(2)|    |chunk(2) |
With this approach you can save almost the whole processing time (except for the last chunk) if processing really is faster than fetching (though you have not said how "relatively fast" your algorithm runs compared to fetching all the movies).
This "high level" approach to your problem is probably your best chance if fetching the movies is really slow, although it requires more work than simply activating a hypothetical "speed up" button in a library. It is a sound approach when dealing with large chunks of data.
For example, I have the following database structure:
/
+ users
  + 1
    + items
      + -xxx: "hello"
  + 2
    + items
Then:
var usersRef = new Firebase("https://mydb.firebaseio.com/users");
usersRef.on("child_changed", function(snapshot) {
    utils.debug(JSON.stringify(snapshot.exportVal()));
});
If a value, "world", is pushed to "/users/1/items", I may get:
{"items": {"-xxx": "hello", "-yyy": "world"}}
So, how can I tell which one changed?
Do I need to attach on("child_added") to every single ref at "/users/$id/items"?
NOTE: I'm trying to write an admin process in node.js.
The child_changed event only provides information on which immediate child has changed. If a node deeper in a data structure changed, you'll know which immediate child was affected but not the full path to the changed data. This is by design.
If you want granular updates about exactly what changed, you should attach callbacks recursively to all of the elements you care about. That way, when an item changes, you'll know what the item was by which callback is triggered. Firebase is actually optimized for this use case; attaching large numbers of callbacks -- even thousands -- should work fine. Behind the scenes, Firebase aggregates all of the callbacks together and only synchronizes the minimum set of data needed.
So, for your example, if you want to get alerted every time a new item is added for any user, you could do the following:
var usersRef = new Firebase("https://mydb.firebaseio.com/users");
usersRef.on("child_added", function(userSnapshot) {
    userSnapshot.ref().child("items").on("child_added", function(itemSnapshot) {
        utils.debug(itemSnapshot.val());
    });
});
If you are working with a very large number of users (hundreds of thousands or millions), and synchronizing all of the data is impractical, there's another approach. Rather than have your server listen to all of the data directly, you could have it listen to a queue of changes. Then when clients add items to their item lists, they could also add an item to this queue so that the server becomes aware of it.
This is what the client code might look like:
var itemRef = new Firebase("https://mydb.firebaseio.com/users/MYID/items");
var serverEventQueue = new Firebase("https://mydb.firebaseio.com/serverEvents");
itemRef.push(newItem);
serverEventQueue.push(newItem);
You could then have the server listen for child_added on that queue and handle the events when they come in.
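The server side of that queue isn't shown above; a minimal sketch using the same-era API (handleEvent is a hypothetical handler) could be:
var serverEventQueue = new Firebase("https://mydb.firebaseio.com/serverEvents");
serverEventQueue.on("child_added", function(snapshot) {
    handleEvent(snapshot.val()); // hypothetical handler for the new item
    snapshot.ref().remove();     // pop the event off the queue once handled
});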
Andrew Lee gave a nice answer, but I think you should try using Cloud Functions. Something like this should work:
exports.getPath = functions.database.ref('/users/{id}/items/{itemId}')
    .onWrite(event => {
        // Grab the current value of what was written to the Realtime Database.
        const original = event.data.val();
        console.log('user id', event.params.id);
        console.log('item id', event.params.itemId);
    });