Meteor and subscriptions

I don't understand the concept of Meteor.subscribe.
It is supposed to receive records from the server and attach them to a collection with the same name, right?
[subscribe] will queue incoming attributes until you declare the Meteor.Collection on the client with the matching collection name.
So why does the example in the docs use different names? What is the relation between allplayers and players?
Meteor.subscribe("allplayers");
...
// client queues incoming players records until ...
...
Players = new Meteor.Collection("players");

There are two names:
The name of the collection ('players' in this case).
The name of the subscription ('allplayers' in this case).
A subscription is a way to get records into a client-side collection. The name of the collection that the records go into is decided (on the server side) by the use of this.set() in the relevant Meteor.publish function, but usually it is just the name of the collection that is queried on the server side[1].
Many subscriptions can deposit data into the same collection, so certainly the name of the subscription doesn't need to correspond to the name of the collection. In fact, it's probably only a good idea to have them be the same if you are doing a fairly straightforward single subscription to that collection.
[1] If you return a cursor (e.g. return players.find();) in Meteor.publish, it automatically wires up calls to this.set(name) for you, where name is inferred from the server side players collection.
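For example, here is a minimal sketch (reusing the names from the docs example above) of how a publication called "allplayers" feeds the client-side "players" collection:
// Server: the collection is declared by its *collection* name ...
Players = new Meteor.Collection("players");
// ... while the publication is registered under the *subscription* name.
// The returned cursor publishes its records into the "players" collection on the client.
Meteor.publish("allplayers", function () {
  return Players.find();
});
// Client: subscribe by publication name, declare the collection by collection name.
Meteor.subscribe("allplayers");
Players = new Meteor.Collection("players");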

Related

How can I query for all new and updated documents since last query?

I need to query a collection and return all documents that are new or updated since the last query. The collection is partitioned by userId. I am looking for a value that I can use (or create and use) that would help facilitate this query. I considered using _ts:
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value]
The problem with _ts is that it is not granular enough and the query could miss updates made in the same second by another client.
In SQL Server I could accomplish this using an IDENTITY column in another table. Let's call the table version. In a transaction I would create a new row in the version table and do the updates to the other table (including updating its version column with the new value). To query for new and updated rows I would use a query like this:
SELECT * FROM table WHERE userId=[some-user-id] and version > [some-value]
How could I do something like this in Cosmos DB? The Change Feed seems like the right option, but without the ability to query the Change Feed, I'm not sure how I would go about this.
In case it matters, the (web/mobile) clients connect to data in Cosmos DB via a web api. I have control of the entire stack - from client to back-end.
As stated in this link:
Today, you see all operations in the change feed. The functionality
where you can control change feed, for specific operations such as
updates only and not inserts is not yet available. You can add a “soft
marker” on the item for updates and filter based on that when
processing items in the change feed. Currently change feed doesn’t log
deletes. Similar to the previous example, you can add a soft marker on
the items that are being deleted, for example, you can add an
attribute in the item called "deleted" and set it to "true" and set a
TTL on the item, so that it can be automatically deleted. You can read
the change feed for historic items, for example, items that were added
five years ago. If the item is not deleted you can read the change
feed as far as the origin of your container.
So the change feed by itself is not enough for your requirements.
My idea:
Use an Azure Function Cosmos DB Trigger to collect all the operations on your specific Cosmos collection. Follow this document to configure the Azure Function's input as Cosmos DB, then follow this document to configure the output as Azure Queue Storage.
Get the ids of the changed items and send them to queue storage as messages. When you want to query the changed items, just read the messages from the queue, consume them up to a specific point in time, and after that clear the entire queue. No items will be missed.
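A minimal sketch of that idea, assuming a JavaScript Azure Function (the binding names documents and outputQueueItem are placeholders that would be configured in function.json):
// Cosmos DB triggered function: forward the ids of changed items to a storage queue.
module.exports = async function (context, documents) {
    // "documents" holds the batch of changed items delivered by the Cosmos DB trigger.
    if (documents && documents.length > 0) {
        // Assigning an array to a queue output binding enqueues one message per element.
        context.bindings.outputQueueItem = documents.map(doc => doc.id);
    }
};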
With your approach, you can get added/updated documents and save a reference value (the _ts and id fields) somewhere (like a blob):
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value] and id !='guid' order by _ts desc
This is similar to the approach we use to read data from Event Hubs and store checkpointing information (epoch number, sequence number and offset value) in a blob, where at any given time only one function can take a lease on that blob.
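For illustration, a checkpoint-style read along those lines with the @azure/cosmos JavaScript SDK could look roughly like this (database, container and variable names are placeholders):
const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
const container = client.database("mydb").container("orders");

// lastTs and lastId are the checkpoint values saved after the previous query (e.g. in a blob).
async function getChangedDocs(userId, lastTs, lastId) {
  const querySpec = {
    query: "SELECT * FROM c WHERE c.userId = @userId AND c._ts > @lastTs AND c.id != @lastId ORDER BY c._ts DESC",
    parameters: [
      { name: "@userId", value: userId },
      { name: "@lastTs", value: lastTs },
      { name: "@lastId", value: lastId }
    ]
  };
  const { resources } = await container.items.query(querySpec).fetchAll();
  return resources;
}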
If you go with the Change Feed, you can create a listener (Function or Job) to listen for all add/update events on the collection and store those values in another collection; while saving the data you can add an Identity/version field to every document. This approach may increase your Cosmos DB bill.
This is what the transaction consistency levels are for: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Choose strong consistency and your queries will always return the latest write.
Strong: Strong consistency offers a linearizability guarantee. The
reads are guaranteed to return the most recent committed version of an
item. A client never sees an uncommitted or partial write. Users are
always guaranteed to read the latest committed write.

How to store only node-specific off-ledger custom data in Corda?

I created a custom table in Corda using QueryableState, e.g. an IOUStates table.
I can see the custom information getting stored in this kind of table,
but I observed that if Party A and Party B are doing the transaction then this custom information gets stored in both places: the IOUStates table gets created in node A's database as well as node B's, and the custom information is stored on both Party A's and Party B's side.
My question is:
If some transaction is getting processed from Party A's node, then I want to store part of the transaction's data, i.e. the custom data, ONLY on Party A's side, i.e. off-ledger for Party A only.
It should not be shared with Party B.
In a simple case, how do I store node-specific off-ledger custom data only?
Thanks.
There are a number of ways to achieve this:
Don't use Corda at all! If the data is truly off-ledger then why are you using Corda? Instead, store it in a separate database. Of course you can "JOIN" it with on-ledger data if required, as the on-ledger data is stored in a SQL database.
Similar to point one except you can use the jdbcSession() functionality of the ServiceHub to create a custom table in the node's database. This table can easily be accessed from within your flows.
Create a ContractState object that only has one participant: the node that wants to store the data. I call this a "unilateral" state, i.e. a state that only one party ever stores.
Most importantly, if you don't want to share some data with a counter-party, then it should never be disclosed inside a Corda state object or attachment that another party might see. Instead:
inside your flows, you can use the data encapsulated within the shared state object (e.g. the IOU) to derive the private data
alternatively if the data is supplied when the flow begins then store the private data locally using one of the methods above

Update same client collection with different subscription

I have a problem with subscriptions which should fill the same collection on the client with different sets of records.
For example:
I have a Books collection and two different publications:
Meteor.publish('books', () => Books.find({ status: { $ne: 3 } }));
publish "booksForReservation", which returns an array of books (filtered based on the reservation and some other data)
The problem occurs on the client when I come from one route to another. The whole Books collection is used in the main component, and when I only need booksForReservation, the collection on the client is not updated with only that specific set of books.
I have subscribed on the client like:
Meteor.subscribe('booksForReservation', reservationsIds);
let books = Books.find({}).fetch();
but I'm still getting the whole Books collection displayed. When I filter the Books collection on the client side with the same query used on the server, the collection is updated. But then filtering it on the server doesn't have any point.
How can I update same collection with different subscription?
You have to filter the collection on the client side with a query matching the one on the server. You subscribe to two publications, so on the client you will have data from both of them. The point of filtering the collection on the server side is security: by doing it you won't publish any unwanted data to the client. You can read more about it here: https://guide.meteor.com/data-loading.html#specific-queries
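For example (a sketch; the reservationId field and the exact selectors are placeholders for whatever your publications actually use):
// Client: both subscriptions feed the same Books collection in minimongo.
Meteor.subscribe('books');
Meteor.subscribe('booksForReservation', reservationsIds);

// Repeat the relevant filter when reading, otherwise you get the union of both publications.
const allBooks = Books.find({ status: { $ne: 3 } }).fetch();
const reservationBooks = Books.find({ reservationId: { $in: reservationsIds } }).fetch();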

How to entirely skip validation in simple schema and allow incomplete documents to be stored?

I'm creating an order form and have a schema defined for an Order (certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', sessionObject);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId,
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({ sessionId: sessionId });
So I'm not sure how to go about doing this while accomplishing these points.
I want full simpleschema validation on the Orders collection when the user is entering in items on the forms and when the user is intending to submit a full, complete order.
I want to disable simpleschema validation on the Orders collection and still allow storing into the DB so that partial orders can be stored for resumption at a later time.
I can make a field conditionally required using this here but that would mean 50+ fields would be conditionally required just for this scenario and that seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly to enable resuming on another device for a logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you are wanting.
Only write to the Orders collection when the order is complete (and passes validation).
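For instance, a rough sketch of that approach (the IncompleteOrders collection name is just an example):
// A separate, schema-less collection for drafts; only Orders has the strict schema attached.
IncompleteOrders = new Mongo.Collection('incompleteOrders');

IncompleteOrders.insert(newOrder);                    // partial order, no validation
// ...later, once every required field has been filled in:
Orders.insert(completeOrder);                         // validated against the full schema
IncompleteOrders.remove({ sessionId: sessionId });    // drop the draft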
Here's a variation on #JeremyK's answer: add an inProgress key to your order of type [Object]. This object would have no deeper validation. Keep your in progress order data in there until the order is final then copy/move all the relevant data into the permanent keys and remove the inProgress key. This would require that you make all the real keys optional of course. The advantage is that the object would maintain its primary key throughout the life cycle.
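A rough sketch of that variation (field names are illustrative; shown here with a blackbox Object key, which is how SimpleSchema skips deeper validation):
Orders.attachSchema(new SimpleSchema({
  sessionId:  { type: String },
  address:    { type: String, optional: true },   // the "real" keys all become optional
  // draft data lives here until the order is finalised; blackbox skips deep validation
  inProgress: { type: Object, optional: true, blackbox: true }
}));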
I think this particular case has been solved; but just in case, you can skip SimpleSchema validations by accessing the MongoDB native API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with collection2. Collection2 has the ability to attach multiple schemas based on a selector and then validate against the correct schema based on it.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
e.g. you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema whose fields are optional.
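A sketch of what that could look like (field lists are illustrative):
const draftSchema = new SimpleSchema({
  state:     { type: String, allowedValues: ['in-progress', 'finished'] },
  sessionId: { type: String },
  address:   { type: String, optional: true }   // relaxed while the order is in progress
});

const fullSchema = new SimpleSchema({
  state:     { type: String, allowedValues: ['in-progress', 'finished'] },
  sessionId: { type: String },
  address:   { type: String }                    // required once the order is finished
});

Orders.attachSchema(draftSchema, { selector: { state: 'in-progress' } });
Orders.attachSchema(fullSchema,  { selector: { state: 'finished' } });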

Linked collections in Meteor

I am working on a card game platform with a lot (10,000+) of cards dynamically updated with real-world data. Cards are populated/updated once a day.
I have two basic collections at the foundation (asides from users):
1) data - all individual items with different data values for the same data fields/parameters (for example, various car models with their specifications). I update this collection once a day from a JSON API I have on another server for another purpose.
2) cards - "printed" cards with unique IDs, but duplicates are of course possible (so we can have 10 Ford Focus 2010 cards).
The cards collection has a couple of the most important fields from the data collection (model, brand, top performance parameter(s) of the card) to provide efficient user card browsing, and a "dataId" field which links it to the data collection for detailed info.
Cards in the "cards" collection should be inserted ("issued" or "printed") by server-side functions/methods, but in response to client-side events (such as new-game etc.). When a new card is inserted/dispatched, it first gets a unique "admin-owner" with a user _id from the users collection for a one-to-one relationship, which is later updated to create ownership.
So, on the client side, the cards collection is like a user "deck" (all cards where the owner is the user). If I am correct, it should be written on the server side as:
Meteor.publish('cards', function() {
  return Cards.find({"userID": this.userId});
});
This is all quite clear and up to that point Meteor is fantastic as it saves me months of work!
But, I am not sure about:
1) I would like to have a client-side data collection publication to cover the client's detailed card view (by linking cards with data). It should of course have only the data items from the data collection with details for each card in the client card collection ("deck"). I see it as something like:
Meteor.publish('data', function (dataIds /* array with all unique data item ids in the client card collection */) {
  return Data.find({ dataID: { $in: dataIds } });
});
2) I need a server/client method to add/insert new cards from data items ("create 10 new Ford Focus 2010 cards") with an empty/admin user, by executing Meteor.call methods from the client console of an "admin" user, and a server/client method to change the ownership of a random card so that it becomes part of a client's cards collection ("cast random card to user").
Where would I place those methods? How can I access server methods from the client console (if a certain admin user is logged in)?
4) I need a clever way of handling a server publication / client subscription of the data collection so that it contains only the data used by cards in the client cards collection.
Should I use a client-side minimongo query to create an array with all the dataIds needed to cover the local cards collection? I am new to Mongo, so I am not sure how I would write something like SELECT DISTINCT or GROUP BY to get that. Also, I am not sure if that is the best way to handle it, or whether I should do something server side as a publication?
Having a clear idea on 1-4 would get me going and then I guess I would dig my way around (and under :)
1) The publish function you wrote makes perfect sense. Of course, there's a bit of confusion in the term "client-side data collection publication": publications are on the server side, while on the client side you've got subscriptions. Also, while you didn't specify your schema, I suppose you've got a dataID field in the cards collection that joins with _id in the data collection, so your find should say {_id: {$in: dataIds}}.
2) Read this carefully, there's all you need for that. Remember to check user privileges within the server side method. A rule of thumb for security is that you should never trust the client.
3) There's no point 3?
4) I'm not sure how the question here is different from 1. However, you should probably familiarize yourself with this method, which you can use in your subscription to ensure the _ids in the array are unique.
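Putting points 1 and 4 together, a server-side sketch could look like this (the publication name cardData and the dataId field are assumptions based on your description):
Meteor.publish('cardData', function () {
  // Collect the dataIds referenced by this user's cards, de-duplicated with _.uniq.
  var dataIds = Cards.find({ userID: this.userId }, { fields: { dataId: 1 } })
    .map(function (card) { return card.dataId; });
  return Data.find({ _id: { $in: _.uniq(dataIds) } });
});
// Note: this simple version does not re-run automatically when the user's set of cards changes.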
