Would a real-world Meteor application use server-side Methods almost exclusively?

I'm learning Meteor and fundamentally enjoy how fast I can build data-driven applications. However, as I went through the Creating Posts chapter in the Discover Meteor book, I learned about using server-side Methods. Specifically, the primary reason given (and there are a number of very valid reasons to use these) was the timestamp: you wouldn't want to rely on the client date/time, you'd want to use the server date/time.
That makes sense, except that in almost every application I've ever built we store the create/update date/time of a row in a column. Effectively every single create or update to the database records a date/time, which in Meteor now looks like it would require server-side Methods to ensure data integrity.
If I'm understanding correctly, that pretty much eliminates the ease of use and real-time nature of a client-side Collection, because I'll need to use Methods for almost every single update and create to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you just calling a server-side Method that returns the date/time and then using a client-side Collection, or something else?
Thanks!

The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections using allow/deny validators, which check whether the operation is permitted. This usage is actually calling predefined methods on both the server and the client (for a collection named foo you have /foo/insert, for example), which simply check the specified validators before doing the operation. As you become more familiar with Meteor you will probably override these default methods, for the reasons you described (among others).
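For reference, a minimal sketch of those built-in validators, assuming a collection named Foo (allow with insert/update/remove callbacks is the real Meteor API; the rule itself is illustrative):

```typescript
import { Mongo } from 'meteor/mongo';

const Foo = new Mongo.Collection('foo');

// Hooks the predefined /foo/insert method: a client-side Foo.insert()
// succeeds only if the relevant allow callback returns true.
Foo.allow({
  insert(userId, doc) {
    return !!userId; // illustrative rule: only logged-in users may insert
  },
});
```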
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fit your own needs. For example, if you want to insert timestamps on updates, have the client insert its own timestamp in the simulated method. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's clock happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
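A minimal sketch of that pattern in the classic Meteor API (the method and field names are illustrative): the same file is loaded on both client and server, so the client runs the method as a simulation while the server's result is authoritative.

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const Posts = new Mongo.Collection('posts');

// Defined in shared code: runs as a simulation on the client
// and as the authoritative version on the server.
Meteor.methods({
  'posts.insert'(title: string) {
    return Posts.insert({
      title,
      // Client clock during the simulation, server clock on the server;
      // the server's value wins when the real method returns.
      createdAt: new Date(),
    });
  },
});
```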
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a collection instance Foo on the client, Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() in a client-side method will run only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, and then add specific extra features on the server and client respectively.

In short: yes!
Publications exist to send out a 'live', dynamic subset of the database to the client, sending DDP added messages for existing records, followed by a ready message, and then added, changed, and removed messages to keep the client's cache consistent.
Methods exist to cause Mongo updates, directly or indirectly, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edits to collections that are currently being published to at least one client will be sent out via DDP, regardless of the source of the change to Mongo, even an outside process.
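For reference, a minimal publication sketch (TypeScript; the collection and field names are illustrative). Meteor watches the returned cursor and emits the DDP messages described above to every subscribed client:

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  Meteor.publish('recentPosts', function () {
    // Meteor observes this cursor and sends added/changed/removed messages
    // (plus an initial ready) as the underlying documents change.
    return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
  });
}
```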

Related

Does it make sense to use Firestore Realtime as a way to interact with functions?

I have a good old fashioned piece of code that takes some inputs, runs some code and produces outputs for a variety of tasks. This code runs in a function, and it should be triggered by a UI action. It is relevant to store a history of these runs per user, to list them in a specific view of the UI and, for returning users, to put them back where they left off.
Since the Firestore documentation and videos strongly encourage us to default to it for data exchange with the backend, I of course decided to entertain the idea before just moving on to simply implementing a callable.
First, it made sense to me to model the data as follows:
/users
  id
  /taskRuns
    id
    inputs
    outputs
  /history
    id
    date
    ...outputs
and make the data flow as follows:
The UI writes to the inputs field and it listens to the document.
The function is triggered. It runs happily and writes to outputs.
A second function listening to /assets copies the inputs and outputs over to the corresponding /history subcollection.
The UI reacts to the change.
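For concreteness, a hedged sketch of steps 1 and 4 of this flow using the Firestore modular web SDK (the document path follows the model above; the project config, ids and field names are assumptions):

```typescript
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, setDoc, onSnapshot } from 'firebase/firestore';

// Hypothetical config; substitute your project's values.
const app = initializeApp({ projectId: 'your-project-id' });
const db = getFirestore(app);

async function startRun(uid: string, runId: string, inputs: unknown) {
  const runRef = doc(db, 'users', uid, 'taskRuns', runId);

  // Step 1: the UI writes the inputs and listens to the same document.
  await setDoc(runRef, { inputs });
  return onSnapshot(runRef, (snap) => {
    const data = snap.data();
    if (data?.outputs) {
      // Step 4: the UI reacts once the function has written the outputs.
      console.log('outputs ready', data.outputs);
    }
  });
}
```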
At this point, a couple of red flags are raised in my mind:
Since the UI has to both read and write /assets, a malicious user could technically trigger infinite loops here. I'm aware that the same thing is possible with a callable, but it somehow seems even easier in this scenario.
I lose the ability to do timeout and error handling the way one would with a simple API call.
However, I also understand that:
My function needs to write to the database anyway to store the outputs, even if it were returning them à la API.
The logic for who is responsible for what has been somewhat broken down into pieces.
So - Am I looking at it wrong? Is there a world in which this approach would make sense over the callable?
The approach you're describing uses the database as a queue/cache/proxy for your backend functionality. I regularly use it myself with either Realtime Database or Firestore to expose backend functionality, because it:
Works more gracefully when there's intermittent loss of internet connection. Where a direct call to Cloud Functions would have to implement its own retries, the database SDKs already handle this scenario.
Uses the database as a proxy/cache for the result, so that repeated loads of the data don't cause additional calls to my Cloud Functions.
By setting a maximum number of instances on my Cloud Function, I can prevent overloading any legacy infrastructure my code calls, and the database becomes my queue of pending requests.
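As a rough illustration of this queue pattern under the question's data model, using the 1st-gen Cloud Functions API (runTask() stands in for the existing task code and is hypothetical):

```typescript
import * as functions from 'firebase-functions/v1';
import * as admin from 'firebase-admin';

admin.initializeApp();

export const processTaskRun = functions.firestore
  .document('users/{uid}/taskRuns/{runId}')
  .onCreate(async (snap) => {
    const { inputs } = snap.data();

    // runTask() stands in for the existing "good old fashioned piece of code".
    const outputs = await runTask(inputs);

    // Writing the outputs back is what the listening UI reacts to; the history
    // copy could be made here or in a second trigger, as the question describes.
    await snap.ref.update({
      outputs,
      finishedAt: admin.firestore.FieldValue.serverTimestamp(),
    });
  });

// Hypothetical placeholder for the real task implementation.
async function runTask(inputs: unknown): Promise<unknown> {
  return inputs;
}
```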
The pattern is also used in this video from Cloud Next 2019: Serverless in real life: a case study in the travel industry and this talk from Google I/O: Architecting for Data Contention in a Realtime World with Firebase.
That said, many of my team mates only ever use callable functions, so it's definitely a matter of preference and the requirements of your use-case.

How to structure a client–server data model solution?

I need to write a client–server solution. The server will perform scheduled operations and also serve up data from a SQL DB to the client.
The client is yet to be fully defined, but it will make requests to the server, display data for the user, and pass data back for persistence.
The whole solution is dealing with entities (Users, Products, etc. with their associated attributes).
In my head, both the server and the client need to be aware of these entities in order for them to be efficiently manipulated in code rather than having to unpack JSON and duplicate code.
My question is, should I make a class library containing models (classes or structs) representing these entities that is referenced by both the client- and server-side projects?
Otherwise, is there some standard way of building such a solution?
Thus far I have a client, a server (based on ASP.NET 2) and a Class Library containing entity Models along with some data access logic. Both the client and server projects reference the Class Library. One day in and I’m already starting to doubt my approach as being too clumsy.
I will be working with VS2019 using C#.
This isn't really a question well suited to Stack Overflow, which aims to solve specific code/tech problems.
It is possible to use the same model (entity) in both client and server, but I highly recommend separating the client model (view model) from the domain model (entity). The reasons for this are:
Clients rarely need, or should be exposed to, every domain field and relationship. Sending models from server to client involves serialization. This can result in either performance issues or errors as the serializer "touches" properties and wants to lazy-load them, or you add the cost of eager-loading everything; or it results in incomplete models where unloaded relationships are left null (not because there aren't any, they just weren't loaded). Client models should be trimmed down to just the data the client needs to see, formatted in a way it can use. Ultimately, sending full entities ships more data than needed to the client and back; keep the payloads over the wire as small as possible.
Security can be an issue when passing entities from client to server. Your UI may only allow users to change a few values in a very particular way, but the temptation can be to take that entity, attach it to a DbContext and update it (one-line updates). However, an entity sent from the client can very easily be tampered with in the browser, which can result in changes being made that you don't expect or allow (e.g. changing a FK relationship).
At best this can allow stale data overwrites where changes made after that record was sent to the client are overwritten silently when the client gets around to submitting their change. Don't trust data coming from a client, especially under the premise of "saving time". Update requests should validate the data coming in and re-load the Entity to check things like the row version before updating allowed values.
Populating view models can be done using a technique supported in EF called projection. This can either be hand-written using .Select, or you can leverage tools like AutoMapper and its ProjectTo method to easily transform entities and LINQ expressions into simple, dumb, serializable view models. When a view model comes back to the server, you simply load the entity and its associations from the DB by ID, update the allowed values after validation, and call SaveChanges to persist.
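To make the entity/view-model split concrete, here is a language-agnostic sketch written in TypeScript (the types and fields are hypothetical). In EF the server-side mapping would be expressed as a .Select projection or an AutoMapper ProjectTo call rather than a hand-written function:

```typescript
// Full domain entity as stored on the server (hypothetical fields).
interface ProductEntity {
  id: number;
  name: string;
  costPrice: number;   // internal: the client should never see this
  supplierId: number;  // FK: the client must not be able to tamper with this
  rowVersion: string;  // used server-side for stale-data checks
}

// Trimmed view model: only what the UI needs, in the shape it needs.
interface ProductViewModel {
  id: number;
  name: string;
}

// Server-side projection: ship the minimum over the wire.
function toViewModel(p: ProductEntity): ProductViewModel {
  return { id: p.id, name: p.name };
}
```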

How to best architect website when each client has own database and subdomain?

For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach to doing this and will setting the connection string in page_init work? Is using a session variable wise? I've tended not to ever use them except when no other solution was possible.
The problem with that model is that it is really complex and can leave you with errors, especially when we are talking about changes to the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, this will mean updating 100 different databases. When we talk about dealing with downtime, things get even worse.
I would approach that in a slightly different way: abstract your database layer and create one API that calls the database. From the website, you always call the API, passing the domain that you want the data to come from.
You may ask what advantage this gives you. The biggest one you will see is when doing upgrades and maintenance. Having one API per client is a lot easier to manage than having the website talk directly to one database per client. And if you really want to have just one API (I would still recommend having one per client and deploying automatically), you can have a switch in the call: based on a parameter that you pass to the API (it can be in a header, like the subdomain), you choose which database to connect to.
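An illustrative sketch of that per-request switch, written as a small Node/Express middleware for brevity (the tenant names, the x-tenant header and the environment variables are assumptions; the same idea applies to an ASP.NET API):

```typescript
import express from 'express';

// Hypothetical per-tenant connection strings, loaded from configuration.
const connectionStrings: Record<string, string> = {
  acme: process.env.ACME_DB_URL ?? '',
  globex: process.env.GLOBEX_DB_URL ?? '',
};

const app = express();

app.use((req, res, next) => {
  // Prefer an explicit header, fall back to the subdomain the request came in on.
  const tenant = (req.headers['x-tenant'] as string | undefined) ?? req.hostname.split('.')[0];
  const connectionString = connectionStrings[tenant];
  if (!connectionString) {
    res.status(404).send('Unknown tenant');
    return;
  }
  // Downstream data-access code reads the connection string from here.
  res.locals.connectionString = connectionString;
  next();
});
```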
Let me give you a sample scenario and how I would suggest approaching it (this applies whether you expose a database or an API).
Say I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it is an API, you can even test it by calling the API and seeing that the new field is now returned; that is not a problem for your UI, because it is just a field the UI does not use yet. After that, you change the UI to actually use this field and deploy that to production.

How to invoke code within a web app that isn't externally open?

Say, for example, you are caching data within your ASP.NET web app that isn't often updated. You have another process running outside of the app which occasionally updates this data; when it does, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it. This can then be called by your other process. Of course you don't want this process to be externally usable or visible on your web app, so perhaps you could then check that incoming requests to this page are coming from localhost and, if not, throw a 404. Is this acceptable? Could this be spoofed at all (for instance if you used HttpApplication.Request.Url.Host)?
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check if it needs to execute something, I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in future. Instead, share a key (asymmetric or symmetric, it doesn't really matter) between the two, have the PUSH service encrypt a block of data (control data, for example) and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you has done so, and you can perform the required actions. Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time it generated the ciphertext into the data block; the receiver can then be sure that a replay attack hasn't taken place by ensuring the date/time is within an acceptable window (say, a minute).
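As a rough sketch of that scheme (shown in TypeScript/Node for brevity; the same pattern maps onto .NET's crypto APIs, and the key handling and token format here are assumptions):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Placeholder 32-byte key; in practice both services load the same key from configuration.
const key = randomBytes(32);

// PUSH side: encrypt a small control block that includes the current time.
function makeToken(action: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const plaintext = JSON.stringify({ action, issuedAt: Date.now() });
  const body = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]).toString('base64');
}

// Receiving site: decrypt, then reject anything older than a minute to block replays.
function readToken(token: string): { action: string } {
  const raw = Buffer.from(token, 'base64');
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const body = raw.subarray(28);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  const data = JSON.parse(
    Buffer.concat([decipher.update(body), decipher.final()]).toString('utf8'),
  );
  if (Date.now() - data.issuedAt > 60_000) throw new Error('stale request, possible replay');
  return data;
}
```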
Alternatively, consider an external caching mechanism like the Enterprise Library caching block, which would be available to both the web app and the service, or a file to cache the data to.
HTH.

Does the concept of shared sessions exist in ASP.NET?

I am working on a web application (ASP.NET) game that would consist of a single page, and on that page, there would be a game board akin to Monopoly. I am trying to determine what the best architectural approach would be. The main requirements I have identified thus far are:
Up to six users share a single game state object.
The users need to keep (relatively) up to date on the current state of the game, i.e. whose turn it is, what did the active user just roll, how much money does each other user have, etc.
I have thought about keeping the game state in a database, but it seems like overkill to keep updating the database when a game state object (say, in a cache) could be kept up to date. For example, the flow might go like this:
Receive request for data from a user.
Look up data in database. Create object from that data.
Verify the user has permission to perform the request based on the game's state (i.e. make sure it's really their turn or that they have enough money to buy that property).
Update the game object.
Write the game object back to the database.
Repeat for every single request.
Consider that a single server would be serving several concurrent games.
I have thought about using AJAX to make requests to an ASP.NET page.
I have thought about using AJAX requests to a web service using Silverlight.
I have thought about using WCF duplex channels in Silverlight.
I can't figure out what the best approach is. All seem to have their drawbacks. Does anyone out there have experience with this sort of thing and care to share those experiences? Feel free to ask your own questions if I am being too ambiguous! Thanks.
Update: Does anyone have any suggestions for how to implement this connection to the server based on the three options I mention above?
You could use the ASP.NET Cache or Application state to store the game object, since these are shared between users. The cache would probably be the best place, since objects can be removed from it to save memory.
If you store the game object in the cache using a unique key, you can then store the key in each visitor's Session and use it to retrieve the shared game object. If the cache has been cleared, you recreate the object from the database.
While updating a database seems like overkill, it has advantages when it comes time to scale up, as you can have multiple webheads talking to one backend.
A larger concern is how you communicate the game state to the clients. While a full update of the game state from time to time ensures that any changes are caught and all clients remain in synchronization even if they miss a message, gamestate is often quite large.
Consider as well that usually you want gamestate messages to trigger animations or other display updates to portray the action (for example, if a piece moves, it shouldn't just appear at the destination in most cases... it should move across the board).
Because of that, one solution that combines the best of both worlds is to keep a database that collects all of the actions performed in a table, with sequential IDs. When a client requests an update, the server can give it all the actions after the last one it knew about, and the client can "act out" the moves. This means even if a request fails, the client can simply retry the request and none of the actions will be lost.
The server can then maintain an internal view of the gamestate as well, from the same data. It can also reject illegal actions and prevent them from entering the game action table (and thus prevent other clients from being incorrectly updated).
Finally, because the server does have the "one true" gamestate, the clients can periodically check against that (which will allow you to find errors in your client or server code). Because the server database should be considered the primary, you can retransmit the entire gamestate to any client that gets incorrect state, so minor client errors won't (potentially) ruin the experience (except perhaps a pause while the state is downloaded).
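A hedged sketch of the client side of this action-log approach, in TypeScript (the endpoint path and the animate() helper are hypothetical):

```typescript
interface GameAction {
  id: number;       // sequential ID assigned by the server's action table
  type: string;
  payload: unknown;
}

let lastSeenActionId = 0;

// Ask only for actions after the last one we have seen, then act them out in order.
async function pollActions(gameId: string): Promise<void> {
  const res = await fetch(`/api/game/${gameId}/actions?after=${lastSeenActionId}`);
  const actions: GameAction[] = await res.json();
  for (const action of actions) {
    animate(action);                // hypothetical: drive the board animation
    lastSeenActionId = action.id;   // a retried request simply re-fetches from here
  }
}

// Hypothetical placeholder for the display update described above.
function animate(action: GameAction): void {
  console.log('applying', action.type, action.payload);
}
```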
Why don't you just create an application-level object to store your details? See Application State and Global Variables in ASP.NET for details. You can use the session ID to act as a key for the data for each player.
You could also use the Cache to do the same thing with a long timeout. This has the advantage that older data can be flushed from the Cache after a period of time, e.g. 6 hours.
