Structure
This is a question about how best to use Apollo Client 2 in our current infrastructure. We have one GraphQL server (Apollo Server 2) which connects to multiple other endpoints. Mutations are sent via RabbitMQ, and our GraphQL server also listens to RabbitMQ; results are then pushed to the client via subscriptions.
Implementation
We have an Apollo server to which we send mutations; these always return null because they are asynchronous. Results are sent back via a subscription.
I have currently implemented it like this:
Send the mutation.
Create an optimisticResponse.
Update the state optimistically via writeQuery in the update function.
When the real response arrives (which is null), reuse the optimisticResponse in the update method.
Wait for the subscription to come back with the response.
Then refresh the state/component with the actual data.
As you can see, this is not the most ideal approach, and it puts a lot of complexity in the client.
Would love to keep the client as dumb as possible and reduce complexity.
Apollo seems mostly designed for synchronous mutations (which is logical).
What are your thoughts on a better way to implement this?
Mutations can be asynchronous
Mutations are not inherently synchronous in Apollo Client; the client can wait for a mutation result as long as it needs to. You might not want your GraphQL service to keep the HTTP connection open for that duration, and this seems to be the problem you are dealing with here. But responding to mutations with null goes against how GraphQL is designed to work, and that is what creates the complexity that, in my opinion, you don't want to have.
Solve the problem on the implementation level - not API level
My idea to make this work better is the following: follow the GraphQL spec and its dogmatic approach, and let mutations return mutation results. This gives you an API that any developer is familiar working with. Instead, treat the delivery of these results as the actual problem you want to solve. GraphQL does not specify the transport protocol for client-server communication, so if you already have websockets running between server and client, throw away HTTP and operate entirely at the socket level.
Leverage Apollo Client's flexible link system
This is where Apollo Client 2 comes in. Apollo Client 2 lets you write your own network links that handle client-server communication. If you solve the communication at the link level, developers can use the client as they are used to, without any knowledge of the network communication details.
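As a sketch of what such a link has to do - correlate an outgoing operation with a result that arrives later on the socket - here is a minimal plain-Node illustration. All names here (request, sendOverSocket, the fake server) are made up for illustration; a real implementation would wrap this logic in an ApolloLink from the apollo-link package and hand results to the operation's observer:

```javascript
// Pending operations, keyed by a correlation id.
const pending = new Map();
let nextId = 0;

// Stand-in for publishing on a websocket / RabbitMQ channel.
function sendOverSocket(message, transport) {
  transport(message); // in reality: socket.send(JSON.stringify(message))
}

// "Link" entry point: send the operation and return a promise that
// settles only when the matching result comes back on the channel.
function request(operation, transport) {
  const id = ++nextId;
  return new Promise((resolve) => {
    pending.set(id, resolve);
    sendOverSocket({ id, operation }, transport);
  });
}

// Called for every message received on the channel.
function onSocketMessage({ id, data }) {
  const resolve = pending.get(id);
  if (resolve) {
    pending.delete(id);
    resolve(data); // Apollo would hand this to the mutation's observer
  }
}

// Fake server that answers asynchronously, like the RabbitMQ round trip.
const fakeServer = ({ id, operation }) =>
  setTimeout(() => onSocketMessage({ id, data: { echoed: operation } }), 10);

request({ mutation: "createOrder" }, fakeServer)
  .then((result) => console.log(result.echoed.mutation)); // → createOrder
```

With this in place the client simply awaits the mutation result as usual; the fact that it travels over RabbitMQ stays hidden inside the link.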
I hope this helps and that you still have the chance to go in this direction. I know this might require changes on the server side, but it could be totally worth it if your application is frontend-heavy (as most applications are these days).
Related
I'm using Node.js and ws for my WebSocket servers and want to know the best-practice methods for tracking connections and incoming and outgoing messages with Azure Application Insights.
It appears as though this service is really only designed for HTTP requests and responses, so would I be fine if I tracked everything as an event? I'm currently passing the JSON.parse'd connection message values.
What to do here really depends on the semantics of your websocket operations. You will have to track these manually, since the Application Insights SDK can't infer the semantics needed to map them to Request/Dependency/Event/Trace the way it can for HTTP. The method names in the API do indeed make this unclear for non-HTTP, but it becomes clearer if you map the methods to the telemetry schema they generate and what those item types actually represent.
If you consider receiving a socket message to semantically begin an "operation" that triggers dependencies in your code, you should use trackRequest to record it. This will populate the information in the most useful way for taking advantage of the UI in the Azure Portal (e.g. response-time analysis in the Performance blade or failure-rate analysis in the Failures blade). Because this request isn't HTTP, you'll have to bend your data to fit the schema a bit. An example:
client.trackRequest({ name: "WS Event (low cardinality name)", url: "WS Event (high cardinality name)", duration: 309, resultCode: 200, success: true });
In this example, use the name field to indicate that items sharing this name are related and should be grouped in the UI. Use the url field for information that more completely describes the operation (like GET parameters would in HTTP). For example, name might be "SendInstantMessage" and url might be "SendInstantMessage/user:Bob".
In the same way, if you consider sending a socket message to be a request for information from your app that has a meaningful impact on how your "operation" behaves, you should use trackDependency to record it. Much like above, doing this will populate the data in the most useful way for the Portal UI (the Application Map, in this case, would then be able to show you the percentage of failed websocket calls).
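For completeness, a trackDependency call shaped like the trackRequest example above might look as follows. The client here is a stub so the snippet is self-contained, and the endpoint and field values are illustrative; in a real app, client would be appInsights.defaultClient:

```javascript
// Stub standing in for appInsights.defaultClient, so this runs standalone.
const client = {
  trackDependency(telemetry) {
    console.log("tracked dependency:", telemetry.name);
    return telemetry;
  },
};

// Record an outgoing socket message as a dependency, mirroring the
// trackRequest example: low-cardinality name, high-cardinality data.
const telemetry = client.trackDependency({
  target: "wss://chat.example.com",      // assumed socket endpoint
  name: "SendInstantMessage",            // groups related items in the UI
  data: "SendInstantMessage/user:Bob",   // detailed description of the call
  duration: 231,
  resultCode: 0,
  success: true,
  dependencyTypeName: "WebSocket",
});
```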
If you find you're using websockets in a way that doesn't really fit either of these, then tracking an event as you do now would be the correct use of the API.
In the database, some entity is updated by a backend process. We want to show this updated value to the user, not in real time, but as fast as possible on the website.
Problems we are facing with these approaches:
Polling :- As we know, there are better techniques than polling, like SSE and WebSockets.
SSE :- With SSE the connection stays open for a long time (I searched the internet and found that it uses long polling), which might cause problems as the number of users increases.
WebSockets :- Since we need only one-way communication (from server to client), SSE is better than this.
Our Solution
We check the database on every request from the user and update the value. (This is not very good, as it depends on the user's next request.)
Is this a good approach, or is there a better way to do it? Am I missing something about SSE (have I misunderstood something)?
Is it fine to use SignalR instead of all this? (Does it have any long-connection issues or not?)
Thanks.
What you should use is really up to your requirements.
Options:
Your clients need the update information only when they make a request -> go your way.
If you need a solution for different client types (web client, WinForms client, Android client, ...) and you have to support, for example, different browser types, note that not all browsers support all mechanisms. SignalR was designed to automatically choose the right transport mechanism according to what a client supports --> SignalR is an option. (Read more details here: https://www.asp.net/signalr) It also has options to keep your connection alive.
There are also alternatives like https://pusher.com/ (in short, this is just a queue where you can send messages and also subscribe to messages), but such services are only free up to, for example, a certain data volume.
You can use event-based communication: whenever there is a change (event) in the backend/database, the server should send a message to the clients.
Your app should register for the respective events and refresh the UI whenever there is an update.
We used Socket.IO for this use case in our apps, and it worked well.
Here is the website https://socket.io/
I've had the chance to play with this tool for a while now and made a chat application instead of a hello world. My project has two Meteor applications sharing the same Mongo database:
client
operator
When I type a message from the operator console, it sometimes takes as much as 7-8 seconds to appear for the subscribed client. So my question is: how much real-time performance can I actually expect from Meteor? Right now I see better results with other services such as PubNub or Pusher.
Could the delay come from the fact that two applications are subscribed to the same db?
P.S. I need two applications because the client and operator apps are totally different, mostly in design and media libraries (CSS/jQuery plugins, etc.), and this was the only way I found to make the client app much lighter.
If your two apps write to the same database without DDP, they are not going to operate in real time. You should either use one complete app or use DDP to relay messages to the other instance (via Meteor.connect).
This is a bit of an issue at the moment if you want to do the subscription on the server, as there isn't really server-to-server DDP support with subscriptions yet. So you need to use the client to make the subscription:
connection = Meteor.connect("http://YourOtherMeteorInstanceUrl");
connection.subscribe("messages");
Instead of
Meteor.subscribe("messages");
In your client app, of course using the same subscription names as your corresponding publish functions on the other Meteor instance.
Akshat's answer is good, but here's a bit more explanation of why:
When Meteor is running, it adds an observer to the collection, so any changes to data in that collection are immediately reactive. But if you have two applications writing to the same database (and this is how you are synchronizing data), the observer is not in place for the other app's writes, so it's not going to be fully real-time.
However, the server does poll the database regularly for outside changes, hence the 7-8 second delay.
It looks like your applications are designed this way to overcome the limitation Meteor has right now where all client code is delivered to all clients. Fixing this is on the roadmap.
In the meantime, in addition to Akshat's suggestion, I would also recommend using Meteor methods to insert messages. Then, from the client application, use Meteor.call('insertMessage', options, ...) to add messages via DDP, which will keep the application real-time.
You would also want to separate the databases.
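A sketch of what that looks like; the method and collection names here are made up, and the point is only that writes travel over DDP to the server that owns the collection, rather than going straight into the shared Mongo database:

```javascript
// On the Meteor app that owns the collection: define the method.
Meteor.methods({
  insertMessage(options) {
    check(options, Object); // validation via Meteor's check package
    Messages.insert({ ...options, createdAt: new Date() });
  },
});

// In the other app's client: connect to that instance over DDP and
// call the method instead of writing to Mongo directly, so the owning
// server's collection observers fire immediately.
const connection = Meteor.connect("http://YourOtherMeteorInstanceUrl");
connection.call("insertMessage", { text: "hello", room: "lobby" });
```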
JMS or messaging is really good in tying up disparate applications and form the infrastructure of many ESB and SOA architectures.
However, say Application A needs an immediate response from a service on Application B - e.g. it needs the provisioning details of an order, or an immediate confirmation of some update. Is messaging the right solution for that from a performance point of view? Normally the client would connect to a MoM on a queue; a listener (which has to be free) picks up the message and forwards it to the server-side processor, which processes the request and sends the response back to a queue or topic, and the requesting client follows the same process to pick it up. If the message size is big, the MoM will have to factor that in as well.
It makes me wonder whether HTTP is a better way to access such services instead of going via the messaging route. I have seen lots of applications use a MoM like AMQ or TIBCO Rvd for immediate request/response - but is that bad design, or is there some fine-tuning or setting that makes it perform the same as HTTP?
It really depends on your requirements. Typically messaging services will support one or all of the following:
Guaranteed Delivery
Transactional
Persistent (i.e. messages are persisted until delivered, even if the system fails in the interim)
An HTTP connection cannot [easily] implement these attributes, but then again, if you don't need them, then I suppose you could make the case that "simple" HTTP offers a simpler and more lightweight solution. (Emphasis on "simple", because some messaging implementations will operate over HTTP.)
I don't think request/response implemented over messaging is, in and of itself, bad design. I mean, here's the thing: are you implementing both sides of this process? If not, and you already have an established messaging service that will respond to requests, then all other considerations aside, that would seem to be the way to go - and bypassing it to reimplement over HTTP because of some design notion would need some fairly strong reasoning behind it, in my view.
But the inverse is true as well. If you have an HTTP-accessible resource already, and you don't have any super-stringent requirements that might otherwise suggest a more robust messaging solution, I would not force one in where it's not warranted.
If you are commencing totally tabula rasa and you must implement both sides from scratch... then... post another question here with some details! :)
I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-mongodb JS structure. If the user closes the browser or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
A related question: how does Meteor deal with document conflicts? In other words, if two users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations like MongoDB's $inc and $addToSet, this model works pretty well as is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
So building "offline" apps is more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.
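A small illustration of why that matters: replaying commutative modifiers like $inc in any order converges, while replaying $set makes the last writer win. This is a plain simulation of the modifier semantics, not Meteor or MongoDB code:

```javascript
// Apply a (simplified) MongoDB-style modifier to a document.
function apply(doc, modifier) {
  const out = { ...doc };
  for (const [field, value] of Object.entries(modifier.$inc || {})) {
    out[field] = (out[field] || 0) + value; // commutative: order irrelevant
  }
  for (const [field, value] of Object.entries(modifier.$set || {})) {
    out[field] = value; // overwrite: last writer wins
  }
  return out;
}

const base = { votes: 0, title: "draft" };

// Two offline clients queued $inc mutations; whichever reconnect order
// the server sees, both edits survive and the results agree.
const a = apply(apply(base, { $inc: { votes: 1 } }), { $inc: { votes: 2 } });
const b = apply(apply(base, { $inc: { votes: 2 } }), { $inc: { votes: 1 } });
console.log(a.votes, b.votes); // → 3 3

// Two offline clients queued $set mutations; the earlier client's
// edit is silently lost, depending purely on reconnect order.
const c = apply(apply(base, { $set: { title: "from A" } }), { $set: { title: "from B" } });
console.log(c.title); // → from B
```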