What is the best way to show a user multiple responses?
Currently I have a few nodes that jump to another node without a condition, to trigger a new response and show a new line of content to the user. The issue is that it all arrives at the same time.
Is there a way to delay a response?
The Watson Conversation service uses a REST API to communicate with the client application and responds "at once": the client request is processed through the defined dialog node tree, and when this finishes, everything is sent back to the client.
If multiple nodes were hit, multiple responses can be returned to the client. If the client (implementing the chat console) wants to delay the answers, this needs to be implemented on the client side.
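Since the service returns all node outputs in a single response, the staggering has to happen in the client. A minimal sketch, assuming the multiple responses arrive in the response's `output.text` array and that `display` is your own render function (both names here are illustrative):

```javascript
// Pure helper: pair each response line with the delay at which to show it.
function delayPlan(texts, delayMs) {
  return texts.map((text, i) => ({ text, delay: i * delayMs }));
}

// Schedule each line of the Watson response as a separate, delayed UI update.
function showStaggered(texts, delayMs, display) {
  for (const { text, delay } of delayPlan(texts, delayMs)) {
    setTimeout(() => display(text), delay);
  }
}
```

For example, `showStaggered(response.output.text, 1500, appendToChat)` would render each line 1.5 seconds after the previous one.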
We make heavy use of the Firebase JS SDK, and at one point encountered an issue where updates made by the SDK (using updateDoc) did not reach the server (though no network issues were detected). The same happened for an onSnapshot call (no updates were retrieved from the server, although there were updates).
In order to better understand the problem, we wanted to see what happens at the network layer (in terms of the HTTP request flow), especially when we listen for updates or perform updates.
This question was created in order to share our findings with others who are wondering about this, and in case someone from the Firebase team can confirm them.
We performed updateDoc and onSnapshot calls using the Firebase JS SDK for Firestore, and expected to be able to associate the network requests seen in Chrome DevTools with these SDK calls.
Below are the observations regarding the HTTP calls flow.
When we initialize Firestore, a Web Channel is created, through which the traffic between the client and the server will flow.
The Web Channel uses gRPC over HTTP/2 (thanks to Frank van Puffelen for the tip).
The default behavior is as follows:
Commands (e.g. start listen, set data) are sent using POST requests.
The response includes SID, which identifies the command lifecycle. Note that start listen commands (on different docs) will share the same SID.
SSE-based (server-sent events) GET requests with a ±1 minute interval are used to listen for results from the commands (with the appropriate SID).
After the interval ends, a new SSE-based GET request is created, unless the command has finished (e.g. stop listen was called, or the data was set).
If the command finished, and no additional GET request is required, a POST terminate request will be sent with the relevant SID.
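The command lifecycle described above can be summarized with a toy model (the class and method names here are ours, purely illustrative; this is not SDK code): a command opens a session identified by a SID, pending GETs are re-issued every interval until the command finishes, and then a terminate POST is sent.

```javascript
// Toy model of the observed channel lifecycle (illustrative only, not SDK code).
class ChannelSession {
  constructor(sid) {
    this.sid = sid;          // returned by the initial POST
    this.finished = false;   // set when e.g. stop listen was called or data was set
    this.log = [];           // requests the model would issue
  }
  // Called when a pending GET's ±1 minute interval elapses.
  onGetIntervalEnd() {
    if (this.finished) {
      this.log.push(`POST terminate (SID=${this.sid})`); // no further GETs
    } else {
      this.log.push(`GET listen (SID=${this.sid})`);     // re-issue the pending GET
    }
  }
  finish() { this.finished = true; }
}
```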
The listen (onSnapshot) behavior by default is as follows:
Whenever onSnapshot is called, a POST request goes to a LISTEN endpoint with addTarget, which returns immediately with a SID. The addTarget payload includes the document path and a targetId generated by the client for later use. If there is already an open onSnapshot, it will use the same SID.
Following the POST response, a pending GET request is created which receives changes. It will use the same SID.
The pending GET request receives changes for all target (listened) objects (e.g. documents).
The GET request remains open for about a minute (sometimes less, sometimes more).
Every time new data arrives it is processed, but the GET request remains open until its ±1 minute interval has passed.
The GET request also receives the initial values once a target was added (via onSnapshot).
Once the interval has passed, the pending GET request ends (and we can see the response).
Once it ends, it is replaced by a new pending GET request, which behaves the same way.
Whenever we unsubscribe from a listener, there is a POST request to a LISTEN endpoint to removeTarget.
After unsubscribing from all targets, the GET request will still end after the interval completes. However, no new GET request will initiate, and a terminate POST request will be sent.
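The target bookkeeping can be modeled as a small multiplexer: all listeners share one SID, each gets a client-generated targetId, and the channel is terminated only after the last target is removed. Again a toy sketch with invented names, not SDK code (the actual targetId numbering is the SDK's business):

```javascript
// Toy model of LISTEN target multiplexing over one shared SID (illustrative only).
class ListenChannel {
  constructor(sid) {
    this.sid = sid;
    this.nextTargetId = 2;    // client-generated ids; numbering scheme is invented here
    this.targets = new Map(); // targetId -> document path
  }
  // Corresponds to the observed POST with addTarget.
  addTarget(docPath) {
    const id = this.nextTargetId;
    this.nextTargetId += 2;
    this.targets.set(id, docPath);
    return id;
  }
  // Corresponds to the observed POST with removeTarget.
  // Returns true when no targets remain, i.e. the channel should be
  // terminated after the current GET interval completes.
  removeTarget(id) {
    this.targets.delete(id);
    return this.targets.size === 0;
  }
}
```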
Setting data for a doc (setDoc) behaves as follows:
A POST request is sent to the backend in order to start a session and receive a SID.
A pending GET request is created, on which we aim to receive updates once the data has been set in the backend. The pending GET request lives for ±1 minute.
Another POST request is created with the update details (e.g. update field f1 on document d1).
Once the Firestore backend has updated the data, we'll receive the update operation details on the GET request, which will remain pending until the ±1 minute interval completes.
After the interval completes, and if the data was stored in the backend DB, the GET request will end, and a terminate POST request will be sent.
With long-polling (set by experimentalForceLongPolling) we have the following notable changes:
Every time there's a change in an object we listen on, the pending GET request is ended and a new one is created instead.
Instead of ±1 minute GET request, we have a ±30 seconds GET request followed by a ±15 seconds GET request.
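For reference, the long-polling mode is selected at initialization time. With the modular (v9) SDK this looks roughly like the following sketch (the app config is a placeholder):

```javascript
import { initializeApp } from 'firebase/app';
import { initializeFirestore } from 'firebase/firestore';

const app = initializeApp({ /* your Firebase project config */ });
// Force the long-polling transport instead of the default streaming Web Channel.
const db = initializeFirestore(app, { experimentalForceLongPolling: true });
```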
I am using SignalR in my web api to provide real-time functionality to my client apps (mobile and web). Everything works ok but there is something that worries me a bit:
The clients get updated when different things happen in the backend. For example, when one of the clients performs a CRUD operation on a resource, the others are notified via SignalR. But what happens when something goes wrong on the client, let's say the mobile app, and the device's data connection is dropped?
It could happen that another client performs some action on a resource and, when SignalR broadcasts the message, it doesn't reach the disconnected client. That client will then have an old view state.
From what I have read, it seems there is no way to know whether a message has been sent and received correctly by all the clients. So, besides checking the network state and doing a full reload of the resource list when this happens, is there any way to be sure message synchronization has been accomplished correctly on all the clients?
As you've suggested, ASP.NET Core SignalR places the responsibility for managing message buffering, if that's required, on the application.
If an eventually consistent view is an issue (because order of operations is important, for example) and the full reload proves to be an expensive operation, you could manage some persistent queue of message events as far back as it makes sense to do so (until a full reload would be preferable) and take a page from message buses and event sourcing, with an onus on the client in a "dumb broker/smart consumer"-style approach.
It's not an exact match for your case, but credit where credit is due: there's a well thought out example of queuing up SignalR events here: https://stackoverflow.com/a/56984518/13374279 You'd have to adapt it somewhat and give a numerical order to the queued events.
The initial state load and any subsequent events could have an aggregate version attached to them. Any time the client receives an event from SignalR, it can compare its currently known state against what was received and determine whether it has missed events, be it from a disconnection or from a delay in the hub connection starting up after the initial fetch. If the client's version is out of date and within the depth of your queue, you can ask the server to replay the events to that connection and bring the client back in sync.
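The version check the client would run on each received event can be sketched like this (names such as `classifyEvent` and the replay idea are illustrative, not part of the SignalR API):

```javascript
// Decide what to do with an incoming event, given the client's current
// aggregate version and how deep the server-side replay queue is.
// Returns 'ignore' | 'apply' | 'replay' | 'reload'.
function classifyEvent(currentVersion, eventVersion, queueDepth) {
  if (eventVersion <= currentVersion) return 'ignore';     // duplicate or stale event
  if (eventVersion === currentVersion + 1) return 'apply'; // the next expected event
  const missed = eventVersion - currentVersion - 1;        // gap: events lost while offline
  return missed <= queueDepth ? 'replay' : 'reload';       // replay from queue, else full refetch
}
```

On 'replay', the client would invoke a hub method (say, a hypothetical `ReplayFrom(currentVersion + 1)`) and apply the returned events in order; on 'reload', it falls back to the full fetch you already have.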
Some reading into immediate consistency vs eventual consistency may be helpful to come up with a plan. Hope this helps!
As far as I understand, with both web feed formats, RSS and Atom, the client requests content from the server at periodic intervals. Whether or not there is new content, the client checks for updates.
Wouldn't it be more efficient the other way round, letting the server announce new updates? In this scenario it would have to keep track of the clients and of which update each one got, and it would have to send a message to each of them. But still, it looks more efficient if client and server do not communicate when there is no news.
Is there a reason why web-feeds are the way they are?
This model is not inherent to feeds (RSS or Atom) but to HTTP itself, where a client queries a server to get data. At this point, in a pure client -> server model, it is the only way to determine whether there is any new or updated data available.
Now, in the context of servers querying other servers, PubSubHubbub solves this with webhooks. Basically, when polling any given resource, a server can also "subscribe" by providing a webhook, which will be called upon a change or update in the feed. This way the subscriber does not have to poll the feed over and over again.
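A subscription request in that model (since standardized as WebSub) is just a form-encoded POST to the hub. A sketch of building the body, with placeholder topic and callback URLs:

```javascript
// Build the form body for a WebSub (PubSubHubbub) subscribe request.
function buildSubscribeBody(topicUrl, callbackUrl) {
  return new URLSearchParams({
    'hub.mode': 'subscribe',
    'hub.topic': topicUrl,       // the feed the subscriber wants to watch
    'hub.callback': callbackUrl, // the webhook the hub will POST updates to
  }).toString();
}
// The subscriber POSTs this body to the hub advertised by the feed, with
// Content-Type: application/x-www-form-urlencoded.
```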
It is quite easy to update the interface by sending a jQuery AJAX request and updating the page with the new content. But I need something more specific.
I want to send the response to clients without their having requested it, and update their content whenever something new appears on the server, with no need to send an AJAX request every time: when the server has new data, it sends a response to every client.
Is there any way to do this using HTTP or some specific functionality inside the browser?
WebSockets, Comet, HTTP long polling.
This is called server push (you can also find it under the name Comet). Search using these keywords and you will find a bunch of examples, tools and so on. No special protocol is required for it.
Aaah! You are trying to break the principles of the web :) You see, if the web were pure MVC (model-view-controller), the 'server' could actually send messages to the client(s) and ask them to update. The issue is that the server could be load balanced, and the same request could be sent to different servers. Now, if you were to send a message back to the client, you'd have to know who is connected to which server. Let's say the site is quite popular and about 100,000 people connect to it every day. You'd actually have to store the IP of each of them to know where on the internet they are located and be able to "push" them a message.
Caveats:
What if they are no longer browsing your website? Currently there is no way to log a user out automatically when they close their browser. The server needs to check, after a fixed timeout, whether you have logged out (or you send a new nonce with every response to spare the server that check).
What about a system restart/crash etc.? You'd lose all the IPs you were keeping track of and be back to square one: you have people connected to you, but until you receive new requests you can't really "send" them data when they may be expecting it under your model.
Take the example of Facebook's news feed, or the "Most recent" link near the top right: sometimes, while you are browsing your wall, you see the number next to "Most recent" go up, or a new feed item come to the top of your wall. It's the client sending periodic requests to the server to find out what was updated, rather than the other way round.
You see, this keeps things simple and RESTful. You may feel it's inefficient for the client to "poll" the server to pull data and you'd prefer push, but the design of the server stays simple :)
I suggest AJAX polling is the best way to go: you are distributing computation to the clients and keeping things simple (the KISS principle :)
Of course you can get around it, the question is, is it worth it?
Hope this helps :)
RFC 6202 might be a good read.
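A minimal long-polling loop of the kind RFC 6202 describes could look like this; `fetchUpdates` stands in for a request the server holds open until it has data or times out, and all names here are illustrative:

```javascript
// Generic long-poll loop: issue a request, wait for it to complete
// (the server holds it open until data is available or a timeout),
// handle the data, then immediately poll again.
async function longPoll(fetchUpdates, handleData, shouldContinue) {
  while (shouldContinue()) {
    const data = await fetchUpdates(); // resolves with data, or null on timeout
    if (data !== null) handleData(data);
  }
}
```

In a browser, `fetchUpdates` might be something like `() => fetch('/updates?since=' + cursor).then(r => r.json())`, with the server deferring its reply until there is news.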
I'm using Mate's RemoteObjectInvoker to call methods in my FluorineFX-based API. However, all requests seem to be sent to the server sequentially. That is, if I dispatch a group of messages at the same time, the second one isn't sent until the first returns. Is there any way to change this behavior? I don't want my app to be unresponsive while a long request is processing.
This thread will help you understand what happens (it talks about BlazeDS/LiveCycle, but I assume Fluorine uses the same approach). In a few words, what happens is:
a) The Flash Player groups all your calls into one HTTP POST.
b) The server (BlazeDS, Fluorine, etc.) receives the request and executes the methods serially, one after another.
Solutions:
a) Use one HTTP POST per method, instead of one HTTP POST containing all the AMF messages. For that you can use HTTPChannel instead of AMFChannel (internally it uses flash.net.URLLoader instead of flash.net.NetConnection). You will be limited to the maximum number of parallel connections allowed by your browser.
b) Keep a single HTTP POST but implement a clever solution on the server (it will cost you a lot of development time). Basically you can write your own parallel processor and use message consumers/publishers in order to send the results of your methods to the client.
c) There is a workaround similar to a) at https://bugs.adobe.com/jira/browse/BLZ-184 - create your remote object by hand and append a random id at the end of the endpoint.