Is the Meteor poll time configurable in a server app?
I need to delay polling for massive updates to the database.
How can I configure this interval to balance massive updates?
Using
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
Does this guarantee that the data is downloaded only once across the app's lifetime and app restarts, even if the user has a good connection?
N.B.: The official docs aren't clear (at least to me) on this point.
By enabling persistence, any data that the Firebase Realtime Database client would sync while online persists to disk and is available offline, even when the user or operating system restarts the app. This means your app works as it would online by using the local data stored in the cache. Listener callbacks will continue to fire for local updates.
The sole goal of enabling persistence is to ensure that the app continues to work even when the user starts it without a connection to the Firebase servers.
The client does send a tree of hash values of its restored local state to the server when it connects, which the server then uses to send only the modified segments back. But there is no guarantee on how much data this sends or saves.
If you want to learn more about what Firebase actually does under the hood, I highly recommend enabling debug logging and studying its output on logcat.
For more on the topic, see these questions on Firebase's synchronization strategy.
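As a concrete starting point, here is a minimal sketch (Android Java, assuming the standard Firebase Realtime Database SDK) that enables disk persistence, keeps one subtree synced, and turns on the debug logging mentioned above; the "messages" path is just a placeholder:

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Logger;

public class FirebaseInit {
    public static void configure() {
        FirebaseDatabase database = FirebaseDatabase.getInstance();

        // Must run before any other database call,
        // e.g. in Application.onCreate().
        database.setPersistenceEnabled(true);

        // Print the sync protocol to logcat so you can see
        // what the client actually transfers on reconnect.
        database.setLogLevel(Logger.Level.DEBUG);

        // Optionally keep a subtree fresh even with no active
        // listeners; "messages" is a placeholder path.
        DatabaseReference ref = database.getReference("messages");
        ref.keepSynced(true);
    }
}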
I'm getting the following message from Firebase:
runTransactionBlock: usage detected while persistence is enabled. Please be aware that transactions will not be persisted across app restarts.
So what exactly happens after the app restarts? Do the updates in my local database get overwritten due to a sync event from the main database? Something else?
Transactions are not persisted to disk. So when your app is restarted, none of your pending transactions will be sent to the server.
After regaining connectivity, your local cache will contain the data from the server.
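To make the warning concrete, here is a sketch of the kind of update that would be lost on restart (Android Java; the "counter" path is hypothetical). A plain setValue() would be queued to disk and retried, but the transaction below is not:

import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.MutableData;
import com.google.firebase.database.Transaction;

public class CounterTransaction {
    public static void increment() {
        DatabaseReference counterRef =
                FirebaseDatabase.getInstance().getReference("counter");

        counterRef.runTransaction(new Transaction.Handler() {
            @NonNull
            @Override
            public Transaction.Result doTransaction(@NonNull MutableData currentData) {
                Long value = currentData.getValue(Long.class);
                currentData.setValue(value == null ? 1 : value + 1);
                return Transaction.success(currentData);
            }

            @Override
            public void onComplete(@Nullable DatabaseError error, boolean committed,
                                   @Nullable DataSnapshot snapshot) {
                // If the app restarts before the commit reaches the
                // server, this transaction is dropped: it is held in
                // memory only, never in the on-disk queue.
            }
        });
    }
}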
I have an ASP.NET website with a number of long-running (5 minutes to 2 hours) user-initiated tasks. I want each user to be able to see the progress of their own jobs, and to be able to close their browser and return at a later time.
The current plan is to store each job in the database when it is started and publish a message to a RabbitMQ queue, which a Windows service will receive and use to start processing the job.
However, I'm not sure of the best way to pass the progress information back to the web server from the service. I see two options:
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
I'm leaning towards the second option, as I don't really want to add more overhead to the database with regular polling and progress writes. However, there are lots of warnings about consuming from RabbitMQ inside a web application. Since I'm not sending vital messages (it doesn't matter if progress messages aren't processed), I'm wondering if this matters? It's not that (famous last words) difficult to restart the RabbitMQ consumer whenever the web app is restarted.
Does that option sound reasonable? Any better choices out there?
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
The correct answer is C) All Of The Above!
A database is not an integration layer for applications.
RabbitMQ is not meant for end-user consumption of messages.
But when you combine RabbitMQ with a database, you get beautiful things...
Have your background service send progress updates through RabbitMQ. The web server listens for these updates and writes the new status to the database. Use websockets (SignalR) to push the progress update to the user immediately, while the database still holds the current status in case the user does a full refresh of the page or comes back later.
I wrote about this basic setup in a blog post on using RabbitMQ to do user notifications.
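To illustrate the service side of that pipeline, here is a sketch using the RabbitMQ Java client (the same pattern carries over to the official .NET client); the queue name "job.progress" and the JSON payload are assumptions for the example, not anything prescribed by RabbitMQ:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class ProgressPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Non-durable queue: progress messages are disposable;
            // losing one only means the UI skips a tick.
            channel.queueDeclare("job.progress", false, false, false, null);

            // Hypothetical payload: job id plus percent complete.
            String message = "{\"jobId\": 42, \"percent\": 75}";
            channel.basicPublish("", "job.progress", null,
                    message.getBytes(StandardCharsets.UTF_8));
        }
    }
}

The web-server consumer would read from the same queue, write the status to the database, and then push it out over SignalR.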
I'd like to better understand how the subscription model works. Let's say I have some global subscriptions, i.e. they are loaded when the client starts. Assuming it's the client's first connection to the Meteor server, all required data will be populated in minimongo and kept in sync with the server for the duration of the session.
But what happens when the client closes the app and reconnects at a later stage:
Is the local store kept indefinitely on the client?
If the above is true, then when the user reconnects, would the data be synced to handle any differences between the local and server databases?
Minimongo is an in-memory JavaScript datastore. Because it is not persisted to disk, none of the data will be available after the client browser/tab is closed. When a client reconnects, minimongo will be empty and all active subscription data will be synced as if for the first time.
I have a workflow with a Delay activity that causes it to persist; after the delay expires, a notification is sent. The workflow is exposed via Workflow Services.
This works perfectly except in scenarios where the server restarts, or is brought down for maintenance for a day or two, and the timers have already expired. In that case, the notification is not sent until the first request related to the particular workflow arrives at the WCF endpoints.
I should mention that the application pool is already set to alwaysRunning.
Is there anything else that has to be added for IIS/AppFabric to check for pending timers that should have already fired?
I'm using Workflow Foundation 4.5.
The issue was caused by AppFabric not being configured properly.
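For anyone who lands here with the same symptom: the settings to check are the SQL workflow instance store and the AppFabric Workflow Management Service, which together reload instances whose timers expired while the host was down. Below is a sketch of the relevant web.config fragment; the connection string name and intervals are assumptions for illustration. runnableInstancesDetectionPeriod controls how quickly the store notices expired timers:

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Assumed connection string name; point it at the
             AppFabric persistence database. -->
        <sqlWorkflowInstanceStore
            connectionStringName="WorkflowPersistence"
            instanceCompletionAction="DeleteAll"
            instanceLockedExceptionAction="BasicRetry"
            hostLockRenewalPeriod="00:00:30"
            runnableInstancesDetectionPeriod="00:00:05" />
        <!-- Unload idle instances immediately so expired timers
             are resumed from the store rather than from memory. -->
        <workflowIdle timeToUnload="00:00:00" timeToPersist="00:00:00" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

The Workflow Management Service also needs to be running on the machine so that runnable instances are picked up without waiting for an incoming WCF request.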