I have a long-running workflow that is persisted to a database using SqlWorkflowInstanceStore. The problem is that if I don't use WorkflowApplication.Load before terminating a workflow instance, nothing is deleted from the persistence database, even though according to MSDN, WorkflowApplication.Load will "Load the specified workflow instance into memory from an instance store."
Why is the workflow behaving this way?
You can delete the workflow records from the instance store yourself. In fact Windows Server AppFabric has a menu option to do exactly that.
You terminate a workflow instance object that you have in memory. WorkflowApplication.Load() doesn't interfere with that; it is what makes it possible. Without loading the instance from the store, you don't have an instance in memory to call Terminate on (unless you just started it, of course).
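For illustration, here is a minimal sketch of that sequence. The helper name and parameters are hypothetical, and it assumes the store's default InstanceCompletionAction of DeleteAll, which is what removes the rows once the instance completes or is terminated:

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;

static class WorkflowAdmin
{
    // Hypothetical helper: load the persisted instance first, then terminate it.
    public static void TerminatePersistedInstance(Activity workflowDefinition,
                                                  Guid instanceId,
                                                  string connectionString)
    {
        var app = new WorkflowApplication(workflowDefinition)
        {
            InstanceStore = new SqlWorkflowInstanceStore(connectionString)
        };

        // Bring the persisted instance into memory...
        app.Load(instanceId);

        // ...so there is something to terminate. With the store's default
        // InstanceCompletionAction (DeleteAll), completing or terminating the
        // instance also removes its rows from the persistence database.
        app.Terminate("Terminated by administrator");
    }
}
```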
My current application, developed in Unity, uses Firebase Realtime Database with database persistence enabled. This works great for offline use of the application (such as in areas with no signal).
However, if a new user runs the application for the first time without an internet connection, the application freezes. I am guessing it's because the database needs to be pulled down for the first time in order for persistence to work.
I am aware of threads such as: 'Detect if Firebase connection is lost/regained' that talk about handling database disconnection from Firebase.
However, is there any way I can check whether it is the user's first time using the application (e.g. via the presence of the persisted database)? I can then inform them that they must go online for 'first-time setup'.
In addition to #frank-van-puffelen's answer, I do not believe that Firebase RTDB should itself cause your game to lock up until a network connection occurs. If your game is playable on first launch without a network connection (i.e. your logic itself doesn't require some initial state from the network), you may want to check the following issues:
Make sure you can handle null. If your game logic is in a Coroutine, Unity may decide to silently stop it rather than fully failing out.
If you're interacting with the database via Transactions, generally assume that it will run twice (once against your local cache then again when the cache is synced with the server if the value is different). This means that the first time you perform a change via a transaction, you'll likely have a null previous state.
If you can, prefer to listen to ValueChanged over GetValueAsync (see the sketch after this list). You'll always get this callback on your main Unity thread, you'll always get the callback once on registration with the data in your local cache, and the data will be periodically updated as the server updates it. Further, as #frank-van-puffelen's answer elsewhere explains, if you're using GetValueAsync you may not get the data you expect (including a null if the user is offline). If your game is frozen because it's waiting on ContinueWithOnMainThread (always prefer this to ContinueWith in Unity unless you have a reason not to) or an await statement, switching to ValueChanged may work around this as well (though I don't think it should come to that).
Double check your object lifetimes. There are a ton of reasons that an application may freeze, but when dealing with asynchronous logic definitely make sure you're aware of the differences between Unity's GameObject lifecycle and C#'s typical object lifecycle (see this post and my own on interacting with asynchronous logic with Unity and Firebase). If an object's OnDestroy is invoked before await, ContinueWith[OnMainThread], or ValueChanged fires, you're in danger of running into null references in your own code. This can happen when a scene changes, on the frame after Destroy is called, or immediately following a DestroyImmediate.
Finally, many Firebase functions have an async and a synchronous variant (e.g. CheckDependencies and CheckDependenciesAsync). I don't think there are any to call out for Realtime Database proper, but if you use the non-async variant of a function (or if you spinlock on the task completing, including forgetting to yield in a coroutine), the game will definitely freeze for a bit. Remember that any cloud product is I/O bound by nature and will typically run slower than your game's update loop (although Firebase does its best to be as fast as possible).
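To make the ValueChanged suggestion concrete, here is a minimal sketch. It assumes Firebase has already been initialized, and the "players/demo-user/score" path is made up for illustration:

```csharp
using Firebase.Database;
using UnityEngine;

public class ScoreListener : MonoBehaviour
{
    private DatabaseReference _scoreRef;

    void Start()
    {
        _scoreRef = FirebaseDatabase.DefaultInstance.GetReference("players/demo-user/score");
        // Fires once with the locally cached value, then again whenever the server updates it.
        _scoreRef.ValueChanged += HandleScoreChanged;
    }

    void HandleScoreChanged(object sender, ValueChangedEventArgs args)
    {
        if (args.DatabaseError != null)
        {
            Debug.LogError(args.DatabaseError.Message);
            return;
        }

        // On a first offline launch the cache is empty, so be ready for null.
        object value = args.Snapshot != null ? args.Snapshot.Value : null;
        Debug.Log(value == null ? "No score yet (possibly offline with an empty cache)" : "Score: " + value);
    }

    void OnDestroy()
    {
        // Unsubscribe so the callback can't touch a destroyed GameObject.
        if (_scoreRef != null)
            _scoreRef.ValueChanged -= HandleScoreChanged;
    }
}
```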
I hope this helps!
--Patrick
There is nothing in the Firebase Database API to detect whether its offline cache was populated.
But you can detect when you make a connection to the database, for example by listening to the .info/connected node. Then, when that value is first set to true, you can set a flag in local storage, for example in PlayerPrefs.
With this code in place, you can then detect whether the flag is set in PlayerPrefs, and if not, show a message to the user that they need a network connection for you to download the initial data.
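A minimal Unity sketch of that idea, assuming Firebase is already initialized (the "HasConnectedOnce" key name is just an example):

```csharp
using Firebase.Database;
using UnityEngine;

public class FirstRunCheck : MonoBehaviour
{
    void Start()
    {
        if (PlayerPrefs.GetInt("HasConnectedOnce", 0) == 0)
        {
            // Never been online before: tell the user they need a connection
            // for first-time setup (replace with your own UI).
            Debug.Log("First run: please go online for initial setup.");
        }

        FirebaseDatabase.DefaultInstance
            .GetReference(".info/connected")
            .ValueChanged += (sender, args) =>
        {
            if (args.DatabaseError == null &&
                args.Snapshot != null &&
                args.Snapshot.Value is bool connected &&
                connected)
            {
                // Remember that we have been online at least once.
                PlayerPrefs.SetInt("HasConnectedOnce", 1);
                PlayerPrefs.Save();
            }
        };
    }
}
```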
If the OS destroys my app's process and there was a Realm instance still open, but no transactions executing, is there a chance that this will cause problems when my app starts back up again? If not, why not just open Realm instances in the application's custom Application class's onCreate method, store global references to them, and then just let the OS close them if/when it ends your app's process?
There is nothing inherently wrong with that approach on the UI thread, since it is a Looper thread that will auto-update the Realm, but remember that you need a Realm instance for each thread you want to work on.
Realm is an MVCC database, which means it can keep multiple versions of the data alive at the same time. This means that if you keep a Realm instance open on a non-Looper thread, Realm has to keep track of all changes between the oldest and newest version, which can inflate the file size.
In general we recommend controlling the life cycle as described in the links below. That will prevent any issues.
https://realm.io/docs/java/latest/#closing-realm-instances
https://realm.io/docs/java/latest/#controlling-the-lifecycle-of-realm-instances
I know there have been a bunch of questions and answers on this one already, but I couldn't find any conclusive answer that says what I want to do is OK.
Basically I want to have a singleton SQLiteOpenHelper that gets instantiated in MyApplication.onCreate() (I extend Application). That way, my code only ever has to call the singleton's getWritableDatabase(), and I don't have to worry about managing whether the db is open or closed, or about tearing down and building up connections.
I gather that the connection will remain open until I call the singleton's close(), which I plan never to do, so the connection will remain open as long as the application is running. When the application is killed, the whole process gets killed, so the db connection gets killed with it, along with any resources tied to the db. Then, the next time the application is run, I instantiate the singleton again and go on with my business.
Is this a bad idea? Can I run into issues with not closing the db connection even if the process is killed? The reason I would like to do this is because a lot of my code is not strongly coupled to Activity lifecycles, and it would be a real headache to manage the db lifetime based on it.
The ASP.NET runtime is meant for short work loads that can be run in parallel. I need to be able to schedule periodic events and background tasks that may or may not run for much longer periods.
Given the above I have the following problems to deal with:
The AppDomain can shut down due to changes (Web.config, bin, App_Code, etc.)
IIS recycles the AppPool on a regular basis (daily)
IIS itself might restart, or for that matter the server might crash
I'm not convinced that running this code inside ASP.NET is the wrong thing to do, because it would allow for a simpler programming model. But doing so would require an external service to periodically make requests to the app so that the application is kept running, and all background tasks would have to be programmed with the utmost care. They would have to be able to pause and resume their work in the event of an unexpected error.
My current line of thinking goes something like this:
If all jobs are registered in the database, it should be possible to use the database as a bookkeeping mechanism. In the case of an error, the database would contain all state necessary to resume the operation at the next opportunity given.
I'd really appreciate some feedback/advice on this matter. I've been considering running a Windows service and using some RPC solution as well, but it doesn't have the same appeal to me; instead I'd have a lot of deployment issues and would be synchronizing tasks and code across several applications. Given my business needs, this is less than optimal.
This is a shot in the dark since I don't know what database you use, but I'd recommend you consider dialog timers and activation. Assuming that most of the jobs have to do some data manipulation (and it's likely that all of them do only data manipulation), leveraging activation and timers gives an extremely reliable job-scheduling solution, entirely embedded in the database (no need for an external process/service, no dependencies outside the database bounds like msdb), and one that ensures scheduled jobs can survive restarts, failover events, and even disaster-recovery restores. Simply put, once a job is scheduled it will run even if the database is restored a week later on a different machine.
Have a look at Asynchronous procedure execution for a related example.
And if this is too radical, at least have a look at Using Tables as Queues since storing the scheduled items in the database often falls under the 'pending queue' case.
I recommend that you have a look at Quartz.Net. It is open source and it will give you some ideas.
Using the database as a state-keeping mechanism is a completely valid idea. How complex it becomes depends on how far you want to take it. In many cases you will end up pairing your database logic with a Windows service to achieve the desired result.
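As a rough sketch of that bookkeeping idea, a common pattern is to claim one pending job at a time with an atomic UPDATE so two workers can never grab the same row. The Jobs table schema used here (Id, Status, Payload) is hypothetical:

```csharp
using System.Data.SqlClient;

static class JobQueue
{
    // Hypothetical schema: Jobs(Id INT, Status NVARCHAR(20), Payload NVARCHAR(MAX), ...)
    public static (int Id, string Payload)? ClaimNextJob(string connectionString)
    {
        const string sql = @"
            UPDATE TOP (1) Jobs WITH (ROWLOCK, READPAST, UPDLOCK)
            SET    Status = 'Running'
            OUTPUT inserted.Id, inserted.Payload
            WHERE  Status = 'Pending';";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return null;   // nothing pending right now

                return (reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}
```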
FWIW, it is typically not good practice to manually use the thread pool inside an ASP.NET application, though (contrary to what you may read) it actually works quite nicely, apart from the huge caveat that you can't guarantee the work will complete.
So if you needed a background thread that examined the state of some object every 30 seconds, and you didn't care whether it fired every 30 seconds, or 29 seconds, or 2 minutes (such as during a long app pool recycle), an ASP.NET-spawned thread is a quick and very dirty solution.
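For reference, that quick-and-dirty variant often looks something like the sketch below, using a System.Threading.Timer from Global.asax. It is only a sketch; the timer silently disappears on every app pool recycle or app domain unload:

```csharp
using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    // Keep a static reference so the timer isn't garbage collected.
    private static Timer _backgroundTimer;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Check some state roughly every 30 seconds. No guarantees about timing
        // or about the work ever finishing.
        _backgroundTimer = new Timer(_ => CheckSomeState(), null,
                                     TimeSpan.FromSeconds(30),
                                     TimeSpan.FromSeconds(30));
    }

    protected void Application_End(object sender, EventArgs e)
    {
        if (_backgroundTimer != null)
            _backgroundTimer.Dispose();
    }

    private static void CheckSomeState()
    {
        // Placeholder for the actual periodic work.
    }
}
```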
Asynchronously fired callbacks (such as on the ASP.Net Cache object) can also perform a sort of "behind the scenes" role.
I have faced similar challenges and ultimately opted for a Windows service that uses a combination of building blocks for maximum flexibility. Namely, I use:
1) WCF with implementation-specific types OR
2) Types that are meant to transport and manage objects that wrap a job OR
3) Completely generic, serializable objects contained in a custom wrapper. Since they are just a binary payload, this allows any object to be passed to the service. Once in the service, the wrapper defines what should happen to the object (e.g. invoke a method, gather a result, and optionally make that result available for return).
Ultimately, the web site is responsible for querying the service about its state. This querying can be as simple as polling or can use asynchronous callbacks with WCF (though I believe this also uses some sort of polling behind the scenes).
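As a sketch of what that service surface might look like (all names here are invented; the real contracts depend on your job model):

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract the web site uses to hand work to the Windows service
// and later poll for its state.
[ServiceContract]
public interface IJobService
{
    [OperationContract]
    Guid Enqueue(JobRequest request);   // returns a ticket to poll with

    [OperationContract]
    JobState GetState(Guid jobId);
}

[DataContract]
public class JobRequest
{
    [DataMember]
    public string JobType { get; set; }

    [DataMember]
    public byte[] Payload { get; set; } // opaque serialized job data (option 3 above)
}

public enum JobState { Pending, Running, Completed, Failed }
```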
I'll tell you what I have done.
I created a class called Atzenta that has a timer (a 1-2 second trigger).
I also created a table in my temporary database that keeps the jobs. The table holds the job ID, other parameters, the priority, the job status, and messages.
I can add or delete a job through this class. When there is no action to be done, the timer is stopped; when I add a job, the timer starts again. (The timer runs on its own thread, so it can do work in parallel.) I use System.Timers for this, and not other timers.
The jobs can have different priorities.
Now let's say that I place a job in this table using the Atzenta class. The next time the timer fires, it queries this table, finds the first available job, and simply runs it. No other jobs run until this one has ended.
All synchronization and flags are handled through the table. In the table I have a status flag for every job that shows whether it is |waiting to run|requested to run|running|paused|finished|killed|.
All jobs are already-known functions or classes (e.g. the creation of statistics).
For stopping and starting, I use Global.asax and Application_Start/Application_End to start and pause the object that keeps the tasks. For example, when I'm running a job and Application_End fires, either I wait for it to finish and then stop the app, or I stop the action, note that in the table, and start again on Application_Start.
So I call Atzenta.RunTheJob(Jobs.StatisticUpdate, ProductID); this adds the job to the table and starts the timer, and when the timer fires the job runs and the statistics for the given product ID are updated.
I use a table in a database to synchronize the many pools that run the same web app, and in fact it works that way. With a common table, synchronizing the jobs is easy and you avoid two pools running the same job at the same time.
In my back office I have a simple table view to see the status of all jobs.
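A heavily stripped-down sketch of the timer part of that approach, with the table access only hinted at in comments:

```csharp
using System.Timers;

public class Atzenta
{
    private readonly Timer _timer = new Timer(2000);   // the 1-2 second trigger

    public Atzenta()
    {
        _timer.Elapsed += (sender, e) => RunNextJob();
    }

    public void AddJob(string jobType, int parameter)
    {
        // Insert a row flagged "wait to run" into the jobs table (not shown),
        // then make sure the timer is running again.
        _timer.Start();
    }

    private void RunNextJob()
    {
        _timer.Stop();   // no overlapping runs
        // 1. Query the jobs table for the first job flagged "wait to run", ordered by priority.
        // 2. Flag it "run", execute the known function for its job type, then flag it "finish".
        // 3. If any jobs remain, call _timer.Start() again.
    }
}
```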
Let's say that you are using a shared hosting plan and your application stores lots of objects in the application state.
If they start taking up too much memory, does this mean that the server will just remove them?
If not what will happen then? What happens when the server has no memory left? Can you still store objects into the application or session state?
I am asking this because I am planning to develop a big site that will rely on the application state, and it will be crucial that the objects stored there don't get destroyed.
What I am afraid of is that at a certain point I might have too many objects in the application state and they might get removed to free up memory.
There are three different thresholds:
The total size of your app exceeds the maximum process size on your machine (really only applicable with an x86 OS). In that case, you'll start getting out of memory errors at first, generally followed very quickly by a process crash.
Your process, along with everything else running on the machine, no longer fits in physical memory. In that case, the machine will start to page, generally resulting in extremely poor performance.
Your process exceeds the memory limit imposed by IIS on itself, via IIS Manager. In that case, the process will be killed and restarted, as with a regular AppPool recycle.
With the Application object, entries are not automatically removed if you approach any of the above thresholds. With the Cache object, they can be removed, depending on the priority you assign.
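For example, the difference shows up when you store an item. Application entries stay until the app domain goes away; Cache entries carry a priority and can be evicted under memory pressure:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class StateExamples
{
    public static void Store(HttpContext context, string key, object value)
    {
        // Application: never scavenged; lives until the app domain is torn down.
        context.Application[key] = value;

        // Cache: may be scavenged under memory pressure, lowest priority first.
        context.Cache.Insert(
            key,
            value,
            null,                         // no CacheDependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20),     // sliding expiration
            CacheItemPriority.Normal,
            null);                        // no removal callback
    }
}
```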
As others have said, over-using the Application object isn't generally a good idea, because it's not scalable. If you were ever to add a second load-balanced server, keeping the info in sync from one server to another becomes very challenging, among other things.
What happens when any application takes up too much memory on a computer?
It causes the server to run everything really slowly. Even the other sites that share the computer.
It's not a good idea to store that much in application state. Use your config file and/or the database.
It sounds like you have a memory leak: the process keeps leaking memory until it crashes with an out-of-memory condition and is then automatically restarted by the server.
1.5GB is about the maximum amount of memory a 32 bit process can allocate before running out of address space.
Some things to look for:
Do you do your own caching? When are items removed from the cache?
Is there somewhere that data is added to a collection every once in a while but never removed?
Do you call Dispose on every object that implements IDisposable? (See the sketch after this list.)
Do you access any non-managed code at all (COM objects or using DllImport), or allocate non-managed memory (using the Marshal class, for example)? Anything allocated there is never freed by the garbage collector; you have to free it yourself.
Do you use 3rd-party libraries or any code from 3rd parties? It can have any of the problems in the list too.
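To make the Dispose and unmanaged-memory items concrete, here is a small illustrative sketch:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

static class LeakChecklistExamples
{
    static void UseDisposable()
    {
        // "using" guarantees Dispose is called, even if an exception is thrown.
        using (var stream = new MemoryStream())
        {
            stream.WriteByte(42);
        } // stream.Dispose() runs here
    }

    static void UseUnmanagedMemory()
    {
        // Memory from Marshal.AllocHGlobal is invisible to the garbage collector;
        // it must be freed explicitly.
        IntPtr buffer = Marshal.AllocHGlobal(1024);
        try
        {
            // ... work with the buffer ...
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```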
If you use the Cache object instead of the Application object, you can minimize problems of running out of memory. If the memory utilization of the ASP.Net worker process approaches the point at which the process will be bounced automatically (the recycle limit), the memory in Cache will be scavenged. Items that haven't been used for a while are removed first, potentially preventing the process from recycling. If the data is stored in Application, ASP.Net can do nothing to prevent the process from recycling, and all app state will be lost.
However, you do need to have a way of repopulating the Cache object. You could do that by persisting the cached data in a database, as others have proposed.
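A common shape for that repopulation logic, sketched with placeholder names (LoadCatalogFromDatabase stands in for your real data access):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class CatalogCache
{
    public static object GetCatalog(HttpContext context)
    {
        object catalog = context.Cache["catalog"];
        if (catalog == null)
        {
            // Cache miss: first hit, scavenged under memory pressure, or process recycled.
            catalog = LoadCatalogFromDatabase();
            context.Cache.Insert("catalog", catalog, null,
                                 DateTime.UtcNow.AddHours(1),   // absolute expiration
                                 Cache.NoSlidingExpiration);
        }
        return catalog;
    }

    private static object LoadCatalogFromDatabase()
    {
        // Placeholder for the real database query.
        return new object();
    }
}
```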
Here's a short article with a good code example for handling Cache.
And here's a video of how to use Cache.
Anything stored in application state should be refreshable, and its current state needs to be saved to files or a database. If nothing else happens, IIS restarts worker processes at least once a day, so nothing in application state will be there forever.
If you do run out of memory, you'll probably get an out-of-memory exception. You can also monitor memory usage, but in a shared hosting environment that may not be enough information to avoid problems. And you may get the worker process recycled as an "involuntary" fix.
When you say that it's crucial that objects stored in application state don't get destroyed, it sounds like you're setting yourself up for trouble.
I think you should use session state instead of application state, and store the session in a SQL Server database. That way, once a user's session ends, its memory is released.
If you want a more specific answer, please provide more information about your application.