I have a project that uses Windows Workflow, and I have a SQL Server 2008 database containing the Workflow tables.
I want to implement an "edit record" feature in the project, so I need to go back to a previous activity unconditionally and update the workflow instance in the database.
How can I undo the previous step in Windows Workflow Foundation?
Not really. There is a bit of a hack that might be useful, though. When you use a workflow instance store and tell the workflow to abandon its state when an unhandled exception occurs, the last saved state in the instance store is left unchanged. So you can reload the workflow from that state and continue as if the next activity never executed. The problem is in controlling when the workflow persists. You get to control that for the most part, but there are some circumstances in which a workflow will always persist.
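For illustration, here is a minimal sketch of that approach (MyWorkflow and the connection string are placeholders for your workflow definition and persistence database). Returning UnhandledExceptionAction.Abort from OnUnhandledException abandons the in-memory instance while leaving the last persisted state in the store untouched, so the instance can be reloaded from there:

using System;
using System.Activities;
using System.Activities.DurableInstancing;

var store = new SqlWorkflowInstanceStore("<persistence db connection string>");

var app = new WorkflowApplication(new MyWorkflow()) { InstanceStore = store };
// Abort abandons the in-memory instance; the last persisted state stays intact.
app.OnUnhandledException = e => UnhandledExceptionAction.Abort;
app.Run();
Guid instanceId = app.Id; // keep this so the instance can be reloaded later

// Later: reload the last persisted state and continue as if the
// failing activity never executed.
var reloaded = new WorkflowApplication(new MyWorkflow()) { InstanceStore = store };
reloaded.Load(instanceId);
reloaded.Run();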
I am developing a UWP application using Visual Studio 2017, version 15.9.6.
I want to use a local Windows SQLite database. I want to run an SQL script named mySql.txt when the user installs the application for the first time. I don't want to run it every time the user runs the app, as it contains insert statements that would cause duplicate rows to be inserted. So I want to run that script only once, preferably at installation time.
How can I do that? I am very new to UWP and .NET, so please guide me step by step if possible.
You can make sure the initialization/seeding is done only once for the app. For that you can utilize ApplicationData.Current.LocalSettings.
These allow you to store simple key-value data that is bound to your app. Once the user uninstalls the app, this data is removed as well. That fits your scenario exactly.
Suppose your database initialization code is in the method InitializeDb(). You could use the following to make sure the initialization is done only once:
if (!ApplicationData.Current.LocalSettings.Values.ContainsKey("DbInitialized"))
{
    InitializeDb();
    ApplicationData.Current.LocalSettings.Values["DbInitialized"] = true;
}
This code first checks whether we have initialized the database previously and, if not, performs the initialization and stores a flag in the app settings so that the initialization is skipped the next time.
You can run this code during app initialization, for example in the OnLaunched method, or when the database service is first required.
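For completeness, a minimal sketch of what InitializeDb might look like, assuming the script ships in the app package as mySql.txt and the Microsoft.Data.Sqlite package is referenced (the database file name app.db is a placeholder):

using System.IO;
using System.Threading.Tasks;
using Microsoft.Data.Sqlite;
using Windows.ApplicationModel;
using Windows.Storage;

private static async Task InitializeDb()
{
    // Read the seed script that was packaged with the app.
    StorageFile scriptFile =
        await Package.Current.InstalledLocation.GetFileAsync("mySql.txt");
    string script = await FileIO.ReadTextAsync(scriptFile);

    // Create/open the database in the app's local folder and run the script once.
    string dbPath = Path.Combine(ApplicationData.Current.LocalFolder.Path, "app.db");
    using (var connection = new SqliteConnection($"Data Source={dbPath}"))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = script; // the CREATE TABLE / INSERT statements
            command.ExecuteNonQuery();
        }
    }
}

Since this version is asynchronous, you would await it in the check above (and mark the calling method, such as OnLaunched, as async).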
This is of course the simplest implementation, so you can (and should) add some exception handling so that the initialization can be retried if it fails. You may also want to handle app updates and database updates. For that you can use ApplicationData.Current.Version, which lets you track the version of your application data; it can double as a database version, so you can perform the appropriate migrations between versions.
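As an illustration of that versioning idea, here is a hedged sketch; CurrentDbVersion and RunMigration are hypothetical names, while ApplicationData.Current.Version and SetVersionAsync are the actual APIs:

const uint CurrentDbVersion = 2; // bump this whenever the schema changes

if (ApplicationData.Current.Version < CurrentDbVersion)
{
    await ApplicationData.Current.SetVersionAsync(CurrentDbVersion, request =>
    {
        // Apply every migration between the stored and the desired version.
        for (uint v = request.CurrentVersion + 1; v <= request.DesiredVersion; v++)
        {
            RunMigration(v); // hypothetical helper that runs the script for version v
        }
    });
}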
Finally, for even better user convenience, there is also a way to perform these migration steps during the app update itself. See this article for more info.
This question is like asking, "Which came first, the chicken or the egg?"
Let's imagine we have some source code written using Symfony or Yii. It has migration code that handles database changes.
Now we have some commits that update our code (for example, new classes) and some database changes (altering old columns or adding new tables).
When we are developing on localhost or updating our dev servers, it's fine to take the time to stop services and update the server. But if we try to do that on a production server, we break everything for a while, and that is not an option.
Why does this happen? When we pull (git/mercurial), our code is updated but the database is NOT, so when the code executes it throws database exceptions. To fix that we have to run the framework's built-in migrations, which means the server stays broken until the migrations have been run.
The code and the migrations should be updated "at the same time".
What is the best practice to handle it?
ADDED:
A solution like "run the pull, then run the migrations, in one call" is not an option in a high-load project, because under high load even a one-second gap can break some requests or entries.
We cannot stop the server either.
Pulling off a zero downtime deployment can be a bit tricky and there are many ways to achieve this.
As for the database, it is recommended to make changes in a backwards-compatible fashion. For example, adding a nullable column or a new table will not affect your existing code base and can be done safely. If you want to add a new non-nullable column, you would do it in 3 steps:
1. Add the new column as nullable
2. Populate it with data to make sure there are no null values
3. Make the column NOT NULL
You will need a new deployment for steps 1 and 3 at the very least. When modifying a column it's pretty much the same: you create a new column, transfer the data over, release the code that uses the new column (optionally with the old column as a fallback), and then remove the old column (plus the fallback code) in a third deployment.
This way you make sure that your database changes will not cause downtime in your existing application. It takes great care and obviously requires a good deployment pipeline that allows for fast releases; if it takes hours to get a release out, this method will not be fun.
You could copy the database (or even the whole system), do a migration and then switch to that instance, but in most applications this is not feasible because it will make it a pain to keep both instances in sync between deployments. I cannot recommend investing too much time in that, but I might be biased from my experience.
When it comes to switching the current version of your code to a newer one, you have multiple options. Fancy cloud-based solutions like Kubernetes make this kind of easy: you create a second cluster with your new version and then slowly route traffic from the old cluster to the new one. If you have a single server, it is quite common to deploy a new release to a separate folder, do all the management tasks like warming caches, and then, when the release is ready to be used, switch a symlink to point at the newest release. Both methods require meticulous planning and tweaking if you really want them to be zero downtime. All kinds of things can cause issues, from a shared cache being accidentally cleared to sessions not being transferred over correctly to the new release. Whenever something that is stored in a session changes, you have to take a similar approach as with the database: slowly move the state over to the new format while running code that still has a fallback for the old data; otherwise you might get errors when reading the session, causing 500 pages for your customers.
The key to deploying with as few outages and glitches as possible is good monitoring of the systems and the application, so you can see where things go wrong during a deployment and make it more stable over time.
You can create a backup server whose content mirrors your current server, then add some error detection.
If an error is detected on your primary server, update your DNS record to divert traffic to your secondary server.
Once the primary is back up and running, traffic moves back to it, and you then sync the changes from the secondary.
These are called failover servers.
I need to run 4 background jobs for cleaning temp files and processing some files. I have chosen Quartz.NET for the job.
I have an ASP.NET website which accepts file uploads that will be processed by the Quartz jobs at night.
First I thought about making a console application for the Quartz jobs, keeping the website and the jobs totally decoupled.
But then I saw that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config. So a question came to my mind:
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
Furthermore, I want the website to show a special page (like "We are processing the files...") when the Quartz jobs start running.
What I care most about is performance: I don't want the website to be affected by the Quartz jobs, nor the jobs' performance affected by the website.
So, what should I do? Have you done something like this, and can you give me any advice?
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
If you want to have to manually trigger them each night, sure. But a console application run by the host system's task scheduler seems like a more automated solution. A web application is more of a request/response system; it's not really suited for periodic or long-running actions. Scheduling some sort of background operation on the host, such as a scheduled console application or a Windows service, would serve that purpose better.
Note that if it truly needs to be unattended and run even when there's nobody logged in to the server console, a Windows service may be a more suitable approach than a console application.
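If you go the console route, a minimal sketch of hosting Quartz.NET in a console application might look like this (assuming the Quartz.NET 2.x synchronous API; CleanupJob and the cron expression are placeholders):

using System;
using Quartz;
using Quartz.Impl;

// The job body: clean temp files / process the uploaded files.
public class CleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // ... do the nightly work here
    }
}

class Program
{
    static void Main()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("cleanupJob")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0 2 * * ?") // every night at 02:00
            .Build();

        scheduler.ScheduleJob(job, trigger);

        Console.ReadLine(); // keep the process alive; a Windows service would manage this itself
    }
}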
I saw that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config
Console applications have App.config files, which serve the same purpose. You can use that.
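For example, after copying the relevant settings from web.config into App.config, the console application can read them like this ("MyDb" and "UploadPath" are hypothetical key names):

using System.Configuration; // requires a reference to System.Configuration

string connectionString =
    ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
string uploadPath = ConfigurationManager.AppSettings["UploadPath"];
// ... hand these to the Quartz jobs at startup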
I want the website to show a special page when the Quartz jobs start running
You definitely want to keep the two decoupled. But you may be able to accomplish this easily enough. Maybe have some sort of status flag in the database which indicates whether any particular record is "currently being processed". The website can simply look for any records with that flag when a page loads and display the message.
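A hedged sketch of that check, assuming an Uploads table whose Status column the job sets to 'Processing' while it runs (table and column names are hypothetical):

using System.Data.SqlClient;

bool isProcessing;
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT COUNT(*) FROM Uploads WHERE Status = 'Processing'", connection))
{
    connection.Open();
    isProcessing = (int)command.ExecuteScalar() > 0;
}

if (isProcessing)
{
    // render the "We are processing the files..." page
}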
There are likely a couple of different ways to synchronize status here; it doesn't really matter which one you choose. What does matter is that the systems remain decoupled and that any status which is statically persisted is handled somewhat carefully, so that an errant process cannot leave an incorrect status behind. (For example, a background task sets a status of "processing" and then fails in some way. The website would forever indicate that it's processing.)
I've got a state machine workflow implemented using Workflow Foundation 4.1. I'm using the SQL Server persistence store and the WorkflowApplication class for loading and running workflows.
I'm making changes to the state machine workflow model, and I am finding that existing instances break very easily. I've written code that can replay a workflow back into the correct state (basically a migration), and that works fine; however, I also need to be able to clear out the old workflow instance.
The main issue is that if the workflow is invalid, I can't even load it, so I can't terminate or cancel it either.
Is there a way to use the workflow API to remove a workflow without loading it (i.e., some command on the SqlPersistenceStore), or do I have to clean the database manually?
The SqlWorkflowInstanceStore doesn't allow you to do this directly; you will need to go into the database and delete the records there. If you are using AppFabric, there is actually a command to delete a workflow instance without loading it first, for exactly this purpose. It is exposed as a PowerShell command, so you should be able to invoke it from code as well.
I am working on a Windows Workflow Foundation 4.0 project that has native and service host activities in it. I am using SQL Server 2008 R2 as my backend.
I have several activities in the workflow, most of which should place a bookmark in the database so that the next activity can be resumed from there. When the first activity places a bookmark in the database, the second activity resumes it quite well, but when the second activity tries to place a bookmark in the database, the bookmark is not updated from the previous bookmark, and no exception is raised in the code.
I am new to Workflow Foundation; please advise if I am doing something wrong.
Thanks
The best way to know what is going on is to capture the tracking data. Just create a class which derives from TrackingParticipant and add it to the Extensions collection.
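For example, a minimal participant that writes every tracking record to the console (this is a sketch; in practice you might log to a file or a logging framework):

using System;
using System.Activities.Tracking;

public class ConsoleTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // BookmarkResumptionRecord and ActivityStateRecord entries will show
        // exactly when bookmarks are created and resumed.
        Console.WriteLine(record);
    }
}

Register it on the WorkflowApplication before running the workflow:

var app = new WorkflowApplication(new MyWorkflow()); // MyWorkflow is a placeholder
app.Extensions.Add(new ConsoleTrackingParticipant());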