I recently ran into an issue where, according to my log table, a workflow with a given correlation id is still running, but in my AppFabric persistence store that same correlation id shows as stopped.
I'm thinking of "restarting" or "rehydrating" that "corrupted" workflow programmatically, but I don't know how. Any tips on how to do this? I'm quite new to this technology.
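The closest I've gotten is something like the sketch below, pieced together from the WorkflowApplication documentation. MyWorkflow and the connection string are placeholders for my actual workflow definition and AppFabric persistence database; I don't know if this is the right approach.

    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;

    public class Rehydrator
    {
        public void Rehydrate(Guid instanceId)
        {
            var store = new SqlWorkflowInstanceStore(
                @"Server=.;Database=AppFabricPersistence;Integrated Security=True");

            // The definition must match the one the instance was persisted with.
            // MyWorkflow is a placeholder for my real root activity.
            var wfApp = new WorkflowApplication(new MyWorkflow());
            wfApp.InstanceStore = store;

            wfApp.Completed = e => Console.WriteLine("Completed: " + e.CompletionState);
            wfApp.Aborted = e => Console.WriteLine("Aborted: " + e.Reason);

            // Load the persisted instance by its instance id and let it run again.
            wfApp.Load(instanceId);
            wfApp.Run();
        }
    }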
Thanks!
I have recently published my MVC 5 application to a Windows server using IIS. I might be overthinking this, because with Code First the application should populate the database with the new entries from my data model. However, before I make changes to my data model in my development environment, I wanted to ask here to make sure I really don't mess anything up.
After launching my application, my client now wants me to make some changes that involve adding a data model as well as new controllers. Is it safe for me to add these changes, and once I re-publish the application, will the production database get updated with the Code First additions I've made?
I'm confused about how my production database is going to recognize the new data and tables that are being added once I make this update.
Or do I simply attach my newly changed database from my development machine to my production environment? All the data on my development system is needed and usable on the production side as well.
I hope I've asked this clearly enough that someone can help me out.
I plan to make weekly or bi-weekly changes to the web app.
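For reference, my current understanding is that this is what Code First Migrations is for, rather than an initializer that drops and recreates the database. A sketch of what I have in mind, assuming EF6 (MyDbContext and the other names are placeholders, not my real code):

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    public class MyDbContext : DbContext
    {
        // DbSet<Customer> Customers { get; set; } etc.
    }

    public sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
    {
        public Configuration()
        {
            // Explicit migrations (Add-Migration in the Package Manager Console)
            // seem safer in production than automatic ones.
            AutomaticMigrationsEnabled = false;
        }
    }

    public static class DatabaseBootstrapper
    {
        // Called once from Application_Start in Global.asax; applies any pending
        // migrations on startup, updating the schema without dropping data.
        public static void Init()
        {
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());
        }
    }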
Thank you
This question is about Windows Workflow Foundation. I am working with Windows Workflow Foundation 4.5 hosted in an MVC web application.
I have a requirement where users would like to "approve" (resume bookmarks on) multiple request instances in a "batch", with an all-or-nothing approach. That is, they would like the workflow instances in the batch to roll back to their previous state if any of the instances encounters an exception while being resumed.
Currently I am just naively looping through each instance in the batch and calling ResumeBookmark. It works, until it doesn't: we are seeing issues where one of the instances in the batch fails due to some exception.
I have searched everywhere and could not find any useful information, so I was hoping someone here might have some insight.
Would there be a way to wrap the entire loop in a TransactionScope?
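Something like the sketch below is what I have in mind, assuming WorkflowApplication instances backed by SqlWorkflowInstanceStore. CreateWorkflowApplication is a hypothetical factory standing in for my own hosting code, and I'm not sure the ambient transaction actually flows into bookmark resumption and persistence, which is really what I'm asking.

    using System;
    using System.Activities;
    using System.Collections.Generic;
    using System.Transactions;

    public class BatchApprover
    {
        public void ApproveBatch(IEnumerable<Guid> instanceIds)
        {
            using (var scope = new TransactionScope(TransactionScopeOption.Required,
                                                    TimeSpan.FromMinutes(5)))
            {
                foreach (var id in instanceIds)
                {
                    // Hypothetical factory wiring up the activity definition
                    // and the SqlWorkflowInstanceStore.
                    WorkflowApplication wfApp = CreateWorkflowApplication();
                    wfApp.Load(id);

                    var result = wfApp.ResumeBookmark("Approve", null);
                    if (result != BookmarkResumptionResult.Success)
                        throw new InvalidOperationException("Resume failed for " + id);
                }

                // Commit only if every instance in the batch resumed cleanly.
                scope.Complete();
            }
        }

        private WorkflowApplication CreateWorkflowApplication()
        {
            throw new NotImplementedException(); // placeholder for my hosting code
        }
    }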
We are migrating from WebSphere BPM 8.0.1.3 to 8.5.6. Our plan is to move application by application rather than doing a big-bang migration. The idea is that when we move an application to the new server, we create an IHS rule that redirects the related URLs to the new server. That would mean keeping some applications running on the old server while others have already been migrated to the new one.
Is this possible to achieve? Or is there an alternative to rewriting IHS rules, such as making use of the web server plug-ins?
Unfortunately, I don't think your current approach is going to work well for you. I've outlined the various options for IBM BPM upgrades here. I see several major problems with your approach, all of which come down to the fact that many of the URLs used by IBM BPM contain no details about the context of the request.
The first issue is that IBM uses a single portal for a given user's work; that is, all their tasks across the various BPM solutions appear in the same web UI. This URL does not differ across the Process Applications in the install, which means all your users get their task list by going to a URL like https://mybpmserver/portal. There is no way to know which process app a given user will be working with in this context, so you don't know whom to redirect to the new server.
The second issue is that users can work with multiple process apps, so even if the context were known in the above URL, you would run into complications whenever a user works in two different process apps unless both have been migrated.
The third issue is that BPM is essentially a state engine. IBM does not supply a way to "migrate" that state from an old install to a new install on a per Process App (PA) basis; you have to migrate all or none. Assuming "none", because it sounds like you want to follow the drain approach in my article, you then have the problem that the URLs for executing a task do not carry the PA context, so you won't know which server to direct which task to. That is, for a given PA you will have tasks on both the old server (those that existed before the upgrade) and the new server (those created after the upgrade), but the URLs for these tasks will look essentially the same.
There are additional issues, but the main one comes down to properly understanding how the runtime BPM engines work. Some of the above issues could be mitigated if you had a separate UI layer for presenting tasks to the users (my company makes a portal replacement that can do this), which would allow it to understand the context of each task. But if you have that, then you can get the correct behavior in that code and not have to worry about WAS configuration settings.
You could use the plugin-cfg.xml merge tool on the two generated plugin-cfg.xml files. That way the WAS plug-in would always know which server has which applications.
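If I remember correctly, the merge tool ships with the plug-in install; the exact path and script name can vary by version and platform, so treat this invocation as an approximation:

    cd /opt/IBM/WebSphere/Plugins/bin
    ./pluginCfgMerge.sh plugin-cfg-old.xml plugin-cfg-new.xml plugin-cfg.xml

The merged plugin-cfg.xml then goes wherever your IHS configuration points for the plug-in configuration file.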
A few questions:
1. Is a SQL Server installation needed to run Windows Workflow Foundation?
2. If yes, where does the workflow runtime store (persist) data for a long-running process?
3. I see that some files are created in .\Windows\Microsoft.NET\Framework\v4.0\SQL\en\ (SQL scripts to create persistence points).
4. Do we need to run these scripts manually to create the database?
5. Can we persist data on the file system instead, so that we don't need to install SQL Server?
Thanks
I see one supposed answer already, but "read the docs" answers really aren't good answers, especially in an area as poorly documented as WF. So, in case anyone else stumbles across this thread:
(1) SQL Server doesn't have to be installed just to use workflows, but if you want persistence for long-running workflows, (2) SQL Server is your easiest way to get it.
(3) and (4) You can let AppFabric do most of the heavy lifting in setting up the persistence database for you.
(5) You could persist on the file system instead of SQL Server, but IMHO, from what I've seen in my short time with WF and persistence so far, you'd be crazy to try to implement your own persistence provider like that, especially when just starting out. You can use SQL Server Express to get started. Why reinvent the wheel?
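To make that concrete, hooking the SQL persistence store into a self-hosted workflow looks roughly like the sketch below. The connection string is a placeholder, and the database is one that AppFabric (or the scripts the question mentions) has already initialized.

    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;
    using System.Activities.Statements;

    class Host
    {
        static void Main()
        {
            // Stand-in root activity; in real code this is your long-running workflow.
            Activity workflow = new Sequence();

            var wfApp = new WorkflowApplication(workflow);
            wfApp.InstanceStore = new SqlWorkflowInstanceStore(
                @"Server=.\SQLEXPRESS;Database=WFPersistence;Integrated Security=True");

            // Persist and unload whenever the workflow goes idle (e.g. waiting on
            // a bookmark), so a long-running process survives host restarts.
            wfApp.PersistableIdle = e => PersistableIdleAction.Unload;
            wfApp.Run();
        }
    }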
I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow; I mean 5-10 second load times. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company, but the problems continue.
I guess what I'm looking for is a good tool to help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. Then I can look back at the logs and see exactly what my website was doing at the time of the extreme slowness.
Is there a .NET logging system that will let me make calls into it from code while simultaneously tracking performance? What would you recommend?
Every intermittent performance problem I have ever had turned out to be caused by something in the database.
You need to check out my blog post Unexplained-SQL-Server-Timeouts-and-Intermittent-Blocking. No, it's not caused by a heavy INSERT or UPDATE process like you would expect.
I would run a database trace for half a day. Yes, the trace has to be done on production, because the problem usually doesn't happen in a low-use environment.
Your trace log rows will have a Duration column showing how long each event took. Look at the long-running events, and at the ones just before them that might be holding the long-running ones up. Once you find the pattern, you can figure out what is actually going on.
IIS 7.0 has built-in ETW tracing capability. ETW is the fastest and lowest-overhead logging available; it is built into the kernel. With respect to IIS, it can log every call. The best part of ETW is that you can include everything in the system and get a holistic picture of the application and the server. For example, you can include the registry, the file system, and context switching, and get call stacks along with durations.
Here is a basic overview of ETW in general and as it applies to IIS, and I also have a few posts on ETW.
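If you also want your own application events in the same ETW stream, .NET 4.5's EventSource makes that straightforward. A minimal sketch (the source and event names here are made up):

    using System.Diagnostics.Tracing;

    [EventSource(Name = "MyCompany-MyWebApp")]
    public sealed class WebAppEventSource : EventSource
    {
        public static readonly WebAppEventSource Log = new WebAppEventSource();

        [Event(1)]
        public void RequestStarted(string url) { WriteEvent(1, url); }

        [Event(2)]
        public void RequestCompleted(string url, long elapsedMs)
        {
            WriteEvent(2, url, elapsedMs);
        }
    }

    // Usage from a controller or module:
    //   WebAppEventSource.Log.RequestStarted(Request.RawUrl);
    // Collect it alongside the kernel providers with PerfView, logman, or xperf.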
I would start by monitoring the ASP.NET-related performance counters. You could even add your own counters to your application if you wanted. Also, compare the number of w3wp.exe processes running at the time of the slowdown versus normal, and look at their memory usage. This sounds to me like a memory leak that eventually results in the termination of the worker process, which of course fixes the problem, temporarily.
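For example, a quick sketch of reading a couple of the relevant counters from code (the counter and instance names can vary by machine and .NET version):

    using System;
    using System.Diagnostics;

    class CounterSnapshot
    {
        static void Main()
        {
            var requestsQueued = new PerformanceCounter("ASP.NET", "Requests Queued");
            var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp");

            // NextValue() on a freshly created counter can return 0; sample twice
            // with a short delay if you need an accurate first reading.
            Console.WriteLine("Requests queued: {0}", requestsQueued.NextValue());
            Console.WriteLine("w3wp private bytes: {0:N0}", privateBytes.NextValue());
        }
    }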
You don't provide specifics about what your application is doing in terms of the resources (database, networking, files) it uses. In addition to the steps from the other posters, I would take a look at anything that is happening out-of-process, such as:
Database connections
Files opened
Network shares accessed
...basically anything that is not happening in the ASP.NET process.
I would start off with the following list of items:
Turn on ASP.NET Health Monitoring to start getting some metrics and numbers.
Check the memory utilization on the server. Does recycling IIS periodically make the issue go away (a memory leak?)?
ELMAH is a good tool for looking at exceptions. Also, go through any logs your application might be generating.
Then I would look for anti-virus software running at particular times, long-running processes that might be slowing the machine down, a database backup schedule, and so on.
HTH.
Of course, ultimately I just want to solve the intermittent slowness issues (and I'm not yet sure I have). But in my initial question I was asking for a rather specific logger.
I never did find one, so I wrote my own stopwatch threshold logging. It's not quite as detailed as my initial idea, but it has the benefit of being very easy to apply globally to a web application.
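Roughly, it's a global MVC action filter along these lines (a simplified sketch; the threshold and the Trace target are placeholders for whatever logging you use):

    using System.Diagnostics;
    using System.Web.Mvc;

    public class StopwatchThresholdAttribute : ActionFilterAttribute
    {
        private const int ThresholdMs = 2000;   // only log requests slower than this
        private const string ItemsKey = "__requestStopwatch";

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Store the stopwatch per request; a global filter instance is shared
            // across concurrent requests, so no instance fields.
            filterContext.HttpContext.Items[ItemsKey] = Stopwatch.StartNew();
        }

        public override void OnResultExecuted(ResultExecutedContext filterContext)
        {
            var sw = filterContext.HttpContext.Items[ItemsKey] as Stopwatch;
            if (sw == null) return;

            sw.Stop();
            if (sw.ElapsedMilliseconds > ThresholdMs)
                Trace.TraceWarning("{0} took {1} ms",
                    filterContext.HttpContext.Request.RawUrl, sw.ElapsedMilliseconds);
        }
    }

    // Registered once for the whole app, e.g. in FilterConfig.RegisterGlobalFilters:
    //   filters.Add(new StopwatchThresholdAttribute());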
In my experience, performance problems are almost always IO-related; it is rarely the CPU.
To get a gauge on where things stand without writing instrumentation code or installing software, use Performance Monitor in Windows to see where the time is being spent.
Another quick way to get a sense of where the problems might be is to run a small load test locally on your machine while a code profiler (like the one built into Visual Studio) is attached to the process to tell you where all the time is going. I usually find a few quick wins with that approach.