I have a user who has to use Microsoft Word 2007 to create and manipulate Word documents in his ASP web application. I'm well aware of the 'Considerations for server-side Automation of Office' KB article and the many third-party components available to do this kind of thing, but they've already gone a fair way down this road.
The problem he's having is that the WINWORD.EXE process never terminates even though he calls Quit on the application object.
For example:
Set objWord = Server.CreateObject("Word.Application")
'' Do work
objWord.Quit
Set objWord = Nothing
As you would expect, his server ends up littered with orphaned WINWORD.EXE processes requiring manual intervention to terminate.
Is there a way to terminate these WINWORD.EXE processes explicitly in the script?
Update:
I've made some progress with this, and I think I have a handle on what the problem is. When an Office application runs for the first time under a new Windows account, it asks for the user's name and initials. Because there is no visible GUI, there is no one there to push all the right buttons. I've tried starting WINWORD.EXE as the website's anonymous account identity using RUNAS, but some permission issue is preventing Word from completing this, or some other initial setup routine it wants to perform.
There are two things I can think of:
The document's Saved property has not been set to True. As a result, the Quit call makes WinWord display a "Do you wish to save" dialog. On a server. In a session with no desktop. Needless to say, Word never manages to quit, because it is waiting for a user who isn't there. Setting the property first (objDoc.Saved = True), or passing the wdDoNotSaveChanges option to Quit, avoids the prompt.
Your user's script is not always making it to the Quit line. This is frequent, and perhaps the biggest argument against Office automation: you really need multiple layers of exception handling and finally-style cleanup (not a strong point of VBScript or the classic ASP engine) to ensure the instance is properly cleaned up.
This question is now no longer relevant. Almost all the issues were related to WINWORD.EXE requiring elevated privileges (higher than we allow for web site anonymous user accounts) to be able to perform certain tasks.
The customer has wisely decided to use Aspose.Words for .NET, which also has a COM-callable wrapper, so classic ASP code can use it too.
Related
An old ASP.NET Web Forms application, developed in Visual Studio 2010, suddenly does not run anymore, and this message appears in the log file:
Session state has created a session id,
but cannot save it because the response was already flushed by the application.
No new deployment has been made, and no code modifications took place.
So far I have not found any solution to this.
What should I check?
Note that the source code is no longer available, so it would be very difficult to change the code and proceed with a new deployment.
Thanks in advance.
Luis
This would suggest that someone might be hitting the site and jumping directly to some URL (and thus code) that, say, does a response redirect to another page.
Remember, when code-behind runs and, say, redirects to another page, in most cases the running code for the current page is terminated, and that is normal behavior.
However, the idea that you are going to debug a web site when you don't have the code to debug? I don't see how that's going to work at all. As noted, if this just started, it sounds like incoming requests are hitting pages that don't expect to be hit first, pages that expect to be called only from other pages in the site, after some important session() values have been set up.
It is also not clear whether the site is using SQL-based sessions or just in-memory sessions. In-memory is faster, but it is not particularly reliable. If you deployed to a new web server or new hosting, session errors can start to appear, because cloud-based hosting is massively different from older hosting solutions that run on a single server.
Cloud computing is real utility computing, so when you host a web site on such a system, in-memory session() cannot be used anymore, since multiple servers can and will be used to dish out web pages. A few pages might be served by one server and the next few by another, and in-memory session() is limited to one server: servers don't, and can't, transfer their memory to one another.
So make sure SQL Server based sessions are being used here. For any kind of server farm, or any system that load-balances across more than one server, you have to use SQL Server based sessions, since in-memory sessions can't work in that kind of environment.
The error could also be due to excessive server load. The session database is often locked for short periods and can become a bottleneck, so for years you might not see an issue, but as use of the web site increases it can become noticeable where in the past it was not. The database used for storing sessions could be checked, since, as you note, you don't have the ability to test and debug the code. I would really, really work towards solving this lack of source code, since without it you have no means to manage, maintain, and fix issues for that web site.
But abrupt terminations of web pages? As noted, this could be an error triggered in code, so the page never finished what it was supposed to do. Or, as explained above, a page that expects some session() values but does not have them could be the problem. It is not clear whether your errors also show which URL this was occurring on.
While nothing seems to have changed - something obviously did.
Ultimately, you need to get that source code, or deal with the vendor that supplies the code for that site. If you don't have a vendor and you don't have source code, you are pretty much attempting to work on a car whose hood you can't even open to check what's going on underneath.
So, one suggestion here: someone is hitting a page that expects some value(s) in session to exist. Often the simple solution is to shove any dummy value into session so a session really does get created; then, when the page attempts to save the session, there is one to save.
In other words, this error often occurs when ASP.NET attempts to save the session but no session exists. For such pages, a tiny code change along the lines of session("zoozoo") = "my useless text" will fix the error. So it sounds like session is being lost.
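A minimal sketch of that dummy-value fix, assuming a Web Forms Global.asax (the key and value are placeholders; as noted below, this does require touching code):
// Global.asax.cs: write a throwaway value as soon as a session starts,
// so ASP.NET always has a real session to save at the end of a request.
void Session_Start(object sender, EventArgs e)
{
    Session["zoozoo"] = "my useless text"; // any dummy value forces session creation
}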
As noted, an error on a web page can also trigger an app-pool restart. If the app pool restarts, session is lost (for in-memory sessions). With session lost, any page that terminates early and has also used session() will trigger this error.
So it sounds like the app pool is being restarted and session is being lost (you can google the many reasons an app pool restarts). Critical to this issue, though: are you using SQL-based sessions or in-memory (server) sessions? The picture seems to be: some code triggers an error; the error restarts the app pool; the restart blows away the in-memory session; and with no session at all, the attempt to save the session triggers the exact error message you see. (This is also why shoving a dummy value into the session can fix some pages: you can't save a "nothing" session, and if you try, you get that exact error message.)
But, as noted, you can't make these simple code changes anyway, right?
But first, on this issue: are you using memory-based sessions or not? That feature can be set up and configured in IIS, without changes to the code base. So one quick fix might be to turn on SQL Server based sessions. It will cost some web site performance (around 10%), but the increased reliability is more than worth the hit.
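For reference, a minimal sketch of the web.config setting involved (the server name is a placeholder, and the session-state database must first be created, for example with the aspnet_regsql.exe tool):
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=MYSERVER;Integrated Security=SSPI"
              timeout="20" />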
Another area to look at: are AJAX calls being made to a page, again without any previous session having been created? Once more, we are down to a change in end-user behavior, possibly users hitting a page before having logged in or done other things, and again you would see this error crop up.
Here's the scenario...
We have an internal website running the latest version of the ODAC (Oracle Client). It opens database connections, runs a stored procedure or packaged method, then disconnects. Connection pooling is turned on. We are currently on version 11g in both our development and test environments, but on 10gR2 in our production environment, and production is where this happens.
A few days ago, a process began firing off an ORA-02020 error (too many database links in use). The process is called from a webpage on our internal website. The user simply sets a date and hits a button, and a job is started on another system that is separate from the website. The call itself, however, uses a database link to run a function.
We've scoured the SQL and found that it only uses that one database link. And since these links are opened on a per-session basis and the user isn't exceeding the default limit of 4, how is it possible that we are getting an ORA-02020 error?
We have run a number of tests to try to push past the default limit of 4. ODAC, from what I recall, runs a commit after each connection, and I can't seem to reproduce the error by running 4 DB links and then a piece of SQL with 1 DB link directly after. The only way I can bring up this error is by running a query with 4 DB links, then a function or piece of dynamic SQL with a database link within it. That doesn't match our situation, though, since this issue is sporadic; it isn't ALWAYS happening.
Questions
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
First, the simple solution: I'd double-check that the open-links limit in the production database is actually 4.
select *
from v$system_parameter
where name = 'OPEN_LINKS'
Assuming you're not going to get off that lightly:
Is it possible that connection pooling is allowing User B to use User
A's connection after the initial process was run, thus adding to the
open links number if User B runs a SQL statement with more database
links?
You say that you explicitly close the session, which, according to the documentation, should mean that all links associated with that session are closed. Other than that I confess complete ignorance on this point.
Is this a scenario where we should up our limit past 4? What are the
disadvantages of increasing the number?
There aren't any disadvantages that I can think of. Tom Kyte suggested, albeit a long time ago, that each open database link uses 500k of PGA memory. If you don't have any to spare then this will obviously cause a problem, but it should be more than fine for most situations.
There are, however, unintended consequences. Imagine that you up this number to 100. Somebody codes something that continually opens links and draws a lot of data through all of them, select * from my_massive_table or similar. Instead of 4 sessions doing this you have 100, attempting to transfer hundreds of gigabytes simultaneously. Your network dies under the strain...
There's probably more but you get the picture.
Do I need to explicitly close open database links before disconnecting
from the database? Oracle documentation seems to suggest it should
automatically happen, but "on occasion"... doesn't.
As you've noted, the best answer is "probably not", which isn't much help. You don't mention exactly how you're terminating the session, but if you're killing it rather than closing it gracefully, then definitely.
Using a database link spawns a child process on the remote server. Because your server is no longer in absolute charge of that process, there are a myriad of things that could cause it to become orphaned or otherwise not close on termination of the parent process. By no means does this happen all the time, but it can and does.
I would do two things.
In your process, if an exception is encountered, e-mail the results of the following query to yourself.
select *
from v$dblink
At a minimum you will then know which database links are open in the session, which gives you some way of tracing them.
Follow the documentation's advice; specifically the following:
"You may have occasion to close the link manually. For example, close
links when:
The network connection established by a link is used infrequently in an application.
The user session must be terminated."
The first seems to fit your situation exactly. Unless your process is time-sensitive, which doesn't seem to be the case, what have you got to lose? The syntax is:
alter session close database link <linkname>
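And, as a sketch of how that could look from the application side, assuming ODP.NET (Oracle.DataAccess.Client) and a hypothetical link name MY_LINK:
// using Oracle.DataAccess.Client;
// connectionString is assumed to be defined elsewhere.
using (var conn = new OracleConnection(connectionString))
{
    conn.Open();
    // ... run the function that uses the database link ...
    using (var cmd = conn.CreateCommand())
    {
        // Close the link so the pooled session is handed back clean.
        // Raises ORA-02081 if the link is not open in this session,
        // so you may want to catch and ignore that specific error.
        cmd.CommandText = "alter session close database link MY_LINK";
        cmd.ExecuteNonQuery();
    }
}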
We ended up increasing the link limit, but we never did find the root cause.
I would like to know the best way to deal with long running processes started on demand from an ASP.NET webpage.
The process may consist of various steps (like upload files to the server, run SSIS packages on them, execute some stored procedures etc.) and sometimes the process could take up to couple of hours to finish.
If I go for asynchronous execution using a WCF service, what happens if the user closes the browser while the process is running, and how should the success or failure of the process be displayed to the user? To solve this I chose one-way WCF service calls, but the problem with that is I need to create a process table and store the result there (including error messages if a step fails, and which steps completed successfully). That is additional overhead, because there are many such processes with various steps that the user can invoke from the web page; the user needs to be made aware of progress (in the simplest case, a status like "process xyz running"), and once it is done the output needs to be displayed to the user (for example, by running a stored procedure).
What is the best way to design the solution for this?
As I see it, you have three options:
Have a long running page where the user waits for the response. If this is several hours, you're going to have many usability problems, so I wouldn't even consider it.
Create a process table to store the results of operations. Run service functions asynchronously and delegate logging the results to the service. There can be a page that the user refreshes which gets the latest results of this table.
If you really don't want to create a table, then store all the current process details in the users' session state, and have a current processes page as above. You have the possible issue that the session might timeout, or the web app might restart and you'll lose all this.
I can't see that number 2 is such a great hardship. You could make the table fairly generic to encompass all types of processes: process details could just be encoded as binary or XML and interpreted by the web application. You then have the most robust solution; see the sketch below.
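For illustration, a sketch of what such a generic process-table row might look like (all names here are hypothetical):
// using System;
// One table covers every process type; step details are encoded as XML
// and interpreted by the web application, as suggested above.
public class ProcessRecord
{
    public Guid Id { get; set; }
    public string ProcessType { get; set; }   // e.g. "RunSsisPackage"
    public string Status { get; set; }        // "Running", "Succeeded", "Failed"
    public string DetailsXml { get; set; }    // per-step results and error messages
    public DateTime StartedUtc { get; set; }
    public DateTime? FinishedUtc { get; set; }
}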
I can't say what the best way would be, but using Windows Workflow Foundation for such long-running processes is definitely one way to go about it.
You can do tracking of the process to see what stage it is at, even persist it if you have steps where it is awaiting user input etc.
WF provides a lot of features out of the box (especially if your storage medium is SQL Server) and may be a good option to consider.
http://www.codeproject.com/KB/WF/WF4Extensions.aspx might give you some more insight.
I think you are on the right track. You should run the process asynchronously, store the execution somewhere (a table), and keep the status of the running process there.
Your user should see a pending label while the process is executing, and a finished label with the result once it completes. If the user closes the browser, she will see the result of her process the next time she logs in.
We require that in an ASP.NET application a .NET process be invoked automatically every day at a specified time. This process needs to interact with the database (SQL Server 2005) and generate billing on a daily basis. We are using shared hosting, so we are not able to create a Windows service or SQL Server jobs. How can this be achieved without user intervention?
You could try the technique described here, used at StackOverflow itself (or at least it was used here at one point). In a nutshell:
At startup, add an item to the HttpRuntime.Cache with a fixed expiration.
When the cache item expires, do your work, such as a WebRequest or what have you.
Re-add the item to the cache with a fixed expiration.
To get it to run at a specific time instead of an interval, you could lower the interval and simply modify your working method to check the time itself.
As the comments in the original article linked above note, this isn't a perfect solution, and no one should prefer it over a proper scheduling technique if one is available. See When Does Asp.Net Remove Expired Cache Items? for some additional qualifications.
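A minimal sketch of the pattern, assuming it lives in Global.asax.cs (the key name, the 2 AM check, and RunDailyBilling are placeholders, not part of the original technique):
// using System; using System.Web; using System.Web.Caching;
private const string CacheKey = "DailyTaskTrigger";
private static DateTime lastRun = DateTime.MinValue;

protected void Application_Start(object sender, EventArgs e)
{
    ScheduleNextCheck();
}

private static void ScheduleNextCheck()
{
    // NotRemovable keeps memory pressure from evicting the item early.
    HttpRuntime.Cache.Add(CacheKey, DateTime.Now, null,
        DateTime.Now.AddMinutes(1), Cache.NoSlidingExpiration,
        CacheItemPriority.NotRemovable, OnCacheRemove);
}

private static void OnCacheRemove(string key, object value, CacheItemRemovedReason reason)
{
    // Fires roughly once a minute; only do the real work once per day,
    // at the wanted hour, as the answer above suggests.
    if (DateTime.Now.Hour == 2 && lastRun.Date != DateTime.Today)
    {
        lastRun = DateTime.Now;
        RunDailyBilling(); // hypothetical worker method
    }
    ScheduleNextCheck();   // re-add the item so the cycle continues
}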
Yes, use Windows Scheduler. Depending on how it's configured you might need to be logged in for the scheduler to run.
You could always schedule a task to run a web service:
http://weblogs.asp.net/jgalloway/archive/2005/10/24/428303.aspx
The scheduler would run a VBS file with the following:
' Fire a GET request at the hosted web service to kick off the process.
Set oServerXML = CreateObject("Msxml2.ServerXMLHTTP")
oServerXML.Open "GET", "http://my.hostedservice.com/myService.asmx/myService?aParam=Value"
oServerXML.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
oServerXML.Send
Set oServerXML = Nothing
Can't be done, unfortunately.
IIS only responds to requests, and SQL Server only wakes up for jobs.
The closest you'll get is to put your routine in an ASPX page, not linked from the site and without an obvious name, and trigger it by a request from some other machine out on the Internet.
The other machine could be Windows, Linux, Mac, whatever you have available; all of those platforms have ways of scheduling events (services, cron, etc.) that can make the request to trigger the update on the server.
There are ways to run "services" in .Net by using cache expiration to trigger the task.
More at CodeProject
You can use a Scheduled Task, but this might not work in a shared hosting environment either.
You could set up a web service or page on your website to kick off the process, then have a scheduled task on a desktop machine hit that page/service once daily. Hacky, but it might work.
Being .NET ignorant, I would imagine there's some kind of .NET based scheduler framework available for this (much like Quartz for Java).
Or you could simply fire off a long-running thread that spends the bulk of its time sleeping: it wakes up every minute, checks the time, checks its list of "things to do", and fires off the ones that need to be done. The level of sophistication is as far as you want to take it, but the primary goal is keeping the scheduling thread alive, at all costs.
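A rough sketch of that idea (the 2 AM check and DoPendingWork are placeholders):
// using System; using System.Threading;
static void StartSchedulerThread()
{
    var scheduler = new Thread(() =>
    {
        while (true)
        {
            Thread.Sleep(TimeSpan.FromMinutes(1)); // spend most of the time asleep
            // Wake up, check the clock and the "things to do" list.
            if (DateTime.Now.Hour == 2 && DateTime.Now.Minute == 0)
                DoPendingWork(); // hypothetical: fire whatever is due
        }
    });
    scheduler.IsBackground = true; // don't block process shutdown
    scheduler.Start();
}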
What I can think of right now:
Create a DLL containing the schedule logic you want, and make sure its schedule function never stops, looping forever. You will then need a page on that server to fire the DLL's functions; you will need to call this page at least once to start the scheduler.
Create an application holding the schedule logic on another machine, maybe your home PC, and have that application call the functions on the server through web services or pages.
We have an ASP.NET application that lets administrators work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can run to clean up the data for a record (e.g. reformat phone numbers, social security numbers, etc.). When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, it may take several minutes or longer to complete. So we want to implement these kinds of tasks using some kind of asynchronous pattern; for example, we want to be able to launch the task and then use AJAX polling to provide a progress bar and status information.
I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this.
For example, I understand that the BackgroundWorker will actually use the thread pool of the current application, which in my case is an ASP.NET web site. I have read that this can be a problem, because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours.
Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.NET application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing?
So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach?
Thanks and sorry for such a long post!
Kevin
In your case it sounds like the solution you are looking for is multifaceted (not a simple one-and-done project).
Since you said that some processes can last for hours, that is absolutely not something for ASP.NET to own. This should run inside a Windows service and be managed with native Windows threading.
You will need to implement some type of work queue in your service and a way to communicate with the queue. One way is to expose a WCF service for all the actions your service will govern. Another would be to have the service poll a database table and pick up work from it.
To be able to express the status of the process, you will want the ASP.NET application to hold some reference to it, for example a GUID identifier returned by the WCF service. You then have a method that, given the process ID, returns the status of that process. You can implement polling of that service call using AJAX and display any kind of progress modal you wish.
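A sketch of the kind of contract being described, with all names hypothetical:
// using System; using System.ServiceModel; using System.Runtime.Serialization;
[ServiceContract]
public interface IBatchService
{
    [OperationContract]
    Guid StartPolishData(int[] recordIds);  // queue the work, return a process id

    [OperationContract]
    BatchStatus GetStatus(Guid processId);  // polled from the page via AJAX
}

[DataContract]
public class BatchStatus
{
    [DataMember] public string State { get; set; }  // "Queued", "Running", "Completed", "Failed"
    [DataMember] public int PercentComplete { get; set; }
}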
Another thing to remember is that your processes need to know where they are, and where they will be when finished, so they can track the state they are in. For example, BatchJobA runs and has 1000 records to process. The service needs to know which record it is on, or the current percentage complete, to be able to return information to the UI. For SQL queries that take a very long time to execute, this can be very hard to gauge accurately unless you do a lot of pre- and post-processing with temp tables whose status you can read partway through.
Based on what you are saying I think that BackgroundWorker is not a good choice.
Furthermore, keeping this functionality as part of your main app can be problematic, specifically because you do not want the submitted processing to be interrupted if the main app recycles. You can play with async processing, but it will still be part of the main app's AppDomain; all of it will die if the app recycles.
I would suggest building a separate app implementing this functionality. In a similar situation I separated background processing into a Windows service and hosted a web service in it as the means of communication.
You might consider a slightly different approach.
For example, have a command and control table in which you send commands like "REFORMAT PHONE NUMBERS" or whatever.
Then have a Windows service monitoring that table. Whenever a record shows up, run the command.
This eliminates any worry about a background thread. Further, you have a bit more flexibility with regard to what's in the queue, the order of operations including priority, etc. Finally, you have a definitive list of what is running or needs to run.
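A rough sketch of the polling loop inside such a service, assuming a hypothetical Commands table with Id, CommandText, and Status columns:
// using System; using System.Data.SqlClient; using System.Threading;
// Assumed members: connectionString, stopRequested (set by OnStop), RunCommand(...)
private void PollControlTable()
{
    while (!stopRequested)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var pick = new SqlCommand(
                "SELECT TOP 1 Id, CommandText FROM Commands " +
                "WHERE Status = 'PENDING' ORDER BY Id", conn);
            using (var reader = pick.ExecuteReader())
            {
                if (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    string commandText = reader.GetString(1);
                    reader.Close();
                    RunCommand(id, commandText); // do the work and update Status
                }
            }
        }
        Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval
    }
}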
As an option, instead of a Windows service, you might just use a SQL job that executes every so often to watch your control table and perform the requested action.