MSMQ Inconsistent State After Restart - asp.net

I'm seeing a really strange error that I'm having a difficult time tracking down. I think it's related to my configuration of Rhino ESB (RSB), though I'm not sure RSB is actually causing it, so I figured I'd ask and see if anyone else has come across this in any other usage of MSMQ.
I'm using RSB as a client in a web app (ASP.NET; the client runs in the background). The client talks to a Windows service via the MSMQ binding for RSB. Restarting the service never appears to have any effect on MSMQ, and neither does restarting IIS by hand. However, whenever I actually restart the computer itself, MSMQ always refuses to start back up, claiming that a "queue is in an inconsistent state". Attempting to start MSMQ manually results in the same error, effectively rendering the MSMQ install completely useless. The only way I've found to fix it is to remove and then reinstall MSMQ.
The only information I've found via the almighty Google is a few references to a problem in MSMQ 2.0 (this problem is occurring in MSMQ 4.0). I've verified that Dispose is being called on the bus at shutdown, in both the service and the web site.
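For reference, the web site's shutdown hook looks roughly like this (a simplified sketch with placeholder names; the actual bus bootstrap depends on the Rhino ESB version and container you use):

```csharp
using System;
using System.Web;

// Global.asax.cs - simplified sketch; CreateAndStartBus() stands in for the
// real Rhino ESB bootstrap code, which varies by version and container.
public class Global : HttpApplication
{
    private static IDisposable _bus;

    protected void Application_Start(object sender, EventArgs e)
    {
        _bus = CreateAndStartBus(); // placeholder for the RSB configuration code
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Releases the MSMQ handles the bus holds on a normal IIS shutdown.
        // (Application_End is not guaranteed to run if the process is killed.)
        if (_bus != null)
        {
            _bus.Dispose();
        }
    }

    private static IDisposable CreateAndStartBus()
    {
        throw new NotImplementedException(); // replace with your bus setup
    }
}
```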
Does anyone have any idea why this could be occurring? Thanks!

I faced the same issue on Windows Server 2008 (a virtual machine), although the environment was not related to the Rhino tools.
The error in the event log:
"The Message Queuing service cannot start because a queue is in an inconsistent state. For more information, see Microsoft Knowledge Base article 827493 at support.microsoft.com."
As Roy pointed out, this was happening every 2-3 days. Each time, we would follow the steps below to recover, instead of re-installing MSMQ.
1) Stop all applications and services that use MSMQ.
2) Kill mqsvc.exe from the Task Manager.
3) Go to C:\Windows\System32\msmq\storage and delete any .mq files.
4) Start the MSMQ service.
5) Start your application.
A rough C# sketch of automating steps 2-4 is shown below.
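If you end up doing this regularly, the middle steps can be scripted. A minimal sketch follows; run it elevated and only after the applications that use MSMQ have been stopped (step 1). "MSMQ" is the usual Windows service name for Message Queuing, and the storage path is the default location, so adjust both if yours differ:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.ServiceProcess; // requires a reference to System.ServiceProcess.dll

class ResetMsmqStorage
{
    static void Main()
    {
        // Step 2: kill any lingering mqsvc.exe process.
        foreach (var process in Process.GetProcessesByName("mqsvc"))
        {
            process.Kill();
            process.WaitForExit();
        }

        // Step 3: delete the .mq files from the MSMQ storage folder.
        foreach (var file in Directory.GetFiles(@"C:\Windows\System32\msmq\storage", "*.mq"))
        {
            File.Delete(file);
        }

        // Step 4: start the Message Queuing service again.
        using (var service = new ServiceController("MSMQ"))
        {
            service.Start();
            service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(1));
        }
    }
}
```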

In my scenario I was able to fix the "queue is in an inconsistent state" error that appeared after an MSMQ service restart.
It turns out the computer name was too long, so changing the computer name to one with fewer than 15 characters fixed the issue.
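A quick way to check for this condition is something like the sketch below: the NetBIOS name is capped at 15 characters, so a longer computer name gets truncated, which some services (MSMQ, in our case) do not cope with well.

```csharp
using System;
using System.Net;

class ComputerNameCheck
{
    static void Main()
    {
        string hostName = Dns.GetHostName();          // full host name
        string netbiosName = Environment.MachineName; // NetBIOS name (max 15 chars)

        Console.WriteLine("Host name:    {0} ({1} chars)", hostName, hostName.Length);
        Console.WriteLine("NetBIOS name: {0} ({1} chars)", netbiosName, netbiosName.Length);

        if (hostName.Length > 15)
        {
            Console.WriteLine("The computer name exceeds the 15-character NetBIOS limit.");
        }
    }
}
```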

My team is experiencing a similar issue, with MSMQ getting called by NSB 2.5. The issue came up recently after Infrastructure moved our VM to another physical server and for some reason lowered available RAM. We think the issue may be memory-related.
EDIT
After a week of no more issues with this, I can confidently say that raising RAM on the server solved our MSMQ's "Inconsistent state" issue. Mind you, we did have to re-install MSMQ first -- but the issue never came back, and before the RAM update the issue popped up every 2 days.

Regularly on Windows Server 2008 R2, MSMQ cannot start after a reboot.
The two regular issues for me are:
"The Message Queuing service cannot start because a queue is in an inconsistent state"
and
"The dependency service does not exist or has been marked for deletion"
Sometimes the following has helped (although we are seeking a more solid answer):
1) Rename the msmq folder to msmq_old
2) net stop wuauserv
3) net stop bits
4) Delete the "%windir%\SoftwareDistribution" directory
5) Reboot
This has occurred 5 times this year, and each time it has taken some variety of the above, with plenty of reboots.
Sometimes we resort to Remove Feature / Add Feature; however, you may get yourself into a loop: as the machine boots up, a rollback occurs in the Windows Update service, so the feature is never uninstalled and the problem is never repaired.
Following the steps above can help with that.

Related

How do I resolve an "Arg_NullReferenceException" error when trying to connect through CheckPoint VPN?

I use CheckPoint VPN to log in to my place of work's servers to work remotely. The VPN has been working (mostly) fine all year, and I haven't changed any of the settings, but this morning, when I tried to log in, it gave me the "Arg_NullReferenceException". I can't seem to find anything on this particular error on Google.
I have tried restarting my computer, because it's not the first issue I've had with CheckPoint VPN (though it is the first time I've seen that error message), and a restart usually resolves whatever issue I'm having. I've also tried creating a new connection with the same settings, but I'm getting the same error with that one, too.
I'm not entirely sure what other information I would need to provide. I'm also not sure if it's a problem on my end, or on the company servers. I have already emailed tech support, but I thought I should be thorough.
This is a known issue. I have been jumping through hoops trying to get the Capsule client to work. Raise a ticket with TAC if you have support. If not, you can download the E86 Endpoint Connect client and run it; that has been my workaround for this issue.
They just issued an update to the Capsule client via the Microsoft Store. It seems one of the recent Windows security updates broke the L2TP protocol within Windows.

Application intermittently holding requests open

I have a strange case where one of my applications is causing the IIS (7.0) request queue to fill up. Requests are not being terminated after 30s as they should be. This then takes all DB connections from the pool and renders that app useless (other apps are unaffected).
I have no idea a) Why they are stalling in the first place, and b) why IIS is letting them sit there stalled rather than killing them. I would guess my app is locking something, perhaps something the GC is trying to reclaim.
My question is where do I start on debugging such an issue? I have no idea. It's currently happening only in production, but reasonably regularly (maybe once every 4 hours) on all web servers.
PS: There is potentially an argument that this question is better on serverfault than on SO, but given that I think this is a development problem with the app rather than an admin one, I have started on SO for now. I am however happy to re-post there if needed.
For reference, using WinDbg was the solution I used. I attached WinDbg to the w3wp process for the app pool once requests had queued. I could then view the call stack of each thread, and although they differed, most of them were sitting waiting on a lock inside ResourceManager.
I still don't know why it was locking there; I thought ResourceManager was thread-safe. I re-wrote some code to cache the output of ResourceManager in another class, and that seems to avoid the lock.
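The caching class looked roughly like this (a simplified sketch; the resource base name and class names are illustrative, not the real ones):

```csharp
using System.Collections.Concurrent;
using System.Resources;

// Resolve each resource string once and serve later lookups from an in-memory
// cache, so concurrent requests no longer contend on the lock inside
// ResourceManager.
public static class CachedResources
{
    private static readonly ResourceManager Manager =
        new ResourceManager("MyApp.Resources.Strings", typeof(CachedResources).Assembly);

    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string Get(string key)
    {
        // GetOrAdd may invoke the factory more than once under contention,
        // which is harmless here because resource lookups are idempotent.
        return Cache.GetOrAdd(key, k => Manager.GetString(k));
    }
}
```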

Managing suspended/dehydrated instances when redeploying orchestrations

Scenario:
A biztalk application is deployed with a receive port, orchestration and send port. Messages flow correctly.
At some point, a bug is found in the orchestration, causing messages to suspend. The orchestration must be fixed and redeployed.
Question:
Because you can't redeploy an orchestration with suspended instances, how would you go about retaining those messages, terminating the instances, redeploying, and then resending those messages through the fixed orchestration? Is there a process or tool for this?
If the bug doesn't require major modification (i.e. no new orchestrations, no new schemas, no new promoted fields, etc.), then a short-term 'hack' is possible: simply reinstall the fixed MSIs (and GAC the assemblies) on your servers and restart the host instances (using NLB if applicable), i.e. without importing the MSIs into BizTalk.
You should then be able to resume any suspended (resumable) orchs. Then schedule some downtime at a less busy time, put your app into the Partially Stopped state to prevent new orchs starting, wait for all running orchs to complete, and then import the fixed MSI (consider bumping up the buggy orch assembly version with the hotfix).
Building a custom tool with the ability to audit all messages going in and out of BizTalk is useful, so you can replay them. This will allow you to terminate orchs, reinstall, and then replay.
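If you need to find (and eventually terminate) the suspended instances programmatically rather than through the Admin Console, the BizTalk WMI provider can be queried. A rough sketch follows; the status codes and property names assumed here should be checked against the MSBTS_ServiceInstance documentation for your version:

```csharp
using System;
using System.Management; // add a reference to System.Management.dll

// Lists suspended service instances via the BizTalk WMI provider.
// ServiceStatus 4 = suspended (resumable), 32 = suspended (not resumable).
class ListSuspendedInstances
{
    static void Main()
    {
        var scope = new ManagementScope(@"root\MicrosoftBizTalkServer");
        var query = new WqlObjectQuery(
            "SELECT * FROM MSBTS_ServiceInstance WHERE ServiceStatus = 4 OR ServiceStatus = 32");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject instance in searcher.Get())
            {
                Console.WriteLine("{0}  {1}", instance["ServiceName"], instance["InstanceID"]);

                // Once the messages have been saved/audited, the instance can be
                // terminated so the assembly can be redeployed:
                // instance.InvokeMethod("Terminate", null);
            }
        }
    }
}
```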
You can also fix the orchestration and, while building it, increment the version of the assembly. This way you can have both versions of the orchestration deployed in parallel, and you can unenlist the existing one after deploying the newer-versioned orchestration.

What Causes "Internal connection fatal errors"

I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to immediately close the connection with a C# "using" block (freeing up that pool for the next request), we received no more errors. While we still don't know the underlying reason that a connection pool would get so mixed up, we were able to cease all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
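For illustration, the shape of the refactoring was roughly this (the table and class names here are made up, not from the actual code):

```csharp
using System.Data.SqlClient;

// The connection lives only for the duration of the query and returns to the
// pool immediately, instead of being held until the end of the request.
public static class OrderQueries
{
    public static int CountOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
        // The same pattern applies to LINQ to SQL: wrap the DataContext in a
        // using block and materialise results (e.g. ToList()) before it is disposed.
    }
}
```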
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving to IIS 6 from IIS 5, connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of the ephemeral ports by the IIS server. The default maximum number of ports available is normally around 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application is making a lot of database calls.
You can monitor these first to see if you are going over the maximum limit.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes. You will need to reboot for the changes to take effect.
You should double-check that your database and recordset connections are being closed after use. Not closing them will use up this port range unnecessarily.
Check the efficiency of your stored procedures anyway, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and was trying to create the database on an external Linux server instead of a Windows server or a local one. After hours of searching I found out that the server was running MySQL instead of Microsoft SQL Server, which is why I found it weird that everyone was using port 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. In the Docker repository, look for "microsoft-mssql-server" and run the image as neatly described in the image's description. Everything works now and I am able to push my database from my EF Core project to the external server.
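For anyone following the same path, the EF Core side then just needs a SQL Server connection string pointing at the container's port 1433. A minimal sketch, assuming the Microsoft.EntityFrameworkCore.SqlServer provider, with the server name, credentials and entity as placeholders:

```csharp
using Microsoft.EntityFrameworkCore;

// Points an EF Core DbContext at the SQL Server container on its default port 1433.
public class Order
{
    public int Id { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseSqlServer(
            "Server=my-linux-host,1433;Database=Shop;User Id=sa;Password=<your password>;");
    }
}
```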

Slow BizTalk File Receive

I have an application with a file receive location. After the host instance has been running for a few hours the receive location fails to identify new files dropped into the folder that it is monitoring. It doesn't forget about them altogether, it's just that performance grinds to a crawl. The receive location is configured to poll the target folder every 60 seconds but after host instance has been running for an hour or so, then it seems that the target folder is being polled only every thirty minutes. If I restart the host instance then the files waiting in the target folder are collected right away and performance is fine for the next hour or so.
The same application runs fine in a different environment.
There are no obvious entries in the event log related to the problem.
All the BizTalk SQL jobs are running fine except for Backup BizTalk Server (BizTalkMgmtDb).
Any suggestions gratefully received.
Thanks
Rob
Here are some additional tools which may help you identify and diagnose BizTalk database issues.
BizTalk MsgBox Viewer
Here is a tool to repair identified errors:
Terminator
Use at your own risk... read the blogs and docs. Start with the MsgBox Viewer and let us know your results.
Without more details, the biggest tell is that your Backup job is failing. If the backup job is failing, it may not be properly configured. If it is properly configured and still failing, then you've got other issues. Can you give us some more information about your BizTalk install?
What version are you running?
What are your database sizes?
What are your purge and archive settings like?
Are there any long-running blocks in your SQL Server database coming from BizTalk?
Another thing to consider is the user accounts that the send, receive and orchestration hosts are running under. Please check in the BizTalk Administration Console. If they are all running under the same account, the orchestrations can sometimes starve the send and receive processes of CPU time. I believe priority is given to orchestrations, then receive, then send. Even if you are just developing, it is useful to use separate accounts for this. This also improves security.
The Wrox BizTalk Server 2006 book will also supply tuning advice.
What other things are going on with the server? Is BizTalk pegged otherwise or is it idle?
You mention that the solution does not have any problems in another environment, so it's likely that there is a configuration problem.
Check the following:
** On SQL Server, set some upper memory limit for SQL Server. By default, SQL Server uses whatever it can get and then hangs onto it, so set a reasonable limit so that your system can operate without spending a lot of time paging memory onto and from your hard drive(s).
** Ensure that you have available disk space - maybe you are running low - this can lead to all kinds of strange problems.
** Try to split up the system's paging file among its physical drives (if you have more than one drive on the system). Also consider using a faster drive, or if you have lots of cash laying around, get a SAN.
** In BizTalk, is tracking enabled? If so, are you also tracking message bodies? Disable tracking or message body tracking and see if there is a difference.
** Start Performance Monitor and monitor the following counters when running your solution:
Object: BizTalk Messaging
Instance: (select the receiving host) %%
Counter: Documents Received/Sec
Object: BizTalk Messaging
Instance: (select the transmitting host) %%
Counter: Documents Sent/Sec
Object: XLANG/s Orchestrations
Instance: (select the processing host) %%
Counter: Orchestrations Completed/Sec.
%% You may have only one host, so just use it. Since BizTalk configurations vary, I am using generic names for hosts.
The preceding counters monitor the most basic aspects of your server, but may help to narrow down places to look further. You can, of course, add CPU and Memory too. If you have time (days...maybe weeks) you could monitor for processes that allocate memory and never release it. Use the following counter...
Object: Memory
Counter: Pool Nonpaged Bytes
A steady climb in this counter indicates that a process is not releasing nonpaged pool memory, which affects everything on the system.
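If it helps, the same counter can also be sampled from code and written to a log alongside your other diagnostics; a small sketch:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Samples the counter described above once a minute so it can be logged.
class NonpagedPoolMonitor
{
    static void Main()
    {
        using (var counter = new PerformanceCounter("Memory", "Pool Nonpaged Bytes"))
        {
            for (int i = 0; i < 60; i++)
            {
                Console.WriteLine("{0:T}  Pool Nonpaged Bytes: {1:N0}",
                    DateTime.Now, counter.NextValue());
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }
}
```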
Let us know how things turn out!
I had the same problem: when my orchestration had been idle for some time, it took a long time to process the first message. An article by EvYoung helped me solve this problem.
"This is caused by application domain unloading within the BizTalk host process. If an AppDomain is shutdown after idle, the next message that comes needs to wait for the Orchestration to compile again. Depending on the complexity of your design, this can be a noticeable wait. To prevent this in low latency requirement scenario, you can modify the BTSNTSVC.EXE.config file and set SecondsIdleBeforeShutdown property to -1. This will prevent AppDomain shutdown due to idle."
You can find the article in here:
http://blogs.msdn.com/b/biztalkcpr/archive/2008/05/08/thoughts-on-orchestration-performance.aspx
It took me too long to respond, but I thought it might help someone. Cheers :)
Some good suggestions from others. I will add:
Do you have any custom receive pipeline components on the receive location? If so, perhaps one is leaking memory, or calling some external component (e.g. a database) that is taking a long time?
How big are the files you are receiving?
On the File transport properties of your receive location, turn "file renaming" on; do the files get renamed within 60s?
