In the last three weeks we have been experiencing a huge number of socket timeouts from the Microsoft Band Cloud API. When this issue started, a few retries were sufficient; at this point, however, it seems like it would be a never-ending retry loop.
We integrate with several other cloud APIs and this is the only one experiencing the issue. It is also one of our more lightly used APIs, so I don't believe the problem is load-induced or on our side.
Is anyone else experiencing this issue? Any recommendations? Thanks!
The service was having some issues with its DNS failover components, but it should now be back up and running normally. Please let me know if it continues to happen.
I have a question and I couldn't find any answer here.
I am using the NamedPipeClientStream/NamedPipeServerStream classes for bidirectional IPC in a .NET 5.0/.NET Framework 4.8 environment and it works fine. However, I need a way to let the client know (immediately) when the server is no longer running, and I am not sure what the best way to achieve this is.
Of course I can try, on the client side, to call Connect with a timeout every 3 seconds, for example, and if I get a TimeoutException it means the server is not available. However, as far as I have read, doing this (for example with a timer and/or a background thread) is not very efficient. Are there better ways to do this (and still use named pipes)? Can someone point me in another (better) direction?
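For reference, the polling approach I describe above looks roughly like this; it is only a minimal sketch, and the pipe name, connect timeout, and polling interval are placeholders, not values from my real application.

```
using System;
using System.IO.Pipes;
using System.Threading;

class ServerProbe
{
    static void Main()
    {
        while (true)
        {
            // "MyPipe" and the 500 ms connect timeout are placeholders.
            using (var probe = new NamedPipeClientStream(".", "MyPipe", PipeDirection.InOut))
            {
                try
                {
                    probe.Connect(500);   // throws TimeoutException when no server is listening
                    Console.WriteLine("Server is up");
                }
                catch (TimeoutException)
                {
                    Console.WriteLine("Server appears to be down");
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(3));
        }
    }
}
```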
I had solved this before with WCF; however, since .NET Core doesn't support it, that is not a solution anymore.
Thank you very much.
BR,
M.
You could take a twofold approach: if the server shuts down nicely, it sends an 'I am shutting down' message to the client, and it also sends small heartbeat messages to clients once per second. This way clients can check whenever they need to whether the last heartbeat is stale or the server has announced a shutdown.
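On the client side that could look roughly like the sketch below, assuming a simple line-based protocol over the pipe; the pipe name, the one-second heartbeat interval, and the three-second staleness threshold are assumptions.

```
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

// Minimal sketch of the heartbeat idea; names and timings are assumptions.
class HeartbeatClient
{
    private DateTime _lastHeartbeatUtc = DateTime.UtcNow;
    private volatile bool _serverGone;

    // A stale heartbeat (or an announced/abrupt shutdown) means the server is not running.
    public bool ServerLooksAlive =>
        !_serverGone && DateTime.UtcNow - _lastHeartbeatUtc < TimeSpan.FromSeconds(3);

    public async Task ListenAsync()
    {
        using var pipe = new NamedPipeClientStream(".", "MyPipe", PipeDirection.In);
        await pipe.ConnectAsync();
        using var reader = new StreamReader(pipe);

        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            if (line == "HEARTBEAT")
                _lastHeartbeatUtc = DateTime.UtcNow;   // refresh on every heartbeat
            else if (line == "SHUTDOWN")
                _serverGone = true;                    // graceful "I am shutting down"
        }

        // ReadLineAsync returning null means the pipe closed: the server went away without notice.
        _serverGone = true;
    }
}
```

The server side is then just a timer that writes "HEARTBEAT" to the same pipe once per second and writes "SHUTDOWN" right before it closes.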
Looking for a WCF replacement myself and came across your post ;-)
We are running into a possible latency issue with the Corda Token SDK and encumbrances. The encumbrance is not immediately recognized by DatabaseTokenSelection, but after a sufficient wait it is. And this happens sporadically.
This is a little hard to diagnose, but if you're able to pinpoint what's causing the latency, please do open an issue here: https://github.com/corda/token-sdk/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc
If you don't hear back on the issue, message me (#david) on slack.corda.net and I can help kick the can along for you.
Some tips for how to debug this:
Test the latency between the nodes with ping.
Make sure any CNAMEs resolve with nslookup from some of your Corda nodes.
Check if any flows are ending up in the flow hospital along the way: https://docs.corda.net/docs/corda-os/4.7/node-flow-hospital.html
I'm having issues with SignalR failing to complete its connection cycle when running in a load balanced environment. I’m exploring Redis as a means of addressing this, but want a quick sanity check that I’m not overlooking something obvious.
Symptoms –
Looking at the network traffic, I can see the negotiate and connect requests made, via XHR and websockets respectively, which is what I’d expect. However, the start request fails with
Error occurred while subscribing to realtime feed. Error: Invalid start response: ''. Stopping the connection.
And an error message of ({"source":null,"context":{"readyState":4, "responseText":"","status":200, "statusText":"OK"}})
As expected, this occurs when the connect and start requests are made on different servers. It also works 100% of the time in a non-load-balanced environment.
Is this something that a Redis backplane will fix? It seems to make sense, but most of the rationale I've seen for adding a backplane is around hub messages getting lost, not failing to make a connection, so I'm wondering if I'm overlooking something basic.
Thanks!
I know this is a little late, but I believe that the backplane only allows one to send messages between the different environments' user pools; it doesn't have any effect on how the connections are made or closed.
Yes, you are right. The backplane acts as a cache to pass messages to the other servers behind the load balancer. Connection handling on the load balancer is a different and tricky topic, for which I am also looking for an answer.
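For context, wiring up the Redis backplane in classic ASP.NET SignalR 2.x looks roughly like the sketch below; the Redis host, port, password, and event key are placeholders. Since the backplane only forwards hub messages between servers, the problem of negotiate/connect/start requests landing on different servers usually has to be addressed separately, typically by enabling client affinity (sticky sessions) on the load balancer.

```
using Microsoft.AspNet.SignalR;
using Owin;

// Rough sketch of adding the Redis backplane (Microsoft.AspNet.SignalR.Redis package).
// Host, port, password, and event key below are placeholders.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver.UseRedis("redis.example.com", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}
```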
We're currently having an issue with our BizTalk server running too few concurrent orchestrations, which is causing delays in delivering messages. The system has been running for years and the issue only appeared recently. In a normal state it would have 20-40 concurrent orchestrations, but currently it has been running only 4 or fewer at the same time.
The same configuration on a test server is working properly, so at first we thought clearing the database would help, but unfortunately it hasn't.
Any advice will be greatly appreciated.
The first thing to do is fire up Performance Monitor and see if the system is throttling and, if so, why. See: Host Throttling Performance Counters
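If it is more convenient to check programmatically, a rough sketch like the one below reads the two throttling-state counters; the host instance name "BizTalkServerApplication" is an assumption, so substitute your own host. A value other than 0 means the host is throttling, and the value indicates why.

```
using System;
using System.Diagnostics;

class ThrottlingCheck
{
    static void Main()
    {
        // "BizTalkServerApplication" is the default in-process host name; adjust to your host instance.
        const string host = "BizTalkServerApplication";

        var delivery   = new PerformanceCounter("BizTalk:Message Agent", "Message delivery throttling state", host, readOnly: true);
        var publishing = new PerformanceCounter("BizTalk:Message Agent", "Message publishing throttling state", host, readOnly: true);

        // 0 = not throttling; any other value is the throttling reason code.
        Console.WriteLine($"Message delivery throttling state:   {delivery.NextValue()}");
        Console.WriteLine($"Message publishing throttling state: {publishing.NextValue()}");
    }
}
```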
I'm seeing a really strange error that I'm having a difficult time tracking down. I think it's related to my configuration of Rhino ESB, though I'm not sure if RSB is actually causing it, so I figured I'd ask and see if anyone else has come across this in any other usages of MSMQ.
I'm using RSB as a client in a web app (ASP.NET; the client runs in the background). The client talks to a Windows service via the MSMQ binding for RSB. Restarting the service never appears to have an effect on MSMQ, nor does restarting IIS by hand. However, whenever I actually restart the computer itself, MSMQ always refuses to start back up, claiming that a "queue is in an inconsistent state". Attempting to start MSMQ manually results in the same error, effectively rendering the MSMQ install completely useless. The only way to solve it is to actually remove and then reinstall MSMQ.
The only information I've found via the almighty Google is references to a problem in MSMQ 2.0 (this problem is occurring in MSMQ 4.0). I've verified that Dispose is being called on the bus at shutdown, in both the service and the web site.
Does anyone have any idea why this could be occurring? Thanks!
I faced the same issue on a Windows Server 2008 virtual machine, although the environment was not related to the Rhino tools.
The error in the event log:
"The Message Queuing service cannot start because a queue is in an inconsistent state. For more information, see Microsoft Knowledge Base article 827493 at support.microsoft.com."
As Roy pointed out, this is happening every 2-3 days. Each time, we would follow the steps below to recover instead of re-installing MSMQ (a scripted sketch of the same procedure follows the list).
1) Stop all applications and services that use MSMQ.
2) Kill mqsvc.exe from Task Manager.
3) Go to C:\Windows\System32\msmq\storage and delete any .mq files.
4) Start the MSMQ service.
5) Start your application.
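If you have to do this regularly, the procedure can be scripted. The following is only a rough sketch: it assumes the MSMQ service (service name "MSMQ") will stop cleanly rather than needing mqsvc.exe killed, and that the storage folder is in the default location.

```
using System;
using System.IO;
using System.ServiceProcess;

class MsmqRecovery
{
    static void Main()
    {
        // Rough automation of the manual steps above; assumes MSMQ stops cleanly
        // and that the default storage path is used.
        var msmq = new ServiceController("MSMQ");

        if (msmq.Status != ServiceControllerStatus.Stopped)
        {
            msmq.Stop();
            msmq.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
        }

        foreach (var file in Directory.GetFiles(@"C:\Windows\System32\msmq\storage", "*.mq"))
            File.Delete(file);

        msmq.Start();
        msmq.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
    }
}
```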
In my scenario I was able to fix the "queue is in an inconsistent state" error that occurred after an MSMQ service restart.
It turns out the computer name was too long, so changing the computer name to one with fewer than 15 characters fixed the issue.
My team is experiencing a similar issue, with MSMQ getting called by NSB 2.5. The issue came up recently after Infrastructure moved our VM to another physical server and, for some reason, lowered the available RAM. We think the issue may be memory-related.
EDIT
After a week with no more issues, I can confidently say that raising the RAM on the server solved our MSMQ "inconsistent state" issue. Mind you, we did have to re-install MSMQ first, but the issue never came back, and before the RAM increase it popped up every 2 days.
Regularly on Windows Server 2008 R2, MSMQ cannot start after a reboot.
The two regular issues for me are:
"The Message Queuing service cannot start because a queue is in an inconsistent state"
and
"The dependency service does not exist or has been marked for deletion"
Sometimes the following has helped (although we are seeking a more solid answer):
1) Rename the msmq folder to msmq_old
2) net stop wuauserv
3) net stop bits
4) Delete the "%windir%\SoftwareDistribution" directory
5) Reboot
This has occurred 5 times this year, and each time we have used some variety of the above, with plenty of reboots.
Sometimes we resort to Remove Feature / Add Feature; however, you may get yourself into a loop: as the machine boots up, a rollback occurs in the Windows Update service, so the feature is never uninstalled and the problem is never repaired.
Following the steps above can help with that.