What is the reason for an Azure VM to become non-compliant after an update deployment?
Is it because there are any critical or security updates missing?
Does having "Other" updates missing cause a VM to become non-compliant?
There was no documentation to clarify this; I came to this conclusion only from the current statistics. If there is any documentation to prove it, I would appreciate your help.
Yes, the likely cause is missing critical or security updates. Before you deploy software updates to your machines, review the update compliance assessment results for the enabled machines. For each software update, its compliance state is recorded; after the evaluation is complete, the results are collected and forwarded in bulk to Azure Monitor logs.
On a Windows machine, the compliance scan runs every 12 hours by default, and is initiated within 15 minutes of the Log Analytics agent for Windows being restarted. The assessment data is then forwarded to the workspace and refreshes the Updates table. Before and after update installation, an update compliance scan is performed to identify missing updates, but the results are not used to update the assessment data in the table. It is also important to review the recommendations on how to configure the Windows Update client.
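If you want to check the assessment data yourself, you can query the workspace directly. A minimal sketch, assuming the Az.OperationalInsights PowerShell module, an authenticated session (Connect-AzAccount), and a placeholder workspace ID; the Log Analytics table is named Update:

    # Count missing critical/security updates per machine from the Update table.
    $workspaceId = "<your-workspace-id>"   # placeholder
    $query = 'Update
        | where Classification in ("Security Updates", "Critical Updates")
        | where UpdateState == "Needed"
        | summarize MissingUpdates = dcount(Title) by Computer, Classification'
    $result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
    $result.Results | Format-Table Computer, Classification, MissingUpdates

Machines that return rows for these classifications are the ones you would expect to be flagged as non-compliant.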
After reviewing the compliance results, the next phase is deploying the software updates. To install updates, schedule a deployment that aligns with your release schedule and service window.
After the deployment is complete, review the results to determine the success of the update deployment per machine or target group (see "Check deployment status" in the documentation).
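If you script your deployments, the run status can be checked the same way. A sketch, assuming the Az.Automation module; the resource group and account names are placeholders:

    # List recent software update runs and their outcome.
    Get-AzAutomationSoftwareUpdateRun `
        -ResourceGroupName "rg-updates" `
        -AutomationAccountName "aa-updates" |
        Select-Object Name, Status, StartTime, EndTime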
I am performing an install on a server, which requires the server to reboot to complete the installation. My query is how to find out when the reboot completes so that I can run a basic smoke test on the server and confirm the deployment status.
This is a non-trivial task in general, and the solution will depend on the operating system of the server being restarted.
In general, it is best to let the server itself decide when the startup process has completed, and have it send some notification to interested parties.
If you can't do that, you can check for services that are typically available after a reboot (e.g., an HTTP port being reachable, or exported file shares being mountable). Adding some extra wait time on top of that may be useful; a sketch combining both ideas follows below.
At the other, more unreliable end of the range of possible solutions, you can simply wait for the amount of time that a reboot typically requires.
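A minimal PowerShell sketch of the polling approach, assuming the server answers on port 80 once it is back up (the host name and port are placeholders):

    # Poll until the server answers on a known port, then allow extra settling
    # time before running the smoke test.
    $server = "myserver.example.com"   # placeholder
    do {
        Start-Sleep -Seconds 10
        $up = Test-NetConnection -ComputerName $server -Port 80 `
                                 -InformationLevel Quiet -WarningAction SilentlyContinue
    } until ($up)
    Start-Sleep -Seconds 30   # headroom for services that start after the port opens
    Write-Host "Server is reachable; starting smoke test."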
I have a BizTalk application which loops over an XML message and sends data to a SQL Server database. The orchestration works fine on the DEV machine throughout the process and is consistent. But if I process the same file on the QA machine, it starts at the same speed and then the performance keeps degrading. There is no issue with the database objects, and the throttling settings are the same as on DEV. I restarted the machine. Not sure why QA is reacting this way for this application.
What are the areas to be checked?
There are various factors which can cause this and affect your solution's overall performance:
Is QA a shared environment, i.e. are there other solutions on it which may cause the slowdown?
If the host on which the orchestration runs is shared, that host might be throttling for various reasons, such as memory pressure. Use the performance counters to monitor the host's throttling state (see the sketch after this list).
You may have too many persistence points in the orchestration, since you are looping and sending messages to the SQL database inside the loop. Each Send shape causes a persistence point per send in the loop, which degrades performance considerably.
Isolate the issue, i.e. determine whether the orchestration is running slowly or whether sending to SQL is taking the time.
Check whether tracking is turned on while the DTA jobs are not running.
Check whether the message cleanup jobs are not running as expected in QA.
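For the throttling check above, a minimal PowerShell sketch using the standard BizTalk Message Agent counters (the host instance name is a placeholder):

    # Sample the host's throttling-state counters; a sustained non-zero value
    # means the host is throttling, and the value indicates the cause.
    $hostName = "BizTalkServerApplication"   # placeholder host instance name
    $counters = @(
        "\BizTalk:Message Agent($hostName)\Message delivery throttling state",
        "\BizTalk:Message Agent($hostName)\Message publishing throttling state"
    )
    Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object { $_.CounterSamples | Select-Object TimeStamp, Path, CookedValue }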
I wrote a blog about how to use SQL Server Profiler to capture the RPC call from BizTalk to SQL Server. You could isolate whether SQL is causing the issue that way; capture the RPC call on DEV or QA, and then try running just the stored procedure on QA. If it doesn't run as quickly as on DEV, that's your problem. If it does, look at your BizTalk artifacts.
Here's the blog: http://blog.tallan.com/2015/01/09/capturing-and-debugging-a-sql-stored-procedure-call-from-biztalk/
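Once you have the call from the trace, timing it from PowerShell is enough for the DEV/QA comparison. A sketch, assuming the SqlServer module; the server, database, and procedure names are placeholders for whatever the trace shows:

    # Run the captured stored procedure call and time it, first on DEV, then on QA.
    Measure-Command {
        Invoke-Sqlcmd -ServerInstance "QA-SQL01" -Database "MyAppDb" `
            -Query "EXEC dbo.usp_InsertOrderData @OrderId = 12345"   # from the trace
    } | Select-Object TotalMilliseconds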
The BizTalk host throttled because DatabaseSize exceeded the configured throttling limit. The SQL Server Agent was also not running on the server, so the purge processes did not run. This looks to have built up the database size over time until BizTalk throttled the application due to resources being low.
I'm having issues getting Application Insights to report data to Visual Studio Online from behind our firewall. I opened the firewall rules noted in this article but it didn't make a difference. I've uninstalled and reinstalled several times. The only thing showing up in the Operations Logs is that it periodically purges items in the "AppDiagnostics" queue since they exceed the maximum allowed size of 15 MB (full error below).
Get-WebApplicationMonitoringStatus shows all the applications I would expect to be monitored being monitored.
The health service has removed some items from the send queue for management group "AppDiagnostics" since it exceeded the maximum allowed size of 15 megabytes.
The IP addresses and hosts that you need to configure/allow are officially documented here:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-ip-addresses
I'd copy and paste the "relevant" portions, but there's a huge number of them depending on what you want/need to do, and they'd become wrong here whenever the above doc is updated.
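To verify the firewall rules actually took effect, test outbound connectivity from the affected machine. A sketch with two example endpoints taken from the linked list (check the doc for the current, complete set):

    # Test TCP 443 reachability to a couple of the documented ingestion endpoints.
    "dc.services.visualstudio.com", "rt.services.visualstudio.com" | ForEach-Object {
        Test-NetConnection -ComputerName $_ -Port 443 |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }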
Last night, one of the websites (.NET 4.0 Forms) hosted on my Windows 2008 R2 (IIS 7.5) server started to time out, throwing the following error for all connected users.
TYPE System.Web.HttpException
MESSAGE Request timed out.
DETAIL System.Web.HttpException (0x80004005): Request timed out.
The outage was confined to just one website within IIS, the others continued to work fine.
Unfortunately I was unable to identify why the website was timing out. Here are the steps I took:
First thing I did was look at the task manager which revealed normal CPU and memory usage. Network activity was also moderate.
I then opened IIS to look at the live connections under 'Worker Processes'. There were about 60 live connections, so it didn't look like anything DDoS related.
Checked database connectivity (hosted on a separate server), all fine!
I then reset the website in IIS. That didn't work.
I then tried a complete iisreset... still no luck :(
In the end (and under some duress) the only thing I could think to do to resolve this was to restart the server.
Restarting the server worked, but I am nervous not knowing why this happened in the first place. Can anyone recommend any checks that I failed to carry out? Is there an official checklist for working through these sorts of IIS problems? I have reviewed the IIS logs but don't see anything unusual in the run-up to the outage.
Any pointers or links to useful resources to help me understand and mitigate against this in future will be much appreciated.
EDIT
The only time I logged into the server that day was to add an additional web handler component (for remote deploy) to IIS Web Deploy. I'm doubtful this caused the outage, as the server worked fine for 6 hours afterwards.
Because iisreset didn't help and you had to restart the whole machine, I would suspect a global resource shortage, with the most used (or most resource-consuming) website being the one impacted. It could be a lack of available RAM, or network connection congestion due to malfunctioning calls (for example, a lot of CLOSE_WAIT sockets exhausting the connection pool; we've seen that in production because of a malfunctioning external service). It could also have been one specific client's problem, with the client disconnected by the machine restart, so the problem eventually disappeared.
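If you suspect the CLOSE_WAIT scenario, a quick way to spot it next time is to count connections per TCP state. A PowerShell sketch that parses netstat output (usable on Server 2008 R2, which lacks Get-NetTCPConnection):

    # Count TCP connections per state; a large CLOSE_WAIT pile-up is suspicious.
    netstat -ano | Where-Object { $_ -match '^\s*TCP' } | ForEach-Object {
        ($_ -split '\s+')[4]    # the fifth column of netstat output is the state
    } | Group-Object | Sort-Object Count -Descending | Format-Table Name, Count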
I would start with:
Historical analysis
review Event Viewer to see any errors/warnings from that period of time,
although you have already looked at the IIS logs, I would do it once again with the help of Log Parser Lizard to produce some statistics, like the number of requests per client, network bandwidth per client, average response time per client, and so on.
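If you prefer the command line, the same statistics can be produced with Log Parser 2.2 directly. A sketch with a placeholder log path:

    # Requests and average response time per client IP from the W3C logs.
    logparser.exe -i:W3C "SELECT c-ip, COUNT(*) AS Requests, AVG(time-taken) AS AvgTimeMs FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log GROUP BY c-ip ORDER BY Requests DESC"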
Monitoring
continuously monitor Performance Counters (a collection script follows after this list):
\Processor(_Total)\% Processor Time,
\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec,
\Memory\Available MBytes,
\Web Service(Default Web Site)\Current Connections (per each your site name),
\ASP.NET v4.0.30319\Request Wait Time,
\ASP.NET v4.0.30319\Requests Current,
\ASP.NET v4.0.30319\Requests Queued,
\Process(XXX)\Working Set,
\Process(XXX)\% Processor Time (XXX per each w3wp process),
\Network Interface(XXX)\Bytes total / sec
run the Performance Analysis of Logs (PAL) tool over the counter logs from the time of failure to make a very detailed analysis of the performance counter data,
run netstat -ano to analyze network connections (or, even better, the TCPView tool).
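As mentioned above, a sketch of logging those counters continuously, so the data is already there the next time the site hangs (the site, process, and path names are examples):

    # Log the key counters to a .blg file for later analysis (e.g. with PAL).
    $counters = @(
        "\Processor(_Total)\% Processor Time",
        "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec",
        "\Memory\Available MBytes",
        "\Web Service(Default Web Site)\Current Connections",
        "\ASP.NET v4.0.30319\Requests Current",
        "\ASP.NET v4.0.30319\Requests Queued",
        "\ASP.NET v4.0.30319\Request Wait Time"
    )
    Get-Counter -Counter $counters -SampleInterval 15 -Continuous |
        Export-Counter -Path "C:\PerfLogs\website.blg" -FileFormat BLG -Force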
If none of this leads you to a conclusion, create a Debug Diagnostic rule to take a memory dump of the process for long-running requests, and analyze it with WinDbg and the PSSCor extension for .NET debugging.
I have an application with a file receive location. After the host instance has been running for a few hours, the receive location fails to identify new files dropped into the folder it is monitoring. It doesn't forget about them altogether; it's just that performance grinds to a crawl. The receive location is configured to poll the target folder every 60 seconds, but after the host instance has been running for an hour or so, the target folder seems to be polled only every thirty minutes. If I restart the host instance, the files waiting in the target folder are collected right away, and performance is fine for the next hour or so.
The same application runs fine in a different environment.
There are no obvious entries in the event log related to the problem.
All the BizTalk SQL jobs are running fine except for Backup BizTalk Server (BizTalkMgmtDb).
Any suggestions gratefully received.
Thanks
Rob
Here are some additional tools which may help you identify and diagnose BizTalk database issues.
BizTalk MsgBox Viewer
Here is a tool to repair identified errors:
Terminator
Use at your own risk... read the blogs and docs. Start with the MsgBox Viewer and let us know your results.
Without more details, the biggest tell is that your backup job is failing. If the backup job is failing, it may not be properly configured. If it is properly configured and still failing, then you've got other issues. Can you give us some more information about your BizTalk install?
What version are you running?
What are your database sizes?
What are your purge and archive settings like?
Are there any long-running blocks in your SQL Server DB coming from BizTalk?
Another thing to consider is the user accounts the send, receive, and orchestration hosts are running under. Check these in the BizTalk Administration Console. If they are all running under the same account, the orchestrations can sometimes starve the send and receive processes of CPU time. I believe priority is given to orchestrations, then receive, then send. Even if you are just developing, it is useful to use separate accounts for these. This also improves security.
The Wrox BizTalk Server 2006 book also offers tuning advice.
What other things are going on with the server? Is BizTalk pegged otherwise or is it idle?
You mention that the solution does not have any problems in another environment, so it's likely that there is a configuration problem.
Check the following:
On SQL Server, set an upper memory limit for SQL Server. By default, SQL Server uses whatever it can get and then hangs onto it, so set a reasonable limit so that your system can operate without spending a lot of time paging memory to and from your hard drive(s).
Ensure that you have available disk space. Maybe you are running low; this can lead to all kinds of strange problems.
Try to split the system's paging file among its physical drives (if you have more than one drive on the system). Also consider using a faster drive, or, if you have lots of cash laying around, get a SAN.
In BizTalk, is tracking enabled? If so, are you also tracking message bodies? Disable tracking or message body tracking and see if there is a difference.
Start Performance Monitor and watch the following counters when running your solution (a scripted version of this collection follows after the counter list):
Object: BizTalk Messaging
Instance: (select the receiving host) %%
Counter: Documents Received/Sec
Object: BizTalk Messaging
Instance: (select the transmitting host) %%
Counter: Documents Sent/Sec
Object: XLANG/s Orchestrations
Instance: (select the processing host) %%
Counter: Orchestrations Completed/Sec.
%% You may have only one host, so just use it. Since BizTalk configurations vary, I am using generic names for hosts.
The preceding counters monitor the most basic aspects of your server, but may help to narrow down places to look further. You can, of course, add CPU and Memory too. If you have time (days...maybe weeks) you could monitor for processes that allocate memory and never release it. Use the following counter...
Object: Memory
Counter: Pool Nonpaged Bytes
A slow decline of this counter indicates that a process is not releasing memory, which affects everything on the system.
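As referenced in the list above, a sketch of collecting those counters with PowerShell ("BizTalkServerApplication" is a placeholder host name; verify the exact counter names in Performance Monitor, as they vary slightly between BizTalk versions):

    # Sample the suggested BizTalk and memory counters every 10 seconds.
    $biztalkHost = "BizTalkServerApplication"   # placeholder; repeat per host
    $counters = @(
        "\BizTalk:Messaging($biztalkHost)\Documents Received/Sec",
        "\BizTalk:Messaging($biztalkHost)\Documents Sent/Sec",
        "\XLANG/s Orchestrations($biztalkHost)\Orchestrations Completed/Sec.",
        "\Memory\Pool Nonpaged Bytes"
    )
    Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 30 |
        ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }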
Let us know how things turn out!
I had the same problem: when my orchestration was idle for some time, it took a long time to process the first message. An article by EvYoung helped me solve this problem.
"This is caused by application domain unloading within the BizTalk host process. If an AppDomain is shutdown after idle, the next message that comes needs to wait for the Orchestration to compile again. Depending on the complexity of your design, this can be a noticeable wait. To prevent this in low latency requirement scenario, you can modify the BTSNTSVC.EXE.config file and set SecondsIdleBeforeShutdown property to -1. This will prevent AppDomain shutdown due to idle."
You can find the article in here:
http://blogs.msdn.com/b/biztalkcpr/archive/2008/05/08/thoughts-on-orchestration-performance.aspx
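For reference, the setting lives in the xlangs section of BTSNTSVC.EXE.config. A sketch of the relevant fragment based on the linked article and the BizTalk documentation; verify the section handler type and element names against your installation before using it:

    <configuration>
      <configSections>
        <section name="xlangs"
                 type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
      </configSections>
      <xlangs>
        <Configuration>
          <AppDomains>
            <!-- -1 = never shut the AppDomain down because it is idle -->
            <DefaultSpec SecondsIdleBeforeShutdown="-1" SecondsEmptyBeforeShutdown="1800" />
          </AppDomains>
        </Configuration>
      </xlangs>
    </configuration>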
It took me too long to respond, but I thought it might help someone. Cheers :)
Some good suggestions from others. I will add:
Do you have any custom receive pipeline components on the receive location? If so, perhaps one is leaking memory, or is calling some external component (e.g. a database) which is taking a long time?
How big are the files you are receiving ?
In the File transport properties of your receive location, turn "file renaming" on; do the files then get renamed within 60 seconds?