Rails Warbler Deployment - jrubyonrails

I am doing some testing to determine the resource usage of a Rails WAR. I have used Warbler to package the "15-minute Blog" application using Rails 2.3.5 and JRuby 1.4.0. I am deploying into Tomcat 6.0.24 and creating multiple deployments by copying the blog.war file as blogN.war.
This worked great for the first 4 deployments, but I can't seem to deploy any more than 4 instances of the WAR; in other words, the catalina.out log hangs at "Deploying web application archive blog5.war".
Any ideas on what the problem might be or how I might better troubleshoot this?

Increasing the PermGen space with "-XX:PermSize=64m -XX:MaxPermSize=128m" corrected this problem.

Check your log files; it may be the case that the Java process in which Tomcat runs is out of memory. See the Java memory parameters (-Xms, -Xmx) and http://wiki.apache.org/tomcat/FAQ/Memory . Increasing the available memory may allow you to run more instances of the application.
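For reference, a minimal sketch of how these settings are typically applied to Tomcat, assuming the usual catalina.sh setup (file location and values are illustrative, not a recommendation):
# bin/setenv.sh is sourced by catalina.sh on startup if it exists
export CATALINA_OPTS="-Xms256m -Xmx1024m -XX:PermSize=64m -XX:MaxPermSize=128m"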

Related

asp.net mvc 4 app 'precompilation'

I have an asp.net mvc4 application that we deploy to about 1400 clients. Our current deployment process goes like this:
:: Publish the project files to a local directory
%msbuild% RPortal.csproj "/p:Platform=AnyCPU;Configuration=Release;ConfigurationName=Release;SolutionDir=%solutionDir%;PublishDestination=.\Deploy\Release" /t:PublishToFileSystem
And then we have some powershell scripts that push the new build to each of our clients, using other tools to help with the sync.
We are noticing some severe slowness with application boot-up times (sometimes upwards of 5 minutes), and one of the approaches we've investigated to solve this is ASP.NET precompilation (http://msdn.microsoft.com/en-us/library/vstudio/bb398860%28v=vs.100%29.aspx).
In some experiments with the idea, it appears that calling aspnet_compiler.exe on our published output does, in fact, create some new files (a couple of DLLs and a few *.compiled files).
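(For context, a rough sketch of how aspnet_compiler.exe can be pointed at the published output; the framework path and target folder below are illustrative, not our actual layout:)
:: precompile the published output into a separate folder
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v / -p .\Deploy\Release .\Deploy\Precompiled -fixednames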
My questions are as follows:
How does this compilation differ from the primary compilation from the .cs source files?
Does this compilation happen regardless, at first run, or only when run manually?
(Related to 2) Does this compilation survive app pool restarts and server reboots?
With our current scenario, it seems that we are killing our servers trying to get 1400 applications to start up (yes, they all live on one web server... that's not a situation I am in control of). The server will be humming along, with no particular problems and reasonable resource consumption, and then, all of a sudden, our CPU will peg at 100% and stay there. The only factor we can tie it back to is that it happens when more than 5 (or so) of the 1400 apps are booting up at once.
Our hope is that precompiling will front-load most of the app start burden, but I clearly don't understand what's really going on here.
Any light you might shed would be most appreciated.

DLL deployment increases startup time of Sitecore site

We have a Sitecore 6.6 instance which is used to host multiple sites. It is hosted in IIS 7.5. We developed custom Sitecore sublayouts and pipelines which are used across websites.
When any DLL is deployed to the bin folder, the Sitecore site takes a long time to start up (8-10 minutes), but when IIS is reset, the startup time is much shorter (30-40 seconds).
Why is the application startup time longer after a DLL deployment than after an IIS reset?
Any suggestions to improve the application startup time after a DLL deployment?
Update 1: The startup time after DLL deployment impacts our build process, as it increases the overall deployment time in all environments (DEV, STG, LIVE).
A profiling snapshot of the w3wp process revealed two major hotspots:
Sitecore.Threading.Semaphore.P
Sitecore.IO.FileWatcher.Worker
Update 2: After following the deployment approach suggested by Vicent, a profiling snapshot of the w3wp process revealed a major hotspot at
Sitecore.Web.UI.WebControls.Sublayout.GetUserControl(Page)
Further analysis of a memory dump showed that the thread was waiting for JIT compilation of the newly deployed DLL.
To me, it looks like your problem is not the startup of Sitecore, but the shutdown.
When you copy your DLL, the FileWatcher detects the change in the bin folder (and writes it to the logs) and tries to shut down Sitecore (logging this too), but if Sitecore has tasks running on other threads (indexing, publishing, scheduled tasks, etc.), the semaphore will wait until those threads finish normally.
That's why, when you "kill" the process without waiting for the threads to finish, Sitecore starts up quickly.
I see this behaviour in my environments too, so when I need a quick restart, I copy the DLL, wait a few seconds so that Sitecore at least tries to shut down, and then I kill the w3wp.exe related to my pool. I wouldn't advise anyone to do this, but I don't have any way to "kindly" kill the threads... Maybe somebody knows how to force a shutdown nicely.
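(A sketch of one way to target only the right worker process instead of killing w3wp.exe blindly; the PID below is illustrative:)
:: list worker processes with their PIDs and application pools
%windir%\system32\inetsrv\appcmd list wp
:: then kill only the one belonging to your pool
taskkill /PID 1234 /F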
This blog post by Alex Shyba has some interesting pointers to improve the startup time for Sitecore (but might not be applicable if you're talking about a live environment rather than a DEV environment).
It might also be worth checking your prefetch caches and running through the Performance Tuning Guide if you haven't done that yet.
I've seen this problem before. It happened on version 6.5, but I've not seen a fix for it in the release notes since then.
Sitecore Support has a hotfix for this - it was indeed related to their file system watcher tasks. You would need to raise a ticket with them to get the hotfix or additional information.
My support ticket reference for this issue is 370593. The hotfix has issue 323775. If you mention this in your support ticket, it should speed up the process a bit - if it is indeed the same issue you're experiencing.

memory limit for x64 process in IIS7.5

Windows Web Server 2008 R2 (x64) & .NET Framework 4.5
It is a classic ASP.NET web site (not a web project; the code is in the App_Code directory and is compiled when the site is launched).
It depends on many referenced DLLs in the /Bin directory.
For the DLLs I have source code for, I compile them targeting the "x64" platform.
I also have some other DLLs without source code (mysql.data.dll, etc.), which are compiled as "Any CPU".
I modified them with EditBin.exe to ensure the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in their PE headers.
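(For concreteness, the EditBin step and a quick check of the header bit look roughly like this; the DLL name is just an example:)
:: set the large-address-aware flag on the DLL
editbin /LARGEADDRESSAWARE mysql.data.dll
:: verify: look for "Application can handle large (>2GB) addresses" in the header output
dumpbin /headers mysql.data.dll | findstr /i "large"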
According to this table:
http://msdn.microsoft.com/en-us/library/aa366778%28VS.85%29.aspx#memory_limits
an x64 process can't use more than 2 GB of memory unless IMAGE_FILE_LARGE_ADDRESS_AWARE is set.
How can I verify whether it works?
Is there any place where I can see the memory limit of a running x64 process?
I don't know if you can actually "see the memory limitation" explicitly stated (assuming you don't trust MS's own documentation that you cited) other than getting your hands on and digging into the IIS and/or ASP.NET source code.
That being said, you could try stress testing the site, and monitor memory consumption (via Task Manager or Process Monitor), to see if it exceeds 2GB. I would recommend tinyget, part of the IIS 6 Resource Kit, which can still be used with IIS 7.
tinyget -svr:localhost -uri:/<your site> -loop:200 -threads:20
You'll have to play with the loop and thread counts to try to push it over 2 GB. I would expect to see a System.OutOfMemoryException as you approach about 1.4 GB of combined physical and virtual private bytes. You may want to create a stress test function in the site itself, for testing purposes only of course, which will help you reach this limit by using the exact opposite of good practices. You can read more about what leads to a System.OutOfMemoryException and then do the things it recommends against. For example, add a test method that just concatenates strings in a very large loop.
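(To watch the worker process memory while the test runs, one option besides Task Manager is typeperf; the instance name below assumes a single w3wp worker process:)
:: sample private bytes every 5 seconds while tinyget is running
typeperf "\Process(w3wp)\Private Bytes" -si 5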
Try procexp (Process Explorer) from Sysinternals. This application can monitor .NET-specific metrics.
Nevertheless, according to your link, you should be able to address at least 8GB.
Please keep in mind that enforcing IMAGE_FILE_LARGE_ADDRESS_AWARE is irrelevant in your case. You could have all your components compiled to target "Any CPU"; the only flag that is checked is the one on the executable file.

WinDBG - Analyse dump file on local PC

I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as the one where you create the dump file. However, I have also read that some developers do analyse the dump file on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the w3wp process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues afterwards. Unless you are familiar with WinDbg and SOS, you will otherwise find those issues confusing and frustrating.
If you have to use another machine for analysis, make sure you read this blog post carefully, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx , as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
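(Once the copied files are in place, pointing WinDbg at them is usually just a matter of adding their folder to the symbol path and reloading the CLR data access DLL; the folder below is illustrative:)
.sympath+ C:\dumps\clrfiles
.cordll -ve -u -l
.loadby sos clr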
For your second question: because you use WinDbg to attach to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified. That is not easy to explain in a few words. The recommended way is to use ADPlus.exe or Debug Diag; even procdump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target process.
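(For example, procdump can capture a full dump without keeping a debugger attached; the PID is illustrative:)
:: -ma writes a full dump including all process memory
procdump -ma 1234 w3wp_full.dmp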
For memory leaks from unmanaged libraries, you should use the memory leak rule of Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDBG but I once had to analyse a dump file on my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice, since that was what was failing), I originally tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET framework differed between my dev PC and the server - both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question, it may be that you have no choice but to run WinDbg on your server; at least then you can be sure that the dump file matches your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
As long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should be resolved and call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace and take memory dumps at two points in time so you can compare the memory usage before and after the action that provokes the leak. Remember to disable GFlags afterwards!
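(A sketch of that GFlags step, assuming the leak is in the w3wp.exe worker process:)
:: enable the user-mode stack trace database for new w3wp.exe instances
gflags /i w3wp.exe +ust
:: recycle the pool, take dumps before and after the leaking action, then turn it off again
gflags /i w3wp.exe -ust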
You could also run DebugDiag on the server, which has automated memory pressure analysis scripts that work with .NET leaks.

java.net.BindException: No free port within range in Glassfish 3.1

Today I deployed an app to our production application server, GlassFish v3, through Jenkins CI to the autodeploy folder. The app server went down, and I cannot bring it back up.
My goal is to have the server up and running the same as prior to deploying the application. This is what I have done:
First, find the PID of the process listening on port 4848: netstat -nlept
Then kill that process: kill -9 PID
Remove the WAR file that Jenkins just put in the autodeploy directory, just in case that is the problem.
Start the server again: ./asadmin start-domain domain1
The server takes FOREVER to start!!! In fact, it never starts successfully: I cannot access the admin console on 4848 or any of the other apps that were already running. However, it leaves a process running on 4848.
I looked at jvm.log and server.log and found a java.net.BindException: No free port within range...
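(One thing worth checking with the same netstat approach as above is whether leftover java processes are still holding ports in that range; a sketch:)
# list listening TCP ports with their owning PIDs to spot stray java processes
netstat -nlept | grep java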
So my questions are as follows:
Do you know what is going on?
Do you know how to fix it?
Do you know of a way to speed up the ./asadmin start-domain domain1 process?
Note: In our QA app server (same version, same OS, same Java, same Grails) this does not happen. Really frustrated with this issue.
Thanks a lot for your help. Any help would be very much appreciated as this is a production issue that has several applications down for a few hours already.
Dario
I can deploy my application now; basically, it boiled down to increasing the MaxPermSize JVM option.
Under the config folder, edit domain.xml and change the default size to this:
-XX:MaxPermSize=256m
You can always increase it as necessary.
Also, if that is not enough, you can change the max heap size in that same file:
-Xmx512m
I have left it as is, but if required you can raise it to 6g or more on a 64-bit OS. A 32-bit OS will only recognize up to about 3.5g.
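(Equivalently, the setting can be applied with asadmin instead of editing domain.xml by hand; note the escaped colon, and remove any existing -XX:MaxPermSize entry first, e.g. with delete-jvm-options. A sketch for the default domain:)
./asadmin create-jvm-options "-XX\:MaxPermSize=256m"
./asadmin restart-domain domain1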
Hope this helps somebody else in the future, as this issue kept me at work until 9:00PM
UPDATE:
I had performance issues again, and I found this other solution on Joshi's tech blog:
http://joshitech.blogspot.com/2009/09/glassfish-application-server.html
Basically, add the following JVM options to domain.xml. They should improve GlassFish boot-up and deployment performance:
<jvm-options>-server</jvm-options>
<jvm-options>-Xms3000m</jvm-options>
<jvm-options>-Xmx3000m</jvm-options>
<jvm-options>-XX:MaxPermSize=192m</jvm-options>
<jvm-options>-XX:NewRatio=2</jvm-options>
<jvm-options>-XX:+AggressiveHeap</jvm-options>
<jvm-options>-XX:+AggressiveOpts</jvm-options>
<jvm-options>-XX:+UseParallelGC</jvm-options>
<jvm-options>-XX:+UseParallelOldGC</jvm-options>
<jvm-options>-XX:ParallelGCThreads=5</jvm-options>
