Meteor build takes a long time after I save my JavaScript file

After I save my JavaScript file, Meteor rebuilds the app, but it takes a very long time (I have to wait 15 minutes or more).
Are there any solutions for this?
Thanks!

Based on my personal experience:
If you quit a build partway through, some intermediate build folders may be left undeleted.
In that case, delete those leftover folders first.
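A minimal sketch of that cleanup, assuming the usual Meteor project layout where intermediate build output lives under .meteor/local (the exact leftover folder names may vary by Meteor version):

# from the project root: remove intermediate build output only
rm -rf .meteor/local/build .meteor/local/bundler-cache
# or wipe all local state in one go; note this also deletes the local development database
meteor reset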
If you are using Windows, RAM can be an issue. If your machine has very little memory, building an ever-growing Meteor project becomes very annoying. I learned this when I upgraded my machine from a 1st-generation Core i5 with 6GB DDR2 RAM and a 500GB HDD to a 2nd-generation Core i7 with 8GB DDR3 RAM and a 248GB SSD. Machine configuration plays a vital role in Meteor builds, for both development and production environments: the earlier configuration took 6 minutes to build, while the new one takes less than a minute.
If you have antivirus software running in the background, it too can slow down the Meteor build process, since it scans the many files a build reads and writes.
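If the antivirus turns out to be the culprit, a common mitigation is to exclude the project directory from real-time scanning. As a sketch, with Windows Defender from an elevated prompt (C:\projects\myapp is a placeholder for your project root):

powershell -Command "Add-MpPreference -ExclusionPath 'C:\projects\myapp'"

Other antivirus products have equivalent exclusion settings in their real-time protection options.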

Related

Does it matter if my website has build errors?

I am a C# WinForms programmer and am not used to ASP.NET.
As a WinForms programmer, I build regularly to detect syntax errors.
Recently I opened a Kentico website in Visual Studio and to my surprise found that there were build errors.
Does this matter?
My instinct is to go about correcting the site until it builds. This is a side track from what I set out to do.
If you are attempting to build any kind of quality into your project/software, then yes, it does matter if it builds.
Regarding Kentico and build times: if you're using a website rather than a web project, the build times are typically longer, ranging anywhere from a few minutes to, in cases I've seen, upwards of an hour. The build times also depend greatly on the machine doing the building. A machine with a Celeron processor, 1GB of RAM and a 5400 RPM drive will take much longer to build than one with an i7 processor, 16GB of RAM and a solid-state drive that can read/write 500+ MB/s. Also keep in mind that Kentico out of the box contains over 9000 system files, so as a website it will take some time to build.
One of the first things I check when a site doesn't build is whether all the referenced DLLs are present in the website/project. If they are not, this causes several errors and is usually a very simple fix. If you have errors in code that resides in the /App_Code directory, your site will NOT run at all when you publish it. If you have errors in any other directory, the site will run, BUT the pages that reference those code files will display errors. So in your instance, if you have webpart files in the /CMSWebparts/OurCompany folder and those webparts are placed on pages within the website, those pages will error out even though the rest of the site runs.
In my opinion, just fix the errors and be done with them. Then check the code into a version control system to keep track of the changes.
Does this matter?
It depends on what you are trying to achieve with your website. If you want to make it available to the public then building is definitely something you should consider as top priority. If on the other hand you want to have the source code open in Visual Studio on your local machine, just for reading purposes, then building is not necessary.

Meteor File Change Watcher is taking too long to recognize changes

My Meteor file change watcher is taking forever to detect my file changes and refresh the browser, sometimes longer than a minute. This makes developing a real pain.
My Meteor is running inside an Ubuntu VM. The project folder lives on my OS X host and is mounted inside the VM, so I'm aware that inotify/kqueue won't work and that Meteor should fall back to stat polling.
I even set the environment variables according to this post, but the behavior is still the same.
METEOR_WATCH_FORCE_POLLING=true
METEOR_WATCH_POLLING_INTERVAL_MS=500
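For reference, applying these when launching Meteor would look something like this, assuming a bash shell inside the VM:

export METEOR_WATCH_FORCE_POLLING=true
export METEOR_WATCH_POLLING_INTERVAL_MS=500
meteor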
Is there any way to fix this annoying behavior?
The folder from OS X is mounted as an NFS share, by the way.
Update:
I did some testing, and it makes no difference whether the application has a large number of packages or takes long to build; even with the very basic app you get after meteor create, I still see the same behavior.
If I change a file in the VM (so that inotify works) the refresh is happening instantly.
I have apps in production that incrementally become slower as packages are added, both third-party and private ones. I also discovered that adding third-party libs directly to client/lib increases the reload time.
I'm not sure if Meteor 1.0.2 actually solved the problem of watching the directory efficiently.
What version of Meteor are you using?
Much of this lag on reload was addressed in Meteor 1.0.2. While it still takes some time, I would say it is ~5x faster in my experience.

asp.net mvc 4 app 'precompilation'

I have an asp.net mvc4 application that we deploy to about 1400 clients. Our current deployment process goes like this:
:: Publish the project files to a local directory
%msbuild% RPortal.csproj "/p:Platform=AnyCPU;Configuration=Release;ConfigurationName=Release;SolutionDir=%solutionDir%;PublishDestination=.\Deploy\Release" /t:PublishToFileSystem
And then we have some powershell scripts that push the new build to each of our clients, using other tools to help with the sync.
We are noticing some severe slowness in application boot-up times (sometimes upwards of 5 minutes), and one of the approaches we've investigated to solve this is ASP.NET precompilation (http://msdn.microsoft.com/en-us/library/vstudio/bb398860%28v=vs.100%29.aspx).
In some experiments with the idea, it appears that calling aspnet_compiler.exe on our published output does, in fact, create some new files (a couple of DLLs and a few *.compiled files).
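For reference, a minimal sketch of that precompilation step (the Deploy\Precompiled output path is a placeholder; -v sets the virtual path the site will run under and -p points at the published output):

:: run from %windir%\Microsoft.NET\Framework\v4.0.30319 (or use the full path to the tool)
:: precompile the published site into a separate target folder
aspnet_compiler -v / -p .\Deploy\Release .\Deploy\Precompiled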
My questions are as follows:
1. How does this compilation differ from the primary compilation from the .cs source files?
2. Does this compilation happen regardless, at first run, or only when run manually?
3. (Related to 2) Does this compilation survive app pool restarts and server reboots?
With our current scenario, it seems that we are killing our servers trying to get 1400 applications to start up (yes, they all live on one web server... that's not a situation I am in control of). The server will be humming along, with no particular problems and reasonable resource consumption, and then, all of a sudden, our CPU pegs at 100% and stays there. The only factor we can tie it back to is that it happens when more than 5 (or so) of the 1400 apps are booting up at once.
Our hope is that precompiling will front-load most of the app start burden, but I clearly don't understand what's really going on here.
Any light you might shed would be most appreciated.

DLL deployment increases startup time of Sitecore site

We have a Sitecore 6.6 instance which is used to host multiple sites. It is hosted in IIS 7.5. We developed custom Sitecore sublayouts and pipelines which are used across websites.
When any DLL is deployed to the bin folder, the Sitecore site takes a long time to start up (8-10 minutes). But when IIS is reset, the startup time is short (30-40 seconds).
Why is the application startup time longer after a DLL deployment than after an IIS reset?
Any suggestions for improving the application startup time after a DLL deployment?
Update 1: The startup time after DLL deployment impacts our build process, as it increases the overall build deployment time in all environments (DEV, STG, LIVE).
A profiling snapshot of the w3wp process revealed two major hotspots:
Sitecore.Threading.Semaphore.P
Sitecore.IO.FileWatcher.Worker
Update 2: After following the deployment approach suggested by Vicent, a profiling snapshot of the w3wp process revealed a major hotspot at
Sitecore.Web.UI.WebControls.Sublayout.GetUserControl(Page)
Further analysis of a memory dump showed that the thread was waiting for JIT compilation of the newly deployed DLL.
It sounds to me like your problem is not the startup of Sitecore, but the shutdown.
When you copy your DLL, the FileWatcher detects the change in the bin folder (and writes it to the logs) and tries to shut down Sitecore (it logs this too). But if Sitecore has tasks running on other threads (indexing, publishing, scheduled tasks, etc.), the semaphore waits until those threads finish normally.
That's why, when you "kill" the process without waiting for the threads to finish, Sitecore starts up quickly.
I see this behaviour in my environments too, so when I need a quick restart I copy the DLL, wait a few seconds so that Sitecore at least tries to shut down, and then kill the w3wp.exe process that belongs to my application pool. I wouldn't advise anyone to do this, but I don't know of a way to "kindly" kill the threads... Maybe somebody knows how to force a shutdown nicely.
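For reference, a sketch of that kill step, using IIS's appcmd to find the right worker process (run from an elevated command prompt; replace 1234 with the PID shown for your application pool):

:: list worker processes with their application pool names and PIDs
%windir%\system32\inetsrv\appcmd list wp
:: kill the w3wp.exe that belongs to your pool
taskkill /F /PID 1234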
This blog post by Alex Shyba has some interesting pointers for improving Sitecore's startup time (though they might not be applicable if you're talking about a live environment rather than a DEV environment).
It might also be worth checking your prefetch caches and running through the Performance Tuning Guide if you haven't done that yet.
I've seen this problem before. It happened on version 6.5, but I've not seen a fix for it in the release notes since then.
Sitecore Support has a hotfix for this - it was indeed related to something with their Filesystem watcher tasks. You would need to raise a ticket with them, to get the hotfix or additional information.
My support ticket reference for this issue is 370593. The hotfix has issue 323775. If you mention this in your support ticket, it should speed up the process a bit - if it is indeed the same issue you're experiencing.

WinDBG - Analyse dump file on local PC

I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as the one where the dump file was created. However, I have also read that some developers do analyse dump files on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the W3WP process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine saves you from SOS loading issues later on. Unless you are familiar with WinDbg and SOS, you will find those issues confusing and frustrating.
If you have to use another machine for the analysis, make sure you read this blog post carefully: http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx. It shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
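A sketch of the kind of WinDbg steps that post describes, once mscordacwks.dll from the source machine has been copied into a local folder (C:\dacfiles is a placeholder):

.symfix
.cordll -ve -u -lp C:\dacfiles
.loadby sos clr

The last line loads SOS for .NET 4 dumps; for .NET 2.0/3.5 dumps it would be .loadby sos mscorwks.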
For your second question: because you attach WinDbg to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified. This is not easy to explain in a few words. The recommended approach is to use ADPlus.exe or Debug Diag; even procdump from SysInternals is better. Those tools are designed for dump capture and have minimal impact on the target process.
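For example, a minimal procdump capture of a worker process looks like this (1234 is a placeholder for the w3wp PID):

procdump -ma 1234 mydump.dmp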
For memory leaks from unmanaged libraries, you should use the memory leak rule of Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDBG, but I once had to analyse a dump file from my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice, since that was what was failing), I originally tried to analyse it on my local dev PC but ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET Framework differed between my dev PC and the server: both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question: you may have no choice but to run WinDBG on your server. At least that way you can be sure the dump file matches your environment.
It is not necessary to debug on the machine where the dump was captured unless the problem is difficult to reproduce on your development machine.
As long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should resolve and the call stacks should display correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace option and take memory dumps at two points in time, so you can compare memory usage before and after the action that provokes the leak. Remember to disable GFlags afterwards!
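As a sketch, from an elevated command prompt (the image-file setting takes effect the next time the process starts):

:: enable user-mode stack traces for new w3wp.exe instances
gflags /i w3wp.exe +ust
:: ...take the before/after dumps, then turn it off again
gflags /i w3wp.exe -ust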
You could also run DebugDiag on the server, which has automated memory-pressure analysis scripts that work with .NET leaks.
