My Qt static builds run out of memory on other systems

When I build my application statically, it comes out to just over 5 MB, so it's a small, simple program. However, any system with under 3 GB of RAM can't run the program, saying there's not enough memory. There is nothing very memory-intensive in the program, and I don't allocate any memory explicitly. Any thoughts on what's causing this?

I believe that even built code of less than 1 MB can easily fill 10 GB of memory at runtime. Make sure that your code does not allocate memory it doesn't need.

There was a problem with the static build. I first got it to work by exporting from the Visual Studio plugin, then I rebuilt the SDK and the program again, and everything worked fine from Qt Creator.

Related

WinDbg - Analyse dump file on local PC

I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine where you create the dump file. However, I have also read that some developers do analyse the dump file on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the W3WP process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues later on. Unless you are familiar with WinDbg and SOS, you will find those issues confusing and frustrating.
If you have to use another machine for analysis, make sure you read this blog post carefully, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx, as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
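Once the mscordacwks.dll and sos.dll copied from the source machine are sitting in a local folder, the session on the analysis machine usually looks roughly like this (a sketch only; the folder paths are placeholders, not taken from the blog post):

    .sympath+ C:\dumps\serverfiles      $$ folder holding the copied mscordacwks.dll / sos.dll
    .symfix+ C:\symbols                 $$ also use the public Microsoft symbol server
    .cordll -ve -u -l                   $$ verbose unload/reload of the CLR data access DLL
    .load C:\dumps\serverfiles\sos.dll  $$ load the SOS that matches the server's CLR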
For your second question: because you attach WinDbg to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified; that is not easy to explain in a few words. The recommended way is to use ADPlus.exe or Debug Diag. Even ProcDump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target process.
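To illustrate the capture side (the process name and output paths below are placeholders), a ProcDump invocation for a full memory dump might look like this:

    :: take one full (-ma) dump of the worker process right away
    procdump -ma w3wp.exe C:\dumps\w3wp_full.dmp
    :: or write a full dump automatically once memory commit exceeds ~1500 MB
    procdump -ma -m 1500 w3wp.exe C:\dumps\w3wp_high.dmp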
For memory leaks from unmanaged libraries, you should use the memory leak rule of Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDBG but I once had to analyse a dump file on my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice since that was what was failing), originally I tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason being that the exact version of the .NET framework differed between my dev PC and the server - both were .NET 4 but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question: it may be that you have no choice but to run WinDbg on your server; at least that way you can be sure that the dump file matches your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
As long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace database and take memory dumps at two points in time, so you can compare the memory usage before and after the action that provokes the leak; remember to disable GFlags afterwards!
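If you go the GFlags route, the commands are roughly as follows (assuming the leaking process is w3wp.exe; substitute your own image name):

    :: record a call stack for every user-mode heap allocation made by w3wp.exe
    gflags /i w3wp.exe +ust
    :: ...restart the process, take a dump, reproduce the leak, take a second dump...
    :: then switch the stack trace database off again
    gflags /i w3wp.exe -ust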
You could also run DebugDiag on the server, which has automated memory pressure analysis scripts that work with .NET leaks.

WinDbg can't load sos clr

I'm not sure that WinDbg is the right tool, but that's what I'm trying now.
My ASP.NET app seems to have a memory leak; it keeps growing by about 3 MB almost every time a page loads (then it goes back down...).
I want to read the entire process memory and see exactly what's being stored that is unnecessary.
So I run WinDbg and attach to the webserver40.exe process,
then I try
.loadby sos clr
and I get
The call to LoadLibrary(C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos) failed, Win32 error 0n193
"%1 is not a valid Win32 application."
Please check your debugger configuration and/or network access.
It seems that I have this sos.dll in both Framework AND Framework64.
I tried both, using
.load C:\Windows\Microsoft.NET\Framework64\v4.0.30319\sos
but nothing loads.
I don't understand why it's looking for a valid 32-bit app. I'm on a 64-bit PC with 64-bit Windows.
How can I get this sos thing to load?
Also when I start I get this warning
WARNING: Process 7240 is not attached as a debuggee
The process can be examined but debug events will not be received
I also tried .loadby sos mscorwks; it didn't work, but I understand that was discontinued. I'm on ASP.NET 4.
I also read somewhere that the code should be stopped in the debugger before loading SOS, but that just hangs VS 2010.
Thank you very much.
Again, if there's another tool that could better help me, I'm all ears :-)
WebDev.WebServer40.exe is a 32-bit executable. To debug it you need to use the 32-bit WinDbg. Visual Studio, as well as the Cassini web server, still runs in 32-bit mode.
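In other words, once you attach the x86 windbg.exe to WebDev.WebServer40.exe, a session along these lines should work (a sketch, not from the original answer):

    .loadby sos clr      $$ picks up the 32-bit SOS sitting next to the loaded clr.dll
    !dumpheap -stat      $$ managed heap summary grouped by type
    !eeheap -gc          $$ sizes of the GC heap segments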
For your other question: yes, WinDbg is a great tool to investigate memory leaks in managed code. This blog will get you started. However, in your case I would not be so sure you have a memory leak.
You are saying that memory goes down eventually. That means it is not a memory leak, because leaked memory never gets released.
Do not waste your time investigating memory problems in Cassini. There are a lot of differences between IIS and Cassini that would make your findings inapplicable to the production environment. Even if you find that Cassini is in fact leaking, that does not mean IIS would be.

Calling functions in Qt from third-party DLL works in debug mode, crashes in release

I use a third-party DLL (FTD2xx) to communicate with an external device. Using Qt4, everything works fine in debug mode, but the release build crashes silently after successfully completing a called function. It seems to crash at return, but if I write something to the console (with qDebug) at the end of the function, it sometimes does not crash there, but a few, or a few dozen, lines later.
I suspect a stack that is not cleaned up properly, which the debug build can survive but the release build chokes on. Has anyone encountered a similar problem? The DLL itself cannot be changed, as the source is not available.
It seems that reducing the optimization level was the only way around it. The DLL itself might have problems, as a program that does nothing but call a single function from that DLL crashes the same way if optimization is turned on.
Fortunately, the size and speed lost by the change in optimization level is negligible.
Edit: for anyone with similar problems on Qt 5.0 or higher: If you change the optimization level (for example, to QMAKE_CXXFLAGS_RELEASE = -O0), it's usually not enough to just rebuild the application. A full "clean all" is required.
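For reference, the override can live in the .pro file so it affects only release builds; a minimal sketch for a gcc/MinGW kit (re-run qmake and do a full clean after changing it):

    # drop the default release optimization and force -O0 instead
    QMAKE_CXXFLAGS_RELEASE -= -O2
    QMAKE_CXXFLAGS_RELEASE += -O0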
Be warned - the EPANET library is not thread safe; it contains a lot of global variables.
Are you calling two methods of that library from different threads?

Qt performance when running executable outside of Qt Creator is awful!

I just tried running a program that I've been developing with Qt outside of Qt Creator. I double-clicked the program in /release, resolved all the missing DLLs, and found that my app performs terribly slowly compared to when it is launched from within Qt Creator. What might be the reason for this?!
Well, I have never encountered this. Are there any files you load in your application that are big enough to make it slow, like reading a big flat file and extracting its contents? If so, just make sure that the file's contents haven't changed between running through Qt Creator and running it outside. That's my guess, though. For me, performance doesn't differ much between the two cases.

Why is aspnet_compiler.exe so slow (and can it be made any faster)?

During our build process we run aspnet_compiler.exe against our websites to make sure that all the late-bound stuff in ASP.NET/MVC actually builds (I know nothing about ASP.NET but am assured this is necessary to prevent finding the failures at runtime).
Our sites are fairly large, with a few hundred pages/views/controls/etc.; however, the time taken seems excessive, in the 10-15 minute range (for reference, this is longer than it takes to compile the entire solution of approximately 40 projects, and we're only pre-compiling two website projects).
I doubt that hardware is the issue as I'm running on the latest Quad core Intel chip, with 4GB RAM and a WD Velociraptor 10,000rpm hard disk. And part of what's odd is that the EXE doesn't seem to be using much CPU (1-5%) and doesn't seem to be doing an awful lot of I/O either.
So... is this a known issue? Why is it so slow? And is there any way to speed it up?
Note: To clarify a couple of things people have answered about, I am not talking about the compilation of code within Visual Studio. We're using web application projects already, and the speed of compilation of those is not the issue. The problem is the pre-compilation of the site after these projects have already been compiled (see this MSDN page for more details) as part of the dev build script. We are performing in-place pre-compilation, not copying the files to a target directory.
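For context, the in-place pre-compilation described above boils down to a call like one of these (the virtual path and physical path are placeholders for whatever the build script passes in):

    :: in-place pre-compilation of a site registered in IIS under the virtual path /MySite
    aspnet_compiler -v /MySite
    :: or the same thing for a site identified by its physical path
    aspnet_compiler -v / -p C:\Builds\MySite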
Switching to the Roslyn compiler will most likely improve precompile time significantly. Here is a good article about it: https://devblogs.microsoft.com/aspnet/enabling-the-net-compiler-platform-roslyn-in-asp-net-applications/.
In addition to this, make sure that batch compilation is enabled by setting the batch attribute to true on the compilation element.
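The batch setting mentioned above lives under system.web; a minimal web.config fragment (batch="true" is already the default, the point is to make sure nothing has turned it off):

    <configuration>
      <system.web>
        <!-- batch compilation lets ASP.NET compile many pages per assembly instead of one at a time -->
        <compilation debug="false" batch="true" />
      </system.web>
    </configuration>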
Simply put, the aspnet_compiler uses what is effectively a "global compiler lock" whenever it starts pre-compiling any individual .aspx page; it is basically only allowed to compile each page sequentially.
There are reasons for this (although I personally disagree with them): primarily, by detecting and preventing circular references that could cause an infinite loop of sorts, and by ensuring that all dependencies are properly built before the page that requires them is compiled, they avoid a lot of "nasty CS issues".
I once started writing a massively forked version of aspnet_compiler.exe the last time I worked at a web company, but got tied up with "real work" and never finished it. The biggest problem is the ASPX pages: the MVC/Razor stuff you can parallelize the hell out of, but the ASPX parse/compile engine is about 20 levels deep in internal and private classes/methods.
Compiler should generate second code-behind file for every .aspx page, check
During compilation, aspnet_compiler.exe will copy ALL of the web site files to the output directory, including css, js and images.
You'll get better compilation times using Web application project instead of Web site model.
I don't have any specific hot tips for this compiler, but when I have this sort of problem, I run ProcMon to see what the process is doing on the machine, and I run Wireshark to check that it isn't spending ages timing out on network access to some long-forgotten machine referenced in a registry key or environment variable.
Just my 2 cents.
One of the things that slows down ASP.NET view precompilation significantly is the -fixednames command-line option of aspnet_compiler.exe. Do not use it, especially if you're on Razor/MVC.
When publishing the web app from Visual Studio, make sure you select "Do not merge", and do not select "Create separate assembly", because that is what causes the global lock and slows things down.
More info here https://msdn.microsoft.com/en-us/library/hh475319(v=vs.110).aspx
