IsCmdBld.exe - Randomly Errors Out

For some reason, when I use the standalone build from InstallShield 2009 Professional, there are times when I get an error and times when the build completes successfully, with no distinguishable reason why. The error that shows up usually reads something like:
IsCmdBld.exe - Application Error
The instruction at "0xa781543" referenced memory at "0x6a19a778". The memory could not be "written"
Click on OK to terminate the program
Now, this message only comes up sometimes; it doesn't occur with any regularity or pattern. Does anybody have any ideas on this? Thanks.

If the buggy-memory theory doesn't pan out, I wouldn't be surprised if there were a bug in IsCmdBld.exe causing this.

I'd hazard a guess that you've got some dodgy/incompatible RAM. If you can take the machine offline and let memtest run for 24 hrs you should have enough information to debug.
I've had a similar issue in the past with a machine where the RAM/mobo combo would cause one or two errors about once every 24 hrs. The same RAM in another machine would be fine, and another brand of RAM in that machine would be fine. Running the machine off a UPS was also fine; I guess tiny electricity fluctuations were just enough to expose minor incompatibilities between the two pieces of hardware.
If that doesn't help, I'd suggest taking your question over to the InstallShield forums; the last IS bug I had turned out to be a genuine bug, which was lodged, fixed, and a hotfix distributed.

Related

What are the things to consider for running a standalone JavaFX application 24*7*365?

I am working on a project which is a standalone JavaFX application. It will run continuously, 24*7*365.
So I have a question in mind:
what do we need to consider so that this application runs smoothly and with high performance, 24*7*365?
Please guide me regarding this.
Details of the environment, for reference:
Java version: 1.8.0_121
Available RAM: 2GB
Allocated memory for application: -Xmx1524M
Hardware configuration: Processor - Intel Atom CPU D425 @ 1.80GHz x 2
OS: 32-bit Fedora 15
I will probably state the obvious here, but OutOfMemoryError is the main thing you should worry about. A small glitch in your code could make your app die fast or run extremely slowly under memory pressure.
I would say that you need to enable garbage-collection logs and monitor them. Also, is there a way for a JavaFX app to fail over to another instance if the current one is facing issues? There are tools for that for other kinds of apps, but I'm not sure about JavaFX... I mean, can you automatically shut down the current running application (and collect heap data) and automatically start a new one, so that later you can analyze what actually happened? It might not be feasible, and if it's not, you should run enough stress tests before you actually launch it into production.
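On Java 8 you can enable those GC logs with -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps, and -XX:+HeapDumpOnOutOfMemoryError captures a heap dump automatically when the app dies. For ongoing monitoring from inside the app itself, here is a minimal sketch using the standard java.lang.management API; the class name and the five-minute interval are assumptions of mine, not anything from the question:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Hypothetical heap watchdog: logs usage so a slow leak shows up long before OOM. */
    public final class HeapWatchdog {
        public static void start() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "heap-watchdog");
                t.setDaemon(true); // must not keep the app alive on its own
                return t;
            });
            scheduler.scheduleAtFixedRate(() -> {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.printf("heap used: %d of %d MB%n",
                        heap.getUsed() >> 20, heap.getMax() >> 20);
            }, 0, 5, TimeUnit.MINUTES);
        }
    }

With -Xmx1524M on a 2GB, 32-bit box the headroom is small, so a log like this is the cheapest way to see whether usage creeps up over days.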
One thing you should check first is whether your system suffers from the notorious memory problems that some Linux graphics drivers have. See for example my answer to this question here on SO:
Javafx growing memory usage when drawing image

Multiple threads issue in Visual Studio debugging

Environment: Win7 Pro, VS2013 Ultimate, asp.net website (not application), vb.net
(This is a major edit that focuses on the problem as currently understood. At this point it looks like it might be a user-code-specific issue.)
Almost complete resolution, 3/1 4:15 pm PDT: If I get rid of the nasty loop in the code, by just counting and quitting when the counter gets too high, the problem seems to go away entirely, including not leaving IISExpress with high CPU usage. Still don't know why, but maybe that's just another unexplainable IDE anomaly. Probably doesn't need any more attention.
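The guard described in that update is language-agnostic; here is a minimal sketch of the idea in Java (the thread's actual code is VB.NET and isn't shown, so all names here are hypothetical):

    /** Hypothetical iteration guard: fail loudly instead of pegging the process. */
    public final class GuardedLoop {
        private static final int MAX_ITERATIONS = 10_000; // assumption: tune to the workload

        public static void main(String[] args) {
            int iterations = 0;
            while (!done()) {
                doWork();
                if (++iterations > MAX_ITERATIONS) {
                    // Bail out with a diagnosable error rather than spinning at 100% CPU.
                    throw new IllegalStateException("loop exceeded " + MAX_ITERATIONS + " iterations");
                }
            }
        }

        private static boolean done() { return false; }   // placeholder for the real exit test
        private static void doWork() { /* placeholder for the real loop body */ }
    }

A guard like this doesn't fix the underlying loop, but it turns a silent hang into an exception with a stack trace, which matches the observation that the high CPU in IISExpress disappeared once the loop was bounded.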
The basic problem is breakpoints not getting hit during debug runs, and the "the process or thread has changed since last step" condition. It seems directly related to IISExpress.
If I go into Task Manager and kill IISExpress between debug runs, it doesn't have the problem with the breakpoints. I've tested that three times and it's consistent.
Plus IISExpress shows 25-50% CPU usage.
The problem started out of nowhere a couple of days ago on VS2013 Premium. I had been testing essentially the same code for several weeks with no issue. I reinstalled Ultimate with SP4 to see if that would make a difference; it did not.
The current code set that I'm testing does have a nasty loop in it that I'm trying to remove; however, if I step through the loop just a few times and then kill the debug session, I still get the problem.
I have another website on the same system that I'm testing that does not exhibit this problem and does not leave IISExpress with high CPU usage between debug runs.
Any help with this would be appreciated.

Debugging ASP.Net shared pool techniques

I work for a hosting company, providing ASP.Net 3.5 hosting. Honestly, we usually provide quite good uptime and speed. However, we are having problems with one of our shared pools. As usual, we try to maximize the number of sites that can run in one pool.
Lately we have been suffering continuous hangs. The process doesn't crash, but it starts to throw OutOfMemoryExceptions or stops processing requests. We think one of the applications is responsible (it would be great to know which one).
I have some memory dumps that I have processed with WinDbg. For example, I've run:
!dumpheap -stat
This command provides a global summary of memory usage by object type. Nothing remarkable there... I've also checked:
~*e!clrstack
I see various unmanaged threads. The managed ones show stacks like:
[HelperMethodFrame_1OBJ: 0f30e320]
System.Threading.WaitHandle.WaitMultiple(System.Threading.WaitHandle...
0f30e3ec 7928b3ff System.Threading.WaitHandle.WaitAny(System.Threading...
0f30e40c 7a55fc89 System.Net.TimerThread.ThreadProc()...
0f30e45c 792d6e46 System.Threading.ThreadHelper.ThreadStart_Context(System...
0f30e468 792f5781 System.Threading.ExecutionContext.runTryCode(System...
At least, I haven't seen exceptions being thrown or anything similar at that moment. I've also had access to two scripts written by Tess Ferrandez for calculating the number and size of sessions. No promising results there either; nothing peculiar or remarkable (24000 bytes on average).
I would like to know what kinds of strategies you usually use when facing this kind of problem. Have you ever used Microsoft Support?
Thanks a lot!
Very nice question; one badly behaved ASP.Net app can hang all the web apps sharing the same pool...
OK, let's see... if the problem is memory, get VMMap from Sysinternals, and also Process Explorer.
Run them both, and in Process Explorer find the PID of the pool you wish to investigate; it's under inetinfo.exe and is probably named aspnet_wp.exe.
Now in VMMap, attach to that pool using the PID, and voila: you can see the memory and the open images (aspx files), which are probably numerous and causing the problems. The files you will see live in the ASP.Net Framework's temporary directory, but you can map them back to see which site each one comes from.
If the problem is not memory, but the programmer has created bad loops or even thread sleeps, then I think Process Explorer is the way to investigate the pools and find what's eating the power.
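If you can catch a dump while the pool is hung, a few more standard WinDbg/SOS commands complement the !dumpheap and !clrstack the asker already ran (suggestions only; the note after each command is what it shows):

    !threadpool      thread pool utilization and queued work requests
    !runaway         CPU time consumed per thread, to spot a spinning thread
    !syncblk         held monitors, to spot lock contention or deadlocks
    !eeheap -gc      managed heap sizes per generation
    !finalizequeue   objects waiting on the finalizer thread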
Additional
Maybe a pool recycle every 15 minutes can solve this issue?
More about
These videos have a lot of information about VMMap and the memory manager:
Mysteries of Windows Memory Management, Part 1 and Part 2
There are many tools, but it sounds like your main goal is to determine what's causing the problem. This can be done very simply with a binary search.
Break the pool in half, and see which half still hangs. Repeat until you have a failing pool with only one application in it; with 32 sites, for example, five splits isolate the culprit.
This already takes only O(log2 n) rounds, and you can shrink that further, down to a single round in the extreme, by dividing into more than two sub-pools.
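The same bisection can be written out as code to make the procedure concrete; this is purely illustrative, with hangs() standing in for the manual step of redeploying a subset of sites to a scratch pool and watching it:

    import java.util.List;

    /** Hypothetical sketch of bisecting a pool to isolate one misbehaving app. */
    public final class PoolBisect {
        interface PoolTester {
            boolean hangs(List<String> apps); // deploy these apps alone; does the pool still hang?
        }

        static String isolate(List<String> apps, PoolTester tester) {
            while (apps.size() > 1) {
                List<String> firstHalf = apps.subList(0, apps.size() / 2);
                // Keep whichever half still reproduces the hang.
                apps = tester.hangs(firstHalf)
                        ? firstHalf
                        : apps.subList(apps.size() / 2, apps.size());
            }
            return apps.get(0); // the single remaining suspect
        }
    }

This assumes a single misbehaving application; if two apps only misbehave together, bisection can miss them.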

How can I trace an apparent memory leak in an asp.net application?

Some background info:
We have several websites running on a 64-bit machine with IIS6
These websites all have the same core code, but different skins and content
We have a SQL 2005 database which is fairly heavily used throughout the site
Historically we've used SQL stored procs, but have been gradually transitioning to NHibernate. The majority of our code uses NHibernate now, but not all.
These sites have been running fine on our live web server for a while, although we get a few errors a day regarding SQL connectivity / deadlocking.
Last Thursday we noticed the sites going very slow, then checking task manager revealed one of the websites was hogging over 1.6Gb of memory. Ever since then we've been restarting the app and watching it slowly increase in size over the course of the day.
We apparently have a memory leak (or at least, that's the effect), but I'm losing hair trying to work out how to trace it.
It only appears to be happening on this one website, even though as far as I am aware nothing had changed in the code before it started happening. It is, however, our busiest website, so it could be a traffic issue.
Debug Diagnostics hasn't revealed any issues.
Refreshing certain pages very quickly causes the memory to jump up rapidly, then fall slightly, but the overall progression is always upwards.
I cannot replicate the issue on our test servers or locally. Probably because the traffic has something to do with it.
My suspicion is that the problem lies in database connectivity / locking. However, I'm not sure how that would cause the problem specified.
Any ideas?
Edit
Okay, so I'm not exactly sure I've found the problem, but we're getting closer. It's definitely SQL related. The error log reveals lots of errors since last Thursday.
It all happened after we ran some Windows updates on our servers. One of the updates failed on the SQL server, so I'm not sure whether that caused some problems.
The warnings we're getting are:
SQL Server has encountered XX occurrence(s) of I/O requests taking longer than 15 seconds to complete on file .. tempdb.mdf
Where XX is anything between 17 and 90! Does that sound like a deadlocking issue?
Followed by the following errors:
Unable to complete login process due to delay in opening server connection
These coincide with our log times for when the websites have been "blipping".
We've increased the page file size on the SQL server to the recommended size; it was set to a max of 4Gb, but the recommended size was 12Gb. I think we may need to roll back the Windows updates we did on Thursday if that doesn't fix it.
Unfortunately I can't get into Activity monitor as it tells me Timeout expired!
Edit
Okay, after a reboot I'm into Activity Monitor. How many sleeping processes would you say is normal? We have roughly 127 sleeping, and that's serving over 10 websites.
If there is a deadlock or timeout issue, will NHibernate not clean up its connections properly?
Okay, so in the end it seems it's quite complex: SQL deadlocks and data problems, heightened, it seems, by anti-virus software that was locking up or choking on a file.
Turning off the anti-virus reduced the problems, but we still need to resolve the underlying data issues.

Does hyperthreading lead to unstable systems?

I'm building a PC with the new Intel i7 quad-core processor. With hyperthreading turned on, it will report 8 cores in Task Manager.
Some of my colleagues are saying that hyperthreading will make the system unreliable and suggest turning it off.
Can any of you good people enlighten me and the rest of the Stack Overflow users?
Follow-up: I've been using hyperthreading constantly, and it's been spot on. No instability whatsoever. I'm using:
Microsoft Server 2008 64 bit
Microsoft SQL Server 2008 64 bit
Microsoft Visual Studio 2008
Diskeeper Server
Lots of controls (Telerik, Dundas, Rebex, Resharper)
Stability isn't likely to be affected, since the abstraction is very low level and the OS just sees it as another CPU to provide work to. However, performance is another matter.
In all honesty I can't say whether this is still the case, but at least when the HT-enabled CPUs first came out, there were known problems with at least some applications. For example, MySQL and multi-threaded apps like the Java application I support for my day job were known to show decreased performance when HT was enabled. We always recommended it be disabled, at least for our particular use case of a server-side enterprise application.
It's possible that this is no longer an issue, and in a desktop environment it is less likely to be a problem for most use cases. The ability to split work on the CPU generally leads to more responsive applications when the CPU is heavily utilized. However, the context switching and overhead can be a detriment when the app is already heavily threaded and CPU-intensive, such as in the case of a database server.
Off the top of my head I can think of a few reasons your colleagues might say this.
There have been several articles about SQL performance suffering under hyperthreading. I believe it winds up doing too much context switching or cache thrashing; I can't remember exactly.
Early on, going from single-proc to multi-proc (or, more likely for most people, to hyperthreaded procs) brought many threading issues into the open: race conditions, deadlocks, etc., that had never shown up before. Even though it's a code problem, some people blamed the procs.
Are they making the same claims about multi-core/multi-proc or just about hyperthreaded?
As for me, I've been developing on a hyperthreaded box for 4 years now, only problem has been a UI deadlock issue of my own making.
Hyperthreading will mainly make a difference in scheduler behaviour/performance when dispatching threads to the same CPU as opposed to different CPUs...
It will show up in a badly coded application that does not handle race conditions between threads...
So it is usually bad design/code that suddenly finds a failure mode.
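As an illustration, here is a minimal Java sketch (hypothetical code, not from this thread) of exactly that kind of latent race: remove 'volatile' and the worker may spin forever once threads run with real hardware parallelism, because nothing forces it to re-read the flag.

    import java.util.concurrent.TimeUnit;

    public final class StopFlagDemo {
        // Without 'volatile' this program is allowed to hang on a multi-core or
        // hyperthreaded machine: the worker can keep reading a stale copy of the
        // flag instead of observing the main thread's write.
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (running) {
                    // spin: stands in for real work
                }
                System.out.println("worker observed the stop flag");
            });
            worker.start();
            TimeUnit.SECONDS.sleep(1);
            running = false; // visible to the worker because the flag is volatile
            worker.join();
        }
    }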
Unreliable? I doubt it. The only disadvantage of hyperthreading that I can think of is that if the OS is not aware of it, it might schedule two threads on one physical processor while other physical processors are idle, which will degrade performance.
There was a problem with SQL Server and hyperthreading for some queries because SQL Server has its own scheduler; capping the max degree of parallelism at 1 (maxdop 1) would solve that.
To whatever degree Windows is unstable, it's highly unlikely that hyperthreading contributes significantly (or it would have made big news by now.)
I've had a hyperthreading PC for a couple years now. Not that many cores, but it's worked fine for me.
Wish I had test data to prove your colleagues wrong, but it sounds like it's just my opinion versus theirs at this point. ;)
The threads in a hyperthreaded CPU share the same cache, and as such don't suffer from the cache-consistency problems that a multiple-CPU architecture can. However, a developer writing software with multiple CPUs in mind will (or should) be writing with write-through semantics (if I recall the term correctly), i.e. all writes are flushed from the cache immediately.
As far as I know, the OS doesn't see hyperthreading as any different from having actual multiple cores; from its point of view there is no difference.
So, aside from the fact that hyperthreading's "extra cores" aren't "real" (in the strictly technical sense) and don't have the full performance of "real" CPU cores, I can't see that it'd be any less reliable. Slower, perhaps, in some rare instances, but not less reliable.
Of course, it depends on what you're running - I suppose some applications might get "down & dirty" with the CPU and hyperthreading might confuse them, but that's probably pretty rare.
I myself have been running a PC with hyperthreading for several years now, and I have seen no stability problems.
Sorry I don't have more concrete data!
I own an i7 system, and I haven't had any issues.
If it works w/ multiple cores, it works with hyperthreading.
The short answer: yes.
The long answer, as with almost every question, is "it depends". Depends on the OS, the software, the CPU revision, etc. I have personally had to disable hyperthreading on two occasions to get software working properly (one, with the Synergy application, and two, with the Windows NT 4.0 installer), but your mileage may vary.
As long as Windows is installed detecting multiple HT cores from the beginning (it loads some relevant drivers and such), you can always disable (and re-enable) HT after the fact. If you have bizarre stability issues with specific software that you can't resolve, it's not hard to disable HT to see if it has any impact.
I wouldn't disable it to start with because, frankly, it will probably work fine in 99.99% of your daily use. But be aware that yes, it can occasionally cause bizarre behaviors, so don't rule it out if you happen to be troubleshooting something very odd down the road.
Personally, I've found that hyperthreading, while not causing any problems, doesn't actually help all that much either. It might be like having an extra 0.1 of a processor. On my HT machine at work, I only very seldom see my CPU go above 50%. I don't know if HT has gotten any better with newer processors like the i7, but I'm not optimistic.
Other than hearing a few reports about SQL Server, all I can report is positive. I get about 25% better performance on heavy multi-threaded apps with HT on. Have never run into a problem with it, and I'm using a first generation HT processor...
Late to the party, but for future reference:
I'm currently having an issue with this with SQL Server. Basically, my understanding is that the two hyperthreads on a processor share the same L1 and L2 cache, which can cause issues between them. Citrix also appears to have this problem, from what I'm reading.
Slava Ok wrote a good blog post on it.
I'm here very late, but I found this page via Google, and I may have discovered a very subtle problem. I have an i7 950 running Server 2003, and it's great. Initially I left hyperthreading on in the BIOS, but during some testing and pushing things hard, I ran a program called "crashme" by Carrette, which tries to crash an OS by spawning a process and feeding it garbage to run. My dual-Opteron setup ran it forever without a problem, but the 950 crashed within the hour. It didn't crash for anything else unless I did something stupid, so it was very surprising. On a whim I turned off HT and ran the program again, and it ran all night, even multiple instances of it. One anecdote doesn't mean much, but try it and see what happens. Also, it seems that the processor runs slightly cooler at any given load with HT turned off. YMMV.
