Environment.WorkingSet incorrectly reports memory usage - asp.net

Environment.WorkingSet incorrectly reports the memory usage for a web site that runs on Windows 2003 Server.(OS Vers: Microsoft Windows NT 5.2.3790 Service Pack 2, .NET Vers: 2.0.50727.3607)
It reports memory as Working Set(Physical Mem.): 1952 MB (2047468061).
Same web site runs locally on Windows Vista with a Working Set(Physical Mem.): 49 MB (51924992).
I have limited access to the server, and support is very limited :(.
So I have computed the total free memory by traversing the address space with VirtualQuery.
The total of pages with state MEM_FREE is 1300 MB.
(I guess the server has 4 GB of RAM; PAE is not enabled, and the maximum user-mode virtual address is 0x7fff0000.)
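For reference, the traversal I mean looks roughly like the sketch below; the P/Invoke declarations are my approximation for this post, not the exact code I ran:

using System;
using System.Runtime.InteropServices;

static class AddressSpaceWalker
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;    // MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_FREE = 0x10000
        public uint Protect;
        public uint Type;
    }

    const uint MEM_FREE = 0x10000;

    [DllImport("kernel32.dll")]
    static extern IntPtr VirtualQuery(IntPtr lpAddress,
        out MEMORY_BASIC_INFORMATION lpBuffer, IntPtr dwLength);

    // Walks the user-mode address space region by region and sums the free regions.
    public static long TotalFreeBytes(long maxUserAddress)
    {
        long free = 0;
        long address = 0;
        while (address < maxUserAddress)
        {
            MEMORY_BASIC_INFORMATION mbi;
            if (VirtualQuery(new IntPtr(address), out mbi,
                new IntPtr(Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION)))) == IntPtr.Zero)
                break;

            if (mbi.State == MEM_FREE)
                free += mbi.RegionSize.ToInt64();

            address += mbi.RegionSize.ToInt64();
        }
        return free;
    }
}

Calling TotalFreeBytes(0x7fff0000) and dividing by 1024 * 1024 is roughly how I arrived at the 1300 MB figure above.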
So I know the working set is not only about virtual memory. But is it normal to have such a high working set on the server while it is so low on another machine?

I think the problem is related to what is described in this article:
MAY 04, 2005
Fun with the WorkingSet and int32
I finally found an honest to goodness bug in the .NET framework.
... the
WorkingSet returns the amount of memory being used by the process as
an integer (32 bit signed integer). OK, so the maximum value of an
integer is 2,147,483,647 -- which is remarkably close to the total
amount of memory that a process can have in its working set.
... There is actually a switch in Windows that will allow a process to use
3 gig of memory instead of 2 gig. This switch is often turned on when
dealing with Analysis Services -- this thing can be a memory hog. So
now what happens is that when I poll the WorkingSet I get a negative
number, a really big negative number. Usually, in the realm of
-2,147,482,342.
... The problem was the overflow bit.
Working set is returned to the .NET framework as a binary value. The
first bit of an integer is the sign bit. 0 is positive, 1 is negative.
So, when the value turns from (binary)
01111111111111111111111111111111 to (binary)
10000000000000000000000000000000, the value goes from 2,147,483,647 to
-2,147,483,648.
OK, so I still have to fix this. Here is what I came up with (in C#):
long lWorkingSet = 0;
if (process.WorkingSet >= 0)
    lWorkingSet = process.WorkingSet;
else
    lWorkingSet = ((long)uint.MaxValue + 1) + process.WorkingSet;
Hopefully that fixes the problem for now.
The real question will come in down the road. Microsoft knows about
this problem. I still have to find out how they are going to fix this for
Win64...where this trick will no longer work.
http://msdn2.microsoft.com/library/0aayt1d0(en-us,vs.80).aspx:
There's gonna be a Process.WorkingSet64 variable, and they're
deprecating WorkingSet.
On a tangent, though, I thought it was impossible for a managed
process to come near the 3gb limit, because the runtime splits the
memory into multiple heaps. Is this not true?

At a guess, Environment.WorkingSet is probably returning the value from GetProcessWorkingSetSize, which is basically what has been set with SetProcessWorkingSetSize: whatever the system has picked as the largest working set size it would like to see for this process, not necessarily anything to do with how much memory the process is actually using. The practical effect is that when/if the process uses more memory than that, the system's working set trimmer goes to work to see if it can get some of the process's memory paged out to disk.
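Either way, a quick way to cross-check the numbers without the Int32 issue from the quoted article is to read the 64-bit counters from System.Diagnostics. A minimal sketch (it only shows how to query the values; it does not explain the 1952 MB reading):

using System;
using System.Diagnostics;

class WorkingSetProbe
{
    static void Main()
    {
        // The value being questioned in this thread.
        long envWs = Environment.WorkingSet;

        using (Process p = Process.GetCurrentProcess())
        {
            // WorkingSet64 replaces the deprecated int-typed Process.WorkingSet.
            long ws64 = p.WorkingSet64;
            long privateBytes = p.PrivateMemorySize64;
            long virtualBytes = p.VirtualMemorySize64;

            Console.WriteLine("Environment.WorkingSet:  {0:N0}", envWs);
            Console.WriteLine("WorkingSet64:            {0:N0}", ws64);
            Console.WriteLine("PrivateMemorySize64:     {0:N0}", privateBytes);
            Console.WriteLine("VirtualMemorySize64:     {0:N0}", virtualBytes);
        }
    }
}

If WorkingSet64 and PrivateMemorySize64 disagree wildly with Environment.WorkingSet, that points at the reporting rather than at actual memory consumption.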

Related

Using WinDbg to find a memory leak in an ASP.NET application

Problem Background
For the past few months we have had an issue with my online ASP.NET application. The application works fine, but once or twice a day it suddenly crashes in different modules on the live server. There is no obvious issue in the code, and we cannot reproduce the problem on the local server.
After some research I found that the memory of the process that runs my application in IIS on the live server increases continuously, and when it reaches a certain level the application begins to crash.
Temporary solution:
Whenever we hit the issue we restart the application in IIS, which ends the process and starts a new one; after that the application works again. Some days we need to restart the application two or three times, sometimes more.
The problem I found: a memory leak.
What I tried after some research:
Create a dump file of my application's process from Task Manager when the application crashes.
Tool used: WinDbg
Open the dump in WinDbg for analysis.
Run the commands:
.loadby sos clr
!dumpheap -stat
This shows tons of references for various data types.
Now I am stuck at this point.
I have shared the output in the image section.
Question:
1. Am I going in the right direction to find the memory leak?
2. If my path is right, what is my next step?
3. Is WinDbg a good tool for finding this kind of issue?
Dump file link for detailed review; I took this dump file when the server stopped responding.
Create a dump file of my application's process from Task Manager when the application crashes
That's not a good choice, because:
you don't have much time to do so. You can only do that as long as the crash dialog is displayed; if you're too late, the application is gone.
in that state, you'll have difficulties debugging it. Instead of the original exception, it will show a breakpoint, which is used by the OS to display the dialog and collect diagnostic data.
Use WER LocalDumps to automatically create a crash dump when the application crashes instead of doing it manually. It's much more reliable and gives you the original exception. See How do I take a good crash dump for .NET.
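As a sketch, the documented LocalDumps registry values can also be set from C# (this needs to run elevated; the dump folder path is only an example):

using Microsoft.Win32;

class EnableLocalDumps
{
    static void Main()
    {
        const string key =
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps";

        // DumpType 2 = full dump; DumpCount limits how many dumps are kept.
        Registry.SetValue(key, "DumpFolder", @"C:\CrashDumps", RegistryValueKind.ExpandString);
        Registry.SetValue(key, "DumpCount", 5, RegistryValueKind.DWord);
        Registry.SetValue(key, "DumpType", 2, RegistryValueKind.DWord);
    }
}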
Am I going in the right direction to find the memory leak?
Sorry, you're on the wrong track already.
Starting with !dumpheap -stat is not a good idea. Usually one would start at the lowest level, which is !address -summary. It will give you an indication of whether it's a managed memory leak or a native memory leak. If it's a managed leak, you could continue with !dumpheap -stat.
If my path is right, what is my next step?
Even if it's not the right path, it's a good idea to learn how to figure out that you're on the wrong path. So, how do I know?
Looking at your output of !dumpheap -stat, you can see
[...]
111716 12391360 System.String
This tells you there are about 110,000 different strings, using 12 MB of memory. It also tells you that everything else takes less than 12 MB. Look at the other sizes and you'll find that .NET is not the reason for your OutOfMemoryException; the managed objects use less than 50 MB.
If there were a managed leak, you would look for the paths through which objects are still referenced, which make the garbage collector think they cannot be freed. The command for that is !gcroot.
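For reference, the usual sequence is (the method table and object address are placeholders; both come from the output of the previous command):
!dumpheap -stat
!dumpheap -mt <MethodTable of the suspicious type>
!gcroot <address of one of the listed instances>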
Is WinDbg a good tool for finding this kind of issue?
It is possible, but WinDbg is not the best tool. Use a memory profiler instead: that's a dedicated tool for memory leaks, and it typically has much better usability. Unfortunately, you'll need to decide whether you need a managed memory profiler, a native memory profiler, or both.
I once wrote how to use WinDbg to track down .NET OutOfMemoryException. You'll find a chart there which gives you ideas on how to proceed in different situations.
In your dump I see 2 TB of <unknown> memory, which could be .NET, but needn't be. Still, these 2 TB are likely the cause of the OOM, because the rest is less than 350 MB in size.
Since clr is in the list of loaded modules, we can check !dumpheap -stat as you did. But there are not many objects using memory.
!eeheap -gc shows that there are 8 heaps, corresponding to the 8 processors of your machine, for parallel garbage collection. The largest individual heap is 45 MB, and the total is 249 MB. This roughly matches the sum from !dumpheap. Conclusion: .NET is not the culprit.
Let's check the special cases:
Presence of MSXML
Bitmaps
Calls to HeapAlloc() which are so large that they are directly forwarded to VirtualAlloc().
Direct calls to VirtualAlloc()
MSXML is not present: lm m msxml* does not produce output.
There are no Bitmaps: !dumpheap -stat -type Bitmap
Heap allocations larger than 512 kB: !heap -stat. Here's a truncated part of the output:
0:000> !heap -stat
_HEAP 0000018720bd0000
Segments 00000006
Reserved bytes 0000000001fca000
Committed bytes 0000000001bb3000
VirtAllocBlocks 00000002
VirtAlloc bytes 00000312cdc4b110
_HEAP 0000018bb0fe0000
Segments 00000005
Reserved bytes 0000000000f0b000
Committed bytes 0000000000999000
VirtAllocBlocks 00000001
VirtAlloc bytes 0000018bb0fe0110
As you can see, there are 3 blocks that went to VirtualAlloc. The sizes are somewhat unrealistic:
0:000> ? 00000312cdc4b110
Evaluate expression: 3379296514320 = 00000312`cdc4b110
0:000> ? 0000018bb0fe0110
Evaluate expression: 1699481518352 = 0000018b`b0fe0110
That would be a total of 3.3 TB + 1.7 TB, roughly 5 TB, and not 2 TB. Now, it may be that this is a bug in !address, but a difference of that kind is not a common overflow point.
With !heap -a 0000018720bd0000 you can see the 2 virtual allocs:
Virtual Alloc List: 18720bd0110
0000018bac70c000: 00960000 [commited 961000, unused 1000] - busy (b), tail fill
0000018bad07b000: 00960000 [commited 961000, unused 1000] - busy (b), tail fill
And with !heap -a 0000018bb0fe0000 you can see the third one:
Virtual Alloc List: 18bb0fe0110
0000018bb1043000: 00400000 [commited 401000, unused 1000] - busy (b), tail fill
These are all relatively small blocks of 4.2 MB and 9.8 MB.
For the last part, direct calls to VirtualAlloc(), you need to get back to the level of !address. With !address -f:VAR -c:".echo %1 %3" you can see the address and size of all <unknown> regions. You'll find a lot of entries there, many of small size, some of which could be the .NET heaps, a few 2 GB ones, and one really large allocation:
The 2GB ones:
0x18722070000 0x2d11000
0x18724d81000 0x7d2ef000
0x187a2070000 0x2ff4000
0x187a5064000 0x7d00c000
0x18822070000 0x2dfe000
0x18824e6e000 0x7d202000
0x188a2070000 0x2c81000
0x188a4cf1000 0x7d37f000
0x18922070000 0x2d13000
0x18924d83000 0x7d2ed000
0x189a2070000 0x2f5a000
0x189a4fca000 0x7d0a6000
0x18a22070000 0x2c97000
0x18a24d07000 0x7d369000
0x18aa2070000 0x2d0c000
0x18aa4d7c000 0x7d2f4000
It is likely that these are the .NET heaps (committed part + reserved part).
The large one:
0x7df600f57000 0x1ffec56a000
The information about it:
0:000> !address 0x7df600f57000
Usage: <unknown>
Base Address: 00007df6`00f57000
End Address: 00007ff5`ed4c1000
Region Size: 000001ff`ec56a000 ( 2.000 TB)
State: 00002000 MEM_RESERVE
Protect: <info not present at the target>
Type: 00040000 MEM_MAPPED
Allocation Base: 00007df5`ff340000
Allocation Protect: 00000001 PAGE_NOACCESS
It looks like a 2TB memory mapped file which is unused (and therefore reserved).
I don't know what your application is doing. This is really where I need to stop the analysis. I hope this was helpful and you can draw your conclusions and fix the issue.

Objects in .NET MemoryCache are being evicted unexpectedly

I am seeing some strange behavior using .NET MemoryCache in an ASP.NET application. The problem is that objects are evicted after a few minutes, and there seems to be no reason for it. The memory limits are set in the web.config:
<system.runtime.caching>
<memoryCache>
<namedCaches>
<add name="Default"
cacheMemoryLimitMegabytes="1500"
physicalMemoryLimitPercentage="18"
pollingInterval="00:02:00" />
</namedCaches>
</memoryCache>
</system.runtime.caching>
My development machine has 8 GB of RAM and the w3wp.exe process is using about 0.5 GB. 2 GB are still available on the machine while the application is running (besides Visual Studio, web browsers and so on).
A RemovedCallback method has been added to every entry to generate log entries for every removal and especially for evictions:
private static void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
    LogCurrentCacheDelta(arguments.CacheItem, true);

    if (arguments.RemovedReason == CacheEntryRemovedReason.Evicted)
    {
        Sitecore.Diagnostics.Log.Warn(
            string.Format(
                "Cache Item Evicted (cacheMemoryLimitMegabytes: {0}) - Key: {1}, Value: {2}",
                FlightServiceCache.CacheMemoryLimit,
                arguments.CacheItem.Key,
                arguments.CacheItem.Value),
            FlightServiceCache);
    }
}
A counter for calculating the size currently used has also been implemented. I am using binary serialization to estimate the size of the objects in memory. At the moment the first eviction occurred, about 120 objects were in the cache and the memory used was about 6 megabytes. To my understanding, this is in no way a reason for evicting entries from the cache. But it happens again and again, and after two days of investigation I am still not sure why.
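The estimation is roughly like the following (a simplified sketch; the real code also feeds the result into the running counter):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static class CacheSizeEstimator
{
    // Rough size estimate of a [Serializable] object via binary serialization.
    // It measures the serialized payload, not the exact in-memory footprint.
    public static long EstimateBytes(object value)
    {
        using (MemoryStream stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, value);
            return stream.Length;
        }
    }
}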
I also took a look at the internal implementation of the Trim() function in the .NET Framework source code, which is used when objects are being evicted. The calculation it performs is not easy to understand; maybe someone knows how it works and can explain it to me.
It would be great if anyone could shed some light on this.
Thank you very much in advance and sorry for the really long post ;)
(btw. this is my first post so any suggestions about how to improve my questions are highly appreciated)
Had the exact same problem. Even if I set the CacheMemoryLimit to a big enough value (let's say 1 GB) and the PhysicalMemoryLimit to 10% (which in my case, with 32 GB of physical memory installed, comes to 3.2 GB), many of my cache entries would still be evicted to free cache memory. Note that I was caching 1 MB items, 10 of them, so 10 MB altogether, whereas I expected to have at least the minimum of the two limits mentioned above, which is 1 GB.
Yes, VMAtm was correct in his comment above that one should use a bigger percentage: with 10% it evicted, with 50% it didn't, and by divide and conquer I found that with my setup it stops evicting at around 45%. But note that, depending on the total installed memory size, the behaviour might differ for the percentage values I used in testing.
So for me the takeaway was not to trust the PhysicalMemoryLimit percentage and not to set it; rather, use only the CacheMemoryLimit config property. And if you still need to honour a physical-memory percentage, then instead of using the system.runtime.caching configuration settings, introduce your own settings, read them, get the actual physical installed memory size, apply your percentage, and calculate the minimum of the two: the physical memory limit (now in bytes) and the cache memory limit (in bytes, from your own setting). Having that, you can create the MemoryCache and pass only the cacheMemoryLimitMegabytes value through a NameValueCollection to its constructor; the value for cacheMemoryLimitMegabytes will be that minimum of the two calculated above.
BTW to get the total physical installed memory size one can use:
[DllImport("kernel32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool GetPhysicallyInstalledSystemMemory(out long totalMemoryInKilobytes);
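A sketch of that approach (the class name and the parameter values are illustrative, not tested production code):

using System;
using System.Collections.Specialized;
using System.Runtime.Caching;
using System.Runtime.InteropServices;

static class CacheFactory
{
    [DllImport("kernel32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool GetPhysicallyInstalledSystemMemory(out long totalMemoryInKilobytes);

    public static MemoryCache Create(string name, int cacheLimitMb, int physicalPercent)
    {
        // Take the smaller of "X% of installed RAM" and the explicit cache limit,
        // then hand only cacheMemoryLimitMegabytes to MemoryCache.
        long effectiveMb = cacheLimitMb;
        long installedKb;
        if (GetPhysicallyInstalledSystemMemory(out installedKb))
        {
            long physicalLimitMb = installedKb / 1024 * physicalPercent / 100;
            effectiveMb = Math.Min(physicalLimitMb, cacheLimitMb);
        }

        NameValueCollection config = new NameValueCollection();
        config.Add("cacheMemoryLimitMegabytes", effectiveMb.ToString());
        config.Add("pollingInterval", "00:02:00");
        return new MemoryCache(name, config);
    }
}

Creating the cache as, say, CacheFactory.Create("Default", 1500, 18) would then take the place of the namedCaches entry from web.config shown in the question.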

How does ASP.NET determine when to throw OutOfMemoryException? [duplicate]

This is my code:
int size = 100000000;
double sizeInMegabytes = (size * 8.0) / 1024.0 / 1024.0; //762 mb
double[] randomNumbers = new double[size];
Exception:
Exception of type 'System.OutOfMemoryException' was thrown.
I have 4 GB of memory on this machine, and 2.5 GB is free when I start this running; there is clearly enough space on the PC to handle the 762 MB of 100,000,000 random numbers. I need to store as many random numbers as possible given the available memory. When I go to production there will be 12 GB on the box and I want to make use of it.
Does the CLR constrain me to a default maximum of memory to start with? And how do I request more?
Update
I thought breaking this into smaller chunks and incrementally adding to my memory requirements would help if the issue is due to memory fragmentation, but it doesn't: I can't get past a total ArrayList size of 256 MB regardless of how I tweak blockSize.
private static IRandomGenerator rnd = new MersenneTwister();
private static IDistribution dist = new DiscreteNormalDistribution(1048576);
private static List<double> ndRandomNumbers = new List<double>();
private static void AddNDRandomNumbers(int numberOfRandomNumbers) {
    for (int i = 0; i < numberOfRandomNumbers; i++) {
        ndRandomNumbers.Add(dist.ICDF(rnd.nextUniform()));
    }
}
From my main method:
int blockSize = 1000000;
while (true) {
    try
    {
        AddNDRandomNumbers(blockSize);
    }
    catch (System.OutOfMemoryException)
    {
        break;
    }
}
double arrayTotalSizeInMegabytes = (ndRandomNumbers.Count * 8.0) / 1024.0 / 1024.0;
You may want to read "Out Of Memory" Does Not Refer to Physical Memory by Eric Lippert.
In short, and very simplified, "Out of memory" does not really mean that the amount of available memory is too small. The most common reason is that within the current address space, there is no contiguous portion of memory that is large enough to serve the wanted allocation. If you have 100 blocks, each 4 MB large, that is not going to help you when you need one 5 MB block.
Key Points:
the data storage that we call “process memory” is in my opinion best visualized as a massive file on disk.
RAM can be seen as merely a performance optimization
Total amount of virtual memory your program consumes is really not hugely relevant to its performance
"running out of RAM" seldom results in an “out of memory” error. Instead of an error, it results in bad performance because the full cost of the fact that storage is actually on disk suddenly becomes relevant.
Check that you are building a 64-bit process, and not a 32-bit one, which is the default compilation mode of Visual Studio. To do this, right-click on your project, then Properties -> Build -> Platform target: x64. Like any 32-bit process, applications compiled as 32-bit have a virtual memory limit of 2 GB.
64-bit processes do not have this limitation, as they use 64-bit pointers, so their theoretical maximum address space (the size of their virtual memory) is 16 exabytes (2^64). In reality, Windows x64 limits the virtual memory of processes to 8 TB. The solution to the memory limit problem is then to compile in 64-bit.
However, an object's size in .NET is still limited to 2 GB by default. You will be able to create several arrays whose combined size is greater than 2 GB, but you cannot, by default, create a single array bigger than 2 GB. If you still want to create arrays bigger than 2 GB, you can do so by adding the following to your app.config file:
<configuration>
<runtime>
<gcAllowVeryLargeObjects enabled="true" />
</runtime>
</configuration>
You don't have a contiguous block of memory in which to allocate 762 MB; your memory is fragmented and the allocator cannot find a big enough hole for the allocation.
You can try to work with /3GB (as others have suggested).
Or switch to a 64-bit OS.
Or modify the algorithm so that it does not need one big chunk of memory; maybe it can allocate a few relatively smaller chunks.
As you probably figured out, the issue is that you are trying to allocate one large contiguous block of memory, which does not work due to memory fragmentation. If I needed to do what you are doing I would do the following:
int sizeA = 10000,
    sizeB = 10000;
double sizeInMegabytes = (sizeA * sizeB * 8.0) / 1024.0 / 1024.0; // 762 MB
double[][] randomNumbers = new double[sizeA][];
for (int i = 0; i < randomNumbers.Length; i++)
{
    randomNumbers[i] = new double[sizeB];
}
Then, to get a particular index you would use randomNumbers[i / sizeB][i % sizeB].
Another option, if you always access the values in order, might be to use the overloaded Random constructor to specify the seed. You would get a semi-random seed (like DateTime.Now.Ticks), store it in a variable, and then whenever you start going through the list you would create a new Random instance using the original seed:
private static int randSeed = (int)DateTime.Now.Ticks; // Must stay the same unless you want different random numbers.
private static Random GetNewRandomIterator()
{
    return new Random(randSeed);
}
It is important to note that while the blog linked in Fredrik Mörk's answer indicates that the issue is usually due to a lack of address space, it does not list a number of other issues, such as the 2 GB CLR object-size limitation (mentioned in a comment from ShuggyCoUk on the same blog); it also glosses over memory fragmentation and fails to mention the impact of page file size (and how that can be addressed with the CreateFileMapping function).
The 2 GB limitation means that randomNumbers must occupy less than 2 GB. Since arrays are classes and have some overhead themselves, this means an array of double will need to be smaller than 2^31 bytes. I am not sure how much smaller than 2^31 it would have to be, but Overhead of a .NET array? indicates 12 - 16 bytes.
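As a back-of-the-envelope check (assuming 8 bytes per double and ignoring the small array header):

class ArrayLimitCheck
{
    static void Main()
    {
        // 2 GB object-size ceiling divided by 8 bytes per double
        long maxDoubles = (long)int.MaxValue / sizeof(double);
        System.Console.WriteLine("Max double[] length: {0:N0}", maxDoubles); // ~268 million

        // The question's 100,000,000-element array is ~762 MB, well under that cap,
        // so the failure in this thread is about contiguous address space, not the 2 GB limit.
        System.Console.WriteLine("Requested: {0:N0} MB", 100000000L * 8 / 1024 / 1024);
    }
}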
Memory fragmentation is very similar to HDD fragmentation. You might have 2 GB of address space, but as you create and destroy objects there will be gaps between the values. If these gaps are too small for your large object, and additional space cannot be requested, then you will get the System.OutOfMemoryException. For example, if you create 2 million 1024-byte objects, then you are using 1.9 GB. If you then delete every object whose address is not a multiple of 3, you will be using 0.6 GB of memory, but it will be spread out across the address space with 2048-byte open blocks in between. If you need to create an object of 0.2 GB, you would not be able to do it, because there is no block large enough to fit it and additional space cannot be obtained (assuming a 32-bit environment). Possible solutions to this issue include using smaller objects, reducing the amount of data you store in memory, or using a memory management algorithm to limit/prevent memory fragmentation. It should be noted that unless you are developing a large program which uses a large amount of memory, this will not be an issue. Also, this issue can arise on 64-bit systems, as Windows is limited mostly by the page file size and the amount of RAM on the system.
Since most programs request working memory from the OS and do not request a file mapping, they will be limited by the system's RAM and page file size. As noted in the comment by Néstor Sánchez on the blog, with managed code like C# you are stuck with the RAM/page file limitation and the address space of the operating system.
That was way longer than expected. Hopefully it helps someone. I posted it because I ran into the System.OutOfMemoryException running an x64 program on a system with 24 GB of RAM even though my array was only holding 2 GB of stuff.
I'd advise against the /3GB Windows boot option. Apart from everything else (it's overkill to do this for one badly behaved application, and it probably won't solve your problem anyway), it can cause a lot of instability.
Many Windows drivers are not tested with this option, so quite a few of them assume that user-mode pointers always point to the lower 2GB of the address space. Which means they may break horribly with /3GB.
However, Windows does normally limit a 32-bit process to a 2GB address space.
But that doesn't mean you should expect to be able to allocate 2GB!
The address space is already littered with all sorts of allocated data. There's the stack, and all the assemblies that are loaded, static variables and so on. There's no guarantee that there will be 800MB of contiguous unallocated memory anywhere.
Allocating two 400 MB chunks would probably fare better, or four 200 MB chunks. Smaller allocations are much easier to find room for in a fragmented memory space.
In any case, if you're going to deploy this to a 12 GB machine, you'll want to run it as a 64-bit application, which should solve all the problems.
Changing from 32-bit to 64-bit worked for me; worth a try if you are on a 64-bit PC and it doesn't need to be ported.
If you need such large structures, perhaps you could utilize Memory Mapped Files.
This article could prove helpful:
http://www.codeproject.com/KB/recipes/MemoryMappedGenericArray.aspx
Best regards,
Dejan
Rather than allocating a massive array, could you try using an iterator? Iterators are lazily executed, meaning values are generated only as they're requested in a foreach statement; you shouldn't run out of memory this way:
private static IEnumerable<double> MakeRandomNumbers(int numberOfRandomNumbers)
{
    for (int i = 0; i < numberOfRandomNumbers; i++)
    {
        yield return randomGenerator.GetAnotherRandomNumber();
    }
}
...
// Hooray, we won't run out of memory!
foreach (var number in MakeRandomNumbers(int.MaxValue))
{
    Console.WriteLine(number);
}
The above will generate as many random numbers as you wish, but only generate them as they're asked for via a foreach statement. You won't run out of memory that way.
Alternatively, if you must have them all in one place, store them in a file rather than in memory.
32-bit Windows has a 2 GB per-process memory limit. The /3GB boot option others have mentioned makes this 3 GB, with just 1 GB remaining for kernel use. Realistically, if you want to use more than 2 GB without hassle, a 64-bit OS is required. This also overcomes the problem whereby, although you may have 4 GB of physical RAM, the address space required for the video card can make a sizeable chunk of that memory unusable (usually around 500 MB).
Well, I had a similar problem with a large data set, and trying to force the application to use so much data is not really the right option. The best tip I can give you is to process your data in small chunks if possible, because when dealing with so much data the problem will come back sooner or later. Plus, you cannot know the configuration of each machine that will run your application, so there is always a risk that the exception will happen on another PC.
I had a similar problem; it was due to a StringBuilder.ToString().
Convert your solution to x64. If you still face an issue, grant the maximum length to everything that throws an exception, like below:
var jsSerializer = new JavaScriptSerializer();
jsSerializer.MaxJsonLength = Int32.MaxValue;
If you do not need the Visual Studio Hosting Process:
Uncheck the option: Project->Properties->Debug->Enable the Visual Studio Hosting Process
And then build.
If you still face the problem:
Go to Project->Properties->Build Events->Post-Build Event Command line and paste the following:
call "$(DevEnvDir)..\..\vc\vcvarsall.bat" x86
"$(DevEnvDir)..\..\vc\bin\EditBin.exe" "$(TargetPath)" /LARGEADDRESSAWARE
Now, build the project.
Increase the Windows process limit to 3 GB (via boot.ini or the Vista boot manager).

"random" kernel crash after running for minutes.... HEP!? -- [same question posted on Khronos]

I have a thoroughly complex kernel processing audio input data. It will run for a couple of minutes, 60 times a second, and then hang. That's on the GPU; on the CPU it will run for hours. The input data are constantly changing, but each variable is always within prescribed ranges. I have inserted test code before uploading the inputs to the kernel each frame; in this test code, I can force these inputs to be well below their valid input range, but it will still eventually crash. (Say the valid range for a particular input is 0->400; I can force it to 0->1 and it will STILL eventually crash. I can force it to be below 0.1 and it will still ultimately bite the dust.) However, if I force the input variables to zero, the GPU will happily dance for hours. Of course, that input-free dance is not so particularly interesting.
I'm at a loss so far, though I have clues. I can make it crash much faster than 2 minutes if an input variable is high in its approved range. I can make it crash in less than 10 seconds under the right circumstances. BUT, I can't seem to _back_off_of_ those certain circumstances such that they go away. As said above, I can force the input vars into ridiculously small portions of their valid range, and the kernel (let's call him Harlan Sanders) will eventually go belly-up. BUT, if they're forced to actual zero, no problems puppy, we can run all day long.
To repeat, I'm a bit at a loss - although I have things that look like clues, I have not yet figured out what they are hinting at, though I've been trying for a few days. Frankly, I do not expect to find a real solution by asking here; whenever I stumble over a problem in opencl it seems that my fate is to be the first to articulate that particular problem. I guess this is part of the fun of being in on a technology during its infancy!!!!!!!!!! BUT, I want to do some serious, sustainable work with this "baby" (or, maybe, "toddler").
Op details: MacBook Pro 2010, OS 10.6.8, nv 330M GPU, xcode 3.2.5, shorts, teeshirt.
bonus P.S. for those who've read this far, including a related question:
My laptop, soldier that it has proved to be, is not powerful enough for the next stage. I must sell some stocks/bonds and purchase a Mac Pro. I'm looking at the ATI 5870. So, PERHAPS my problem will simply go away when I compile the .cl for the ATI??? Maybe I have run into a bug in the nV implementation. Maybe my kernel is so complex that I'm running into undetected resource limits (it's 1300 lines of code). So, SINCE I run fine on the CPU, perhaps I'll have no bugs, or different bugs, on the ATI card???
Any thoughts?
Thanks, guys & dolls --
Dave
Use "cl_" data types on the CPU side, because maybe you are not coping data the right way, or it is not being understood by the GPU. This could lead to GPU hangs on invalid pointers while handing the data.
You should also try -Werror, and read the error output. You can be doing smt wrong.
Without any code, we can only guess. But I haven't found any bug in the actual OpenCL NV or ATI implementations.
Make sure you release all resources. Events returned by Enqueue functions must be released. This error sometimes occurs after accessing buffers out of range.

Performance monitor shows 4294967293 sessions active

I have an ASP.Net 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves less than ten users at any given time. The server has 4 Gig of memory and shows that 3+ Gig is available while the site is active.
Just minutes after restarting the web application, Performance Monitor shows that there are a whopping 4,294,967,293 sessions active! I am fairly certain that this number is incorrect; at the time of this reading there were only about 100 requests to the website.
Has anyone else experienced this kind odd behavior from perf mon? Any ideas on how to get an accurate reading?
UPDATE: After running for about an hour the number of active sessions has dropped by 4. So it does seem to be responding to sessions timing out.
Could be an overflow, but my money's on an underflow. I think that the program started with 0 people, someone logged off, and then the number of sessions went negative.
Well, 2^32 = 4,294,967,296, so sounds like there's some kind of overflow occurring. Can't say exactly why.
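For illustration, a counter that is decremented past zero and read as an unsigned 32-bit value wraps into exactly this neighbourhood; a minimal sketch:

class SessionCounterWrap
{
    static void Main()
    {
        // A count that goes below zero wraps around instead of going negative
        // when it is stored and displayed as an unsigned 32-bit value.
        uint activeSessions = 0;
        unchecked
        {
            activeSessions -= 3; // three more "session ended" events than "session started"
        }
        System.Console.WriteLine(activeSessions); // 4294967293
    }
}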
We have the same problem. It looks like MS has a Hotfix available: http://support.microsoft.com/kb/969722
Update 9/10/2009: Our IT department contacted MS for the Hotfix. It fixed our issue. We are running .NET 2.0 if it matters any.
I am also showing a high number, currently 4,294,967,268.
Every time I abandon a session, the Sessions Abandoned count goes up by 1, and the Sessions Active count decreases by 1. Currently my abandoned session count is 16, so this number probably started at 4,294,967,284.
Is there a fix for this?
My counters were working fine, but one morning I logged in remotely to the production server, and the counter was on this huge number (which is as somebody mentioned very close to 2^32 indicating an underflow). But the only difference from the day before when everything worked was the fact that during the night, windows had installed updates.
So for some reason these updates caused this pretty annoying error.
Observing the counter a little more, I found out that whenever the application is restarted - after some time with no traffic, the counter starts correctly at zero. When users start logging on, it increments fine. When they start logging off again, it still decrements fine until it reaches what is supposed to be zero. At that point it goes bananas...
Sigh...
If you have to use your existing statistics, I opened the log file in Excel and used a formula to produce a more accurate value. I cannot guarantee its accuracy, but the results did look okay:
If B2 is the (aspnet_wp)\Sessions Active value, and the formula sits in C2:
/* This one is quicker as it doesn't have to do the extra calculations */
=IF(B2>1073741824,4294967296-B2,B2)
Or
/* This one is clearer what is going on */
=IF(B2>power(2,30),(4*power(2,30))-B2,B2)
P.S. (I feel your pain - I have to explain why they have 4.2 billion sessions opening whereas a second earlier they had 0!)
