Environment.WorkingSet bug - asp.net

Environment.WorkingSet returns the working set incorrectly for my asp.net application, which is the only application in its application pool.
On a Windows 2003 Server SP2 machine with 3 GB of RAM (a VMware virtual machine), it reports the working set as 2.047.468.061 bytes (1952 MB), while the Process.WorkingSet value is 75.563.008 bytes (72 MB).
• Memory Status values returned by GlobalMemoryStatusEx:
AvailExtendedVirtual : 0
AvailPageFile: 4.674.134.016
AvailPhys: 2.140.078.080
AvailVirtual: 1.347.272.704
TotalPageFile: 6.319.915.008
TotalPhys: 3.245.568.000
TotalVirtual: 2.147.352.576
• GetProcessMemoryInfo()
Working Set : 55.140.352
Peak Working Set: 75.571.200
PageFile : 94.560.256
QuotaPagedPoolUsage : 376.012
QuotaNonPagedPoolUsage : 33.261
• GetProcessWorkingSetSize() - min : 204.800 - max : 1.413.120
• GetPerformanceInfo()
CommitLimit : 1.542.948 pages 6.319.915.008 bytes
CommitPeak : 484.677 pages 1.985.236.992 bytes
CommitTotal : 417.514 pages 1.710.137.344 bytes
HandleCount : 57.012
KernelNonpaged : 8.671 pages 35.516.416 bytes
KernelPaged : 27.302 pages 111.828.992 bytes
KernelTotal : 35.973 pages 147.345.408 bytes
PageSize : 4.096 bytes
PhysicalAvailable : 508.083 pages 2.081.107.968 bytes
PhysicalTotal : 792.375 pages 3.245.568.000 bytes
ProcessCount : 43
SystemCache : 263.734 pages 1.080.254.464 bytes
ThreadCount : 1.038
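As a sanity check, the pages/bytes pairs reported by GetPerformanceInfo above are internally consistent: bytes = pages × PageSize. A quick verification in Python, with the values copied from the output above:

```python
PAGE_SIZE = 4096  # the PageSize reported by GetPerformanceInfo above

# (pages, bytes) pairs taken from the GetPerformanceInfo output
samples = {
    "CommitLimit":   (1_542_948, 6_319_915_008),
    "CommitTotal":   (417_514, 1_710_137_344),
    "PhysicalTotal": (792_375, 3_245_568_000),
}

for name, (pages, reported_bytes) in samples.items():
    # each reported byte count is exactly pages * 4096
    assert pages * PAGE_SIZE == reported_bytes, name
```

So the native counters agree with each other; only the value surfaced through Environment.WorkingSet is off.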
After installing the new patch, http://support.microsoft.com/kb/983583/en-us, the .NET version changes to 2.0.50727.3615 and Environment.WorkingSet now returns the value 2.047.468.141 (which is 80 bytes larger than the previous one).
On a Vista machine with 3 GB of RAM, the Environment.WorkingSet and Process.WorkingSet values are similar, around 37 MB.
So, why does Environment.WorkingSet return a fixed value? Restarting the application pool does not change anything; it always returns the same magic value, 2.047.468.061.
I have also set up a .NET 1.1.4322.2443 application, and weirdly, its WorkingSet comes back as a number from a random set of unrelated values (193.654.824, 214.101.416, 57.207.080, 287.635.496) on each page refresh, while GetProcessMemoryInfo() returns the expected number.
I have also found that when the application runs impersonating the "NT AUTHORITY\NetworkService" account, this problem does not occur: Environment.WorkingSet returns the expected number on both .NET v1.1 and v2.0.
I have checked CodeAccessPermissions such as EnvironmentPermission for the Windows user and the NetworkService account, but could not find anything that restricts reading the WorkingSet value.
So, what could cause this? Is it a bug, an incorrect configuration, a corrupt file, or something else?

Environment.WorkingSet is an "estimation" by the CLR of the memory space needed by your application. If your application does not change (physically), it should return the same value on every load (I cannot verify that 100%, but it should be correct enough to proceed).
It is always bigger than what is actually needed, because the CLR is unsure which code branches will be required, but the OS will reduce the memory if it sees it is not being used. A good way to observe this is to minimise the application to the taskbar and watch its working set shrink.


Getting data from a direct mapped cache in prolog

The predicate getDataFromCache(StringAddress,Cache,Data,HopsNum,directMap,BitsNum)
should succeed when the Data is successfully retrieved from the Cache (cache hit)
and the HopsNum represents the number of hops required to access the data from
the cache which can differ according to direct map cache mapping technique such
that:
• StringAddress is a string of the binary number representing the address of the data you are required to access; it is six binary bits.
• Cache is the cache, using the representation discussed previously.
• Data is the data retrieved from cache when cache hit occurs.
• HopsNum the number of hops required to access the data from the cache.
• BitsNum is the number of bits the index needs.
getDataFromCache is always giving me false although everything seems to be working, so I would like someone to fix it.
convertAddress(Binary,N,Tag,Idx,directMap):-
    Idx is mod(Binary,10**N),
    Tag is Binary // 10**N.

getDataFromCache(SA,[item(tag(T),data(D),V,_)|T],Data,HopsNum,directMap,BitsNum):-
    convertAddress(SA,BitsNum,Tag,Idx,directMap),
    number_string(Tag,Z),
    Z==T,
    V==1,
    Data is D.
getDataFromCache(SA,[item(tag(T),data(D),V,_)|T],Data,HopsNum,directMap,BitsNum):-
    convertAddress(SA,BitsNum,Tag,Idx,directMap),
    number_string(Tag,Z),
    (Z\=T;V==0),
    getDataFromCache(SA,T,Data,HopsNum,directMap,BitsNum).
Simply put, HopsNum is always zero, and you don't have to traverse the list since the cache is direct-mapped: you can access the slot directly using the nth0 predicate.
Also, you are using the variable T twice in the clause head (as both the tag and the list tail), which makes unification fail.
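The core point, that a direct-mapped lookup needs no traversal because the index bits select exactly one slot, can be sketched outside Prolog as well. A minimal Python illustration (a hypothetical representation where the cache is a list of (tag, data, valid) slots, using real integer bit operations rather than the decimal-digit encoding in the question; the direct list access plays the role of nth0/3):

```python
def get_data_from_cache(address, cache, bits_num):
    """Direct-mapped lookup: the low `bits_num` bits select the slot,
    the remaining high bits must match the stored tag. Hops are always
    zero because no traversal is needed."""
    idx = address % (1 << bits_num)      # index bits pick the slot
    tag = address >> bits_num            # remaining bits are the tag
    slot_tag, data, valid = cache[idx]   # direct access, like nth0/3
    if valid and slot_tag == tag:
        return data, 0                   # cache hit, hops = 0
    return None                          # cache miss

# four-slot cache; tag 0b101 with data "hello" stored at index 2
cache = [(0, "x", 0)] * 4
cache[2] = (0b101, "hello", 1)
assert get_data_from_cache(0b10110, cache, 2) == ("hello", 0)  # hit
```

Address 0b10110 splits into index 0b10 (slot 2) and tag 0b101, so the lookup succeeds in a single step.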

Understanding elasticsearch circuit_breaking_exception

I am trying to figure out why I am getting this error when indexing a document from a python web app.
The document in this case is a base64 encoded string of a file of size 10877 KB.
I post it to my web app, which then posts it via elasticsearch.py to my elastic instance.
My elastic instance throws an error:
TransportError(429, 'circuit_breaking_exception', '[parent] Data
too large, data for [<http_request>] would be
[1031753160/983.9mb], which is larger than the limit of
[986932838/941.2mb], real usage: [1002052432/955.6mb], new bytes
reserved: [29700728/28.3mb], usages [request=0/0b,
fielddata=0/0b, in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb]')
I am trying to understand why my 10877 KB file ends up at a size of 983mb as reported by elastic.
I understand that increasing the JVM max heap size may allow me to send bigger files, but I am more wondering why it appears the request size is 10x the size of what I am expecting.
Let us see what we have here, step by step:
[parent] Data too large, data for [<http_request>]
gives the name of the circuit breaker
would be [1031753160/983.9mb],
says how large the heap would be once the request had been executed
which is larger than the limit of [986932838/941.2mb],
tells us the current setting of the circuit breaker above
real usage: [1002052432/955.6mb],
this is the real usage of the heap
new bytes reserved: [29700728/28.3mb],
actually an estimation of the impact the request will have (the size of the data structures which need to be created in order to process the request). Your ~10 MB file will probably consume 28.3 MB.
usages [
request=0/0b,
fielddata=0/0b,
in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb
]
This last line tells us how the estimation is being calculated.
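So the 983.9 MB figure is the projected total heap (current real usage plus this request), not the size of your file; only the 28.3 MB of "new bytes reserved" is attributed to the request itself, and even that exceeds the raw file size because of base64 and JSON framing plus internal copies. A minimal stdlib sketch of how the payload alone grows (contents illustrative):

```python
import base64
import json

# a file of 10877 KB, as in the question (contents are illustrative)
raw = b"\x00" * (10877 * 1024)

# base64 inflates the size by a factor of ~4/3: ceil(n/3) * 4 bytes
b64 = base64.b64encode(raw).decode("ascii")

# the JSON request body wrapping the string adds a little more on top
body = json.dumps({"file": b64})

print(len(raw))   # 11138048 bytes (~10.6 MB)
print(len(b64))   # 14850732 bytes (~14.2 MB)
print(len(body))  # slightly larger still than the base64 string
```

The in-flight breaker then accounts for Elasticsearch's own buffers and copies of that body, which is how a ~10.6 MB file plausibly ends up as ~28.3 MB of reserved bytes.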

Blazor preview 9/mono-wasm memory access out of bounds: max string size for DotNet.invokeMethod?

Since dotnet core 3 preview 9, I am facing an issue invoking a dotnet method passing a large string from JavaScript.
Code is worth more than a thousand words, so the snippet below reproduces the issue. It works when length = 1 * mb but fails when length = 2 * mb.
@page "/repro"
<button onclick="const mb = 1024 * 1024; const length = 2 * mb; console.log(`Attempting length ${length}`); DotNet.invokeMethod('@GetType().Assembly.GetName().Name', 'ProcessString', 'a'.repeat(length));">Click Me</button>
@functions {
    [JSInvokable] public static void ProcessString(string stringFromJavaScript) { }
}
The error message is:
Uncaught RuntimeError: memory access out of bounds
at wasm-function[2639]:18
at wasm-function[6239]:10
at Module._mono_wasm_string_from_js (http://localhost:52349/_framework/wasm/mono.js:1:202444)
at ccall (http://localhost:52349/_framework/wasm/mono.js:1:7888)
at http://localhost:52349/_framework/wasm/mono.js:1:8238
at Object.toDotNetString (http://localhost:52349/_framework/blazor.webassembly.js:1:39050)
at Object.invokeDotNetFromJS (http://localhost:52349/_framework/blazor.webassembly.js:1:37750)
at u (http://localhost:52349/_framework/blazor.webassembly.js:1:5228)
at Object.e.invokeMethod (http://localhost:52349/_framework/blazor.webassembly.js:1:6578)
at HTMLButtonElement.onclick (<anonymous>:2:98)
I need to process large strings, which represent the content of a file.
Is there a way to increase this limit?
Apart from breaking the string down into multiple segments and performing multiple calls, is there any other way to process a large string?
Is there any other approach for processing large files?
This used to work in preview 8.
Is there a way to increase this limit?
No (unless you modify and recompile Blazor and mono/wasm, that is).
Apart from breaking the string down into multiple segments and performing multiple calls, is there any other way to process a large string?
Yes. As you are on the client side, you can use shared-memory techniques: you basically map a .NET byte[] onto a JavaScript ArrayBuffer. See this (disclaimer: my library) or this library for reference on how to do it. These examples use the binary content of actual JavaScript Files, but the approach is applicable to strings as well. There is no reference documentation for these APIs yet, mostly just examples and the Blazor source code.
Is there any other approach for processing large files?
See point 2 above.
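The segmentation workaround (split the payload, make multiple interop calls, reassemble on the other side) is language-agnostic; here is a minimal sketch in Python for illustration, with the chunk size standing in for whatever limit the interop layer imposes:

```python
def chunks(s, size):
    """Split a large payload into fixed-size segments."""
    return [s[i:i + size] for i in range(0, len(s), size)]

MB = 1024 * 1024
payload = "a" * (3 * MB + 5)           # too large for a single call
segments = chunks(payload, MB)         # each segment stays under the limit
assert all(len(seg) <= MB for seg in segments)
assert "".join(segments) == payload    # receiver reassembles losslessly
```

In a Blazor app the JavaScript side would make one DotNet.invokeMethod call per segment and the .NET side would concatenate them, at the cost of extra interop round trips.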
I recreated your issue in a netcore 3.2 Blazor app (somewhere between 1 and 2 MB of data kills it, just as you described). I updated the application to netcore 5.0 and the problem is fixed (it was still working when I threw 50 MB at it).

How to override oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength property while submitting the oozie job

I have an application which dynamically generates the oozie workflow.xml, and its size has now increased to 245,524 bytes, which exceeds the default limit of 100,000 bytes. I get the error below when running the job:
Error: E0736 : E0736: Workflow definition length [245,524] exceeded maximum allowed length [100,000]
This property can be set in oozie-default.xml, but I would like to set it at the application level. Is there any other way to set it?
This property can't be set at the application level, only in oozie-site.xml, and changing it requires an Oozie restart.
Have you considered breaking your huge XML down into many smaller pieces using the subworkflow action? It might also help you reduce some duplication if you use parameters in the subworkflows.
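For completeness, raising the limit server-side would look something like this in oozie-site.xml (the property name is the one from the error; the 500000 value is just an illustrative choice, and the Oozie server must be restarted afterwards):

```xml
<!-- oozie-site.xml (server-side; requires an Oozie restart) -->
<property>
  <name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name>
  <!-- maximum workflow definition length in bytes; default is 100000 -->
  <value>500000</value>
</property>
```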

Response Buffer Limit Exceeded

I am running a simple query to get data out of my database and display it, and I'm getting an error that says Response Buffer Limit Exceeded.
Error is : Response object error 'ASP 0251 : 80004005'
Response Buffer Limit Exceeded
/abc/test_maintenanceDetail.asp, line 0
Execution of the ASP page caused the Response Buffer to exceed its configured limit.
I have also tried Response.Flush in my loop and Response.Buffer = False at the top of the page, but I am still not getting any data.
My database contains 5600 records. Please give me some steps or code to solve the issue.
I know this is way late, but for anyone else who encounters this problem: If you are using a loop of some kind (in my case, a Do-While) to display the data, make sure that you are moving to the next record (in my case, a rs.MoveNext).
Here is what a Microsoft support page says about this:
https://support.microsoft.com/en-us/help/944886/error-message-when-you-use-the-response-binarywrite-method-in-iis-6-an.
But it’s easier in the GUI:
In Internet Information Services (IIS) Manager, click on ASP.
Change Behavior > Limits Properties > Response Buffering Limit from 4 MB to 64 MB.
Apply and restart.
The reason this is happening is because buffering is turned on by default, and IIS 6 cannot handle the large response.
In Classic ASP, at the top of your page, after <%@ Language="VBScript" %> add:
<%Response.Buffer = False%>
In ASP.NET, you would add Buffer="false" to your Page directive.
For example:
<%@ Page Language="C#" Buffer="false" %>
I faced the same kind of issue; my IIS version is 8.5. Increasing the Response Buffering Limit under ASP -> Limits Properties solved the issue.
In IIS 8.5, select your site; you can see the options on the right-hand side, and under IIS you can see the ASP option.
In the options window, increase the Response Buffering Limit to 40194304 (approximately 40 MB).
Navigate away from the option; at the top of the right-hand side you can see the Actions menu. Select Apply. That solved my problem.
If you are not allowed to change the buffer limit at the server level, you will need to use the <%Response.Buffer = False%> method.
HOWEVER, if you are still getting this error and have a large table on the page, the culprit may be the table itself. By design, some versions of Internet Explorer will buffer the entire content between <table> and </table> before rendering it to the page. So even if you are telling the page not to buffer the content, the table element may be buffered and causing this error.
Some alternate solutions may be to paginate the table results, but if you must display the entire table and it has thousands of rows, throw this line of code in the middle of the table generation loop: <% Response.Flush %>. For speed considerations, you may also want to consider adding a basic counter so that the flush only happens every 25 or 100 lines or so.
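The counter idea can be sketched generically; here in Python for illustration (in Classic ASP, the `write`/`flush` callables below would be Response.Write and Response.Flush):

```python
def render_rows(rows, write, flush, every=100):
    """Write table rows, flushing the response every `every` rows so the
    buffer never has to hold the whole table at once."""
    count = 0
    for row in rows:
        write(f"<tr><td>{row}</td></tr>")
        count += 1
        if count % every == 0:
            flush()
    if count % every:          # push out the final partial batch
        flush()

# simulate a response: collect writes, record buffer size at each flush
out, flushed = [], []
render_rows(range(250), out.append, lambda: flushed.append(len(out)))
assert flushed == [100, 200, 250]  # flushed after rows 100, 200, and the rest
```

Flushing in batches keeps the buffer small without paying the per-row cost of flushing on every single write.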
Drawbacks of not buffering the output:
slowdown of overall page load
tables and columns will adjust their widths as content is populated (table appears to wiggle)
Users will be able to click on links and interact with the page before it is fully loaded. So if you have some javascript at the bottom of the page, you may want to move it to the top to ensure it is loaded before some of your faster moving users click on things.
See this KB article for more information http://support.microsoft.com/kb/925764
Hope that helps.
Thank you so much!
<%Response.Buffer = False%> worked like a charm!
My ASP/HTML table was returning a blank page at about 2700 records. The following debugging lines helped expose the buffering problem: I replaced the Do While loop as follows and played with the limit numbers to see what was happening.
Replace
Do While not rs.EOF
'etc .... your block of code that writes the table rows
rs.moveNext
Loop
with
reccount = 0
Do While reccount < 2500
    If rs.EOF Then
        reccount = 2501
    Else
        'etc .... your block of code that writes the table rows
        reccount = reccount + 1
        rs.MoveNext
    End If
Loop
Response.Write "reccount = " & reccount
Raise or lower the 2500 and 2501 to see if it is a buffer problem. For my record set, I could see that the blank page (blank table) was returned at about 2700 records. Good luck to all, and thank you again for solving this problem! Such a simple, great solution!
You can increase the limit as follows:
Stop IIS.
Locate the file %WinDir%\System32\Inetsrv\Metabase.xml
Modify the AspBufferingLimit value. The default value is 4194304, which is about 4 MB; change it to, for example, 20971520 (20 MB).
Restart IIS.
One other cause of the same error message (this just fixed my problem) is the system drive running low on disk space, meaning about 700 KB free. Deleting a lot of unused files on this really old server and then restarting IIS and the website (probably only IIS was necessary) made the problem disappear for me.
I'm sure the other answers are more useful for most people, but for a quick fix, just make sure that the System drive has some free space.
I rectified the error 'ASP 0251 : 80004005' Response Buffer Limit Exceeded as follows:
To increase the buffering limit in IIS 6, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs SET w3svc/aspbufferinglimit LimitSize
Note LimitSize represents the buffering limit size in bytes. For example, the number 67108864 sets the buffering limit size to 64 MB.
To confirm that the buffer limit is set correctly, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs GET w3svc/aspbufferinglimit
Reference: https://support.microsoft.com/en-us/kb/944886
If you are looking for the root cause and don't want to fight the system settings, these are the two major situations I have faced:
You may have an infinite loop that is missing a rs.MoveNext.
Your text data is much larger than you think it is! A common cause is copy-pasting an image from Microsoft Word directly into the editor, so the server translates the image into data objects and saves it in your text field. This can easily occupy database resources and cause buffer problems when you read the data back.
In my case, I just had to write this line before rs.Open:
Response.Flush
rs.Open query, conn
It can also be due to the cursor type. In my scenario the initial value was CursorTypeEnum.adOpenStatic (3); after changing it back to the default, CursorTypeEnum.adOpenForwardOnly (0), everything returned to normal.