ASP.NET Memory Leak Strange Charts

I am trying to diagnose a memory leak, and I'm getting some strange numbers; perhaps someone can point out why I'm seeing what I'm seeing.
The first step I performed was attempting to determine where the leak was occurring: in managed or unmanaged space.
I profiled the process and got the following chart.
According to the various documentation on leak diagnostics, I should see either the private bytes run away while the "all heaps" counter doesn't (indicating an unmanaged leak), or both run away in parallel (indicating a managed one).
It appears I do have a leak (the chart is CPU + Private Bytes + Managed Heaps).
What is puzzling me is this: why do my managed heaps consume only about 30 MB between 9am and just before 5pm (while private bytes grow), and then all of a sudden, BOOM, my managed heaps jump to 3 GB consumed?
Why would this happen?
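(Aside: the two counters behind that chart can also be sampled programmatically, which makes it easy to log the whole 9am-5pm window. A minimal C# sketch; the "w3wp" instance name is an assumption about your worker process:)

using System;
using System.Diagnostics;
using System.Threading;

class LeakWatch
{
    static void Main()
    {
        // The same PerfMon counters the chart plots; adjust the instance name.
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp");
        var managedHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp");

        while (true)
        {
            // A divergence between these two series points at an unmanaged leak.
            Console.WriteLine("{0:T} private={1:N0} managed={2:N0}",
                DateTime.Now, privateBytes.NextValue(), managedHeaps.NextValue());
            Thread.Sleep(TimeSpan.FromSeconds(30)); // sample every 30 seconds
        }
    }
}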
UPDATE:
654cf3d8 199671 6389472 System.Web.HttpCacheValidateHandler
719c25e8 559507 6714084 System.Object
654b82e8 95499 6875928 System.Web.HttpServerVarsCollection
05e90a24 253641 7101948 System.Web.Mvc.NameValueCollectionValueProvider+ValueProviderResultPlaceholder+<>c__DisplayClass8
654e42e4 97208 7776640 System.Web.HttpWriter
04c2a5c8 264802 8473664 Castle.MicroKernel.BurdenReleaseDelegate
04c2ab68 264813 9533268 Castle.MicroKernel.Burden
06bde0a8 507282 10145640 System.Lazy`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
6fb5348c 267697 10707880 System.Collections.Generic.HashSet`1[[System.String, mscorlib]]
654e9388 160209 11535048 System.Web.HttpHeaderCollection
654ad44c 194416 12442624 System.Web.HttpCookieCollection
6fd1abbc 170480 14202840 System.Collections.Generic.HashSet`1+Slot[[System.String, mscorlib]][]
654b2204 95203 15613292 System.Web.HttpCachePolicy
06bde010 507282 16233024 System.Func`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
719c3a6c 469961 18106904 System.Int32[]
654e87e4 97208 18275104 System.Web.Hosting.IIS7WorkerRequest
654e2590 97208 19441600 System.Web.HttpRequest
654e285c 97208 19830432 System.Web.HttpResponse
715fbc80 422170 20264160 System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[System.Object, mscorlib]]
654e2160 97208 23329920 System.Web.HttpContext
654e9614 388836 23330160 System.Web.HttpValueCollection
719c45c8 919071 47791692 System.Collections.Hashtable
654d5220 4808083 115393992 System.Web.HttpServerVarsCollectionEntry
719bfc20 4849839 116396136 System.Collections.ArrayList
719c4584 105080 119191278 System.Byte[]
70d45bec 9064979 145039664 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
719afe88 5391401 175028320 System.Object[]
719c5ed4 919078 237147240 System.Collections.Hashtable+bucket[]
719c2248 7055089 454532758 System.String
OK, so I've run WinDbg over the crash dump (!dumpheap -live -stat) and found that there are a LOT of objects relating to the HTTP context still held in memory (98,000 after a typical work day, in fact).
Can anyone confirm that I shouldn't be seeing this? There are types occurring 97,208 times in the log, which means the HttpRequest, HttpResponse, etc. are being held in memory, causing a lot of leakage.
What could be causing this? I know they're not being stored in the session; my session is set to the default timeout, and when inspecting it, it only contains 3 small string objects.

Figured it out. Running !gcroot highlighted the problem. See how there is a Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy in the reference list?
Castle wasn't being told to release the controller after the request had completed. Mystery solved.
0:000> !gcroot 95963d2c
HandleTable:
00bb12f0 (pinned handle)
-> 03062490 System.Object[]
-> 021501dc Castle.Windsor.WindsorContainer
-> 02150200 Castle.MicroKernel.DefaultKernel
-> 02150304 System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]]
-> 02150af8 System.Collections.Generic.Dictionary`2+Entry[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]][]
-> 02150b74 Castle.Windsor.Diagnostics.DefaultDiagnosticsSubSystem
-> 02150b8c System.Collections.Generic.List`1[[Castle.Windsor.Diagnostics.IContainerDebuggerExtension, Castle.Windsor]]
-> 02150d00 System.Object[]
-> 02150d30 Castle.Windsor.Diagnostics.Extensions.ReleasePolicyTrackedObjects
-> 02150d3c Castle.Windsor.Diagnostics.TrackedComponentsDiagnostic
-> 02150e04 System.EventHandler`1[[Castle.Windsor.Diagnostics.TrackedInstancesEventArgs, Castle.Windsor]]
-> 02150d54 Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy
-> 02150d84 System.Collections.Generic.Dictionary`2[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]]
-> 038da530 System.Collections.Generic.Dictionary`2+Entry[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]][]
-> 9596f3a4 WebController
-> 9596f9cc System.Web.Mvc.ControllerContext
-> 95965b5c System.Web.HttpContextWrapper
-> 95964078 System.Web.HttpContext
-> 95963d2c System.Web.Hosting.IIS7WorkerRequest
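For anyone who lands here with the same symptom: the usual cure is to give MVC a controller factory that resolves controllers from Windsor and, crucially, releases them when the request ends, so LifecycledComponentsReleasePolicy stops tracking the controller and the whole HttpContext graph hanging off it. A minimal sketch; the exact wiring is an assumption about your setup:

using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Castle.MicroKernel;

public class WindsorControllerFactory : DefaultControllerFactory
{
    private readonly IKernel kernel;

    public WindsorControllerFactory(IKernel kernel)
    {
        this.kernel = kernel;
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (controllerType == null)
        {
            throw new HttpException(404,
                "No controller found for path " + requestContext.HttpContext.Request.Path);
        }
        return (IController)kernel.Resolve(controllerType);
    }

    // Without this override, the release policy keeps every resolved
    // controller (and everything it references) alive forever.
    public override void ReleaseController(IController controller)
    {
        kernel.ReleaseComponent(controller);
    }
}

// At application start, assuming 'container' is your WindsorContainer:
// ControllerBuilder.Current.SetControllerFactory(
//     new WindsorControllerFactory(container.Kernel));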

Related

Constraint stream breaks when switching from OptaPlanner 7.46.0 to 8.0.0

I have a constraint that crashes in the latest OptaPlanner 8.0.0, but used to work fine on 7.46.0.
As expected, IntelliJ's code inspection (and the debugger) shows that after the first join, the stream is a TriConstraintStream. The runtime class makes more sense to me than the class OptaPlanner is trying to cast to.
When leaving out the last groupBy the error goes away, so that clause seems to cause the issue.
Did something change in the way join() and groupBy() work?
It seems the underlying code was refactored for 8.0.0, so I have trouble seeing what exactly changed in OptaPlanner.
Should I add something to ensure that a TriJoin is used instead of a BiJoin?
I could not find any relevant notes in the migration documentation.
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .groupBy(Demand::getSKU,
                     Demand::getWeekNumber
            ) // BiConstraintStream
            .join(Demand.class,
                  equal((sku, weekNumber) -> sku, Demand::getSKU),
                  greaterThanOrEqual((sku, weekNumber) -> weekNumber, Demand::getWeekNumber) // TriConstraintStream
            )
            .groupBy((sku, weekNumber, totalDemand) -> sku,
                     (sku, weekNumber, totalDemand) -> weekNumber,
                     sum((sku, weekNumber, totalDemand) -> totalDemand.getOrderQuantity())
            ) // TriConstraintStream
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM,
                      (sku_weekNumber, demandQty, productionQty) -> 1);
}
Stack trace:
java.lang.ClassCastException: class org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner cannot be cast to class org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner (org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner and org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner are in unnamed module of loader 'app')
at org.optaplanner.core.impl.score.stream.drools.common.rules.BiJoinMutator.<init>(BiJoinMutator.java:40)
at org.optaplanner.core.impl.score.stream.drools.common.rules.UniRuleAssembler.join(UniRuleAssembler.java:70)
at org.optaplanner.core.impl.score.stream.drools.common.rules.AbstractRuleAssembler.join(AbstractRuleAssembler.java:179)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:94)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:89)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:431)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.lambda$generateRule$57(ConstraintGraph.java:423)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:424)
at org.optaplanner.core.impl.score.stream.drools.DroolsConstraintFactory.buildSessionFactory(DroolsConstraintFactory.java:101)
at org.optaplanner.core.impl.score.director.stream.ConstraintStreamScoreDirectorFactory.<init>(ConstraintStreamScoreDirectorFactory.java:77)
at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:63)
at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:32)
at com.ohly.planner.constraints.ConstraintsTest.weekShortageSingleSKU(ConstraintsTest.java:61)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
Process finished with exit code -1
[edit] For completeness, the new function as suggested by Lukáš Petrovický:
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                  equal(Demand::getSKU),
                  greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy((d, d2) -> d.getSKU(),
                     (d, d2) -> d.getWeekNumber(),
                     sum((d, d2) -> d2.getOrderQuantity())
            )
            ...
[/edit]
This was an unfortunate bug not caught by the existing test coverage.
The fix is aimed at OptaPlanner 8.0.1, including an improvement to test coverage.
That said, I would argue that the constraint is not very efficient. Unless I'm missing some key implications, the following is semantically very similar, yet much faster:
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                  equal(Demand::getSKU),
                  greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy(..., ..., sum((demand, demand2) -> ...))
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM);
}
Note how I eliminated the first use of groupBy(). There may be some difference though in how many tuples are penalized this way, which may or may not be what you want. Feel free to open another question on that.

Windbg - process frozen on SNIReadSyncOverAsync but no deadlock found

I have a strange problem. A process gets frozen/stuck while reading data using System.Data.SqlClient.SqlDataReader, in the GetValue() function. I am analyzing the process dump using WinDbg. I used commands like !dlk, !syncblk, !analyze -v -hang, etc., but none of them indicate any deadlocks.
The last call on the call stack is:
000000001a98e8a8 0000000076febd7a [InlinedCallFrame: 000000001a98e8a8] .SNIReadSyncOverAsync(SNI_ConnWrapper*, SNI_Packet**, Int32)
000000001a98e8a8 000007fee9e8bca1 [InlinedCallFrame: 000000001a98e8a8] .SNIReadSyncOverAsync(SNI_ConnWrapper*, SNI_Packet**, Int32)
000000001a98e880 000007fee9e8bca1 DomainBoundILStubClass.IL_STUB_PInvoke(SNI_ConnWrapper*, SNI_Packet**, Int32)
000000001a98e950 000007fee9e7254f SNINativeMethodWrapper.SNIReadSyncOverAsync(System.Runtime.InteropServices.SafeHandle, IntPtr ByRef, Int32)
000000001a98e9c0 000007fee9e7226e System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync()
000000001a98ea70 000007fee9e72180 System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket()
000000001a98eab0 000007fee9e72950 System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer()
000000001a98eae0 000007fee9e728d8 System.Data.SqlClient.TdsParserStateObject.TryReadByteArray(Byte[], Int32, Int32, Int32 ByRef)
000000001a98eb50 000007feea4c0adc System.Data.SqlClient.TdsParser.TryReadSqlValue(System.Data.SqlClient.SqlBuffer, System.Data.SqlClient.SqlMetaDataPriv, Int32, System.Data.SqlClient.TdsParserStateObject)
000000001a98ec10 000007fee9e85cb5 System.Data.SqlClient.SqlDataReader.TryReadColumnInternal(Int32, Boolean)
000000001a98ecb0 000007fee9e85af0 System.Data.SqlClient.SqlDataReader.TryReadColumn(Int32, Boolean, Boolean)
000000001a98ed20 000007fee9e85a0b System.Data.SqlClient.SqlDataReader.GetValueInternal(Int32)
000000001a98ed60 000007fee9e859b7 System.Data.SqlClient.SqlDataReader.GetValue(Int32)
What are other avenues for further debugging here? Has anyone faced the same issue with SqlDataReader?
I recently found out that when this kind of stack trace shows up, there is always a Win32 error in the dump like this:
0:009> !gle
LastErrorValue: (Win32) 0x3e5 (997) - Overlapped I/O operation is in progress.
LastStatusValue: (NTSTATUS) 0xc0000034 - Object Name not found.
Does this mean it is blocked due to some underlying IO operation?

Erlang TCP Acceptor Pattern

Consider the following (based on sockserv from LYSE)
%%% The supervisor in charge of all the socket acceptors.
-module(tcpsocket_sup).
-behaviour(supervisor).

-export([start_link/0, start_socket/0]).
-export([init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    {ok, Port} = application:get_env(my_app, tcpPort),
    {ok, ListenSocket} = gen_tcp:listen(
        Port,
        [binary, {packet, 0}, {reuseaddr, true}, {active, true}]),
    lager:info(io_lib:format("Listening for TCP on port ~p", [Port])),
    spawn_link(fun empty_listeners/0),
    {ok, {{simple_one_for_one, 60, 3600},
          [{socket,
            {tcpserver, start_link, [ListenSocket]},
            temporary, 1000, worker, [tcpserver]}
          ]}}.

start_socket() ->
    supervisor:start_child(?MODULE, []).

empty_listeners() ->
    [start_socket() || _ <- lists:seq(1, 20)],
    ok.
%%%-------------------------------------------------------------------
%%% @author mylesmcdonnell
%%% @copyright (C) 2015, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 06. Feb 2015 07:49
%%%-------------------------------------------------------------------
-module(tcpserver).
-author("mylesmcdonnell").
-behaviour(gen_server).

-record(state, {
    next,
    socket}).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, code_change/3, terminate/2]).

-define(SOCK(Msg), {tcp, _Port, Msg}).
-define(TIME, 800).
-define(EXP, 50).

start_link(Socket) ->
    gen_server:start_link(?MODULE, Socket, []).

init(Socket) ->
    gen_server:cast(self(), accept),
    {ok, #state{socket=Socket}}.

handle_call(_E, _From, State) ->
    {noreply, State}.

handle_cast(accept, S = #state{socket=ListenSocket}) ->
    {ok, AcceptSocket} = gen_tcp:accept(ListenSocket),
    kvstore_tcpsocket_sup:start_socket(),
    receive
        {tcp, Socket, <<"store",Value/binary>>} ->
            Uid = kvstore:store(Value),
            send(Socket, Uid);
        {tcp, Socket, <<"retrieve",Key/binary>>} ->
            case kvstore:retrieve(binary_to_list(Key)) of
                [{_, Value}|_] ->
                    send(Socket, Value);
                _ ->
                    send(Socket, <<>>)
            end;
        {tcp, Socket, _} ->
            send(Socket, "INVALID_MSG")
    end,
    {noreply, S#state{socket=AcceptSocket, next=name}}.

handle_info(_, S) ->
    {noreply, S}.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.

terminate(normal, _State) ->
    ok;
terminate(_Reason, _State) ->
    lager:info("terminate reason: ~p~n", [_Reason]).

send(Socket, Bin) ->
    ok = gen_tcp:send(Socket, Bin),
    ok = gen_tcp:close(Socket),
    ok.
I'm unclear on how each tcpserver process is terminated. Is this leaking processes?
I don't see any place that you are terminating the owning process.
I think what you are looking for are four cases:
The client terminates the connection (you receive tcp_closed)
The connection goes wonky (you receive tcp_error)
The server receives a system message to terminate (this could, of course, just be the supervisor killing it, or a kill message)
The client sends a message telling the server it's done, and you want to do some cleanup other than just reacting to tcp_closed.
The most common case is that the client just closes the connection, and for that you want something like:
handle_info({tcp_closed, _}, State) ->
    {stop, normal, State};
The connection getting weird is always a possibility. I can't think of any time I want to have the owning process or the socket stick around, so:
%% You might want to log something here.
handle_info({tcp_error, _}, State) ->
    {stop, normal, State};
And in any case where the client tells the server it's done and you need to do cleanup based on the client having completed something successfully (maybe you have resources open that should be written to first, or a pending DB transaction, or whatever), you would want to expect a success message from the client, close the connection the way your send/2 does, and return {stop, normal, State} to halt the process.
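For example, a minimal sketch of that last case, assuming the client signals completion with a <<"done">> message:

handle_info({tcp, Socket, <<"done">>}, State) ->
    %% Flush or commit any pending work here before letting go.
    ok = gen_tcp:close(Socket),
    {stop, normal, State};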
The key here is making sure you identify the cases where you want to end the connection and either have the server process killed or (better) return {stop, Reason, State}.
As written above, if you intend send/2 to be a single response and a clean exit (or really, that every accept cast should result in a single send/2 and then termination), then you want:
handle_cast(accept, S = #state{socket=ListenSocket}) ->
    {ok, AcceptSocket} = gen_tcp:accept(ListenSocket),
    kvstore_tcpsocket_sup:start_socket(),
    receive
        %% stuff that results in a call to send/2 in any case.
    end,
    {stop, normal, S}.
The case LYSE demonstrates is one where the connection is persistent and there is ongoing back-and-forth between a client and server. In the case above you are handling a single request, spawning a new listener to re-fill the listener pool, and should be exiting because you have no plan for this gen_server to do any further work.

How to understand this Riak stacktrace?

Can anyone help me solve this problem? I have the stack trace, but I can't understand what it actually means.
The error occurs when I try to retrieve all data from a bucket in a Riak database, and I am using the java-riak-client library as the ORM. I can figure out that it's a MapReduce problem, but not much beyond that.
Below is the actual stack trace. I could not figure out what error it is pointing to, and I tried to find the record it shows in the error.
Update: yes, the record is there when I curl for it.
com.basho.riak.client.RiakException: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:81)
at models.UserRecordsModel$.getAllUsers(UserRecordsModel.scala:131)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:42)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:38)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:221)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:220)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:117)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:113)
at scala.Option.map(Option.scala:145)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:113)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:112)
at scala.Option.flatMap(Option.scala:170)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeSecuredBlock(SecureSocial.scala:112)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeBlock(SecureSocial.scala:146)
at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:309)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:108)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:107)
at scala.Option.map(Option.scala:145)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:107)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:100)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.raw.http.ConversionUtil.convert(ConversionUtil.java:589)
at com.basho.riak.client.raw.http.HTTPClientAdapter.mapReduce(HTTPClientAdapter.java:386)
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:79)
... 36 more
The stack trace is telling us that there was a case_clause exception at line 180 of the file riak_kv_wm_mapred.erl.
The clause at that line handles the responses from the pipe processing the map phase, which appears to be returning the error preflist_exhausted, an error that is not explicitly handled by the case statement.
That error usually indicates that one or more vnodes were overloaded or otherwise unavailable, and fallbacks had not yet started to take over their workload.
The affected partition was 913438523331814323877303020447676887284957839360; the console.log and error.log may have further details about what happened.

How to generate good crash log report for Qt application in windows

I got the following crash log from a Qt application. Any idea how the application is implemented to get such a good crash report?
//=====================================================
Crash Time: Fri Aug 20 15:05:51 2010
Tool Version: 1.6 (08/21/09)
Exception code: C0000005 ACCESS_VIOLATION
Fault address: 06706000 00:01FED9E4
Registers:
EAX:067689A0
EBX:00000000
ECX:067689A0
EDX:026304C0
ESI:00000012
EDI:01FEF018
CS:EIP:001B:06706000
SS:ESP:0023:01FEE4B4 EBP:01FEE4CC
DS:0023 ES:0023 FS:003B GS:0000
Flags:00010206
Call stack:
Address Frame
06706000 01FEE4B0 0000:00000000
39DE15ED 01FEE4CC QWidget::reparent+2D
39E7F560 01FEE6F4 QMenuBar::calculateRects+2D0
39E7E4B1 01FEE708 QMenuBar::performDelayedContentsChanged+41
39E7E600 01FEE714 QMenuBar::performDelayedChanges+20
39E7FF24 01FEE88C QMenuBar::drawContents+24
39E4162E 01FEEA88 QFrame::paintEvent+3BE
39DE060E 01FEEB00 QWidget::event+39E
39D56AE8 01FEEB38 QApplication::internalNotify+1E8
39D56785 01FEEC68 QApplication::notify+865
39D0A89B 01FEEC7C QApplication::sendSpontaneousEvent+3B
39D13DEF 01FEED2C QApplication::winMouseButtonUp+20DF
39D1095E 01FEEFB0 QApplication::winFocus+FBE
7E418734 01FEEFDC GetDC+6D
7E418816 01FEF044 GetDC+14F
7E41B4C0 01FEF098 DefWindowProcW+184
7E41B50C 01FEF0C0 DefWindowProcW+1D0
7C90EAE3 01FEF12C KiUserCallbackDispatcher+13
7E418A10 01FEF13C DispatchMessageW+F
39D25EED 01FEF194 QEventLoop::processEvents+2AD
39D76814 01FEF1AC QEventLoop::enterLoop+74
39D766CD 01FEF1B8 QEventLoop::exec+1D
39D56CCC 01FEF1C8 QApplication::exec+1C
0044847A 01FEFED8 0001:0004747A c:\QtTest\too\main.exe
004465C0 01FEFEE8 0001:000455C0 c:\QtTest\too\main.exe
00F17729 01FEFF18 0001:00B16729 c:\QtTest\too\main.exe
00F148E7 01FEFFC0 0001:00B138E7 c:\QtTest\too\main.exe
7C816FD7 01FEFFF0 RegisterWaitForInputIdle+49
Crash Log:
( 54350.189194700) T> Compile_And_Link_User_Code_Body
( 54350.191276250) T> Create_Header_Expanded_Driver called for: testQt
( 54350.197348809) T> computil.adb:Get_Preprocessed_File called SOURCE_FILE=QtTest_TMP.8.20.54350.2.10156.40.c
( 54350.786834903) T> Create_Header_Expanded_Driver called for: testQt
( 54350.792789012) T> computil.adb:Get_Preprocessed_File called SOURCE_FILE=QtTest_TMP.8.20.54350.8.10156.49.c,
( 54351.056315801) G> add_all_testcase_user_code_to_harness failure COMPILER_PKG.COMPILE_ERROR_EXCEPTION
( 54351.056962252) G> HierarchyView::initCoverage() ...
( 54351.068467625) G> starting Parser::build_testobjectlist_from_xml()
There is no crash-report facility provided as part of Qt.
The application in question probably just registers an exception filter with the operating system (e.g. SetUnhandledExceptionFilter on Windows) and compiles and writes this crash report out itself.
If you don't fancy rolling your own crash reporter, there are some nice open-source libraries you could use for generating verbose crash reports, such as Google's Breakpad:
http://code.google.com/p/google-breakpad/
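If you do roll your own, here is a minimal sketch of installing such a filter on Windows and writing a minidump with DbgHelp; the dump file name is an assumption, and a full reporter would additionally walk and symbolize the stack the way the log above does:

#include <windows.h>
#include <dbghelp.h>

#pragma comment(lib, "dbghelp.lib")

static LONG WINAPI crashHandler(EXCEPTION_POINTERS* info)
{
    // Write a minidump into the working directory; a fuller reporter would
    // also format the registers and call stack as in the report above.
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE) {
        MINIDUMP_EXCEPTION_INFORMATION mei = {};
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER; // let the process terminate after reporting
}

int main(int argc, char* argv[])
{
    SetUnhandledExceptionFilter(crashHandler);
    // ... create the QApplication and run app.exec() as usual ...
    return 0;
}

One reason to prefer Breakpad over this sketch is that it can capture the dump from a separate helper process, which is more robust when the crashing process's own heap or stack is corrupted.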
