How to understand this Riak stacktrace?
Can anyone help me solve this problem? I have the stack trace of the problem, but I can't understand what the trace actually means.
The error occurs when I try to retrieve all data from a bucket in a Riak database. I am using the java-riak-client library as the ORM. I can tell that it is a MapReduce problem, but not much beyond that.
Below is the actual stack trace. I could not figure out what error it is pointing to, and I tried to find the record it displays in the error.
Update: yes, the record is there when I fetch it with cURL.
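Before the trace itself, for context: this kind of "fetch everything in a bucket" call runs as a full-bucket MapReduce job over HTTP. Below is a minimal sketch of how such a job is typically issued with the legacy riak-java-client 1.x API; the bucket name and the Riak.mapValuesJson phase are taken from the trace that follows, while the host URL and the surrounding class are assumptions for illustration:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.query.MapReduceResult;
import com.basho.riak.client.query.functions.NamedJSFunction;

public class FetchAllUsers {
    public static void main(String[] args) throws Exception {
        // HTTP transport, matching the HTTPClientAdapter frames in the trace
        IRiakClient client = RiakFactory.httpClient("http://127.0.0.1:8098/riak");

        // Full-bucket input feeds every stored object through the JS map phase
        MapReduceResult result = client.mapReduce("xxxx-users")
                .addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true)
                .execute(); // this is the call that surfaces the RiakException below

        System.out.println(result.getResultRaw());
        client.shutdown();
    }
}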
com.basho.riak.client.RiakException: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:81)
at models.UserRecordsModel$.getAllUsers(UserRecordsModel.scala:131)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:42)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:38)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:221)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:220)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:117)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:113)
at scala.Option.map(Option.scala:145)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:113)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:112)
at scala.Option.flatMap(Option.scala:170)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeSecuredBlock(SecureSocial.scala:112)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeBlock(SecureSocial.scala:146)
at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:309)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:108)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:107)
at scala.Option.map(Option.scala:145)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:107)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:100)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.raw.http.ConversionUtil.convert(ConversionUtil.java:589)
at com.basho.riak.client.raw.http.HTTPClientAdapter.mapReduce(HTTPClientAdapter.java:386)
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:79)
... 36 more
The stack trace tells us that a case_clause exception was raised at line 180 of the file riak_kv_wm_mapred.erl.
The clause at that line handles the responses from the pipe processing the map phase; the pipe appears to be returning the error preflist_exhausted, which the case statement does not explicitly handle.
That error usually indicates that one or more vnodes were overloaded or otherwise unavailable, and fallbacks had not yet started to take over their workload.
The affected partition was 913438523331814323877303020447676887284957839360; console.log and error.log may have further details about what happened.
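Because preflist_exhausted usually signals a transient overload rather than corrupted data, one pragmatic client-side mitigation is to retry the job after a short backoff, giving fallback vnodes time to take over. A minimal sketch against the riak-java-client 1.x types; the helper itself and the backoff numbers are hypothetical, not part of the library:

import com.basho.riak.client.RiakException;
import com.basho.riak.client.query.MapReduce;
import com.basho.riak.client.query.MapReduceResult;

public final class MapReduceRetry {
    // Hypothetical helper: retry a failed MapReduce job with linear backoff.
    public static MapReduceResult execute(MapReduce job, int maxAttempts)
            throws RiakException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                return job.execute();
            } catch (RiakException e) {
                if (attempt >= maxAttempts) {
                    throw e; // still failing: give up and let the caller see the error
                }
                Thread.sleep(500L * attempt); // wait a little longer after each failure
            }
        }
    }
}

Retrying only papers over the symptom; if the errors persist, check whether the node owning that partition is overloaded (vnode queues, disk, memory) around the times logged in console.log and error.log.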
Related
Constraint stream breaks when switching from OptaPlanner 7.46.0 to 8.0.0
I have a constraint that crashes in the latest OptaPlanner 8.0.0, but used to work fine on 7.46.0. As expected, IntelliJ's code inspection (and the debugger) shows that after the first join, the stream is a TriConstraintStream. The runtime class makes more sense to me than the class OptaPlanner is trying to cast to. When leaving out the last groupBy the error goes away, so that clause seems to cause the issue. Did something change in the way join and groupBy work? It seems that the underlying OptaPlanner code was refactored for 8.0.0, so I have trouble seeing what exactly changed. Should I add something to ensure that a TriJoin is used instead of a BiJoin? I could not find any relevant notes in the migration documentation.

protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .groupBy(Demand::getSKU,
                     Demand::getWeekNumber) // BiConstraintStream
            .join(Demand.class,
                  equal((sku, weekNumber) -> sku, Demand::getSKU),
                  greaterThanOrEqual((sku, weekNumber) -> weekNumber, Demand::getWeekNumber)) // TriConstraintStream
            .groupBy((sku, weekNumber, totalDemand) -> sku,
                     (sku, weekNumber, totalDemand) -> weekNumber,
                     sum((sku, weekNumber, totalDemand) -> totalDemand.getOrderQuantity())) // TriConstraintStream
            .penalize("Penalty",
                      HardMediumSoftScore.ONE_MEDIUM,
                      (sku_weekNumber, demandQty, productionQty) -> 1);
}

Stack trace:

java.lang.ClassCastException: class org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner cannot be cast to class org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner (org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner and org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner are in unnamed module of loader 'app')
    at org.optaplanner.core.impl.score.stream.drools.common.rules.BiJoinMutator.<init>(BiJoinMutator.java:40)
    at org.optaplanner.core.impl.score.stream.drools.common.rules.UniRuleAssembler.join(UniRuleAssembler.java:70)
    at org.optaplanner.core.impl.score.stream.drools.common.rules.AbstractRuleAssembler.join(AbstractRuleAssembler.java:179)
    at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:94)
    at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:89)
    at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:431)
    at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.lambda$generateRule$57(ConstraintGraph.java:423)
    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
    at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:424)
    at org.optaplanner.core.impl.score.stream.drools.DroolsConstraintFactory.buildSessionFactory(DroolsConstraintFactory.java:101)
    at org.optaplanner.core.impl.score.director.stream.ConstraintStreamScoreDirectorFactory.<init>(ConstraintStreamScoreDirectorFactory.java:77)
    at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:63)
    at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:32)
    at com.ohly.planner.constraints.ConstraintsTest.weekShortageSingleSKU(ConstraintsTest.java:61)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
    at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
    at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
    at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)

Process finished with exit code -1

[edit] For completeness, the new function as suggested by Lukáš Petrovický:

protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                  equal(Demand::getSKU),
                  greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy((d, d2) -> d.getSKU(),
                     (d, d2) -> d.getWeekNumber(),
                     sum((d, d2) -> d2.getOrderQuantity()))
            ...
[/edit]
This was an unfortunate bug not caught by the existing test coverage. The fix is aimed at OptaPlanner 8.0.1, including a test coverage improvement.

That said, I would argue that the constraint is not very efficient. Unless I'm missing some key implications, the following is semantically very similar, yet much faster:

protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                  equal(Demand::getSKU),
                  greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy(..., ..., sum((demand, demand2) -> ...))
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM);
}

Note how I eliminated the first use of groupBy(). There may be some difference, though, in how many tuples are penalized this way, which may or may not be what you want. Feel free to open another question on that.
Riak DB returns 500 Internal Server Error
I seem to be having an issue with our riak-kv db v2.2.3. We are currently running a 5-node cluster, and out of nowhere last week requests to our db suddenly started returning 500 Internal Server Error for all/almost every request. (It might also be worth mentioning that our db has nearly doubled in size over the past 2 weeks.)

At first we thought the issue was in the code making the requests, but after sshing into one of the nodes in the cluster and attempting a simple request to list buckets we saw this:

Command:

curl -i http://localhost:8098/buckets?buckets=true

Response:

HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.10.9 (cafe not found)
Date: Sun, 10 Dec 2017 06:21:20 GMT
Content-Type: text/html
Content-Length: 1193

<html><head><title>500 Internal Server Error</title></head><body>
<h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
    {error,
        {badmatch,{error,mailbox_overload}},
        [{riak_kv_wm_buckets,produce_bucket_list,2,
             [{file,"src/riak_kv_wm_buckets.erl"},{line,225}]},
         {webmachine_resource,resource_call,3,
             [{file,"src/webmachine_resource.erl"},{line,186}]},
         {webmachine_resource,do,3,
             [{file,"src/webmachine_resource.erl"},{line,142}]},
         {webmachine_decision_core,resource_call,1,
             [{file,"src/webmachine_decision_core.erl"},{line,48}]},
         {webmachine_decision_core,decision,1,
             [{file,"src/webmachine_decision_core.erl"},{line,562}]},
         {webmachine_decision_core,handle_request,2,
             [{file,"src/webmachine_decision_core.erl"},{line,33}]},
         {webmachine_mochiweb,loop,2,
             [{file,"src/webmachine_mochiweb.erl"},{line,72}]},
         {mochiweb_http,headers,5,
             [{file,"src/mochiweb_http.erl"},{line,105}]}]}}</pre><P><HR>
<ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>

After some more investigation into the issue, I pulled the logs on one of the riak nodes and saw this:

2017-12-10 03:54:58.654 [error] <0.24342.271>@yz_solrq_helper:send_solr_ops_for_entries:301 Updating a batch of Solr operations failed for index <<"attachment">> with error {error,{other,{ok,"500",[{"Content-Type","application/json; charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"
{"responseHeader":{"status":500,"QTime":1},
 "error":{
   "msg":"Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.",
   "trace":"org.apache.solr.common.SolrException: Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.
       at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:169)
       at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
       at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
       at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:952)
       at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)
       at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:141)
       at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)
       at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)
       at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
       at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
       at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
       at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
       at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
       at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
       at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
       at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
       at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
       at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
       at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
       at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
       at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
       at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
       at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
       at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
       at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
       at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
       at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
       at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
       at org.eclipse.jetty.server.Server.handle(Server.java:368)
       at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
       at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
       at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
       at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
       at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
       at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
       at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
       at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
       at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
       at org.eclip..."
 }
}>>}}}

After some googling, some people said this error is caused by java.lang.OutOfMemoryError: Java heap space, so I modified my riak config from:

search.solr.jvm_options = -d64 -Xms4g -Xmx4g -XX:+UseStringCache -XX:+UseCompressedOops

to:

search.solr.jvm_options = -d64 -Xms8g -Xmx8g -XX:+UseStringCache -XX:+UseCompressedOops

However, I am still getting the same error. Any help is much appreciated.
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR when using openstacksdk to create_server
When I create an OpenStack server, I get the exception below:

Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR

My code is below:

server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}

try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:
    # here I catch the exception
    raise e

When calling create_server, my server_args data is:

{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}

My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node. After I changed it to a smaller flavor, the server was created successfully.
SQLITE_ERROR: Connection is closed when connecting from Spark via JDBC to SQLite database
I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine, but when I do some operations on the created object, I get the error below, which says "SQL error or missing database (Connection is closed)". The funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?

Start command for spark-shell:

../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar

Reading from the database:

val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()

Simple count (fails):

emails.count

Error:

15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
    at org.sqlite.core.DB.newSQLException(DB.java:890)
    at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
    at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
    at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
    at org.apache.spark.scheduler.Task.run(Task.scala:90)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

res1: Long = 7945
I got the same error today, and the important line is just before the exception:

15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection
15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
    at org.sqlite.core.DB.newSQLException(DB.java:890)
    at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
    at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)

So Spark succeeded in closing the JDBC connection, and then failed to close the JDBC statement.

Looking at the source, close() is called twice.

Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1):

context.addTaskCompletionListener{ context => close() }

Line 469:

override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}

If you look at the close() method (line 443):

def close() {
  if (closed) return

you can see that it checks the variable closed, but that value is never set to true. If I see it correctly, this bug is still in master. I have filed a bug report.

Source: JDBCRDD.scala (line numbers differ slightly)
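The fix pattern this answer is pointing at is an idempotent close(): record on the first call that the cursor has been closed, so the second call becomes a no-op instead of touching a dead connection. A minimal sketch of that guard in Java (a hypothetical class for illustration, not the actual Spark patch, which is in Scala):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class JdbcCursor {
    private final Statement stmt;
    private final Connection conn;
    private boolean closed = false;

    JdbcCursor(Statement stmt, Connection conn) {
        this.stmt = stmt;
        this.conn = conn;
    }

    void close() throws SQLException {
        if (closed) {
            return;    // second invocation is now a no-op
        }
        closed = true; // the assignment JDBCRDD's close() forgot
        try {
            stmt.close(); // statement first, while the connection is still open
        } finally {
            conn.close();
        }
    }
}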
ASP.NET Memory Leak Strange Charts
I am trying to diagnose a memory leak, and I'm getting some strange numbers; perhaps someone can point out why I'm seeing what I'm seeing.

The first step I performed was attempting to see where the leak was occurring, in managed or unmanaged space. I profiled the process and got the following chart. According to the various documentation on leak diagnostics, I should see either the private bytes run away while the "all heaps" don't (indicating an unmanaged leak), or both run away in parallel (indicating a managed one). It appears I do have a leak (the chart is CPU + Private Bytes + Managed Heaps).

What is puzzling me is: why do my managed heaps consume only about 30MB between 9am and just before 5pm (while private bytes grow), and then all of a sudden, BOOM, my managed heaps jump to 3 gigs consumed? Why would this happen?

UPDATE:

654cf3d8   199671     6389472  System.Web.HttpCacheValidateHandler
719c25e8   559507     6714084  System.Object
654b82e8    95499     6875928  System.Web.HttpServerVarsCollection
05e90a24   253641     7101948  System.Web.Mvc.NameValueCollectionValueProvider+ValueProviderResultPlaceholder+<>c__DisplayClass8
654e42e4    97208     7776640  System.Web.HttpWriter
04c2a5c8   264802     8473664  Castle.MicroKernel.BurdenReleaseDelegate
04c2ab68   264813     9533268  Castle.MicroKernel.Burden
06bde0a8   507282    10145640  System.Lazy`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
6fb5348c   267697    10707880  System.Collections.Generic.HashSet`1[[System.String, mscorlib]]
654e9388   160209    11535048  System.Web.HttpHeaderCollection
654ad44c   194416    12442624  System.Web.HttpCookieCollection
6fd1abbc   170480    14202840  System.Collections.Generic.HashSet`1+Slot[[System.String, mscorlib]][]
654b2204    95203    15613292  System.Web.HttpCachePolicy
06bde010   507282    16233024  System.Func`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
719c3a6c   469961    18106904  System.Int32[]
654e87e4    97208    18275104  System.Web.Hosting.IIS7WorkerRequest
654e2590    97208    19441600  System.Web.HttpRequest
654e285c    97208    19830432  System.Web.HttpResponse
715fbc80   422170    20264160  System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[System.Object, mscorlib]]
654e2160    97208    23329920  System.Web.HttpContext
654e9614   388836    23330160  System.Web.HttpValueCollection
719c45c8   919071    47791692  System.Collections.Hashtable
654d5220  4808083   115393992  System.Web.HttpServerVarsCollectionEntry
719bfc20  4849839   116396136  System.Collections.ArrayList
719c4584   105080   119191278  System.Byte[]
70d45bec  9064979   145039664  System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
719afe88  5391401   175028320  System.Object[]
719c5ed4   919078   237147240  System.Collections.Hashtable+bucket[]
719c2248  7055089   454532758  System.String

Ok, so I've run windbg over the crash dump (!dumpheap -live -stat), and I've found that there are a LOT of objects relating to the HTTP context still held in memory (98,000 after a typical work day, in fact). Can anyone confirm... I shouldn't be seeing this, right? There are types occurring 97,208 times in the log; this means that the HttpRequest/HttpResponse, etc. are being held in memory, causing a LOT of leakage. What could be causing this? I know they're not being stored in the session: my session is set to the default timeout, and when inspecting, it only contains 3 small string objects.
Figured it out. Running !gcroot highlighted the problem. See how there is a Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy in the reference list? Castle wasn't being told to release the controller after the request had completed. Mystery solved.

0:000> !gcroot 95963d2c
HandleTable:
    00bb12f0 (pinned handle)
    -> 03062490 System.Object[]
    -> 021501dc Castle.Windsor.WindsorContainer
    -> 02150200 Castle.MicroKernel.DefaultKernel
    -> 02150304 System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]]
    -> 02150af8 System.Collections.Generic.Dictionary`2+Entry[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]][]
    -> 02150b74 Castle.Windsor.Diagnostics.DefaultDiagnosticsSubSystem
    -> 02150b8c System.Collections.Generic.List`1[[Castle.Windsor.Diagnostics.IContainerDebuggerExtension, Castle.Windsor]]
    -> 02150d00 System.Object[]
    -> 02150d30 Castle.Windsor.Diagnostics.Extensions.ReleasePolicyTrackedObjects
    -> 02150d3c Castle.Windsor.Diagnostics.TrackedComponentsDiagnostic
    -> 02150e04 System.EventHandler`1[[Castle.Windsor.Diagnostics.TrackedInstancesEventArgs, Castle.Windsor]]
    -> 02150d54 Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy
    -> 02150d84 System.Collections.Generic.Dictionary`2[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]]
    -> 038da530 System.Collections.Generic.Dictionary`2+Entry[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]][]
    -> 9596f3a4 WebController
    -> 9596f9cc System.Web.Mvc.ControllerContext
    -> 95965b5c System.Web.HttpContextWrapper
    -> 95964078 System.Web.HttpContext
    -> 95963d2c System.Web.Hosting.IIS7WorkerRequest