Constraint stream breaks when switching from OptaPlanner 7.46.0 to 8.0.0

I have a constraint that crashes in the latest OptaPlanner 8.0.0, but used to work fine on 7.46.0.
As expected, IntelliJ's code inspection (and the debugger) shows that after the first join, the stream is a TriConstraintStream. The runtime class makes more sense to me than the class OptaPlanner is trying to cast to.
When leaving out the last groupBy, the error goes away, so that clause seems to cause the issue.
Did something change in the way join and groupBy work?
The underlying OptaPlanner code seems to have been refactored for 8.0.0, so I have trouble seeing what exactly changed.
Should I add something to ensure that a TriJoiner is used instead of a BiJoiner?
I could not find any relevant notes in the migration documentation.
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .groupBy(Demand::getSKU,
                    Demand::getWeekNumber
            ) // BiConstraintStream
            .join(Demand.class,
                    equal((sku, weekNumber) -> sku, Demand::getSKU),
                    greaterThanOrEqual((sku, weekNumber) -> weekNumber, Demand::getWeekNumber) // TriConstraintStream
            )
            .groupBy((sku, weekNumber, totalDemand) -> sku,
                    (sku, weekNumber, totalDemand) -> weekNumber,
                    sum((sku, weekNumber, totalDemand) -> totalDemand.getOrderQuantity())
            ) // TriConstraintStream
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM,
                    (sku_weekNumber, demandQty, productionQty) -> 1);
}
Stack trace:
java.lang.ClassCastException: class org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner cannot be cast to class org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner (org.optaplanner.core.impl.score.stream.tri.CompositeTriJoiner and org.optaplanner.core.impl.score.stream.bi.AbstractBiJoiner are in unnamed module of loader 'app')
at org.optaplanner.core.impl.score.stream.drools.common.rules.BiJoinMutator.<init>(BiJoinMutator.java:40)
at org.optaplanner.core.impl.score.stream.drools.common.rules.UniRuleAssembler.join(UniRuleAssembler.java:70)
at org.optaplanner.core.impl.score.stream.drools.common.rules.AbstractRuleAssembler.join(AbstractRuleAssembler.java:179)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:94)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintSubTree.getRuleAssembler(ConstraintSubTree.java:89)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:431)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.lambda$generateRule$57(ConstraintGraph.java:423)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.optaplanner.core.impl.score.stream.drools.common.ConstraintGraph.generateRule(ConstraintGraph.java:424)
at org.optaplanner.core.impl.score.stream.drools.DroolsConstraintFactory.buildSessionFactory(DroolsConstraintFactory.java:101)
at org.optaplanner.core.impl.score.director.stream.ConstraintStreamScoreDirectorFactory.<init>(ConstraintStreamScoreDirectorFactory.java:77)
at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:63)
at org.optaplanner.test.impl.score.stream.DefaultConstraintVerifier.verifyThat(DefaultConstraintVerifier.java:32)
at com.ohly.planner.constraints.ConstraintsTest.weekShortageSingleSKU(ConstraintsTest.java:61)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
Process finished with exit code -1
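For context, the stack trace shows the constraint being exercised from an OptaPlanner ConstraintVerifier unit test (DefaultConstraintVerifier.verifyThat). A minimal sketch of such a test is below; ProductionConstraintProvider, ProductionSchedule and the Demand constructor are hypothetical stand-ins, since the question does not show the ConstraintProvider, @PlanningSolution or entity classes, and the test is assumed to live in the same package as the provider because the constraint method is protected.
import org.junit.Test;
import org.optaplanner.test.api.score.stream.ConstraintVerifier;

public class ConstraintsTest {

    // Hypothetical provider/solution classes; Demand is assumed to be registered as an entity class.
    private final ConstraintVerifier<ProductionConstraintProvider, ProductionSchedule> constraintVerifier =
            ConstraintVerifier.build(new ProductionConstraintProvider(),
                    ProductionSchedule.class, Demand.class);

    @Test
    public void weekShortageSingleSKU() {
        constraintVerifier.verifyThat(ProductionConstraintProvider::preventProductionShortage)
                .given(new Demand("SKU-1", 1, 10), // illustrative constructor arguments
                        new Demand("SKU-1", 2, 5))
                .penalizesBy(2); // one match per (SKU, week) group with the original constraint
    }
}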
[edit] For completeness, the new function as suggested by Lukáš Petrovický
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                    equal(Demand::getSKU),
                    greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy((d, d2) -> d.getSKU(),
                    (d, d2) -> d.getWeekNumber(),
                    sum((d, d2) -> d2.getOrderQuantity())
            )
            ...
[/edit]

This was an unfortunate bug not caught by the existing test coverage.
The fix is aimed at OptaPlanner 8.0.1, including improved test coverage.
That said, I would argue that the constraint is not very efficient. Unless I'm missing some key implications, the following is semantically very similar, yet much faster:
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                    equal(Demand::getSKU),
                    greaterThanOrEqual(demand -> demand.getWeekNumber()))
            .groupBy(..., ..., sum((demand, demand2) -> ...))
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM);
}
Note how I eliminated the first use of groupBy(). There may be some difference though in how many tuples are penalized this way, which may or may not be what you want. Feel free to open another question on that.
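For reference, combining the [edit] above with this suggestion, a fully written-out version might look like the following sketch. It assumes the same Demand getters (getSKU, getWeekNumber, getOrderQuantity) as in the question; the fixed match weight of 1 per (SKU, week) group mirrors the original constraint's penalty.
protected Constraint preventProductionShortage(ConstraintFactory factory) {
    return factory.from(Demand.class)
            .join(Demand.class,
                    // pair each demand with every demand of the same SKU whose week number is <= its own
                    equal(Demand::getSKU),
                    greaterThanOrEqual(Demand::getWeekNumber))
            .groupBy((demand, other) -> demand.getSKU(),
                    (demand, other) -> demand.getWeekNumber(),
                    sum((demand, other) -> other.getOrderQuantity()))
            .penalize("Penalty", HardMediumSoftScore.ONE_MEDIUM);
}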

Related

Duplicate class found in modules jetified-protobuf-javalite-3.14.0 (com.google.protobuf:protobuf-javalite:3.14.0) and jetified-protobuf-lite-3.0.1 (com.google.protobuf:protobuf-lite:3.0.1)

I have several projects that use com.google.protobuf:protobuf-lite:3.0.1. Now I want to integrate Firebase Performance Monitoring into my application, so I added the following dependencies:
implementation platform('com.google.firebase:firebase-bom:28.3.0')
implementation ("com.google.firebase:firebase-perf")
implementation 'com.google.firebase:firebase-analytics:19.0.1'
implementation 'com.google.firebase:firebase-crashlytics:18.2.1'
implementation 'com.google.firebase:firebase-core:19.0.1'
implementation 'com.google.firebase:firebase-messaging:22.0.0'
implementation 'com.google.firebase:firebase-auth:21.0.1'
I then get the following error:
Duplicate class com.google.protobuf.WireFormat$FieldType$3 found in modules jetified-protobuf-javalite-3.14.0 (com.google.protobuf:protobuf-javalite:3.14.0) and jetified-protobuf-lite-3.0.1 (com.google.protobuf:protobuf-lite:3.0.1)
Then I added the following to the project:
configurations.all {
    exclude group: 'com.google.protobuf', module: "protobuf-lite"
}
This excludes protobuf-lite globally, after which compilation is no longer a problem, but when I open the application I get the following errors:
java.lang.NoClassDefFoundError: failed for class com.google.firebase.perf.v1.NetworkRequestMetric; see exception in other thread
at com.google.firebase.perf.v1.NetworkRequestMetric.newBuilder(NetworkRequestMetric.java:1336)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.(NetworkRequestMetricBuilder.java:61)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.(NetworkRequestMetricBuilder.java:92)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.builder(NetworkRequestMetricBuilder.java:84)
at com.google.firebase.perf.network.FirebasePerfUrlConnection.instrument(FirebasePerfUrlConnection.java:186)
at com.facebook.GraphRequest.createConnection(GraphRequest.java:1376)
at com.facebook.GraphRequest.toHttpConnection(GraphRequest.java:1045)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1132)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1109)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1093)
at com.facebook.GraphRequest.executeAndWait(GraphRequest.java:1068)
at com.facebook.GraphRequest.executeAndWait(GraphRequest.java:962)
at com.facebook.internal.FetchedAppSettingsManager.getAppSettingsQueryResponse(FetchedAppSettingsManager.java:368)
at com.facebook.internal.FetchedAppSettingsManager.access$000(FetchedAppSettingsManager.java:59)
at com.facebook.internal.FetchedAppSettingsManager$1.run(FetchedAppSettingsManager.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
2021-08-28 10:36:24.030 11807-11963/com.realu.dating E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #1
Process: com.realu.dating, PID: 11807
java.lang.NoSuchMethodError: No static method registerDefaultInstance(Ljava/lang/Class;Lcom/google/protobuf/GeneratedMessageLite;)V in class Lcom/google/protobuf/GeneratedMessageLite; or its super classes (declaration of 'com.google.protobuf.GeneratedMessageLite' appears in /data/app/com.realu.dating-wIVBwzcBmy2s70YdCgdDJA==/base.apk!classes19.dex)
at com.google.firebase.perf.v1.NetworkRequestMetric.(NetworkRequestMetric.java:2460)
at com.google.firebase.perf.v1.NetworkRequestMetric.newBuilder(NetworkRequestMetric.java:1336)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.(NetworkRequestMetricBuilder.java:61)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.(NetworkRequestMetricBuilder.java:92)
at com.google.firebase.perf.metrics.NetworkRequestMetricBuilder.builder(NetworkRequestMetricBuilder.java:84)
at com.google.firebase.perf.network.FirebasePerfUrlConnection.instrument(FirebasePerfUrlConnection.java:186)
at com.facebook.GraphRequest.createConnection(GraphRequest.java:1376)
at com.facebook.GraphRequest.toHttpConnection(GraphRequest.java:1045)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1132)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1109)
at com.facebook.GraphRequest.executeBatchAndWait(GraphRequest.java:1093)
at com.facebook.GraphRequest.executeAndWait(GraphRequest.java:1068)
at com.facebook.GraphRequest.executeAndWait(GraphRequest.java:962)
What should I do about this? Thanks.

Spark - "too many open files" in shuffle

Using Spark 1.1
I have 2 datasets. One is very large and the other was reduced (using some 1:100 filtering) to a much smaller scale. I need to reduce the large dataset to the same scale by joining only those items from the smaller list with their corresponding counterparts in the larger list (both lists contain elements with a mutual join field).
I am doing that using the following code:
The "if(joinKeys != null)" part is the relevant part
Smaller list is "joinKeys", larger list is "keyedEvents"
private static JavaRDD<ObjectNode> createOutputType(JavaRDD jsonsList, final String type, String outputPath,JavaPairRDD<String,String> joinKeys) {
outputPath = outputPath + "/" + type;
JavaRDD events = jsonsList.filter(new TypeFilter(type));
// This is in case we need to narrow the list to match some other list of ids... Recommendation List, for example... :)
if(joinKeys != null) {
JavaPairRDD<String,ObjectNode> keyedEvents = events.mapToPair(new KeyAdder("requestId"));
JavaRDD < ObjectNode > joinedEvents = joinKeys.join(keyedEvents).values().map(new PairToSecond());
events = joinedEvents;
}
JavaPairRDD<String,Iterable<ObjectNode>> groupedEvents = events.mapToPair(new KeyAdder("sliceKey")).groupByKey();
// Add convert jsons to strings and add "\n" at the end of each
JavaPairRDD<String, String> groupedStrings = groupedEvents.mapToPair(new JsonsToStrings());
groupedStrings.saveAsHadoopFile(outputPath, String.class, String.class, KeyBasedMultipleTextOutputFormat.class);
return events;
}
The thing is, when running this job I always get the same error:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:40)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2757 in stage 13.0 failed 4 times, most recent failure: Lost task 2757.3 in stage 13.0 (TID 47681, hadoop-w-175.c.taboola-qa-01.internal): java.io.FileNotFoundException: /hadoop/spark/tmp/spark-local-20141201184944-ba09/36/shuffle_6_2757_2762 (Too many open files)
java.io.FileOutputStream.open(Native Method)
java.io.FileOutputStream.<init>(FileOutputStream.java:221)
org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:123)
org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:192)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:67)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:65)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:65)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
I already increased my ulimits, by doing the following on all cluster machines:
echo "* soft nofile 900000" >> /etc/security/limits.conf
echo "root soft nofile 900000" >> /etc/security/limits.conf
echo "* hard nofile 990000" >> /etc/security/limits.conf
echo "root hard nofile 990000" >> /etc/security/limits.conf
echo "session required pam_limits.so" >> /etc/pam.d/common-session
echo "session required pam_limits.so" >> /etc/pam.d/common-session-noninteractive
But that doesn't fix my problem...
The bdutil framework works in such a way that the user "hadoop" is the one running the job. The script that deploys the cluster created a file /etc/security/limits.d/hadoop.conf that overrode the ulimit settings for the "hadoop" user, which I wasn't aware of. By deleting this file, or alternatively setting the desired ulimits there, I was able to resolve the problem.

How to understand this Riak stacktrace?

Can anyone help me solve this problem? I have the stack trace of the problem, but I can't understand what the trace actually means.
The error occurs when I try to retrieve all data from a bucket in a Riak database, and I am using the java-riak-client library as the ORM. I can figure out that it's a MapReduce problem, but not much beyond that.
Below is the actual stack trace. I could not figure out what error it is pointing to, so I tried to find out which record it is displaying in the error.
Update: Yes, the record is there when I cURL for it.
com.basho.riak.client.RiakException: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:81)
at models.UserRecordsModel$.getAllUsers(UserRecordsModel.scala:131)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:42)
at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:38)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:221)
at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:220)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:117)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:113)
at scala.Option.map(Option.scala:145)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:113)
at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:112)
at scala.Option.flatMap(Option.scala:170)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeSecuredBlock(SecureSocial.scala:112)
at securesocial.core.SecureSocial$SecuredActionBuilder.invokeBlock(SecureSocial.scala:146)
at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:309)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:108)
at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:107)
at scala.Option.map(Option.scala:145)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:107)
at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:100)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
{error,
{case_clause,
{error,
{0,
[{module,riak_kv_mrc_map},
{partition,913438523331814323877303020447676887284957839360},
{details,
[{fitting,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}},
{name,0},
{module,riak_kv_mrc_map},
{arg,{{jsfun,<<"Riak.mapValuesJson">>},none}},
{output,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined}},
{options,
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}]},
{q_limit,64}]},
{type,forward_preflist},
{error,[preflist_exhausted]},
{input,
{ok,{r_object,<<"xxxx-users">>,
<<"xxxx#hotmail.com-userpass">>,
[{r_content,
{dict,7,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[],[]},
{{[],[],
[[<<"Links">>]],
[],[],[],[],[],[],[],
[[<<"content-type">>,97,112,112,108,
105,99,97,116,105,111,110,47,106,
115,111,110],
[<<"X-Riak-VTag">>,54,98,119,73,73,
84,107,120,66,70,107,86,102,67,103,
71,73,116,120,121,85,53]],
[[<<"index">>]],
[],
[[<<"X-Riak-Last-Modified">>|
{1407,514685,380030}]],
[],
[[<<"charset">>,117,116,102,45,56],
[<<"X-Riak-Meta">>]]}}},
<<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}],
[{<<35,9,254,249,83,228,76,146>>,
{1,63574733885}}],
{dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[[clean|true]],
[]}}},
undefined},
undefined}},
{modstate,
{state,
913438523331814323877303020447676887284957839360,
{fitting_details,
{fitting,<0.21083.23>,#Ref<0.0.31.39954>,
follow,1},
0,riak_kv_mrc_map,
{{jsfun,<<"Riak.mapValuesJson">>},none},
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink,
undefined},
[{log,sink},
{trace,[error]},
{sink,
{fitting,<0.21081.23>,#Ref<0.0.31.39954>,
sink,undefined}},
{sink_type,{fsm,10,infinity}}],
64},
{jsfun,<<"Riak.mapValuesJson">>},
none}},
{stack,[]}]}}},
[{riak_kv_wm_mapred,pipe_mapred_nonchunked,3,
[{file,"src/riak_kv_wm_mapred.erl"},{line,180}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,183}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,141}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,481}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,1,
[{file,"src/webmachine_mochiweb.erl"},{line,97}]},
{mochiweb_http,parse_headers,5,
[{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
at com.basho.riak.client.raw.http.ConversionUtil.convert(ConversionUtil.java:589)
at com.basho.riak.client.raw.http.HTTPClientAdapter.mapReduce(HTTPClientAdapter.java:386)
at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:79)
... 36 more
The stack trace is telling us that there was a case clause exception at line 180 of the file riak_kv_wm_mapred.erl.
The clause at that line is handling the responses for the pipe processing the map phase, which appears to be returning the error preflist_exhausted, which is not explicitly handled by the case statement.
That error usually indicates that one or more vnodes were overloaded or otherwise unavailable, and fallbacks had not yet started to take over their workload.
The affected partition was 913438523331814323877303020447676887284957839360, the console.log and error.log may have further details about what happened.

ASP.NET Memory Leak Strange Charts

I am trying to diagnose a memory leak, and I'm getting some strange numbers, perhaps someone can point out why I'm seeing what I'm seeing.
The first step I performed was attempting to see where the leak was occurring: in managed or unmanaged space.
I profiled the process and got the following chart.
According to the various documentation on leak diagnostics, I should see either the private bytes run away while the "all heaps" counter doesn't (indicating an unmanaged leak), or both run away in parallel (indicating a managed one).
It appears I do have a leak (the chart is CPU + Private Bytes + Managed Heaps).
What is puzzling me - is - why do my managed heaps consume only about 30MB between 9am and just before 5pm (but private bytes grow), then all of a sudden - BOOM my managed heaps jump to 3 gigs consumed?
Why would this happen?
UPDATE:
654cf3d8 199671 6389472 System.Web.HttpCacheValidateHandler
719c25e8 559507 6714084 System.Object
654b82e8 95499 6875928 System.Web.HttpServerVarsCollection
05e90a24 253641 7101948 System.Web.Mvc.NameValueCollectionValueProvider+ValueProviderResultPlaceholder+<>c__DisplayClass8
654e42e4 97208 7776640 System.Web.HttpWriter
04c2a5c8 264802 8473664 Castle.MicroKernel.BurdenReleaseDelegate
04c2ab68 264813 9533268 Castle.MicroKernel.Burden
06bde0a8 507282 10145640 System.Lazy`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
6fb5348c 267697 10707880 System.Collections.Generic.HashSet`1[[System.String, mscorlib]]
654e9388 160209 11535048 System.Web.HttpHeaderCollection
654ad44c 194416 12442624 System.Web.HttpCookieCollection
6fd1abbc 170480 14202840 System.Collections.Generic.HashSet`1+Slot[[System.String, mscorlib]][]
654b2204 95203 15613292 System.Web.HttpCachePolicy
06bde010 507282 16233024 System.Func`1[[System.Web.Mvc.ValueProviderResult, System.Web.Mvc]]
719c3a6c 469961 18106904 System.Int32[]
654e87e4 97208 18275104 System.Web.Hosting.IIS7WorkerRequest
654e2590 97208 19441600 System.Web.HttpRequest
654e285c 97208 19830432 System.Web.HttpResponse
715fbc80 422170 20264160 System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[System.Object, mscorlib]]
654e2160 97208 23329920 System.Web.HttpContext
654e9614 388836 23330160 System.Web.HttpValueCollection
719c45c8 919071 47791692 System.Collections.Hashtable
654d5220 4808083 115393992 System.Web.HttpServerVarsCollectionEntry
719bfc20 4849839 116396136 System.Collections.ArrayList
719c4584 105080 119191278 System.Byte[]
70d45bec 9064979 145039664 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
719afe88 5391401 175028320 System.Object[]
719c5ed4 919078 237147240 System.Collections.Hashtable+bucket[]
719c2248 7055089 454532758 System.String
Ok, so I've run windbg over the crash dump (!dumpheap -live -stat) and I've found that there are a LOT of objects relating to the HTTP context which are still held in memory (98,000 after a typical work day, in fact).
Can anyone confirm... I shouldn't be seeing this, right? There are types occurring 97,208 times in the log. This means that the HttpRequest/HttpResponse, etc. are being held in memory, causing a LOT of leakage.
What could be causing this? I know they're not being stored in the session... my session is set to the default timeout, and when inspecting it, it only contains 3 small string objects.
Figured it out. Running !gcroot highlighted the problem. See how there is a Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy in the reference list?
Castle wasn't being told to release the controller after the request had completed. Mystery solved.
0:000> !gcroot 95963d2c
HandleTable:
00bb12f0 (pinned handle)
-> 03062490 System.Object[]
-> 021501dc Castle.Windsor.WindsorContainer
-> 02150200 Castle.MicroKernel.DefaultKernel
-> 02150304 System.Collections.Generic.Dictionary`2[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]]
-> 02150af8 System.Collections.Generic.Dictionary`2+Entry[[System.String, mscorlib],[Castle.MicroKernel.ISubSystem, Castle.Windsor]][]
-> 02150b74 Castle.Windsor.Diagnostics.DefaultDiagnosticsSubSystem
-> 02150b8c System.Collections.Generic.List`1[[Castle.Windsor.Diagnostics.IContainerDebuggerExtension, Castle.Windsor]]
-> 02150d00 System.Object[]
-> 02150d30 Castle.Windsor.Diagnostics.Extensions.ReleasePolicyTrackedObjects
-> 02150d3c Castle.Windsor.Diagnostics.TrackedComponentsDiagnostic
-> 02150e04 System.EventHandler`1[[Castle.Windsor.Diagnostics.TrackedInstancesEventArgs, Castle.Windsor]]
-> 02150d54 Castle.MicroKernel.Releasers.LifecycledComponentsReleasePolicy
-> 02150d84 System.Collections.Generic.Dictionary`2[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]]
-> 038da530 System.Collections.Generic.Dictionary`2+Entry[[System.Object, mscorlib],[Castle.MicroKernel.Burden, Castle.Windsor]][]
-> 9596f3a4 WebController
-> 9596f9cc System.Web.Mvc.ControllerContext
-> 95965b5c System.Web.HttpContextWrapper
-> 95964078 System.Web.HttpContext
-> 95963d2c System.Web.Hosting.IIS7WorkerRequest

Changing map to pmap in my Clojure program leads to weird exception (ClassCastException)

As far as I know, pmap in Clojure works just like map, but it calculates results in parallel, using futures under the hood. So it should "just work" with a function and a sequence if map works with them. (Unless there are evil side effects that prevent it, but in the case of my program there is nothing more than loading data from an HTTP server and transforming it.)
Yet in my case pmap doesn't work as expected. Why can this happen?
The problem arises here (if I change map to pmap):
https://github.com/magicgoose/DvachMaster/blob/master/src/dvach/core.clj#L82
(defn thread-list
  "load threads from all pages, trying each page at most `max-trials` times with `retry-inteval`"
  [board]
  (try
    (let [p0 (load-body (board-addr board 0))
          numpages (count (:pages p0))
          other-pages (map ; problem here
                        (comp
                          load-body
                          (partial board-addr board))
                        (range 1 numpages))
          all-pages (cons p0 other-pages)]
      (doall
        ((comp (partial reduce concat) (partial map :threads)) all-pages)))
    (catch Throwable e
      (.printStackTrace e))))
The exception I get:
java.util.concurrent.ExecutionException: java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.concurrent.Future
at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at clojure.core$deref_future.invoke(core.clj:2108)
at clojure.core$future_call$reify__6267.deref(core.clj:6308)
at clojure.core$deref.invoke(core.clj:2128)
at clojure.core$pmap$step__6280$fn__6282.invoke(core.clj:6358)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.RT.seq(RT.java:484)
at clojure.core$seq.invoke(core.clj:133)
at clojure.core$map$fn__4207.invoke(core.clj:2479)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:598)
at clojure.core$next.invoke(core.clj:64)
at clojure.core.protocols$fn__6034.invoke(protocols.clj:146)
at clojure.core.protocols$fn__6005$G__6000__6014.invoke(protocols.clj:19)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:27)
at clojure.core.protocols$fn__6026.invoke(protocols.clj:53)
at clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6175)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$partial$fn__4190.doInvoke(core.clj:2396)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$comp$fn__4154.invoke(core.clj:2331)
at dvach.core$thread_list.invoke(core.clj:91)
at dvach.core$eval3813.invoke(NO_SOURCE_FILE:2)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6582)
at clojure.core$eval.invoke(core.clj:2852)
at clojure.main$repl$read_eval_print__6588$fn__6591.invoke(main.clj:259)
at clojure.main$repl$read_eval_print__6588.invoke(main.clj:259)
at clojure.main$repl$fn__6597.invoke(main.clj:277)
at clojure.main$repl.doInvoke(main.clj:277)
at clojure.lang.RestFn.invoke(RestFn.java:1096)
at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__1023.invoke(interruptible_eval.clj:56)
at clojure.lang.AFn.applyToHelper(AFn.java:159)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:617)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1788)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.tools.nrepl.middleware.interruptible_eval$evaluate.invoke(interruptible_eval.clj:41)
at clojure.tools.nrepl.middleware.interruptible_eval$interruptible_eval$fn__1064$fn__1067.invoke(interruptible_eval.clj:171)
at clojure.core$comp$fn__4154.invoke(core.clj:2330)
at clojure.tools.nrepl.middleware.interruptible_eval$run_next$fn__1057.invoke(interruptible_eval.clj:138)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.concurrent.Future
at clojure.core$deref_future.invoke(core.clj:2108)
at clojure.core$deref.invoke(core.clj:2129)
at dvach.core$load_body.invoke(core.clj:74)
at clojure.core$comp$fn__4154.invoke(core.clj:2331)
at clojure.core$pmap$fn__6275$fn__6276.invoke(core.clj:6354)
at clojure.core$binding_conveyor_fn$fn__4107.invoke(core.clj:1836)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
... 3 more
The problem the stack trace complains about is with @max-trials on line 74; this should read max-trials instead. (max-trials is a loop variable initialized to @retry-count on line 66; it'll be a number then, to be decremented on each iteration.)
It may well arise intermittently, since that point in the code is only reached if the try block starting on line 68 fails to fetch the result.
