Workflow timeout functionality not working in AEM - Adobe
We are trying to implement a timeout for a Participant step, but it doesn't seem to be working. When we configure the timeout as Immediate, we can see the timeout behavior, but when we select an actual time such as 1 hour, the step does not time out after the specified interval. Is there any other configuration we have to do to achieve this?
As far as I can see, this is caused by a bug in com.adobe.granite.workflow.core.job.WorkflowJobUtils. When I set the timeout for a Participant step to 1h, this message appears in the log:
03.10.2018 16:26:10.681 *DEBUG* [0:0:0:0:0:0:0:1 [1538573170619] POST /etc/workflow/instances HTTP/1.1] com.adobe.granite.workflow.core.job.WorkflowJobUtils calling createTimeoutJob for workitem /etc/workflow/instances/server0/2018-10-03/test-model_98/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_98
03.10.2018 16:26:10.681 *DEBUG* [0:0:0:0:0:0:0:1 [1538573170619] POST /etc/workflow/instances HTTP/1.1] com.adobe.granite.workflow.core.job.WorkflowJobUtils create timeout event {com.adobe.granite.workflow.console.timeout.job=com.adobe.granite.workflow.job.TimeoutJob#989455c, event.timed.date=Wed Nov 14 08:26:10 MSK 2018, event.timed.id=/etc/workflow/instances/server0/2018-10-03/test-model_98/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_98_1538573170632}
Note "event.timed.date=Wed Nov 14 08:26:10 MSK 2018", even though the step was assigned to a participant on October 3 at 4 pm — the timeout is scheduled roughly six weeks out instead of one hour. You can add the logger and check from your side; I'm using AEM 6.3.
Code from WorkflowJobUtils (decompiled):

Long timeout = (Long) item.getNode().getMetaDataMap().get("timeoutMillis", Long.class);
Date date = calculateDate(timeout.longValue(), addOffset);
props.put("event.timed.date", date);
LOGGER.debug("create timeout event {} ", props);
jobBuilder.properties(props);
List<String> errors = new ArrayList<>();
ScheduledJobInfo result = jobBuilder.schedule().at(adjustDate(date)).add(errors);
private static Date calculateDate(long seconds, boolean addOffset) {
    Calendar cal = Calendar.getInstance();
    long millies = seconds * 1000L;
    if (addOffset) {
        millies += cal.getTimeInMillis();
    }
    cal.setTimeInMillis(millies);
    return cal.getTime();
}
As you can see, the calculateDate method expects seconds, but for the "1h" option milliseconds are actually passed in: 3,600,000. The method multiplies that by 1,000 and gets 3,600,000,000 milliseconds, i.e. 3,600,000 seconds, which is 1,000 hours or about 41.6 days.
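To make the arithmetic concrete, here is a minimal, standalone sketch. The class name and main method are mine for illustration; calculateDate simply mirrors the decompiled logic shown above.

```java
import java.util.Calendar;
import java.util.Date;

public class TimeoutBugDemo {

    // Mirrors the decompiled WorkflowJobUtils method: the parameter is
    // *named* seconds, so the value is multiplied by 1000 to get millis.
    static Date calculateDate(long seconds, boolean addOffset) {
        Calendar cal = Calendar.getInstance();
        long millies = seconds * 1000L;
        if (addOffset) {
            millies += cal.getTimeInMillis();
        }
        cal.setTimeInMillis(millies);
        return cal.getTime();
    }

    public static void main(String[] args) {
        // The "1h" dropdown option stores timeoutMillis = 3,600,000 (milliseconds).
        long timeoutMillis = 3_600_000L;
        // Passed into calculateDate as if it were seconds, it is scaled by 1000 again:
        long effectiveMillis = calculateDate(timeoutMillis, false).getTime();
        long days = effectiveMillis / 86_400_000L;
        // 3,600,000,000 ms is roughly 41 days instead of 1 hour.
        System.out.println(effectiveMillis + " ms ~= " + days + " days");
    }
}
```

Treating the 3,600,000 ms stored for "1h" as seconds inflates it to 3,600,000,000 ms, which is why the scheduled job lands about 41 days out instead of one hour.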
As a workaround, you can add a new option to the Timeout dropdown at /libs/cq/workflow/components/model/step/tab_common/items/timeout/items/timeout/options.
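Because the stored value is multiplied by 1,000 downstream, a dropdown value expressed in seconds yields the intended delay. Here is a sketch of what such an option node could look like in a .content.xml; the node name and label are hypothetical (mine, not from the product), and it is generally safer to overlay the dialog under /apps than to edit /libs directly:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical option node for the Timeout dropdown.
     "value" is written to timeoutMillis, which WorkflowJobUtils then
     treats as seconds, so 3600 ends up meaning one hour. -->
<onehourfix
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="nt:unstructured"
    text="1 Hour (workaround)"
    value="3600"/>
```

A value of "1" fires after about a second, which is handy for verifying the handler quickly.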
After I added a new option with the value "1" there, the timeout handler was called:
03.10.2018 16:38:54.200 *DEBUG* [0:0:0:0:0:0:0:1 [1538573934182] POST /etc/workflow/instances HTTP/1.1] com.adobe.granite.workflow.core.job.WorkflowJobUtils calling createTimeoutJob for workitem /etc/workflow/instances/server0/2018-10-03/test-model_99/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_99
03.10.2018 16:38:54.201 *DEBUG* [0:0:0:0:0:0:0:1 [1538573934182] POST /etc/workflow/instances HTTP/1.1] com.adobe.granite.workflow.core.job.WorkflowJobUtils create timeout event {com.adobe.granite.workflow.console.timeout.job=com.adobe.granite.workflow.job.TimeoutJob#267dc5f3, event.timed.date=Wed Oct 03 16:38:55 MSK 2018, event.timed.id=/etc/workflow/instances/server0/2018-10-03/test-model_99/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_99_1538573934194}
03.10.2018 16:38:59.335 *INFO* [sling-oak-observation-212] org.apache.sling.event.impl.jobs.queues.JobQueueImpl.Granite Workflow Timeout Queue Starting job queue Granite Workflow Timeout Queue
03.10.2018 16:38:59.336 *INFO* [sling-oak-observation-212] org.apache.sling.event Service [QueueMBean for queue Granite Workflow Timeout Queue,14915, [org.apache.sling.event.jobs.jmx.StatisticsMBean]] ServiceEvent REGISTERED
03.10.2018 16:38:59.435 *DEBUG* [sling-threadpool-7e90cbd1-c78f-4aff-add9-eba2c792e8fe-(apache-sling-job-thread-pool)-22-Granite Workflow Timeout Queue(com/adobe/granite/workflow/timeout/job)] com.adobe.granite.workflow.core.job.TimeoutHandler process timeout job for event JobImpl [properties=org.apache.sling.api.wrappers.ValueMapDecorator#136322bc : {slingevent:application=fa67a921-cc59-4097-953a-a6e69ff5b718, jcr:created=java.util.GregorianCalendar[time=1538573939311,areFieldsSet=true,areAllFieldsSet=true,lenient=false,zone=sun.util.calendar.ZoneInfo[id="GMT+03:00",offset=10800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2018,MONTH=9,WEEK_OF_YEAR=40,WEEK_OF_MONTH=1,DAY_OF_MONTH=3,DAY_OF_YEAR=276,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=1,AM_PM=1,HOUR=4,HOUR_OF_DAY=16,MINUTE=38,SECOND=59,MILLISECOND=311,ZONE_OFFSET=10800000,DST_OFFSET=0], event.timed.date=Wed Oct 03 16:38:55 MSK 2018, slingevent:created=java.util.GregorianCalendar[time=1538573939203,areFieldsSet=true,areAllFieldsSet=true,lenient=false,zone=sun.util.calendar.ZoneInfo[id="GMT+03:00",offset=10800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2018,MONTH=9,WEEK_OF_YEAR=40,WEEK_OF_MONTH=1,DAY_OF_MONTH=3,DAY_OF_YEAR=276,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=1,AM_PM=1,HOUR=4,HOUR_OF_DAY=16,MINUTE=38,SECOND=59,MILLISECOND=203,ZONE_OFFSET=10800000,DST_OFFSET=0], event.job.queuename=Granite Workflow Timeout Queue, 
event.job.queued.time=java.util.GregorianCalendar[time=1538573939203,areFieldsSet=true,areAllFieldsSet=true,lenient=false,zone=sun.util.calendar.ZoneInfo[id="GMT+03:00",offset=10800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2018,MONTH=9,WEEK_OF_YEAR=40,WEEK_OF_MONTH=1,DAY_OF_MONTH=3,DAY_OF_YEAR=276,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=1,AM_PM=1,HOUR=4,HOUR_OF_DAY=16,MINUTE=38,SECOND=59,MILLISECOND=203,ZONE_OFFSET=10800000,DST_OFFSET=0], jcr:createdBy=sling-event, sling:resourceType=slingevent:Job, slingevent:eventId=2018/10/3/16/38/fa67a921-cc59-4097-953a-a6e69ff5b718_92, event.timed.id=/etc/workflow/instances/server0/2018-10-03/test-model_99/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_99_1538573934194, event.job.application=fa67a921-cc59-4097-953a-a6e69ff5b718, event.job.retries=10, com.adobe.granite.workflow.console.timeout.job=com.adobe.granite.workflow.job.TimeoutJob#29a9a2f7, event.job.started.time=java.util.GregorianCalendar[time=1538573939376,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Europe/Minsk",offset=10800000,dstSavings=0,useDaylight=false,transitions=69,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2018,MONTH=9,WEEK_OF_YEAR=40,WEEK_OF_MONTH=1,DAY_OF_MONTH=3,DAY_OF_YEAR=276,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=1,AM_PM=1,HOUR=4,HOUR_OF_DAY=16,MINUTE=38,SECOND=59,MILLISECOND=376,ZONE_OFFSET=10800000,DST_OFFSET=0], jcr:primaryType=slingevent:Job, event.job.retrycount=0, :sling:jobs:asynchandler=org.apache.sling.event.impl.jobs.JobConsumerManager$JobConsumerWrapper$1#8f1e51d, event.job.topic=com/adobe/granite/workflow/timeout/job}, topic=com/adobe/granite/workflow/timeout/job, path=/var/eventing/jobs/assigned/fa67a921-cc59-4097-953a-a6e69ff5b718/com.adobe.granite.workflow.timeout.job/2018/10/3/16/38/fa67a921-cc59-4097-953a-a6e69ff5b718_92, 
jobId=2018/10/3/16/38/fa67a921-cc59-4097-953a-a6e69ff5b718_92]
03.10.2018 16:38:59.443 *DEBUG* [sling-threadpool-7e90cbd1-c78f-4aff-add9-eba2c792e8fe-(apache-sling-job-thread-pool)-22-Granite Workflow Timeout Queue(com/adobe/granite/workflow/timeout/job)] com.adobe.granite.workflow.core.job.TimeoutHandler process timeout job for work item /etc/workflow/instances/server0/2018-10-03/test-model_99/workItems/node1_etc_workflow_instances_server0_2018-10-03_test-model_99
03.10.2018 16:38:59.701 *INFO* [sling-threadpool-7e90cbd1-c78f-4aff-add9-eba2c792e8fe-(apache-sling-job-thread-pool)-22-Granite Workflow Timeout Queue(com/adobe/granite/workflow/timeout/job)] etc.workflow.scripts.test-timeout-handler$ecma
Test timeout handler /etc/workflow/scripts/test-timeout-handler.ecma is called
Related
Transfer-Encoding chunked with OkHttp only delivers full result
I am trying to get some insight into a chunked endpoint and therefore planned to print what the server sends me chunk by chunk. I failed to do so, so I wrote a test to see if OkHttp/Retrofit work as I expect. The following test should deliver some chunks to the console, but all I get is the full response. I am a bit lost as to what I am missing to even make OkHttp3's MockWebServer send me chunks. I found this Retrofit issue entry, but the answer is a bit ambiguous for me: Chunked Transfer Encoding Response

class ChunkTest {
    @Rule
    @JvmField
    val rule = RxImmediateSchedulerRule() // custom rule to run Rx synchronously

    @Test
    fun `test Chunked Response`() {
        val mockWebServer = MockWebServer()
        mockWebServer.enqueue(getMockChunkedResponse())
        val retrofit = Retrofit.Builder()
            .baseUrl(mockWebServer.url("/"))
            .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
            .client(OkHttpClient.Builder().build())
            .build()
        val chunkedApi = retrofit.create(ChunkedApi::class.java)
        chunkedApi.getChunked()
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({ System.out.println(it.string()) },
                       { System.out.println(it.message) })
        mockWebServer.shutdown()
    }

    private fun getMockChunkedResponse(): MockResponse {
        val mockResponse = MockResponse()
        mockResponse.setHeader("Transfer-Encoding", "chunked")
        mockResponse.setChunkedBody("THIS IS A CHUNKED RESPONSE!", 5)
        return mockResponse
    }
}

interface ChunkedApi {
    @Streaming
    @GET("/")
    fun getChunked(): Flowable<ResponseBody>
}

Test console output:

Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS IS A CHUNKED RESPONSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed

I expected it to be more like (body "cut" every 5 bytes):

Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS IS A CHUNKE
D RESPO
NSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
The OkHttp MockWebServer does chunk the data; however, it looks like the LoggingInterceptor waits until the whole chunk buffer is full before displaying it. From this nice summary about HTTP streaming:

    The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. This means that the data is transmitted in a chunked manner, and does not impact the representation of the content.

With that in mind, we are dealing with one "request/response", which means we have to do our chunk retrieval before getting the entire response, pushing each chunk into our own buffer, all of that in an OkHttp network interceptor. Here is an example of said NetworkInterceptor:

class ChunksInterceptor : Interceptor {
    val Utf8Charset = Charset.forName("UTF-8")

    override fun intercept(chain: Interceptor.Chain): Response {
        val originalResponse = chain.proceed(chain.request())
        val responseBody = originalResponse.body()
        val source = responseBody!!.source()
        val buffer = Buffer() // We create our own Buffer

        // exhausted() returns true if there are no more bytes in this source
        while (!source.exhausted()) {
            val readBytes = source.read(buffer, Long.MAX_VALUE) // We read the whole buffer
            val data = buffer.readString(Utf8Charset)
            println("Read: $readBytes bytes")
            println("Content: \n $data \n")
        }

        return originalResponse
    }
}

Then of course we register this network interceptor on the OkHttp client.
Riak DB returns 500 Internal Server Error
I seem to be having an issue with our riak-kv db v2.2.3. We are currently running a 5-node cluster, and out of nowhere last week requests to our db suddenly started returning 500 Internal Server Error for almost every request (it might also be worth mentioning that our db has nearly doubled in size over the past 2 weeks). At first we thought the issue was in the code making the requests, but after sshing into one of the nodes in the cluster and attempting a simple request to list buckets we saw this:

Command:

curl -i http://localhost:8098/buckets?buckets=true

Response:

HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.10.9 (cafe not found)
Date: Sun, 10 Dec 2017 06:21:20 GMT
Content-Type: text/html
Content-Length: 1193

<html><head><title>500 Internal Server Error</title></head><body>
<h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,
    {error,
        {badmatch,{error,mailbox_overload}},
        [{riak_kv_wm_buckets,produce_bucket_list,2,
             [{file,"src/riak_kv_wm_buckets.erl"},{line,225}]},
         {webmachine_resource,resource_call,3,
             [{file,"src/webmachine_resource.erl"},{line,186}]},
         {webmachine_resource,do,3,
             [{file,"src/webmachine_resource.erl"},{line,142}]},
         {webmachine_decision_core,resource_call,1,
             [{file,"src/webmachine_decision_core.erl"},{line,48}]},
         {webmachine_decision_core,decision,1,
             [{file,"src/webmachine_decision_core.erl"},{line,562}]},
         {webmachine_decision_core,handle_request,2,
             [{file,"src/webmachine_decision_core.erl"},{line,33}]},
         {webmachine_mochiweb,loop,2,
             [{file,"src/webmachine_mochiweb.erl"},{line,72}]},
         {mochiweb_http,headers,5,
             [{file,"src/mochiweb_http.erl"},{line,105}]}]}}</pre><P><HR>
<ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>

After some more investigation into the issue, I pulled the logs on one of the riak nodes and saw this:

2017-12-10 03:54:58.654 [error] <0.24342.271>@yz_solrq_helper:send_solr_ops_for_entries:301 Updating a batch of Solr operations failed for index <<"attachment">> with error {error,{other,{ok,"500",[{"Content-Type","application/json; charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"
{"responseHeader":{"status":500,"QTime":1},
 "error": {
   "msg": "Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.",
   "trace": "org.apache.solr.common.SolrException: Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:169)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:952)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)
    at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:141)
    at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)
    at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:368)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
    at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclip..."
>>}}}

After some googling, some people said this error is caused by java.lang.OutOfMemoryError: Java heap space, so I modified my riak config from:

search.solr.jvm_options = -d64 -Xms4g -Xmx4g -XX:+UseStringCache -XX:+UseCompressedOops

to:

search.solr.jvm_options = -d64 -Xms8g -Xmx8g -XX:+UseStringCache -XX:+UseCompressedOops

However, I am still getting the same error. Any help is much appreciated.
.NET thread vanished
I have a dump of a w3wp process (taken by procdump) where I see a strange situation. A simplified version of my code:

logger.Log(Thread.CurrentThread.ManagedThreadId);
using (semaphore_release_on_dispose_object)
{
    // some third party code with unmanaged parts
    DangerousCode();
}

semaphore_release_on_dispose_object releases the semaphore in Dispose. The process dump was taken after some timeout, when it was clear that the semaphore had not been released. I found the thread in the dump, but it seems to have become 'bad': it is listed by the ~ command, but not by the !threads command. When I switch to it I can see the unmanaged stack, but not the managed one:

0:004> ~
...
   3  Id: 63ec.11b0 Suspend: 1 Teb: 000007ff`fffae000 Unfrozen
...
0:004> ~~[11b0]s
ntdll!NtRemoveIoCompletion+0xa:
00000000`76e913aa c3              ret
0:003> k
 # Child-SP          RetAddr           Call Site
00 00000000`01a3f818 000007fe`fcea169d ntdll!NtRemoveIoCompletion+0xa
01 00000000`01a3f820 00000000`76d2a4e1 KERNELBASE!GetQueuedCompletionStatus+0x39
02 00000000`01a3f880 000007fe`f0311f7b kernel32!GetQueuedCompletionStatusStub+0x11
03 00000000`01a3f8c0 000007fe`f0312024 w3tp!THREAD_POOL_DATA::ThreadPoolThread+0x3b
04 00000000`01a3f910 000007fe`f03120a1 w3tp!THREAD_POOL_DATA::ThreadPoolThread+0x34
05 00000000`01a3f940 00000000`76d3652d w3tp!THREAD_MANAGER::ThreadManagerThread+0x61
06 00000000`01a3f970 00000000`76e6c521 kernel32!BaseThreadInitThunk+0xd
07 00000000`01a3f9a0 00000000`00000000 ntdll!RtlUserThreadStart+0x1d
0:003> !clrstack
OS Thread Id: 0x11b0 (3)
        Child SP               IP Call Site
GetFrameContext failed: 1
0000000000000000 0000000000000000

So I assume that the third party code somehow 'broke' the thread and skipped the finally block (using). Any ideas how this is possible?
Play Framework running Asynchronous Web Calls times out
I have to perform three HTTP requests from a Play application. These calls are directed towards three subprojects of the main application. The architecture looks like this:

main app
 |
 --modules
    |
    --component1
    |
    --component2
    |
    --component3

Each component* is an individual sbt subproject. The main class in main app runs this Action:

def service = Action.async(BodyParsers.parse.json) { implicit request =>
  val query = request.body
  val url1 = "http://localhost:9000/one"
  val url2 = "http://localhost:9000/two"
  val url3 = "http://localhost:9000/three"

  val sync_calls = for {
    a <- ws.url(url1).withRequestTimeout(Duration.Inf)
           .withHeaders("Content-Type" -> "application/json")
           .withBody(query).get()
    b <- ws.url(url2).withRequestTimeout(Duration.Inf)
           .withHeaders("Content-Type" -> "application/json")
           .withBody(a.body).get()
    c <- ws.url(url3).withRequestTimeout(Duration.Inf)
           .withHeaders("Content-Type" -> "application/json")
           .withBody(b.body).get()
  } yield c.body

  sync_calls.map(x => Ok(x))
}

The components need to be activated one after the other, so they need to be a sequence. Each of them runs a Spark job. However, when I call the service action I get this error:

[error] application - ! @71og96ol6 - Internal server error, for (GET) [/automatic] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]]
[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]
    at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
    at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
    at play.api.GlobalSettings$class.onError(GlobalSettings.scala:160)
    at play.api.DefaultGlobal$.onError(GlobalSettings.scala:188)
    at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:98)
    at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
    at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
    at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
    at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Caused by: java.util.concurrent.TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms
    at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
    at org.asynchttpclient.netty.timeout.ReadTimeoutTimerTask.run(ReadTimeoutTimerTask.java:54)
    at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
    at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
    at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
    at java.lang.Thread.run(Thread.java:745)

and

HTTP/1.1 500 Internal Server Error
Content-Length: 4931
Content-Type: text/html; charset=utf-8
Date: Tue, 25 Oct 2016 21:06:17 GMT

<body id="play-error-page">
<p id="detail" class="pre">[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]</p>
</body>

I specifically set the timeout for each call to Duration.Inf for the purpose of avoiding timeouts. Why is this happening, and how do I fix it?
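One thing worth checking (this is an assumption on my part, not something the stack trace alone proves): withRequestTimeout controls the per-request timeout, but the underlying AsyncHttpClient also enforces a separate read/idle timeout, and its 120-second default matches the "after 120000 ms" in the error. In Play 2.5.x that limit can be raised globally in application.conf, for example:

```
# application.conf -- sketch only; the values are illustrative
play.ws.timeout.connection = 2 minutes
play.ws.timeout.idle = 30 minutes     # covers the 120000 ms read-timeout default
play.ws.timeout.request = 30 minutes
```

If the Spark jobs can legitimately run long, a bounded but generous value is still safer than an infinite one, since a hung component would otherwise block the request forever.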
PSI - Statusing Web Service - Results not as expected
I'm trying to update status information on assignments via the Statusing web service (PSI). The problem is that the results are not as expected. I'll try to explain what I'm doing in detail.

Two cases:
1) An assignment for the resource exists on the specified tasks. I want to report work actuals (update status).
2) There is no assignment for the resource on the specified tasks. I want to create the assignment and report work actuals.

I have one task in my project (auto scheduled, fixed work). Resource availability of all resources is set to 100%. They all have the same calendar.

Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs

First I execute an UpdateStatus with the following ChangeXML:

<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
    <Assn ID="d7273a28-c038-486b-b997-cdb2450ceef5" ResID="8a164257-7960-4b76-9506-ccd0efabdb72">
      <Change PID="251658250">900000</Change>
    </Assn>
  </Proj>
</Changes>

Then I call SubmitStatusForResource:

client.SubmitStatusForResource(new Guid("8a164257-7960-4b76-9506-ccd0efabdb72"), null, "auto submit PSIStatusingGateway");

The following entry pops up in the approval center (which is as I expected):

Status Update; Task 31; Task update; Resource 1; 3/20/2012; 15h; 15%; 85h

Update in Project (still looks fine):

Task Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs
Actual Work: 15 hrs
Remaining Work: 85 hrs

Then the second case is executed. First I create a new assignment...

client.CreateNewAssignmentWithWork(
    sName: "Task 31 - Fixed Work",
    projGuid: "a8a601ce-f3ab-4c01-97ce-fecdad2359d9",
    taskGuid: "024d7b61-858b-40bb-ade3-009d7d821b3f",
    assnGuid: "e3451938-36a5-4df3-87b1-0eb4b25a1dab",
    sumTaskGuid: Guid.Empty,
    dtStart: 14.03.2013 08:00:00,
    dtFinish: 02.04.2013 15:36:00,
    actWork: 900000,
    fMilestone: false,
    fAddToTimesheet: false,
    fSubmit: false,
    sComment: "auto commit...");

Then I call UpdateStatus again:

<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
    <Assn ID="e3451938-36a5-4df3-87b1-0eb4b25a1dab" ResID="c59ad8e2-7533-47bd-baa5-f5b03c3c43d6">
      <Change PID="251658250">900000</Change>
    </Assn>
  </Proj>
</Changes>

And finally SubmitStatusForResource again:

client.SubmitStatusForResource(new Guid("c59ad8e2-7533-47bd-baa5-f5b03c3c43d6"), null, "auto submit PSIStatusingGateway");

This creates the following entry in the approval center:

Status Update; Task 31 - Fixed Work; New reassignment request; Resource 2; 3/20/2012; 15h; 100%; 0h

I accept it and update my project:

Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 69,05 hrs
Actual Work: 30 hrs
Remaining Work: 39,05 hrs

And I really don't get why the new work would be 69,05 hours. The results I expected would have been:

Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 65 hrs
Actual Work: 30 hrs
Remaining Work: 35 hrs

I've spent quite a bunch of time trying to find out how to update the values to get the results that I want. I would really appreciate some help; this makes me want to rip my hair out! Thanks in advance.

PS: Forgot to say that I'm working with MS Project Server 2010 and MS Project Professional 2010.