Could not find an ESM in the cluster. Groups: [localcloud] - cloudify
I tried to upload an application:
POST http://IP_VM_MNG:8100/2.7.1/upload/recipe.zip
and to deploy that:
POST http://IP_VM_MNG:8100/2.7.1/deployments/app_name
The response to deploy was:
{
"message": "Could not find an ESM in the cluster. Groups: [localcloud]",
"response": null,
"status": "Failed",
"verbose": "org.cloudifysource.rest.controllers.RestErrorException: esm_missing\n\tat org.cloudifysource.rest.validators.ValidateEsmExists.validateEsmExists(ValidateEsmExists.java:53)\n\tat org.cloudifysource.rest.validators.ValidateEsmExists.validate(ValidateEsmExists.java:39)\n\tat org.cloudifysource.rest.controllers.DeploymentsController.validateInstallApplication(DeploymentsController.java:860)\n\tat org.cloudifysource.rest.controllers.DeploymentsController.installApplication(DeploymentsController.java:756)\n\tat sun.reflect.GeneratedMethodAccessor331.invoke(Unknown Source)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)\n\tat java.lang.reflect.Method.invoke(Method.java:597)\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:219)\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)\n\tat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:745)\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:686)\n\tat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:925)\n\tat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:856)\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:936)\n\tat org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:838)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:688)\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:812)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:770)\n\tat org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1336)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)\n\tat org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)\n\tat org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat 
org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter.doFilter(DefaultLoginPageGeneratingFilter.java:91)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:183)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)\n\tat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)\n\tat org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)\n\tat org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)\n\tat org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:343)\n\tat org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:260)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)\n\tat org.openspaces.pu.container.jee.stats.RequestStatisticsFilter.doFilter(RequestStatisticsFilter.java:77)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat org.eclipse.jetty.server.Server.handle(Server.java:365)\n\tat 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)\n\tat org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:937)\n\tat org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:998)\n\tat org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:856)\n\tat org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)\n\tat org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)\n\tat org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)\n\tat org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)\n\tat java.lang.Thread.run(Thread.java:662)\n",
"messageId": "esm_missing"
}
Can someone help me to understand the cause?
The ESM is the Elastic Service Manager - Cloudify's orchestrator. For some reason, when the REST API call arrived at the Cloudify manager, it could not find the ESM (there should always be one running).
You need to look at the Cloudify web UI and see if you can find the orchestrator component. You can also look at the logs of the agent on the relevant machine - if the ESM crashed and failed to restart, you should see some indication there.
In general, even if the ESM fails, it should restart automatically. Is it possible that your machine is maxed out on memory or CPU? That could cause network discovery issues.
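If you want to quickly rule out resource exhaustion on the manager machine, a minimal sketch along these lines can help; it assumes the psutil package is installed (it is not part of Cloudify), and the 90% thresholds are arbitrary:

import psutil

# Assumption: psutil is installed separately (pip install psutil); it is not bundled with Cloudify.
mem = psutil.virtual_memory()
cpu = psutil.cpu_percent(interval=1)  # sample CPU usage over one second

print("Memory used: %.1f%% (%.0f MB available)" % (mem.percent, mem.available / (1024.0 * 1024.0)))
print("CPU used: %.1f%%" % cpu)

# If either figure sits near 100%, the ESM (and the discovery it relies on) may be starved.
if mem.percent > 90 or cpu > 90:
    print("Machine looks maxed out - check which processes are consuming resources.")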
Also, please bear in mind that Cloudify 2 has reached end of life; you may want to consider moving to Cloudify 3.
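For completeness, once the ESM is back, the two REST calls from the question can be retried from a small script. Below is a minimal sketch using the requests library (an assumption on my part; any HTTP client will do). The multipart field name used for the upload is also an assumption; the script simply surfaces the status and messageId fields so an esm_missing failure is easy to detect:

import requests

MANAGER = "http://IP_VM_MNG:8100/2.7.1"  # manager address and API version from the question

# Upload the recipe. The multipart field name ("file") is an assumption and may
# differ between Cloudify versions.
with open("recipe.zip", "rb") as fh:
    upload = requests.post(MANAGER + "/upload/recipe.zip", files={"file": fh})
print("upload HTTP status:", upload.status_code)

# Deploy the application and inspect the JSON fields shown in the failure above.
deploy = requests.post(MANAGER + "/deployments/app_name")
body = deploy.json()
print("deploy status:", body.get("status"))
if body.get("status") == "Failed":
    print("messageId:", body.get("messageId"))  # e.g. "esm_missing"
    print("message:", body.get("message"))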
Related
Ansible showing task changed but the task has condition (creates: ) and does not actually execute
My ansible-playbook runs a long-running task with the async tag and also uses the "creates:" condition, so it is run only once on the server. When I was writing the playbook yesterday, I am pretty sure the task was skipped when the log set in the "creates:" tag existed. Now it shows changed every time I run it, though. I am confused, as I do not think I changed anything, and I'd like my registered variable to be set correctly as unchanged when the condition is true.

Output of ansible-playbook (the debug section shows the task is changed: true):

TASK [singleserver : Install Assure1 SingleServer role] *********************************
changed: [crassure1]

TASK [singleserver : Debug] *************************************************************
ok: [crassure1] => {
    "msg": {
        "ansible_job_id": "637594935242.28556",
        "changed": true,
        "failed": false,
        "finished": 0,
        "results_file": "/root/.ansible_async/637594935242.28556",
        "started": 1
    }
}

But if I check the actual results file on the target machine, it correctly resolved the condition and did not actually execute the shell script, so the task should be unchanged (the message shows the task was skipped because the log exists):

[root#crassure1 assure1]# cat "/root/.ansible_async/637594935242.28556"
{"invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": "/opt/install/install_command.sh", "removes": null, "argv": null, "creates": "/opt/assure1/logs/SetupWizard.log", "chdir": null, "stdin_add_newline": true, "stdin": null}}, "cmd": "/opt/install/install_command.sh", "changed": false, "rc": 0, "stdout": "skipped, since /opt/assure1/logs/SetupWizard.log exists"}
[root#crassure1 assure1]# Connection reset by 172.24.36.123 port 22

My playbook section looks like this:

- name: Install Assure1 SingleServer role
  shell:
    #cmd: "/opt/assure1/bin/SetupWizard -a --Depot /opt/install/:a1-local --First --WebFQDN crassure1.tspdata.local --Roles All"
    cmd: "/opt/install/install_command.sh"
  async: 7200
  poll: 0
  register: Assure1InstallWait
  args:
    creates: /opt/assure1/logs/SetupWizard.log

- name: Debug
  debug:
    msg: "{{ Assure1InstallWait }}"

- name: Check on Installation status every 15 minutes
  async_status:
    jid: "{{ Assure1InstallWait.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 900
  when: Assure1InstallWait is changed

Is there something I am missing, or is this some kind of a bug? I am limited by the Ansible version available in the configured trusted repo, so I am using ansible 2.9.25.
Q: "The module shell shows changed every time I run it" A: In async mode the task can't be skipped immediately. First, the module shell must find out whether the file /opt/assure1/logs/SetupWizard.log exists at the remote host or not. Then, if the file exists the module will decide to skip the execution of the command. But, you run the task asynchronously. In this case, Ansible starts the module and returns without waiting for the module to complete. That's what the registered variable Assure1InstallWait says. The task started but didn't finish yet. "msg": { "ansible_job_id": "637594935242.28556", "changed": true, "failed": false, "finished": 0, "results_file": "/root/.ansible_async/637594935242.28556", "started": 1 } The decision to set such a task changed is correct, I think because the execution on the remote host is going on. Print the registered result of the module async. You'll see, that the command was skipped because the file exists (you've printed the async file at the remote instead). Here the attribute changed is set false because now we know the command didn't execute job_result: ... attempts: 1 changed: false failed: false finished: 1 msg: Did not run command since '/tmp/SetupWizard.log' exists rc: 0 ...
Ansible task with async and become giving Job not found error
When I try to run a task asynchronously as another user using become in an Ansible playbook, I get a "Job not found" error. Can someone suggest how I can successfully check the async job status? I am using Ansible version 2.7.

I read in some articles the suggestion to use the async_status task with the same become user as the async task to read the job status. I tried that solution, but I am still getting the same "job not found" error.

- hosts: localhost
  tasks:
    - shell: startInstance.sh
      register: start_task
      async: 180
      poll: 0
      become: yes
      become_user: venu

    - async_status:
        jid: "{{start_task.ansible_job_id}}"
      register: start_status
      until: start_status.finished
      retries: 30
      become: yes
      become_user: venu

Expected result: I should be able to fire and forget the job.

Actual result: {"ansible_job_id": "386361757265.15925428", "changed": false, "finished": 1, "msg": "could not find job", "started": 1}
Unable to get ECSOperator (Fargate) to work with Airflow
I'm getting this error when using ECSOperator to run tasks via ECS Fargate in Airflow 1.10.1. DAG code available here.

[2019-04-15 15:57:36,960] {{models.py:1788}} ERROR - An error occurred (InvalidParameterException) when calling the RunTask operation: Network Configuration must be provided when networkMode 'awsvpc' is specified.

Not sure what is wrong there, as network_configuration is passed in the args dictionary, similar to what is done here: https://github.com/apache/airflow/blob/master/tests/contrib/operators/test_ecs_operator.py#L61
network_configuration has been added to ECSOperator since Airflow v1.10.3. I would suggest upgrading the Airflow version to v1.10.3. Reference: https://github.com/apache/airflow/blob/1.10.3/airflow/contrib/operators/ecs_operator.py#L69
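As a quick sanity check before relying on that parameter, you can print the version of Airflow the environment is actually running; this is a minimal sketch that only assumes Airflow is importable in the same Python environment:

import airflow

# ECSOperator only accepts network_configuration from Airflow 1.10.3 onwards.
print("Airflow version:", airflow.__version__)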
Sample configuration of ECSOperator to run Fargate on Airflow version v1.10.3:

def get_ecs_operator_args(param):
    return dict(
        launch_type="FARGATE",
        # The name of your task as defined in ECS
        task_definition="my_automation_task",
        # The name of your ECS cluster
        cluster="my-cluster",
        network_configuration={
            'awsvpcConfiguration': {
                'securityGroups': ['sg-hijk', 'sg-abcd'],
                'subnets': ['subnet-lmn'],
                'assignPublicIp': "ENABLED"
            }
        },
        overrides={
            'containerOverrides': [
                {
                    'name': "my-container",
                    'command': ["python", "myCode.py", str(param)]
                }
            ]
        },
        region_name="us-east-1")

ecs_args = get_ecs_operator_args("{{ dag_run.conf['name'] }}")

my_operator = ECSOperator(
    task_id="task_0",
    **ecs_args,
    dag=dag)
How to understand this Riak stacktrace?
Can anyone help me solve this problem, I have the stacktrace of the problem, but can't understand what the trace actually means. The error occurs when I try to retrieve all data from a bucket, in a Riak Database. And I am using java-riak-client library as the ORM. I can figure out that its a MapReduce problem, but other than that..... Here below is the actual stacktrace, I actually could not figure out what error is it pointing to, and I tried to find out the record its displaying in error. #Update: Yes the record is there, when i CURL com.basho.riak.client.RiakException: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error, {error, {case_clause, {error, {0, [{module,riak_kv_mrc_map}, {partition,913438523331814323877303020447676887284957839360}, {details, [{fitting, {fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}}, {name,0}, {module,riak_kv_mrc_map}, {arg,{{jsfun,<<"Riak.mapValuesJson">>},none}}, {output, {fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink, undefined}}, {options, [{log,sink}, {trace,[error]}, {sink, {fitting,<0.21081.23>,#Ref<0.0.31.39954>, sink,undefined}}, {sink_type,{fsm,10,infinity}}]}, {q_limit,64}]}, {type,forward_preflist}, {error,[preflist_exhausted]}, {input, {ok,{r_object,<<"xxxx-users">>, <<"xxxx#hotmail.com-userpass">>, [{r_content, {dict,7,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[], [],[],[],[]}, {{[],[], [[<<"Links">>]], [],[],[],[],[],[],[], [[<<"content-type">>,97,112,112,108, 105,99,97,116,105,111,110,47,106, 115,111,110], [<<"X-Riak-VTag">>,54,98,119,73,73, 84,107,120,66,70,107,86,102,67,103, 71,73,116,120,121,85,53]], [[<<"index">>]], [], [[<<"X-Riak-Last-Modified">>| {1407,514685,380030}]], [], [[<<"charset">>,117,116,102,45,56], [<<"X-Riak-Meta">>]]}}}, <<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}], [{<<35,9,254,249,83,228,76,146>>, {1,63574733885}}], {dict,1,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[],[],[], [],[]}, {{[],[],[],[],[],[],[],[],[],[],[],[],[],[], [[clean|true]], []}}}, undefined}, undefined}}, {modstate, {state, 913438523331814323877303020447676887284957839360, {fitting_details, {fitting,<0.21083.23>,#Ref<0.0.31.39954>, follow,1}, 0,riak_kv_mrc_map, {{jsfun,<<"Riak.mapValuesJson">>},none}, {fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink, undefined}, [{log,sink}, {trace,[error]}, {sink, {fitting,<0.21081.23>,#Ref<0.0.31.39954>, sink,undefined}}, {sink_type,{fsm,10,infinity}}], 64}, {jsfun,<<"Riak.mapValuesJson">>}, none}}, {stack,[]}]}}}, [{riak_kv_wm_mapred,pipe_mapred_nonchunked,3, [{file,"src/riak_kv_wm_mapred.erl"},{line,180}]}, {webmachine_resource,resource_call,3, [{file,"src/webmachine_resource.erl"},{line,183}]}, {webmachine_resource,do,3, [{file,"src/webmachine_resource.erl"},{line,141}]}, {webmachine_decision_core,resource_call,1, [{file,"src/webmachine_decision_core.erl"},{line,48}]}, {webmachine_decision_core,decision,1, [{file,"src/webmachine_decision_core.erl"},{line,481}]}, {webmachine_decision_core,handle_request,2, [{file,"src/webmachine_decision_core.erl"},{line,33}]}, {webmachine_mochiweb,loop,1, [{file,"src/webmachine_mochiweb.erl"},{line,97}]}, 
{mochiweb_http,parse_headers,5, [{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html> at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:81) at models.UserRecordsModel$.getAllUsers(UserRecordsModel.scala:131) at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:42) at controllers.DataRetrieval$$anonfun$getRegisteredUserData$1.apply(DataRetrieval.scala:38) at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:221) at play.api.mvc.ActionBuilder$$anonfun$apply$10.apply(Action.scala:220) at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:117) at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2$$anonfun$apply$1.apply(SecureSocial.scala:113) at scala.Option.map(Option.scala:145) at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:113) at securesocial.core.SecureSocial$SecuredActionBuilder$$anonfun$2.apply(SecureSocial.scala:112) at scala.Option.flatMap(Option.scala:170) at securesocial.core.SecureSocial$SecuredActionBuilder.invokeSecuredBlock(SecureSocial.scala:112) at securesocial.core.SecureSocial$SecuredActionBuilder.invokeBlock(SecureSocial.scala:146) at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:309) at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109) at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:109) at play.utils.Threads$.withContextClassLoader(Threads.scala:18) at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:108) at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:107) at scala.Option.map(Option.scala:145) at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:107) at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:100) at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481) at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:481) at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517) at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:517) at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493) at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:493) at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) Caused by: java.io.IOException: <html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error, {error, {case_clause, {error, {0, [{module,riak_kv_mrc_map}, {partition,913438523331814323877303020447676887284957839360}, {details, [{fitting, {fitting,<0.21083.23>,#Ref<0.0.31.39954>,follow,1}}, {name,0}, 
{module,riak_kv_mrc_map}, {arg,{{jsfun,<<"Riak.mapValuesJson">>},none}}, {output, {fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink, undefined}}, {options, [{log,sink}, {trace,[error]}, {sink, {fitting,<0.21081.23>,#Ref<0.0.31.39954>, sink,undefined}}, {sink_type,{fsm,10,infinity}}]}, {q_limit,64}]}, {type,forward_preflist}, {error,[preflist_exhausted]}, {input, {ok,{r_object,<<"xxxx-users">>, <<"xxxx#hotmail.com-userpass">>, [{r_content, {dict,7,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[], [],[],[],[]}, {{[],[], [[<<"Links">>]], [],[],[],[],[],[],[], [[<<"content-type">>,97,112,112,108, 105,99,97,116,105,111,110,47,106, 115,111,110], [<<"X-Riak-VTag">>,54,98,119,73,73, 84,107,120,66,70,107,86,102,67,103, 71,73,116,120,121,85,53]], [[<<"index">>]], [], [[<<"X-Riak-Last-Modified">>| {1407,514685,380030}]], [], [[<<"charset">>,117,116,102,45,56], [<<"X-Riak-Meta">>]]}}}, <<"{\"identityId\":{\"userId\":\"xxxx#hotmail.com\",\"providerId\":\"userpass\"},\"firstName\":\"xx\",\"lastName\":\"xx\",\"fullName\":\"xx xx\",\"email\":\"xxxx#hotmail.com\",\"authMethod\":{\"method\":\"userPassword\"},\"passwordInfo\":{\"hasher\":\"bcrypt\",\"password\":\"$2a$10$Gm1VVCM09iyI7TQY7r8B7.Baa.YrtHHgREkQpTIH9ThyW4WzuUeJ.\"}}">>}], [{<<35,9,254,249,83,228,76,146>>, {1,63574733885}}], {dict,1,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[],[],[], [],[]}, {{[],[],[],[],[],[],[],[],[],[],[],[],[],[], [[clean|true]], []}}}, undefined}, undefined}}, {modstate, {state, 913438523331814323877303020447676887284957839360, {fitting_details, {fitting,<0.21083.23>,#Ref<0.0.31.39954>, follow,1}, 0,riak_kv_mrc_map, {{jsfun,<<"Riak.mapValuesJson">>},none}, {fitting,<0.21081.23>,#Ref<0.0.31.39954>,sink, undefined}, [{log,sink}, {trace,[error]}, {sink, {fitting,<0.21081.23>,#Ref<0.0.31.39954>, sink,undefined}}, {sink_type,{fsm,10,infinity}}], 64}, {jsfun,<<"Riak.mapValuesJson">>}, none}}, {stack,[]}]}}}, [{riak_kv_wm_mapred,pipe_mapred_nonchunked,3, [{file,"src/riak_kv_wm_mapred.erl"},{line,180}]}, {webmachine_resource,resource_call,3, [{file,"src/webmachine_resource.erl"},{line,183}]}, {webmachine_resource,do,3, [{file,"src/webmachine_resource.erl"},{line,141}]}, {webmachine_decision_core,resource_call,1, [{file,"src/webmachine_decision_core.erl"},{line,48}]}, {webmachine_decision_core,decision,1, [{file,"src/webmachine_decision_core.erl"},{line,481}]}, {webmachine_decision_core,handle_request,2, [{file,"src/webmachine_decision_core.erl"},{line,33}]}, {webmachine_mochiweb,loop,1, [{file,"src/webmachine_mochiweb.erl"},{line,97}]}, {mochiweb_http,parse_headers,5, [{file,"src/mochiweb_http.erl"},{line,180}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html> at com.basho.riak.client.raw.http.ConversionUtil.convert(ConversionUtil.java:589) at com.basho.riak.client.raw.http.HTTPClientAdapter.mapReduce(HTTPClientAdapter.java:386) at com.basho.riak.client.query.MapReduce.execute(MapReduce.java:79) ... 36 more
The stack trace is telling us that there was a case clause exception at line 180 of the file riak_kv_wm_mapred.erl. The clause at that line handles the responses from the pipe processing the map phase, which appears to be returning the error preflist_exhausted, and that error is not explicitly handled by the case statement. That error usually indicates that one or more vnodes were overloaded or otherwise unavailable, and fallbacks had not yet started to take over their workload. The affected partition was 913438523331814323877303020447676887284957839360; the console.log and error.log may have further details about what happened.
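To follow up on that suggestion, a small script can scan the Riak logs for the affected partition and for preflist_exhausted entries. This is a minimal Python sketch; the log directory is an assumption (adjust it to your installation), and the partition id is copied from the trace above:

import os

LOG_DIR = "/var/log/riak"  # assumption: default log location, adjust for your install
PARTITION = "913438523331814323877303020447676887284957839360"
KEYWORDS = (PARTITION, "preflist_exhausted", "overload")

for name in ("console.log", "error.log"):
    path = os.path.join(LOG_DIR, name)
    if not os.path.exists(path):
        continue
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(keyword in line for keyword in KEYWORDS):
                print("%s:%d: %s" % (name, lineno, line.rstrip()))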
How to get grunt task name given on command line?
In order to customize my grunt tasks, I need access to the grunt task name given on the command line when starting grunt. The options are no problem, since they are well documented (grunt.option). It's also well documented how to find out the task name when running a grunt task. But I need access to the task name before that.

E.g., the user writes:

grunt build --target=client

When configuring the grunt job in my Gruntfile.js, I can use grunt.option('target') to get 'client'. But how do I get hold of the parameter build before the task build starts? Any guidance is much appreciated!
Your grunt file is basically just a function. Try adding these lines to the top:

module.exports = function( grunt ) {
  /*==> */ console.log(grunt.option('target'));
  /*==> */ console.log(grunt.cli.tasks);

  // Add your pre task code here...

Running with grunt build --target=client should give you the output:

client
[ 'build' ]

At that point, you can run any code you need to before your task is run, including setting values with new dependencies.
A better way is to use grunt.task.current, which has information about the currently running task, including a name property. Within a task, the context (i.e. this) is the same object. So . . .

grunt.registerTask('foo', 'Foobar all the things', function() {
  console.log(grunt.task.current.name); // foo
  console.log(this.name);               // foo
  console.log(this === grunt.task.current); // true
});

If build is an alias to other tasks and you just want to know what command was typed that led to the current task execution, I typically use process.argv[2]. If you examine process.argv, you'll see that argv[0] is node (because grunt is a node process), argv[1] is grunt, and argv[2] is the actual grunt task (followed by any params in the remainder of argv).

EDIT: Example output from console.log(grunt.task.current) on grunt@0.4.5 from within a task (can't have a current task from not a current task):

{ nameArgs: 'server:dev',
  name: 'server',
  args: [],
  flags: {},
  async: [Function],
  errorCount: [Getter],
  requires: [Function],
  requiresConfig: [Function],
  options: [Function],
  target: 'dev',
  data: { options: { debugPort: 5858, cwd: 'server' } },
  files: [],
  filesSrc: [Getter] }
You can use grunt.util.hooker.hook for this. Example (part of Gruntfile.coffee):

grunt.util.hooker.hook grunt.task, (opt) ->
  if grunt.task.current and grunt.task.current.nameArgs
    console.log "Task to run: " + grunt.task.current.nameArgs

CMD:

C:\some_dir>grunt concat --cmp my_cmp
Task to run: concat
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.

Done, without errors.

There is also a hack that I've used to prevent certain task execution:

grunt.util.hooker.hook grunt.task, (opt) ->
  if grunt.task.current and grunt.task.current.nameArgs
    console.log "Task to run: " + grunt.task.current.nameArgs
    if grunt.task.current.nameArgs is "<some task you don't want user to run>"
      console.log "Ooooh, not <doing smth> today :("
      exit() # Not valid. Don't know how to exit :), but will stop grunt anyway

CMD, when allowed:

C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.

Done, without errors.

CMD, when prevented:

C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
Ooooh, not concating today :(
Warning: exit is not defined
Use --force to continue.

Aborted due to warnings.